fdb-1.6.1/0000755000175000001440000000000013017556716012725 5ustar pcisarusers00000000000000fdb-1.6.1/fdb.egg-info/0000755000175000001440000000000013017556716015152 5ustar pcisarusers00000000000000fdb-1.6.1/fdb.egg-info/dependency_links.txt0000644000175000001440000000000113017556716021220 0ustar pcisarusers00000000000000 fdb-1.6.1/fdb.egg-info/PKG-INFO0000644000175000001440000000117113017556716016247 0ustar pcisarusers00000000000000Metadata-Version: 1.1 Name: fdb Version: 1.6.1 Summary: Firebird RDBMS bindings for Python. Home-page: http://www.firebirdsql.org/en/devel-python-driver/ Author: Pavel Cisar Author-email: pcisar@users.sourceforge.net License: BSD Description: fdb package is a set of Firebird RDBMS bindings for python. It works on Python 2.6+ and Python 3.x. Keywords: Firebird Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Topic :: Database fdb-1.6.1/fdb.egg-info/not-zip-safe0000644000175000001440000000000112431143160017360 0ustar pcisarusers00000000000000 fdb-1.6.1/fdb.egg-info/top_level.txt0000644000175000001440000000000413017556716017676 0ustar pcisarusers00000000000000fdb fdb-1.6.1/fdb.egg-info/SOURCES.txt0000644000175000001440000000467113017556716017046 0ustar pcisarusers00000000000000LICENSE.TXT MANIFEST.in PKG-INFO README setup.cfg setup.py docs/.buildinfo docs/changelog.html docs/differences-from-kdb.html docs/genindex.html docs/getting-started.html docs/index.html docs/license.html docs/listinv.py docs/objects.inv docs/py-modindex.html docs/python-db-api-compliance.html docs/reference.html docs/requirements.html docs/search.html docs/searchindex.js docs/usage-guide.html docs/_sources/changelog.txt docs/_sources/differences-from-kdb.txt docs/_sources/getting-started.txt docs/_sources/index.txt docs/_sources/license.txt docs/_sources/python-db-api-compliance.txt 
docs/_sources/reference.txt docs/_sources/usage-guide.txt docs/_static/ajax-loader.gif docs/_static/basic.css docs/_static/comment-bright.png docs/_static/comment-close.png docs/_static/comment.png docs/_static/dialog-note.png docs/_static/dialog-seealso.png docs/_static/dialog-topic.png docs/_static/dialog-warning.png docs/_static/doctools.js docs/_static/down-pressed.png docs/_static/down.png docs/_static/epub.css docs/_static/fdbtheme.css docs/_static/file.png docs/_static/footerbg.png docs/_static/headerbg.png docs/_static/ie6.css docs/_static/jquery.js docs/_static/middlebg.png docs/_static/minus.png docs/_static/plus.png docs/_static/pygments.css docs/_static/searchtools.js docs/_static/transparent.gif docs/_static/underscore.js docs/_static/up-pressed.png docs/_static/up.png docs/_static/websupport.js fdb/__init__.py fdb/blr.py fdb/fbcore.py fdb/ibase.py fdb/monitor.py fdb/schema.py fdb/services.py fdb/utils.py fdb.egg-info/PKG-INFO fdb.egg-info/SOURCES.txt fdb.egg-info/dependency_links.txt fdb.egg-info/entry_points.txt fdb.egg-info/not-zip-safe fdb.egg-info/top_level.txt sphinx/Makefile sphinx/changelog.txt sphinx/conf.py sphinx/differences-from-kdb.txt sphinx/getting-started.txt sphinx/index.txt sphinx/license.txt sphinx/python-db-api-compliance.txt sphinx/reference.txt sphinx/requirements.txt sphinx/usage-guide.txt sphinx/fdbtheme/layout.html sphinx/fdbtheme/theme.conf sphinx/fdbtheme/static/dialog-note.png sphinx/fdbtheme/static/dialog-seealso.png sphinx/fdbtheme/static/dialog-topic.png sphinx/fdbtheme/static/dialog-warning.png sphinx/fdbtheme/static/epub.css sphinx/fdbtheme/static/fdbtheme.css_t sphinx/fdbtheme/static/footerbg.png sphinx/fdbtheme/static/headerbg.png sphinx/fdbtheme/static/ie6.css sphinx/fdbtheme/static/middlebg.png sphinx/fdbtheme/static/transparent.gif test/fbtest.fdb test/fbtest20.fdb test/fbtest21.fdb test/fbtest25.fdb test/testfdb.pyfdb-1.6.1/fdb.egg-info/entry_points.txt0000644000175000001440000000000513017556716020443 0ustar 
pcisarusers00000000000000 fdb-1.6.1/fdb/0000755000175000001440000000000013017556716013460 5ustar pcisarusers00000000000000fdb-1.6.1/fdb/utils.py0000644000175000001440000001517212431144632015165 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM: fdb # MODULE: utils.py # DESCRIPTION: Various utility classes and functions # CREATED: 10.5.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) 2013 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. def update_meta (self, other): "Helper function for :class:`LateBindingProperty` class." self.__name__ = other.__name__ self.__doc__ = other.__doc__ self.__dict__.update(other.__dict__) return self class LateBindingProperty (property): """Property class that binds to getter/setter/deleter methods when an **instance** of the class that defines the property is created. This allows you to override these methods in descendant classes (if they are not private) without the necessity to redeclare the property itself in the descendant class. Recipe from Tim Delaney, 2005/03/31 http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/408713 :: class C(object): def getx(self): print 'C.getx' return self._x def setx(self, x): print 'C.setx' self._x = x def delx(self): print 'C.delx' del self._x x = LateBindingProperty(getx, setx, delx) class D(C): def setx(self, x): print 'D.setx' super(D, self).setx(x) def delx(self): print 'D.delx' super(D, self).delx() c = C() c.x = 1 c.x c.x del c.x print d = D() d.x = 1 d.x d.x del d.x This has the advantages that: a. You get back an actual property object (with attendant memory savings, performance increases, etc); b. 
It's the same syntax as using property(fget, fset, fdel, doc) except for the name; c. It will fail earlier (when you define the class as opposed to when you use it). d. It's shorter ;) e. If you inspect the property you will get back functions with the correct __name__, __doc__, etc. """ def __new__(self, fget=None, fset=None, fdel=None, doc=None): if fget is not None: def __get__(obj, objtype=None, name=fget.__name__): fget = getattr(obj, name) return fget() fget = update_meta(__get__, fget) if fset is not None: def __set__(obj, value, name=fset.__name__): fset = getattr(obj, name) return fset(value) fset = update_meta(__set__, fset) if fdel is not None: def __delete__(obj, name=fdel.__name__): fdel = getattr(obj, name) return fdel() fdel = update_meta(__delete__, fdel) return property(fget, fset, fdel, doc) class Iterator(object): """Generic iterator implementation. """ def __init__(self, method, sentinel = None): """ :param method: Callable without parameters that returns next item. :param sentinel: Value that when returned by `method` indicates the end of sequence. """ self.getnext = method self.sentinel = sentinel self.exhausted = False def __iter__(self): return self def next(self): if self.exhausted: raise StopIteration else: result = self.getnext() self.exhausted = (result == self.sentinel) if self.exhausted: raise StopIteration else: return result __next__ = next class EmbeddedProperty(property): """Property class that forwards calls to getter/setter/deleter methods to respective property methods of another object. This class allows you to "inject" properties from embedded object into class definition of parent object.""" def __init__(self,obj,prop): """ :param string obj: Attribute name with embedded object. :param property prop: Property instance from embedded object. 
""" self.obj = obj self.prop = prop self.__doc__ = prop.__doc__ def __get__(self,obj,objtype): if obj is None: return self return self.prop.__get__(getattr(obj,self.obj)) def __set__(self,obj,val): self.prop.__set__(getattr(obj,self.obj),val) def __delete__(self,obj): self.prop.__delete__(getattr(obj,self.obj)) class EmbeddedAttribute(property): """Property class that gets/sets attribute of another object. This class allows you to "inject" attributes from embedded object into class definition of parent object.""" def __init__(self,obj,attr): """ :param string obj: Attribute name with embedded object. :param string attr: Attribute name from embedded object. """ self.obj = obj self.attr = attr self.__doc__ = attr.__doc__ def __get__(self,obj,objtype): if obj is None: return self return getattr(getattr(obj,self.obj),self.attr) def __set__(self,obj,val): setattr(getattr(obj,self.obj),self.attr,val) def iter_class_properties(cls): """Iterator that yields `name, property` pairs for all properties in class. :param class cls: Class object.""" for varname in vars(cls): value = getattr(cls, varname) if isinstance(value, property): yield varname, value def iter_class_variables(cls): """Iterator that yields names of all non-callable attributes in class. :param class cls: Class object.""" for varname in vars(cls): value = getattr(cls, varname) if not (isinstance(value, property) or callable(value)): yield varname def embed_attributes(from_class,attr): """Class decorator that injects properties and non-callable attributes from another class instance embedded in class instances. :param class from_class: Class that should extend decorated class. 
:param string attr: Attribute name that holds instance of embedded class within decorated class instance.""" def d(class_): for pname,prop in iter_class_properties(from_class): if not hasattr(class_,pname): setattr(class_,pname,EmbeddedProperty(attr,prop)) for attrname in iter_class_variables(from_class): if not hasattr(class_,attrname): setattr(class_,attrname,EmbeddedAttribute(attr,attrname)) return class_ return d fdb-1.6.1/fdb/fbcore.py0000644000175000001440000073331613017027556015302 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: fbcore.py # DESCRIPTION: Python driver for Firebird # CREATED: 8.10.2011 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) 2011 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): Philippe Makowski # ______________________________________. # # See LICENSE.TXT for details. import sys import os import ctypes import struct import time import datetime import decimal import weakref import threading from . import ibase from . import schema from . import monitor from . 
import utils try: # Python 2 from itertools import izip_longest except ImportError: # Python 3 from itertools import zip_longest as izip_longest from fdb.ibase import (frb_info_att_charset, isc_dpb_activate_shadow, isc_dpb_address_path, isc_dpb_allocation, isc_dpb_begin_log, isc_dpb_buffer_length, isc_dpb_cache_manager, isc_dpb_cdd_pathname, isc_dpb_connect_timeout, isc_dpb_damaged, isc_dpb_dbkey_scope, isc_dpb_debug, isc_dpb_delete_shadow, isc_dpb_disable_journal, isc_dpb_disable_wal, isc_dpb_drop_walfile, isc_dpb_dummy_packet_interval, isc_dpb_enable_journal, isc_dpb_encrypt_key, isc_dpb_force_write, isc_dpb_garbage_collect, isc_dpb_gbak_attach, isc_dpb_gfix_attach, isc_dpb_gsec_attach, isc_dpb_gstat_attach, isc_dpb_interp, isc_dpb_journal, isc_dpb_lc_ctype, isc_dpb_lc_messages, isc_dpb_license, isc_dpb_no_garbage_collect, isc_dpb_no_reserve, isc_dpb_num_buffers, isc_dpb_number_of_users, isc_dpb_old_dump_id, isc_dpb_old_file, isc_dpb_old_file_size, isc_dpb_old_num_files, isc_dpb_old_start_file, isc_dpb_old_start_page, isc_dpb_old_start_seqno, isc_dpb_online, isc_dpb_online_dump, isc_dpb_overwrite, isc_dpb_page_size, isc_dpb_password, isc_dpb_password_enc, isc_dpb_quit_log, isc_dpb_reserved, isc_dpb_sec_attach, isc_dpb_set_db_charset, isc_dpb_set_db_readonly, isc_dpb_set_db_sql_dialect, isc_dpb_set_page_buffers, isc_dpb_shutdown, isc_dpb_shutdown_delay, isc_dpb_sql_dialect, isc_dpb_sql_role_name, isc_dpb_sweep, isc_dpb_sweep_interval, isc_dpb_sys_user_name, isc_dpb_sys_user_name_enc, isc_dpb_trace, isc_dpb_user_name, isc_dpb_verify, isc_dpb_version1, isc_dpb_wal_backup_dir, isc_dpb_wal_bufsize, isc_dpb_wal_chkptlen, isc_dpb_wal_grp_cmt_wait, isc_dpb_wal_numbufs, isc_dpb_working_directory, isc_dpb_no_db_triggers, isc_dpb_nolinger, isc_info_active_tran_count, isc_info_end, isc_info_truncated, isc_info_sql_stmt_type, isc_info_sql_get_plan, isc_info_sql_records, isc_info_req_select_count, isc_info_req_insert_count, isc_info_req_update_count, isc_info_req_delete_count, 
isc_info_blob_total_length, isc_info_blob_max_segment, fb_info_page_contents, isc_info_active_transactions, isc_info_allocation, isc_info_attachment_id, isc_info_backout_count, isc_info_base_level, isc_info_bpage_errors, isc_info_creation_date, isc_info_cur_log_part_offset, isc_info_cur_logfile_name, isc_info_current_memory, isc_info_db_class, isc_info_db_id, isc_info_db_provider, isc_info_db_read_only, isc_info_db_size_in_pages, isc_info_db_sql_dialect, isc_info_delete_count, isc_info_dpage_errors, isc_info_expunge_count, isc_info_fetches, isc_info_firebird_version, isc_info_forced_writes, isc_info_implementation, isc_info_insert_count, isc_info_ipage_errors, isc_info_isc_version, isc_info_license, isc_info_limbo, isc_info_logfile, isc_info_marks, isc_info_max_memory, isc_info_next_transaction, isc_info_no_reserve, isc_info_num_buffers, isc_info_num_wal_buffers, isc_info_ods_minor_version, isc_info_ods_version, isc_info_oldest_active, isc_info_oldest_snapshot, isc_info_oldest_transaction, isc_info_page_errors, isc_info_page_size, isc_info_ppage_errors, isc_info_purge_count, isc_info_read_idx_count, isc_info_read_seq_count, isc_info_reads, isc_info_record_errors, isc_info_set_page_buffers, isc_info_sql_stmt_commit, isc_info_sql_stmt_ddl, isc_info_sql_stmt_delete, isc_info_sql_stmt_exec_procedure, isc_info_sql_stmt_get_segment, isc_info_sql_stmt_insert, isc_info_sql_stmt_put_segment, isc_info_sql_stmt_rollback, isc_info_sql_stmt_savepoint, isc_info_sql_stmt_select, isc_info_sql_stmt_select_for_upd, isc_info_sql_stmt_set_generator, isc_info_sql_stmt_start_trans, isc_info_sql_stmt_update, isc_info_sweep_interval, isc_info_tpage_errors, isc_info_tra_access, isc_info_tra_concurrency, isc_info_tra_consistency, isc_info_tra_id, isc_info_tra_isolation, isc_info_tra_lock_timeout, isc_info_tra_no_rec_version, isc_info_tra_oldest_active, isc_info_tra_oldest_interesting, isc_info_tra_oldest_snapshot, isc_info_tra_read_committed, isc_info_tra_readonly, isc_info_tra_readwrite, 
isc_info_tra_rec_version, fb_info_tra_dbpath, isc_info_update_count, isc_info_user_names, isc_info_version, isc_info_wal_avg_grpc_size, isc_info_wal_avg_io_size, isc_info_wal_buffer_size, isc_info_wal_ckpt_length, isc_info_wal_cur_ckpt_interval, isc_info_wal_grpc_wait_usecs, isc_info_wal_num_commits, isc_info_wal_num_io, isc_info_wal_prv_ckpt_fname, isc_info_wal_prv_ckpt_poffset, isc_info_wal_recv_ckpt_fname, isc_info_wal_recv_ckpt_poffset, isc_info_window_turns, isc_info_writes, isc_tpb_autocommit, isc_tpb_commit_time, isc_tpb_concurrency, isc_tpb_consistency, isc_tpb_exclusive, isc_tpb_ignore_limbo, isc_tpb_lock_read, isc_tpb_lock_timeout, isc_tpb_lock_write, isc_tpb_no_auto_undo, isc_tpb_no_rec_version, isc_tpb_nowait, isc_tpb_protected, isc_tpb_read, isc_tpb_read_committed, isc_tpb_rec_version, isc_tpb_restart_requests, isc_tpb_shared, isc_tpb_verb_time, isc_tpb_version3, isc_tpb_wait, isc_tpb_write, b, s, ord2, int2byte, mychr, mybytes, myunicode, mylong, StringType, IntType, LongType, FloatType, ListType, UnicodeType, TupleType, xrange, charset_map, #isc_sqlcode, isc_sql_interprete, fb_interpret, isc_dsql_execute_immediate, XSQLDA_PTR, ISC_SHORT, ISC_LONG, ISC_SCHAR, ISC_UCHAR, ISC_QUAD, SHRT_MIN, SHRT_MAX, USHRT_MAX, INT_MIN, INT_MAX, LONG_MIN, LONG_MAX, SQL_TEXT, SQL_VARYING, SQL_SHORT, SQL_LONG, SQL_FLOAT, SQL_DOUBLE, SQL_D_FLOAT, SQL_TIMESTAMP, SQL_BLOB, SQL_ARRAY, SQL_QUAD, SQL_TYPE_TIME, SQL_TYPE_DATE, SQL_INT64, SQL_BOOLEAN, SUBTYPE_NUMERIC, SUBTYPE_DECIMAL, MAX_BLOB_SEGMENT_SIZE, ISC_INT64, XSQLVAR, ISC_TEB, RESULT_VECTOR, ISC_STATUS, ISC_STATUS_ARRAY, ISC_STATUS_PTR, ISC_EVENT_CALLBACK, ISC_ARRAY_DESC, blr_varying, blr_varying2, blr_text, blr_text2, blr_short, blr_long, blr_int64, blr_float, blr_d_float, blr_double, blr_timestamp, blr_sql_date, blr_sql_time, SQLDA_version1, isc_segment, isc_db_handle, isc_tr_handle, isc_stmt_handle, isc_blob_handle, fbclient_API, ) PYTHON_MAJOR_VER = sys.version_info[0] __version__ = '1.6.1' apilevel = '2.0' 
threadsafety = 1 paramstyle = 'qmark' def load_api(fb_library_name=None): """Initializes bindings to Firebird Client Library unless they are already initialized. Called automatically by :func:`fdb.connect` and :func:`fdb.create_database`. :param string fb_library_name: (optional) Path to Firebird Client Library. When it's not specified, FDB does its best to locate appropriate client library. :returns: :class:`fdb.ibase.fbclient_API` instance. """ if not hasattr(sys.modules[__name__],'api'): setattr(sys.modules[__name__],'api',fbclient_API(fb_library_name)) return getattr(sys.modules[__name__],'api') # Exceptions required by Python Database API class Warning(Exception): """Exception raised for important warnings like data truncations while inserting, etc.""" pass class Error(Exception): """Exception that is the base class of all other error exceptions. You can use this to catch all errors with one single 'except' statement. Warnings are not considered errors and thus should not use this class as base.""" pass class InterfaceError(Error): """Exception raised for errors that are related to the database interface rather than the database itself.""" pass class DatabaseError(Error): "Exception raised for errors that are related to the database." pass #def __init__(self,msg, sqlcode=0,error_code=0): #self.sqlcode = sqlcode #self.error_code = error_code #self.msg = msg class DataError(DatabaseError): """Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc.""" pass class OperationalError(DatabaseError): """Exception raised for errors that are related to the database's operation and not necessarily under the control of the programmer, e.g. 
an unexpected disconnect occurs, the data source name is not found, a transaction could not be processed, a memory allocation error occurred during processing, etc.""" pass class IntegrityError(DatabaseError): """Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails.""" pass class InternalError(DatabaseError): """Exception raised when the database encounters an internal error, e.g. the cursor is not valid anymore, the transaction is out of sync, etc.""" pass class ProgrammingError(DatabaseError): """Exception raised for programming errors, e.g. table not found or already exists, syntax error in the SQL statement, wrong number of parameters specified, etc.""" pass class NotSupportedError(DatabaseError): """Exception raised in case a method or database API was used which is not supported by the database""" pass class TransactionConflict(DatabaseError): pass # Named positional constants to be used as indices into the description # attribute of a cursor (these positions are defined by the DB API spec). 
# For example: # nameOfFirstField = cursor.description[0][kinterbasdb.DESCRIPTION_NAME] DESCRIPTION_NAME = 0 DESCRIPTION_TYPE_CODE = 1 DESCRIPTION_DISPLAY_SIZE = 2 DESCRIPTION_INTERNAL_SIZE = 3 DESCRIPTION_PRECISION = 4 DESCRIPTION_SCALE = 5 DESCRIPTION_NULL_OK = 6 # Types Required by Python DB-API 2.0 def Date(year, month, day): return datetime.date(year, month, day) def Time(hour, minute, second): return datetime.time(hour, minute, second) def Timestamp(year, month, day, hour, minute, second): # Required by DB-API 2.0; called by TimestampFromTicks below. return datetime.datetime(year, month, day, hour, minute, second) def DateFromTicks(ticks): return Date(*time.localtime(ticks)[:3]) def TimeFromTicks(ticks): return Time(*time.localtime(ticks)[3:6]) def TimestampFromTicks(ticks): return Timestamp(*time.localtime(ticks)[:6]) def Binary(b): return b class DBAPITypeObject: def __init__(self, *values): self.values = values def __cmp__(self, other): if other in self.values: return 0 if other < self.values: return 1 else: return -1 STRING = DBAPITypeObject(str) if PYTHON_MAJOR_VER == 3: BINARY = DBAPITypeObject(bytes) else: BINARY = DBAPITypeObject(str) NUMBER = DBAPITypeObject(int, decimal.Decimal) DATETIME = DBAPITypeObject(datetime.datetime, datetime.date, datetime.time) ROWID = DBAPITypeObject() _FS_ENCODING = sys.getfilesystemencoding() DIST_TRANS_MAX_DATABASES = 16 def bs(byte_array): return bytes(byte_array) if PYTHON_MAJOR_VER == 3 else ''.join((chr(c) for c in byte_array)) ISOLATION_LEVEL_READ_COMMITED_LEGACY = bs([isc_tpb_version3, isc_tpb_write, isc_tpb_wait, isc_tpb_read_committed, isc_tpb_no_rec_version]) ISOLATION_LEVEL_READ_COMMITED = bs([isc_tpb_version3, isc_tpb_write, isc_tpb_wait, isc_tpb_read_committed, isc_tpb_rec_version]) ISOLATION_LEVEL_REPEATABLE_READ = bs([isc_tpb_version3, isc_tpb_write, isc_tpb_wait, isc_tpb_concurrency]) ISOLATION_LEVEL_SNAPSHOT = ISOLATION_LEVEL_REPEATABLE_READ ISOLATION_LEVEL_SERIALIZABLE = bs([isc_tpb_version3, isc_tpb_write, isc_tpb_wait, isc_tpb_consistency]) ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY = ISOLATION_LEVEL_SERIALIZABLE ISOLATION_LEVEL_READ_COMMITED_RO = 
bs([isc_tpb_version3, isc_tpb_read, isc_tpb_wait, isc_tpb_read_committed, isc_tpb_rec_version]) # ODS constants ODS_FB_20 = 11.0 ODS_FB_21 = 11.1 ODS_FB_25 = 11.2 ODS_FB_30 = 12.0 # Translation tables d = dir(ibase) s = 'isc_info_db_impl_' q = [x for x in d if x.startswith(s) and x[len(s):] != 'last_value'] #: Dictionary to map Implementation codes to names IMPLEMENTATION_NAMES = dict(zip([getattr(ibase,x) for x in q],[x[len(s):] for x in q])) s = 'isc_info_db_code_' q = [x for x in d if x.startswith(s) and x[len(s):] != 'last_value'] #: Dictionary to map provider codes to names PROVIDER_NAMES = dict(zip([getattr(ibase,x) for x in q],[x[len(s):] for x in q])) s = 'isc_info_db_class_' q = [x for x in d if x.startswith(s) and x[len(s):] != 'last_value'] #: Dictionary to map database class codes to names DB_CLASS_NAMES = dict(zip([getattr(ibase,x) for x in q],[x[len(s):] for x in q])) # Private constants _SIZE_OF_SHORT = ctypes.sizeof(ctypes.c_short) _tenTo = [10 ** x for x in range(20)] if PYTHON_MAJOR_VER != 3: del x __xsqlda_cache = {} __tebarray_cache = {} _DATABASE_INFO_CODES_WITH_INT_RESULT = ( isc_info_allocation, isc_info_no_reserve, isc_info_db_sql_dialect, isc_info_ods_minor_version, isc_info_ods_version, isc_info_page_size, isc_info_current_memory, isc_info_forced_writes, isc_info_max_memory, isc_info_num_buffers, isc_info_sweep_interval, isc_info_limbo, isc_info_attachment_id, isc_info_fetches, isc_info_marks, isc_info_reads, isc_info_writes, isc_info_set_page_buffers, isc_info_db_read_only, isc_info_db_size_in_pages, isc_info_page_errors, isc_info_record_errors, isc_info_bpage_errors, isc_info_dpage_errors, isc_info_ipage_errors, isc_info_ppage_errors, isc_info_tpage_errors,frb_info_att_charset, isc_info_oldest_transaction, isc_info_oldest_active, isc_info_oldest_snapshot, isc_info_next_transaction, isc_info_active_tran_count,isc_info_db_class,isc_info_db_provider, ) _DATABASE_INFO_CODES_WITH_COUNT_RESULTS = ( isc_info_backout_count, 
isc_info_delete_count, isc_info_expunge_count, isc_info_insert_count, isc_info_purge_count, isc_info_read_idx_count, isc_info_read_seq_count, isc_info_update_count ) _DATABASE_INFO_CODES_WITH_TIMESTAMP_RESULT = (isc_info_creation_date,) _DATABASE_INFO__KNOWN_LOW_LEVEL_EXCEPTIONS = (isc_info_user_names, fb_info_page_contents, isc_info_active_transactions) def xsqlda_factory(size): if size in __xsqlda_cache: cls = __xsqlda_cache[size] else: class XSQLDA(ctypes.Structure): pass XSQLDA._fields_ = [ ('version', ISC_SHORT), ('sqldaid', ISC_SCHAR * 8), ('sqldabc', ISC_LONG), ('sqln', ISC_SHORT), ('sqld', ISC_SHORT), ('sqlvar', XSQLVAR * size), ] __xsqlda_cache[size] = XSQLDA cls = XSQLDA xsqlda = cls() xsqlda.version = SQLDA_version1 xsqlda.sqln = size return xsqlda def tebarray_factory(size): if size in __tebarray_cache: cls = __tebarray_cache[size] else: cls = ISC_TEB * size __tebarray_cache[size] = cls teb_array = cls() return teb_array buf_pointer = ctypes.POINTER(ctypes.c_char) def is_dead_proxy(obj): return isinstance(obj,weakref.ProxyType) and not dir(obj) def b2u(st, charset): "Decode to unicode if charset is defined. For conversion of result set data." if charset: return st.decode(charset) else: return st def p3fix(st, charset): """For P3 convert bytes to string using connection charset, P2 as is. For conversion of info results to native strings.""" if PYTHON_MAJOR_VER == 3: return st.decode(charset) else: return st def inc_pointer(pointer): t = type(pointer) p = ctypes.cast(pointer,ctypes.c_void_p) p.value += 1 return ctypes.cast(p,t) def bytes_to_bint(b): # Read as big endian len_b = len(b) if len_b == 1: fmt = 'B' elif len_b == 2: fmt = '>H' elif len_b == 4: fmt = '>L' elif len_b == 8: fmt = '>Q' else: raise InternalError return struct.unpack(fmt, b)[0] def bytes_to_int(b): # Read as little endian. 
len_b = len(b) if len_b == 1: fmt = 'b' elif len_b == 2: fmt = '<h' elif len_b == 4: fmt = '<l' elif len_b == 8: fmt = '<q' else: raise InternalError return struct.unpack(fmt, b)[0] def db_api_error(status_vector): return status_vector[0] == 1 and status_vector[1] > 0 def exception_from_status(error, status, preamble=None): msglist = [] msg = ctypes.create_string_buffer(512) if preamble: msglist.append(preamble) sqlcode = api.isc_sqlcode(status) error_code = status[1] msglist.append('- SQLCODE: %i' % sqlcode) #isc_sql_interprete(sqlcode, msg, 512) #if PYTHON_MAJOR_VER == 3: #### Todo: trouble? decode from connection charset? #msglist.append('- ' + (msg.value).decode('utf_8')) #else: #msglist.append('- ' + msg.value) pvector = ctypes.cast(ctypes.addressof(status), ISC_STATUS_PTR) while True: result = api.fb_interpret(msg, 512, pvector) if result != 0: if PYTHON_MAJOR_VER == 3: ### Todo: trouble? decode from connection charset? msglist.append('- ' + (msg.value).decode('utf_8')) else: msglist.append('- ' + msg.value) else: break return error('\n'.join(msglist), sqlcode, error_code) def build_dpb(user, password, sql_dialect, role, charset, buffers, force_write, no_reserve, db_key_scope, no_gc, no_db_triggers, no_linger): params = [int2byte(isc_dpb_version1)] def addString(codeAsByte, s): if PYTHON_MAJOR_VER == 3 or isinstance(s,UnicodeType): s = s.encode(charset_map.get(charset, charset)) sLen = len(s) if sLen >= 256: # Because the length is denoted in the DPB by a single byte. raise ProgrammingError("Individual component of database" " parameter buffer is too large. Components must be less" " than 256 bytes." ) myformat = 'cc%ds' % sLen # like 'cc50s' for a 50-byte string newEntry = struct.pack(myformat, int2byte(codeAsByte), int2byte(sLen), s) params.append(newEntry) def addByte(codeAsByte, value): if (not isinstance(value, (int, mylong)) or value < 0 or value > 255): raise ProgrammingError("The value for an integer DPB code must be" " an int or long with a value between 0 and 255." 
) newEntry = struct.pack('ccc', int2byte(codeAsByte), b('\x01'), int2byte(value)) params.append(newEntry) def addInt(codeAsByte, value): if not isinstance(value, (int, mylong)): raise ProgrammingError("The value for an integer DPB code must be" " an int or long." ) newEntry = struct.pack('=ccI', int2byte(codeAsByte), b('\x04'), value) params.append(newEntry) if user: addString(isc_dpb_user_name, user) if password: addString(isc_dpb_password, password) if role: addString(isc_dpb_sql_role_name, role) if sql_dialect: addByte(isc_dpb_sql_dialect, sql_dialect) if charset: addString(isc_dpb_lc_ctype, charset.upper()) if buffers: addInt(isc_dpb_num_buffers, buffers) if force_write: addByte(isc_dpb_force_write, force_write) if no_reserve: addByte(isc_dpb_no_reserve, no_reserve) if db_key_scope: addByte(isc_dpb_dbkey_scope, db_key_scope) if no_gc: addByte(isc_dpb_no_garbage_collect, no_gc) if no_db_triggers: addByte(isc_dpb_no_db_triggers, no_db_triggers) if no_linger: addByte(isc_dpb_nolinger, no_linger) return b('').join(params) def connect(dsn='', user=None, password=None, host=None, port=3050, database=None, sql_dialect=3, role=None, charset=None, buffers=None, force_write=None, no_reserve=None, db_key_scope=None, isolation_level=ISOLATION_LEVEL_READ_COMMITED, connection_class=None, fb_library_name=None, no_gc=None, no_db_triggers=None, no_linger=None): """ Establish a connection to database. :param dsn: Connection string in format [host[/port]]:database :param string user: User name. If not specified, fdb attempts to use ISC_USER envar. :param string password: User password. If not specified, fdb attempts to use ISC_PASSWORD envar. :param string host: Server host machine specification. :param integer port: Port used by Firebird server **(not used)**. :param string database: Database specification (file spec. or alias) :param sql_dialect: SQL Dialect for connection. :type sql_dialect: 1, 2 or 3 :param string role: User role. 
:param string charset: Character set for connection. :param integer buffers: Page cache size override for connection. :param integer force_write: Forced writes override for connection. :param integer no_reserve: Page space reservation override for connection. :param integer db_key_scope: DBKEY scope override for connection. :param isolation_level: Default transaction isolation level for connection **(not used)**. :type isolation_level: 0, 1, 2 or 3 :param connection_class: Custom connection class :type connection_class: subclass of :class:`Connection` :param string fb_library_name: Full path to Firebird client library. See :func:`~fdb.load_api` for details. :param integer no_gc: No Garbage Collection flag. :param integer no_db_triggers: No database triggers flag (FB 2.1). :param integer no_linger: No linger flag (FB3). :returns: Connection to database. :rtype: :class:`Connection` instance. :raises ProgrammingError: For bad parameter values. :raises DatabaseError: When connection cannot be established. .. important:: You may specify the database using either `dsn` or `database` (with optional `host`), but not both. Examples: .. code-block:: python con = fdb.connect(dsn='host:/path/database.fdb', user='sysdba', password='pass', charset='UTF8') con = fdb.connect(host='myhost', database='/path/database.fdb', user='sysdba', password='pass', charset='UTF8') """ load_api(fb_library_name) if connection_class is None: connection_class = Connection if not issubclass(connection_class,Connection): raise ProgrammingError("'connection_class' must be a subclass of Connection") if not user: user = os.environ.get('ISC_USER', None) if not password: password = os.environ.get('ISC_PASSWORD', None) if sql_dialect not in [1, 2, 3]: raise ProgrammingError("SQL Dialect must be either 1, 2 or 3") if ((not dsn and not host and not database) or (dsn and (host or database)) or (host and not database) ): raise ProgrammingError( "Must supply one of:\n" " 1. 
keyword argument dsn='host:/path/to/database'\n" " 2. both keyword arguments host='host' and" " database='/path/to/database'\n" " 3. only keyword argument database='/path/to/database'" ) if not dsn: if host and host.endswith(':'): raise ProgrammingError("Host must not end with a colon." " You should specify host='%s' rather than host='%s'." % (host[:-1], host) ) elif host: dsn = '%s:%s' % (host, database) else: dsn = database dsn = b(dsn,_FS_ENCODING) if charset: charset = charset.upper() dpb = build_dpb(user, password, sql_dialect, role, charset, buffers,force_write, no_reserve, db_key_scope, no_gc, no_db_triggers, no_linger) _isc_status = ISC_STATUS_ARRAY() _db_handle = isc_db_handle(0) api.isc_attach_database(_isc_status, len(dsn), dsn, _db_handle, len(dpb), dpb) if db_api_error(_isc_status): raise exception_from_status(DatabaseError, _isc_status, "Error while connecting to database:") return connection_class(_db_handle, dpb, sql_dialect, charset, isolation_level) def create_database(sql='', sql_dialect=3, dsn='', user=None, password=None, host=None, port=3050, database=None, page_size=None, length=None, charset=None, files=None, connection_class=None, fb_library_name=None): """ Creates a new database. Parameters could be specified either by supplied "CREATE DATABASE" statement, or set of database parameters. :param sql: "CREATE DATABASE" statement. :param sql_dialect: SQL Dialect for newly created database. :type sql_dialect: 1 or 3 :param dsn: Connection string in format [host[/port]]:database :param string user: User name. If not specified, fdb attempts to use ISC_USER envar. :param string password: User password. If not specified, fdb attempts to use ISC_PASSWORD envar. :param string host: Server host machine specification. :param integer port: Port used by Firebird server **(not used)**. :param string database: Database specification (file spec. or alias) :param integer page_size: Database page size. :param integer length: Database size in pages. 
:param string charset: Character set for connection. :param string files: Specification of secondary database files. :param connection_class: Custom connection class :type connection_class: subclass of :class:`Connection` :param string fb_library_name: Full path to Firebird client library. See :func:`~fdb.load_api` for details. :returns: Connection to the newly created database. :rtype: :class:`Connection` instance. :raises ProgrammingError: For bad parameter values. :raises DatabaseError: When database creation fails. Examples: .. code-block:: python con = fdb.create_database("create database '/temp/db.fdb' user 'sysdba' password 'pass'") con = fdb.create_database(dsn='/temp/db.fdb',user='sysdba',password='pass',page_size=8192) """ load_api(fb_library_name) if connection_class == None: connection_class = Connection if not issubclass(connection_class,Connection): raise ProgrammingError("'connection_class' must be subclass of Connection") # Database to create must be specified by either `sql` or other parameters. if sql: if isinstance(sql, myunicode): sql = sql.encode(_FS_ENCODING) else: if not user: user = os.environ.get('ISC_USER', None) if not password: password = os.environ.get('ISC_PASSWORD', None) if sql_dialect not in [1, 2, 3]: raise ProgrammingError("SQL Dialect must be either 1, 2 or 3") if ((not dsn and not host and not database) or (dsn and (host or database)) or (host and not database) ): raise ProgrammingError( "Must supply one of:\n" " 1. keyword argument dsn='host:/path/to/database'\n" " 2. both keyword arguments host='host' and" " database='/path/to/database'\n" " 3. only keyword argument database='/path/to/database'" ) if not dsn: if host and host.endswith(':'): raise ProgrammingError("Host must not end with a colon." " You should specify host='%s' rather than host='%s'."
% (host[:-1], host) ) elif host: dsn = '%s:%s' % (host, database) else: dsn = database dsn = b(dsn,_FS_ENCODING) # Parameter checks sql = "create database '%s' user '%s' password '%s'" % (dsn,user,password) if page_size: sql = '%s page_size %i' % (sql,page_size) if length: sql = '%s length %i' % (sql,length) if charset: sql = '%s default character set %s' % (sql,charset.upper()) if files: sql = '%s %s' % (sql,files) isc_status = ISC_STATUS_ARRAY(0) trans_handle = isc_tr_handle(0) db_handle = isc_db_handle(0) xsqlda = xsqlda_factory(1) # For yet unknown reason, the isc_dsql_execute_immediate segfaults when # NULL (None) is passed as XSQLDA, so we provide one here api.isc_dsql_execute_immediate(isc_status, db_handle, trans_handle, ctypes.c_ushort(len(sql)), sql, sql_dialect, ctypes.cast(ctypes.pointer(xsqlda),XSQLDA_PTR)) if db_api_error(isc_status): raise exception_from_status(DatabaseError, isc_status, "Error while creating database:") return connection_class(db_handle,sql_dialect=sql_dialect, charset=charset) class _cursor_weakref_callback(object): """Wraps callback function used in weakrefs so it's called only if still exists. """ def __init__(self, obj): self.__obj = weakref.ref(obj) def __call__(self, *args, **kwargs): self.__obj()._cursors.remove(*args, **kwargs) class _weakref_callback(object): """Wraps callback function used in weakrefs so it's called only if still exists. """ def __init__(self, func): self.__funcref = weakref.ref(func) def __call__(self, *args, **kwargs): func = self.__funcref() if func: func(*args, **kwargs) class TransactionContext(object): """Context Manager that manages transaction for object passed to constructor. Performs `rollback` if exception is thrown inside code block, otherwise performs `commit` at the end of block. .. note: :class:`~fdb.Transaction` acts as context manager and supports `with` statement directly. 
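The commit-on-success / rollback-on-exception behaviour described above can be sketched standalone with a stand-in transaction object (`DummyTx` and `Ctx` below are hypothetical illustrations mirroring the protocol, not part of fdb):

```python
# Stand-in transaction recording which lifecycle calls it receives.
class DummyTx:
    def __init__(self):
        self.log = []
    def begin(self):
        self.log.append('begin')
    def commit(self):
        self.log.append('commit')
    def rollback(self):
        self.log.append('rollback')

# Minimal re-implementation of the TransactionContext protocol.
class Ctx:
    def __init__(self, transaction):
        self.transaction = transaction
    def __enter__(self):
        self.transaction.begin()
        return self.transaction
    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self.transaction.commit()
        else:
            self.transaction.rollback()

ok = DummyTx()
with Ctx(ok):
    pass  # no exception -> commit

failed = DummyTx()
try:
    with Ctx(failed):
        raise RuntimeError('boom')  # exception -> rollback, then re-raised
except RuntimeError:
    pass
```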
Example:: with TransactionContext(my_transaction): cursor.execute('insert into tableA (x,y) values (?,?)',(x,y)) cursor.execute('insert into tableB (x,y) values (?,?)',(x,y)) """ #: Transaction-like object this instance manages. transaction = None def __init__(self,transaction): ":param transaction: Any object that supports `begin()`, `commit()` and `rollback()`." self.transaction = transaction def __enter__(self): self.transaction.begin() return self.transaction def __exit__(self, exc_type, exc_val, exc_tb): if exc_type == None: self.transaction.commit() else: self.transaction.rollback() class Connection(object): """ Represents a connection between the database client (the Python process) and the database server. .. important:: DO NOT create instances of this class directly! Use only :func:`connect` or :func:`create_database` to get Connection instances. """ # PEP 249 (Python DB API 2.0) extensions Warning = Warning Error = Error InterfaceError = InterfaceError DatabaseError = DatabaseError DataError = DataError OperationalError = OperationalError IntegrityError = IntegrityError InternalError = InternalError ProgrammingError = ProgrammingError NotSupportedError = NotSupportedError def __init__(self, db_handle, dpb=None, sql_dialect=3, charset=None, isolation_level=ISOLATION_LEVEL_READ_COMMITED): """ :param db_handle: Database handle provided by factory function. :param dpb: Database Parameter Block associated with database handle. :param integer sql_dialect: SQL Dialect associated with database handle. :param string charset: Character set associated with database handle. 
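For reference, the engine-version detection performed in the constructor parses the first word of the `isc_info_firebird_version` string; that parsing can be sketched standalone (the version string below is a fabricated example of the usual `<platform>-V<version>` form):

```python
# Fabricated server version string in the usual '<platform>-V<version> ...' form.
verstr = 'WI-V2.5.6.27020 Firebird 2.5'
word = verstr.split()[0]
if word.find('V') > 0:
    _, version = word.split('V')   # release build
elif word.find('T') > 0:
    _, version = word.split('T')   # test build
else:
    version = '0.0.0.0'            # unknown build marker
parts = version.split('.')
engine_version = float('%s.%s' % (parts[0], parts[1]))
```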
""" if charset: self.__charset = charset.upper() else: self.__charset = None self._python_charset = charset_map.get(self.charset, self.charset) self._default_tpb = isolation_level # Main transaction self._main_transaction = Transaction([self], default_tpb=self._default_tpb) # ReadOnly ReadCommitted transaction self._query_transaction = Transaction([self], default_tpb=ISOLATION_LEVEL_READ_COMMITED_RO) self._transactions = [self._main_transaction,self._query_transaction] self.__precision_cache = {} self.__sqlsubtype_cache = {} self.__conduits = [] self.__group = None self.__schema = None self.__monitor = None self.__ods = None self.__sql_dialect = sql_dialect self._dpb = dpb self._isc_status = ISC_STATUS_ARRAY() self._db_handle = db_handle # Cursor for internal use self.__ic = self.query_transaction.cursor() self.__ic._set_as_internal() # Get Firebird engine version verstr = self.db_info(isc_info_firebird_version) x = verstr.split() if x[0].find('V') > 0: (x,self.__version) = x[0].split('V') elif x[0].find('T') > 0: (x,self.__version) = x[0].split('T') else: # Unknown version self.__version = '0.0.0.0' x = self.__version.split('.') self.__engine_version = float('%s.%s' % (x[0],x[1])) # self.__page_size = self.db_info(isc_info_page_size) def __remove_group(self, group_ref): self.__group = None def __ensure_group_membership(self, must_be_member, err_msg): if must_be_member: if self.group is None: raise ProgrammingError(err_msg) else: if self.group is not None: raise ProgrammingError(err_msg) def __check_attached(self): if self._db_handle == None: raise ProgrammingError("Connection object is detached from database") def __close(self, detach=True): if self._db_handle != None: if self.__schema: self.__schema._close() if self.__monitor: self.__monitor._close() self.__ic.close() del self.__ic try: for conduit in self.__conduits: conduit.close() for transaction in self._transactions: transaction.default_action = 'rollback' # Required by Python DB API 2.0 transaction.close() 
if detach: api.isc_detach_database(self._isc_status, self._db_handle) finally: self._db_handle = None def __enter__(self): return self def __exit__(self, *args): self.close() def __get_sql_dialect(self): return self.__sql_dialect def __get_main_transaction(self): return self._main_transaction def __get_query_transaction(self): return self._query_transaction def __get_transactions(self): return tuple(self._transactions) def __get_closed(self): return self._db_handle == None def __get_server_version(self): return self.db_info(isc_info_version) def __get_firebird_version(self): return self.db_info(isc_info_firebird_version) def __get_version(self): return self.__version def __get_engine_version(self): return self.__engine_version def __get_default_tpb(self): return self._default_tpb def __set_default_tpb(self, value): self._default_tpb = _validateTPB(value) def __get_charset(self): return self.__charset def __set_charset(self, value): # More informative error message: raise AttributeError("A connection's 'charset' property can be" " specified upon Connection creation as a keyword argument to" " fdb.connect, but it cannot be modified thereafter." 
) def __get_group(self): if self.__group: try: return self.__group() except: return None else: return None def __get_ods(self): if not self.__ods: self.__ods = float('%d.%d' % (self.ods_version,self.ods_minor_version)) return self.__ods def __get_ods_version(self): return self.db_info(isc_info_ods_version) def __get_ods_minor_version(self): return self.db_info(isc_info_ods_minor_version) def __get_page_size(self): return self.__page_size def __get_page_cache_size(self): return self.db_info(isc_info_num_buffers) def __get_database_name(self): return self.db_info(isc_info_db_id)[0] def __get_site_name(self): return self.db_info(isc_info_db_id)[1] def __get_attachment_id(self): return self.db_info(isc_info_attachment_id) def __get_io_stats(self): return self.db_info([isc_info_reads,isc_info_writes,isc_info_fetches,isc_info_marks]) def __get_current_memory(self): return self.db_info(isc_info_current_memory) def __get_max_memory(self): return self.db_info(isc_info_max_memory) def __get_pages_allocated(self): return self.db_info(isc_info_allocation) def __get_database_sql_dialect(self): return self.db_info(isc_info_db_sql_dialect) def __get_sweep_interval(self): return self.db_info(isc_info_sweep_interval) def __get_space_reservation(self): value = self.db_info(isc_info_no_reserve) return value == 0 def __get_forced_writes(self): value = self.db_info(isc_info_forced_writes) return value == 1 def __get_creation_date(self): return self.db_info(isc_info_creation_date) def __get_implementation_id(self): return self.db_info(isc_info_implementation)[0] def __get_provider_id(self): return self.db_info(isc_info_db_provider) def __get_db_class_id(self): return self.db_info(isc_info_db_class) def __get_oit(self): return self.db_info(isc_info_oldest_transaction) def __get_oat(self): return self.db_info(isc_info_oldest_active) def __get_ost(self): return self.db_info(isc_info_oldest_snapshot) def __get_next_transaction(self): return self.db_info(isc_info_next_transaction) def 
__parse_date(self, raw_value): "Convert raw data to datetime.date" nday = bytes_to_int(raw_value) + 678882 century = (4 * nday - 1) // 146097 nday = 4 * nday - 1 - 146097 * century day = nday // 4 nday = (4 * day + 3) // 1461 day = 4 * day + 3 - 1461 * nday day = (day + 4) // 4 month = (5 * day - 3) // 153 day = 5 * day - 3 - 153 * month day = (day + 5) // 5 year = 100 * century + nday if month < 10: month += 3 else: month -= 9 year += 1 return year, month, day def __parse_time(self, raw_value): "Convert raw data to datetime.time" n = bytes_to_int(raw_value) s = n // 10000 m = s // 60 h = m // 60 m = m % 60 s = s % 60 return (h, m, s, (n % 10000) * 100) def _get_schema(self): if not self.__schema: self.__schema = schema.Schema() self.__schema.bind(self) self.__schema._set_as_internal() return self.__schema def _get_monitor(self): if not self.__monitor: if self.ods >= ODS_FB_21: self.__monitor = monitor.Monitor() self.__monitor.bind(self) self.__monitor._set_as_internal() else: raise ProgrammingError("Monitoring tables are available only " \ "for databases with ODS 11.1 and higher.") return self.__monitor def _get_array_sqlsubtype(self, relation, column): subtype = self.__sqlsubtype_cache.get((relation,column)) if subtype is not None: return subtype self.__ic.execute("SELECT FIELD_SPEC.RDB$FIELD_SUB_TYPE" " FROM RDB$FIELDS FIELD_SPEC, RDB$RELATION_FIELDS REL_FIELDS" " WHERE" " FIELD_SPEC.RDB$FIELD_NAME = REL_FIELDS.RDB$FIELD_SOURCE" " AND REL_FIELDS.RDB$RELATION_NAME = ?" " AND REL_FIELDS.RDB$FIELD_NAME = ?", (p3fix(relation,self._python_charset), p3fix(column,self._python_charset))) result = self.__ic.fetchone() self.__ic.close() if result: self.__sqlsubtype_cache[(relation,column)] = result[0] return result[0] def _determine_field_precision(self, sqlvar): if sqlvar.relname_length == 0 or sqlvar.sqlname_length == 0: # Either or both field name and relation name are not provided, # so we cannot determine field precision. 
It's normal situation # for example for queries with dynamically computed fields return 0 # Special case for automatic RDB$DB_KEY fields. if ((sqlvar.sqlname_length == 6 and sqlvar.sqlname == 'DB_KEY') or (sqlvar.sqlname_length == 10 and sqlvar.sqlname == 'RDB$DB_KEY')): return 0 precision = self.__precision_cache.get((sqlvar.relname, sqlvar.sqlname)) if precision is not None: return precision # First, try table self.__ic.execute("SELECT FIELD_SPEC.RDB$FIELD_PRECISION" " FROM RDB$FIELDS FIELD_SPEC," " RDB$RELATION_FIELDS REL_FIELDS" " WHERE" " FIELD_SPEC.RDB$FIELD_NAME =" " REL_FIELDS.RDB$FIELD_SOURCE" " AND REL_FIELDS.RDB$RELATION_NAME = ?" " AND REL_FIELDS.RDB$FIELD_NAME = ?", (p3fix(sqlvar.relname,self._python_charset), p3fix(sqlvar.sqlname,self._python_charset))) result = self.__ic.fetchone() self.__ic.close() if result: self.__precision_cache[(sqlvar.relname,sqlvar.sqlname)] = result[0] return result[0] # Next, try stored procedure output parameter self.__ic.execute("SELECT FIELD_SPEC.RDB$FIELD_PRECISION" " FROM RDB$FIELDS FIELD_SPEC," " RDB$PROCEDURE_PARAMETERS REL_FIELDS" " WHERE" " FIELD_SPEC.RDB$FIELD_NAME =" " REL_FIELDS.RDB$FIELD_SOURCE" " AND RDB$PROCEDURE_NAME = ?" " AND RDB$PARAMETER_NAME = ?" " AND RDB$PARAMETER_TYPE = 1", (p3fix(sqlvar.relname,self._python_charset), p3fix(sqlvar.sqlname,self._python_charset))) result = self.__ic.fetchone() self.__ic.close() if result: self.__precision_cache[(sqlvar.relname,sqlvar.sqlname)] = result[0] return result[0] # We ran out of options return 0 def drop_database(self): """Drops the database to which this connection is attached. Unlike plain file deletion, this method behaves responsibly, in that it removes shadow files and other ancillary files for this database. :raises ProgrammingError: When connection is a member of a :class:`ConnectionGroup`. :raises DatabaseError: When error is returned from server. 
""" self.__ensure_group_membership(False, "Cannot drop database via" " connection that is part of a ConnectionGroup.") saved_handle = isc_db_handle(self._db_handle.value) self.__close(detach=False) api.isc_drop_database(self._isc_status, saved_handle) if db_api_error(self._isc_status): self._db_handle = saved_handle raise exception_from_status(DatabaseError, self._isc_status, "Error while dropping database:") def execute_immediate(self, sql): """Executes a statement in context of :attr:`main_transaction` without caching its prepared form. Automatically starts transaction if it's not already started. :param string sql: SQL statement to execute. .. important:: **The statement must not be of a type that returns a result set.** In most cases (especially cases in which the same statement – perhaps a parameterized statement – is executed repeatedly), it is better to create a cursor using the connection’s cursor method, then execute the statement using one of the cursor’s execute methods. :param string sql: SQL statement to execute. :raises ProgrammingError: When connection is closed. :raises DatabaseError: When error is returned from server. """ self.__check_attached() self.main_transaction.execute_immediate(sql) def database_info(self, info_code, result_type, page_number = None): """Wraps the Firebird C API function `isc_database_info`. For documentation, see the IB 6 API Guide section entitled "Requesting information about an attachment" (p. 51). Note that this method is a VERY THIN wrapper around the FB C API function `isc_database_info`. This method does NOT attempt to interpret its results except with regard to whether they are a string or an integer. For example, requesting `isc_info_user_names` will return a string containing a raw succession of length-name pairs. A thicker wrapper might interpret those raw results and return a Python tuple, but it would need to handle a multitude of special cases in order to cover all possible isc_info_* items. 
:param integer info_code: One of the `isc_info_*` constants. :param string result_type: Must be either ‘s’ if you expect a string result, or ‘i’ if you expect an integer result. :param integer page_number: Page number for `fb_info_page_contents` info code. :raises DatabaseError: When error is returned from server. :raises OperationalError: When returned information is bigger than SHRT_MAX. :raises InternalError: On unexpected processing condition. :raises ValueError: On illegal `result_type` value. .. seealso:: Extracting data with the database_info function is rather clumsy. See :meth:`db_info` for higher-level means of accessing the same information. .. note:: Some of the information available through this method would be more easily retrieved with the Services API (see submodule :mod:`fdb.services`). """ self.__check_attached() buf_size = 256 if info_code != fb_info_page_contents else self.page_size + 10 request_buffer = bs([info_code]) if info_code == fb_info_page_contents: request_buffer += int_to_bytes(2, 2) request_buffer += int_to_bytes(page_number, 4) while True: res_buf = int2byte(0) * buf_size api.isc_database_info(self._isc_status, self._db_handle, len(request_buffer), request_buffer, len(res_buf), res_buf) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while requesting database information:") i = buf_size - 1 while i >= 0: if res_buf[i] != mychr(0): break else: i -= 1 if ord2(res_buf[i]) == isc_info_truncated: if buf_size < SHRT_MAX: buf_size *= 2 if buf_size > SHRT_MAX: buf_size = SHRT_MAX continue else: raise OperationalError("Result is too large to fit into" " buffer of size SHRT_MAX, yet underlying info " " function only accepts buffers with size <= SHRT_MAX.") else: break if ord2(res_buf[i]) != isc_info_end: raise InternalError("Exited request loop successfully, but" " res_buf[i] != isc_info_end.") if (request_buffer[0] != res_buf[0]) and (info_code != isc_info_active_transactions): # 
isc_info_active_transactions with no active transactions returns empty buffer # and does not follow this rule, so we'll report it only for other codes. raise InternalError("Result code does not match request code.") if result_type.upper() == 'I': return bytes_to_int(res_buf[3:3 + bytes_to_int(res_buf[1:3])]) elif (result_type.upper() == 'S' and info_code in _DATABASE_INFO__KNOWN_LOW_LEVEL_EXCEPTIONS): # The result buffers for a few request codes don't follow the generic # conventions, so we need to return their full contents rather than # omitting the initial infrastructural bytes. return ctypes.string_at(res_buf, i) elif result_type.upper() == 'S': return ctypes.string_at(res_buf[3:], bytes_to_int(res_buf[1:3])) else: raise ValueError("Unknown result type requested " "(must be 'i' or 's').") def db_info(self, request): """ Higher-level convenience wrapper around the :meth:`database_info` method that parses the output of `database_info` into Python-friendly objects instead of returning raw binary buffers in the case of complex result types. :param request: Single `fdb.isc_info_*` info request code or a sequence of such codes. :returns: Mapping of (info request code -> result). :raises ValueError: When requested code is not recognized. :raises OperationalError: On unexpected processing condition. """ def _extractDatabaseInfoCounts(buf): # Extract a raw binary sequence # of (unsigned short, signed int) pairs into # a corresponding Python dictionary. uShortSize = struct.calcsize(' 0: if PYTHON_MAJOR_VER == 3: slen = struct.unpack('B', int2byte(buf[pos]))[0] else: slen = struct.unpack('B', buf[pos])[0] pos += 1 item = buf[pos:pos + slen] pos += slen items.append(p3fix(item,self._python_charset)) count -= 1 results[infoCode] = tuple(items) elif infoCode == isc_info_implementation: # (IB 6 API Guide page 52) buf = self.database_info(infoCode, 's') # Skip the first four bytes. 
pos = 1 if PYTHON_MAJOR_VER == 3: implNumber = struct.unpack('B', int2byte(buf[pos]))[0] else: implNumber = struct.unpack('B', buf[pos])[0] pos += 1 if PYTHON_MAJOR_VER == 3: classNumber = struct.unpack('B', int2byte(buf[pos]))[0] else: classNumber = struct.unpack('B', buf[pos])[0] pos += 1 results[infoCode] = (implNumber, classNumber) elif infoCode in (isc_info_version, isc_info_firebird_version): # (IB 6 API Guide page 53) buf = self.database_info(infoCode, 's') # Skip the first byte. pos = 1 if PYTHON_MAJOR_VER == 3: versionStringLen = (struct.unpack('B', int2byte(buf[pos]))[0]) else: versionStringLen = (struct.unpack('B', buf[pos])[0]) pos += 1 versionString = buf[pos:pos + versionStringLen] results[infoCode] = p3fix(versionString,self._python_charset) elif infoCode == isc_info_user_names: # (IB 6 API Guide page 54) # # The isc_info_user_names results buffer does not exactly match # the format declared on page 54 of the IB 6 API Guide. # The buffer is formatted as a sequence of clusters, each of # which begins with the byte isc_info_user_names, followed by a # two-byte cluster length, followed by a one-byte username # length, followed by a single username. # I don't understand why the lengths are represented # redundantly (the two-byte cluster length is always one # greater than the one-byte username length), but perhaps it's # an attempt to adhere to the general format of an information # cluster declared on page 51 while also [trying, but failing # to] adhere to the isc_info_user_names-specific format # declared on page 54. 
buf = self.database_info(infoCode, 's') usernames = [] pos = 0 while pos < len(buf): ### Todo: check Python version differences, merge as possible if PYTHON_MAJOR_VER == 3: if buf[pos] != isc_info_user_names: raise OperationalError('While trying to service' ' isc_info_user_names request, found unexpected' ' results buffer contents at position %d of [%s]' % (pos, buf) ) pos += 1 # The two-byte cluster length: nameClusterLen = (struct.unpack(' number of connections by that user. res = {} for un in usernames: res[un] = res.get(un, 0) + 1 results[infoCode] = res elif infoCode == isc_info_active_transactions: buf = self.database_info(infoCode, 's') transactions = [] uShortSize = struct.calcsize('= 0 or attr.startswith('_') or attr in ['close','bind'] or hasattr(self,attr)): val = getattr(self.__schema,attr) if callable(val): setattr(self,attr,val) def _get_schema(self): return self.__schema class EventBlock(object): """Represents Firebird API structure for block of events. .. warning: Internaly used class not intended for direct use. 
""" #: List of registered event names event_names = [] #: length of internal event buffer buf_length = 0 #: Event ID event_id = 0 #: Event buffer event_buf = None #: Result buffer result_buf = None def __init__(self,queue,db_handle,event_names): self.__first = True def callback(result, length, updated): ctypes.memmove(result, updated, length) self.__queue.put((ibase.OP_RECORD_AND_REREGISTER, self)) return 0 self.__queue = weakref.proxy(queue) self._db_handle = db_handle self._isc_status = ISC_STATUS_ARRAY(0) self.event_names = list(event_names) self.__results = RESULT_VECTOR(0) self.__closed = False self.__callback = ISC_EVENT_CALLBACK(callback) self.event_buf = ctypes.pointer(ISC_UCHAR(0)) self.result_buf = ctypes.pointer(ISC_UCHAR(0)) self.buf_length = 0 self.event_id = ISC_LONG(0) self.buf_length = api.isc_event_block(ctypes.pointer(self.event_buf), ctypes.pointer(self.result_buf), *[b(x) for x in event_names]) def _begin(self): self.__wait_for_events() def __lt__(self,other): return self.event_id.value < other.event_id.value def __wait_for_events(self): api.isc_que_events(self._isc_status,self._db_handle,self.event_id, self.buf_length,self.event_buf, self.__callback,self.result_buf) if db_api_error(self._isc_status): self.close() raise exception_from_status(DatabaseError, self._isc_status, "Error while waiting for events:") def count_and_reregister(self): "Count event occurences and reregister interest in futrther notifications." result = {} api.isc_event_counts(self.__results, self.buf_length, self.event_buf, self.result_buf) if self.__first: # Ignore the first call, it's for setting up the table self.__first = False self.__wait_for_events() return None for i in xrange(len(self.event_names)): result[self.event_names[i]] = int(self.__results[i]) self.__wait_for_events() return result def close(self): "Close this block canceling managed events." 
if not self.closed: api.isc_cancel_events(self._isc_status,self._db_handle,self.event_id) self.__closed = True del self.__callback if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while canceling events:") def __get_closed(self): return self.__closed def __del__(self): self.close() #: (ReadOnly) True if block is closed for business closed = property(__get_closed) class EventConduit(object): """Represents a conduit through which database event notifications will flow into the Python program. .. important:: DO NOT create instances of this class directly! Use only :meth:`Connection.event_conduit` to get EventConduit instances. Notifications of any events are not accumulated until :meth:`begin` method is called. From the moment the :meth:`begin` method is called, notifications of any events that occur will accumulate asynchronously within the conduit’s internal queue until the conduit is closed either explicitly (via the :meth:`close` method) or implicitly (via garbage collection). `EventConduit` implements context manager protocol to call method :meth:`begin` and :meth:`close` automatically. Example: .. code-block:: python with connection.event_conduit( ('event_a', 'event_b') ) as conduit: events = conduit.wait() process_events(events) """ def __init__(self,db_handle,event_names): """ :param db_handle: Database handle. :param event_names: List of strings that represent event names. 
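Internally the constructor splits the requested names into blocks of at most 15, since a single `isc_event_block` call handles at most 15 events; the `izip_longest` grouping it uses can be sketched standalone:

```python
from itertools import zip_longest  # izip_longest on Python 2

event_names = ['evt_%02d' % i for i in range(20)]
# Re-using one iterator 15 times makes zip_longest yield chunks of 15,
# padding the last chunk with None; the inner filter drops the padding.
blocks = [[x for x in y if x] for y in zip_longest(*[iter(event_names)] * 15)]
```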
""" self._db_handle = db_handle self._isc_status = ISC_STATUS_ARRAY(0) self.__event_names = list(event_names) self.__events = {}.fromkeys(self.__event_names,0) self.__event_blocks = [] self.__closed = False self.__queue = ibase.PriorityQueue() self.__events_ready = threading.Event() self.__blocks = [[x for x in y if x] for y in izip_longest(*[iter(event_names)]*15)] self.__initialized = False def __enter__(self): self.begin() return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() def begin(self): """Starts listening for events. Must be called directly or through context manager interface.""" def event_process(queue): while True: operation, data = queue.get() if operation == ibase.OP_RECORD_AND_REREGISTER: events = data.count_and_reregister() if events: for key,value in events.items(): self.__events[key] += value self.__events_ready.set() elif operation == ibase.OP_DIE: return self.__initialized = True self.__process_thread = threading.Thread(target=event_process, args=(self.__queue,)) self.__process_thread.start() for block_events in self.__blocks: event_block = EventBlock(self.__queue, self._db_handle, block_events) self.__event_blocks.append(event_block) event_block._begin() def wait(self,timeout=None): """Wait for events. Blocks the calling thread until at least one of the events occurs, or the specified timeout (if any) expires. :param timeout: Number of seconds (use a float to indicate fractions of seconds). If not even one of the relevant events has occurred after timeout seconds, this method will unblock and return None. The default timeout is infinite. :type timeout: integer or float :returns: `None` if the wait timed out, otherwise a dictionary that maps `event_name -> event_occurrence_count`. Example: .. code-block:: python >>>conduit = connection.event_conduit( ('event_a', 'event_b') ) >>>conduit.begin() >>>conduit.wait() { 'event_a': 1, 'event_b': 0 } In the example above `event_a` occurred once and `event_b` did not occur at all. 
""" if not self.__initialized: raise ProgrammingError("Event collection not initialized. It's necessary to call begin().") if not self.closed: self.__events_ready.wait(timeout) return self.__events.copy() def flush(self): """Clear any event notifications that have accumulated in the conduit’s internal queue. """ if not self.closed: self.__events_ready.clear() self.__events = {}.fromkeys(self.__event_names,0) def close(self): """Cancels the standing request for this conduit to be notified of events. After this method has been called, this EventConduit object is useless, and should be discarded. """ if not self.closed: self.__queue.put((ibase.OP_DIE,self)) self.__process_thread.join() for block in self.__event_blocks: block.close() self.__closed = True def __get_closed(self): return self.__closed def __del__(self): self.close() #: (Read Only) (boolean) True if conduit is closed. closed = property(__get_closed) class PreparedStatement(object): """Represents a prepared statement, an "inner" database cursor, which is used to manage the SQL statement execution and context of a fetch operation. .. important:: DO NOT create instances of this class directly! Use only :meth:`Cursor.prep` to get PreparedStatement instances. .. note:: PreparedStatements are bound to :class:`Cursor` instance that created them, and using them with other Cursor would report an error. """ #: Constant for internal use by this class. Do not change! RESULT_SET_EXHAUSTED = 100 #: Constant for internal use by this class. Do not change! NO_FETCH_ATTEMPTED_YET = -1 #: :class:`Cursor` instance that manages this PreparedStatement. Do not change! cursor = None #: (integer) An integer code that can be matched against the statement #: type constants in the isc_info_sql_stmt_* series. Do not change! statement_type = 0 #: The number of input parameters the statement requires. Do not change! n_input_params = 0 #: The number of output fields the statement produces. Do not change! 
n_output_params = 0 def __init__(self, operation, cursor, internal=True): self.__sql = operation self.__internal = internal if internal: self.cursor = weakref.proxy(cursor, _weakref_callback(self.__cursor_deleted)) else: self.cursor = cursor self._stmt_handle = None self._isc_status = ISC_STATUS_ARRAY() # Internal XSQLDA structure for output values. self._out_sqlda = xsqlda_factory(10) # Internal XSQLDA structure for input values. self._in_sqlda = xsqlda_factory(10) # Internal list to save original input SQLDA structures when they has # to temporarily augmented. self._in_sqlda_save = [] # (integer) An integer code that can be matched against the statement # type constants in the isc_info_sql_stmt_* series. self.statement_type = None self.__streamed_blobs = [] self.__blob_readers = [] self.__executed = False self.__prepared = False self.__closed = False self.__description = None self.__output_cache = None self._last_fetch_status = ISC_STATUS(self.NO_FETCH_ATTEMPTED_YET) connection = self.cursor._connection self.__charset = connection.charset self.__python_charset = connection._python_charset self.__sql_dialect = connection.sql_dialect # allocate statement handle self._stmt_handle = isc_stmt_handle(0) api.isc_dsql_allocate_statement(self._isc_status, connection._db_handle, self._stmt_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while allocating SQL statement:") # prepare statement op = b(operation,self.__python_charset) api.isc_dsql_prepare(self._isc_status, self.cursor._transaction._tr_handle, self._stmt_handle, len(op),op, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while preparing SQL statement:") # Determine statement type info = b(' ') * 20 api.isc_dsql_sql_info(self._isc_status, self._stmt_handle, 1, bs([isc_info_sql_stmt_type]), len(info), info) if 
db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement type:") if ord2(info[0]) != isc_info_sql_stmt_type: raise InternalError("Cursor.execute, determine statement type:\n" "first byte must be 'isc_info_sql_stmt_type'") self.statement_type = bytes_to_int(info[3:3 + bytes_to_int(info[1:3])]) # Init XSQLDA for input parameters api.isc_dsql_describe_bind(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._in_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement parameters:") if self._in_sqlda.sqld > self._in_sqlda.sqln: self._in_sqlda = xsqlda_factory(self._in_sqlda.sqld) api.isc_dsql_describe_bind(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._in_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement parameters:") # The number of input parameters the statement requires. self.n_input_params = self._in_sqlda.sqld # record original type and size information so it can be restored for # subsequent executions (mind the implicit string conversions!) 
for sqlvar in self._in_sqlda.sqlvar[:self.n_input_params]: self._in_sqlda_save.append((sqlvar.sqltype, sqlvar.sqllen)) # Init output XSQLDA api.isc_dsql_describe(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement output:") if self._out_sqlda.sqld > self._out_sqlda.sqln: self._out_sqlda = xsqlda_factory(self._out_sqlda.sqld) api.isc_dsql_describe(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement output:") # The number of output fields the statement produces. self.n_output_params = self._out_sqlda.sqld self.__coerce_XSQLDA(self._out_sqlda) self.__prepared = True self._name = None def __cursor_deleted(self,obj): self.cursor = None def __get_name(self): return self._name def __set_name(self,name): if self._name: raise ProgrammingError("Cursor's name has already been declared") self._set_cursor_name(name) def __get_closed(self): return self.__closed def __get_plan(self): buf_size = 256 while True: info = b(' ') * buf_size api.isc_dsql_sql_info(self._isc_status, self._stmt_handle, 2, bs([isc_info_sql_get_plan,isc_info_end]), len(info), info) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL plan:") if ord2(info[0]) == isc_info_truncated: if buf_size < SHRT_MAX: buf_size *= 2 if buf_size > SHRT_MAX: buf_size = SHRT_MAX continue else: return "Plan is too big" else: break if ord2(info[0]) == isc_info_end: return None if ord2(info[0]) != isc_info_sql_get_plan: raise InternalError("Unexpected code in result buffer while" " querying SQL plan.") size = bytes_to_uint(info[1:_SIZE_OF_SHORT + 1]) # Skip first
byte: a new line ### Todo: Better handling of P version specifics result = ctypes.string_at(info[_SIZE_OF_SHORT + 2:], size - 1) if PYTHON_MAJOR_VER == 3: return b2u(result,self.__python_charset) #return result.decode(charset_map.get(self.__charset,self.__charset)) else: return result def __get_sql(self): return self.__sql def __is_fixed_point(self, dialect, data_type, subtype, scale): return ((data_type in [SQL_SHORT, SQL_LONG, SQL_INT64] and (subtype or scale) ) or ((dialect < 3) and scale and (data_type in [SQL_DOUBLE, SQL_D_FLOAT])) ) def __get_external_data_type_name(self, dialect, data_type, subtype, scale): if data_type == SQL_TEXT: return 'CHAR' elif data_type == SQL_VARYING: return 'VARCHAR' elif self.__is_fixed_point(dialect, data_type, subtype, scale): if subtype == SUBTYPE_NUMERIC: return 'NUMERIC' elif subtype == SUBTYPE_DECIMAL: return 'DECIMAL' else: return 'NUMERIC/DECIMAL' elif data_type == SQL_SHORT: return 'SMALLINT' elif data_type == SQL_LONG: return 'INTEGER' elif data_type == SQL_INT64: return 'BIGINT' elif data_type == SQL_FLOAT: return 'FLOAT' elif data_type in [SQL_DOUBLE, SQL_D_FLOAT]: return 'DOUBLE' elif data_type == SQL_TIMESTAMP: return 'TIMESTAMP' elif data_type == SQL_TYPE_DATE: return 'DATE' elif data_type == SQL_TYPE_TIME: return 'TIME' elif data_type == SQL_BLOB: return 'BLOB' elif data_type == SQL_BOOLEAN: return 'BOOLEAN' else: return 'UNKNOWN' def __get_internal_data_type_name(self, data_type): if data_type == SQL_TEXT: return 'SQL_TEXT' elif data_type == SQL_VARYING: return 'SQL_VARYING' elif data_type == SQL_SHORT: return 'SQL_SHORT' elif data_type == SQL_LONG: return 'SQL_LONG' elif data_type == SQL_INT64: return 'SQL_INT64' elif data_type == SQL_FLOAT: return 'SQL_FLOAT' elif data_type in [SQL_DOUBLE, SQL_D_FLOAT]: return 'SQL_DOUBLE' elif data_type == SQL_TIMESTAMP: return 'SQL_TIMESTAMP' elif data_type == SQL_TYPE_DATE: return 'SQL_TYPE_DATE' elif data_type == SQL_TYPE_TIME: return 'SQL_TYPE_TIME' elif data_type == 
SQL_BLOB: return 'SQL_BLOB' elif data_type == SQL_BOOLEAN: return 'SQL_BOOLEAN' else: return 'UNKNOWN' def __get_description(self): if not self.__description: desc = [] if self.__prepared and (self._out_sqlda.sqld > 0): for sqlvar in self._out_sqlda.sqlvar[:self._out_sqlda.sqld]: # Field name (or alias) sqlname = p3fix(sqlvar.sqlname[:sqlvar.sqlname_length], self.__python_charset) alias = p3fix(sqlvar.aliasname[:sqlvar.aliasname_length], self.__python_charset) if alias != sqlname: sqlname = alias # Type information intsize = sqlvar.sqllen vartype = sqlvar.sqltype & ~1 scale = sqlvar.sqlscale precision = 0 if vartype in [SQL_TEXT, SQL_VARYING]: vtype = StringType dispsize = sqlvar.sqllen elif (vartype in [SQL_SHORT, SQL_LONG, SQL_INT64] and (sqlvar.sqlsubtype or scale)): vtype = decimal.Decimal precision = (self.cursor._connection._determine_field_precision(sqlvar)) dispsize = 20 elif vartype == SQL_SHORT: vtype = IntType dispsize = 6 elif vartype == SQL_LONG: vtype = IntType dispsize = 11 elif vartype == SQL_INT64: vtype = LongType dispsize = 20 elif vartype in [SQL_FLOAT, SQL_DOUBLE, SQL_D_FLOAT]: # Special case, dialect 1 DOUBLE/FLOAT # could be Fixed point if (self.__sql_dialect < 3) and scale: vtype = decimal.Decimal precision = (self.cursor._connection._determine_field_precision(sqlvar)) else: vtype = FloatType dispsize = 17 elif vartype == SQL_BLOB: scale = sqlvar.sqlsubtype vtype = StringType dispsize = 0 elif vartype == SQL_TIMESTAMP: vtype = datetime.datetime dispsize = 22 elif vartype == SQL_TYPE_DATE: vtype = datetime.date dispsize = 10 elif vartype == SQL_TYPE_TIME: vtype = datetime.time dispsize = 11 elif vartype == SQL_ARRAY: vtype = ListType dispsize = -1 elif vartype == SQL_BOOLEAN: vtype = bool dispsize = 5 else: vtype = None dispsize = -1 desc.append(tuple([sqlname, vtype, dispsize, intsize, precision, scale, bool(sqlvar.sqltype & 1)])) self.__description = tuple(desc) return self.__description def __get_rowcount(self): result = -1 if 
(self.__executed and self.statement_type in [isc_info_sql_stmt_select, isc_info_sql_stmt_insert, isc_info_sql_stmt_update, isc_info_sql_stmt_delete]): info = b(' ') * 64 api.isc_dsql_sql_info(self._isc_status, self._stmt_handle, 2, bs([isc_info_sql_records, isc_info_end]), len(info), info) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining rowcount:") if ord2(info[0]) != isc_info_sql_records: raise InternalError("Cursor.get_rowcount:\n" "first byte must be 'isc_info_sql_records'") res_walk = 3 short_size = ctypes.sizeof(ctypes.c_short) while ord2(info[res_walk]) != isc_info_end: cur_count_type = ord2(info[res_walk]) res_walk += 1 size = bytes_to_uint(info[res_walk:res_walk + short_size]) res_walk += short_size count = bytes_to_uint(info[res_walk:res_walk + size]) if ((cur_count_type == isc_info_req_select_count and self.statement_type == isc_info_sql_stmt_select) or (cur_count_type == isc_info_req_insert_count and self.statement_type == isc_info_sql_stmt_insert) or (cur_count_type == isc_info_req_update_count and self.statement_type == isc_info_sql_stmt_update) or (cur_count_type == isc_info_req_delete_count and self.statement_type == isc_info_sql_stmt_delete) ): result = count res_walk += size return result def _parse_date(self, raw_value): "Convert raw data to datetime.date" nday = bytes_to_int(raw_value) + 678882 century = (4 * nday - 1) // 146097 nday = 4 * nday - 1 - 146097 * century day = nday // 4 nday = (4 * day + 3) // 1461 day = 4 * day + 3 - 1461 * nday day = (day + 4) // 4 month = (5 * day - 3) // 153 day = 5 * day - 3 - 153 * month day = (day + 5) // 5 year = 100 * century + nday if month < 10: month += 3 else: month -= 9 year += 1 return year, month, day def _parse_time(self, raw_value): "Convert raw data to datetime.time" n = bytes_to_int(raw_value) s = n // 10000 m = s // 60 h = m // 60 m = m % 60 s = s % 60 return (h, m, s, (n % 10000) * 100) def _convert_date(self, v): # Convert 
datetime.date to BLR format data i = v.month + 9 jy = v.year + (i // 12) - 1 jm = i % 12 c = jy // 100 jy -= 100 * c j = ((146097 * c) // 4 + (1461 * jy) // 4 + (153 * jm + 2) // 5 + v.day - 678882) return int_to_bytes(j, 4) def _convert_time(self, v): # Convert datetime.time to BLR format time t = ((v.hour * 3600 + v.minute * 60 + v.second) * 10000 + v.microsecond // 100) return int_to_bytes(t, 4) def _convert_timestamp(self, v): # Convert datetime.datetime or datetime.date # to BLR format timestamp if isinstance(v, datetime.datetime): return self._convert_date(v.date()) + self._convert_time(v.time()) elif isinstance(v, datetime.date): return self._convert_date(v) + self._convert_time(datetime.time()) else: raise ValueError("datetime.datetime or datetime.date expected") def _check_integer_range(self, value, dialect, data_type, subtype, scale): if data_type == SQL_SHORT: vmin = SHRT_MIN vmax = SHRT_MAX elif data_type == SQL_LONG: vmin = INT_MIN vmax = INT_MAX elif data_type == SQL_INT64: vmin = LONG_MIN vmax = LONG_MAX if (value < vmin) or (value > vmax): msg = """numeric overflow: value %s (%s scaled for %d decimal places) is of too great a magnitude to fit into its internal storage type %s, which has range [%s,%s].""" % (str(value), self.__get_external_data_type_name(dialect, data_type, subtype, scale), scale, self.__get_internal_data_type_name(data_type), str(vmin), str(vmax)) raise ProgrammingError(msg, -802) def __coerce_XSQLDA(self, xsqlda): """Allocate space for SQLVAR data. 
""" for sqlvar in xsqlda.sqlvar[:self._out_sqlda.sqld]: if sqlvar.sqltype & 1: sqlvar.sqlind = ctypes.pointer(ISC_SHORT(0)) vartype = sqlvar.sqltype & ~1 if vartype in [SQL_TEXT, SQL_VARYING]: sqlvar.sqldata = ctypes.create_string_buffer(sqlvar.sqllen + 2) elif vartype == SQL_SHORT: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_LONG: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_INT64: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_FLOAT: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_DOUBLE: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_D_FLOAT: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_BLOB: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_TIMESTAMP: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_TYPE_DATE: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_TYPE_TIME: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_ARRAY: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) elif vartype == SQL_BOOLEAN: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen),buf_pointer) else: pass def __XSQLDA2Tuple(self, xsqlda): """Move data from output XSQLDA to result tuple. 
""" values = [] for sqlvar in xsqlda.sqlvar[:xsqlda.sqld]: value = '' vartype = sqlvar.sqltype & ~1 scale = sqlvar.sqlscale # NULL handling if ((sqlvar.sqltype & 1) != 0) and (bool(sqlvar.sqlind) and sqlvar.sqlind.contents.value == -1): value = None elif vartype == SQL_TEXT: value = ctypes.string_at(sqlvar.sqldata,sqlvar.sqllen) #value = sqlvar.sqldata[:sqlvar.sqllen] ### Todo: verify handling of P version differences if ((self.__charset or PYTHON_MAJOR_VER == 3) and sqlvar.sqlsubtype != 1): # non OCTETS value = b2u(value,self.__python_charset) # CHAR with multibyte encoding requires special handling if sqlvar.sqlsubtype in (4, 69): # UTF8 and GB18030 reallength = sqlvar.sqllen // 4 elif sqlvar.sqlsubtype == 3: # UNICODE_FSS reallength = sqlvar.sqllen // 3 else: reallength = sqlvar.sqllen value = value[:reallength] elif vartype == SQL_VARYING: size = bytes_to_uint(sqlvar.sqldata[:2]) #value = ctypes.string_at(sqlvar.sqldata[2],2+size) ### Todo: verify handling of P version differences if PYTHON_MAJOR_VER == 3: value = bytes(sqlvar.sqldata[2:2 + size]) else: value = str(sqlvar.sqldata[2:2 + size]) if ((self.__charset or PYTHON_MAJOR_VER == 3) and sqlvar.sqlsubtype != 1): # non OCTETS value = b2u(value,self.__python_charset) elif vartype == SQL_BOOLEAN: value = bool(bytes_to_int(sqlvar.sqldata.contents.value)) elif vartype in [SQL_SHORT, SQL_LONG, SQL_INT64]: value = bytes_to_int(sqlvar.sqldata[:sqlvar.sqllen]) # It's scalled integer? 
if (sqlvar.sqlsubtype or scale): value = decimal.Decimal(value) / _tenTo[abs(scale)] elif vartype == SQL_TYPE_DATE: yyyy, mm, dd = self._parse_date(sqlvar.sqldata[:sqlvar.sqllen]) value = datetime.date(yyyy, mm, dd) elif vartype == SQL_TYPE_TIME: h, m, s, ms = self._parse_time(sqlvar.sqldata[:sqlvar.sqllen]) value = datetime.time(h, m, s, ms) elif vartype == SQL_TIMESTAMP: yyyy, mm, dd = self._parse_date(sqlvar.sqldata[:4]) h, m, s, ms = self._parse_time(sqlvar.sqldata[4:sqlvar.sqllen]) value = datetime.datetime(yyyy, mm, dd, h, m, s, ms) elif vartype == SQL_FLOAT: value = struct.unpack('f', sqlvar.sqldata[:sqlvar.sqllen])[0] elif vartype == SQL_DOUBLE: value = struct.unpack('d', sqlvar.sqldata[:sqlvar.sqllen])[0] elif vartype == SQL_BLOB: val = sqlvar.sqldata[:sqlvar.sqllen] blobid = ISC_QUAD(bytes_to_uint(val[:4]), bytes_to_uint(val[4:sqlvar.sqllen])) # Check if stream BLOB is requested instead materialized one use_stream = False if self.__streamed_blobs: # Get the BLOB name sqlname = p3fix(sqlvar.sqlname[:sqlvar.sqlname_length], self.__python_charset) alias = p3fix(sqlvar.aliasname[:sqlvar.aliasname_length], self.__python_charset) if alias != sqlname: sqlname = alias if sqlname in self.__streamed_blobs: use_stream = True if use_stream: # Stream BLOB value = BlobReader(blobid,self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.sqlsubtype == 1, self.__charset) self.__blob_readers.append(value) else: # Materialized BLOB blob_handle = isc_blob_handle() api.isc_open_blob2(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, blob_handle, blobid, 0, None) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_open_blob2:") # Get BLOB total length and max. 
size of segment result = ctypes.cast(ctypes.create_string_buffer(20), buf_pointer) api.isc_blob_info(self._isc_status, blob_handle, 2, bs([isc_info_blob_total_length, isc_info_blob_max_segment]), 20, result) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_blob_info:") offset = 0 while bytes_to_uint(result[offset]) != isc_info_end: code = bytes_to_uint(result[offset]) offset += 1 if code == isc_info_blob_total_length: length = bytes_to_uint(result[offset:offset + 2]) blob_length = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_max_segment: length = bytes_to_uint(result[offset:offset + 2]) segment_size = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 # Load BLOB allow_incomplete_segment_read = False status = ISC_STATUS(0) blob = ctypes.create_string_buffer(blob_length) bytes_read = 0 bytes_actually_read = ctypes.c_ushort(0) while bytes_read < blob_length: status = api.isc_get_segment(self._isc_status, blob_handle, bytes_actually_read, min(segment_size, blob_length - bytes_read), ctypes.byref( blob, bytes_read)) if status != 0: if ((status == isc_segment) and allow_incomplete_segment_read): bytes_read += bytes_actually_read.value else: raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_get_segment:") else: bytes_read += bytes_actually_read.value # Finish api.isc_close_blob(self._isc_status, blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_otput_blob/isc_close_blob:") value = blob.raw if ((self.__charset or PYTHON_MAJOR_VER == 3) and sqlvar.sqlsubtype == 1): value = b2u(value,self.__python_charset) elif vartype == SQL_ARRAY: value = [] val = sqlvar.sqldata[:sqlvar.sqllen] arrayid = ISC_QUAD(bytes_to_uint(val[:4]), bytes_to_uint(val[4:sqlvar.sqllen])) arraydesc = ISC_ARRAY_DESC(0) sqlsubtype = 
self.cursor._connection._get_array_sqlsubtype(sqlvar.relname, sqlvar.sqlname) api.isc_array_lookup_bounds(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.relname, sqlvar.sqlname, arraydesc) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_array/isc_array_lookup_bounds:") value_type = arraydesc.array_desc_dtype value_scale = arraydesc.array_desc_scale value_size = arraydesc.array_desc_length if value_type in (blr_varying,blr_varying2): value_size += 2 dimensions = [] total_num_elements = 1 for dimension in xrange(arraydesc.array_desc_dimensions): bounds = arraydesc.array_desc_bounds[dimension] dimensions.append((bounds.array_bound_upper+1)-bounds.array_bound_lower) total_num_elements *= dimensions[dimension] total_size = total_num_elements * value_size buf = ctypes.create_string_buffer(total_size) value_buffer = ctypes.cast(buf, buf_pointer) tsize = ISC_LONG(total_size) api.isc_array_get_slice(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, arrayid, arraydesc, value_buffer, tsize) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_array/isc_array_get_slice:") (value,bufpos) = self.__extract_db_array_to_list(value_size, value_type, sqlsubtype, value_scale, 0, dimensions, value_buffer,0) values.append(value) return tuple(values) def __extract_db_array_to_list(self,esize,dtype,subtype,scale,dim,dimensions, buf,bufpos): """Extracts ARRAY column data from buffer to Python list(s).
""" value = [] if dim == len(dimensions)-1: for i in xrange(dimensions[dim]): if dtype in (blr_text,blr_text2): val = ctypes.string_at(buf[bufpos:bufpos+esize],esize) ### Todo: verify handling of P version differences if ((self.__charset or PYTHON_MAJOR_VER == 3) and subtype != 1): # non OCTETS val = b2u(val,self.__python_charset) # CHAR with multibyte encoding requires special handling if subtype in (4, 69): # UTF8 and GB18030 reallength = esize // 4 elif subtype == 3: # UNICODE_FSS reallength = esize // 3 else: reallength = esize val = val[:reallength] elif dtype in (blr_varying,blr_varying2): val = ctypes.string_at(buf[bufpos:bufpos+esize]) if ((self.__charset or PYTHON_MAJOR_VER == 3) and subtype != 1): # non OCTETS val = b2u(val,self.__python_charset) elif dtype in (blr_short,blr_long,blr_int64): val = bytes_to_int(buf[bufpos:bufpos+esize]) if (subtype or scale): val = decimal.Decimal(val) / _tenTo[abs(256-scale)] elif dtype == blr_float: val = struct.unpack('f', buf[bufpos:bufpos+esize])[0] elif dtype in (blr_d_float,blr_double): val = struct.unpack('d', buf[bufpos:bufpos+esize])[0] elif dtype == blr_timestamp: yyyy, mm, dd = self._parse_date(buf[bufpos:bufpos+4]) h, m, s, ms = self._parse_time(buf[bufpos+4:bufpos+esize]) val = datetime.datetime(yyyy, mm, dd, h, m, s, ms) elif dtype == blr_sql_date: yyyy, mm, dd = self._parse_date(buf[bufpos:bufpos+esize]) val = datetime.date(yyyy, mm, dd) elif dtype == blr_sql_time: h, m, s, ms = self._parse_time(buf[bufpos:bufpos+esize]) val = datetime.time(h, m, s, ms) else: raise OperationalError("Unsupported Firebird ARRAY subtype: %i" % dtype) value.append(val) bufpos += esize else: for i in xrange(dimensions[dim]): (val,bufpos) = self.__extract_db_array_to_list(esize,dtype,subtype,scale,dim+1,dimensions,buf,bufpos) value.append(val) return (value,bufpos) def __copy_list_to_db_array(self,esize,dtype,subtype,scale,dim,dimensions, value,buf,bufpos): """Copies Python list(s) to ARRRAY column data buffer. 
""" valuebuf = None if dtype in (blr_text,blr_text2): valuebuf = ctypes.create_string_buffer(bs([0]),esize) elif dtype in (blr_varying,blr_varying2): valuebuf = ctypes.create_string_buffer(bs([0]),esize) elif dtype in (blr_short,blr_long,blr_int64): if esize == 2: valuebuf = ISC_SHORT(0) elif esize == 4: valuebuf = ISC_LONG(0) elif esize == 8: valuebuf = ISC_INT64(0) else: raise OperationalError("Unsupported number type") elif dtype == blr_float: valuebuf = ctypes.create_string_buffer(bs([0]),esize) elif dtype in (blr_d_float,blr_double): valuebuf = ctypes.create_string_buffer(bs([0]),esize) elif dtype == blr_timestamp: valuebuf = ctypes.create_string_buffer(bs([0]),esize) elif dtype == blr_sql_date: valuebuf = ctypes.create_string_buffer(bs([0]),esize) elif dtype == blr_sql_time: valuebuf = ctypes.create_string_buffer(bs([0]),esize) else: raise OperationalError("Unsupported Firebird ARRAY subtype: %i" % dtype) self.__fill_db_array_buffer(esize,dtype, subtype,scale, dim,dimensions, value,valuebuf, buf,bufpos) def __fill_db_array_buffer(self,esize,dtype,subtype,scale,dim,dimensions, value,valuebuf,buf,bufpos): if dim == len(dimensions)-1: for i in xrange(dimensions[dim]): if dtype in (blr_text,blr_text2, blr_varying,blr_varying2): val = value[i] if isinstance(val, UnicodeType): val = val.encode(self.__python_charset) if len(val) > esize: raise ValueError("ARRAY value of parameter is too long," " expected %i, found %i" % (esize, len(val))) valuebuf.value = val ctypes.memmove(ctypes.byref(buf,bufpos),valuebuf,esize) elif dtype in (blr_short,blr_long,blr_int64): if (subtype or scale): val = value[i] if isinstance(val, decimal.Decimal): val = int((val * _tenTo[256-abs(scale)]).to_integral()) elif isinstance(val, (int, mylong, float,)): val = int(val * _tenTo[256-abs(scale)]) else: raise TypeError('Objects of type %s are not ' ' acceptable input for' ' a fixed-point column.' 
% str(type(val))) valuebuf.value = val else: if esize == 2: valuebuf.value = value[i] elif esize == 4: valuebuf.value = value[i] elif esize == 8: valuebuf.value = value[i] else: raise OperationalError("Unsupported type") ctypes.memmove(ctypes.byref(buf,bufpos), ctypes.byref(valuebuf), esize) elif dtype == blr_float: valuebuf.value = struct.pack('f', value[i]) ctypes.memmove(ctypes.byref(buf,bufpos),valuebuf,esize) elif dtype in (blr_d_float,blr_double): valuebuf.value = struct.pack('d', value[i]) ctypes.memmove(ctypes.byref(buf,bufpos),valuebuf,esize) elif dtype == blr_timestamp: valuebuf.value = self._convert_timestamp(value[i]) ctypes.memmove(ctypes.byref(buf,bufpos),valuebuf,esize) elif dtype == blr_sql_date: valuebuf.value = self._convert_date(value[i]) ctypes.memmove(ctypes.byref(buf,bufpos),valuebuf,esize) elif dtype == blr_sql_time: valuebuf.value = self._convert_time(value[i]) ctypes.memmove(ctypes.byref(buf,bufpos),valuebuf,esize) else: raise OperationalError("Unsupported Firebird ARRAY subtype: %i" % dtype) bufpos += esize else: for i in xrange(dimensions[dim]): bufpos = self.__fill_db_array_buffer(esize,dtype,subtype, scale,dim+1, dimensions,value[i], valuebuf,buf,bufpos) return bufpos def __validate_array_value(self,dim,dimensions,value_type,sqlsubtype, value_scale,value): """Validates whether Python list(s) passed as ARRAY column value matches column definition (length, structure and value types). 
""" ok = isinstance(value,(ibase.ListType,ibase.TupleType)) ok = ok and (len(value) == dimensions[dim]) if not ok: return False for i in xrange(dimensions[dim]): if dim == len(dimensions)-1: # leaf: check value type if value_type in (blr_text,blr_text2, blr_varying,blr_varying2): ok = isinstance(value[i],(ibase.StringType,ibase.UnicodeType)) elif value_type in (blr_short,blr_long,blr_int64): if (sqlsubtype or value_scale): ok = isinstance(value[i],decimal.Decimal) else: ok = isinstance(value[i],ibase.IntType) elif value_type == blr_float: ok = isinstance(value[i],ibase.FloatType) elif value_type in (blr_d_float,blr_double): ok = isinstance(value[i],ibase.FloatType) elif value_type == blr_timestamp: ok = isinstance(value[i],datetime.datetime) elif value_type == blr_sql_date: ok = isinstance(value[i],datetime.date) elif value_type == blr_sql_time: ok = isinstance(value[i],datetime.time) else: ok = False else: # non-leaf: recurse down ok = ok and self.__validate_array_value(dim+1,dimensions,value_type, sqlsubtype,value_scale, value[i]) if not ok: return False return ok def __Tuple2XSQLDA(self, xsqlda, parameters): """Move data from parameters to input XSQLDA. """ for i in xrange(xsqlda.sqld): sqlvar = xsqlda.sqlvar[i] value = parameters[i] vartype = sqlvar.sqltype & ~1 scale = sqlvar.sqlscale # NULL handling if value == None: # Set the null flag whether sqlvar definition allows it or not, # to give BEFORE triggers to act on value without # our interference. 
if (sqlvar.sqltype & 1) == 0: # NULLs were not allowed, so set it allowed or FB will complain sqlvar.sqltype += 1 sqlvar.sqlind = ctypes.pointer(ISC_SHORT(-1)) sqlvar.sqldata = None else: # if sqlvar allows null, allocate the null flag # I don't know whether it's necessary, # but we'll do it anyway for safety if ((sqlvar.sqltype & 1) != 0): sqlvar.sqlind = ctypes.pointer(ISC_SHORT(0)) # Fill in value by type if ((vartype != SQL_BLOB and isinstance(value, (StringType, UnicodeType))) or vartype in [SQL_TEXT, SQL_VARYING]): # Place for Implicit Conversion of Input Parameters # to Strings if not isinstance(value, (UnicodeType,StringType,ibase.mybytes)): value = str(value) # Place for Implicit Conversion of Input Parameters # from Strings if isinstance(value, UnicodeType): value = value.encode(self.__python_charset) if vartype in [SQL_TEXT, SQL_VARYING] and len(value) > sqlvar.sqllen: raise ValueError("Value of parameter (%i) is too long," " expected %i, found %i" % (i, sqlvar.sqllen, len(value))) sqlvar.sqltype = SQL_TEXT | (sqlvar.sqltype & 1) sqlvar.sqllen = ctypes.c_short(len(value)) sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer(value)), buf_pointer) elif vartype in [SQL_SHORT, SQL_LONG, SQL_INT64]: # It's scalled integer? if (sqlvar.sqlsubtype or scale): if isinstance(value, decimal.Decimal): value = int( (value * _tenTo[abs(scale)]).to_integral()) elif isinstance(value, (int, mylong, float,)): value = int(value * _tenTo[abs(scale)]) else: raise TypeError('Objects of type %s are not ' ' acceptable input for' ' a fixed-point column.' 
% str(type(value))) self._check_integer_range(value, self.__sql_dialect, vartype, sqlvar.sqlsubtype, sqlvar.sqlscale) sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( int_to_bytes(value, sqlvar.sqllen))), buf_pointer) elif vartype == SQL_TYPE_DATE: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( self._convert_date(value))), buf_pointer) elif vartype == SQL_TYPE_TIME: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( self._convert_time(value))), buf_pointer) elif vartype == SQL_TIMESTAMP: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( self._convert_timestamp(value))), buf_pointer) elif vartype == SQL_FLOAT: sqlvar.sqldata = ctypes.cast( ctypes.pointer(ctypes.create_string_buffer( struct.pack('f', value))), buf_pointer) elif vartype == SQL_DOUBLE: sqlvar.sqldata = ctypes.cast( ctypes.pointer(ctypes.create_string_buffer( struct.pack('d', value))), buf_pointer) elif vartype == SQL_BOOLEAN: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( int_to_bytes(value, sqlvar.sqllen))), buf_pointer) elif vartype == SQL_BLOB: blobid = ISC_QUAD(0, 0) blob_handle = isc_blob_handle() if hasattr(value,'read'): # It seems we've got file-like object, use stream BLOB api.isc_create_blob2(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, blob_handle, blobid, 4, bs([ibase.isc_bpb_version1, ibase.isc_bpb_type,1, ibase.isc_bpb_type_stream])) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_create_blob2:") sqlvar.sqldata = ctypes.cast(ctypes.pointer(blobid), buf_pointer) blob = ctypes.create_string_buffer(MAX_BLOB_SEGMENT_SIZE) value_chunk = value.read(MAX_BLOB_SEGMENT_SIZE) blob.raw = ibase.b(value_chunk) while len(value_chunk) > 0: api.isc_put_segment(self._isc_status, blob_handle, len(value_chunk), ctypes.byref(blob) ) if 
db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_put_segment:") ctypes.memset(blob,0,MAX_BLOB_SEGMENT_SIZE) value_chunk = value.read(MAX_BLOB_SEGMENT_SIZE) blob.raw = ibase.b(value_chunk) api.isc_close_blob(self._isc_status, blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_close_blob:") else: # Non-stream BLOB if isinstance(value, myunicode): if sqlvar.sqlsubtype == 1: value = value.encode(self.__python_charset) else: raise TypeError('Unicode strings are not' ' acceptable input for' ' a non-textual BLOB column.') blob = ctypes.create_string_buffer(value) api.isc_create_blob2(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, blob_handle, blobid, 0, None) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_create_blob2:") sqlvar.sqldata = ctypes.cast(ctypes.pointer(blobid), buf_pointer) total_size = len(value) bytes_written_so_far = 0 bytes_to_write_this_time = MAX_BLOB_SEGMENT_SIZE while (bytes_written_so_far < total_size): if ( (total_size - bytes_written_so_far) < MAX_BLOB_SEGMENT_SIZE ): bytes_to_write_this_time = (total_size - bytes_written_so_far) api.isc_put_segment(self._isc_status, blob_handle, bytes_to_write_this_time, ctypes.byref(blob, bytes_written_so_far ) ) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_put_segment:") bytes_written_so_far += bytes_to_write_this_time api.isc_close_blob(self._isc_status, blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_close_blob:") elif vartype == SQL_ARRAY: arrayid = ISC_QUAD(0,0) arrayid_ptr = ctypes.pointer(arrayid) arraydesc = ISC_ARRAY_DESC(0) sqlvar.sqldata = 
ctypes.cast(ctypes.pointer(arrayid), buf_pointer) sqlsubtype = self.cursor._connection._get_array_sqlsubtype(sqlvar.relname, sqlvar.sqlname) api.isc_array_lookup_bounds(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.relname, sqlvar.sqlname, arraydesc) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_otput_array/isc_array_lookup_bounds:") value_type = arraydesc.array_desc_dtype value_scale = arraydesc.array_desc_scale value_size = arraydesc.array_desc_length if value_type in (blr_varying,blr_varying2): value_size += 2 dimensions = [] total_num_elements = 1 for dimension in xrange(arraydesc.array_desc_dimensions): bounds = arraydesc.array_desc_bounds[dimension] dimensions.append((bounds.array_bound_upper+1)-bounds.array_bound_lower) total_num_elements *= dimensions[dimension] total_size = total_num_elements * value_size # Validate value to make sure it matches the array structure if not self.__validate_array_value(0,dimensions,value_type, sqlsubtype, value_scale,value): raise ValueError("Incorrect ARRAY field value.") value_buffer = ctypes.create_string_buffer(total_size) tsize = ISC_LONG(total_size) self.__copy_list_to_db_array(value_size,value_type, sqlsubtype,value_scale, 0, dimensions, value,value_buffer,0) api.isc_array_put_slice(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, arrayid_ptr, arraydesc, value_buffer, tsize) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_otput_array/isc_array_put_slice:") sqlvar.sqldata = ctypes.cast(arrayid_ptr,buf_pointer) def _free_handle(self): if self._stmt_handle != None and not self.__closed: self.__executed = False self.__closed = True self.__output_cache = None self._name = None while len(self.__blob_readers) > 0: self.__blob_readers.pop().close() if self.statement_type == isc_info_sql_stmt_select: 
api.isc_dsql_free_statement(self._isc_status, self._stmt_handle, ibase.DSQL_close) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while releasing SQL statement handle:") def _close(self): if self._stmt_handle != None: while len(self.__blob_readers) > 0: self.__blob_readers.pop().close() stmt_handle = self._stmt_handle self._stmt_handle = None self.__executed = False self.__prepared = False self.__closed = True self.__description = None self.__output_cache = None self._name = None if is_dead_proxy(self.cursor): self.cursor = None connection = self.cursor._connection if self.cursor else None if (not connection) or (connection and not connection.closed): api.isc_dsql_free_statement(self._isc_status, stmt_handle, ibase.DSQL_drop) if (db_api_error(self._isc_status) and (self._isc_status[1] not in [335544528,335544485])): raise exception_from_status(DatabaseError, self._isc_status, "Error while closing SQL statement:") def _execute(self, parameters=None): # Bind parameters if parameters: if not isinstance(parameters, (ListType, TupleType)): raise TypeError("parameters must be list or tuple") if len(parameters) > self._in_sqlda.sqln: raise ProgrammingError("Statement parameter sequence contains" " %d parameters, but only %d are allowed" % (len(parameters), self._in_sqlda.sqln)) # Restore original type and size information for input parameters i = 0 for sqlvar in self._in_sqlda.sqlvar[:self.n_input_params]: sqlvar.sqltype, sqlvar.sqllen = self._in_sqlda_save[i] i += 1 self.__Tuple2XSQLDA(self._in_sqlda, parameters) xsqlda_in = ctypes.cast(ctypes.pointer(self._in_sqlda), XSQLDA_PTR) else: xsqlda_in = None # Execute the statement if ((self.statement_type == isc_info_sql_stmt_exec_procedure) and (self._out_sqlda.sqld > 0)): # NOTE: We have to pass xsqlda_out only for statements that return # single row xsqlda_out = ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR) api.isc_dsql_execute2(self._isc_status, 
self.cursor._transaction._tr_handle, self._stmt_handle, self.__sql_dialect, xsqlda_in, xsqlda_out) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while executing Stored Procedure:") # The result was returned immediately, but we have to provide it # via fetch*() calls as Python DB API requires. However, it's not # possible to call fetch on such an open statement, so we'll cache # the result and return it from fetchone instead of calling fetch. self.__output_cache = self.__XSQLDA2Tuple(self._out_sqlda) else: api.isc_dsql_execute2(self._isc_status, self.cursor._transaction._tr_handle, self._stmt_handle, self.__sql_dialect, xsqlda_in, None) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while executing SQL statement:") self.__output_cache = None self.__executed = True self.__closed = False self._last_fetch_status = ISC_STATUS(self.NO_FETCH_ATTEMPTED_YET) def _fetchone(self): if (self._last_fetch_status == self.RESULT_SET_EXHAUSTED and not self.__output_cache): return None if self.__executed: if self.__output_cache: if self._last_fetch_status == self.RESULT_SET_EXHAUSTED: self._free_handle() return None else: self._last_fetch_status = self.RESULT_SET_EXHAUSTED return self.__output_cache else: if self.n_output_params == 0: raise DatabaseError("Attempt to fetch row of results after a statement that does not produce a result set.") self._last_fetch_status = api.isc_dsql_fetch( self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if self._last_fetch_status == 0: return self.__XSQLDA2Tuple(self._out_sqlda) elif self._last_fetch_status == self.RESULT_SET_EXHAUSTED: self._free_handle() return None else: if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.fetchone:") elif self.__closed: raise ProgrammingError("Cannot fetch from closed cursor.") else: raise
ProgrammingError("Cannot fetch from this cursor because" " it has not executed a statement.") def _set_cursor_name(self, name): api.isc_dsql_set_cursor_name(self._isc_status, self._stmt_handle, b(name), 0) if db_api_error(self._isc_status): raise exception_from_status(OperationalError, self._isc_status, "Could not set cursor name:") self._name = name def set_stream_blob(self,blob_name): """Specify BLOB column(s) to work in `stream` mode instead of classic, materialized mode. :param blob_name: Single name or sequence of column names. Name must be in the format as it's stored in the database (refer to :attr:`description` for real value). :type blob_name: string or sequence .. important:: BLOB name is **permanently** added to the list of BLOBs handled as `stream` BLOBs by this instance. """ if isinstance(blob_name,ibase.StringType): self.__streamed_blobs.append(blob_name) else: self.__streamed_blobs.extend(blob_name) def __del__(self): if self._stmt_handle != None: self._close() def close(self): """Drops the resources associated with executed prepared statement, but keeps it prepared for another execution. """ self._free_handle() #: (Read Only) (string) SQL command this PreparedStatement executes. sql = property(__get_sql) #: (Read Only) Sequence of 7-item sequences. #: Each of these sequences contains information describing one result column: #: (name, type_code, display_size,internal_size, precision, scale, null_ok) description = property(__get_description) #: (Read Only) (integer) Specifies the number of rows that the last execution #: produced (for DQL statements like select) or affected (for DML statements #: like update or insert ). #: #: The attribute is -1 in case the statement was not yet executed #: or the rowcount of the operation is not determinable by the interface.
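The append/extend dispatch used by `set_stream_blob` above can be sketched in isolation. `add_stream_blobs` is a hypothetical helper, not part of fdb; it mimics the documented behavior for Python 3 strings:

```python
# Sketch of set_stream_blob's name accumulation: a single string is
# appended, any other sequence of names is extended into the list.
def add_stream_blobs(streamed_blobs, blob_name):
    if isinstance(blob_name, str):
        streamed_blobs.append(blob_name)   # single column name
    else:
        streamed_blobs.extend(blob_name)   # sequence of column names
    return streamed_blobs
```

Note that, as the docstring warns, the real method only ever adds names; nothing removes a column from the streamed-BLOB list.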
rowcount = property(__get_rowcount) #: (Read Only) (string) A string representation of the execution plan generated #: for this statement by the database engine’s optimizer. plan = property(__get_plan) #: (Read/Write) (string) Name for the SQL cursor. This property can be #: ignored entirely if you don’t need to use it. name = property(__get_name, __set_name) #: (Read Only) (boolean) True if closed. Note that closed means that PS #: statement handle was closed for further fetching, releasing server resources, #: but wasn't dropped, and could still be used for another execution. closed = property(__get_closed) class Cursor(object): """Represents a database cursor, which is used to execute SQL statements and manage the context of a fetch operation. .. important:: DO NOT create instances of this class directly! Use only :meth:`Connection.cursor`, :meth:`Transaction.cursor` and :meth:`ConnectionGroup.cursor` to get Cursor instances that operate in desired context. .. note:: Cursor is actually a high-level wrapper around :class:`PreparedStatement` instance(s) that handle the actual SQL statement execution and result management. .. tip:: Cursor supports the iterator protocol, yielding tuples of values like :meth:`fetchone`. """ #: (Read/Write) As required by the Python DB API 2.0 spec, the value of this #: attribute is observed with respect to the :meth:`fetchmany` method. However, #: changing the value of this attribute does not make any difference in fetch #: efficiency because the database engine only supports fetching a single row #: at a time. arraysize = 1 def __init__(self, connection, transaction): """ .. important:: The association between a Cursor and its :class:`Transaction` and :class:`Connection` is set when the Cursor is created, and cannot be changed during the lifetime of that Cursor. :param connection: :class:`Connection` instance this cursor should be bound to. :param transaction: :class:`Transaction` instance this cursor should be bound to.
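The iterator protocol the Cursor docstring mentions — `next()` delegating to `fetchone()` and signalling exhaustion with `StopIteration` — can be sketched with a self-contained stub. `FakeCursor` is hypothetical and feeds rows from a list instead of a real result set:

```python
# Minimal sketch of the iterator protocol Cursor implements:
# next() returns fetchone()'s row, or raises StopIteration on None.
class FakeCursor(object):
    def __init__(self, rows):
        self._rows = list(rows)

    def fetchone(self):
        # stand-in for the real fetch: pop rows until exhausted
        return self._rows.pop(0) if self._rows else None

    def next(self):
        row = self.fetchone()
        if row is None:
            raise StopIteration
        return row

    __next__ = next  # Python 3 spelling of the same protocol method

    def __iter__(self):
        return self
```

This is why `for row in cursor:` works on a real fdb Cursor after `execute`.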
""" self._connection = connection self._transaction = transaction self._ps = None # current prepared statement def next(self): """Return the next item from the container. Part of *iterator protocol*. :raises StopIteration: If there are no further items. """ row = self.fetchone() if row: return row else: raise StopIteration __next__ = next def __iter__(self): return self def __valid_ps(self): return (self._ps is not None) and not (isinstance(self._ps,weakref.ProxyType) and not dir(self._ps)) def __get_description(self): if self.__valid_ps(): return self._ps.description else: return [] def __get_rowcount(self): if self.__valid_ps(): return self._ps.rowcount else: return -1 def __get_name(self): if self.__valid_ps(): return self._ps._name else: return None def __set_name(self, name): if name == None or not isinstance(name, StringType): raise ProgrammingError("The name attribute can only be set to a" " string, and cannot be deleted") if not self.__valid_ps(): raise ProgrammingError("This cursor has not yet executed a" " statement, so setting its name attribute" " would be meaningless") if self._ps._name: raise ProgrammingError("Cursor's name has already been declared in" " context of currently executed statement") self._ps._set_cursor_name(name) def __get_plan(self): if self.__valid_ps(): return self._ps.plan else: return None def __get_connection(self): return self._connection def __get_transaction(self): return self._transaction def __connection_deleted(self,obj): self._connection = None def __ps_deleted(self,obj): self._ps = None def _set_as_internal(self): self._connection = weakref.proxy(self._connection, _weakref_callback(self.__connection_deleted)) def callproc(self, procname, parameters=None): """Call a stored database procedure with the given name. The result of the call is available through the standard fetchXXX() methods. :param string procname: Stored procedure name. :param parameters: (Optional) Sequence of parameters. 
Must contain one entry for each argument that the procedure expects. :type parameters: List or Tuple :returns: parameters, as required by Python DB API 2.0 Spec. :raises TypeError: When parameters is not List or Tuple. :raises ProgrammingError: When more parameters than expected are supplied. :raises DatabaseError: When error is returned by server. """ if not parameters: params = [] else: if isinstance(parameters, (ListType, TupleType)): params = parameters else: raise TypeError("callproc parameters must be List or Tuple") sql = ('EXECUTE PROCEDURE ' + procname + ' ' + ','.join('?' * len(params))) self.execute(sql, params) return parameters def close(self): """Close the cursor now (rather than whenever `__del__` is called). Closes any currently open :class:`PreparedStatement`. However, the cursor is still bound to :class:`Connection` and :class:`Transaction`, so it can still be used to execute SQL statements. Also the cache with prepared statements is left intact. .. warning:: FDB's implementation of Cursor somewhat violates the Python DB API 2.0, which requires that cursor will be unusable after call to `close`; and an Error (or subclass) exception should be raised if any operation is attempted with the cursor. If you take advantage of this anomaly, your code will be less portable to other Python DB API 2.0 compliant drivers. """ if is_dead_proxy(self._ps): self._ps = None if self._ps != None: self._ps.close() self._ps = None def execute(self, operation, parameters=None): """Prepare and execute a database operation (query or command). .. note:: Execution is handled by :class:`PreparedStatement` that is either supplied as `operation` parameter, or created internally when `operation` is a string. Internally created PreparedStatements are stored in cache for later reuse, when the same `operation` string is used again. :returns: self, so call to execute could be used as iterator. :param operation: SQL command specification.
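`callproc` above composes a plain `EXECUTE PROCEDURE` statement from the procedure name and one `?` placeholder per parameter. A minimal sketch of that composition (`build_callproc_sql` is an illustrative helper, not an fdb function):

```python
# Mirrors the sql = ('EXECUTE PROCEDURE ' + ...) expression in callproc.
def build_callproc_sql(procname, params):
    return ('EXECUTE PROCEDURE ' + procname + ' '
            + ','.join('?' * len(params)))
```

The composed statement is then handed to `execute`, so the procedure's output (if any) is read with the ordinary `fetch*()` methods.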
:type operation: string or :class:`PreparedStatement` instance :param parameters: (Optional) Sequence of parameters. Must contain one entry for each argument that the operation expects. :type parameters: List or Tuple :raises ValueError: When operation PreparedStatement belongs to a different Cursor instance. :raises TypeError: When parameters is not List or Tuple. :raises ProgrammingError: When more parameters than expected are supplied. :raises DatabaseError: When error is returned by server. """ if is_dead_proxy(self._ps): self._ps = None if self._ps != None: self._ps.close() if not self._transaction.active: self._transaction.begin() if isinstance(operation, PreparedStatement): if operation.cursor is not self: raise ValueError("PreparedStatement was created by different Cursor.") self._ps = weakref.proxy(operation, _weakref_callback(self.__ps_deleted)) else: self._ps = PreparedStatement(operation, self, True) self._ps._execute(parameters) # return self so `execute` call could be used as iterable return self def prep(self, operation): """Create prepared statement for repeated execution. .. note:: Returned :class:`PreparedStatement` instance is bound to its Cursor instance via strong reference, and is not stored in Cursor's internal cache of prepared statements. :param string operation: SQL command :returns: :class:`PreparedStatement` instance. :raises DatabaseError: When error is returned by server. :raises InternalError: On unexpected processing condition. """ if not self._transaction.active: self._transaction.begin() return PreparedStatement(operation, self, False) def executemany(self, operation, seq_of_parameters): """Prepare a database operation (query or command) and then execute it against all parameter sequences or mappings found in the sequence `seq_of_parameters`. .. note:: This function simply calls :meth:`execute` in a loop, feeding it with parameters from `seq_of_parameters`.
Because `execute` caches `PreparedStatements`, calling `executemany` is equally effective as direct use of a prepared statement and calling `execute` in a loop directly in the application. :returns: self, so call to executemany could be used as iterator. :param operation: SQL command specification. :type operation: string or :class:`PreparedStatement` instance :param seq_of_parameters: Sequence of sequences of parameters. Must contain one sequence of parameters for each execution that has one entry for each argument that the operation expects. :type seq_of_parameters: List or Tuple :raises ValueError: When operation PreparedStatement belongs to a different Cursor instance. :raises TypeError: When seq_of_parameters is not List or Tuple. :raises ProgrammingError: When there are more parameters in any sequence than expected. :raises DatabaseError: When error is returned by server. """ if not isinstance(operation,PreparedStatement): operation = self.prep(operation) for parameters in seq_of_parameters: self.execute(operation, parameters) return self def fetchone(self): """Fetch the next row of a query result set. :returns: tuple of returned values, or None when no more data is available. :raises DatabaseError: When error is returned by server. :raises ProgrammingError: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ if self._ps: return self._ps._fetchone() else: raise ProgrammingError("Cannot fetch from this cursor because" " it has not executed a statement.") def fetchmany(self, size=arraysize): """Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched.
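As its docstring notes, `executemany` is just `execute` in a loop. The sketch below reproduces that behavior against a recording stub; `RecordingCursor` is hypothetical and logs calls instead of talking to a server:

```python
# Demonstrates the documented executemany contract: one execute() call
# per parameter sequence, returning self.
class RecordingCursor(object):
    def __init__(self):
        self.calls = []

    def execute(self, operation, parameters=None):
        self.calls.append((operation, parameters))
        return self

    def executemany(self, operation, seq_of_parameters):
        for parameters in seq_of_parameters:
            self.execute(operation, parameters)
        return self
```

In real fdb the loop is cheap because `execute` reuses the cached `PreparedStatement` for a repeated operation string.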
The method does try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned. :param integer size: Max. number of rows to fetch. :returns: List of tuples, where each tuple is one row of returned values. :raises DatabaseError: When error is returned by server. :raises ProgrammingError: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ i = 0 result = [] while i < size: row = self.fetchone() if row: result.append(row) i += 1 else: return result return result def fetchall(self): """Fetch all (remaining) rows of a query result. :returns: List of tuples, where each tuple is one row of returned values. :raises DatabaseError: When error is returned by server. :raises ProgrammingError: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ return [row for row in self] def fetchonemap(self): """Fetch the next row of a query result set like :meth:`fetchone`, except that it returns a mapping of field name to field value, rather than a tuple. :returns: :class:`fbcore._RowMapping` of returned values, or None when no more data is available. :raises DatabaseError: When error is returned by server. :raises ProgrammingError: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ row = self.fetchone() if row: row = _RowMapping(self.description, row) return row def fetchmanymap(self, size=arraysize): """Fetch the next set of rows of a query result, like :meth:`fetchmany`, except that it returns a list of mappings of field name to field value, rather than a list of tuples. :param integer size: Max. number of rows to fetch. :returns: List of :class:`fbcore._RowMapping` instances, one such instance for each row. 
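`fetchonemap` wraps a fetched row in a mapping of field name to field value built from `description`. A simplified stand-in for `fbcore._RowMapping` (assuming plain `dict` semantics; the real class is a specialized mapping):

```python
# Each description entry is a 7-item sequence whose first item is the
# column name; pairing names with the row's values gives the mapping.
def row_to_map(description, row):
    return dict(zip((d[0] for d in description), row))
```

The `fetchmanymap`/`fetchallmap` variants simply apply the same wrapping to each row.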
:raises DatabaseError: When error is returned by server. :raises ProgrammingError: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ i = 0 result = [] while i < size: row = self.fetchonemap() if row: result.append(row) i += 1 else: return result return result def fetchallmap(self): """Fetch all (remaining) rows of a query result like :meth:`fetchall`, except that it returns a list of mappings of field name to field value, rather than a list of tuples. :returns: List of :class:`fbcore._RowMapping` instances, one such instance for each row. :raises DatabaseError: When error is returned by server. :raises ProgrammingError: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ return [row for row in self.itermap()] def iter(self): """Equivalent to the :meth:`fetchall`, except that it returns an iterator rather than a materialized list. :returns: Iterator that yields tuple of values like :meth:`fetchone`. """ return self def itermap(self): """Equivalent to the :meth:`fetchallmap`, except that it returns an iterator rather than a materialized list. :returns: Iterator that yields :class:`fbcore._RowMapping` instance like :meth:`fetchonemap`. """ return utils.Iterator(self.fetchonemap, None) def setinputsizes(self, sizes): """Required by Python DB API 2.0, but pointless for Firebird, so it does nothing.""" pass def setoutputsize(self, size, column=None): """Required by Python DB API 2.0, but pointless for Firebird, so it does nothing.""" pass def set_stream_blob(self,blob_name): """Specify BLOB column(s) to work in `stream` mode instead of classic, materialized mode for an already executed statement. :param blob_name: Single name or sequence of column names. Name must be in the format as it's stored in the database (refer to :attr:`description` for real value). :type blob_name: string or sequence ..
important:: BLOB name is **permanently** added to the list of BLOBs handled as `stream` BLOBs by current :class:`PreparedStatement` instance. If instance is stored in internal cache of prepared statements, the same command executed repeatedly will retain this setting. :raises ProgrammingError: When statement was not yet executed. """ if self._ps: self._ps.set_stream_blob(blob_name) else: raise ProgrammingError def __del__(self): self.close() #: (Read Only) Sequence of 7-item sequences. #: Each of these sequences contains information describing one result column: #: (name, type_code, display_size,internal_size, precision, scale, null_ok) #: #: If cursor doesn't have a prepared statement, the value is None. description = property(__get_description) #: (Read Only) (integer) Specifies the number of rows that the last executeXXX() #: produced (for DQL statements like select) or affected (for DML statements #: like update or insert ). #: #: The attribute is -1 in case no executeXXX() has been performed on the cursor #: or the rowcount of the last operation is not determinable by the interface. #: #: .. note:: #: #: The database engine's own support for the determination of #: “rows affected”/“rows selected” is quirky. The database engine only #: supports the determination of rowcount for INSERT, UPDATE, DELETE, #: and SELECT statements. When stored procedures become involved, row #: count figures are usually not available to the client. Determining #: rowcount for SELECT statements is problematic: the rowcount is reported #: as zero until at least one row has been fetched from the result set, #: and the rowcount is misreported if the result set is larger than 1302 #: rows. The server apparently marshals result sets internally in batches #: of 1302, and will misreport the rowcount for result sets larger than #: 1302 rows until the 1303rd row is fetched, result sets larger than 2604 #: rows until the 2605th row is fetched, and so on, in increments of 1302.
rowcount = property(__get_rowcount) #: (Read/Write) (string) Name for the SQL cursor. This property can be #: ignored entirely if you don’t need to use it. name = property(__get_name, __set_name) #: (Read Only) (string) A string representation of the execution plan #: for last executed statement generated by the database engine’s optimizer. #: `None` if no statement was executed. plan = property(__get_plan) #: (Read Only) (:class:`Connection`) PEP 249 Extension. #: Reference to the :class:`Connection` object on which the cursor was created. connection = property(__get_connection) #: (Read Only) (:class:`Transaction`) #: Reference to the :class:`Transaction` object on which the cursor was created. transaction = property(__get_transaction) class Transaction(object): """Represents a transaction context, which is used to execute SQL statements. .. important:: DO NOT create instances of this class directly! :class:`Connection` and :class:`ConnectionGroup` manage Transaction internally, surfacing all important methods directly in their interfaces. If you want additional transactions independent from :attr:`Connection.main_transaction`, use the :meth:`Connection.trans` method to obtain such a `Transaction` instance. """ #: (Read/Write) Transaction Parameter Block. default_tpb = ISOLATION_LEVEL_READ_COMMITED #: (Read/Write) Default action on active transaction when it's closed. #: Accepted values: commit, rollback default_action = 'commit' def __init__(self, connections, default_tpb=None, default_action='commit'): """ :param iterable connections: Sequence of (up to 16) :class:`Connection` instances. :param default_tpb: Transaction Parameter Block for this transaction. If `None` is specified, uses `ISOLATION_LEVEL_READ_COMMITED`. :type default_tpb: :class:`TPB` instance, list/tuple of `isc_tpb_*` constants or `bytestring` :param default_action: Action taken when active transaction is ended automatically (during :meth:`close` or :meth:`begin`).
:type default_action: string 'commit' or 'rollback' :raises ProgrammingError: When zero or more than 16 connections are given. """ if len(connections) > 16: raise ProgrammingError("Transaction can't accept more than 16 Connections") elif len(connections) == 0: raise ProgrammingError("Transaction requires at least one Connection") self._connections = [weakref.ref(c) for c in connections] self.__python_charset = connections[0]._python_charset if default_tpb == None: self.default_tpb = ISOLATION_LEVEL_READ_COMMITED else: self.default_tpb = default_tpb self.default_action = default_action self._cursors = [] # Weak references to cursors self._isc_status = ISC_STATUS_ARRAY() self._tr_handle = None self.__closed = False def __enter__(self): return self def __exit__(self, *args): self.close() def __get_closed(self): return self.__closed #return self._tr_handle == None def __get_active(self): return self._tr_handle != None def __get_cursors(self): return [x() for x in self._cursors] def __check_active(self): if not self.active: raise ProgrammingError("Transaction object is not active") def __close_cursors(self): for cursor in self._cursors: c = cursor() if c: c.close() def __con_in_list(self,connection): for con in self._connections: if con() == connection: return True return False def __get_default_action(self): return self.__default_action def __set_default_action(self,action): action = action.lower() if not action in ('commit','rollback'): raise ProgrammingError("Transaction's default action must be either" " 'commit' or 'rollback'.") else: self.__default_action = action def __get_transaction_id(self): return self.trans_info(isc_info_tra_id) def __get_oit(self): return self.trans_info(isc_info_tra_oldest_interesting) def __get_oat(self): return self.trans_info(isc_info_tra_oldest_active) def __get_ost(self): return self.trans_info(isc_info_tra_oldest_snapshot) def __get_isolation(self): return self.trans_info(isc_info_tra_isolation) def __get_lock_timeout(self): return
self.trans_info(isc_info_tra_lock_timeout) def execute_immediate(self, sql): """Executes a statement without caching its prepared form on **all connections** this transaction is bound to. Automatically starts transaction if it's not already started. :param string sql: SQL statement to execute. .. important:: **The statement must not be of a type that returns a result set.** In most cases (especially cases in which the same statement – perhaps a parameterized statement – is executed repeatedly), it is better to create a cursor using the connection’s cursor method, then execute the statement using one of the cursor’s execute methods. :raises DatabaseError: When error is returned from server. """ if not self.active: self.begin() for connection in self._connections: con = connection() sql = b(sql, con._python_charset) api.isc_execute_immediate(self._isc_status, con._db_handle, self._tr_handle, ctypes.c_short(len(sql)), sql) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while executing SQL statement:") def _finish(self): if self._tr_handle != None: try: if self.default_action == 'commit': self.commit() else: self.rollback() except Exception as e: self._tr_handle = None raise e def begin(self, tpb=None): """Starts a transaction explicitly. :param tpb: (optional) Transaction Parameter Block for newly created Transaction. If not specified, :attr:`default_tpb` is used. :type tpb: :class:`TPB` instance, list/tuple of `isc_tpb_*` constants or `bytestring` .. note:: Calling this method directly is never required; a transaction will be started implicitly if necessary. .. important:: If the physical transaction is unresolved when this method is called, a :meth:`commit` or :meth:`rollback` will be performed first, according to :attr:`default_action` value. :raises DatabaseError: When error is returned by server.
:raises ProgrammingError: When TPB is in unsupported format, or transaction is permanently :attr:`closed`. """ if self.__closed: raise ProgrammingError("Transaction is permanently closed.") self._finish() # Make sure that previous transaction (if any) is ended self._tr_handle = isc_tr_handle(0) _tpb = tpb if tpb else self.default_tpb if isinstance(_tpb, TPB): _tpb = _tpb.render() elif isinstance(_tpb, (ListType, TupleType)): _tpb = bs(_tpb) elif not isinstance(_tpb, mybytes): raise ProgrammingError("TPB must be either string, list/tuple of" " numeric constants or TPB instance.") ### Todo: verify handling of Python version differences if PYTHON_MAJOR_VER == 3: if int2byte(_tpb[0]) != bs([isc_tpb_version3]): _tpb = bs([isc_tpb_version3]) + _tpb else: if _tpb[0] != bs([isc_tpb_version3]): _tpb = bs([isc_tpb_version3]) + _tpb if len(self._connections) == 1: api.isc_start_transaction(self._isc_status, self._tr_handle, 1, self._connections[0]()._db_handle, len(_tpb), _tpb) if db_api_error(self._isc_status): self._tr_handle = None raise exception_from_status(DatabaseError, self._isc_status, "Error while starting transaction:") elif len(self._connections) > 1: cnum = len(self._connections) teb_array = tebarray_factory(cnum) for i in xrange(cnum): teb_array[i].db_ptr = ctypes.pointer(self._connections[i]()._db_handle) teb_array[i].tpb_len = len(_tpb) teb_array[i].tpb_ptr = _tpb api.isc_start_multiple(self._isc_status, self._tr_handle, cnum, teb_array ) if db_api_error(self._isc_status): self._tr_handle = None raise exception_from_status(DatabaseError, self._isc_status, "Error while starting transaction:") def commit(self, retaining=False): """Commit any pending transaction to the database. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction.
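The TPB normalization that `begin()` performs can be sketched in plain Python 3: a list/tuple of `isc_tpb_*` codes becomes a byte string, and the version marker is prepended when missing. `normalize_tpb` is an illustration, not fdb code, and it assumes `isc_tpb_version3` has the value 3 (its value in the Firebird C API headers):

```python
ISC_TPB_VERSION3 = 3  # assumed value of Firebird's isc_tpb_version3

def normalize_tpb(tpb):
    # list/tuple of numeric constants -> raw byte string
    if isinstance(tpb, (list, tuple)):
        tpb = bytes(tpb)
    if not isinstance(tpb, bytes):
        raise TypeError("TPB must be bytes or a list/tuple of numeric constants")
    # ensure the buffer starts with the version-3 marker byte
    if tpb[:1] != bytes([ISC_TPB_VERSION3]):
        tpb = bytes([ISC_TPB_VERSION3]) + tpb
    return tpb
```

The normalized buffer is what gets handed to `isc_start_transaction` (or, for multiple connections, placed in each TEB entry for `isc_start_multiple`).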
:param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. :raises DatabaseError: When error is returned by server as response to commit. """ if not self.active: return if retaining: api.isc_commit_retaining(self._isc_status, self._tr_handle) else: self.__close_cursors() api.isc_commit_transaction(self._isc_status, self._tr_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while committing transaction:") if not retaining: self._tr_handle = None def rollback(self, retaining=False, savepoint=None): """Rollback any pending transaction to the database. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. Mutually exclusive with `savepoint`. :param string savepoint: Savepoint name. Causes the transaction to roll back only as far as the designated savepoint, rather than rolling back entirely. Mutually exclusive with `retaining`. :raises ProgrammingError: If both `savepoint` and `retaining` are specified. :raises DatabaseError: When error is returned by server as response to rollback.
""" if not self.active: return if retaining and savepoint: raise ProgrammingError("Can't rollback to savepoint while" " retaining context") if savepoint: self.execute_immediate('rollback to %s' % savepoint) else: if retaining: api.isc_rollback_retaining(self._isc_status, self._tr_handle) else: self.__close_cursors() api.isc_rollback_transaction(self._isc_status, self._tr_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while rolling back transaction:") if not retaining: self._tr_handle = None def close(self): """Permanently closes the Transaction object and severs its associations with other objects (:class:`Cursor` and :class:`Connection` instances). .. important:: If the physical transaction is unresolved when this method is called, a :meth:`commit` or :meth:`rollback` will be performed first, according to :attr:`default_action` value. """ exc = None try: self._finish() except Exception as e: exc = e del self._cursors[:] del self._connections[:] self.__closed = True if exc: raise exc def savepoint(self, name): """Establishes a savepoint with the specified name. .. note:: If transaction is bound to multiple connections, savepoint is created on all of them. .. important:: Because savepoint is created not through Firebird API (there is no such API call), but by executing `SAVEPOINT <name>` SQL statement, calling this method starts the transaction if it was not yet started. :param string name: Savepoint name. """ self.execute_immediate('SAVEPOINT %s' % name) def cursor(self,connection = None): """Creates a new :class:`Cursor` that will operate in the context of this Transaction. :param connection: **Required only when** Transaction is bound to multiple `Connections`, to specify to which `Connection` the returned Cursor should be bound.
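Since savepoints are implemented with plain SQL rather than a Firebird API call, the statements that `savepoint()` and `rollback(savepoint=...)` hand to `execute_immediate` can be shown directly (illustrative helpers, not fdb functions):

```python
# The exact interpolation used by Transaction.savepoint and
# Transaction.rollback(savepoint=...) in the code above.
def savepoint_sql(name):
    return 'SAVEPOINT %s' % name

def rollback_to_savepoint_sql(name):
    return 'rollback to %s' % name
```

Note the name is interpolated verbatim, so it must be a valid SQL identifier supplied by the application, not untrusted input.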
:type connection: :class:`Connection` instance :raises ProgrammingError: When transaction operates on multiple `Connections` and: `connection` parameter is not specified, or specified `connection` is not among `Connections` this Transaction is bound to. """ if len(self._connections) > 1: if not connection: raise ProgrammingError("Transaction.cursor on multiple connections" " requires 'connection' parameter") if not self.__con_in_list(connection): raise ProgrammingError("Transaction.cursor connection not in" " list of connections for this transaction") con = connection else: con = self._connections[0]() c = Cursor(con, self) self._cursors.append(weakref.ref(c, _cursor_weakref_callback(self))) return c def trans_info(self, request): """Pythonic wrapper around :meth:`transaction_info` call. :param request: One or more information request codes (see :meth:`transaction_info` for details). Multiple codes must be passed as tuple. :returns: Decoded response(s) for specified request code(s). When multiple requests are passed, returns a dictionary where key is the request code and value is the response from server. """ # We process request as a sequence of info codes, even if only one code # was supplied by the caller. requestIsSingleton = isinstance(request, int) if requestIsSingleton: request = (request,) results = {} for infoCode in request: # The global().get(...) workaround is here because only recent # versions of FB expose constant isc_info_tra_isolation: if infoCode == globals().get('isc_info_tra_isolation', -1): buf = self.transaction_info(infoCode, 's') buf = buf[1 + struct.calcsize('h'):] if len(buf) == 1: results[infoCode] = bytes_to_uint(buf) else: # For isolation level isc_info_tra_read_committed, the # first byte indicates the isolation level # (isc_info_tra_read_committed), while the second indicates # the record version flag (isc_info_tra_rec_version or # isc_info_tra_no_rec_version). 
isolationLevelByte, recordVersionByte = struct.unpack('cc', buf) isolationLevel = bytes_to_uint(isolationLevelByte) recordVersion = bytes_to_uint(recordVersionByte) results[infoCode] = (isolationLevel, recordVersion) else: # At the time of this writing (2006.02.09), # isc_info_tra_isolation is the only known return value of # isc_transaction_info that's not a simple integer. results[infoCode] = self.transaction_info(infoCode, 'i') if requestIsSingleton: return results[request[0]] else: return results def transaction_info(self, info_code, result_type): """Return information about active transaction. This is very thin wrapper around Firebird API `isc_transaction_info` call. :param integer info_code: One from the `isc_info_tra_*` constants. :param result_type: Code for result type. :type result_type: string 's' or 'i' :raises ProgrammingError: If transaction is not active. :raises OperationalError: When result is too large to fit into buffer of size SHRT_MAX. :raises InternalError: On unexpected processing condition. :raises ValueError: When illegal result type code is specified. 
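The truncation handling described above (retry with a doubled buffer until the result fits, capped at SHRT_MAX) can be sketched in isolation. This is a standalone illustration, not fdb code; `fetch` is a hypothetical callable standing in for `isc_transaction_info`:

```python
SHRT_MAX = 32767  # hard cap used in this sketch (assumption)

def query_info(fetch, buf_size=256, cap=SHRT_MAX):
    """Call fetch(size) -> (data, truncated); grow the buffer on truncation."""
    while True:
        data, truncated = fetch(buf_size)
        if not truncated:
            return data
        if buf_size >= cap:
            # Even the largest permitted buffer is too small.
            raise OverflowError("result does not fit into cap-sized buffer")
        buf_size = min(buf_size * 2, cap)

# Example: a fake info source whose full answer needs 1000 bytes.
def fake_fetch(size, payload=b"x" * 1000):
    return (payload[:size], len(payload) > size)
```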
""" self.__check_active() request_buffer = bs([info_code]) buf_size = 256 while True: res_buf = int2byte(0) * buf_size api.isc_transaction_info(self._isc_status, self._tr_handle, len(request_buffer), request_buffer, len(res_buf), res_buf) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while requesting transaction information:") i = buf_size - 1 while i >= 0: if res_buf[i] != mychr(0): break else: i -= 1 if ord2(res_buf[i]) == isc_info_truncated: if buf_size < SHRT_MAX: buf_size *= 2 if buf_size > SHRT_MAX: buf_size = SHRT_MAX continue else: raise OperationalError("Result is too large to fit into" " buffer of size SHRT_MAX, yet underlying info" " function only accepts buffers with size <= SHRT_MAX.") else: break if ord2(res_buf[i]) != isc_info_end: raise InternalError("Exited request loop sucessfuly, but" " res_buf[i] != sc_info_end.") if request_buffer[0] != res_buf[0]: raise InternalError("Result code does not match request code.") if result_type.upper() == 'I': return bytes_to_int(res_buf[3:3 + bytes_to_int(res_buf[1:3])]) elif result_type.upper() == 'S': return p3fix(ctypes.string_at(res_buf, i),self.__python_charset) else: raise ValueError("Unknown result type requested (must be 'i'" "or 's').") def prepare(self): """Manually triggers the first phase of a two-phase commit (2PC). .. note:: Direct use of this method is optional; if preparation is not triggered manually, it will be performed implicitly by `commit()` in a 2PC. """ self.__check_active() api.isc_prepare_transaction(self._isc_status, self._tr_handle) if db_api_error(self._isc_status): self.rollback() raise exception_from_status(DatabaseError, self._isc_status, "Error while preparing transaction:") def __del__(self): if self._tr_handle != None: self.close() def isreadonly(self): "Returns True if transaction is Read Only." 
return self.trans_info(isc_info_tra_access) == isc_info_tra_readonly; #: (Read Only) (int) Internal ID (server-side) for transaction. transaction_id = property(__get_transaction_id) #: (Read Only) True if transaction is closed. closed = property(__get_closed) #: (Read Only) True if transaction is active. active = property(__get_active) #: (Read Only) List of :class:`Cursor` objects associated with this Transaction. cursors = property(__get_cursors) #: (Read/Write) (string) 'commit' or 'rollback', action to be #: taken when physical transaction has to be ended automatically. #: **Default is 'commit'**. default_action = property(__get_default_action,__set_default_action) #: (Read Only) (int) ID of Oldest Interesting Transaction when this transaction started. oit = property(__get_oit) #: (Read Only) (int) ID of Oldest Active Transaction when this transaction started. oat = property(__get_oat) #: (Read Only) (int) ID of Oldest Snapshot Transaction when this transaction started. ost = property(__get_ost) #: (Read Only) (int) or (tuple) Isolation level code [isc_info_tra_consistency, # isc_info_tra_concurrency or isc_info_tra_read_committed]. For `isc_info_tra_read_committed` # return tuple where first item is `isc_info_tra_read_committed` and second one is # [isc_info_tra_no_rec_version or isc_info_tra_rec_version] isolation = property(__get_isolation) #: (Read Only) (int) Lock timeout (seconds or -1 for unlimited). lock_timeout = property(__get_lock_timeout) class ConnectionGroup(object): """Manager for distributed transactions, i.e. transactions that span multiple databases. .. tip:: ConnectionGroup supports `in` operator to check membership of connections. """ # XXX: ConnectionGroup objects currently are not thread-safe. Since # separate Connections can be manipulated simultaneously by different # threads in kinterbasdb, it would make sense for a container of multiple # connections to be safely manipulable simultaneously by multiple threads. 
# XXX: Adding two connections to the same database freezes the DB client # library. However, I've no way to detect with certainty whether any given # con1 and con2 are connected to the same database, what with database # aliases, IP host name aliases, remote-vs-local protocols, etc. # Therefore, a warning must be added to the docs. def __init__(self, connections=()): """ :param iterable connections: Sequence of :class:`Connection` instances. .. seealso:: See :meth:`add` for list of exceptions the constructor may throw. """ self._cons = [] self._transaction = None self._default_tpb = ISOLATION_LEVEL_READ_COMMITED for con in connections: self.add(con) def __del__(self): self.disband() def __get_default_tpb(self): return self._default_tpb def __set_default_tpb(self, value): self._default_tpb = _validateTPB(value) def disband(self): """Forcefully deletes all connections from connection group. .. note:: If transaction is active, it's canceled (**rollback**). .. note:: Any error during transaction finalization doesn't stop the disband process, however the exception caught is eventually re-raised. """ exc = None if self._transaction: try: self._transaction.default_action = 'rollback' self._transaction.close() except Exception as e: exc = e self._transaction = None self.clear() if exc: raise exc # Membership methods: def add(self, con): """Adds active connection to the group. :param con: A :class:`Connection` instance to add to this group. :raises TypeError: When `con` is not :class:`Connection` instance. :raises ProgrammingError: When `con` is already member of this or another group, or :attr:`~Connection.closed`. When this group has unresolved transaction or contains 16 connections.
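For illustration only, the membership rules listed above can be modelled without fdb (a toy sketch; `GroupSketch` and its members are hypothetical): a connection may join only once, and the group refuses to grow past the engine limit.

```python
DIST_TRANS_MAX_DATABASES = 16  # engine limit cited in the docstring above

class GroupSketch:
    """Toy model of the membership constraints enforced by add()."""
    def __init__(self):
        self.members = []

    def add(self, con):
        if con in self.members:
            raise ValueError("already a member of this group")
        if len(self.members) >= DIST_TRANS_MAX_DATABASES:
            raise ValueError("distributed transaction limit reached")
        self.members.append(con)
```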
""" ### CONTRAINTS ON $con: ### # con must be an instance of kinterbasdb.Connection: if not isinstance(con, Connection): raise TypeError("con must be an instance of fdb.Connection") # con cannot already be a member of this group: if con in self: raise ProgrammingError("con is already a member of this group.") # con cannot belong to more than one group at a time: if con.group: raise ProgrammingError("con is already a member of another group;" " it cannot belong to more than one group at once.") # con must be connected to a database; it must not have been closed. if con.closed: raise ProgrammingError("con has been closed; it cannot join a group.") #if con._timeout_enabled: #raise ProgrammingError('Connections with timeout enabled cannot' #' participate in distributed transactions.') ### CONTRAINTS ON $self: ### # self cannot accept new members while self has an unresolved # transaction: self.__require_transaction_state(False, "Cannot add connection to group that has an unresolved transaction.") self.__drop_transaction() # self cannot have more than DIST_TRANS_MAX_DATABASES members: if self.count() >= DIST_TRANS_MAX_DATABASES: raise ProgrammingError("The database engine limits the number of" " database handles that can participate in a single" " distributed transaction to %d or fewer; this group already" " has %d members." % (DIST_TRANS_MAX_DATABASES, self.count()) ) ### CONTRAINTS FINISHED ### # Can't set con.group directly (read-only); must use package-private # method. con._set_group(self) self._cons.append(con) def remove(self, con): """Removes specified connection from group. :param con: A :class:`Connection` instance to remove. :raises ProgrammingError: When `con` doesn't belong to this group or transaction is active. 
""" if con not in self: raise ProgrammingError("con is not a member of this group.") assert con.group is self self.__require_transaction_state(False, "Cannot remove connection from group that has an unresolved transaction.") self.__drop_transaction() con._set_group(None) self._cons.remove(con) def clear(self): """Removes all connections from group. :raises ProgrammingError: When transaction is active. """ self.__require_transaction_state(False, "Cannot clear group that has an unresolved transaction.") self.__drop_transaction() for con in self.members(): self.remove(con) assert self.count() == 0 def cursor(self, connection): """Creates a new :class:`Cursor` that will operate in the context of distributed transaction and specific :class:`Connection` that belongs to this group. .. note:: Automatically starts transaction if it's not already started. :param connection: :class:`Connection` instance. :raises ProgrammingError: When group is empty or specified `connection` doesn't belong to this group. """ if not self._transaction: self.__require_non_empty_group('start') self._transaction = Transaction(self._cons) return self._transaction.cursor(connection) def members(self): "Returns list of connection objects that belong to this group." return self._cons[:] # return a *copy* of the internal list def count(self): "Returns number of :class:`Connection` objects that belong to this group." return len(self._cons) def contains(self, con): """Returns True if specified connection belong to this group. :param con: :class:`Connection` instance. 
""" return con in self._cons __contains__ = contains # alias to support the 'in' operator def __iter__(self): return iter(self._cons) def __drop_transaction(self): if self._transaction: self._transaction.close() self._transaction = None def __require_transaction_state(self, must_be_active, err_msg=''): transaction = self._transaction if ( (must_be_active and transaction is None) or (not must_be_active and (transaction is not None and transaction.active)) ): raise ProgrammingError(err_msg) def __require_non_empty_group(self, operation_name): if self.count() == 0: raise ProgrammingError("Cannot %s distributed transaction with" " an empty ConnectionGroup." % operation_name) def __ensure_transaction(self): if not self._transaction: self.__require_non_empty_group('start') self._transaction = Transaction(self._cons, default_tpb=self.default_tpb) # Transactional methods: def execute_immediate(self, sql): """Executes a statement on all member connections without caching its prepared form. Automatically starts transaction if it's not already started. :param string sql: SQL statement to execute. .. important:: **The statement must not be of a type that returns a result set.** In most cases (especially cases in which the same statement – perhaps a parameterized statement – is executed repeatedly), it is better to create a cursor using the connection’s cursor method, then execute the statement using one of the cursor’s execute methods. :param string sql: SQL statement to execute. :raises DatabaseError: When error is returned from server. """ self.__ensure_transaction() self._transaction.execute_immediate(sql) def begin(self, tpb=None): """Starts distributed transaction over member connections. :param tpb: (Optional) Transaction Parameter Buffer for newly started transaction. If not specified, :attr:`default_tpb` is used. :type tpb: :class:`TPB` instance, list/tuple of `isc_tpb_*` constants or `bytestring` :raises ProgrammingError: When group is empty or has active transaction. 
""" self.__require_transaction_state(False, "Must resolve current transaction before starting another.") self.__ensure_transaction() self._transaction.begin(tpb) def savepoint(self, name): """Establishes a named SAVEPOINT on all member connections. See :meth:`Transaction.savepoint` for details. :param string name: Name for savepoint. :raises ProgrammingError: When group is empty. """ self.__require_non_empty_group('savepoint') return self._transaction.savepoint(name) def prepare(self): """Manually triggers the first phase of a two-phase commit (2PC). Use of this method is optional; if preparation is not triggered manually, it will be performed implicitly by commit() in a 2PC.""" self.__require_non_empty_group('prepare') self.__require_transaction_state(True,"This group has no transaction to prepare.") self._transaction.prepare() def commit(self, retaining=False): """Commits distributed transaction over member connections using 2PC. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. :raises ProgrammingError: When group is empty. """ self.__require_non_empty_group('commit') # The consensus among Python DB API experts is that transactions should # always be started implicitly, even if that means allowing a commit() # or rollback() without an actual transaction. if self._transaction is None: return self._transaction.commit(retaining) def rollback(self, retaining=False, savepoint=None): """Rollbacks distributed transaction over member connections. .. 
note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. :raises ProgrammingError: When group is empty. """ self.__require_non_empty_group('rollback') # The consensus among Python DB API experts is that transactions should # always be started implicitly, even if that means allowing a commit() # or rollback() without an actual transaction. if self._transaction is None: return self._transaction.rollback(retaining, savepoint) #: (Read/Write) Default Transaction Parameter Block used for transactions. default_tpb = property(__get_default_tpb, __set_default_tpb) class BlobReader(object): """BlobReader is a "file-like" class, so it acts much like a file instance opened in `rb` mode. .. important:: DO NOT create instances of this class directly! BlobReader instances are returned automatically in place of output BLOB values when `stream` BLOB access is requested via :meth:`PreparedStatement.set_stream_blob`. .. tip:: BlobReader supports iterator protocol, yielding lines like :meth:`readline`.
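For illustration only, here is a standalone sketch (no Firebird involved; all names hypothetical) of the segment-buffered reading technique this class uses internally: the server hands out BLOB data in segments of bounded size, and `read()` copies bytes out of an internal one-segment buffer that is refilled on demand.

```python
# Standalone sketch of segment-buffered BLOB reading.
class SegmentedReader:
    def __init__(self, data, segment_size=8):
        self._data = data          # stands in for the server-side BLOB
        self._seg = segment_size   # max bytes delivered per "segment"
        self._src_pos = 0          # position in the underlying BLOB
        self._buf = b""            # internal one-segment buffer
        self._buf_pos = 0          # read position within the buffer

    def _fill(self):
        # Analogue of fetching the next segment from the server.
        self._buf = self._data[self._src_pos:self._src_pos + self._seg]
        self._src_pos += len(self._buf)
        self._buf_pos = 0

    def read(self, size=-1):
        out = bytearray()
        while size < 0 or len(out) < size:
            if self._buf_pos >= len(self._buf):
                self._fill()
                if not self._buf:
                    break          # BLOB EOF
            avail = len(self._buf) - self._buf_pos
            want = avail if size < 0 else min(avail, size - len(out))
            out += self._buf[self._buf_pos:self._buf_pos + want]
            self._buf_pos += want
        return bytes(out)
```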
""" def __init__(self, blobid, db_handle, tr_handle, is_text, charset): self.__closed = False self.__mode = 'rb' self.__bytes_read = 0 self.__pos = 0 self.__index = 0 #self.__bstream = api.Bopen(blobid, db_handle, tr_handle, self.__mode) self.__db_handle = db_handle self.__tr_handle = tr_handle self.__is_text = is_text self.__charset = charset self.__python_charset = charset_map.get(charset,charset) self.__blobid = blobid self.__opened = False self._blob_handle = isc_blob_handle() self._isc_status = ISC_STATUS_ARRAY() def __ensure_open(self): if self.closed: raise ProgrammingError("BlobReader is closed.") if not self.__opened: self.__open() def __open(self): api.isc_open_blob2(self._isc_status, self.__db_handle, self.__tr_handle, self._blob_handle, self.__blobid, 4, bs([ibase.isc_bpb_version1, ibase.isc_bpb_type,1, ibase.isc_bpb_type_stream])) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_open_blob2:") # Get BLOB total length and max. 
size of segment result = ctypes.cast(ctypes.create_string_buffer(20), buf_pointer) api.isc_blob_info(self._isc_status, self._blob_handle, 2, bs([isc_info_blob_total_length, isc_info_blob_max_segment]), 20, result) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_blob_info:") offset = 0 while bytes_to_uint(result[offset]) != isc_info_end: code = bytes_to_uint(result[offset]) offset += 1 if code == isc_info_blob_total_length: length = bytes_to_uint(result[offset:offset + 2]) self._blob_length = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_max_segment: length = bytes_to_uint(result[offset:offset + 2]) self._segment_size = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 # Create internal buffer self.__buf = ctypes.create_string_buffer(self._segment_size) self.__buf_pos = 0 self.__buf_data = 0 self.__opened = True def __reset_buffer(self): ctypes.memset(self.__buf,0,self._segment_size) self.__buf_pos = 0 self.__buf_data = 0 def __BLOB_get(self): self.__reset_buffer() # Load BLOB allow_incomplete_segment_read = False status = ISC_STATUS(0) bytes_read = 0 bytes_actually_read = ctypes.c_ushort(0) status = api.isc_get_segment(self._isc_status, self._blob_handle, bytes_actually_read, self._segment_size, ctypes.byref(self.__buf)) if status != 0: if status == ibase.isc_segstr_eof: self.__buf_data = 0 elif ((status == isc_segment) and allow_incomplete_segment_read): self.__buf_data = bytes_actually_read.value else: raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.__BLOB_get/isc_get_segment:") else: self.__buf_data = bytes_actually_read.value def close(self): """Closes the Reader. Like :meth:`file.close`. :raises DatabaseError: When error is returned by server. 
""" if self.__opened and not self.closed: self.__closed = True api.isc_close_blob(self._isc_status, self._blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.close/isc_close_blob:") def flush(self): """Flush the internal buffer. Like :meth:`file.flush`. Does nothing as it's pointless for reader.""" pass def next(self): """Return the next line from the BLOB. Part of *iterator protocol*. :raises StopIteration: If there are no further lines. """ line = self.readline() if line: return line else: raise StopIteration __next__ = next def __iter__(self): return self def read(self, size = -1): """Read at most size bytes from the file (less if the read hits EOF before obtaining size bytes). If the size argument is negative or omitted, read all data until EOF is reached. The bytes are returned as a string object. An empty string is returned when EOF is encountered immediately. Like :meth:`file.read`. :raises ProgrammingError: When reader is closed. .. note:: Performs automatic conversion to `unicode` for TEXT BLOBs, if used Python is v3 or `connection charset` is defined. """ self.__ensure_open() if size >= 0: to_read = min(size,self._blob_length - self.__pos) else: to_read = self._blob_length - self.__pos return_size = to_read result = ctypes.create_string_buffer(return_size) pos = 0 while to_read > 0: to_copy = min(to_read, self.__buf_data - self.__buf_pos) if to_copy == 0: self.__BLOB_get() to_copy = min(to_read, self.__buf_data - self.__buf_pos) if to_copy == 0: # BLOB EOF break ctypes.memmove(ctypes.byref(result,pos), ctypes.byref(self.__buf,self.__buf_pos), to_copy) pos += to_copy self.__pos += to_copy self.__buf_pos += to_copy to_read -= to_copy result = result.raw[:return_size] if (self.__charset or PYTHON_MAJOR_VER == 3) and self.__is_text: result = b2u(result,self.__python_charset) return result def readline(self): """Read one entire line from the file. 
A trailing newline character is kept in the string (but may be absent when a file ends with an incomplete line). An empty string is returned when EOF is encountered immediately. Like :meth:`file.readline`. :raises ProgrammingError: When reader is closed. .. note:: Performs automatic conversion to `unicode` for TEXT BLOBs, if used Python is v3 or `connection charset` is defined. """ self.__ensure_open() line = [] to_read = self._blob_length - self.__pos to_copy = 0 found = False while to_read > 0 and not found: to_scan = min(to_read, self.__buf_data - self.__buf_pos) if to_scan == 0: self.__BLOB_get() to_scan = min(to_read, self.__buf_data - self.__buf_pos) if to_scan == 0: # BLOB EOF break pos = 0 result = '' while pos < to_scan: if self.__buf[self.__buf_pos+pos] == ibase.b('\n'): found = True pos += 1 break pos += 1 result = ctypes.string_at(ctypes.byref(self.__buf,self.__buf_pos), pos) if (self.__charset or PYTHON_MAJOR_VER == 3) and self.__is_text: result = b2u(result,self.__python_charset) line.append(result) self.__buf_pos += pos self.__pos += pos to_read -= pos return ''.join(line) def readlines(self, sizehint = None): """Read until EOF using :meth:`readline` and return a list containing the lines thus read. The optional sizehint argument (if present) is ignored. Like :meth:`file.readlines`. .. note:: Performs automatic conversion to `unicode` for TEXT BLOBs, if used Python is v3 or `connection charset` is defined. """ result = [] line = self.readline() while line: result.append(line) line = self.readline() return result def seek(self, offset, whence = os.SEEK_SET): """Set the file's current position, like stdio's `fseek()`. See :meth:`file.seek` details. :param integer offset: Offset from specified position. :param whence: (Optional) Context for offset. :type whence: os.SEEK_SET, os.SEEK_CUR or os.SEEK_END :raises ProgrammingError: When reader is closed. .. warning:: If BLOB was NOT CREATED as `stream` BLOB, this method raises :exc:`DatabaseError` exception.
This constraint is set by Firebird. """ self.__ensure_open() pos = ISC_LONG(0) api.isc_seek_blob (self._isc_status, self._blob_handle, whence, ISC_LONG(offset), ctypes.byref(pos)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.seek/isc_blob_info:") self.__pos = pos.value self.__reset_buffer() def tell(self): """Return current position in BLOB, like stdio's `ftell()` and :meth:`file.tell`.""" return self.__pos def __get_closed(self): return self.__closed def __get_mode(self): return self.__mode def __del__(self): self.close() #: (Read Only) (boolean) True if BlobReader is closed. closed = property(__get_closed) #: (Read Only) (string) File mode - always "rb" mode = property(__get_mode) class _RowMapping(object): """An internal dictionary-like class that wraps a row of results in order to map field name to field value. .. warning:: We make ABSOLUTELY NO GUARANTEES about the return value of the `fetch(one|many|all)` methods except that it is a sequence indexed by field position, and no guarantees about the return value of the `fetch(one|many|all)map` methods except that it is a mapping of field name to field value. Therefore, client programmers should NOT rely on the return value being an instance of a particular class or type. """ def __init__(self, description, row): self._description = description fields = self._fields = {} pos = 0 for fieldSpec in description: # It's possible for a result set from the database engine to return # multiple fields with the same name, but kinterbasdb's key-based # row interface only honors the first (thus setdefault, which won't # store the position if it's already present in self._fields).
fields.setdefault(fieldSpec[DESCRIPTION_NAME], row[pos]) pos += 1 def __len__(self): return len(self._fields) def __getitem__(self, fieldName): fields = self._fields # Straightforward, unnormalized lookup will work if the fieldName is # already uppercase and/or if it refers to a database field whose # name is case-sensitive. if fieldName in fields: return fields[fieldName] else: fieldNameNormalized = _normalizeDatabaseIdentifier(fieldName) try: return fields[fieldNameNormalized] except KeyError: raise KeyError('Result set has no field named "%s". The field' ' name must be one of: (%s)' % (fieldName, ', '.join(fields.keys())) ) def get(self, fieldName, defaultValue=None): try: return self[fieldName] except KeyError: return defaultValue def __contains__(self, fieldName): try: self[fieldName] except KeyError: return False else: return True def __str__(self): # Return an easily readable dump of this row's field names and their # corresponding values. return '' % ', '.join([ '%s = %s' % (fieldName, self[fieldName]) for fieldName in self._fields.keys() ]) def keys(self): # Note that this is an *ordered* list of keys. return [fieldSpec[DESCRIPTION_NAME] for fieldSpec in self._description] def values(self): # Note that this is an *ordered* list of values. 
return [self[fieldName] for fieldName in self.keys()] def items(self): return [(fieldName, self[fieldName]) for fieldName in self.keys()] def iterkeys(self): for fieldDesc in self._description: yield fieldDesc[DESCRIPTION_NAME] __iter__ = iterkeys def itervalues(self): for fieldName in self: yield self[fieldName] def iteritems(self): for fieldName in self: yield fieldName, self[fieldName] class _TableAccessStats(object): """An internal class that wraps results from :meth:`~fdb.Connection.get_table_access_stats()`""" def __init__(self,table_id): self.table_id = table_id self.table_name = None self.sequential = None self.indexed = None self.inserts = None self.updates = None self.deletes = None self.backouts = None self.purges = None self.expunges = None def _set_info(self,info_code,value): if info_code == isc_info_read_seq_count: self.sequential = value elif info_code == isc_info_read_idx_count: self.indexed = value elif info_code == isc_info_insert_count: self.inserts = value elif info_code == isc_info_update_count: self.updates = value elif info_code == isc_info_delete_count: self.deletes = value elif info_code == isc_info_backout_count: self.backouts = value elif info_code == isc_info_purge_count: self.purges = value elif info_code == isc_info_expunge_count: self.expunges = value else: ProgrammingError("Unsupported info code: %d" % info_code) class _RequestBufferBuilder(object): def __init__(self, clusterIdentifier=None): self.clear() if clusterIdentifier: self._addCode(clusterIdentifier) def render(self): # Convert the RequestBufferBuilder's components to a binary Python str. 
return b('').join(self._buffer) def clear(self): self._buffer = [] def _extend(self, otherRequestBuilder): self._buffer.append(otherRequestBuilder.render()) def _addRaw(self, rawBuf): assert isinstance(rawBuf, mybytes) self._buffer.append(rawBuf) def _addCode(self, code): self._code2reqbuf(self._buffer, code) def _code2reqbuf(self, reqBuf, code): if isinstance(code, str): assert len(code) == 1 code = ord(code) # The database engine considers little-endian integers "portable"; they # need to have been converted to little-endianness before being sent # across the network. reqBuf.append(struct.pack(' UINT_MAX): raise ValueError('Lock resolution must be either None' ' or a non-negative int number of seconds between 0 and' ' %d.' % UINT_MAX) self._lock_timeout = lock_timeout #: (integer) Required lock timeout or None. #: #: **Default:** `None` lock_timeout = property(_get_lock_timeout, _set_lock_timeout) # table_reservation property (an instance of TableReservation): def _get_table_reservation(self): if self._table_reservation is None: self._table_reservation = TableReservation() return self._table_reservation def _set_table_reservation_access(self, _): raise ProgrammingError('Instead of changing the value of the' ' .table_reservation object itself, you must change its *elements*' ' by manipulating it as though it were a dictionary that mapped' '\n "TABLE_NAME": (sharingMode, accessMode)' '\nFor example:' '\n tpbBuilder.table_reservation["MY_TABLE"] =' ' (kinterbasdb.isc_tpb_protected, kinterbasdb.isc_tpb_lock_write)' ) #: (:class:`TableReservation`) Table reservation specification. #: #: **Default:** `None`. #: #: Instead of changing the value of the TableReservation object itself, you #: must change its elements by manipulating it as though it were a dictionary #: that mapped “TABLE_NAMEâ€: (sharingMode, accessMode) For example: #: #: .. 
code-block:: python #: #: tpb.table_reservation["MY_TABLE"] = (fdb.isc_tpb_protected, fdb.isc_tpb_lock_write) table_reservation = property(_get_table_reservation, _set_table_reservation_access) class TableReservation(object): """A dictionary-like helper class that maps "TABLE_NAME": (sharingMode, accessMode). It performs validation of values assigned to keys. """ _MISSING = object() _SHARING_MODE_STRS = { isc_tpb_shared: 'isc_tpb_shared', isc_tpb_protected: 'isc_tpb_protected', isc_tpb_exclusive: 'isc_tpb_exclusive', } _ACCESS_MODE_STRS = { isc_tpb_lock_read: 'isc_tpb_lock_read', isc_tpb_lock_write: 'isc_tpb_lock_write', } def __init__(self): self._res = {} def copy(self): # A shallow copy is fine. import copy return copy.copy(self) def render(self): """Create valid `table access parameter block` according to current key/value pairs. :returns: (string) Table access definition block. """ if not self: return b('') frags = [] _ = frags.append for tableName, resDefs in self.iteritems(): tableNameLenWithTerm = len(b(tableName)) + 1 for (sharingMode, accessMode) in resDefs: _(int2byte(accessMode)) _(struct.pack('' frags = ['') return ''.join(frags) def keys(self): return self._res.keys() def values(self): return self._res.values() def items(self): return self._res.items() def iterkeys(self): return self._res.iterkeys() def itervalues(self): return self._res.itervalues() def iteritems(self): ### Todo: verify handling of P version differences if PYTHON_MAJOR_VER == 3: return self._res.items() else: return self._res.iteritems() def __setitem__(self, key, value): key = self._validateKey(key) key = _normalizeDatabaseIdentifier(key) # If the += operator is being applied, the form of value will be like: # [(sharingMode0, accessMode0), ..., newSharingMode, newAccessMode] # For the sake of convenience, we detect this situation and handle it # "naturally".
if isinstance(value, list) and len(value) >= 3: otherValues = value[:-2] value = tuple(value[-2:]) else: otherValues = None if ( (not isinstance(value, tuple)) or len(value) != 2 or value[0] not in (isc_tpb_shared, isc_tpb_protected, isc_tpb_exclusive) or value[1] not in (isc_tpb_lock_read, isc_tpb_lock_write) ): raise ValueError('Table reservation entry must be a 2-tuple of' ' the following form:\n' 'element 0: sharing mode (one of (isc_tpb_shared,' ' isc_tpb_protected, isc_tpb_exclusive))\n' 'element 1: access mode (one of (isc_tpb_lock_read,' ' isc_tpb_lock_write))\n' '%s is not acceptable.' % str(value) ) if otherValues is None: value = [value] else: otherValues.append(value) value = otherValues self._res[key] = value def _validateKey(self, key): ### Todo: verify handling of P version differences, refactor if PYTHON_MAJOR_VER == 3: keyMightBeAcceptable = isinstance(key, str) if keyMightBeAcceptable and isinstance(key, str): try: key.encode('ASCII') except UnicodeEncodeError: keyMightBeAcceptable = False if not keyMightBeAcceptable: raise TypeError('Only str keys are allowed.') else: keyMightBeAcceptable = isinstance(key, basestring) if keyMightBeAcceptable and isinstance(key, unicode): try: key = key.encode('ASCII') except UnicodeEncodeError: keyMightBeAcceptable = False if not keyMightBeAcceptable: raise TypeError('Only str keys are allowed.') return key def _validateTPB(tpb): if isinstance(tpb, TPB): # TPB's accessor methods perform their own validation, and its # render method takes care of infrastructural trivia. return tpb elif isinstance(tpb, (ListType, TupleType)): return tpb elif not (isinstance(tpb, mybytes) and len(tpb) > 0): raise ProgrammingError('TPB must be non-unicode string of length > 0') # The kinterbasdb documentation promises (or at least strongly implies) # that if the user tries to set a TPB that does not begin with # isc_tpb_version3, kinterbasdb will automatically supply that # infrastructural value. 
# This promise might cause problems in the future, # when isc_tpb_version3 is superseded. A possible solution would be to # check the first byte against all known isc_tpb_versionX version flags, # like this: # if tpb[0] not in (isc_tpb_version3, ..., isc_tpb_versionN): # tpb = isc_tpb_version3 + tpb # That way, compatibility with old versions of the DB server would be # maintained, but client code could optionally specify a newer TPB version. if tpb[0] != isc_tpb_version3: tpb = isc_tpb_version3 + tpb return tpb def _normalizeDatabaseIdentifier(ident): if ident.startswith('"') and ident.endswith('"'): # Quoted name; leave the case of the field name untouched, but # strip the quotes. return ident[1:-1] else: # Everything else is normalized to uppercase to support case- # insensitive lookup. return ident.upper() fdb-1.6.1/fdb/schema.py0000644000175000001440000056101612645427622015302 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM: fdb # MODULE: schema.py # DESCRIPTION: Database schema # CREATED: 10.5.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) 2013 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. import sys import os import fdb #from . import fbcore as fdb
from fdb.utils import LateBindingProperty import string import weakref from itertools import groupby # Firebird Field Types FBT_SMALLINT = 7 FBT_INTEGER = 8 FBT_QUAD = 9 FBT_FLOAT = 10 FBT_CHAR = 14 FBT_DOUBLE_PRECISION = 27 FBT_DATE = 35 FBT_VARCHAR = 37 FBT_CSTRING = 40 FBT_BLOB_ID = 45 FBT_BLOB = 261 FBT_SQL_DATE = 12 FBT_SQL_TIME = 13 FBT_SQL_TIMESTAMP = 35 FBT_BIGINT = 16 FBT_BOOLEAN = 23 MAX_INTSUBTYPES = 2 MAX_BLOBSUBTYPES = 8 TRIGGER_TYPE_SHIFT = 13 TRIGGER_TYPE_MASK = (0x3 << TRIGGER_TYPE_SHIFT) TRIGGER_TYPE_DML = (0 << TRIGGER_TYPE_SHIFT) TRIGGER_TYPE_DB = (1 << TRIGGER_TYPE_SHIFT) TRIGGER_TYPE_DDL = (2 << TRIGGER_TYPE_SHIFT) DDL_TRIGGER_ANY = 4611686018427375615 # 9223372036854751229 DDL_TRIGGER_CREATE_TABLE = 1 DDL_TRIGGER_ALTER_TABLE = 2 DDL_TRIGGER_DROP_TABLE = 3 DDL_TRIGGER_CREATE_PROCEDURE = 4 DDL_TRIGGER_ALTER_PROCEDURE = 5 DDL_TRIGGER_DROP_PROCEDURE = 6 DDL_TRIGGER_CREATE_FUNCTION = 7 DDL_TRIGGER_ALTER_FUNCTION = 8 DDL_TRIGGER_DROP_FUNCTION = 9 DDL_TRIGGER_CREATE_TRIGGER = 10 DDL_TRIGGER_ALTER_TRIGGER = 11 DDL_TRIGGER_DROP_TRIGGER = 12 # gap for TRIGGER_TYPE_MASK - 3 bits DDL_TRIGGER_CREATE_EXCEPTION = 16 DDL_TRIGGER_ALTER_EXCEPTION = 17 DDL_TRIGGER_DROP_EXCEPTION = 18 DDL_TRIGGER_CREATE_VIEW = 19 DDL_TRIGGER_ALTER_VIEW = 20 DDL_TRIGGER_DROP_VIEW = 21 DDL_TRIGGER_CREATE_DOMAIN = 22 DDL_TRIGGER_ALTER_DOMAIN = 23 DDL_TRIGGER_DROP_DOMAIN = 24 DDL_TRIGGER_CREATE_ROLE = 25 DDL_TRIGGER_ALTER_ROLE = 26 DDL_TRIGGER_DROP_ROLE = 27 DDL_TRIGGER_CREATE_INDEX = 28 DDL_TRIGGER_ALTER_INDEX = 29 DDL_TRIGGER_DROP_INDEX = 30 DDL_TRIGGER_CREATE_SEQUENCE = 31 DDL_TRIGGER_ALTER_SEQUENCE = 32 DDL_TRIGGER_DROP_SEQUENCE = 33 DDL_TRIGGER_CREATE_USER = 34 DDL_TRIGGER_ALTER_USER = 35 DDL_TRIGGER_DROP_USER = 36 DDL_TRIGGER_CREATE_COLLATION = 37 DDL_TRIGGER_DROP_COLLATION = 38 DDL_TRIGGER_ALTER_CHARACTER_SET = 39 DDL_TRIGGER_CREATE_PACKAGE = 40 DDL_TRIGGER_ALTER_PACKAGE = 41 DDL_TRIGGER_DROP_PACKAGE = 42 DDL_TRIGGER_CREATE_PACKAGE_BODY = 43
DDL_TRIGGER_DROP_PACKAGE_BODY = 44 DDL_TRIGGER_CREATE_MAPPING = 45 DDL_TRIGGER_ALTER_MAPPING = 46 DDL_TRIGGER_DROP_MAPPING = 47 COLUMN_TYPES = {None: 'UNKNOWN', FBT_SMALLINT: 'SMALLINT', FBT_INTEGER: 'INTEGER', FBT_QUAD: 'QUAD', FBT_FLOAT: 'FLOAT', FBT_CHAR: 'CHAR', FBT_DOUBLE_PRECISION: 'DOUBLE PRECISION', FBT_VARCHAR: 'VARCHAR', FBT_CSTRING: 'CSTRING', FBT_BLOB_ID: 'BLOB_ID', FBT_BLOB: 'BLOB', FBT_SQL_TIME: 'TIME', FBT_SQL_DATE: 'DATE', FBT_SQL_TIMESTAMP: 'TIMESTAMP', FBT_BIGINT: 'BIGINT', FBT_BOOLEAN: 'BOOLEAN', } INTEGRAL_SUBTYPES = ('UNKNOWN','NUMERIC','DECIMAL') BLOB_SUBTYPES = ('BINARY','TEXT','BLR','ACL','RANGES','SUMMARY', 'FORMAT','TRANSACTION_DESCRIPTION','EXTERNAL_FILE_DESCRIPTION', 'DEBUG_INFORMATION') TRIGGER_PREFIX_TYPES = ['BEFORE','AFTER'] TRIGGER_SUFFIX_TYPES = ['','INSERT','UPDATE','DELETE'] TRIGGER_DB_TYPES = ['CONNECT','DISCONNECT','TRANSACTION START', 'TRANSACTION COMMIT','TRANSACTION ROLLBACK'] TRIGGER_DDL_TYPES = [None,"CREATE TABLE","ALTER TABLE","DROP TABLE", "CREATE PROCEDURE","ALTER PROCEDURE","DROP PROCEDURE", "CREATE FUNCTION","ALTER FUNCTION","DROP FUNCTION", "CREATE TRIGGER","ALTER TRIGGER","DROP TRIGGER", None,None,None, # gap for TRIGGER_TYPE_MASK - 3 bits "CREATE EXCEPTION","ALTER EXCEPTION","DROP EXCEPTION", "CREATE VIEW","ALTER VIEW","DROP VIEW", "CREATE DOMAIN","ALTER DOMAIN","DROP DOMAIN", "CREATE ROLE","ALTER ROLE","DROP ROLE", "CREATE INDEX","ALTER INDEX","DROP INDEX", "CREATE SEQUENCE","ALTER SEQUENCE","DROP SEQUENCE", "CREATE USER","ALTER USER","DROP USER", "CREATE COLLATION","DROP COLLATION","ALTER CHARACTER SET", "CREATE PACKAGE","ALTER PACKAGE","DROP PACKAGE", "CREATE PACKAGE BODY","DROP PACKAGE BODY", "CREATE MAPPING","ALTER MAPPING","DROP MAPPING"] COLLATION_PAD_SPACE = 1 COLLATION_CASE_INSENSITIVE = 2 COLLATION_ACCENT_INSENSITIVE = 4 INDEX_TYPE_ASCENDING = 'ASCENDING' INDEX_TYPE_DESCENDING = 'DESCENDING' INDEX_TYPES = [INDEX_TYPE_ASCENDING,INDEX_TYPE_DESCENDING] RELATION_TYPE_TABLE = 0 RELATION_TYPE_VIEW = 1 
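The `TRIGGER_TYPE_*` constants defined above carve a raw `RDB$TRIGGER_TYPE` value into a two-bit class field at bit 13 (DML, database, or DDL triggers), with the event code kept in the low bits. A minimal standalone sketch of that decoding — the constants are copied from this module, but `trigger_kind` is an illustrative helper, not part of fdb:

```python
# Bit layout of RDB$TRIGGER_TYPE, copied from fdb.schema above.
TRIGGER_TYPE_SHIFT = 13
TRIGGER_TYPE_MASK = (0x3 << TRIGGER_TYPE_SHIFT)
TRIGGER_TYPE_DML = (0 << TRIGGER_TYPE_SHIFT)
TRIGGER_TYPE_DB = (1 << TRIGGER_TYPE_SHIFT)
TRIGGER_TYPE_DDL = (2 << TRIGGER_TYPE_SHIFT)

def trigger_kind(trigger_type):
    """Classify a raw RDB$TRIGGER_TYPE value as 'DML', 'DB' or 'DDL'."""
    # Mask off the event bits and keep only the two-bit trigger class.
    return {TRIGGER_TYPE_DML: 'DML',
            TRIGGER_TYPE_DB: 'DB',
            TRIGGER_TYPE_DDL: 'DDL'}[trigger_type & TRIGGER_TYPE_MASK]

# RDB$TRIGGER_TYPE = 1 is a plain BEFORE INSERT DML trigger; database and
# DDL triggers carry their event code in the bits below the class field.
print(trigger_kind(1))                     # DML
print(trigger_kind(TRIGGER_TYPE_DB | 0))   # DB
print(trigger_kind(TRIGGER_TYPE_DDL | 1))  # DDL
```

The per-event names used for rendering live in the module's `TRIGGER_DB_TYPES` and `TRIGGER_DDL_TYPES` lists, indexed by those low bits (e.g. index 0 of `TRIGGER_DB_TYPES` is `CONNECT`, index 1 of `TRIGGER_DDL_TYPES` is `CREATE TABLE`).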
RELATION_TYPE_GTT = 5 RELATION_TYPE_GTT_PRESERVE = 4 RELATION_TYPE_GTT_DELETE = 5 PROCPAR_DATATYPE = 0 PROCPAR_DOMAIN = 1 PROCPAR_TYPE_OF_DOMAIN = 2 PROCPAR_TYPE_OF_COLUMN = 3 RESERVED = ['ACTIVE','ADD','ADMIN','AFTER','ALL','ALTER','AND', 'ANY','ARE','AS','ASC','ASCENDING','AT','AUTO','AUTODDL','AVG', 'BASED','BASE_NAME','BEFORE','BEGIN','BETWEEN','BIGINT','BIT_LENGTH', 'BLOB','BLOBEDIT','BOTH','BUFFER','BY','BOOLEAN', 'CASE','CAST','CHAR','CHARACTER','CHAR_LENGTH','CHARACTER_LENGTH', 'CHECK','CHECK_POINT_LENGTH','CLOSE','COALESCE','COLLATE','COLLATION', 'COLUMN','COMMIT','COMMITTED','COMPILETIME','COMPUTED','CONDITIONAL', 'CONNECT','CONSTRAINT','CONTAINING','CONTINUE','COUNT','CREATE','CROSS', 'CSTRING','CURRENT','CURRENT_CONNECTION','CURRENT_DATE','CURRENT_ROLE', 'CURRENT_TIME','CURRENT_TIMESTAMP','CURRENT_TRANSACTION','CURRENT_USER', 'CORR','COVAR_POP','COVAR_SAMP', 'DATABASE','DATE','DAY','DB_KEY','DEBUG','DEC','DECIMAL','DECLARE','DEFAULT', 'DELETE','DELETING','DESC','DESCENDING','DESCRIBE','DISCONNECT', 'DISPLAY','DISTINCT','DO','DOMAIN','DOUBLE','DROP','DETERMINISTIC', 'ECHO','EDIT','ELSE','END','ENTRY_POINT','ESCAPE','EVENT','EXCEPTION','EXECUTE', 'EXISTS','EXIT','EXTERN','EXTERNAL','EXTRACT', 'FETCH','FILE','FILTER','FLOAT','FOR','FOREIGN','FOUND','FROM','FULL','FUNCTION', 'FALSE', 'GDSCODE','GENERATOR','GEN_ID','GOTO','GRANT','GROUP','GROUP_COMMIT_WAIT_TIME', 'GLOBAL', 'HAVING','HEADING','HELP','HOUR', 'IF','IMMEDIATE','IN','INACTIVE','INDEX','INDICATOR','INIT','INNER','INPUT', 'INPUT_TYPE','INSERT','INSERTING','INT','INTEGER','INTO','IS','ISOLATION', 'INSENSITIVE', 'JOIN', 'KEY', 'LAST','LC_MESSAGES','LC_TYPE','LEADING','LEAVE','LEFT','LENGTH', 'LEVEL','LIKE','LOCK','LOG_BUFFER_SIZE','LONG','LOWER', 'MANUAL','MAX','MAXIMUM','MAXIMUM_SEGMENT','MAX_SEGMENT','MERGE','MESSAGE', 'MIN','MINIMUM','MINUTE','MODULE_NAME','MONTH', 'NAMES','NATIONAL','NATURAL','NCHAR','NO','NOAUTO','NOT','NULL','NULLIF', 'NULLS','NUM_LOG_BUFFERS','NUMERIC', 
'OCTET_LENGTH','OF','ON','ONLY','OPEN','OPTION','OR','ORDER','OUTER','OUTPUT', 'OUTPUT_TYPE','OVERFLOW','OFFSET','OVER', 'PAGE','PAGELENGTH','PAGES','PAGE_SIZE','PARAMETER','PASSWORD','PERCENT', 'PLAN','POSITION','POST_EVENT','PRECISION','PREPARE','PRIMARY','PRIVILEGES', 'PROCEDURE','PUBLIC', 'QUIT', 'RDB$DB_KEY','READ','REAL','RECORD_VERSION','RECREATE','REFERENCES','RELEASE', 'RESERV','RESERVING','RETAIN','RETURN','RETURNING_VALUES','RETURNS','REVOKE', 'RIGHT','ROLLBACK','ROW_COUNT','ROWS','RUNTIME','RECURSIVE','RDB$RECORD_VERSION', 'REGR_AVGX','REGR_AVGY','REGR_COUNT','REGR_INTERCEPT','REGR_R2','REGR_SLOPE', 'REGR_SXX','REGR_SXY','REGR_SYY','ROW', 'SAVEPOINT','SCHEMA','SECOND','SELECT','SET','SHADOW','SHARED','SHELL', 'SHOW','SIMILAR','SINGULAR','SIZE','SMALLINT','SNAPSHOT','SOME','SORT','SQL', 'SQLCODE','SQLERROR','SQLWARNING','STABILITY','STARTING','STARTS','STATEMENT', 'STATIC','STATISTICS','SUB_TYPE','SUM','SUSPEND','SENSITIVE','START','SCROLL', 'SQLSTATE','STDDEV_POP','STDDEV_SAMP', 'TABLE','TERM','TERMINATOR','THEN','TIES','TIME','TIMESTAMP','TO','TRAILING', 'TRANSACTION','TRANSLATE','TRANSLATION','TRIGGER','TRIM','TRUE', 'UNCOMMITTED','UNION','UNIQUE','UNKNOWN','UPDATE','UPDATING','UPPER','USER', 'USING', 'VALUE','VALUES','VARCHAR','VARIABLE','VARYING','VERSION','VIEW','VAR_POP', 'VAR_SAMP', 'WAIT','WHEN','WHENEVER','WHERE','WHILE','WITH','WORK','WRITE', 'YEAR', ] NON_RESERVED = ['ABS','ACCENT','ACOS','ALWAYS','ASCII_CHAR','ASCII_VAL','ASIN','ATAN','ATAN2', 'AUTONOMOUS','ACTION','ABSOLUTE','ACOSH','ASINH','ATANH', 'BIN_AND','BIN_OR','BIN_NOT','BIN_SHL','BIN_SHR','BIN_XOR', 'BLOCK','BACKUP','BREAK','BODY', #removed 'BASENAME', 'CALLER','CEIL','CEILING','CHAR_TO_UUID','CASCADE','COMMENT','COMMON', 'COS','COSH','COT','CURSOR','CONTINUE', #removed 'CACHE','CHECK_POINT_LEN', 'DATEADD','DATEDIFF','DECODE','DIFFERENCE','DATA','DESCRIPTOR','DDL','DECRYPT', 'DENSE_RANK', 'EXP','ENCRYPT','ENGINE', 'FIRSTNAME','FLOOR','FIRST','FREE_IT','FIRST_VALUE', 
'GEN_UUID','GENERATED','GRANTED', #removed 'GROUP_COMMIT_WAIT', 'HASH', 'IGNORE','IIF','IDENTITY','INCREMENT', 'LIMBO','LIST','LN','LOG','LOG10','LPAD','LASTNAME','LAST_VALUE','LAG','LEAD', 'LINGER', #removed 'LOGFILE','LOG_BUF_SEZE', 'MAPPING','MATCHED','MATCHING','MAXVALUE','MIDDLENAME','MILLISECOND', 'MINVALUE','MOD', 'NEXT','NAME','NTH_VALUE', #removed 'NUM_LOG_BUFS', 'OS_NAME','OVERLAY', 'PI','PLACING','POWER','PROTECTED','PAD','PRESERVE','PACKAGE','PARTITION', 'PLUGIN','PRIOR', 'REPLACE','REQUESTS','RESTART','RETURNING','REVERSE','ROUND','RPAD','RAND', 'RESTRICT','ROLE','RANK','RELATIVE','ROW_NUMBER', #removed 'RAW_PARTITIONS', 'SEGMENT','SEQUENCE','SIGN','SIN','SINH','SOURCE','SPACE','SQLSTATE','SQRT', 'SCALAR_ARRAY','SKIP','SUBSTRING','SERVERWIDE', 'TIMEOUT','TRUNC','TWO_PHASE','TAN','TANH','TYPE','TEMPORARY','TAGS','TRUSTED', 'UUID_TO_CHAR','UNDO','USAGE', 'WEEK','WEEKDAY', 'YEARDAY' ] #--- Functions def get_grants(privileges,grantors=None): """Get list of the minimal set of SQL GRANT statements necessary to grant the specified privileges. :param list privileges: List of :class:`Privilege` instances. :param list grantors: List of standard grantor names. Generates GRANTED BY clause for privileges granted by a user that's not in the list.
""" tp = {'S':'SELECT','I':'INSERT','U':'UPDATE','D':'DELETE','R':'REFERENCES'} def skey(item): return (item.user_name,item.user_type,item.grantor_name, item.subject_name,item.subject_type,item.has_grant(), item.privilege in tp,item.privilege, str(item.field_name),) def gkey(item): return (item.user_name,item.user_type,item.grantor_name, item.subject_name,item.subject_type,item.has_grant(), item.privilege in tp,) def gkey2(item): return item.privilege grants = [] p = list(privileges) p.sort(key=skey) for k, g in groupby(p,gkey): g = list(g) item = g[0] if item.has_grant(): admin_option = ' WITH %s OPTION' % ('ADMIN' if item.privilege == 'M' else 'GRANT') else: admin_option = '' uname = item.user_name user = item.user if isinstance(user,Procedure): utype = 'PROCEDURE ' elif isinstance(user,Trigger): utype = 'TRIGGER ' elif isinstance(user,View): utype = 'VIEW ' else: utype = '' sname = item.subject_name if (grantors is not None) and (item.grantor_name not in grantors): granted_by = ' GRANTED BY %s' % item.grantor_name else: granted_by = '' priv_list = [] for k,items in groupby(g,gkey2): items = list(items) item = items[0] if item.privilege in tp: privilege = tp[item.privilege] if len(items) > 1: privilege += '(%s)' % ','.join(i.field_name for i in items if i.field_name) elif item.field_name is not None: privilege += '(%s)' % item.field_name priv_list.append(privilege) elif item.privilege == 'X': # procedure privilege = 'EXECUTE ON PROCEDURE ' else: # role membership privilege = '' if priv_list: privilege = ', '.join(priv_list) privilege += ' ON ' grants.append('GRANT %s%s TO %s%s%s%s' % (privilege,sname,utype, uname,admin_option,granted_by)) return grants def isKeyword(ident): "Returns True if `ident` is Firebird keyword." return (ident in RESERVED) or (ident in NON_RESERVED) def escape_single_quotes(text): return text.replace("'","''") #--- Exceptions #--- Classes class Schema(object): """This class represents database schema. 
""" #: option switch: Always quote db object names on output opt_always_quote = False #: Datatype declaration methods for procedure parameters: key = numID, value = name enum_param_type_from = {PROCPAR_DATATYPE: 'DATATYPE', PROCPAR_DOMAIN: 'DOMAIN', PROCPAR_TYPE_OF_DOMAIN: 'TYPE OF DOMAIN', PROCPAR_TYPE_OF_COLUMN: 'TYPE OF COLUMN'} #: Object types: key = numID, value = type_name enum_object_types = dict() #: Object type codes: key = type_name, value = numID enum_object_type_codes = dict() #: Character set names: key = numID, value = charset_name enum_character_set_names = dict() #: Field types: key = numID, value = type_name enum_field_types = dict() #: Field sub types: key = numID, value = type_name enum_field_subtypes = dict() #: Function types: key = numID, value = type_name enum_function_types = dict() #: Mechanism Types: key = numID, value = type_name enum_mechanism_types = dict() #: Parameter Mechanism Types: key = numID, value = type_name enum_parameter_mechanism_types = dict() #: Procedure Types: key = numID, value = type_name enum_procedure_types = dict() #: Relation Types: key = numID, value = type_name enum_relation_types = dict() #: System Flag Types: key = numID, value = type_name enum_system_flag_types = dict() #: Transaction State Types: key = numID, value = type_name enum_transaction_state_types = dict() #: Trigger Types: key = numID, value = type_name enum_trigger_types = dict() # Firebird 3.0 #: Parameter Types: key = numID, value = type_name enum_parameter_types = dict() #: Index activity status: key = numID, value = flag_name enum_index_activity_flags = dict() #: Index uniqueness: key = numID, value = flag_name enum_index_unique_flags = dict() #: Trigger activity status: key = numID, value = flag_name_name enum_trigger_activity_flags = dict() #: Grant option: key = numID, value = option_name enum_grant_options = dict() #: Page type: key = numID, value = type_name enum_page_types = dict() #: Privacy flags: numID, value = flag_name 
enum_privacy_flags = dict() #: Legacy flags: numID, value = flag_name enum_legacy_flags = dict() #: Determinism flags: numID, value = flag_name enum_determinism_flags = dict() def __init__(self): self._con = None self._ic = None self.__internal = False def __del__(self): if not self.closed: self._close() def __get_closed(self): return self._con is None def __fail_if_closed(self): if self.closed: raise fdb.ProgrammingError("Schema is not bound to a connection.") def _close(self): self._ic.close() self._con = None self._ic = None def _set_as_internal(self): """Mark this instance as `internal` (embedded). This blocks calls to :meth:`bind` and :meth:`close`.""" self.__internal = True self._con = weakref.proxy(self._con) def __object_by_name(self,list,name): if name is None: return None for o in list: if o.name == name: return o return None def __clear(self,data=None): if data: data = data.lower() if data not in ['tables','views','domains','indices','dependencies', 'generators','sequences','triggers','procedures', 'constraints','collations','character sets', 'exceptions','roles','functions','files','shadows', 'privileges','users','packages']: raise fdb.ProgrammingError("Unknown metadata category '%s'" % data) if (not data or data == 'tables'): self.__tables = None if (not data or data == 'views'): self.__views = None if (not data or data == 'domains'): self.__domains = None if (not data or data == 'indices'): self.__indices = None self.__constraint_indices = None if (not data or data == 'dependencies'): self.__dependencies = None if (not data or data in ['generators','sequences']): self.__generators = None if (not data or data == 'triggers'): self.__triggers = None if (not data or data == 'procedures'): self.__procedures = None if (not data or data == 'constraints'): self.__constraints = None if (not data or data == 'collations'): self.__collations = None if (not data or data == 'character sets'): self.__character_sets = None if (not data or data == 'exceptions'):
self.__exceptions = None if (not data or data == 'roles'): self.__roles = None if (not data or data == 'functions'): self.__functions = None if (not data or data == 'files'): self.__files = None if (not data or data == 'shadows'): self.__shadows = None if (not data or data == 'privileges'): self.__privileges = None if (not data or data == 'users'): self.__users = None if (not data or data == 'packages'): self.__packages = None #--- protected def _select_row(self,cmd,params=None): if params: self._ic.execute(cmd,params) else: self._ic.execute(cmd) return self._ic.fetchonemap() def _select(self,cmd,params=None): if params: self._ic.execute(cmd,params) else: self._ic.execute(cmd) return self._ic.itermap() def _get_field_dimensions(self,field): return [(r[0],r[1]) for r in self._ic.execute("""SELECT r.RDB$LOWER_BOUND, r.RDB$UPPER_BOUND FROM RDB$FIELD_DIMENSIONS r where r.RDB$FIELD_NAME = '%s' order by r.RDB$DIMENSION""" % field.name)] def _get_item(self,name,itype,subname=None): if itype == 0: # Relation return self.get_table(name) elif itype == 1: # View return self.get_view(name) elif itype == 2: # Trigger return self.get_trigger(name) elif itype == 5: # Procedure return self.get_procedure(name) elif itype == 8: # User result = self.__object_by_name(self._get_users(),name) if not result: result = fdb.services.User(name) self.__users.append(result) return result elif itype == 9: # Field return self.get_table(name).get_column(subname) elif itype == 10: # Index return self.get_index(name) elif itype == 13: # Role return self.get_role(name) elif itype == 14: # Generator return self.get_sequence(name) elif itype == 15: # UDF return self.get_function(name) elif itype == 17: # Collation return self.get_collation(name) elif itype in [17,18]: # Package return self.get_package(name) else: raise fdb.ProgrammingError('Unsupported subject type') #--- special attribute access methods def _get_description(self): return self.__description def _get_default_character_set(self): return 
self.get_character_set(self._default_charset_name) def _get_owner_name(self): return self.__owner def _get_security_class(self): return self.__security_class def _get_collations(self): if self.__collations is None: self.__fail_if_closed() self._ic.execute("select * from rdb$collations") self.__collations = [Collation(self,row) for row in self._ic.itermap()] return self.__collations def _get_character_sets(self): if self.__character_sets is None: self.__fail_if_closed() self._ic.execute("select * from rdb$character_sets") self.__character_sets = [CharacterSet(self,row) for row in self._ic.itermap()] return self.__character_sets def _get_exceptions(self): if self.__exceptions is None: self.__fail_if_closed() self._ic.execute("select * from rdb$exceptions") self.__exceptions = [DatabaseException(self,row) for row in self._ic.itermap()] return self.__exceptions def _get_all_domains(self): if self.__domains is None: self.__fail_if_closed() cols = ['RDB$FIELD_NAME','RDB$VALIDATION_SOURCE','RDB$COMPUTED_SOURCE', 'RDB$DEFAULT_SOURCE','RDB$FIELD_LENGTH','RDB$FIELD_SCALE', 'RDB$FIELD_TYPE','RDB$FIELD_SUB_TYPE','RDB$DESCRIPTION', 'RDB$SYSTEM_FLAG','RDB$SEGMENT_LENGTH','RDB$EXTERNAL_LENGTH', 'RDB$EXTERNAL_SCALE','RDB$EXTERNAL_TYPE','RDB$DIMENSIONS', 'RDB$NULL_FLAG','RDB$CHARACTER_LENGTH','RDB$COLLATION_ID', 'RDB$CHARACTER_SET_ID','RDB$FIELD_PRECISION'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$SECURITY_CLASS', 'RDB$OWNER_NAME']) self._ic.execute("""select %s from RDB$FIELDS""" % ','.join(cols)) self.__domains = [Domain(self,row) for row in self._ic.itermap()] return self.__domains def _get_domains(self): return [d for d in self._get_all_domains() if not d.issystemobject()] def _get_sysdomains(self): return [d for d in self._get_all_domains() if d.issystemobject()] def _get_all_tables(self): if self.__tables is None: self.__fail_if_closed() self._ic.execute("select * from rdb$relations where rdb$view_blr is null") self.__tables = [Table(self,row) for row in 
self._ic.itermap()] return self.__tables def _get_tables(self): return [t for t in self._get_all_tables() if not t.issystemobject()] def _get_systables(self): return [t for t in self._get_all_tables() if t.issystemobject()] def _get_all_views(self): if self.__views is None: self.__fail_if_closed() self._ic.execute("select * from rdb$relations where rdb$view_blr is not null") self.__views = [View(self,row) for row in self._ic.itermap()] return self.__views def _get_views(self): return [v for v in self._get_all_views() if not v.issystemobject()] def _get_sysviews(self): return [v for v in self._get_all_views() if v.issystemobject()] def _get_constraint_indices(self): if self.__constraint_indices is None: self.__fail_if_closed() self._ic.execute("""select RDB$INDEX_NAME, RDB$CONSTRAINT_NAME from RDB$RELATION_CONSTRAINTS where RDB$INDEX_NAME is not null""") self.__constraint_indices = dict([(key.strip(),value.strip()) for key, value in self._ic]) return self.__constraint_indices def _get_all_indices(self): if self.__indices is None: self.__fail_if_closed() # Dummy call to _get_constraint_indices() is necessary as # Index.issystemobject() that is called in Index.__init__() will # drop result from internal cursor and we'll not load all indices. 
self._get_constraint_indices() self._ic.execute("""select RDB$INDEX_NAME, RDB$RELATION_NAME, RDB$INDEX_ID, RDB$UNIQUE_FLAG, RDB$DESCRIPTION, RDB$SEGMENT_COUNT, RDB$INDEX_INACTIVE, RDB$INDEX_TYPE, RDB$FOREIGN_KEY, RDB$SYSTEM_FLAG, RDB$EXPRESSION_SOURCE, RDB$STATISTICS from RDB$INDICES""") self.__indices = [Index(self,row) for row in self._ic.itermap()] return self.__indices def _get_indices(self): return [i for i in self._get_all_indices() if not i.issystemobject()] def _get_sysindices(self): return [i for i in self._get_all_indices() if i.issystemobject()] def _get_all_generators(self): if self.__generators is None: self.__fail_if_closed() cols = ['RDB$GENERATOR_NAME', 'RDB$GENERATOR_ID', 'RDB$DESCRIPTION', 'RDB$SYSTEM_FLAG'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$SECURITY_CLASS','RDB$OWNER_NAME','RDB$INITIAL_VALUE', 'RDB$GENERATOR_INCREMENT']) self._ic.execute("select %s from rdb$generators" % ','.join(cols)) self.__generators = [Sequence(self,row) for row in self._ic.itermap()] return self.__generators def _get_generators(self): return [g for g in self._get_all_generators() if not g.issystemobject()] def _get_sysgenerators(self): return [g for g in self._get_all_generators() if g.issystemobject()] def _get_all_triggers(self): if self.__triggers is None: self.__fail_if_closed() cols = ['RDB$TRIGGER_NAME', 'RDB$RELATION_NAME', 'RDB$TRIGGER_SEQUENCE', 'RDB$TRIGGER_TYPE', 'RDB$TRIGGER_SOURCE', 'RDB$DESCRIPTION', 'RDB$TRIGGER_INACTIVE', 'RDB$SYSTEM_FLAG', 'RDB$FLAGS'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$VALID_BLR', 'RDB$ENGINE_NAME', 'RDB$ENTRYPOINT']) self._ic.execute("select %s from RDB$TRIGGERS" % ','.join(cols)) self.__triggers = [Trigger(self,row) for row in self._ic.itermap()] return self.__triggers def _get_triggers(self): return [g for g in self._get_all_triggers() if not g.issystemobject()] def _get_systriggers(self): return [g for g in self._get_all_triggers() if g.issystemobject()] def _get_all_procedures(self): if 
self.__procedures is None: self.__fail_if_closed() cols = ['RDB$PROCEDURE_NAME', 'RDB$PROCEDURE_ID', 'RDB$PROCEDURE_INPUTS', 'RDB$PROCEDURE_OUTPUTS', 'RDB$DESCRIPTION', 'RDB$PROCEDURE_SOURCE', 'RDB$SECURITY_CLASS', 'RDB$OWNER_NAME', 'RDB$SYSTEM_FLAG'] if self._con.ods >= fdb.ODS_FB_21: cols.extend(['RDB$PROCEDURE_TYPE','RDB$VALID_BLR']) if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$ENGINE_NAME', 'RDB$ENTRYPOINT', 'RDB$PACKAGE_NAME', 'RDB$PRIVATE_FLAG']) self._ic.execute("select %s from rdb$procedures" % ','.join(cols)) self.__procedures = [Procedure(self,row) for row in self._ic.itermap()] return self.__procedures def _get_procedures(self): return [p for p in self._get_all_procedures() if not p.issystemobject()] def _get_sysprocedures(self): return [p for p in self._get_all_procedures() if p.issystemobject()] def _get_constraints(self): if self.__constraints is None: self.__fail_if_closed() # Dummy call to _get_all_tables() is necessary as # Constraint.issystemobject() that is called in Constraint.__init__() # will drop result from internal cursor and we'll not load all constraints. 
self._get_all_tables() self._ic.execute("""select * from rdb$relation_constraints C left outer join rdb$ref_constraints R on C.rdb$constraint_name = R.rdb$constraint_name left outer join rdb$check_constraints K on C.rdb$constraint_name = K.rdb$constraint_name""") self.__constraints = [Constraint(self,row) for row in self._ic.itermap()] # Check constrains need special care because they're doubled # (select above returns two records for them with different trigger names) checks = [c for c in self.__constraints if c.ischeck()] self.__constraints = [c for c in self.__constraints if not c.ischeck()] dchecks = {} for check in checks: dchecks.setdefault(check.name,list()).append(check) for checklist in dchecks.values(): names = [c._attributes['RDB$TRIGGER_NAME'] for c in checklist] check = checklist[0] check._attributes['RDB$TRIGGER_NAME'] = names self.__constraints.append(check) return self.__constraints def _get_roles(self): if self.__roles is None: self.__fail_if_closed() self._ic.execute("select * from rdb$roles") self.__roles = [Role(self,row) for row in self._ic.itermap()] return self.__roles def _get_dependencies(self): if self.__dependencies is None: self.__fail_if_closed() self._ic.execute("select * from rdb$dependencies") self.__dependencies = [Dependency(self,row) for row in self._ic.itermap()] return self.__dependencies def _get_all_functions(self): if self.__functions is None: self.__fail_if_closed() cols = ['RDB$FUNCTION_NAME','RDB$FUNCTION_TYPE','RDB$MODULE_NAME', 'RDB$ENTRYPOINT','RDB$DESCRIPTION','RDB$RETURN_ARGUMENT', 'RDB$SYSTEM_FLAG'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$ENGINE_NAME','RDB$PACKAGE_NAME','RDB$PRIVATE_FLAG', 'RDB$FUNCTION_SOURCE','RDB$FUNCTION_ID','RDB$VALID_BLR', 'RDB$SECURITY_CLASS','RDB$OWNER_NAME','RDB$LEGACY_FLAG', 'RDB$DETERMINISTIC_FLAG']) self._ic.execute("select %s from rdb$functions" % ','.join(cols)) self.__functions = [Function(self,row) for row in self._ic.itermap()] return self.__functions def 
_get_functions(self): return [p for p in self._get_all_functions() if not p.issystemobject()] def _get_sysfunctions(self): return [p for p in self._get_all_functions() if p.issystemobject()] def _get_files(self): if self.__files is None: self.__fail_if_closed() self._ic.execute("""select RDB$FILE_NAME, RDB$FILE_SEQUENCE, RDB$FILE_START, RDB$FILE_LENGTH from RDB$FILES where RDB$SHADOW_NUMBER = 0 order by RDB$FILE_SEQUENCE""") self.__files = [DatabaseFile(self,row) for row in self._ic.itermap()] return self.__files def _get_shadows(self): if self.__shadows is None: self.__fail_if_closed() self._ic.execute("""select RDB$FILE_FLAGS, RDB$SHADOW_NUMBER from RDB$FILES where RDB$SHADOW_NUMBER > 0 AND RDB$FILE_SEQUENCE = 0 order by RDB$SHADOW_NUMBER""") self.__shadows = [Shadow(self,row) for row in self._ic.itermap()] return self.__shadows def _get_privileges(self): if self.__privileges is None: self.__fail_if_closed() self._ic.execute("""select RDB$USER, RDB$GRANTOR, RDB$PRIVILEGE, RDB$GRANT_OPTION, RDB$RELATION_NAME, RDB$FIELD_NAME, RDB$USER_TYPE, RDB$OBJECT_TYPE FROM RDB$USER_PRIVILEGES""") self.__privileges = [Privilege(self,row) for row in self._ic.itermap()] return self.__privileges def _get_users(self): if self.__users is None: self.__fail_if_closed() self._ic.execute("select distinct(RDB$USER) FROM RDB$USER_PRIVILEGES") self.__users = [fdb.services.User(row[0].strip()) for row in self._ic] return self.__users def _get_packages(self): if self.__packages is None: if self._con.ods >= fdb.ODS_FB_30: self.__fail_if_closed() self._ic.execute("""select RDB$PACKAGE_NAME, RDB$PACKAGE_HEADER_SOURCE, RDB$PACKAGE_BODY_SOURCE, RDB$VALID_BODY_FLAG, RDB$SECURITY_CLASS, RDB$OWNER_NAME, RDB$SYSTEM_FLAG, RDB$DESCRIPTION FROM RDB$PACKAGES""") self.__packages = [Package(self,row) for row in self._ic.itermap()] else: self.__packages = [] return self.__packages def _get_linger(self): return self.__linger #--- Properties #: True if link to :class:`~fdb.Connection` is closed. 
closed = property(__get_closed) description = LateBindingProperty(_get_description,None,None, "Database description or None if it doesn't have a description.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Database owner name.") default_character_set = LateBindingProperty(_get_default_character_set,None,None, "Default :class:`CharacterSet` for the database.") security_class = LateBindingProperty(_get_security_class,None,None, "Can refer to the security class applied as database-wide access control limits.") collations = LateBindingProperty(_get_collations,None,None, "List of all collations in database.\nItems are :class:`Collation` objects.") character_sets = LateBindingProperty(_get_character_sets,None,None, "List of all character sets in database.\nItems are :class:`CharacterSet` objects.") exceptions = LateBindingProperty(_get_exceptions,None,None, "List of all exceptions in database.\nItems are :class:`DatabaseException` objects.") generators = LateBindingProperty(_get_generators,None,None, "List of all user generators in database.\nItems are :class:`Sequence` objects.") sysgenerators = LateBindingProperty(_get_sysgenerators,None,None, "List of all system generators in database.\nItems are :class:`Sequence` objects.") sequences = LateBindingProperty(_get_generators,None,None, "List of all user generators in database.\nItems are :class:`Sequence` objects.") syssequences = LateBindingProperty(_get_sysgenerators,None,None, "List of all system generators in database.\nItems are :class:`Sequence` objects.") domains = LateBindingProperty(_get_domains,None,None, "List of all user domains in database.\nItems are :class:`Domain` objects.") sysdomains = LateBindingProperty(_get_sysdomains,None,None, "List of all system domains in database.\nItems are :class:`Domain` objects.") indices = LateBindingProperty(_get_indices,None,None, "List of all user indices in database.\nItems are :class:`Index` objects.") sysindices = LateBindingProperty(_get_sysindices,None,None,
"List of all system indices in database.\nItems are :class:`Index` objects.") tables = LateBindingProperty(_get_tables,None,None, "List of all user tables in database.\nItems are :class:`Table` objects.") systables = LateBindingProperty(_get_systables,None,None, "List of all system tables in database.\nItems are :class:`Table` objects.") views = LateBindingProperty(_get_views,None,None, "List of all user views in database.\nItems are :class:`View` objects.") sysviews = LateBindingProperty(_get_sysviews,None,None, "List of all system views in database.\nItems are :class:`View` objects.") triggers = LateBindingProperty(_get_triggers,None,None, "List of all user triggers in database.\nItems are :class:`Trigger` objects.") systriggers = LateBindingProperty(_get_systriggers,None,None, "List of all system triggers in database.\nItems are :class:`Trigger` objects.") procedures = LateBindingProperty(_get_procedures,None,None, "List of all user procedures in database.\nItems are :class:`Procedure` objects.") sysprocedures = LateBindingProperty(_get_sysprocedures,None,None, "List of all system procedures in database.\nItems are :class:`Procedure` objects.") constraints = LateBindingProperty(_get_constraints,None,None, "List of all constraints in database.\nItems are :class:`Constraint` objects.") roles = LateBindingProperty(_get_roles,None,None, "List of all roles in database.\nItems are :class:`Role` objects.") dependencies = LateBindingProperty(_get_dependencies,None,None, "List of all dependencies in database.\nItems are :class:`Dependency` objects.") functions = LateBindingProperty(_get_functions,None,None, "List of all user functions defined in database.\nItems are :class:`Function` objects.") sysfunctions = LateBindingProperty(_get_sysfunctions,None,None, "List of all system functions defined in database.\nItems are :class:`Function` objects.") files = LateBindingProperty(_get_files,None,None, "List of all extension files defined for database.\nItems are 
:class:`DatabaseFile` objects.") shadows = LateBindingProperty(_get_shadows,None,None, "List of all shadows defined for database.\nItems are :class:`Shadow` objects.") privileges = LateBindingProperty(_get_privileges,None,None, "List of all privileges defined for database.\nItems are :class:`Privilege` objects.") # FB 3 packages = LateBindingProperty(_get_packages,None,None, "List of all packages defined for database.\nItems are :class:`Package` objects.") linger = LateBindingProperty(_get_linger,None,None,"Database linger value.") #--- Public def bind(self, connection): """Bind this instance to specified :class:`~fdb.Connection`. :param connection: :class:`~fdb.Connection` instance. :raises ProgrammingError: If Schema object was set as internal (via :meth:`_set_as_internal`). """ if self.__internal: raise fdb.ProgrammingError("Call to 'bind' not allowed for embedded Schema.") if self._con: self.close() self._con = connection self._ic = self._con.query_transaction.cursor() self.__clear() self._ic.execute('select * from RDB$DATABASE') row = self._ic.fetchonemap() self.__description = row['RDB$DESCRIPTION'] self.__linger = row.get('RDB$LINGER') self._default_charset_name = row['RDB$CHARACTER_SET_NAME'].strip() self.__security_class = row['RDB$SECURITY_CLASS'] if self.__security_class: self.__security_class = self.__security_class.strip() self._ic.execute("select RDB$OWNER_NAME from RDB$RELATIONS where RDB$RELATION_NAME = 'RDB$DATABASE'") self.__owner = self._ic.fetchone()[0].strip() # Load enumerate types defined in RDB$TYPES table enum_select = 'select RDB$TYPE, RDB$TYPE_NAME from RDB$TYPES where RDB$FIELD_NAME = ?' 
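The enum-loading pattern used by `bind()` above — one `enum_dict` query per RDB$TYPES category, with trailing blanks stripped from the CHAR values the server returns — can be sketched in isolation. `FakeCursor` and `ENUM_ROWS` below are hypothetical stand-ins for the real query-transaction cursor and RDB$TYPES contents; they are not part of fdb:

```python
# Standalone sketch of the enum_dict pattern from Schema.bind().
# FakeCursor is a hypothetical stand-in for the query-transaction cursor;
# real rows come from RDB$TYPES, with CHAR values padded by the server.

ENUM_ROWS = {
    'RDB$OBJECT_TYPE': [(0, 'RELATION   '), (1, 'VIEW       '), (2, 'TRIGGER    ')],
    'RDB$FIELD_TYPE':  [(7, 'SHORT      '), (8, 'LONG       ')],
}

class FakeCursor:
    def execute(self, sql, params):
        # Yield (code, name) pairs for the requested RDB$FIELD_NAME category.
        return iter(ENUM_ROWS.get(params[0], []))

def enum_dict(cursor, enum_type):
    sql = ('select RDB$TYPE, RDB$TYPE_NAME from RDB$TYPES '
           'where RDB$FIELD_NAME = ?')
    return dict((key, value.strip())
                for key, value in cursor.execute(sql, (enum_type,)))

cur = FakeCursor()
object_types = enum_dict(cur, 'RDB$OBJECT_TYPE')
# Reverse mapping, as Schema builds enum_object_type_codes:
object_type_codes = dict((value, key) for key, value in object_types.items())
print(object_types)       # {0: 'RELATION', 1: 'VIEW', 2: 'TRIGGER'}
print(object_type_codes)  # {'RELATION': 0, 'VIEW': 1, 'TRIGGER': 2}
```

An unknown category simply yields an empty dict, which is why `bind()` can fall back to a hard-coded mapping when `RDB$PROCEDURE_TYPE` rows are absent on older servers.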
def enum_dict(enum_type): return dict((key,value.strip()) for key, value in self._ic.execute(enum_select,(enum_type,))) # Object types self.enum_object_types = enum_dict('RDB$OBJECT_TYPE') # Object type codes self.enum_object_type_codes = dict(((value,key) for key,value in self.enum_object_types.items())) # Character set names self.enum_character_set_names = enum_dict('RDB$CHARACTER_SET_NAME') # Field types self.enum_field_types = enum_dict('RDB$FIELD_TYPE') # Field sub types self.enum_field_subtypes = enum_dict('RDB$FIELD_SUB_TYPE') # Function types self.enum_function_types = enum_dict('RDB$FUNCTION_TYPE') # Mechanism Types self.enum_mechanism_types = enum_dict('RDB$MECHANISM') # Parameter Mechanism Types self.enum_parameter_mechanism_types = enum_dict('RDB$PARAMETER_MECHANISM') # Procedure Types self.enum_procedure_types = enum_dict('RDB$PROCEDURE_TYPE') if not self.enum_procedure_types: self.enum_procedure_types = {0: 'LEGACY', 1: 'SELECTABLE', 2: 'EXECUTABLE'} # Relation Types self.enum_relation_types = enum_dict('RDB$RELATION_TYPE') # System Flag Types self.enum_system_flag_types = enum_dict('RDB$SYSTEM_FLAG') # Transaction State Types self.enum_transaction_state_types = enum_dict('RDB$TRANSACTION_STATE') # Trigger Types self.enum_trigger_types = enum_dict('RDB$TRIGGER_TYPE') # Firebird 3.0 # Parameter Types self.enum_parameter_types = enum_dict('RDB$PARAMETER_TYPE') # Index activity self.enum_index_activity_flags = enum_dict('RDB$INDEX_INACTIVE') # Index uniqueness self.enum_index_unique_flags = enum_dict('RDB$UNIQUE_FLAG') # Trigger activity self.enum_trigger_activity_flags = enum_dict('RDB$TRIGGER_INACTIVE') # Grant options self.enum_grant_options = enum_dict('RDB$GRANT_OPTION') # Page types self.enum_page_types = enum_dict('RDB$PAGE_TYPE') # Privacy self.enum_privacy_flags = enum_dict('RDB$PRIVATE_FLAG') # Legacy self.enum_legacy_flags = enum_dict('RDB$LEGACY_FLAG') # Determinism self.enum_determinism_flags = enum_dict('RDB$DETERMINISTIC_FLAG') def 
close(self): """Sever link to :class:`~fdb.Connection`. :raises ProgrammingError: If Schema object was set as internal (via :meth:`_set_as_internal`). """ if self.__internal: raise fdb.ProgrammingError("Call to 'close' not allowed for embedded Schema.") self._close() self.__clear() def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitSchema(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitSchema(self) #--- Basic Database manipulation routines def clear(self): "Drop all cached metadata objects." self.__clear() def reload(self,data=None): """Drop all or specified category of cached metadata objects, so they're reloaded from database on next reference. :param string data: `None` or name of metadata category. Recognized (case insensitive) names of metadata categories: - tables - views - domain - indices - dependencies - generators - sequences - triggers - procedures - constraints - collations - character sets - exceptions - roles - functions - files - shadows - privileges - users - packages :raises ProgrammingError: For undefined metadata category. .. note:: Also commits query transaction. """ self.__clear(data) if not self.closed: self._ic.transaction.commit() def ismultifile(self): "Returns True if database has multiple files." return len(self.files) > 0 def get_collation(self,name): """Get :class:`Collation` by name. :param string name: Collation name. :returns: :class:`Collation` with specified name or `None`. """ return self.__object_by_name(self._get_collations(),name) def get_character_set(self,name): """Get :class:`CharacterSet` by name. :param string name: Character set name. :returns: :class:`CharacterSet` with specified name or `None`. """ return self.__object_by_name(self._get_character_sets(),name) def get_exception(self,name): """Get :class:`DatabaseException` by name. :param string name: Exception name. :returns: :class:`DatabaseException` with specified name or `None`.
""" return self.__object_by_name(self._get_exceptions(),name) def get_generator(self,name): """Get :class:`Sequence` by name. :param string name: Sequence name. :returns: :class:`Sequence` with specified name or `None`. """ return self.__object_by_name(self._get_all_generators(),name) get_sequence = get_generator def get_index(self,name): """Get :class:`Index` by name. :param string name: Index name. :returns: :class:`Index` with specified name or `None`. """ return self.__object_by_name(self._get_all_indices(),name) def get_domain(self,name): """Get :class:`Domain` by name. :param string name: Domain name. :returns: :class:`Domain` with specified name or `None`. """ return self.__object_by_name(self._get_all_domains(),name) def get_table(self,name): """Get :class:`Table` by name. :param string name: Table name. :returns: :class:`Table` with specified name or `None`. """ return self.__object_by_name(self._get_all_tables(),name) def get_view(self,name): """Get :class:`View` by name. :param string name: View name. :returns: :class:`View` with specified name or `None`. """ return self.__object_by_name(self._get_all_views(),name) def get_trigger(self,name): """Get :class:`Trigger` by name. :param string name: Trigger name. :returns: :class:`Trigger` with specified name or `None`. """ return self.__object_by_name(self._get_all_triggers(),name) def get_procedure(self,name): """Get :class:`Procedure` by name. :param string name: Procedure name. :returns: :class:`Procedure` with specified name or `None`. """ return self.__object_by_name(self._get_all_procedures(),name) def get_constraint(self,name): """Get :class:`Constraint` by name. :param string name: Constraint name. :returns: :class:`Constraint` with specified name or `None`. """ return self.__object_by_name(self._get_constraints(),name) def get_role(self,name): """Get :class:`Role` by name. :param string name: Role name. :returns: :class:`Role` with specified name or `None`. 
""" return self.__object_by_name(self._get_roles(),name) def get_function(self,name): """Get :class:`Function` by name. :param string name: Function name. :returns: :class:`Function` with specified name or `None`. """ return self.__object_by_name(self._get_all_functions(),name) def get_collation_by_id(self,charset_id,collation_id): """Get :class:`Collation` by ID. :param integer charset_id: Character set ID. :param integer collation_id: Collation ID. :returns: :class:`Collation` with specified ID or `None`. """ for collation in self._get_collations(): if (collation._attributes['RDB$CHARACTER_SET_ID'] == charset_id) and (collation.id == collation_id): return collation else: return None def get_character_set_by_id(self,id): """Get :class:`CharacterSet` by ID. :param integer name: CharacterSet ID. :returns: :class:`CharacterSet` with specified ID or `None`. """ for charset in self._get_character_sets(): if charset.id == id: return charset else: return None def get_privileges_of(self,user, user_type=None): """Get list of all privileges granted to user/database object. :param user: User name or instance of class that represents possible user. Allowed classes are :class:`~fdb.services.User`, :class:`Table`, :class:`View`, :class:`Procedure`, :class:`Trigger` or :class:`Role`. :param int user_type: **Required if** `user` is provided as string name. Numeric code for user type, see :attr:`Schema.enum_object_types`. :returns: List of :class:`Privilege` objects. :raises ProgrammingError: For unknown `user_type` code. 
""" if isinstance(user,(fdb.StringType,fdb.UnicodeType)): if (user_type is None) or (user_type not in self.enum_object_types): raise fdb.ProgrammingError("Unknown user_type code.") else: uname = user utype = [user_type] elif isinstance(user,(Table,View,Procedure,Trigger,Role)): uname = user.name utype = user._type_code elif isinstance(user,fdb.services.User): uname = user.name utype = [8] return [p for p in self.privileges if ((p.user_name == uname) and (p.user_type in utype))] def get_package(self,name): """Get :class:`Package` by name. :param string name: Package name. :returns: :class:`Package` with specified name or `None`. """ return self.__object_by_name(self._get_packages(),name) class BaseSchemaItem(object): """Base class for all database schema objects. """ #: Weak reference to parent :class:`Schema` instance. schema = None def __init__(self,schema,attributes): self.schema = schema if type(schema) == weakref.ProxyType else weakref.proxy(schema) self._type_code = [] self._attributes = dict(attributes) self._actions = [] #--- protected def _strip_attribute(self,attr): if self._attributes.get(attr): self._attributes[attr] = self._attributes[attr].strip() def _check_params(self,params,param_names): p = set(params.keys()) n = set(param_names) if not p.issubset(n): raise fdb.ProgrammingError("Unsupported parameter(s) '%s'" % ','.join(p.difference(n))) def _needs_quoting(self,ident): if not ident: return False if self.schema.opt_always_quote: return True if len(ident) >= 1 and ident[0] not in string.ascii_uppercase: return True for char in ident: if char not in string.ascii_uppercase + string.digits + '$_': return True return isKeyword(ident) def _get_quoted_ident(self,ident): if self._needs_quoting(ident): return '"%s"' % ident else: return ident def _get_name(self): return None def _get_description(self): return self._attributes.get('RDB$DESCRIPTION') def _get_actions(self): return self._actions def _get_recreate_sql(self,**params): return 
'RE'+self._get_create_sql(**params) def _get_create_or_alter_sql(self,**params): return 'CREATE OR ALTER' + self._get_create_sql(**params)[6:] #--- properties name = LateBindingProperty(_get_name,None,None, "Database object name or None if object doesn't have a name.") description = LateBindingProperty(_get_description,None,None, "Database object description or None if object doesn't have a description.") actions = LateBindingProperty(_get_actions,None,None, "List of supported SQL operations on metadata object instance.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitMetadataItem(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitMetadataItem(self) def issystemobject(self): "Returns True if this database object is a system object." return True if self._attributes.get('RDB$SYSTEM_FLAG',False) else False def get_quoted_name(self): "Returns quoted (if necessary) name." if self._needs_quoting(self.name): return '"%s"' % self.name else: return self.name def get_dependents(self): "Returns list of all database objects that depend on this one." return [d for d in self.schema.dependencies if d.depended_on_name == self.name and d.depended_on_type in self._type_code] def get_dependencies(self): "Returns list of database objects that this object depends on." return [d for d in self.schema.dependencies if d.dependent_name == self.name and d.dependent_type in self._type_code] def get_sql_for(self,action,**params): """Returns SQL command for specified action on metadata object. Supported actions are defined by :attr:`actions` list. :raises ProgrammingError: For unsupported action or wrong parameters passed. """ _action = action.lower() if _action in self._actions: _call = getattr(self,'_get_%s_sql' % _action) return _call(**params) else: raise fdb.ProgrammingError("Unsupported action '%s'" % action) class Collation(BaseSchemaItem): """Represents collation.
Supported SQL actions: - User collation: create, drop - System collation: none """ def __init__(self,schema,attributes): super(Collation,self).__init__(schema,attributes) self._type_code = [17,] self._strip_attribute('RDB$COLLATION_NAME') self._strip_attribute('RDB$BASE_COLLATION_NAME') self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') if not self.issystemobject(): self._actions = ['create','drop'] #--- Protected def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP COLLATION %s' % self.get_quoted_name() def _get_create_sql(self,**params): self._check_params(params,[]) base_sql = """CREATE COLLATION %s FOR %s %s %s %s %s %s""" % (self.get_quoted_name(), self.character_set.get_quoted_name(), ("FROM EXTERNAL ('%s')" % self._attributes['RDB$BASE_COLLATION_NAME'] if self.isbasedonexternal() else "FROM %s" % self.base_collation.get_quoted_name()), 'PAD SPACE' if self.ispadded() else 'NO PAD', 'CASE INSENSITIVE' if self.iscaseinsensitive() else 'CASE SENSITIVE', 'ACCENT INSENSITIVE' if self.isaccentinsensitive() else 'ACCENT SENSITIVE', "'%s'" % self.specific_attributes if self.specific_attributes else '') return base_sql.strip() def _get_name(self): return self._attributes['RDB$COLLATION_NAME'] def _get_id(self): return self._attributes['RDB$COLLATION_ID'] def _get_character_set(self): """Return :class:`CharacterSet` object to which this collation belongs. 
""" return self.schema.get_character_set_by_id(self._attributes['RDB$CHARACTER_SET_ID']) def _get_base_collation(self): base_name = self._attributes['RDB$BASE_COLLATION_NAME'] return self.schema.get_collation(base_name) if base_name else None def _get_attributes(self): return self._attributes['RDB$COLLATION_ATTRIBUTES'] def _get_specific_attributes(self): return self._attributes['RDB$SPECIFIC_ATTRIBUTES'] def _get_function_name(self): return self._attributes['RDB$FUNCTION_NAME'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- Properties id = LateBindingProperty(_get_id,None,None,"Collation ID.") character_set = LateBindingProperty(_get_character_set,None,None, "Character set object associated with collation.") base_collation = LateBindingProperty(_get_base_collation,None,None, "Base Collation object that's extended by this one or None.") attributes = LateBindingProperty(_get_attributes,None,None, "Collation attributes.") specific_attributes = LateBindingProperty(_get_specific_attributes,None,None, "Collation specific attributes.") function_name = LateBindingProperty(_get_function_name,None,None, "Not currently used.") # FB 3.0 security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Creator user name.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitCollation(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitCollation(self) def ispadded(self): """Returns True if collation has PAD SPACE attribute.""" return bool(self.attributes & COLLATION_PAD_SPACE) def iscaseinsensitive(self): "Returns True if collation has CASE INSENSITIVE attribute." 
return bool(self.attributes & COLLATION_CASE_INSENSITIVE) def isaccentinsensitive(self): "Returns True if collation has ACCENT INSENSITIVE attribute." return bool(self.attributes & COLLATION_ACCENT_INSENSITIVE) def isbasedonexternal(self): "Returns True if collation is based on external collation definition." return (self._attributes['RDB$BASE_COLLATION_NAME'] and not self.base_collation) class CharacterSet(BaseSchemaItem): """Represents character set. Supported SQL actions: alter(collation=Collation instance or collation name) """ def __init__(self,schema,attributes): super(CharacterSet,self).__init__(schema,attributes) self._type_code = [11,] self._strip_attribute('RDB$CHARACTER_SET_NAME') self._strip_attribute('RDB$DEFAULT_COLLATE_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self._actions = ['alter'] #--- protected def _get_alter_sql(self,**params): self._check_params(params,['collation']) collation = params.get('collation') if collation: return ('ALTER CHARACTER SET %s SET DEFAULT COLLATION %s' % (self.name, collation.name if isinstance(collation,Collation) else collation)) else: raise fdb.ProgrammingError("Missing required parameter: 'collation'.") def _get_name(self): return self._attributes['RDB$CHARACTER_SET_NAME'] def _get_id(self): return self._attributes['RDB$CHARACTER_SET_ID'] def _get_bytes_per_character(self): return self._attributes['RDB$BYTES_PER_CHARACTER'] def _get_default_collate(self): return self.get_collation(self._attributes['RDB$DEFAULT_COLLATE_NAME']) def _get_collations(self): r = [c for c in self.schema.collations if c._attributes['RDB$CHARACTER_SET_ID'] == self.id] return r def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- properties id = LateBindingProperty(_get_id,None,None,"Character set ID.") bytes_per_character = LateBindingProperty(_get_bytes_per_character,None,None, "Size of 
characters in bytes.") default_collate = LateBindingProperty(_get_default_collate,None,None, "Collate object of default collate.") collations = LateBindingProperty(_get_collations,None,None, "List of Collations associated with character set.") # FB 3.0 security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Creator user name.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitCharacterSet(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitCharacterSet(self) def get_collation(self,name): """Return :class:`Collation` object with specified name that belongs to this character set. """ for col in self.collations: if col.name == name: return col return None def get_collation_by_id(self,id): """Return :class:`Collation` object with specified id that belongs to this character set. """ for col in self.collations: if col.id == id: return col return None class DatabaseException(BaseSchemaItem): """Represents database exception. 
Supported SQL actions: - User exception: create, recreate, alter(message=string), create_or_alter, drop - System exception: none """ def __init__(self,schema,attributes): super(DatabaseException,self).__init__(schema,attributes) self._type_code = [7,] self._strip_attribute('RDB$EXCEPTION_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') if not self.issystemobject(): self._actions = ['create','recreate','alter','create_or_alter','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) return "CREATE EXCEPTION %s '%s'" % (self.get_quoted_name(), escape_single_quotes(self.message)) def _get_alter_sql(self,**params): self._check_params(params,['message']) message = params.get('message') if message: return "ALTER EXCEPTION %s '%s'" % (self.get_quoted_name(), escape_single_quotes(message)) else: raise fdb.ProgrammingError("Missing required parameter: 'message'.") def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP EXCEPTION %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$EXCEPTION_NAME'] def _get_id(self): return self._attributes['RDB$EXCEPTION_NUMBER'] def _get_message(self): return self._attributes['RDB$MESSAGE'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- Properties id = LateBindingProperty(_get_id,None,None, "System-assigned unique exception number.") message = LateBindingProperty(_get_message,None,None,"Custom message text.") # FB 3.0 security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Creator user name.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitException(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern.
""" visitor.visitException(self) class Sequence(BaseSchemaItem): """Represents database generator/sequence. Supported SQL actions: - User sequence: create, alter(value=number), drop - System sequence: none """ def __init__(self,schema,attributes): super(Sequence,self).__init__(schema,attributes) self._type_code = [14,] self._strip_attribute('RDB$GENERATOR_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') if not self.issystemobject(): self._actions = ['create','alter','drop'] #--- protected def _get_create_sql(self,**params): self._check_params(params,[]) return 'CREATE SEQUENCE %s' % self.get_quoted_name() def _get_alter_sql(self,**params): self._check_params(params,['value']) value = params.get('value') if value is not None: return "ALTER SEQUENCE %s RESTART WITH %d" % (self.get_quoted_name(),value) else: raise fdb.ProgrammingError("Missing required parameter: 'value'.") def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP SEQUENCE %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$GENERATOR_NAME'] def _get_id(self): return self._attributes['RDB$GENERATOR_ID'] def _get_value(self): return self.schema._select_row("select GEN_ID(%s,0) from RDB$DATABASE" % self.name)['GEN_ID'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') def _get_inital_value(self): return self._attributes.get('RDB$INITIAL_VALUE') def _get_increment(self): return self._attributes.get('RDB$GENERATOR_INCREMENT') #--- Properties id = LateBindingProperty(_get_id,None,None,"Internal ID number of the sequence.") value = LateBindingProperty(_get_value,None,None,"Current sequence value.") # FB 3.0 security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Creator user name.") inital_value = 
LateBindingProperty(_get_inital_value,None,None,"Initial sequence value.") increment = LateBindingProperty(_get_increment,None,None,"Sequence increment.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitGenerator(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitGenerator(self) def isidentity(self): "Returns True for system generators created for IDENTITY columns." return self._attributes['RDB$SYSTEM_FLAG'] == 6 class TableColumn(BaseSchemaItem): """Represents table column. Supported SQL actions: - User column: alter(name=string,datatype=string_SQLTypeDef,position=number, expression=computed_by_expr,restart=None_or_init_value), drop - System column: none """ def __init__(self,schema,table,attributes): super(TableColumn,self).__init__(schema,attributes) self._type_code = [3,9] self.__table = weakref.proxy(table) self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FIELD_SOURCE') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$GENERATOR_NAME') if not self.issystemobject(): self._actions = ['alter','drop'] #--- Protected def _get_alter_sql(self,**params): self._check_params(params,['expression','datatype','name','position','restart']) new_expr = params.get('expression') new_type = params.get('datatype') new_name = params.get('name') new_position = params.get('position') if new_expr and not self.iscomputed(): raise fdb.ProgrammingError("Change from persistent column to computed" " is not allowed.") elif self.iscomputed() and (new_type and not new_expr): raise fdb.ProgrammingError("Change from computed column to persistent" " is not allowed.") sql = 'ALTER TABLE %s ALTER COLUMN %s' % (self.table.get_quoted_name(), self.get_quoted_name()) if new_name: return '%s TO %s' % (sql,self._get_quoted_ident(new_name)) elif new_position: return '%s POSITION %d' % (sql,new_position) elif new_type or new_expr: result =
sql if new_type: result += ' TYPE %s' % new_type if new_expr: result += ' COMPUTED BY %s' % new_expr return result elif 'restart' in params: restart = params.get('restart') sql += ' RESTART' if restart is not None: sql += ' WITH %d' % restart return sql else: raise fdb.ProgrammingError("Parameter required.") def _get_drop_sql(self,**params): self._check_params(params,[]) return 'ALTER TABLE %s DROP %s' % (self.table.get_quoted_name(), self.get_quoted_name()) def _get_name(self): return self._attributes['RDB$FIELD_NAME'] def _get_table(self): return self.__table def _get_domain(self): return self.schema.get_domain(self._attributes['RDB$FIELD_SOURCE']) def _get_position(self): return self._attributes['RDB$FIELD_POSITION'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_default(self): result = self._attributes.get('RDB$DEFAULT_SOURCE') if result: if result.upper().startswith('DEFAULT '): result = result[8:] return result def _get_collation(self): return self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'], self._attributes['RDB$COLLATION_ID']) def _get_datatype(self): return self.domain.datatype def _get_privileges(self): return [p for p in self.schema.privileges if (p.subject_name == self.table.name and p.field_name == self.name and p.subject_type in self.table._type_code)] def _get_generator(self): return self.schema.get_generator(self._attributes.get('RDB$GENERATOR_NAME')) def _get_identity_type(self): return self._attributes.get('RDB$IDENTITY_TYPE') #--- Properties table = LateBindingProperty(_get_table,None,None, "The Table object this column belongs to.") domain = LateBindingProperty(_get_domain,None,None, "Domain object this column is based on.") position = LateBindingProperty(_get_position,None,None, "Column's sequence number in row.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") default = LateBindingProperty(_get_default,None,None, "Default 
value for column or None.") collation = LateBindingProperty(_get_collation,None,None, "Collation object or None.") datatype = LateBindingProperty(_get_datatype,None,None, "Comlete SQL datatype definition.") privileges = LateBindingProperty(_get_privileges,None,None, "List of :class:`Privilege` objects granted to this object.") # FB 3.0 generator = LateBindingProperty(_get_generator,None,None,"Internal flags.") identity_type = LateBindingProperty(_get_identity_type,None,None,"Internal flags.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitTableColumn(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitTableColumn(self) def get_dependents(self): "Return list of all database objects that depend on this one." return [d for d in self.schema.dependencies if d.depended_on_name == self._attributes['RDB$RELATION_NAME'] and d.depended_on_type == 0 and d.field_name == self.name] def get_dependencies(self): "Return list of database objects that this object depend on." return [d for d in self.schema.dependencies if d.dependent_name == self._attributes['RDB$RELATION_NAME'] and d.dependent_type == 0 and d.field_name == self.name] def get_computedby(self): "Returns (string) extression for column computation or None." return self.domain.expression def iscomputed(self): "Returns True if column is computed." return bool(self.domain.expression) def isdomainbased(self): "Returns True if column is based on user domain." return not self.domain.issystemobject() def isnullable(self): "Returns True if column can accept NULL values." return not self._attributes['RDB$NULL_FLAG'] def iswritable(self): "Returns True if column is writable (i.e. it's not computed etc.)." return bool(self._attributes['RDB$UPDATE_FLAG']) def isidentity(self): "Returns True for identity type column." return self._attributes.get('RDB$IDENTITY_TYPE') is not None def has_default(self): "Returns True if column has default value." 
return bool(self._attributes.get('RDB$DEFAULT_SOURCE')) class Index(BaseSchemaItem): """Represents database index. Supported SQL actions: - User index: create, activate, deactivate, recompute, drop - System index: recompute """ def __init__(self,schema,attributes): super(Index,self).__init__(schema,attributes) self._type_code = [6,10] self.__segment_names = None self.__segment_statistics = None self._strip_attribute('RDB$INDEX_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FOREIGN_KEY') if self.issystemobject(): self._actions = ['recompute'] else: self._actions = ['create','activate','deactivate','recompute','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) return """CREATE %s%s INDEX %s ON %s %s""" % ('UNIQUE ' if self.isunique() else '', self.index_type, self.get_quoted_name(),self.table.name, 'COMPUTED BY %s' % self.expression if self.isexpression() else '(%s)' % ','.join(self.segment_names)) def _get_activate_sql(self,**params): self._check_params(params,[]) return 'ALTER INDEX %s ACTIVE' % self.get_quoted_name() def _get_deactivate_sql(self,**params): self._check_params(params,[]) return 'ALTER INDEX %s INACTIVE' % self.get_quoted_name() def _get_recompute_sql(self,**params): self._check_params(params,[]) return 'SET STATISTICS INDEX %s' % self.get_quoted_name() def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP INDEX %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$INDEX_NAME'] def _get_table(self): return self.schema.get_table(self._attributes['RDB$RELATION_NAME']) def _get_id(self): return self._attributes['RDB$INDEX_ID'] def _get_index_type(self): return (INDEX_TYPE_DESCENDING if self._attributes['RDB$INDEX_TYPE'] == 1 else INDEX_TYPE_ASCENDING) def _get_partner_index(self): pname = self._attributes['RDB$FOREIGN_KEY'] return self.schema.get_index(pname) if pname else None def _get_expression(self): return 
self._attributes['RDB$EXPRESSION_SOURCE'] def _get_statistics(self): return self._attributes['RDB$STATISTICS'] def _get_segments(self): return [self.table.get_column(colname) for colname in self.segment_names] def _get_segment_names(self): if self.__segment_names is None: if self._attributes['RDB$SEGMENT_COUNT'] > 0: self.__segment_names = [r['RDB$FIELD_NAME'].strip() for r in self.schema._select("""select rdb$field_name from rdb$index_segments where rdb$index_name = ? order by rdb$field_position""",(self.name,))] else: self.__segment_names = [] return self.__segment_names def _get_segment_statistics(self): if self.__segment_statistics is None: if self._attributes['RDB$SEGMENT_COUNT'] > 0: if self.schema._con.ods >= fdb.ODS_FB_21: self.__segment_statistics = [r['RDB$STATISTICS'] for r in self.schema._select("""select RDB$STATISTICS from rdb$index_segments where rdb$index_name = ? order by rdb$field_position""",(self.name,))] else: self.__segment_statistics = [None for x in range(self._attributes['RDB$SEGMENT_COUNT'])] else: self.__segment_statistics = [] return self.__segment_statistics def _get_constraint(self): const_name = self.schema._get_constraint_indices().get(self.name) if const_name: return self.schema.get_constraint(const_name) else: return None #--- Properties table = LateBindingProperty(_get_table,None,None, "The :class:`Table` instance the index applies to.") id = LateBindingProperty(_get_id,None,None, "Internal number ID of the index.") index_type = LateBindingProperty(_get_index_type,None,None, "ASCENDING or DESCENDING.") partner_index = LateBindingProperty(_get_partner_index,None,None, "Associated unique/primary key :class:`Index` instance, or None.") expression = LateBindingProperty(_get_expression,None,None, "Source of an expression or None.") statistics = LateBindingProperty(_get_statistics,None,None, "Latest selectivity of the index.") segment_names = LateBindingProperty(_get_segment_names,None,None, "List of index segment names.") 
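The `_get_create_sql` method above reduces to string assembly from a handful of index attributes: uniqueness, direction, and either segment names or a COMPUTED BY expression. A self-contained sketch of the same assembly rules (illustrative only, not the fdb code itself; the keyword arguments stand in for the `RDB$INDICES` attributes):

```python
# Sketch of the string assembly performed by Index._get_create_sql.
# Keyword arguments stand in for RDB$INDICES attributes; names here are
# illustrative, not the fdb internals.
def create_index_sql(name, table, segments=None, expression=None,
                     unique=False, descending=False):
    return 'CREATE %s%s INDEX %s ON %s %s' % (
        'UNIQUE ' if unique else '',
        'DESCENDING' if descending else 'ASCENDING',
        name, table,
        'COMPUTED BY %s' % expression if expression
        else '(%s)' % ','.join(segments))

sql = create_index_sql('IDX_NAME', 'EMPLOYEE',
                       segments=['LAST_NAME', 'FIRST_NAME'], unique=True)
# sql == 'CREATE UNIQUE ASCENDING INDEX IDX_NAME ON EMPLOYEE (LAST_NAME,FIRST_NAME)'
```

As in the original, an expression index emits COMPUTED BY instead of a segment list, and the direction keyword is always written out explicitly.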
segment_statistics = LateBindingProperty(_get_segment_statistics,None,None, "List of index segment statistics (for ODS 11.1 and higher).") segments = LateBindingProperty(_get_segments,None,None, "List of index segments as :class:`TableColumn` instances.") constraint = LateBindingProperty(_get_constraint,None,None, ":class:`Constraint` instance that uses this index or None.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitIndex(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitIndex(self) def issystemobject(self): "Returns True if this database object is system object." return bool(self._attributes['RDB$SYSTEM_FLAG'] or (self.isenforcer() and self.name.startswith('RDB$'))) def isexpression(self): "Returns True if index is expression index." return not self.segments def isunique(self): "Returns True if index is UNIQUE." return self._attributes['RDB$UNIQUE_FLAG'] == 1 def isinactive(self): "Returns True if index is INACTIVE." return self._attributes['RDB$INDEX_INACTIVE'] == 1 def isenforcer(self): "Returns True if index is used to enforce a constraint." return self.name in self.schema._get_constraint_indices() class ViewColumn(BaseSchemaItem): """Represents view column. 
Supported SQL actions: none """ def __init__(self,schema,view,attributes): super(ViewColumn,self).__init__(schema,attributes) self._type_code = [3,9] self.__view = weakref.proxy(view) self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$BASE_FIELD') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FIELD_SOURCE') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('BASE_RELATION') #--- Protected def _get_name(self): return self._attributes['RDB$FIELD_NAME'] def _get_base_field(self): bfield = self._attributes['RDB$BASE_FIELD'] if bfield: brel = self._attributes['BASE_RELATION'] relation = self.schema.get_table(brel) if relation: return relation.get_column(bfield) relation = self.schema.get_view(brel) if relation: return relation.get_column(bfield) relation = self.schema.get_procedure(brel) if relation: return relation.get_outparam(bfield) raise fdb.OperationalError("Can't locate base relation.") return None def _get_view(self): return self.__view def _get_domain(self): return self.schema.get_domain(self._attributes['RDB$FIELD_SOURCE']) def _get_position(self): return self._attributes['RDB$FIELD_POSITION'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_collation(self): return self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'], self._attributes['RDB$COLLATION_ID']) def _get_datatype(self): return self.domain.datatype def _get_privileges(self): return [p for p in self.schema.privileges if (p.subject_name == self.view.name and p.field_name == self.name and p.subject_type == 0)] # Views are logged as Tables in RDB$USER_PRIVILEGES #--- Properties base_field = LateBindingProperty(_get_base_field,None,None, "The source column from the base relation. 
Result could be either " ":class:`TableColumn`, :class:`ViewColumn` or :class:`ProcedureParameter` " "instance or None.") view = LateBindingProperty(_get_view,None,None, "View object this column belongs to.") domain = LateBindingProperty(_get_domain,None,None, "Domain object this column is based on.") position = LateBindingProperty(_get_position,None,None, "Column's sequence number in row.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") collation = LateBindingProperty(_get_collation,None,None, "Collation object or None.") datatype = LateBindingProperty(_get_datatype,None,None, "Complete SQL datatype definition.") privileges = LateBindingProperty(_get_privileges,None,None, "List of :class:`Privilege` objects granted to this object.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitViewColumn(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitViewColumn(self) def get_dependents(self): "Return list of all database objects that depend on this one." return [d for d in self.schema.dependencies if d.depended_on_name == self._attributes['RDB$RELATION_NAME'] and d.depended_on_type == 1 and d.field_name == self.name] def get_dependencies(self): "Return list of database objects that this object depends on." return [d for d in self.schema.dependencies if d.dependent_name == self._attributes['RDB$RELATION_NAME'] and d.dependent_type == 1 and d.field_name == self.name] def isnullable(self): "Returns True if column is NULLABLE." return not self._attributes['RDB$NULL_FLAG'] def iswritable(self): "Returns True if column is writable." return bool(self._attributes['RDB$UPDATE_FLAG']) class Domain(BaseSchemaItem): """Represents SQL Domain.
Supported SQL actions: - User domain: create, alter(name=string,default=string_definition_or_None, check=string_definition_or_None,datatype=string_SQLTypeDef), drop - System domain: none """ def __init__(self,schema,attributes): super(Domain,self).__init__(schema,attributes) self._type_code = [9] self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') if not self.issystemobject(): self._actions = ['create','alter','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) sql = 'CREATE DOMAIN %s AS %s' % (self.get_quoted_name(),self.datatype) if self.has_default(): sql += ' DEFAULT %s' % self.default if self.isvalidated(): sql += ' ' + self.validation if self._attributes['RDB$COLLATION_ID']: sql += 'COLLATE %s' % self._attributes['RDB$COLLATION_ID'] return sql def _get_alter_sql(self,**params): self._check_params(params,['name','default','check','datatype']) new_name = params.get('name') new_default = params.get('default','') new_constraint = params.get('check','') new_type = params.get('datatype') sql = 'ALTER DOMAIN %s' % self.get_quoted_name() if len(params) > 1: raise fdb.ProgrammingError("Only one parameter allowed.") if new_name: return '%s TO %s' % (sql,self._get_quoted_ident(new_name)) elif new_default != '': return ('%s SET DEFAULT %s' % (sql,new_default) if new_default else '%s DROP DEFAULT' % sql) elif new_constraint != '': return ('%s ADD CHECK (%s)' % (sql,new_constraint) if new_constraint else '%s DROP CONSTRAINT' % sql) elif new_type: return '%s TYPE %s' % (sql,new_type) else: raise fdb.ProgrammingError("Parameter required.") def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP DOMAIN %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$FIELD_NAME'] def _get_expression(self): return self._attributes['RDB$COMPUTED_SOURCE'] def _get_validation(self): return self._attributes['RDB$VALIDATION_SOURCE'] def 
_get_default(self): result = self._attributes.get('RDB$DEFAULT_SOURCE') if result: if result.upper().startswith('DEFAULT '): result = result[8:] return result def _get_length(self): return self._attributes['RDB$FIELD_LENGTH'] def _get_scale(self): return self._attributes['RDB$FIELD_SCALE'] def _get_field_type(self): return self._attributes['RDB$FIELD_TYPE'] def _get_sub_type(self): return self._attributes['RDB$FIELD_SUB_TYPE'] def _get_segment_length(self): return self._attributes['RDB$SEGMENT_LENGTH'] def _get_external_length(self): return self._attributes['RDB$EXTERNAL_LENGTH'] def _get_external_scale(self): return self._attributes['RDB$EXTERNAL_SCALE'] def _get_external_type(self): return self._attributes['RDB$EXTERNAL_TYPE'] def _get_dimensions(self): if self._attributes['RDB$DIMENSIONS']: return self.schema._get_field_dimensions(self) else: return [] def _get_character_length(self): return self._attributes['RDB$CHARACTER_LENGTH'] def _get_collation(self): return self.schema.get_collation_by_id(self._attributes['RDB$CHARACTER_SET_ID'], self._attributes['RDB$COLLATION_ID']) def _get_character_set(self): return self.schema.get_character_set_by_id(self._attributes['RDB$CHARACTER_SET_ID']) def _get_precision(self): return self._attributes['RDB$FIELD_PRECISION'] def _get_datatype(self): l = [] precision_known = False if self.field_type in (FBT_SMALLINT,FBT_INTEGER,FBT_BIGINT): if self.precision != None: if (self.sub_type > 0) and (self.sub_type < MAX_INTSUBTYPES): l.append('%s(%d, %d)' % \ (INTEGRAL_SUBTYPES[self.sub_type],self.precision,-self.scale)) precision_known = True if not precision_known: if (self.field_type == FBT_SMALLINT) and (self.scale < 0): l.append('NUMERIC(4, %d)' % -self.scale) elif (self.field_type == FBT_INTEGER) and (self.scale < 0): l.append('NUMERIC(9, %d)' % -self.scale) elif (self.field_type == FBT_DOUBLE_PRECISION) and (self.scale < 0): l.append('NUMERIC(15, %d)' % -self.scale) else: l.append(COLUMN_TYPES[self.field_type]) if 
self.field_type in (FBT_CHAR,FBT_VARCHAR): l.append('(%d)' % (self.length if self.character_length == None else self.character_length)) if self._attributes['RDB$DIMENSIONS'] != None: l.append('[%s]' % ', '.join('%d' % u if l == 1 else '%d:%d' % (l,u) for l,u in self.dimensions)) if self.field_type == FBT_BLOB: if self.sub_type >= 0 and self.sub_type <= MAX_BLOBSUBTYPES: l.append(' SUB_TYPE %s' % BLOB_SUBTYPES[self.sub_type]) else: l.append(' SUB_TYPE %d' % self.sub_type) l.append(' SEGMENT SIZE %d' % self.segment_length) if self.field_type in (FBT_CHAR,FBT_VARCHAR,FBT_BLOB): if self._attributes['RDB$CHARACTER_SET_ID'] is not None and \ (self.character_set.name != self.schema.default_character_set.name) or \ self._attributes['RDB$COLLATION_ID']: if (self._attributes['RDB$CHARACTER_SET_ID'] is not None): l.append(' CHARACTER SET %s' % self.character_set.name) if self._attributes['RDB$COLLATION_ID'] is not None: cname = self.collation.name if self.character_set._attributes['RDB$DEFAULT_COLLATE_NAME'] != cname: l.append(' COLLATE %s' % cname) return ''.join(l) def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- Properties expression = LateBindingProperty(_get_expression,None,None, "Expression that defines the COMPUTED BY column or None.") validation = LateBindingProperty(_get_validation,None,None, "CHECK constraint for the domain or None.") default = LateBindingProperty(_get_default,None,None, "Expression that defines the default value or None.") length = LateBindingProperty(_get_length,None,None, "Length of the column in bytes.") scale = LateBindingProperty(_get_scale,None,None, "Negative number representing the scale of NUMBER and DECIMAL column.") field_type = LateBindingProperty(_get_field_type,None,None, "Number code of the data type defined for the column.") sub_type = LateBindingProperty(_get_sub_type,None,None,"BLOB subtype.") segment_length = 
LateBindingProperty(_get_segment_length,None,None, "For BLOB columns, a suggested length for BLOB buffers.") external_length = LateBindingProperty(_get_external_length,None,None, "Length of field as it is in an external table. Always 0 for regular tables.") external_scale = LateBindingProperty(_get_external_scale,None,None, "Scale factor of an integer field as it is in an external table.") external_type = LateBindingProperty(_get_external_type,None,None, "Data type of the field as it is in an external table.") dimensions = LateBindingProperty(_get_dimensions,None,None, "List of dimension definition pairs if column is an array type. Always empty for non-array columns.") character_length = LateBindingProperty(_get_character_length,None,None, "Length of CHAR and VARCHAR column, in characters (not bytes).") collation = LateBindingProperty(_get_collation,None,None, "Collation object for a character column or None.") character_set = LateBindingProperty(_get_character_set,None,None, "CharacterSet object for a character or text BLOB column, or None.") precision = LateBindingProperty(_get_precision,None,None, "Indicates the number of digits of precision available to the data type of the column.") datatype = LateBindingProperty(_get_datatype,None,None, "Complete SQL datatype definition.") # FB 3.0 security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Creator user name.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitDomain(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitDomain(self) def issystemobject(self): "Returns True if this database object is system object." return (self._attributes['RDB$SYSTEM_FLAG'] == 1) or self.name.startswith('RDB$') def isnullable(self): "Returns True if domain is not defined with NOT NULL."
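The `_get_datatype` branches earlier in this class encode Firebird's mapping from the stored (field type, subtype, precision, scale) quadruple to an SQL type name. A reduced, self-contained sketch of the integral-type part follows; the codes 7/8/16 are the `RDB$FIELD_TYPE` values for SMALLINT/INTEGER/BIGINT, while everything else here is inlined for illustration:

```python
# Reduced sketch of the integral-type branch of Domain._get_datatype.
# 7/8/16 are RDB$FIELD_TYPE codes for SMALLINT/INTEGER/BIGINT; the subtype
# table mirrors INTEGRAL_SUBTYPES (1 = NUMERIC, 2 = DECIMAL).
FBT_SMALLINT, FBT_INTEGER, FBT_BIGINT = 7, 8, 16
INTEGRAL_SUBTYPES = (None, 'NUMERIC', 'DECIMAL')
PLAIN_NAMES = {FBT_SMALLINT: 'SMALLINT', FBT_INTEGER: 'INTEGER', FBT_BIGINT: 'BIGINT'}

def integral_datatype(field_type, sub_type, precision, scale):
    # Exact NUMERIC/DECIMAL rendering when precision is recorded.
    if precision is not None and 0 < sub_type < len(INTEGRAL_SUBTYPES):
        return '%s(%d, %d)' % (INTEGRAL_SUBTYPES[sub_type], precision, -scale)
    # When precision is not recorded, infer NUMERIC from a negative scale.
    if field_type == FBT_SMALLINT and scale < 0:
        return 'NUMERIC(4, %d)' % -scale
    if field_type == FBT_INTEGER and scale < 0:
        return 'NUMERIC(9, %d)' % -scale
    return PLAIN_NAMES[field_type]

rendered = integral_datatype(8, 1, 9, -2)
# rendered == 'NUMERIC(9, 2)'
```

Note the sign convention carried over from the original: `RDB$FIELD_SCALE` is stored negative, so the rendered scale is its negation.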
return not self._attributes['RDB$NULL_FLAG'] def iscomputed(self): "Returns True if domain is computed." return bool(self._attributes['RDB$COMPUTED_SOURCE']) def isvalidated(self): "Returns True if domain has validation constraint." return bool(self._attributes['RDB$VALIDATION_SOURCE']) def isarray(self): "Returns True if domain defines an array." return bool(self._attributes['RDB$DIMENSIONS']) def has_default(self): "Returns True if domain has default value." return bool(self._attributes['RDB$DEFAULT_SOURCE']) class Dependency(BaseSchemaItem): """Maps dependency between database objects. Supported SQL actions: none """ def __init__(self,schema,attributes): super(Dependency,self).__init__(schema,attributes) self._strip_attribute('RDB$DEPENDENT_NAME') self._strip_attribute('RDB$DEPENDED_ON_NAME') self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$PACKAGE_NAME') #--- Protected def _get_dependent_name(self): return self._attributes['RDB$DEPENDENT_NAME'] def _get_dependent_type(self): return self._attributes['RDB$DEPENDENT_TYPE'] def _get_field_name(self): return self._attributes['RDB$FIELD_NAME'] def _get_depended_on_name(self): return self._attributes['RDB$DEPENDED_ON_NAME'] def _get_depended_on_type(self): return self._attributes['RDB$DEPENDED_ON_TYPE'] def _get_dependent(self): if self.dependent_type == 0: # TABLE t = self.schema.get_table(self.dependent_name) elif self.dependent_type == 1: # VIEW return self.schema.get_view(self.dependent_name) elif self.dependent_type == 2: # TRIGGER return self.schema.get_trigger(self.dependent_name) elif self.dependent_type == 3: # COMPUTED FIELD (i.e. 
DOMAIN) return self.schema.get_domain(self.dependent_name) elif self.dependent_type == 4: ## ToDo: Implement handler for VALIDATION if necessary return None elif self.dependent_type == 5: #PROCEDURE return self.schema.get_procedure(self.dependent_name) elif self.dependent_type == 6: # EXPRESSION INDEX return self.schema.get_index(self.dependent_name) elif self.dependent_type == 7: # EXCEPTION return self.schema.get_exception(self.dependent_name) elif self.dependent_type == 8: ## ToDo: Implement handler for USER if necessary return None elif self.dependent_type == 9: # FIELD (i.e. DOMAIN) return self.schema.get_domain(self.dependent_name) elif self.dependent_type == 10: # INDEX return self.schema.get_index(self.dependent_name) elif self.dependent_type == 11: ## ToDo: Implement handler for DEPENDENT COUNT if necessary return None elif self.dependent_type == 12: ## ToDo: Implement handler for USER GROUP if necessary return None elif self.dependent_type == 13: # ROLE return self.schema.get_role(self.dependent_name) elif self.dependent_type == 14: # GENERATOR return self.schema.get_generator(self.dependent_name) elif self.dependent_type == 15: # UDF return self.schema.get_function(self.dependent_name) elif self.dependent_type == 16: ## ToDo: Implement handler for BLOB_FILTER return None elif self.dependent_type == 17: # Collation return self.schema.get_collation(self.dependent_name) elif self.dependent_type in [18,19]: # Package + package body return self.schema.get_package(self.dependent_name) return None def _get_depended_on(self): if self.depended_on_type == 0: # TABLE t = self.schema.get_table(self.depended_on_name) if self.field_name: return t.get_column(self.field_name) else: return t elif self.depended_on_type == 1: # VIEW return self.schema.get_view(self.depended_on_name) elif self.depended_on_type == 2: # TRIGGER return self.schema.get_trigger(self.depended_on_name) elif self.depended_on_type == 3: # COMPUTED FIELD (i.e. 
DOMAIN) return self.schema.get_domain(self.depended_on_name) elif self.depended_on_type == 4: ## ToDo: Implement handler for VALIDATION if necessary return None elif self.depended_on_type == 5: #PROCEDURE return self.schema.get_procedure(self.depended_on_name) elif self.depended_on_type == 6: # EXPRESSION INDEX return self.schema.get_index(self.depended_on_name) elif self.depended_on_type == 7: # EXCEPTION return self.schema.get_exception(self.depended_on_name) elif self.depended_on_type == 8: ## ToDo: Implement handler for USER if necessary return None elif self.depended_on_type == 9: # FIELD (i.e. DOMAIN) return self.schema.get_domain(self.depended_on_name) elif self.depended_on_type == 10: # INDEX return self.schema.get_index(self.depended_on_name) elif self.depended_on_type == 11: ## ToDo: Implement handler for DEPENDENT COUNT if necessary return None elif self.depended_on_type == 12: ## ToDo: Implement handler for USER GROUP if necessary return None elif self.depended_on_type == 13: # ROLE return self.schema.get_role(self.depended_on_name) elif self.depended_on_type == 14: # GENERATOR return self.schema.get_generator(self.depended_on_name) elif self.depended_on_type == 15: # UDF return self.schema.get_function(self.depended_on_name) elif self.depended_on_type == 16: ## ToDo: Implement handler for BLOB_FILTER return None return None def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) #--- Properties dependent = LateBindingProperty(_get_dependent,None,None, "Dependent database object.") dependent_name = LateBindingProperty(_get_dependent_name,None,None, "Dependent database object name.") dependent_type = LateBindingProperty(_get_dependent_type,None,None, "Dependent database object type.") field_name = LateBindingProperty(_get_field_name,None,None, "Name of one column in `depended on` object.") depended_on = LateBindingProperty(_get_depended_on,None,None, "Database object on which dependent depends.") depended_on_name 
= LateBindingProperty(_get_depended_on_name,None,None, "Name of db object on which dependent depends.") depended_on_type = LateBindingProperty(_get_depended_on_type,None,None, "Type of db object on which dependent depends.") # FB 3.0 package = LateBindingProperty(_get_package,None,None, ":class:`Package` instance if dependent depends on object in package or None.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitDependency(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitDependency(self) def issystemobject(self): "Returns True as dependency entries are considered as system objects." return True def get_dependents(self): "Returns empty list because Dependency object never has dependents." return [] def get_dependencies(self): "Returns empty list because Dependency object never has dependencies." return [] def ispackaged(self): "Returns True if dependency is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Constraint(BaseSchemaItem): """Represents table or column constraint. 
Supported SQL actions: - Constraint on user table except NOT NULL constraint: create, drop - Constraint on system table: none """ def __init__(self,schema,attributes): super(Constraint,self).__init__(schema,attributes) self._strip_attribute('RDB$CONSTRAINT_NAME') self._strip_attribute('RDB$CONSTRAINT_TYPE') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$DEFERRABLE') self._strip_attribute('RDB$INITIALLY_DEFERRED') self._strip_attribute('RDB$INDEX_NAME') self._strip_attribute('RDB$TRIGGER_NAME') self._strip_attribute('RDB$CONST_NAME_UQ') self._strip_attribute('RDB$MATCH_OPTION') self._strip_attribute('RDB$UPDATE_RULE') self._strip_attribute('RDB$DELETE_RULE') if not (self.issystemobject() or self.isnotnull()): self._actions = ['create','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) const_def = 'ALTER TABLE %s ADD ' % self.table.get_quoted_name() if not self.name.startswith('INTEG_'): const_def += 'CONSTRAINT %s\n ' % self.get_quoted_name() if self.ischeck(): const_def += self.triggers[0].source elif self.ispkey() or self.isunique(): const_def += 'PRIMARY KEY' if self.ispkey() else 'UNIQUE' i = self.index const_def += ' (%s)' % ','.join(i.segment_names) if not i.issystemobject(): const_def += '\n USING %s INDEX %s' % (i.index_type,i.get_quoted_name()) elif self.isfkey(): const_def += 'FOREIGN KEY (%s)\n ' % ','.join(self.index.segment_names) p = self.partner_constraint const_def += 'REFERENCES %s (%s)' % (p.table.get_quoted_name(), ','.join(p.index.segment_names)) if self.delete_rule != 'RESTRICT': const_def += '\n ON DELETE %s' % self.delete_rule if self.update_rule != 'RESTRICT': const_def += '\n ON UPDATE %s' % self.update_rule i = self.index if not i.issystemobject(): const_def += '\n USING %s INDEX %s' % (i.index_type,i.get_quoted_name()) else: raise fdb.OperationalError("Unrecognized constraint type '%s'" % self.constraint_type) return const_def def _get_drop_sql(self,**params): 
self._check_params(params,[]) return 'ALTER TABLE %s DROP CONSTRAINT %s' % (self.table.get_quoted_name(), self.get_quoted_name()) def _get_name(self): return self._attributes['RDB$CONSTRAINT_NAME'] def _get_constraint_type(self): return self._attributes['RDB$CONSTRAINT_TYPE'] def _get_table(self): return self.schema.get_table(self._attributes['RDB$RELATION_NAME']) def _get_index(self): return self.schema.get_index(self._attributes['RDB$INDEX_NAME']) def _get_trigger_names(self): if self.ischeck(): return self._attributes['RDB$TRIGGER_NAME'] else: return [] def _get_triggers(self): return [self.schema.get_trigger(tname) for tname in self.trigger_names] def _get_column_name(self): if self.isnotnull(): return self._attributes['RDB$TRIGGER_NAME'] else: return None def _get_partner_constraint(self): return self.schema.get_constraint(self._attributes['RDB$CONST_NAME_UQ']) def _get_match_option(self): return self._attributes['RDB$MATCH_OPTION'] def _get_update_rule(self): return self._attributes['RDB$UPDATE_RULE'] def _get_delete_rule(self): return self._attributes['RDB$DELETE_RULE'] #--- Properties constraint_type = LateBindingProperty(_get_constraint_type,None,None, "primary key/unique/foreign key/check/not null.") table = LateBindingProperty(_get_table,None,None, ":class:`Table` instance this constraint applies to.") index = LateBindingProperty(_get_index,None,None, ":class:`Index` instance that enforces the constraint.\n`None` if constraint is not primary key/unique or foreign key.") trigger_names = LateBindingProperty(_get_trigger_names,None,None, "For a CHECK constraint contains trigger names that enforce the constraint.") triggers = LateBindingProperty(_get_triggers,None,None, "For a CHECK constraint contains :class:`Trigger` instances that enforce the constraint.") column_name = LateBindingProperty(_get_column_name,None,None, "For a NOT NULL constraint, this is the name of the column to which the constraint applies.") partner_constraint = 
LateBindingProperty(_get_partner_constraint,None,None, "For a FOREIGN KEY constraint, this is the unique or primary key :class:`Constraint` referred.") match_option = LateBindingProperty(_get_match_option,None,None, "For a FOREIGN KEY constraint only. Current value is FULL in all cases.") update_rule = LateBindingProperty(_get_update_rule,None,None, "For a FOREIGN KEY constraint, this is the action applicable to when primary key is updated.") delete_rule = LateBindingProperty(_get_delete_rule,None,None, "For a FOREIGN KEY constraint, this is the action applicable to when primary key is deleted.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitConstraint(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitConstraint(self) def issystemobject(self): "Returns True if this database object is system object." return self.schema.get_table(self._attributes['RDB$RELATION_NAME']).issystemobject() def isnotnull(self): "Returns True if it's NOT NULL constraint." return self.constraint_type == 'NOT NULL' def ispkey(self): "Returns True if it's PRIMARY KEY constraint." return self.constraint_type == 'PRIMARY KEY' def isfkey(self): "Returns True if it's FOREIGN KEY constraint." return self.constraint_type == 'FOREIGN KEY' def isunique(self): "Returns True if it's UNIQUE constraint." return self.constraint_type == 'UNIQUE' def ischeck(self): "Returns True if it's CHECK constraint." return self.constraint_type == 'CHECK' def isdeferrable(self): "Returns True if it's DEFERRABLE constraint." return self._attributes['RDB$DEFERRABLE'] != 'NO' def isdeferred(self): "Returns True if it's INITIALLY DEFERRED constraint." return self._attributes['RDB$INITIALLY_DEFERRED'] != 'NO' class Table(BaseSchemaItem): """Represents Table in database. 
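The FOREIGN KEY branch of `Constraint._get_create_sql` above emits ON DELETE / ON UPDATE clauses only when the rule differs from the RESTRICT default. A self-contained sketch of that assembly (simplified: no `USING ... INDEX` clause, argument names are illustrative):

```python
# Sketch of the FOREIGN KEY branch of Constraint._get_create_sql: the
# referential actions are emitted only when they differ from RESTRICT.
def fkey_sql(table, name, columns, ref_table, ref_columns,
             delete_rule='RESTRICT', update_rule='RESTRICT'):
    sql = 'ALTER TABLE %s ADD CONSTRAINT %s\n  FOREIGN KEY (%s)\n  REFERENCES %s (%s)' % (
        table, name, ','.join(columns), ref_table, ','.join(ref_columns))
    if delete_rule != 'RESTRICT':
        sql += '\n  ON DELETE %s' % delete_rule
    if update_rule != 'RESTRICT':
        sql += '\n  ON UPDATE %s' % update_rule
    return sql

ddl = fkey_sql('JOB', 'FK_JOB_COUNTRY', ['JOB_COUNTRY'],
               'COUNTRY', ['COUNTRY'], delete_rule='CASCADE')
```

The `startswith('INTEG_')` test in the original serves a related purpose: system-generated constraint names are omitted from the DDL so the server can regenerate them.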
Supported SQL actions: - User table: create, recreate, drop - System table: none """ def __init__(self,schema,attributes): super(Table,self).__init__(schema,attributes) self._type_code = [0,] self.__columns = None self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$DEFAULT_CLASS') if not self.issystemobject(): self._actions = ['create','recreate','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) tabdef = 'CREATE %sTABLE %s' % ('GLOBAL TEMPORARY ' if self.isgtt() else '', self.get_quoted_name()) if self.isexternal(): tabdef += " EXTERNAL FILE '%s'\n" % self.external_file tabdef += '\n(' partdefs = [] for col in self.columns: coldef = '\n %s ' % col.get_quoted_name() collate = '' if col.isdomainbased(): coldef += '%s' % col.domain.get_quoted_name() elif col.iscomputed(): coldef += 'COMPUTED BY %s' % col.get_computedby() else: datatype = col.datatype if datatype.rfind(' COLLATE ') > 0: datatype, collate = datatype.split(' COLLATE ') coldef += '%s' % datatype if col.isidentity(): coldef += ' GENERATED BY DEFAULT AS IDENTITY' if col.generator.inital_value != 0: coldef += ' (START WITH %d)' % col.generator.inital_value else: if col.has_default(): coldef += ' DEFAULT %s' % col.default if not col.isnullable(): coldef += ' NOT NULL' if col._attributes['RDB$COLLATION_ID'] is not None: cname = col.collation.name if col.domain.character_set._attributes['RDB$DEFAULT_COLLATE_NAME'] != cname: collate = cname if collate: coldef += ' COLLATE %s' % collate partdefs.append(coldef) if self.has_pkey(): pk = self.primary_key pkdef = '\n ' if not pk.name.startswith('INTEG_'): pkdef += 'CONSTRAINT %s\n ' % pk.get_quoted_name() i = pk.index pkdef += 'PRIMARY KEY (%s)' % ','.join(i.segment_names) if not i.issystemobject(): pkdef += '\n USING %s INDEX %s' % (i.index_type,i.get_quoted_name()) partdefs.append(pkdef) for uq in self.constraints: if 
uq.isunique(): uqdef = '\n ' if not uq.name.startswith('INTEG_'): uqdef += 'CONSTRAINT %s\n ' % uq.get_quoted_name() i = uq.index uqdef += 'UNIQUE (%s)' % ','.join(i.segment_names) if not i.issystemobject(): uqdef += '\n USING %s INDEX %s' % (i.index_type,i.get_quoted_name()) partdefs.append(uqdef) tabdef += ','.join(partdefs) tabdef += '\n)' return tabdef def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP TABLE %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$RELATION_NAME'] def _get_id(self): return self._attributes['RDB$RELATION_ID'] def _get_dbkey_length(self): return self._attributes['RDB$DBKEY_LENGTH'] def _get_format(self): return self._attributes['RDB$FORMAT'] def _get_table_type(self): return self.schema.enum_relation_types.get(self._attributes.get('RDB$RELATION_TYPE'), 'PERSISTENT') def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_external_file(self): return self._attributes['RDB$EXTERNAL_FILE'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_default_class(self): return self._attributes['RDB$DEFAULT_CLASS'] def _get_flags(self): return self._attributes['RDB$FLAGS'] def _get_indices(self): return [i for i in self.schema._get_all_indices() if i._attributes['RDB$RELATION_NAME'] == self.name] def _get_triggers(self): return [t for t in self.schema.triggers if t._attributes['RDB$RELATION_NAME'] == self.name] def _get_constraints(self): return [c for c in self.schema.constraints if c._attributes['RDB$RELATION_NAME'] == self.name] def _get_columns(self): if self.__columns is None: cols = ['RDB$FIELD_NAME','RDB$RELATION_NAME','RDB$FIELD_SOURCE', 'RDB$FIELD_POSITION','RDB$UPDATE_FLAG','RDB$FIELD_ID', 'RDB$DESCRIPTION','RDB$SECURITY_CLASS','RDB$SYSTEM_FLAG', 'RDB$NULL_FLAG','RDB$DEFAULT_SOURCE','RDB$COLLATION_ID'] if self.schema._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$GENERATOR_NAME', 'RDB$IDENTITY_TYPE']) self.__columns = 
[TableColumn(self.schema,self,row) for row in self.schema._select("""select %s from RDB$RELATION_FIELDS where RDB$RELATION_NAME = ? order by RDB$FIELD_POSITION""" % ','.join(cols),(self.name,))] return self.__columns def _get_primary_key(self): for const in self.constraints: if const.ispkey(): return const return None def _get_foreign_keys(self): return [c for c in self.constraints if c.isfkey()] def _get_privileges(self): return [p for p in self.schema.privileges if ((p.subject_name == self.name) and (p.subject_type in self._type_code))] #--- Properties id = LateBindingProperty(_get_id,None,None,"Internal number ID for the table.") dbkey_length = LateBindingProperty(_get_dbkey_length,None,None, "Length of the RDB$DB_KEY column in bytes.") format = LateBindingProperty(_get_format,None,None, "Internal format ID for the table.") table_type = LateBindingProperty(_get_table_type,None,None,"Table type.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class that defines access limits to the table.") external_file = LateBindingProperty(_get_external_file,None,None, "Full path to the external data file, if any.") owner_name = LateBindingProperty(_get_owner_name,None,None, "User name of table's creator.") default_class = LateBindingProperty(_get_default_class,None,None, "Default security class.") flags = LateBindingProperty(_get_flags,None,None,"Internal flags.") primary_key = LateBindingProperty(_get_primary_key,None,None, "PRIMARY KEY :class:`Constraint` for this table or None.") foreign_keys = LateBindingProperty(_get_foreign_keys,None,None, "List of FOREIGN KEY :class:`Constraint` instances for this table.") columns = LateBindingProperty(_get_columns,None,None, "Returns list of columns defined for table.\nItems are :class:`TableColumn` objects.") constraints = LateBindingProperty(_get_constraints,None,None, "Returns list of constraints defined for table.\nItems are :class:`Constraint` objects.") indices =
LateBindingProperty(_get_indices,None,None, "Returns list of indices defined for table.\nItems are :class:`Index` objects.") triggers = LateBindingProperty(_get_triggers,None,None, "Returns list of triggers defined for table.\nItems are :class:`Trigger` objects.") privileges = LateBindingProperty(_get_privileges,None,None, "List of :class:`Privilege` objects granted to this object.") # FB 3.0 #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitTable(self)` on parameter object. :param visitor: Visitor object of Vistior Pattern. """ visitor.visitTable(self) def get_column(self,name): "Return :class:`TableColumn` object with specified name." for col in self.columns: if col.name == name: return col return None def isgtt(self): "Returns True if table is GLOBAL TEMPORARY table." return self.table_type.startswith('GLOBAL_TEMPORARY') def ispersistent(self): "Returns True if table is persistent one." return self.table_type in ['PERSISTENT','EXTERNAL'] def isexternal(self): "Returns True if table is external table." return bool(self.external_file) def has_pkey(self): "Returns True if table has PRIMARY KEY defined." for const in self.constraints: if const.ispkey(): return True return False def has_fkey(self): "Returns True if table has any FOREIGN KEY constraint." for const in self.constraints: if const.isfkey(): return True return False class View(BaseSchemaItem): """Represents database View. 
Supported SQL actions: - User views: create, recreate, alter(columns=string_or_list,query=string,check=bool), create_or_alter, drop - System views: none """ def __init__(self,schema,attributes): super(View,self).__init__(schema,attributes) self._type_code = [1,] self.__columns = None self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$VIEW_SOURCE') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$DEFAULT_CLASS') if not self.issystemobject(): self._actions = ['create','recreate','alter','create_or_alter','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) return "CREATE VIEW %s (%s)\n AS\n %s" % (self.get_quoted_name(), ','.join([col.get_quoted_name() for col in self.columns]),self.sql) def _get_alter_sql(self,**params): self._check_params(params,['columns','query','check']) columns = params.get('columns') if isinstance(columns,(list,tuple)): columns = ','.join(columns) query = params.get('query') check = params.get('check',False) if query: return "ALTER VIEW %s %s\n AS\n %s" % (self.get_quoted_name(), '(%s)' % columns if columns else '', '%s\n WITH CHECK OPTION' % query if check else query) else: raise fdb.ProgrammingError("Missing required parameter: 'query'.") def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP VIEW %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$RELATION_NAME'] def _get_sql(self): return self._attributes['RDB$VIEW_SOURCE'] def _get_id(self): return self._attributes['RDB$RELATION_ID'] def _get_dbkey_length(self): return self._attributes['RDB$DBKEY_LENGTH'] def _get_format(self): return self._attributes['RDB$FORMAT'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_default_class(self): return self._attributes['RDB$DEFAULT_CLASS'] def _get_flags(self): return 
self._attributes['RDB$FLAGS'] def _get_triggers(self): return [t for t in self.schema.triggers if t._attributes['RDB$RELATION_NAME'] == self.name] def _get_columns(self): if self.__columns is None: self.__columns = [ViewColumn(self.schema,self,row) for row in self.schema._select("""select r.RDB$FIELD_NAME, r.RDB$RELATION_NAME, r.RDB$FIELD_SOURCE, r.RDB$FIELD_POSITION, r.RDB$UPDATE_FLAG, r.RDB$FIELD_ID, r.RDB$DESCRIPTION, r.RDB$SYSTEM_FLAG, r.RDB$SECURITY_CLASS, r.RDB$NULL_FLAG, r.RDB$DEFAULT_SOURCE, r.RDB$COLLATION_ID, r.RDB$BASE_FIELD, v.RDB$RELATION_NAME as BASE_RELATION from RDB$RELATION_FIELDS r left join RDB$VIEW_RELATIONS v on r.RDB$VIEW_CONTEXT = v.RDB$VIEW_CONTEXT and v.rdb$view_name = ? where r.RDB$RELATION_NAME = ? order by RDB$FIELD_POSITION""",(self.name,self.name))] return self.__columns def _get_privileges(self): return [p for p in self.schema.privileges if ((p.subject_name == self.name) and (p.subject_type == 0))] # Views are logged as Tables in RDB$USER_PRIVILEGES #--- Properties id = LateBindingProperty(_get_id,None,None,"Internal number ID for the view.") sql= LateBindingProperty(_get_sql,None,None,"The query specification.") dbkey_length = LateBindingProperty(_get_dbkey_length,None,None, "Length of the RDB$DB_KEY column in bytes.") format = LateBindingProperty(_get_format,None,None,"Internal format ID for the view.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class that define access limits to the view.") owner_name = LateBindingProperty(_get_owner_name,None,None,"User name of view's creator.") default_class = LateBindingProperty(_get_default_class,None,None,"Default security class.") flags = LateBindingProperty(_get_flags,None,None,"Internal flags.") columns = LateBindingProperty(_get_columns,None,None, "Returns list of columns defined for view.\nItems are :class:`ViewColumn` objects.") triggers = LateBindingProperty(_get_triggers,None,None, "Returns list of triggers defined for view.\nItems are 
:class:`Trigger` objects.") privileges = LateBindingProperty(_get_privileges,None,None, "List of :class:`Privilege` objects granted to this object.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitView(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitView(self) def get_column(self,name): "Return :class:`ViewColumn` object with specified name." for col in self.columns: if col.name == name: return col return None def get_trigger(self,name): "Return :class:`Trigger` object with specified name." for t in self.triggers: if t.name == name: return t return None def has_checkoption(self): "Returns True if View has WITH CHECK OPTION defined." return "WITH CHECK OPTION" in self.sql.upper() class Trigger(BaseSchemaItem): """Represents trigger. Supported SQL actions: - User trigger: create, recreate, create_or_alter, drop, alter(fire_on=string,active=bool,sequence=int,declare=string_or_list, code=string_or_list) - System trigger: none """ def __init__(self,schema,attributes): super(Trigger,self).__init__(schema,attributes) self._type_code = [2,] self._strip_attribute('RDB$TRIGGER_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$ENGINE_NAME') self._strip_attribute('RDB$ENTRYPOINT') if not self.issystemobject(): self._actions = ['create','recreate','alter','create_or_alter','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) result = 'CREATE TRIGGER %s' % self.get_quoted_name() if self._attributes['RDB$RELATION_NAME']: result += ' FOR %s' % self.relation.get_quoted_name() result += ' %s\n%s POSITION %d\n%s' % ('ACTIVE' if self.isactive() else 'INACTIVE', self.get_type_as_string(), self.sequence,self.source) return result def _get_alter_sql(self,**params): self._check_params(params,['fire_on','active','sequence','declare','code']) action = params.get('fire_on') active = params.get('active') sequence = params.get('sequence')
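The relation-trigger arithmetic used by `Trigger._get_action_time` and `_get_action_type` can be exercised on its own. A minimal standalone sketch follows; the sample type codes (1 = BEFORE INSERT, 4 = AFTER UPDATE, 17 = BEFORE INSERT OR UPDATE) are assumed from Firebird's documented RDB$TRIGGER_TYPE encoding, and the helper name is hypothetical:

```python
# Standalone sketch of the RDB$TRIGGER_TYPE decoding for relation triggers,
# mirroring Trigger._get_action_time()/_get_action_type(). Operation codes
# follow TRIGGER_SUFFIX_TYPES (1=INSERT, 2=UPDATE, 3=DELETE).
OPS = {1: 'INSERT', 2: 'UPDATE', 3: 'DELETE'}

def decode_trigger_type(trigger_type):
    """Return (action_time, [operations]) for a relation trigger type code."""
    # Same parity test as _get_action_time(): 0 -> BEFORE, 1 -> AFTER.
    action = 'BEFORE' if (trigger_type + 1) & 1 == 0 else 'AFTER'
    ops = []
    # Same slot extraction as _get_action_type(slot): two bits per slot.
    for slot in (1, 2, 3):
        code = ((trigger_type + 1) >> (slot * 2 - 1)) & 3
        if code:
            ops.append(OPS[code])
    return action, ops

print(decode_trigger_type(1))   # BEFORE INSERT
print(decode_trigger_type(4))   # AFTER UPDATE
print(decode_trigger_type(17))  # BEFORE INSERT OR UPDATE
```

This is the same bit layout `get_type_as_string` relies on when it joins the prefix and suffix type names with `OR`.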
declare = params.get('declare') code = params.get('code') # header = '' if active is not None: header += ' ACTIVE' if active else ' INACTIVE' if action is not None: dbaction = action.upper().startswith('ON ') if ((dbaction and not self.isdbtrigger()) or (not dbaction and self.isdbtrigger())): raise fdb.ProgrammingError("Trigger type change is not allowed.") header += '\n %s' % action if sequence is not None: header += '\n POSITION %d' % sequence # if code is not None: if declare is None: d = '' elif isinstance(declare,(list,tuple)): d = '' for x in declare: d += ' %s\n' % x else: d = '%s\n' % declare if isinstance(code,(list,tuple)): c = '' for x in code: c += ' %s\n' % x else: c = '%s\n' % code body = '\nAS\n%sBEGIN\n%sEND' % (d,c) else: body = '' # if not (header or body): raise fdb.ProgrammingError("Header or body definition required.") return 'ALTER TRIGGER %s%s%s' % (self.get_quoted_name(),header,body) def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP TRIGGER %s' % self.get_quoted_name() def _get_action_time(self): if self.isddltrigger(): return (self.trigger_type) & 1 else: return (self.trigger_type + 1) & 1 def _get_action_type(self,slot): if self.isddltrigger(): return (self.trigger_type & ~TRIGGER_TYPE_DDL) >> 1 else: return ((self.trigger_type + 1) >> (slot * 2 - 1)) & 3 def _get_name(self): return self._attributes['RDB$TRIGGER_NAME'] def _get_relation(self): relname = self._attributes['RDB$RELATION_NAME'] rel = self.schema.get_table(relname) if not rel: rel = self.schema.get_view(relname) return rel def _get_sequence(self): return self._attributes['RDB$TRIGGER_SEQUENCE'] def _get_trigger_type(self): return self._attributes['RDB$TRIGGER_TYPE'] def _get_source(self): return self._attributes['RDB$TRIGGER_SOURCE'] def _get_flags(self): return self._attributes['RDB$FLAGS'] def _get_valid_blr(self): result = self._attributes.get('RDB$VALID_BLR') return bool(result) if result is not None else None def _get_engine_name(self): return 
self._attributes.get('RDB$ENGINE_NAME') def _get_entrypoint(self): return self._attributes.get('RDB$ENTRYPOINT') def _istype(self,type_code): atype = self._get_action_type(1) if atype == type_code: return True atype = self._get_action_type(2) if atype and atype == type_code: return True atype = self._get_action_type(3) if atype and atype == type_code: return True return False #--- Properties relation = LateBindingProperty(_get_relation,None,None, ":class:`Table` or :class:`View` that the trigger is for, or None for database triggers.") sequence = LateBindingProperty(_get_sequence,None,None, "Sequence (position) of trigger. Zero usually means no sequence defined.") trigger_type = LateBindingProperty(_get_trigger_type,None,None, "Numeric code for trigger type that defines which event is covered by the trigger and when it fires.") source = LateBindingProperty(_get_source,None,None,"PSQL source code.") flags = LateBindingProperty(_get_flags,None,None,"Internal flags.") valid_blr = LateBindingProperty(_get_valid_blr,None,None, "Trigger BLR invalidation flag. Could be True/False or None.") # FB 3 engine_name = LateBindingProperty(_get_engine_name,None,None,"Engine name.") entrypoint = LateBindingProperty(_get_entrypoint,None,None,"Entrypoint.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitTrigger(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitTrigger(self) def isactive(self): "Returns True if this trigger is active." return self._attributes['RDB$TRIGGER_INACTIVE'] == 0 def isbefore(self): "Returns True if this trigger is set for BEFORE action." return self._get_action_time() == 0 def isafter(self): "Returns True if this trigger is set for AFTER action." return self._get_action_time() == 1 def isdbtrigger(self): "Returns True if this trigger is a database trigger." return (self.trigger_type & TRIGGER_TYPE_MASK) == TRIGGER_TYPE_DB def isddltrigger(self): "Returns True if this trigger is a DDL trigger." return (self.trigger_type & TRIGGER_TYPE_MASK) == TRIGGER_TYPE_DDL def isinsert(self): "Returns True if this trigger is set for INSERT operation." return self._istype(1) def isupdate(self): "Returns True if this trigger is set for UPDATE operation." return self._istype(2) def isdelete(self): "Returns True if this trigger is set for DELETE operation." return self._istype(3) def get_type_as_string(self): "Return string with action and operation specification." l = [] if self.isddltrigger(): l.append(TRIGGER_PREFIX_TYPES[self._get_action_time()]) code = self._get_action_type(1) l.append('ANY DDL STATEMENT' if code == DDL_TRIGGER_ANY else TRIGGER_DDL_TYPES[code]) elif self.isdbtrigger(): l.append('ON '+TRIGGER_DB_TYPES[self.trigger_type & ~TRIGGER_TYPE_DB]) else: l.append(TRIGGER_PREFIX_TYPES[self._get_action_time()]) l.append(TRIGGER_SUFFIX_TYPES[self._get_action_type(1)]) suffix = self._get_action_type(2) if suffix: l.append('OR') l.append(TRIGGER_SUFFIX_TYPES[suffix]) suffix = self._get_action_type(3) if suffix: l.append('OR') l.append(TRIGGER_SUFFIX_TYPES[suffix]) return ' '.join(l) class ProcedureParameter(BaseSchemaItem): """Represents procedure parameter. Supported SQL actions: none.
""" def __init__(self,schema,proc,attributes): super(ProcedureParameter,self).__init__(schema,attributes) self.__proc = proc self._strip_attribute('RDB$PARAMETER_NAME') self._strip_attribute('RDB$PROCEDURE_NAME') self._strip_attribute('RDB$FIELD_SOURCE') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$PACKAGE_NAME') #--- Protected def _get_name(self): return self._attributes['RDB$PARAMETER_NAME'] def _get_procedure(self): return self.schema.get_procedure(self._attributes['RDB$PROCEDURE_NAME']) def _get_sequence(self): return self._attributes['RDB$PARAMETER_NUMBER'] def _get_domain(self): return self.schema.get_domain(self._attributes['RDB$FIELD_SOURCE']) def _get_datatype(self): return self.domain.datatype def _get_type_from(self): m = self.mechanism if m is None: return PROCPAR_DATATYPE elif m == 0: return PROCPAR_DATATYPE if self.domain.issystemobject() else PROCPAR_DOMAIN elif m == 1: if self._attributes.get('RDB$RELATION_NAME') is None: return PROCPAR_TYPE_OF_DOMAIN else: return PROCPAR_TYPE_OF_COLUMN else: raise fdb.InternalError("Unknown parameter mechanism code: %d" % m) def _get_default(self): result = self._attributes.get('RDB$DEFAULT_SOURCE') if result: if result.upper().startswith('= '): result = result[2:] elif result.upper().startswith('DEFAULT '): result = result[8:] return result def _get_collation(self): cid = self._attributes.get('RDB$COLLATION_ID') return (None if cid is None else self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'],cid)) def _get_mechanism(self): return self._attributes.get('RDB$PARAMETER_MECHANISM') def _get_column(self): rname = self._attributes.get('RDB$RELATION_NAME') return (None if rname is None else self.schema.get_table(rname).get_column(self._attributes['RDB$FIELD_NAME'])) def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) #--- Properties procedure = 
LateBindingProperty(_get_procedure,None,None,":class:`Procedure` this parameter belongs to.") sequence = LateBindingProperty(_get_sequence,None,None,"Sequence (position) of parameter.") domain = LateBindingProperty(_get_domain,None,None,":class:`Domain` for this parameter.") datatype = LateBindingProperty(_get_datatype,None,None,"Complete SQL datatype definition.") type_from = LateBindingProperty(_get_type_from,None,None, "Numeric code. See :attr:`Schema.enum_param_type_from`.") # FB 2.1 default = LateBindingProperty(_get_default,None,None,"Default value.") collation = LateBindingProperty(_get_collation,None,None, ":class:`Collation` for this parameter.") mechanism = LateBindingProperty(_get_mechanism,None,None,"Parameter mechanism code.") # FB 2.5 column = LateBindingProperty(_get_column,None,None,":class:`TableColumn` for this parameter.") # FB 3.0 package = LateBindingProperty(_get_package,None,None, "Package this procedure belongs to. \nObject is :class:`Package` instance or None.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitProcedureParameter(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitProcedureParameter(self) def get_sql_definition(self): "Returns SQL definition for parameter." typedef = self.datatype if self.type_from == PROCPAR_DOMAIN: typedef = self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_DOMAIN: typedef = 'TYPE OF %s' % self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_COLUMN: typedef = 'TYPE OF COLUMN %s.%s' % (self.column.table.get_quoted_name(), self.column.get_quoted_name()) result = '%s %s%s' % (self.get_quoted_name(),typedef, '' if self.isnullable() else ' NOT NULL') c = self.collation if c is not None: result += ' COLLATE %s' % c.get_quoted_name() if self.isinput() and self.has_default(): result += ' = %s' % self.default return result def isinput(self): "Returns True if parameter is INPUT parameter."
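The prefix handling in `ProcedureParameter._get_default` (dropping the `= ` or `DEFAULT ` prefix that Firebird stores in RDB$DEFAULT_SOURCE) is easy to verify in isolation. A minimal sketch with a hypothetical helper name:

```python
def strip_default_source(source):
    # Mirror ProcedureParameter._get_default: Firebird stores the default
    # clause verbatim ("= 0", "DEFAULT 'N'"); keep only the bare expression.
    # The prefix test is case-insensitive, but slicing keeps the original text.
    if source:
        if source.upper().startswith('= '):
            source = source[2:]
        elif source.upper().startswith('DEFAULT '):
            source = source[8:]
    return source

print(strip_default_source('= 0'))          # '0'
print(strip_default_source("DEFAULT 'N'"))  # "'N'"
print(strip_default_source(None))           # None
```

The same normalization appears again in `FunctionArgument._get_default` further below, so a shared helper like this would be one way to avoid the duplication.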
return self._attributes['RDB$PARAMETER_TYPE'] == 0 def isnullable(self): "Returns True if parameter allows NULL." return not bool(self._attributes.get('RDB$NULL_FLAG')) def has_default(self): "Returns True if parameter has default value." return bool(self._attributes.get('RDB$DEFAULT_SOURCE')) def ispackaged(self): "Returns True if procedure parameter is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Procedure(BaseSchemaItem): """Represents stored procedure. Supported SQL actions: - User procedure: create(no_code=bool), recreate(no_code=bool), create_or_alter(no_code=bool), drop, alter(input=string_or_list,output=string_or_list,declare=string_or_list, code=string_or_list) - System procedure: none """ def __init__(self,schema,attributes): super(Procedure,self).__init__(schema,attributes) self._type_code = [5,] self.__inputParams = self.__outputParams = None self._strip_attribute('RDB$PROCEDURE_NAME') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$ENGINE_NAME') self._strip_attribute('RDB$ENTRYPOINT') self._strip_attribute('RDB$PACKAGE_NAME') self.__ods = schema._con.ods if not self.issystemobject(): self._actions = ['create','recreate','alter','create_or_alter','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,['no_code']) no_code = params.get('no_code') result = 'CREATE PROCEDURE %s' % self.get_quoted_name() if self.has_input(): if self._attributes['RDB$PROCEDURE_INPUTS'] == 1: result += ' (%s)\n' % self.input_params[0].get_sql_definition() else: result += ' (\n' for p in self.input_params: result += ' %s%s\n' % (p.get_sql_definition(), '' if p.sequence+1 == self._attributes['RDB$PROCEDURE_INPUTS'] else ',') result += ')\n' else: result += '\n' if self.has_output(): if self._attributes['RDB$PROCEDURE_OUTPUTS'] == 1: result += 'RETURNS (%s)\n' % self.output_params[0].get_sql_definition() else: result += 'RETURNS (\n' for p in 
self.output_params: result += ' %s%s\n' % (p.get_sql_definition(), '' if p.sequence+1 == self._attributes['RDB$PROCEDURE_OUTPUTS'] else ',') result += ')\n' return result+'AS\n'+('BEGIN\nEND' if no_code else self.source) def _get_alter_sql(self,**params): self._check_params(params,['input','output','declare','code']) inpars = params.get('input') outpars = params.get('output') declare = params.get('declare') code = params.get('code') if code is None: raise fdb.ProgrammingError("Missing required parameter: 'code'.") # header = '' if inpars is not None: if isinstance(inpars,(list,tuple)): numpars = len(inpars) if numpars == 1: header = ' (%s)\n' % inpars else: header = ' (\n' i = 1 for p in inpars: header += ' %s%s\n' % (p,'' if i == numpars else ',') i += 1 header += ')\n' else: header = ' (%s)\n' % inpars # if outpars is not None: if not header: header += '\n' if isinstance(outpars,(list,tuple)): numpars = len(outpars) if numpars == 1: header += 'RETURNS (%s)\n' % outpars else: header += 'RETURNS (\n' i = 1 for p in outpars: header += ' %s%s\n' % (p,'' if i == numpars else ',') i += 1 header += ')\n' else: header += 'RETURNS (%s)\n' % outpars # if code: if declare is None: d = '' elif isinstance(declare,(list,tuple)): d = '' for x in declare: d += ' %s\n' % x else: d = '%s\n' % declare if isinstance(code,(list,tuple)): c = '' for x in code: c += ' %s\n' % x else: c = '%s\n' % code body = '%sAS\n%sBEGIN\n%sEND' % ('' if header else '\n',d,c) else: body = '%sAS\nBEGIN\nEND' % ('' if header else '\n') # return 'ALTER PROCEDURE %s%s%s' % (self.get_quoted_name(),header,body) def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP PROCEDURE %s' % self.get_quoted_name() def __param_columns(self): cols = ['RDB$PARAMETER_NAME','RDB$PROCEDURE_NAME','RDB$PARAMETER_NUMBER', 'RDB$PARAMETER_TYPE','RDB$FIELD_SOURCE','RDB$DESCRIPTION', 'RDB$SYSTEM_FLAG'] if self.__ods >= fdb.ODS_FB_21: cols.extend(['RDB$DEFAULT_SOURCE','RDB$COLLATION_ID','RDB$NULL_FLAG', 
'RDB$PARAMETER_MECHANISM']) if self.__ods >= fdb.ODS_FB_25: cols.extend(['RDB$FIELD_NAME','RDB$RELATION_NAME']) if self.__ods >= fdb.ODS_FB_30: cols.extend(['RDB$PACKAGE_NAME']) return ','.join(cols) def _get_name(self): return self._attributes['RDB$PROCEDURE_NAME'] def _get_id(self): return self._attributes['RDB$PROCEDURE_ID'] def _get_source(self): return self._attributes['RDB$PROCEDURE_SOURCE'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_input_params(self): if self.__inputParams is None: if self.has_input(): self.__inputParams = [ProcedureParameter(self.schema,self,row) for row in self.schema._select("""select %s from rdb$procedure_parameters where rdb$procedure_name = ? and rdb$parameter_type = 0 order by rdb$parameter_number""" % self.__param_columns(),(self.name,))] else: self.__inputParams = [] return self.__inputParams def _get_output_params(self): if self.__outputParams is None: if self.has_output(): self.__outputParams = [ProcedureParameter(self.schema,self,row) for row in self.schema._select("""select %s from rdb$procedure_parameters where rdb$procedure_name = ? 
and rdb$parameter_type = 1 order by rdb$parameter_number""" % self.__param_columns(),(self.name,))] else: self.__outputParams = [] return self.__outputParams def _get_proc_type(self): return self._attributes.get('RDB$PROCEDURE_TYPE',0) def _get_valid_blr(self): result = self._attributes.get('RDB$VALID_BLR') return bool(result) if result is not None else None def _get_privileges(self): return [p for p in self.schema.privileges if ((p.subject_name == self.name) and (p.subject_type in self._type_code))] def _get_engine_name(self): return self._attributes.get('RDB$ENGINE_NAME') def _get_entrypoint(self): return self._attributes.get('RDB$ENTRYPOINT') def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) def _get_privacy(self): return self._attributes.get('RDB$PRIVATE_FLAG') #--- Properties id = LateBindingProperty(_get_id,None,None,"Internal unique ID number.") source = LateBindingProperty(_get_source,None,None,"PSQL source code.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class that define access limits to the procedure.") owner_name = LateBindingProperty(_get_owner_name,None,None, "User name of procedure's creator.") input_params = LateBindingProperty(_get_input_params,None,None, "List of input parameters.\nInstances are :class:`ProcedureParameter` instances.") output_params = LateBindingProperty(_get_output_params,None,None, "List of output parameters.\nInstances are :class:`ProcedureParameter` instances.") privileges = LateBindingProperty(_get_privileges,None,None, "List of :class:`Privilege` objects granted to this object.") # FB 2.1 proc_type = LateBindingProperty(_get_proc_type,None,None, "Procedure type code. See :attr:`fdb.Connection.enum_procedure_types`.") valid_blr = LateBindingProperty(_get_valid_blr,None,None, "Procedure BLR invalidation flag. 
Could be True/False or None.") # FB 3.0 engine_name = LateBindingProperty(_get_engine_name,None,None,"Engine name.") entrypoint = LateBindingProperty(_get_entrypoint,None,None,"Entrypoint.") package = LateBindingProperty(_get_package,None,None, "Package this procedure belongs to. \nObject is :class:`Package` instance or None.") privacy = LateBindingProperty(_get_privacy,None,None,"Privacy flag.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitProcedure(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitProcedure(self) def get_param(self,name): "Returns :class:`ProcedureParameter` with specified name or None." for p in self.output_params: if p.name == name: return p for p in self.input_params: if p.name == name: return p return None def has_input(self): "Returns True if procedure has any input parameters." return bool(self._attributes['RDB$PROCEDURE_INPUTS']) def has_output(self): "Returns True if procedure has any output parameters." return bool(self._attributes['RDB$PROCEDURE_OUTPUTS']) def ispackaged(self): "Returns True if procedure is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Role(BaseSchemaItem): """Represents user role. Supported SQL actions: - User role: create, drop - System role: none """ def __init__(self,schema,attributes): super(Role,self).__init__(schema,attributes) self._type_code = [13,] self._strip_attribute('RDB$ROLE_NAME') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') if not self.issystemobject(): self._actions = ['create','drop'] #--- Protected def _get_create_sql(self,**params): self._check_params(params,[]) return 'CREATE ROLE %s' % self.get_quoted_name() def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP ROLE %s' % self.get_quoted_name() def _get_name(self): return self._attributes['RDB$ROLE_NAME'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_privileges(self): return [p for p in self.schema.privileges if ((p.user_name == self.name) and (p.user_type in self._type_code))] #--- Properties owner_name = LateBindingProperty(_get_owner_name,None,None,"User name of role owner.") privileges = LateBindingProperty(_get_privileges,None,None, "List of :class:`Privilege` objects granted to this object.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitRole(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitRole(self) class FunctionArgument(BaseSchemaItem): """Represents UDF argument. Supported SQL actions: none.
""" def __init__(self,schema,function,attributes): super(FunctionArgument,self).__init__(schema,attributes) self._type_code = [15,] self.__function = function self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$PACKAGE_NAME') self._strip_attribute('RDB$ARGUMENT_NAME') self._strip_attribute('RDB$FIELD_SOURCE') self._strip_attribute('RDB$DEFAULT_SOURCE') self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$DESCRIPTION') #--- Protected def _get_name(self): return self.argument_name if self.argument_name else (self.function.name+ '_'+str(self._get_position())) def _get_function(self): return self.__function #return self.schema.get_function(self._attributes['RDB$FUNCTION_NAME']) def _get_position(self): return self._attributes['RDB$ARGUMENT_POSITION'] def _get_mechanism(self): x = self._attributes['RDB$MECHANISM'] return None if x is None else abs(x) def _get_length(self): return self._attributes['RDB$FIELD_LENGTH'] def _get_scale(self): return self._attributes['RDB$FIELD_SCALE'] def _get_field_type(self): return self._attributes['RDB$FIELD_TYPE'] def _get_sub_type(self): return self._attributes['RDB$FIELD_SUB_TYPE'] def _get_character_length(self): return self._attributes['RDB$CHARACTER_LENGTH'] def _get_character_set(self): return self.schema.get_character_set_by_id(self._attributes['RDB$CHARACTER_SET_ID']) def _get_precision(self): return self._attributes['RDB$FIELD_PRECISION'] def _get_datatype(self): if self.field_type is None: # FB3 PSQL function, datatype defined via internal domain return self.domain.datatype else: # Classic external UDF l = [] precision_known = False if self.field_type in (FBT_SMALLINT,FBT_INTEGER,FBT_BIGINT): if self.precision != None: if (self.sub_type > 0) and (self.sub_type < MAX_INTSUBTYPES): l.append('%s(%d, %d)' % \ (INTEGRAL_SUBTYPES[self.sub_type],self.precision,-self.scale)) precision_known = True if not precision_known: if (self.field_type == FBT_SMALLINT) and 
(self.scale < 0): l.append('NUMERIC(4, %d)' % -self.scale) elif (self.field_type == FBT_INTEGER) and (self.scale < 0): l.append('NUMERIC(9, %d)' % -self.scale) elif (self.field_type == FBT_DOUBLE_PRECISION) and (self.scale < 0): l.append('NUMERIC(15, %d)' % -self.scale) else: l.append(COLUMN_TYPES[self.field_type]) if self.field_type in (FBT_CHAR,FBT_VARCHAR,FBT_CSTRING): l.append('(%d)' % (self.length if (self.character_length is None) else self.character_length)) if self.field_type == FBT_BLOB: if self.sub_type >= 0 and self.sub_type <= MAX_BLOBSUBTYPES: if self.sub_type > 0: l.append(' SUB_TYPE %s' % BLOB_SUBTYPES[self.sub_type]) else: l.append(' SUB_TYPE %d' % self.sub_type) if self.field_type in (FBT_CHAR,FBT_VARCHAR,FBT_CSTRING,FBT_BLOB): if self._attributes['RDB$CHARACTER_SET_ID'] is not None and \ (self.character_set.name != self.schema.default_character_set.name): l.append(' CHARACTER SET %s' % self.character_set.name) return ''.join(l) def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) def _get_argument_name(self): return self._attributes.get('RDB$ARGUMENT_NAME') def _get_domain(self): return self.schema.get_domain(self._attributes.get('RDB$FIELD_SOURCE')) def _get_default(self): result = self._attributes.get('RDB$DEFAULT_SOURCE') if result: if result.upper().startswith('= '): result = result[2:] elif result.upper().startswith('DEFAULT '): result = result[8:] return result def _get_collation(self): cid = self._attributes.get('RDB$COLLATION_ID') return (None if cid is None else self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'],cid)) def _get_argument_mechanism(self): return self._attributes.get('RDB$ARGUMENT_MECHANISM') def _get_column(self): rname = self._attributes.get('RDB$RELATION_NAME') return (None if rname is None else self.schema.get_table(rname).get_column(self._attributes['RDB$FIELD_NAME'])) def _get_type_from(self): m = self.argument_mechanism if m is None: return 
PROCPAR_DATATYPE elif m == 0: return PROCPAR_DATATYPE if self.domain.issystemobject() else PROCPAR_DOMAIN elif m == 1: if self._attributes.get('RDB$RELATION_NAME') is None: return PROCPAR_TYPE_OF_DOMAIN else: return PROCPAR_TYPE_OF_COLUMN else: raise fdb.InternalError("Unknown parameter mechanism code: %d" % m) #--- Properties function = LateBindingProperty(_get_function,None,None, ":class:`Function` to which this argument belongs.") position = LateBindingProperty(_get_position,None,None,"Argument position.") mechanism = LateBindingProperty(_get_mechanism,None,None,"How argument is passed.") field_type = LateBindingProperty(_get_field_type,None,None, "Number code of the data type defined for the argument.") length = LateBindingProperty(_get_length,None,None, "Length of the argument in bytes.") scale = LateBindingProperty(_get_scale,None,None, "Negative number representing the scale of NUMBER and DECIMAL argument.") precision = LateBindingProperty(_get_precision,None,None, "Indicates the number of digits of precision available to the data type of the argument.") sub_type = LateBindingProperty(_get_sub_type,None,None,"BLOB subtype.") character_length = LateBindingProperty(_get_character_length,None,None, "Length of CHAR and VARCHAR column, in characters (not bytes).") character_set = LateBindingProperty(_get_character_set,None,None, ":class:`CharacterSet` for a character/text BLOB argument, or None.") datatype = LateBindingProperty(_get_datatype,None,None, "Comlete SQL datatype definition.") # FB 3.0 argument_name = LateBindingProperty(_get_argument_name,None,None,"Argument name.") domain = LateBindingProperty(_get_domain,None,None,":class:`Domain` for this parameter.") default = LateBindingProperty(_get_default,None,None,"Default value.") collation = LateBindingProperty(_get_collation,None,None, ":class:`collation` for this parameter.") argument_mechanism = LateBindingProperty(_get_argument_mechanism,None,None, "Argiment mechanism.") column = 
LateBindingProperty(_get_column,None,None,":class:`TableColumn` for this parameter.") type_from = LateBindingProperty(_get_type_from,None,None, "Numeric code. See :attr:`Schema.enum_param_type_from`.") package = LateBindingProperty(_get_package,None,None, "Package this function belongs to. \nObject is :class:`Package` instance or None.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitFunctionArgument(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitFunctionArgument(self) def get_sql_definition(self): "Returns SQL definition for parameter." if self.function.isexternal(): return '%s%s%s' % (self.datatype, ' BY DESCRIPTOR' if self.isbydescriptor() else '', ' BY VALUE' if self.isbyvalue() and self.isreturning() else '', ) else: typedef = self.datatype if self.type_from == PROCPAR_DOMAIN: typedef = self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_DOMAIN: typedef = 'TYPE OF %s' % self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_COLUMN: typedef = 'TYPE OF COLUMN %s.%s' % (self.column.table.get_quoted_name(), self.column.get_quoted_name()) result = '%s%s%s' % (self.get_quoted_name()+' ' if not self.isreturning() else '', typedef, '' if self.isnullable() else ' NOT NULL') c = self.collation if c is not None: result += ' COLLATE %s' % c.get_quoted_name() if not self.isreturning() and self.has_default(): result += ' = %s' % self.default return result def isbyvalue(self): "Returns True if argument is passed by value." return self.mechanism == 0 def isbyreference(self): "Returns True if argument is passed by reference." return self.mechanism in [1,5] def isbydescriptor(self,any=False): """Returns True if argument is passed by descriptor. :param bool any: If True, method returns True if any kind of descriptor is used (including BLOB and ARRAY descriptors).
""" return self.mechanism in [2,3,4] if any else self.mechanism == 2 def iswithnull(self): "Returns True if argument is passed by reference with NULL support." return self.mechanism == 5 def isfreeit(self): "Returns True if (return) argument is declared as FREE_IT." return self._attributes['RDB$MECHANISM'] < 0 def isreturning(self): "Returns True if argument represents return value for function." return self.position == self.function._attributes['RDB$RETURN_ARGUMENT'] # Firebird 3.0 def isnullable(self): "Returns True if parameter allows NULL." return not bool(self._attributes.get('RDB$NULL_FLAG')) def has_default(self): "Returns True if parameter has default value." return bool(self._attributes.get('RDB$DEFAULT_SOURCE')) def ispackaged(self): "Returns True if function argument is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Function(BaseSchemaItem): """Represents user defined function. Supported SQL actions: - External UDF: declare, drop - PSQL UDF (FB 3, not declared in package): create, recreate, create_or_alter, drop, alter(arguments=string_or_list,returns=string,declare=string_or_list, code=string_or_list) - System UDF: none """ def __init__(self,schema,attributes): super(Function,self).__init__(schema,attributes) self._type_code = [15,] self.__arguments = None self.__returns = None self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$MODULE_NAME') self._strip_attribute('RDB$ENTRYPOINT') self._strip_attribute('RDB$ENGINE_NAME') self._strip_attribute('RDB$PACKAGE_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self.__ods = schema._con.ods if not self.issystemobject(): if self.isexternal(): self._actions = ['declare','drop'] else: if self._attributes.get('RDB$PACKAGE_NAME') is None: self._actions = ['create','recreate','alter','create_or_alter','drop'] pass #--- Protected def _get_declare_sql(self,**params): self._check_params(params,[]) fdef = 'DECLARE EXTERNAL 
FUNCTION %s\n' % self.get_quoted_name() for p in self.arguments: fdef += ' %s%s\n' % (p.get_sql_definition(), '' if p.position == len(self.arguments) else ',') if self.has_return(): fdef += 'RETURNS %s%s\n' % ('PARAMETER %d' % self._attributes['RDB$RETURN_ARGUMENT'] if self.has_return_argument() else self.returns.get_sql_definition(), ' FREE_IT' if self.returns.isfreeit() else '') return "%sENTRY_POINT '%s'\nMODULE_NAME '%s'" % (fdef,self.entrypoint, self.module_name) def _get_drop_sql(self,**params): self._check_params(params,[]) return 'DROP%s FUNCTION %s' % (' EXTERNAL' if self.isexternal() else '', self.get_quoted_name()) def _get_create_sql(self,**params): self._check_params(params,[]) result = 'CREATE FUNCTION %s' % self.get_quoted_name() if self.has_arguments(): if len(self.arguments) == 1: result += ' (%s)\n' % self.arguments[0].get_sql_definition() else: result += ' (\n' for p in self.arguments: result += ' %s%s\n' % (p.get_sql_definition(), '' if p.position == len(self.arguments) else ',') result += ')\n' else: result += '\n' result += 'RETURNS %s\n' % self.returns.get_sql_definition() return result+'AS\n'+self.source def _get_alter_sql(self,**params): self._check_params(params,['arguments','returns','declare','code']) arguments = params.get('arguments') returns = params.get('returns') if returns is None: raise fdb.ProgrammingError("Missing required parameter: 'returns'.") declare = params.get('declare') code = params.get('code') if code is None: raise fdb.ProgrammingError("Missing required parameter: 'code'.") # header = '' if arguments is not None: if isinstance(arguments,(list,tuple)): numpars = len(arguments) if numpars == 1: header = ' (%s)\n' % arguments else: header = ' (\n' i = 1 for p in arguments: header += ' %s%s\n' % (p,'' if i == numpars else ',') i += 1 header += ')\n' else: header = ' (%s)\n' % arguments # if not header: header += '\n' header += 'RETURNS %s\n' % returns # if code: if declare is None: d = '' elif 
isinstance(declare,(list,tuple)): d = '' for x in declare: d += ' %s\n' % x else: d = '%s\n' % declare if isinstance(code,(list,tuple)): c = '' for x in code: c += ' %s\n' % x else: c = '%s\n' % code body = '%sAS\n%sBEGIN\n%sEND' % ('' if header else '\n',d,c) else: body = '%sAS\nBEGIN\nEND' % ('' if header else '\n') # return 'ALTER FUNCTION %s%s%s' % (self.get_quoted_name(),header,body) def _load_arguments(self,mock=None): cols = ['RDB$FUNCTION_NAME','RDB$ARGUMENT_POSITION','RDB$MECHANISM', 'RDB$FIELD_TYPE','RDB$FIELD_SCALE','RDB$FIELD_LENGTH', 'RDB$FIELD_SUB_TYPE','RDB$CHARACTER_SET_ID','RDB$FIELD_PRECISION', 'RDB$CHARACTER_LENGTH'] if self.__ods >= fdb.ODS_FB_30: cols.extend(['RDB$PACKAGE_NAME','RDB$ARGUMENT_NAME','RDB$FIELD_SOURCE', 'RDB$DEFAULT_SOURCE','RDB$COLLATION_ID','RDB$NULL_FLAG', 'RDB$ARGUMENT_MECHANISM','RDB$FIELD_NAME','RDB$RELATION_NAME', 'RDB$SYSTEM_FLAG','RDB$DESCRIPTION']) self.__arguments = [FunctionArgument(self.schema,self,row) for row in (mock if mock else self.schema._select("""select %s from rdb$function_arguments where rdb$function_name = ? 
order by rdb$argument_position""" % ','.join(cols),(self.name,)))] rarg = self._attributes['RDB$RETURN_ARGUMENT'] if rarg is not None: for a in self.__arguments: if a.position == rarg: self.__returns = weakref.ref(a) def _get_name(self): return self._attributes['RDB$FUNCTION_NAME'] def _get_module_name(self): return self._attributes['RDB$MODULE_NAME'] def _get_entrypoint(self): return self._attributes['RDB$ENTRYPOINT'] def _get_returns(self): if self.__arguments is None: self._load_arguments() return self.__returns if self.__returns is None else self.__returns() def _get_arguments(self): if self.__arguments is None: self._load_arguments() return [a for a in self.__arguments if a.position != 0] def _get_engine_name(self): return self._attributes.get('RDB$ENGINE_NAME') def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) def _get_private_flag(self): return self._attributes.get('RDB$PRIVATE_FLAG') def _get_source(self): return self._attributes.get('RDB$FUNCTION_SOURCE') def _get_id(self): return self._attributes.get('RDB$FUNCTION_ID') def _get_valid_blr(self): result = self._attributes.get('RDB$VALID_BLR') return bool(result) if result is not None else None def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') def _get_legacy_flag(self): return self._attributes.get('RDB$LEGACY_FLAG') def _get_deterministic_flag(self): return self._attributes.get('RDB$DETERMINISTIC_FLAG') #--- Properties module_name = LateBindingProperty(_get_module_name,None,None,"Module name.") entrypoint = LateBindingProperty(_get_entrypoint,None,None,"Entrypoint in module.") returns = LateBindingProperty(_get_returns,None,None, "Returning :class:`FunctionArgument` or None.") arguments = LateBindingProperty(_get_arguments,None,None, "List of function arguments.
Items are :class:`FunctionArgument` instances.") # Firebird 3.0 engine_name = LateBindingProperty(_get_engine_name,None,None,"Engine name.") package = LateBindingProperty(_get_package,None,None, "Package this function belongs to. \nObject is :class:`Package` instance or None.") private_flag = LateBindingProperty(_get_private_flag,None,None,"Private flag.") source = LateBindingProperty(_get_source,None,None,"Function source.") id = LateBindingProperty(_get_id,None,None,"Function ID.") valid_blr = LateBindingProperty(_get_valid_blr,None,None,"BLR validity flag.") security_class = LateBindingProperty(_get_security_class,None,None,"Security class.") owner_name = LateBindingProperty(_get_owner_name,None,None,"Owner name.") legacy_flag = LateBindingProperty(_get_legacy_flag,None,None,"Legacy flag.") deterministic_flag = LateBindingProperty(_get_deterministic_flag,None,None, "Deterministic flag.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitFunction(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitFunction(self) def isexternal(self): "Returns True if function is external UDF, False for PSQL functions." return True if self.module_name else False def has_arguments(self): "Returns True if function has input arguments." return bool(self.arguments) def has_return(self): "Returns True if function returns a value." return (self.returns is not None) def has_return_argument(self): "Returns True if function returns a value in input argument." return (self.returns.position != 0 if self.returns is not None else False) def ispackaged(self): "Returns True if function is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class DatabaseFile(BaseSchemaItem): """Represents database extension file.
Supported SQL actions: create """ def __init__(self,schema,attributes): super(DatabaseFile,self).__init__(schema,attributes) self._type_code = [] self._strip_attribute('RDB$FILE_NAME') #--- Protected def _get_name(self): return 'FILE_%d' % self.sequence def _get_filename(self): return self._attributes['RDB$FILE_NAME'] def _get_sequence(self): return self._attributes['RDB$FILE_SEQUENCE'] def _get_start(self): return self._attributes['RDB$FILE_START'] def _get_length(self): return self._attributes['RDB$FILE_LENGTH'] #--- Properties filename = LateBindingProperty(_get_filename,None,None,"File name.") sequence = LateBindingProperty(_get_sequence,None,None,"File sequence number.") start = LateBindingProperty(_get_start,None,None,"File start page number.") length = LateBindingProperty(_get_length,None,None,"File length in pages.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitDatabaseFile(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitDatabaseFile(self) def issystemobject(self): "Returns True." return True class Shadow(BaseSchemaItem): """Represents database shadow.
Supported SQL actions: create, drop(preserve=bool) """ SHADOW_INACTIVE = 2 SHADOW_MANUAL = 4 SHADOW_CONDITIONAL = 16 def __init__(self,schema,attributes): super(Shadow,self).__init__(schema,attributes) self._type_code = [] self.__files = None self._actions = ['create','drop'] def _get_create_sql(self,**params): self._check_params(params,[]) result = 'CREATE SHADOW %d %s%s' % (self.id,'MANUAL' if self.ismanual() else 'AUTO', ' CONDITIONAL' if self.isconditional() else '') if len(self.files) == 1: result += " '%s'" % self.files[0].filename else: f = self.files[0] result += " '%s'%s\n" % (f.filename, ' LENGTH %d' % f.length if f.length > 0 else '') for f in self.files[1:]: result += " FILE '%s'%s%s" % (f.filename, ' STARTING AT %d' % f.start if f.start > 0 else '', ' LENGTH %d' % f.length if f.length > 0 else '') if f.sequence < len(self.files)-1: result += '\n' return result def _get_drop_sql(self,**params): self._check_params(params,['preserve']) preserve = params.get('preserve') return 'DROP SHADOW %d%s' % (self.id, ' PRESERVE FILE' if preserve else '') def _get_name(self): return 'SHADOW_%d' % self.id def _get_id(self): return self._attributes['RDB$SHADOW_NUMBER'] def _get_flags(self): return self._attributes['RDB$FILE_FLAGS'] def _get_files(self): if self.__files is None: self.__files = [DatabaseFile(self,row) for row in self.schema._select(""" select RDB$FILE_NAME, RDB$FILE_SEQUENCE, RDB$FILE_START, RDB$FILE_LENGTH from RDB$FILES where RDB$SHADOW_NUMBER = ? order by RDB$FILE_SEQUENCE""",(self._attributes['RDB$SHADOW_NUMBER'],))] return self.__files #--- Properties id = LateBindingProperty(_get_id,None,None,"Shadow ID number.") flags = LateBindingProperty(_get_flags,None,None,"Shadow flags.") files = LateBindingProperty(_get_files,None,None, "List of shadow files. Items are :class:`DatabaseFile` instances.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitShadow(self)` on parameter object. 
:param visitor: Visitor object of Visitor Pattern. """ visitor.visitShadow(self) def issystemobject(self): "Returns False." return False def ismanual(self): "Returns True if it's MANUAL shadow." return bool(self.flags & self.SHADOW_MANUAL) def isinactive(self): "Returns True if it's INACTIVE shadow." return bool(self.flags & self.SHADOW_INACTIVE) def isconditional(self): "Returns True if it's CONDITIONAL shadow." return bool(self.flags & self.SHADOW_CONDITIONAL) class Privilege(BaseSchemaItem): """Represents privilege to database object. Supported SQL actions: grant(grantors),revoke(grantors,grant_option) """ def __init__(self,schema,attributes): super(Privilege,self).__init__(schema,attributes) self._type_code = [] self._actions = ['grant','revoke'] self._strip_attribute('RDB$USER') self._strip_attribute('RDB$GRANTOR') self._strip_attribute('RDB$PRIVILEGE') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FIELD_NAME') def _get_grant_sql(self,**params): self._check_params(params,['grantors']) grantors = params.get('grantors') privileges = {'S':'SELECT','I':'INSERT','U':'UPDATE','D':'DELETE','R':'REFERENCES'} admin_option = ' WITH GRANT OPTION' if self.has_grant() else '' if self.privilege in privileges: privilege = privileges[self.privilege] if self.field_name is not None: privilege += '(%s)' % self.field_name privilege += ' ON ' elif self.privilege == 'X': # procedure privilege = 'EXECUTE ON PROCEDURE ' else: # role membership privilege = '' admin_option = ' WITH ADMIN OPTION' if self.has_grant() else '' user = self.user if isinstance(user,Procedure): utype = 'PROCEDURE ' elif isinstance(user,Trigger): utype = 'TRIGGER ' elif isinstance(user,View): utype = 'VIEW ' else: utype = '' if (grantors is not None) and (self.grantor_name not in grantors): granted_by = ' GRANTED BY %s' % self.grantor_name else: granted_by = '' return 'GRANT %s%s TO %s%s%s%s' % (privilege,self.subject_name,utype, self.user_name,admin_option,granted_by) def
_get_revoke_sql(self,**params): self._check_params(params,['grant_option','grantors']) grantors = params.get('grantors') option_only = params.get('grant_option',False) if option_only and not self.has_grant(): raise fdb.ProgrammingError("Can't revoke grant option that wasn't granted.") privileges = {'S':'SELECT','I':'INSERT','U':'UPDATE','D':'DELETE','R':'REFERENCES'} admin_option = 'GRANT OPTION FOR ' if self.has_grant() and option_only else '' if self.privilege in privileges: privilege = privileges[self.privilege] if self.field_name is not None: privilege += '(%s)' % self.field_name privilege += ' ON ' elif self.privilege == 'X': # procedure privilege = 'EXECUTE ON PROCEDURE ' else: # role membership privilege = '' admin_option = 'ADMIN OPTION FOR ' if self.has_grant() and option_only else '' user = self.user if isinstance(user,Procedure): utype = 'PROCEDURE ' elif isinstance(user,Trigger): utype = 'TRIGGER ' elif isinstance(user,View): utype = 'VIEW ' else: utype = '' if (grantors is not None) and (self.grantor_name not in grantors): granted_by = ' GRANTED BY %s' % self.grantor_name else: granted_by = '' return 'REVOKE %s%s%s FROM %s%s%s' % (admin_option,privilege, self.subject_name,utype, self.user_name,granted_by) def _get_user(self): return self.schema._get_item(self._attributes['RDB$USER'], self._attributes['RDB$USER_TYPE']) def _get_grantor(self): return fdb.services.User(self._attributes['RDB$GRANTOR']) def _get_privilege(self): return self._attributes['RDB$PRIVILEGE'] def _get_subject(self): rname = self._attributes['RDB$RELATION_NAME'] result = self.schema._get_item(rname, self.subject_type, self.field_name) if result is None and self.subject_type == 0: # Views are logged as tables, so try again for view code result = self.schema._get_item(rname, 1, self.field_name) return result def _get_user_name(self): return self._attributes['RDB$USER'] def _get_user_type(self): return self._attributes['RDB$USER_TYPE'] def _get_grantor_name(self): return
self._attributes['RDB$GRANTOR'] def _get_subject_name(self): return self._attributes['RDB$RELATION_NAME'] def _get_subject_type(self): return self._attributes['RDB$OBJECT_TYPE'] def _get_field_name(self): return self._attributes['RDB$FIELD_NAME'] #--- Properties user = LateBindingProperty(_get_user,None,None, "Grantee. Either :class:`~fdb.services.User`, :class:`Role`, " \ ":class:`Procedure`, :class:`Trigger` or :class:`View` object.") grantor = LateBindingProperty(_get_grantor,None,None, "Grantor :class:`~fdb.services.User` object.") privilege = LateBindingProperty(_get_privilege,None,None,"Privilege code.") subject = LateBindingProperty(_get_subject,None,None, "Privilege subject. Either :class:`Role`, :class:`Table`, :class:`View` " \ "or :class:`Procedure` object.") user_name = LateBindingProperty(_get_user_name,None,None,"User name.") user_type = LateBindingProperty(_get_user_type,None,None,"User type.") grantor_name = LateBindingProperty(_get_grantor_name,None,None,"Grantor name.") subject_name = LateBindingProperty(_get_subject_name,None,None,"Subject name.") subject_type = LateBindingProperty(_get_subject_type,None,None,"Subject type.") field_name = LateBindingProperty(_get_field_name,None,None,"Field name.") def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitPrivilege(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitPrivilege(self) def has_grant(self): "Returns True if privilege comes with GRANT OPTION." return bool(self._attributes['RDB$GRANT_OPTION']) def issystemobject(self): "Returns True." return True def isselect(self): "Returns True if this is SELECT privilege." return self.privilege == 'S' def isinsert(self): "Returns True if this is INSERT privilege." return self.privilege == 'I' def isupdate(self): "Returns True if this is UPDATE privilege." return self.privilege == 'U' def isdelete(self): "Returns True if this is DELETE privilege."
return self.privilege == 'D' def isexecute(self): "Returns True if this is EXECUTE privilege." return self.privilege == 'X' def isreference(self): "Returns True if this is REFERENCE privilege." return self.privilege == 'R' def ismembership(self): "Returns True if this is ROLE membership privilege." return self.privilege == 'M' class Package(BaseSchemaItem): """Represents PSQL package. Supported SQL actions: create(body=bool), recreate(body=bool), create_or_alter, alter(header=string_or_list), drop(body=bool),alter """ def __init__(self,schema,attributes): super(Package,self).__init__(schema,attributes) self._type_code = [18,19] self._actions = ['create','recreate','create_or_alter','alter','drop'] self._strip_attribute('RDB$PACKAGE_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') def _get_create_sql(self,**params): self._check_params(params,['body']) body = params.get('body') cbody = 'BODY ' if body else '' result = 'CREATE PACKAGE %s%s' % (cbody, self.get_quoted_name()) return result+'\nAS\n'+(self.body if body else self.header) def _get_alter_sql(self,**params): self._check_params(params,['header']) header = params.get('header') if not header: hdr = '' else: hdr = '\n'.join(header) if isinstance(header,fdb.ListType) else header return 'ALTER PACKAGE %s\nAS\nBEGIN\n%s\nEND' % (self.get_quoted_name(),hdr) def _get_drop_sql(self,**params): self._check_params(params,['body']) body = params.get('body') cbody = 'BODY ' if body else '' return 'DROP PACKAGE %s%s' % (cbody, self.get_quoted_name()) def _get_name(self): return self._attributes['RDB$PACKAGE_NAME'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_header(self): return self._attributes['RDB$PACKAGE_HEADER_SOURCE'] def _get_body(self): return self._attributes['RDB$PACKAGE_BODY_SOURCE'] def _get_functions(self): return [fn for fn in self.schema.functions if 
fn._attributes['RDB$PACKAGE_NAME'] == self.name] def _get_procedures(self): return [proc for proc in self.schema.procedures if proc._attributes['RDB$PACKAGE_NAME'] == self.name] #--- Properties header = LateBindingProperty(_get_header,None,None,"Package header source.") body = LateBindingProperty(_get_body,None,None,"Package body source.") security_class = LateBindingProperty(_get_security_class,None,None, "Security class name or None.") owner_name = LateBindingProperty(_get_owner_name,None,None,"User name of package creator.") functions = LateBindingProperty(_get_functions,None,None, "List of package functions. Items are :class:`Function` instances.") procedures = LateBindingProperty(_get_procedures,None,None, "List of package procedures. Items are :class:`Procedure` instances.") #--- Public def accept_visitor(self,visitor): """Visitor Pattern support. Calls `visitPackage(self)` on parameter object. :param visitor: Visitor object of Visitor Pattern. """ visitor.visitPackage(self) def has_valid_body(self): result = self._attributes.get('RDB$VALID_BODY_FLAG') return bool(result) if result is not None else None class SchemaVisitor(object): """Helper class for implementation of schema Visitor. Implements all `visit*` methods supported by schema classes as calls to :meth:`default_action`. """ def default_action(self,obj): "Does nothing."
pass def visitSchema(self,schema): self.default_action(schema) def visitMetadataItem(self,item): self.default_action(item) def visitCollation(self,collation): self.default_action(collation) def visitCharacterSet(self,character_set): self.default_action(character_set) def visitException(self,exception): self.default_action(exception) def visitGenerator(self,generator): self.default_action(generator) def visitTableColumn(self,column): self.default_action(column) def visitIndex(self,index): self.default_action(index) def visitViewColumn(self,column): self.default_action(column) def visitDomain(self,domain): self.default_action(domain) def visitDependency(self,dependency): self.default_action(dependency) def visitConstraint(self,constraint): self.default_action(constraint) def visitTable(self,table): self.default_action(table) def visitView(self,view): self.default_action(view) def visitTrigger(self,trigger): self.default_action(trigger) def visitProcedureParameter(self,param): self.default_action(param) def visitProcedure(self,procedure): self.default_action(procedure) def visitRole(self,role): self.default_action(role) def visitFunctionArgument(self,arg): self.default_action(arg) def visitFunction(self,function): self.default_action(function) def visitDatabaseFile(self,dbfile): self.default_action(dbfile) def visitShadow(self,shadow): self.default_action(shadow) def visitPackage(self,package): self.default_action(package) fdb-1.6.1/fdb/blr.py0000644000175000001440000001260412431143162014576 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: blr.py # DESCRIPTION: BLR-related definitions # CREATED: 12.6.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. 
# # The Original Code was created by Pavel Cisar # # Copyright (c) 2011 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): Philippe Makowski # ______________________________________. # # See LICENSE.TXT for details. blr_inner = 0 blr_left = 1 blr_right = 2 blr_full = 3 blr_gds_code = 0 blr_sql_code = 1 blr_exception = 2 blr_trigger_code = 3 blr_default_code = 4 blr_raise = 5 blr_exception_msg = 6 blr_version4 = 4 blr_version5 = 5 blr_eoc = 76 blr_end = 255 blr_assignment = 1 blr_begin = 2 blr_dcl_variable = 3 blr_message = 4 blr_erase = 5 blr_fetch = 6 blr_for = 7 blr_if = 8 blr_loop = 9 blr_modify = 10 blr_handler = 11 blr_receive = 12 blr_select = 13 blr_send = 14 blr_store = 15 blr_label = 17 blr_leave = 18 blr_store2 = 19 blr_post = 20 blr_literal = 21 blr_dbkey = 22 blr_field = 23 blr_fid = 24 blr_parameter = 25 blr_variable = 26 blr_average = 27 blr_count = 28 blr_maximum = 29 blr_minimum = 30 blr_total = 31 blr_add = 34 blr_subtract = 35 blr_multiply = 36 blr_divide = 37 blr_negate = 38 blr_concatenate = 39 blr_substring = 40 blr_parameter2 = 41 blr_from = 42 blr_via = 43 blr_parameter2_old = 44 blr_user_name = 44 blr_null = 45 blr_equiv = 46 blr_eql = 47 blr_neq = 48 blr_gtr = 49 blr_geq = 50 blr_lss = 51 blr_leq = 52 blr_containing = 53 blr_matching = 54 blr_starting = 55 blr_between = 56 blr_or = 57 blr_and = 58 blr_not = 59 blr_any = 60 blr_missing = 61 blr_unique = 62 blr_like = 63 blr_rse = 67 blr_first = 68 blr_project = 69 blr_sort = 70 blr_boolean = 71 blr_ascending = 72 blr_descending = 73 blr_relation = 74 blr_rid = 75 blr_union = 76 blr_map = 77 blr_group_by = 78 blr_aggregate = 79 blr_join_type = 80 blr_agg_count = 83 blr_agg_max = 84 blr_agg_min = 85 blr_agg_total = 86 blr_agg_average = 87 blr_parameter3 = 88 blr_run_max = 89 blr_run_min = 90 blr_run_total = 91 blr_run_average = 92 blr_agg_count2 = 93 blr_agg_count_distinct = 94 blr_agg_total_distinct = 95 blr_agg_average_distinct = 96 blr_function = 100 
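The `blr_*` constants above are flat integer opcodes consumed by code that parses or prints BLR streams. As an illustrative, self-contained sketch (not part of fdb itself; the helper name `build_blr_name_map` is invented here), a reverse lookup table can be built from a namespace to turn opcode values back into symbolic names for debugging:

```python
# Hypothetical debugging helper (not fdb API): invert blr_* constants
# into a {code: name} dict. A few constants from blr.py are redeclared
# so the sketch runs standalone.
blr_add = 34
blr_subtract = 35
blr_multiply = 36
blr_divide = 37

def build_blr_name_map(namespace):
    """Map integer opcode values to their blr_* names.

    BLR reuses some code points (e.g. blr_eoc and blr_union are both 76),
    so on a duplicate the first name seen (alphabetically) is kept.
    """
    names = {}
    for name, value in sorted(namespace.items()):
        if name.startswith('blr_') and isinstance(value, int):
            names.setdefault(value, name)
    return names

BLR_NAMES = build_blr_name_map(globals())
print(BLR_NAMES[36])  # blr_multiply
```

A disassembler loop could then render each byte of a BLR body as `BLR_NAMES.get(code, 'unknown_%d' % code)` instead of a bare number.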
blr_gen_id = 101 blr_prot_mask = 102 blr_upcase = 103 blr_lock_state = 104 blr_value_if = 105 blr_matching2 = 106 blr_index = 107 blr_ansi_like = 108 blr_seek = 112 blr_continue = 0 blr_forward = 1 blr_backward = 2 blr_bof_forward = 3 blr_eof_backward = 4 blr_run_count = 118 blr_rs_stream = 119 blr_exec_proc = 120 blr_procedure = 124 blr_pid = 125 blr_exec_pid = 126 blr_singular = 127 blr_abort = 128 blr_block = 129 blr_error_handler = 130 blr_cast = 131 blr_start_savepoint = 134 blr_end_savepoint = 135 # Access plan items blr_plan = 139 blr_merge = 140 blr_join = 141 blr_sequential = 142 blr_navigational = 143 blr_indices = 144 blr_retrieve = 145 blr_relation2 = 146 blr_rid2 = 147 blr_set_generator = 150 blr_ansi_any = 151 blr_exists = 152 blr_record_version = 154 blr_stall = 155 blr_ansi_all = 158 blr_extract = 159 # sub parameters for blr_extract blr_extract_year = 0 blr_extract_month = 1 blr_extract_day = 2 blr_extract_hour = 3 blr_extract_minute = 4 blr_extract_second = 5 blr_extract_weekday = 6 blr_extract_yearday = 7 # Added in FB 2.1 blr_extract_millisecond = 8 blr_extract_week = 9 blr_current_date = 160 blr_current_timestamp = 161 blr_current_time = 162 # Those codes reuse BLR code space blr_post_arg = 163 blr_exec_into = 164 blr_user_savepoint = 165 blr_dcl_cursor = 166 blr_cursor_stmt = 167 blr_current_timestamp2 = 168 blr_current_time2 = 169 blr_agg_list = 170 blr_agg_list_distinct = 171 blr_modify2 = 172 # FB 1.0 specific BLR blr_current_role = 174 blr_skip = 175 # FB 1.5 specific BLR blr_exec_sql = 176 blr_internal_info = 177 blr_nullsfirst = 178 blr_writelock = 179 blr_nullslast = 180 # FB 2.0 specific BLR blr_lowcase = 181 blr_strlen = 182 # sub parameter for blr_strlen blr_strlen_bit = 0 blr_strlen_char = 1 blr_strlen_octet = 2 blr_trim = 183 # first sub parameter for blr_trim blr_trim_both = 0 blr_trim_leading = 1 blr_trim_trailing = 2 # second sub parameter for blr_trim blr_trim_spaces = 0 blr_trim_characters = 1 # These codes are actions for 
user-defined savepoints blr_savepoint_set = 0 blr_savepoint_release = 1 blr_savepoint_undo = 2 blr_savepoint_release_single = 3 # These codes are actions for cursors blr_cursor_open = 0 blr_cursor_close = 1 blr_cursor_fetch = 2 # FB 2.1 specific BLR blr_init_variable = 184 blr_recurse = 185 blr_sys_function = 186 # FB 2.5 specific BLR blr_auto_trans = 187 blr_similar = 188 blr_exec_stmt = 189 # subcodes of blr_exec_stmt blr_exec_stmt_inputs = 1 # input parameters count blr_exec_stmt_outputs = 2 # output parameters count blr_exec_stmt_sql = 3 blr_exec_stmt_proc_block = 4 blr_exec_stmt_data_src = 5 blr_exec_stmt_user = 6 blr_exec_stmt_pwd = 7 blr_exec_stmt_tran = 8 # not implemented yet blr_exec_stmt_tran_clone = 9 # make transaction parameters equal to current transaction blr_exec_stmt_privs = 10 blr_exec_stmt_in_params = 11 # not named input parameters blr_exec_stmt_in_params2 = 12 # named input parameters blr_exec_stmt_out_params = 13 # output parameters blr_exec_stmt_role = 14 blr_stmt_expr = 190 blr_derived_expr = 191 fdb-1.6.1/fdb/ibase.py0000644000175000001440000023363112702673374015125 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: ibase.py # DESCRIPTION: Python ctypes interface to Firebird client library # CREATED: 6.10.2011 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) 2011 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): Philippe Makowski # ______________________________________. # # See LICENSE.TXT for details. 
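blr.py groups several families of action sub-codes (savepoint actions, cursor actions, `blr_exec_stmt` subcodes) that share the same small integer range and are disambiguated only by context. fdb keeps them as flat module constants; purely as an illustration of an alternative, the groups could be modeled with `enum.IntEnum`, whose members still compare equal to the raw ints used in BLR streams (the class names here are invented for the sketch):

```python
# Illustration only -- fdb itself uses flat module-level ints.
from enum import IntEnum

class SavepointAction(IntEnum):
    # Mirrors blr_savepoint_set/release/undo/release_single above.
    SET = 0
    RELEASE = 1
    UNDO = 2
    RELEASE_SINGLE = 3

class CursorAction(IntEnum):
    # Mirrors blr_cursor_open/close/fetch above.
    OPEN = 0
    CLOSE = 1
    FETCH = 2

# IntEnum members are interchangeable with the plain integer codes,
# so existing comparisons against raw bytes keep working.
assert CursorAction.FETCH == 2
print(SavepointAction(1).name)  # RELEASE
```

The benefit is purely readability: `CursorAction.FETCH` carries its context, while the value 2 alone could equally be a savepoint undo or a trim flag.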
from ctypes import * from ctypes.util import find_library import sys import locale import types import operator import platform import os PYTHON_MAJOR_VER = sys.version_info[0] #------------------- if PYTHON_MAJOR_VER == 3: from queue import PriorityQueue def nativestr(st,charset="latin-1"): if st == None: return st elif isinstance(st, bytes): return st.decode(charset) else: return st def b(st,charset="latin-1"): if st == None: return st elif isinstance(st, bytes): return st else: try: return st.encode(charset) except UnicodeEncodeError: return st def s(st): return st ord2 = lambda x: x if type(x) == IntType else ord(x) if sys.version_info[1] <= 1: def int2byte(i): return bytes((i,)) else: # This is about 2x faster than the implementation above on 3.2+ int2byte = operator.methodcaller("to_bytes", 1, "big") def mychr(i): return i mybytes = bytes myunicode = str mylong = int StringType = str IntType = int LongType = int FloatType = float ListType = list UnicodeType = str TupleType = tuple xrange = range else: from Queue import PriorityQueue def nativestr(st,charset="latin-1"): if st == None: return st elif isinstance(st, unicode): return st.encode(charset) else: return st def b(st,charset="latin-1"): if st == None: return st elif isinstance(st, types.StringType): return st else: try: return st.encode(charset) except UnicodeEncodeError: return st int2byte = chr s = str ord2 = ord def mychr(i): return chr(i) mybytes = str myunicode = unicode mylong = long StringType = types.StringType IntType = types.IntType LongType = types.LongType FloatType = types.FloatType ListType = types.ListType UnicodeType = types.UnicodeType TupleType = types.TupleType xrange = xrange # Support routines from ctypesgen generated file. # As of ctypes 1.0, ctypes does not support custom error-checking # functions on callbacks, nor does it support custom datatypes on # callbacks, so we must ensure that all callbacks return # primitive datatypes. 
#
# Non-primitive return values wrapped with UNCHECKED won't be
# typechecked, and will be converted to c_void_p.
def UNCHECKED(type):
    if (hasattr(type, "_type_") and isinstance(type._type_, str)
            and type._type_ != "P"):
        return type
    else:
        return c_void_p

# ibase.h
FB_API_VER = 25
MAX_BLOB_SEGMENT_SIZE = 65535

# Event queue operation (and priority) codes
OP_DIE = 1
OP_RECORD_AND_REREGISTER = 2

charset_map = {
    # DB CHAR SET NAME : PYTHON CODEC NAME (CANONICAL)
    # -------------------------------------------------------------------------
    None          : locale.getpreferredencoding(),
    'NONE'        : locale.getpreferredencoding(),
    'OCTETS'      : None,  # Allow to pass through unchanged.
    'UNICODE_FSS' : 'utf_8',
    'UTF8'        : 'utf_8',  # (Firebird 2.0+)
    'ASCII'       : 'ascii',
    'SJIS_0208'   : 'shift_jis',
    'EUCJ_0208'   : 'euc_jp',
    'DOS737'      : 'cp737',
    'DOS437'      : 'cp437',
    'DOS850'      : 'cp850',
    'DOS865'      : 'cp865',
    'DOS860'      : 'cp860',
    'DOS863'      : 'cp863',
    'DOS775'      : 'cp775',
    'DOS862'      : 'cp862',
    'DOS864'      : 'cp864',
    'ISO8859_1'   : 'iso8859_1',
    'ISO8859_2'   : 'iso8859_2',
    'ISO8859_3'   : 'iso8859_3',
    'ISO8859_4'   : 'iso8859_4',
    'ISO8859_5'   : 'iso8859_5',
    'ISO8859_6'   : 'iso8859_6',
    'ISO8859_7'   : 'iso8859_7',
    'ISO8859_8'   : 'iso8859_8',
    'ISO8859_9'   : 'iso8859_9',
    'ISO8859_13'  : 'iso8859_13',
    'KSC_5601'    : 'euc_kr',
    'DOS852'      : 'cp852',
    'DOS857'      : 'cp857',
    'DOS858'      : 'cp858',
    'DOS861'      : 'cp861',
    'DOS866'      : 'cp866',
    'DOS869'      : 'cp869',
    'WIN1250'     : 'cp1250',
    'WIN1251'     : 'cp1251',
    'WIN1252'     : 'cp1252',
    'WIN1253'     : 'cp1253',
    'WIN1254'     : 'cp1254',
    'BIG_5'       : 'big5',
    'GB_2312'     : 'gb2312',
    'WIN1255'     : 'cp1255',
    'WIN1256'     : 'cp1256',
    'WIN1257'     : 'cp1257',
    'GB18030'     : 'gb18030',
    'GBK'         : 'gbk',
    'KOI8R'       : 'koi8_r',  # (Firebird 2.0+)
    'KOI8U'       : 'koi8_u',  # (Firebird 2.0+)
    'WIN1258'     : 'cp1258',  # (Firebird 2.0+)
}
DB_CHAR_SET_NAME_TO_PYTHON_ENCODING_MAP = charset_map

# C integer limit constants
SHRT_MIN = -32768
SHRT_MAX = 32767
USHRT_MAX = 65535
INT_MIN = -2147483648
INT_MAX = 2147483647
UINT_MAX = 4294967295
LONG_MIN = -9223372036854775808
LONG_MAX = 9223372036854775807 SSIZE_T_MIN = INT_MIN SSIZE_T_MAX = INT_MAX # Constants DSQL_close = 1 DSQL_drop = 2 DSQL_unprepare = 4 SQLDA_version1 = 1 # Type codes SQL_TEXT = 452 SQL_VARYING = 448 SQL_SHORT = 500 SQL_LONG = 496 SQL_FLOAT = 482 SQL_DOUBLE = 480 SQL_D_FLOAT = 530 SQL_TIMESTAMP = 510 SQL_BLOB = 520 SQL_ARRAY = 540 SQL_QUAD = 550 SQL_TYPE_TIME = 560 SQL_TYPE_DATE = 570 SQL_INT64 = 580 # Firebird 3 SQL_BOOLEAN = 32764 SQL_NULL = 32766 SUBTYPE_NUMERIC = 1 SUBTYPE_DECIMAL = 2 # Internal type codes (for example used by ARRAY descriptor) blr_text = 14 blr_text2 = 15 blr_short = 7 blr_long = 8 blr_quad = 9 blr_float = 10 blr_double = 27 blr_d_float = 11 blr_timestamp = 35 blr_varying = 37 blr_varying2 = 38 blr_blob = 261 blr_cstring = 40 blr_cstring2 = 41 blr_blob_id = 45 blr_sql_date = 12 blr_sql_time = 13 blr_int64 = 16 blr_blob2 = 17 # Added in FB 2.1 blr_domain_name = 18 blr_domain_name2 = 19 blr_not_nullable = 20 # Added in FB 2.5 blr_column_name = 21 blr_column_name2 = 22 # Added in FB 3.0 blr_bool = 23 # blr_domain_type_of = 0 blr_domain_full = 1 # Rest of BLR is defined in fdb.blr # Database parameter block stuff isc_dpb_version1 = 1 isc_dpb_version2 = 2 # Firebird 3 isc_dpb_cdd_pathname = 1 isc_dpb_allocation = 2 isc_dpb_journal = 3 isc_dpb_page_size = 4 isc_dpb_num_buffers = 5 isc_dpb_buffer_length = 6 isc_dpb_debug = 7 isc_dpb_garbage_collect = 8 isc_dpb_verify = 9 isc_dpb_sweep = 10 isc_dpb_enable_journal = 11 isc_dpb_disable_journal = 12 isc_dpb_dbkey_scope = 13 isc_dpb_number_of_users = 14 isc_dpb_trace = 15 isc_dpb_no_garbage_collect = 16 isc_dpb_damaged = 17 isc_dpb_license = 18 isc_dpb_sys_user_name = 19 isc_dpb_encrypt_key = 20 isc_dpb_activate_shadow = 21 isc_dpb_sweep_interval = 22 isc_dpb_delete_shadow = 23 isc_dpb_force_write = 24 isc_dpb_begin_log = 25 isc_dpb_quit_log = 26 isc_dpb_no_reserve = 27 isc_dpb_user_name = 28 isc_dpb_password = 29 isc_dpb_password_enc = 30 isc_dpb_sys_user_name_enc = 31 isc_dpb_interp = 32 
isc_dpb_online_dump = 33 isc_dpb_old_file_size = 34 isc_dpb_old_num_files = 35 isc_dpb_old_file = 36 isc_dpb_old_start_page = 37 isc_dpb_old_start_seqno = 38 isc_dpb_old_start_file = 39 isc_dpb_drop_walfile = 40 isc_dpb_old_dump_id = 41 isc_dpb_wal_backup_dir = 42 isc_dpb_wal_chkptlen = 43 isc_dpb_wal_numbufs = 44 isc_dpb_wal_bufsize = 45 isc_dpb_wal_grp_cmt_wait = 46 isc_dpb_lc_messages = 47 isc_dpb_lc_ctype = 48 isc_dpb_cache_manager = 49 isc_dpb_shutdown = 50 isc_dpb_online = 51 isc_dpb_shutdown_delay = 52 isc_dpb_reserved = 53 isc_dpb_overwrite = 54 isc_dpb_sec_attach = 55 isc_dpb_disable_wal = 56 isc_dpb_connect_timeout = 57 isc_dpb_dummy_packet_interval = 58 isc_dpb_gbak_attach = 59 isc_dpb_sql_role_name = 60 isc_dpb_set_page_buffers = 61 isc_dpb_working_directory = 62 isc_dpb_sql_dialect = 63 isc_dpb_set_db_readonly = 64 isc_dpb_set_db_sql_dialect = 65 isc_dpb_gfix_attach = 66 isc_dpb_gstat_attach = 67 isc_dpb_set_db_charset = 68 isc_dpb_gsec_attach = 69 # Deprecated in FB3 isc_dpb_address_path = 70 # Added in FB 2.1 isc_dpb_process_id = 71 isc_dpb_no_db_triggers = 72 isc_dpb_trusted_auth = 73 isc_dpb_process_name = 74 # Added in FB 2.5 isc_dpb_trusted_role = 75 isc_dpb_org_filename = 76 isc_dpb_utf8_filename = 77 isc_dpb_ext_call_depth = 78 # Added in FB 3.0 isc_dpb_auth_block = 79 isc_dpb_remote_protocol = 81 isc_dpb_client_version = 80 isc_dpb_host_name = 82 isc_dpb_os_user = 83 isc_dpb_specific_auth_data = 84 isc_dpb_auth_plugin_list = 85 isc_dpb_auth_plugin_name = 86 isc_dpb_config = 87 isc_dpb_nolinger = 88 isc_dpb_reset_icu = 89 isc_dpb_map_attach = 90 # structural codes isc_info_end = 1 isc_info_truncated = 2 isc_info_error = 3 isc_info_data_not_ready = 4 isc_info_length = 126 isc_info_flag_end = 127 isc_info_req_select_count = 13 isc_info_req_insert_count = 14 isc_info_req_update_count = 15 isc_info_req_delete_count = 16 # DB Info item codes isc_info_db_id = 4 isc_info_reads = 5 isc_info_writes = 6 isc_info_fetches = 7 isc_info_marks = 8 
isc_info_implementation = 11 isc_info_isc_version = 12 isc_info_base_level = 13 # Note: This is useless info item, consider as obsolete isc_info_page_size = 14 isc_info_num_buffers = 15 isc_info_limbo = 16 isc_info_current_memory = 17 isc_info_max_memory = 18 isc_info_window_turns = 19 isc_info_license = 20 isc_info_allocation = 21 isc_info_attachment_id = 22 isc_info_read_seq_count = 23 isc_info_read_idx_count = 24 isc_info_insert_count = 25 isc_info_update_count = 26 isc_info_delete_count = 27 isc_info_backout_count = 28 isc_info_purge_count = 29 isc_info_expunge_count = 30 isc_info_sweep_interval = 31 isc_info_ods_version = 32 isc_info_ods_minor_version = 33 isc_info_no_reserve = 34 isc_info_logfile = 35 isc_info_cur_logfile_name = 36 isc_info_cur_log_part_offset = 37 isc_info_num_wal_buffers = 38 isc_info_wal_buffer_size = 39 isc_info_wal_ckpt_length = 40 isc_info_wal_cur_ckpt_interval = 41 isc_info_wal_prv_ckpt_fname = 42 isc_info_wal_prv_ckpt_poffset = 43 isc_info_wal_recv_ckpt_fname = 44 isc_info_wal_recv_ckpt_poffset = 45 isc_info_wal_grpc_wait_usecs = 47 isc_info_wal_num_io = 48 isc_info_wal_avg_io_size = 49 isc_info_wal_num_commits = 50 isc_info_wal_avg_grpc_size = 51 isc_info_forced_writes = 52 isc_info_user_names = 53 isc_info_page_errors = 54 isc_info_record_errors = 55 isc_info_bpage_errors = 56 isc_info_dpage_errors = 57 isc_info_ipage_errors = 58 isc_info_ppage_errors = 59 isc_info_tpage_errors = 60 isc_info_set_page_buffers = 61 isc_info_db_sql_dialect = 62 isc_info_db_read_only = 63 isc_info_db_size_in_pages = 64 # Values 65 -100 unused to avoid conflict with InterBase frb_info_att_charset = 101 isc_info_db_class = 102 isc_info_firebird_version = 103 isc_info_oldest_transaction = 104 isc_info_oldest_active = 105 isc_info_oldest_snapshot = 106 isc_info_next_transaction = 107 isc_info_db_provider = 108 isc_info_active_transactions = 109 isc_info_active_tran_count = 110 isc_info_creation_date = 111 isc_info_db_file_size = 112 # added in FB 2.1, 
nbackup-related - size (in pages) of locked db fb_info_page_contents = 113 # added in FB 2.5 # Added in Firebird 3.0 fb_info_page_warns = 115 fb_info_record_warns = 116 fb_info_bpage_warns = 117 fb_info_dpage_warns = 118 fb_info_ipage_warns = 119 fb_info_ppage_warns = 120 fb_info_tpage_warns = 121 fb_info_pip_errors = 122 fb_info_pip_warns = 123 isc_info_db_last_value = (fb_info_pip_warns + 1) isc_info_version = isc_info_isc_version # Blob information items isc_info_blob_num_segments = 4 isc_info_blob_max_segment = 5 isc_info_blob_total_length = 6 isc_info_blob_type = 7 # Transaction information items isc_info_tra_id = 4 isc_info_tra_oldest_interesting = 5 isc_info_tra_oldest_snapshot = 6 isc_info_tra_oldest_active = 7 isc_info_tra_isolation = 8 isc_info_tra_access = 9 isc_info_tra_lock_timeout = 10 fb_info_tra_dbpath = 11 # Firebird 3.0 # isc_info_tra_isolation responses isc_info_tra_consistency = 1 isc_info_tra_concurrency = 2 isc_info_tra_read_committed = 3 # isc_info_tra_read_committed options isc_info_tra_no_rec_version = 0 isc_info_tra_rec_version = 1 # isc_info_tra_access responses isc_info_tra_readonly = 0 isc_info_tra_readwrite = 1 # SQL information items isc_info_sql_select = 4 isc_info_sql_bind = 5 isc_info_sql_num_variables = 6 isc_info_sql_describe_vars = 7 isc_info_sql_describe_end = 8 isc_info_sql_sqlda_seq = 9 isc_info_sql_message_seq = 10 isc_info_sql_type = 11 isc_info_sql_sub_type = 12 isc_info_sql_scale = 13 isc_info_sql_length = 14 isc_info_sql_null_ind = 15 isc_info_sql_field = 16 isc_info_sql_relation = 17 isc_info_sql_owner = 18 isc_info_sql_alias = 19 isc_info_sql_sqlda_start = 20 isc_info_sql_stmt_type = 21 isc_info_sql_get_plan = 22 isc_info_sql_records = 23 isc_info_sql_batch_fetch = 24 isc_info_sql_relation_alias = 25 # Added in Firebird 3.0 isc_info_sql_explain_plan = 26 isc_info_sql_stmt_flags = 27 # SQL information return values isc_info_sql_stmt_select = 1 isc_info_sql_stmt_insert = 2 isc_info_sql_stmt_update = 3 
isc_info_sql_stmt_delete = 4 isc_info_sql_stmt_ddl = 5 isc_info_sql_stmt_get_segment = 6 isc_info_sql_stmt_put_segment = 7 isc_info_sql_stmt_exec_procedure = 8 isc_info_sql_stmt_start_trans = 9 isc_info_sql_stmt_commit = 10 isc_info_sql_stmt_rollback = 11 isc_info_sql_stmt_select_for_upd = 12 isc_info_sql_stmt_set_generator = 13 isc_info_sql_stmt_savepoint = 14 # Transaction parameter block stuff isc_tpb_version1 = 1 isc_tpb_version3 = 3 isc_tpb_consistency = 1 isc_tpb_concurrency = 2 isc_tpb_shared = 3 isc_tpb_protected = 4 isc_tpb_exclusive = 5 isc_tpb_wait = 6 isc_tpb_nowait = 7 isc_tpb_read = 8 isc_tpb_write = 9 isc_tpb_lock_read = 10 isc_tpb_lock_write = 11 isc_tpb_verb_time = 12 isc_tpb_commit_time = 13 isc_tpb_ignore_limbo = 14 isc_tpb_read_committed = 15 isc_tpb_autocommit = 16 isc_tpb_rec_version = 17 isc_tpb_no_rec_version = 18 isc_tpb_restart_requests = 19 isc_tpb_no_auto_undo = 20 isc_tpb_lock_timeout = 21 # BLOB parameter buffer isc_bpb_version1 = 1 isc_bpb_source_type = 1 isc_bpb_target_type = 2 isc_bpb_type = 3 isc_bpb_source_interp = 4 isc_bpb_target_interp = 5 isc_bpb_filter_parameter = 6 # Added in FB 2.1 isc_bpb_storage = 7 isc_bpb_type_segmented = 0 isc_bpb_type_stream = 1 # Added in FB 2.1 isc_bpb_storage_main = 0 isc_bpb_storage_temp = 2 # BLOB codes isc_segment = 335544366 isc_segstr_eof = 335544367 # Services API # Service parameter block stuff isc_spb_current_version = 2 isc_spb_version = isc_spb_current_version isc_spb_version3 = 3 isc_spb_user_name = isc_dpb_user_name isc_spb_sys_user_name = isc_dpb_sys_user_name isc_spb_sys_user_name_enc = isc_dpb_sys_user_name_enc isc_spb_password = isc_dpb_password isc_spb_password_enc = isc_dpb_password_enc isc_spb_command_line = 105 isc_spb_dbname = 106 isc_spb_verbose = 107 isc_spb_options = 108 isc_spb_address_path = 109 # Added in FB 2.1 isc_spb_process_id = 110 isc_spb_trusted_auth = 111 isc_spb_process_name = 112 # Added in FB 2.5 isc_spb_trusted_role = 113 # Added in FB 3.0 isc_spb_verbint = 
114 isc_spb_auth_block = 115 isc_spb_auth_plugin_name = 116 isc_spb_auth_plugin_list = 117 isc_spb_utf8_filename = 118 isc_spb_client_version = 119 isc_spb_remote_protocol = 120 isc_spb_host_name = 121 isc_spb_os_user = 122 isc_spb_config = 123 isc_spb_expected_db = 124 isc_spb_specific_auth_data = isc_spb_trusted_auth # Service action items isc_action_svc_backup = 1 # Starts database backup process on the server isc_action_svc_restore = 2 # Starts database restore process on the server isc_action_svc_repair = 3 # Starts database repair process on the server isc_action_svc_add_user = 4 # Adds a new user to the security database isc_action_svc_delete_user = 5 # Deletes a user record from the security database isc_action_svc_modify_user = 6 # Modifies a user record in the security database isc_action_svc_display_user = 7 # Displays a user record from the security database isc_action_svc_properties = 8 # Sets database properties isc_action_svc_add_license = 9 # Adds a license to the license file isc_action_svc_remove_license = 10 # Removes a license from the license file isc_action_svc_db_stats = 11 # Retrieves database statistics isc_action_svc_get_ib_log = 12 # Retrieves the InterBase log file from the server isc_action_svc_get_fb_log = 12 # Retrieves the Firebird log file from the server # Added in FB 2.5 isc_action_svc_nbak = 20 isc_action_svc_nrest = 21 isc_action_svc_trace_start = 22 isc_action_svc_trace_stop = 23 isc_action_svc_trace_suspend = 24 isc_action_svc_trace_resume = 25 isc_action_svc_trace_list = 26 isc_action_svc_set_mapping = 27 isc_action_svc_drop_mapping = 28 isc_action_svc_display_user_adm = 29 isc_action_svc_validate = 30 # Firebird 3.0 isc_action_svc_last = 31 # Service information items isc_info_svc_svr_db_info = 50 # Retrieves the number of attachments and databases */ isc_info_svc_get_config = 53 # Retrieves the parameters and values for IB_CONFIG */ isc_info_svc_version = 54 # Retrieves the version of the services manager */ 
isc_info_svc_server_version = 55 # Retrieves the version of the Firebird server */ isc_info_svc_implementation = 56 # Retrieves the implementation of the Firebird server */ isc_info_svc_capabilities = 57 # Retrieves a bitmask representing the server's capabilities */ isc_info_svc_user_dbpath = 58 # Retrieves the path to the security database in use by the server */ isc_info_svc_get_env = 59 # Retrieves the setting of $FIREBIRD */ isc_info_svc_get_env_lock = 60 # Retrieves the setting of $FIREBIRD_LCK */ isc_info_svc_get_env_msg = 61 # Retrieves the setting of $FIREBIRD_MSG */ isc_info_svc_line = 62 # Retrieves 1 line of service output per call */ isc_info_svc_to_eof = 63 # Retrieves as much of the server output as will fit in the supplied buffer */ isc_info_svc_timeout = 64 # Sets / signifies a timeout value for reading service information */ isc_info_svc_limbo_trans = 66 # Retrieve the limbo transactions */ isc_info_svc_running = 67 # Checks to see if a service is running on an attachment */ isc_info_svc_get_users = 68 # Returns the user information from isc_action_svc_display_users */ # Parameters for isc_action_{add|del|mod|disp)_user isc_spb_sec_userid = 5 isc_spb_sec_groupid = 6 isc_spb_sec_username = 7 isc_spb_sec_password = 8 isc_spb_sec_groupname = 9 isc_spb_sec_firstname = 10 isc_spb_sec_middlename = 11 isc_spb_sec_lastname = 12 isc_spb_sec_admin = 13 # Parameters for isc_action_svc_backup isc_spb_bkp_file = 5 isc_spb_bkp_factor = 6 isc_spb_bkp_length = 7 isc_spb_bkp_skip_data = 8 # Firebird 3.0 isc_spb_bkp_stat = 15 # Firebird 3.0 isc_spb_bkp_ignore_checksums = 0x01 isc_spb_bkp_ignore_limbo = 0x02 isc_spb_bkp_metadata_only = 0x04 isc_spb_bkp_no_garbage_collect = 0x08 isc_spb_bkp_old_descriptions = 0x10 isc_spb_bkp_non_transportable = 0x20 isc_spb_bkp_convert = 0x40 isc_spb_bkp_expand = 0x80 isc_spb_bkp_no_triggers = 0x8000 # Parameters for isc_action_svc_properties isc_spb_prp_page_buffers = 5 isc_spb_prp_sweep_interval = 6 isc_spb_prp_shutdown_db = 7 
isc_spb_prp_deny_new_attachments = 9 isc_spb_prp_deny_new_transactions = 10 isc_spb_prp_reserve_space = 11 isc_spb_prp_write_mode = 12 isc_spb_prp_access_mode = 13 isc_spb_prp_set_sql_dialect = 14 isc_spb_prp_activate = 0x0100 isc_spb_prp_db_online = 0x0200 isc_spb_prp_nolinger = 0x0400 # Firebird 3.0 isc_spb_prp_force_shutdown = 41 isc_spb_prp_attachments_shutdown = 42 isc_spb_prp_transactions_shutdown = 43 isc_spb_prp_shutdown_mode = 44 isc_spb_prp_online_mode = 45 # Parameters for isc_spb_prp_shutdown_mode and isc_spb_prp_online_mode isc_spb_prp_sm_normal = 0 isc_spb_prp_sm_multi = 1 isc_spb_prp_sm_single = 2 isc_spb_prp_sm_full = 3 # Parameters for isc_spb_prp_reserve_space isc_spb_prp_res_use_full = 35 isc_spb_prp_res = 36 # Parameters for isc_spb_prp_write_mode isc_spb_prp_wm_async = 37 isc_spb_prp_wm_sync = 38 # Parameters for isc_action_svc_validate isc_spb_val_tab_incl = 1 isc_spb_val_tab_excl = 2 isc_spb_val_idx_incl = 3 isc_spb_val_idx_excl = 4 isc_spb_val_lock_timeout = 5 # Parameters for isc_spb_prp_access_mode isc_spb_prp_am_readonly = 39 isc_spb_prp_am_readwrite = 40 # Parameters for isc_action_svc_repair isc_spb_rpr_commit_trans = 15 isc_spb_rpr_rollback_trans = 34 isc_spb_rpr_recover_two_phase = 17 isc_spb_tra_id = 18 isc_spb_single_tra_id = 19 isc_spb_multi_tra_id = 20 isc_spb_tra_state = 21 isc_spb_tra_state_limbo = 22 isc_spb_tra_state_commit = 23 isc_spb_tra_state_rollback = 24 isc_spb_tra_state_unknown = 25 isc_spb_tra_host_site = 26 isc_spb_tra_remote_site = 27 isc_spb_tra_db_path = 28 isc_spb_tra_advise = 29 isc_spb_tra_advise_commit = 30 isc_spb_tra_advise_rollback = 31 isc_spb_tra_advise_unknown = 33 # Added in Firebird 3.0 isc_spb_tra_id_64 = 46 isc_spb_single_tra_id_64 = 47 isc_spb_multi_tra_id_64 = 48 isc_spb_rpr_validate_db = 0x01 isc_spb_rpr_sweep_db = 0x02 isc_spb_rpr_mend_db = 0x04 isc_spb_rpr_list_limbo_trans = 0x08 isc_spb_rpr_check_db = 0x10 isc_spb_rpr_ignore_checksum = 0x20 isc_spb_rpr_kill_shadows = 0x40 isc_spb_rpr_full = 
0x80 isc_spb_rpr_icu = 0x0800 # Firebird 3.0 # Parameters for isc_action_svc_restore isc_spb_res_skip_data = isc_spb_bkp_skip_data # Firebird 3.0 isc_spb_res_buffers = 9 isc_spb_res_page_size = 10 isc_spb_res_length = 11 isc_spb_res_access_mode = 12 isc_spb_res_fix_fss_data = 13 isc_spb_res_fix_fss_metadata = 14 isc_spb_res_stat = 15 # Firebird 3.0 isc_spb_res_metadata_only = 0x04 isc_spb_res_deactivate_idx = 0x0100 isc_spb_res_no_shadow = 0x0200 isc_spb_res_no_validity = 0x0400 isc_spb_res_one_at_a_time = 0x0800 isc_spb_res_replace = 0x1000 isc_spb_res_create = 0x2000 isc_spb_res_use_all_space = 0x4000 # Parameters for isc_spb_res_access_mode isc_spb_res_am_readonly = isc_spb_prp_am_readonly isc_spb_res_am_readwrite = isc_spb_prp_am_readwrite # Parameters for isc_info_svc_svr_db_info isc_spb_num_att = 5 isc_spb_num_db = 6 # Parameters for isc_info_svc_db_stats isc_spb_sts_data_pages = 0x01 isc_spb_sts_db_log = 0x02 isc_spb_sts_hdr_pages = 0x04 isc_spb_sts_idx_pages = 0x08 isc_spb_sts_sys_relations = 0x10 isc_spb_sts_record_versions = 0x20 isc_spb_sts_table = 0x40 isc_spb_sts_nocreation = 0x80 isc_spb_sts_encryption = 0x100 # Firebird 3.0 # Parameters for isc_action_svc_nbak isc_spb_nbk_level = 5 isc_spb_nbk_file = 6 isc_spb_nbk_direct = 7 isc_spb_nbk_no_triggers = 0x01 # trace isc_spb_trc_id = 1 isc_spb_trc_name = 2 isc_spb_trc_cfg = 3 #------------------- STRING = c_char_p WSTRING = c_wchar_p blb_got_eof = 0 blb_got_fragment = -1 blb_got_full_segment = 1 blb_seek_relative = 1 blb_seek_from_tail = 2 # Implementation codes isc_info_db_impl_rdb_vms = 1 isc_info_db_impl_rdb_eln = 2 isc_info_db_impl_rdb_eln_dev = 3 isc_info_db_impl_rdb_vms_y = 4 isc_info_db_impl_rdb_eln_y = 5 isc_info_db_impl_jri = 6 isc_info_db_impl_jsv = 7 isc_info_db_impl_isc_apl_68K = 25 isc_info_db_impl_isc_vax_ultr = 26 isc_info_db_impl_isc_vms = 27 isc_info_db_impl_isc_sun_68k = 28 isc_info_db_impl_isc_os2 = 29 isc_info_db_impl_isc_sun4 = 30 isc_info_db_impl_isc_hp_ux = 31 
isc_info_db_impl_isc_sun_386i = 32 isc_info_db_impl_isc_vms_orcl = 33 isc_info_db_impl_isc_mac_aux = 34 isc_info_db_impl_isc_rt_aix = 35 isc_info_db_impl_isc_mips_ult = 36 isc_info_db_impl_isc_xenix = 37 isc_info_db_impl_isc_dg = 38 isc_info_db_impl_isc_hp_mpexl = 39 isc_info_db_impl_isc_hp_ux68K = 40 isc_info_db_impl_isc_sgi = 41 isc_info_db_impl_isc_sco_unix = 42 isc_info_db_impl_isc_cray = 43 isc_info_db_impl_isc_imp = 44 isc_info_db_impl_isc_delta = 45 isc_info_db_impl_isc_next = 46 isc_info_db_impl_isc_dos = 47 isc_info_db_impl_m88K = 48 isc_info_db_impl_unixware = 49 isc_info_db_impl_isc_winnt_x86 = 50 isc_info_db_impl_isc_epson = 51 isc_info_db_impl_alpha_osf = 52 isc_info_db_impl_alpha_vms = 53 isc_info_db_impl_netware_386 = 54 isc_info_db_impl_win_only = 55 isc_info_db_impl_ncr_3000 = 56 isc_info_db_impl_winnt_ppc = 57 isc_info_db_impl_dg_x86 = 58 isc_info_db_impl_sco_ev = 59 isc_info_db_impl_i386 = 60 isc_info_db_impl_freebsd = 61 isc_info_db_impl_netbsd = 62 isc_info_db_impl_darwin_ppc = 63 isc_info_db_impl_sinixz = 64 isc_info_db_impl_linux_sparc = 65 isc_info_db_impl_linux_amd64 = 66 isc_info_db_impl_freebsd_amd64 = 67 isc_info_db_impl_winnt_amd64 = 68 isc_info_db_impl_linux_ppc = 69 isc_info_db_impl_darwin_x86 = 70 isc_info_db_impl_linux_mipsel = 71 # changed in 2.1, it was isc_info_db_impl_sun_amd64 in 2.0 # Added in FB 2.1 isc_info_db_impl_linux_mips = 72 isc_info_db_impl_darwin_x64 = 73 isc_info_db_impl_sun_amd64 = 74 isc_info_db_impl_linux_arm = 75 isc_info_db_impl_linux_ia64 = 76 isc_info_db_impl_darwin_ppc64 = 77 isc_info_db_impl_linux_s390x = 78 isc_info_db_impl_linux_s390 = 79 isc_info_db_impl_linux_sh = 80 isc_info_db_impl_linux_sheb = 81 # Added in FB 2.5 isc_info_db_impl_linux_hppa = 82 isc_info_db_impl_linux_alpha = 83 isc_info_db_impl_linux_arm64 = 84 isc_info_db_impl_linux_ppc64el = 85 isc_info_db_impl_linux_ppc64 = 86 # Firebird 3.0 isc_info_db_impl_last_value = (isc_info_db_impl_linux_ppc64 + 1) # Info DB provider 
isc_info_db_code_rdb_eln = 1 isc_info_db_code_rdb_vms = 2 isc_info_db_code_interbase = 3 isc_info_db_code_firebird = 4 isc_info_db_code_last_value = (isc_info_db_code_firebird+1) # Info db class isc_info_db_class_access = 1 isc_info_db_class_y_valve = 2 isc_info_db_class_rem_int = 3 isc_info_db_class_rem_srvr = 4 isc_info_db_class_pipe_int = 7 isc_info_db_class_pipe_srvr = 8 isc_info_db_class_sam_int = 9 isc_info_db_class_sam_srvr = 10 isc_info_db_class_gateway = 11 isc_info_db_class_cache = 12 isc_info_db_class_classic_access = 13 isc_info_db_class_server_access = 14 isc_info_db_class_last_value = (isc_info_db_class_server_access+1) # status codes isc_segment = 335544366 FB_API_HANDLE = c_uint if platform.architecture() == ('64bit', 'WindowsPE'): intptr_t = c_longlong uintptr_t = c_ulonglong else: intptr_t = c_long uintptr_t = c_ulong ISC_STATUS = intptr_t ISC_STATUS_PTR = POINTER(ISC_STATUS) ISC_STATUS_ARRAY = ISC_STATUS * 20 FB_SQLSTATE_STRING = c_char * (5 + 1) ISC_LONG = c_int ISC_ULONG = c_uint ISC_SHORT = c_short ISC_USHORT = c_ushort ISC_UCHAR = c_ubyte ISC_SCHAR = c_char ISC_INT64 = c_longlong ISC_UINT64 = c_ulonglong ISC_DATE = c_int ISC_TIME = c_uint ISC_TRUE = 1 ISC_FALSE = 0 class ISC_TIMESTAMP(Structure): pass ISC_TIMESTAMP._fields_ = [ ('timestamp_date', ISC_DATE), ('timestamp_time', ISC_TIME), ] class GDS_QUAD_t(Structure): pass GDS_QUAD_t._fields_ = [ ('gds_quad_high', ISC_LONG), ('gds_quad_low', ISC_ULONG), ] GDS_QUAD = GDS_QUAD_t ISC_QUAD = GDS_QUAD_t isc_att_handle = FB_API_HANDLE isc_blob_handle = FB_API_HANDLE isc_db_handle = FB_API_HANDLE isc_req_handle = FB_API_HANDLE isc_stmt_handle = FB_API_HANDLE isc_svc_handle = FB_API_HANDLE isc_tr_handle = FB_API_HANDLE isc_resv_handle = ISC_LONG FB_SHUTDOWN_CALLBACK = CFUNCTYPE(UNCHECKED(c_int), c_int, c_int, POINTER(None)) ISC_CALLBACK = CFUNCTYPE(None) ISC_PRINT_CALLBACK = CFUNCTYPE(None, c_void_p, c_short, STRING) ISC_VERSION_CALLBACK = CFUNCTYPE(None, c_void_p, STRING) ISC_EVENT_CALLBACK = 
CFUNCTYPE(None, POINTER(ISC_UCHAR), c_ushort, POINTER(ISC_UCHAR)) class ISC_ARRAY_BOUND(Structure): pass ISC_ARRAY_BOUND._fields_ = [ ('array_bound_lower', c_short), ('array_bound_upper', c_short), ] class ISC_ARRAY_DESC(Structure): pass ISC_ARRAY_DESC._fields_ = [ ('array_desc_dtype', ISC_UCHAR), ('array_desc_scale', ISC_UCHAR), ## was ISC_SCHAR), ('array_desc_length', c_ushort), ('array_desc_field_name', ISC_SCHAR * 32), ('array_desc_relation_name', ISC_SCHAR * 32), ('array_desc_dimensions', c_short), ('array_desc_flags', c_short), ('array_desc_bounds', ISC_ARRAY_BOUND * 16), ] class ISC_BLOB_DESC(Structure): pass ISC_BLOB_DESC._fields_ = [ ('blob_desc_subtype', c_short), ('blob_desc_charset', c_short), ('blob_desc_segment_size', c_short), ('blob_desc_field_name', ISC_UCHAR * 32), ('blob_desc_relation_name', ISC_UCHAR * 32), ] class isc_blob_ctl(Structure): pass isc_blob_ctl._fields_ = [ ('ctl_source', CFUNCTYPE(ISC_STATUS)), ('ctl_source_handle', POINTER(isc_blob_ctl)), ('ctl_to_sub_type', c_short), ('ctl_from_sub_type', c_short), ('ctl_buffer_length', c_ushort), ('ctl_segment_length', c_ushort), ('ctl_bpb_length', c_ushort), ('ctl_bpb', STRING), ('ctl_buffer', POINTER(ISC_UCHAR)), ('ctl_max_segment', ISC_LONG), ('ctl_number_segments', ISC_LONG), ('ctl_total_length', ISC_LONG), ('ctl_status', POINTER(ISC_STATUS)), ('ctl_data', c_long * 8), ] ISC_BLOB_CTL = POINTER(isc_blob_ctl) class bstream(Structure): pass bstream._fields_ = [ ('bstr_blob', isc_blob_handle), ('bstr_buffer', POINTER(c_char)), # STRING ('bstr_ptr', POINTER(c_char)), # STRING ('bstr_length', c_short), ('bstr_cnt', c_short), ('bstr_mode', c_char), ] BSTREAM = bstream FB_BLOB_STREAM = POINTER(bstream) # values for enumeration 'blob_lseek_mode' blob_lseek_mode = c_int # enum # values for enumeration 'blob_get_result' blob_get_result = c_int # enum class blobcallback(Structure): pass blobcallback._fields_ = [ ('blob_get_segment', CFUNCTYPE(c_short, c_void_p, POINTER(ISC_UCHAR), c_ushort, 
POINTER(ISC_USHORT))), ('blob_handle', c_void_p), ('blob_number_segments', ISC_LONG), ('blob_max_segment', ISC_LONG), ('blob_total_length', ISC_LONG), ('blob_put_segment', CFUNCTYPE(None, c_void_p, POINTER(ISC_UCHAR), c_ushort)), ('blob_lseek', CFUNCTYPE(ISC_LONG, c_void_p, c_ushort, c_int)), ] BLOBCALLBACK = POINTER(blobcallback) class paramdsc(Structure): pass paramdsc._fields_ = [ ('dsc_dtype', ISC_UCHAR), ('dsc_scale', c_byte), ('dsc_length', ISC_USHORT), ('dsc_sub_type', c_short), ('dsc_flags', ISC_USHORT), ('dsc_address', POINTER(ISC_UCHAR)), ] PARAMDSC = paramdsc class paramvary(Structure): pass paramvary._fields_ = [ ('vary_length', ISC_USHORT), ('vary_string', ISC_UCHAR * 1), ] PARAMVARY = paramvary class ISC_TEB(Structure): pass ISC_TEB._fields_ = [ ('db_ptr', POINTER(isc_db_handle)), ('tpb_len', ISC_SHORT), ('tpb_ptr', STRING) ] class XSQLVAR(Structure): pass XSQLVAR._fields_ = [ ('sqltype', ISC_SHORT), ('sqlscale', ISC_SHORT), ('sqlsubtype', ISC_SHORT), ('sqllen', ISC_SHORT), ('sqldata', POINTER(c_char)), # STRING), ('sqlind', POINTER(ISC_SHORT)), ('sqlname_length', ISC_SHORT), ('sqlname', ISC_SCHAR * 32), ('relname_length', ISC_SHORT), ('relname', ISC_SCHAR * 32), ('ownname_length', ISC_SHORT), ('ownname', ISC_SCHAR * 32), ('aliasname_length', ISC_SHORT), ('aliasname', ISC_SCHAR * 32), ] class XSQLDA(Structure): pass XSQLDA._fields_ = [ ('version', ISC_SHORT), ('sqldaid', ISC_SCHAR * 8), ('sqldabc', ISC_LONG), ('sqln', ISC_SHORT), ('sqld', ISC_SHORT), ('sqlvar', XSQLVAR * 1), ] XSQLDA_PTR = POINTER(XSQLDA) def portable_int (buf): pass class USER_SEC_DATA(Structure): pass USER_SEC_DATA._fields_ = [ ('sec_flags', c_short), ('uid', c_int), ('gid', c_int), ('protocol', c_int), ('server', STRING), ('user_name', STRING), ('password', STRING), ('group_name', STRING), ('first_name', STRING), ('middle_name', STRING), ('last_name', STRING), ('dba_user_name', STRING), ('dba_password', STRING), ] RESULT_VECTOR = ISC_ULONG * 15 # values for enumeration 
'db_info_types' db_info_types = c_int # enum # values for enumeration 'info_db_implementations' info_db_implementations = c_int # enum # values for enumeration 'info_db_class' info_db_class = c_int # enum # values for enumeration 'info_db_provider' info_db_provider = c_int # enum class imaxdiv_t(Structure): pass imaxdiv_t._fields_ = [ ('quot', c_long), ('rem', c_long), ] intmax_t = c_long int8_t = c_int8 int16_t = c_int16 int32_t = c_int32 int64_t = c_int64 uint8_t = c_uint8 uint16_t = c_uint16 uint32_t = c_uint32 uint64_t = c_uint64 int_least8_t = c_byte int_least16_t = c_short int_least32_t = c_int int_least64_t = c_long uint_least8_t = c_ubyte uint_least16_t = c_ushort uint_least32_t = c_uint uint_least64_t = c_ulong int_fast8_t = c_byte int_fast16_t = c_long int_fast32_t = c_long int_fast64_t = c_long uint_fast8_t = c_ubyte uint_fast16_t = c_ulong uint_fast32_t = c_ulong uint_fast64_t = c_ulong ptrdiff_t = c_long size_t = c_ulong uintmax_t = c_ulong class fbclient_API(object): """Firebird Client API interface object. Loads Firebird Client Library and exposes API functions as member methods. Uses :ref:`ctypes ` for bindings. 
""" def __init__(self,fb_library_name=None): def get_key(key, sub_key): try: return winreg.OpenKey(key, sub_key) except: return None if fb_library_name is None: if sys.platform == 'darwin': fb_library_name = find_library('Firebird') # Next elif is necessary hotfix for ctypes issue # http://bugs.python.org/issue16283 elif sys.platform == 'win32': fb_library_name = find_library('fbclient.dll') if not fb_library_name: # let's try windows registry if PYTHON_MAJOR_VER == 3: import winreg else: import _winreg as winreg # try find via installed Firebird server key = get_key(winreg.HKEY_LOCAL_MACHINE, 'SOFTWARE\Firebird Project\Firebird Server\Instances') if not key: key = get_key(winreg.HKEY_LOCAL_MACHINE, 'SOFTWARE\Wow6432Node\Firebird Project\Firebird Server\Instances') if key: instFold = winreg.QueryValueEx(key,'DefaultInstance') fb_library_name = os.path.join(os.path.join(instFold[0], 'bin'), 'fbclient.dll') else: fb_library_name = find_library('fbclient') if not fb_library_name: try: x = CDLL('libfbclient.so') fb_library_name = 'libfbclient.so' except: pass if not fb_library_name: raise Exception("The location of Firebird Client Library could not be determined.") elif not os.path.exists(fb_library_name): raise Exception("Firebird Client Library '%s' not found" % fb_library_name) if sys.platform in ['win32', 'cygwin', 'os2', 'os2emx']: fb_library = WinDLL(fb_library_name) else: fb_library = CDLL(fb_library_name) self.isc_attach_database = fb_library.isc_attach_database self.isc_attach_database.restype = ISC_STATUS self.isc_attach_database.argtypes = [POINTER(ISC_STATUS), c_short, STRING, POINTER(isc_db_handle), c_short, STRING] self.isc_array_gen_sdl = fb_library.isc_array_gen_sdl self.isc_array_gen_sdl.restype = ISC_STATUS self.isc_array_gen_sdl.argtypes = [POINTER(ISC_STATUS), POINTER(ISC_ARRAY_DESC), POINTER(ISC_SHORT), POINTER(ISC_UCHAR), POINTER(ISC_SHORT)] self.isc_array_get_slice = fb_library.isc_array_get_slice self.isc_array_get_slice.restype = ISC_STATUS 
self.isc_array_get_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), POINTER(ISC_ARRAY_DESC), c_void_p, POINTER(ISC_LONG)] self.isc_array_lookup_bounds = fb_library.isc_array_lookup_bounds self.isc_array_lookup_bounds.restype = ISC_STATUS self.isc_array_lookup_bounds.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, STRING, POINTER(ISC_ARRAY_DESC)] self.isc_array_lookup_desc = fb_library.isc_array_lookup_desc self.isc_array_lookup_desc.restype = ISC_STATUS self.isc_array_lookup_desc.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, STRING, POINTER(ISC_ARRAY_DESC)] self.isc_array_set_desc = fb_library.isc_array_set_desc self.isc_array_set_desc.restype = ISC_STATUS self.isc_array_set_desc.argtypes = [POINTER(ISC_STATUS), STRING, STRING, POINTER(c_short), POINTER(c_short), POINTER(c_short), POINTER(ISC_ARRAY_DESC)] self.isc_array_put_slice = fb_library.isc_array_put_slice self.isc_array_put_slice.restype = ISC_STATUS self.isc_array_put_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), POINTER(ISC_ARRAY_DESC), c_void_p, POINTER(ISC_LONG)] self.isc_blob_default_desc = fb_library.isc_blob_default_desc self.isc_blob_default_desc.restype = None self.isc_blob_default_desc.argtypes = [POINTER(ISC_BLOB_DESC), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)] self.isc_blob_gen_bpb = fb_library.isc_blob_gen_bpb self.isc_blob_gen_bpb.restype = ISC_STATUS self.isc_blob_gen_bpb.argtypes = [POINTER(ISC_STATUS), POINTER(ISC_BLOB_DESC), POINTER(ISC_BLOB_DESC), c_ushort, POINTER(ISC_UCHAR), POINTER(c_ushort)] self.isc_blob_info = fb_library.isc_blob_info self.isc_blob_info.restype = ISC_STATUS self.isc_blob_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_short, STRING, c_short, POINTER(c_char)] self.isc_blob_lookup_desc = fb_library.isc_blob_lookup_desc self.isc_blob_lookup_desc.restype = 
ISC_STATUS self.isc_blob_lookup_desc.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR), POINTER(ISC_BLOB_DESC), POINTER(ISC_UCHAR)] self.isc_blob_set_desc = fb_library.isc_blob_set_desc self.isc_blob_set_desc.restype = ISC_STATUS self.isc_blob_set_desc.argtypes = [POINTER(ISC_STATUS), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR), c_short, c_short, c_short, POINTER(ISC_BLOB_DESC)] self.isc_cancel_blob = fb_library.isc_cancel_blob self.isc_cancel_blob.restype = ISC_STATUS self.isc_cancel_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle)] self.isc_cancel_events = fb_library.isc_cancel_events self.isc_cancel_events.restype = ISC_STATUS self.isc_cancel_events.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(ISC_LONG)] self.isc_close_blob = fb_library.isc_close_blob self.isc_close_blob.restype = ISC_STATUS self.isc_close_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle)] self.isc_commit_retaining = fb_library.isc_commit_retaining self.isc_commit_retaining.restype = ISC_STATUS self.isc_commit_retaining.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] self.isc_commit_transaction = fb_library.isc_commit_transaction self.isc_commit_transaction.restype = ISC_STATUS self.isc_commit_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] self.isc_create_blob = fb_library.isc_create_blob self.isc_create_blob.restype = ISC_STATUS self.isc_create_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD)] self.isc_create_blob2 = fb_library.isc_create_blob2 self.isc_create_blob2.restype = ISC_STATUS self.isc_create_blob2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD), c_short, STRING] self.isc_create_database = fb_library.isc_create_database self.isc_create_database.restype = ISC_STATUS 
self.isc_create_database.argtypes = [POINTER(ISC_STATUS), c_short, STRING, POINTER(isc_db_handle), c_short, STRING, c_short] self.isc_database_info = fb_library.isc_database_info self.isc_database_info.restype = ISC_STATUS self.isc_database_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), c_short, STRING, c_short, STRING] self.isc_decode_date = fb_library.isc_decode_date self.isc_decode_date.restype = None self.isc_decode_date.argtypes = [POINTER(ISC_QUAD), c_void_p] self.isc_decode_sql_date = fb_library.isc_decode_sql_date self.isc_decode_sql_date.restype = None self.isc_decode_sql_date.argtypes = [POINTER(ISC_DATE), c_void_p] self.isc_decode_sql_time = fb_library.isc_decode_sql_time self.isc_decode_sql_time.restype = None self.isc_decode_sql_time.argtypes = [POINTER(ISC_TIME), c_void_p] self.isc_decode_timestamp = fb_library.isc_decode_timestamp self.isc_decode_timestamp.restype = None self.isc_decode_timestamp.argtypes = [POINTER(ISC_TIMESTAMP), c_void_p] self.isc_detach_database = fb_library.isc_detach_database self.isc_detach_database.restype = ISC_STATUS self.isc_detach_database.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle)] self.isc_drop_database = fb_library.isc_drop_database self.isc_drop_database.restype = ISC_STATUS self.isc_drop_database.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle)] self.isc_dsql_allocate_statement = fb_library.isc_dsql_allocate_statement self.isc_dsql_allocate_statement.restype = ISC_STATUS self.isc_dsql_allocate_statement.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_stmt_handle)] self.isc_dsql_alloc_statement2 = fb_library.isc_dsql_alloc_statement2 self.isc_dsql_alloc_statement2.restype = ISC_STATUS self.isc_dsql_alloc_statement2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_stmt_handle)] self.isc_dsql_describe = fb_library.isc_dsql_describe self.isc_dsql_describe.restype = ISC_STATUS self.isc_dsql_describe.argtypes = [POINTER(ISC_STATUS), 
POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] self.isc_dsql_describe_bind = fb_library.isc_dsql_describe_bind self.isc_dsql_describe_bind.restype = ISC_STATUS self.isc_dsql_describe_bind.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] self.isc_dsql_exec_immed2 = fb_library.isc_dsql_exec_immed2 self.isc_dsql_exec_immed2.restype = ISC_STATUS self.isc_dsql_exec_immed2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] self.isc_dsql_execute = fb_library.isc_dsql_execute self.isc_dsql_execute.restype = ISC_STATUS self.isc_dsql_execute.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] self.isc_dsql_execute2 = fb_library.isc_dsql_execute2 self.isc_dsql_execute2.restype = ISC_STATUS self.isc_dsql_execute2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] self.isc_dsql_execute_immediate = fb_library.isc_dsql_execute_immediate self.isc_dsql_execute_immediate.restype = ISC_STATUS self.isc_dsql_execute_immediate.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)] self.isc_dsql_fetch = fb_library.isc_dsql_fetch self.isc_dsql_fetch.restype = ISC_STATUS self.isc_dsql_fetch.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] self.isc_dsql_finish = fb_library.isc_dsql_finish self.isc_dsql_finish.restype = ISC_STATUS self.isc_dsql_finish.argtypes = [POINTER(isc_db_handle)] self.isc_dsql_free_statement = fb_library.isc_dsql_free_statement self.isc_dsql_free_statement.restype = ISC_STATUS self.isc_dsql_free_statement.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort] self.isc_dsql_insert = fb_library.isc_dsql_insert self.isc_dsql_insert.restype = ISC_STATUS self.isc_dsql_insert.argtypes = 
[POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] self.isc_dsql_prepare = fb_library.isc_dsql_prepare self.isc_dsql_prepare.restype = ISC_STATUS self.isc_dsql_prepare.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)] self.isc_dsql_set_cursor_name = fb_library.isc_dsql_set_cursor_name self.isc_dsql_set_cursor_name.restype = ISC_STATUS self.isc_dsql_set_cursor_name.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), STRING, c_ushort] self.isc_dsql_sql_info = fb_library.isc_dsql_sql_info self.isc_dsql_sql_info.restype = ISC_STATUS self.isc_dsql_sql_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_short, STRING, c_short, STRING] self.isc_encode_date = fb_library.isc_encode_date self.isc_encode_date.restype = None self.isc_encode_date.argtypes = [c_void_p, POINTER(ISC_QUAD)] self.isc_encode_sql_date = fb_library.isc_encode_sql_date self.isc_encode_sql_date.restype = None self.isc_encode_sql_date.argtypes = [c_void_p, POINTER(ISC_DATE)] self.isc_encode_sql_time = fb_library.isc_encode_sql_time self.isc_encode_sql_time.restype = None self.isc_encode_sql_time.argtypes = [c_void_p, POINTER(ISC_TIME)] self.isc_encode_timestamp = fb_library.isc_encode_timestamp self.isc_encode_timestamp.restype = None self.isc_encode_timestamp.argtypes = [c_void_p, POINTER(ISC_TIMESTAMP)] self.isc_event_counts = fb_library.isc_event_counts self.isc_event_counts.restype = None self.isc_event_counts.argtypes = [POINTER(RESULT_VECTOR), c_short, POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)] self.isc_expand_dpb = fb_library.isc_expand_dpb self.isc_expand_dpb.restype = None self.isc_expand_dpb.argtypes = [POINTER(STRING), POINTER(c_short)] self.isc_modify_dpb = fb_library.isc_modify_dpb self.isc_modify_dpb.restype = c_int self.isc_modify_dpb.argtypes = [POINTER(STRING), POINTER(c_short), c_ushort, STRING, c_short] self.isc_free = fb_library.isc_free self.isc_free.restype = 
ISC_LONG self.isc_free.argtypes = [STRING] self.isc_get_segment = fb_library.isc_get_segment self.isc_get_segment.restype = ISC_STATUS self.isc_get_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), POINTER(c_ushort), c_ushort, c_void_p] #self.isc_get_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), # POINTER(c_ushort), c_ushort, POINTER(c_char)] self.isc_get_slice = fb_library.isc_get_slice self.isc_get_slice.restype = ISC_STATUS self.isc_get_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), c_short, STRING, c_short, POINTER(ISC_LONG), ISC_LONG, c_void_p, POINTER(ISC_LONG)] self.isc_interprete = fb_library.isc_interprete self.isc_interprete.restype = ISC_LONG self.isc_interprete.argtypes = [STRING, POINTER(POINTER(ISC_STATUS))] self.fb_interpret = fb_library.fb_interpret self.fb_interpret.restype = ISC_LONG self.fb_interpret.argtypes = [STRING, c_uint, POINTER(POINTER(ISC_STATUS))] self.isc_open_blob = fb_library.isc_open_blob self.isc_open_blob.restype = ISC_STATUS self.isc_open_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD)] self.isc_open_blob2 = fb_library.isc_open_blob2 self.isc_open_blob2.restype = ISC_STATUS self.isc_open_blob2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD), ISC_USHORT, STRING] # POINTER(ISC_UCHAR) self.isc_prepare_transaction2 = fb_library.isc_prepare_transaction2 self.isc_prepare_transaction2.restype = ISC_STATUS self.isc_prepare_transaction2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), ISC_USHORT, POINTER(ISC_UCHAR)] self.isc_print_sqlerror = fb_library.isc_print_sqlerror self.isc_print_sqlerror.restype = None self.isc_print_sqlerror.argtypes = [ISC_SHORT, POINTER(ISC_STATUS)] self.isc_print_status = fb_library.isc_print_status self.isc_print_status.restype = ISC_STATUS 
self.isc_print_status.argtypes = [POINTER(ISC_STATUS)] self.isc_put_segment = fb_library.isc_put_segment self.isc_put_segment.restype = ISC_STATUS self.isc_put_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_ushort, c_void_p] #self.isc_put_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), # c_ushort, STRING] self.isc_put_slice = fb_library.isc_put_slice self.isc_put_slice.restype = ISC_STATUS self.isc_put_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), c_short, STRING, c_short, POINTER(ISC_LONG), ISC_LONG, c_void_p] self.isc_que_events = fb_library.isc_que_events self.isc_que_events.restype = ISC_STATUS self.isc_que_events.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(ISC_LONG), c_short, POINTER(ISC_UCHAR), ISC_EVENT_CALLBACK, POINTER(ISC_UCHAR)] self.isc_rollback_retaining = fb_library.isc_rollback_retaining self.isc_rollback_retaining.restype = ISC_STATUS self.isc_rollback_retaining.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] self.isc_rollback_transaction = fb_library.isc_rollback_transaction self.isc_rollback_transaction.restype = ISC_STATUS self.isc_rollback_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] self.isc_start_multiple = fb_library.isc_start_multiple self.isc_start_multiple.restype = ISC_STATUS self.isc_start_multiple.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, c_void_p] if sys.platform in ['win32', 'cygwin', 'os2', 'os2emx']: P_isc_start_transaction = CFUNCTYPE(ISC_STATUS, POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, POINTER(isc_db_handle), c_short, STRING) self.isc_start_transaction = P_isc_start_transaction(('isc_start_transaction', fb_library)) else: self.isc_start_transaction = fb_library.isc_start_transaction self.isc_start_transaction.restype = ISC_STATUS self.isc_start_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, 
POINTER(isc_db_handle), c_short, STRING] self.isc_sqlcode = fb_library.isc_sqlcode self.isc_sqlcode.restype = ISC_LONG self.isc_sqlcode.argtypes = [POINTER(ISC_STATUS)] self.isc_sql_interprete = fb_library.isc_sql_interprete self.isc_sql_interprete.restype = None self.isc_sql_interprete.argtypes = [c_short, STRING, c_short] self.isc_transaction_info = fb_library.isc_transaction_info self.isc_transaction_info.restype = ISC_STATUS self.isc_transaction_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, STRING, c_short, STRING] self.isc_transact_request = fb_library.isc_transact_request self.isc_transact_request.restype = ISC_STATUS self.isc_transact_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, STRING, c_ushort, STRING] self.isc_vax_integer = fb_library.isc_vax_integer self.isc_vax_integer.restype = ISC_LONG self.isc_vax_integer.argtypes = [STRING, c_short] self.isc_portable_integer = fb_library.isc_portable_integer self.isc_portable_integer.restype = ISC_INT64 self.isc_portable_integer.argtypes = [POINTER(ISC_UCHAR), c_short] self.isc_add_user = fb_library.isc_add_user self.isc_add_user.restype = ISC_STATUS self.isc_add_user.argtypes = [POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)] self.isc_delete_user = fb_library.isc_delete_user self.isc_delete_user.restype = ISC_STATUS self.isc_delete_user.argtypes = [POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)] self.isc_modify_user = fb_library.isc_modify_user self.isc_modify_user.restype = ISC_STATUS self.isc_modify_user.argtypes = [POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)] self.isc_compile_request = fb_library.isc_compile_request self.isc_compile_request.restype = ISC_STATUS self.isc_compile_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_req_handle), c_short, STRING] self.isc_compile_request2 = fb_library.isc_compile_request2 self.isc_compile_request2.restype = ISC_STATUS self.isc_compile_request2.argtypes 
= [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_req_handle), c_short, STRING] self.isc_ddl = fb_library.isc_ddl self.isc_ddl.restype = ISC_STATUS self.isc_ddl.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_short, STRING] self.isc_prepare_transaction = fb_library.isc_prepare_transaction self.isc_prepare_transaction.restype = ISC_STATUS self.isc_prepare_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] self.isc_receive = fb_library.isc_receive self.isc_receive.restype = ISC_STATUS self.isc_receive.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, c_void_p, c_short] self.isc_reconnect_transaction = fb_library.isc_reconnect_transaction self.isc_reconnect_transaction.restype = ISC_STATUS self.isc_reconnect_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_short, STRING] self.isc_release_request = fb_library.isc_release_request self.isc_release_request.restype = ISC_STATUS self.isc_release_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle)] self.isc_request_info = fb_library.isc_request_info self.isc_request_info.restype = ISC_STATUS self.isc_request_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, STRING, c_short, STRING] self.isc_seek_blob = fb_library.isc_seek_blob self.isc_seek_blob.restype = ISC_STATUS self.isc_seek_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_short, ISC_LONG, POINTER(ISC_LONG)] self.isc_send = fb_library.isc_send self.isc_send.restype = ISC_STATUS self.isc_send.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, c_void_p, c_short] self.isc_start_and_send = fb_library.isc_start_and_send self.isc_start_and_send.restype = ISC_STATUS self.isc_start_and_send.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), POINTER(isc_tr_handle), c_short, c_short, c_void_p, c_short] self.isc_start_request = fb_library.isc_start_request 
self.isc_start_request.restype = ISC_STATUS self.isc_start_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), POINTER(isc_tr_handle), c_short] self.isc_unwind_request = fb_library.isc_unwind_request self.isc_unwind_request.restype = ISC_STATUS self.isc_unwind_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short] self.isc_wait_for_event = fb_library.isc_wait_for_event self.isc_wait_for_event.restype = ISC_STATUS self.isc_wait_for_event.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), c_short, POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)] self.isc_close = fb_library.isc_close self.isc_close.restype = ISC_STATUS self.isc_close.argtypes = [POINTER(ISC_STATUS), STRING] self.isc_declare = fb_library.isc_declare self.isc_declare.restype = ISC_STATUS self.isc_declare.argtypes = [POINTER(ISC_STATUS), STRING, STRING] self.isc_describe = fb_library.isc_describe self.isc_describe.restype = ISC_STATUS self.isc_describe.argtypes = [POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)] self.isc_describe_bind = fb_library.isc_describe_bind self.isc_describe_bind.restype = ISC_STATUS self.isc_describe_bind.argtypes = [POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)] self.isc_execute = fb_library.isc_execute self.isc_execute.restype = ISC_STATUS self.isc_execute.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, POINTER(XSQLDA)] self.isc_execute_immediate = fb_library.isc_execute_immediate self.isc_execute_immediate.restype = ISC_STATUS self.isc_execute_immediate.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(c_short), STRING] self.isc_fetch = fb_library.isc_fetch self.isc_fetch.restype = ISC_STATUS self.isc_fetch.argtypes = [POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)] self.isc_open = fb_library.isc_open self.isc_open.restype = ISC_STATUS self.isc_open.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, POINTER(XSQLDA)] self.isc_prepare = fb_library.isc_prepare self.isc_prepare.restype = 
ISC_STATUS self.isc_prepare.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, POINTER(c_short), STRING, POINTER(XSQLDA)] self.isc_dsql_execute_m = fb_library.isc_dsql_execute_m self.isc_dsql_execute_m.restype = ISC_STATUS self.isc_dsql_execute_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING] self.isc_dsql_execute2_m = fb_library.isc_dsql_execute2_m self.isc_dsql_execute2_m.restype = ISC_STATUS self.isc_dsql_execute2_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING, c_ushort, c_ushort, STRING] self.isc_dsql_execute_immediate_m = fb_library.isc_dsql_execute_immediate_m self.isc_dsql_execute_immediate_m.restype = ISC_STATUS self.isc_dsql_execute_immediate_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, c_ushort, STRING] self.isc_dsql_exec_immed3_m = fb_library.isc_dsql_exec_immed3_m self.isc_dsql_exec_immed3_m.restype = ISC_STATUS self.isc_dsql_exec_immed3_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING, c_ushort, c_ushort, STRING] self.isc_dsql_fetch_m = fb_library.isc_dsql_fetch_m self.isc_dsql_fetch_m.restype = ISC_STATUS self.isc_dsql_fetch_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING] self.isc_dsql_insert_m = fb_library.isc_dsql_insert_m self.isc_dsql_insert_m.restype = ISC_STATUS self.isc_dsql_insert_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING] self.isc_dsql_prepare_m = fb_library.isc_dsql_prepare_m self.isc_dsql_prepare_m.restype = ISC_STATUS self.isc_dsql_prepare_m.argtypes = [POINTER(ISC_STATUS), 
POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING] self.isc_dsql_release = fb_library.isc_dsql_release self.isc_dsql_release.restype = ISC_STATUS self.isc_dsql_release.argtypes = [POINTER(ISC_STATUS), STRING] self.isc_embed_dsql_close = fb_library.isc_embed_dsql_close self.isc_embed_dsql_close.restype = ISC_STATUS self.isc_embed_dsql_close.argtypes = [POINTER(ISC_STATUS), STRING] self.isc_embed_dsql_declare = fb_library.isc_embed_dsql_declare self.isc_embed_dsql_declare.restype = ISC_STATUS self.isc_embed_dsql_declare.argtypes = [POINTER(ISC_STATUS), STRING, STRING] self.isc_embed_dsql_describe = fb_library.isc_embed_dsql_describe self.isc_embed_dsql_describe.restype = ISC_STATUS self.isc_embed_dsql_describe.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_describe_bind = fb_library.isc_embed_dsql_describe_bind self.isc_embed_dsql_describe_bind.restype = ISC_STATUS self.isc_embed_dsql_describe_bind.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_execute = fb_library.isc_embed_dsql_execute self.isc_embed_dsql_execute.restype = ISC_STATUS self.isc_embed_dsql_execute.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_execute2 = fb_library.isc_embed_dsql_execute2 self.isc_embed_dsql_execute2.restype = ISC_STATUS self.isc_embed_dsql_execute2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] self.isc_embed_dsql_execute_immed = fb_library.isc_embed_dsql_execute_immed self.isc_embed_dsql_execute_immed.restype = ISC_STATUS self.isc_embed_dsql_execute_immed.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_fetch = fb_library.isc_embed_dsql_fetch self.isc_embed_dsql_fetch.restype = ISC_STATUS 
self.isc_embed_dsql_fetch.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_fetch_a = fb_library.isc_embed_dsql_fetch_a self.isc_embed_dsql_fetch_a.restype = ISC_STATUS self.isc_embed_dsql_fetch_a.argtypes = [POINTER(ISC_STATUS), POINTER(c_int), STRING, ISC_USHORT, POINTER(XSQLDA)] self.isc_embed_dsql_open = fb_library.isc_embed_dsql_open self.isc_embed_dsql_open.restype = ISC_STATUS self.isc_embed_dsql_open.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_open2 = fb_library.isc_embed_dsql_open2 self.isc_embed_dsql_open2.restype = ISC_STATUS self.isc_embed_dsql_open2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] self.isc_embed_dsql_insert = fb_library.isc_embed_dsql_insert self.isc_embed_dsql_insert.restype = ISC_STATUS self.isc_embed_dsql_insert.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_prepare = fb_library.isc_embed_dsql_prepare self.isc_embed_dsql_prepare.restype = ISC_STATUS self.isc_embed_dsql_prepare.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, c_ushort, STRING, c_ushort, POINTER(XSQLDA)] self.isc_embed_dsql_release = fb_library.isc_embed_dsql_release self.isc_embed_dsql_release.restype = ISC_STATUS self.isc_embed_dsql_release.argtypes = [POINTER(ISC_STATUS), STRING] self.BLOB_open = fb_library.BLOB_open self.BLOB_open.restype = POINTER(BSTREAM) self.BLOB_open.argtypes = [isc_blob_handle, STRING, c_int] self.BLOB_put = fb_library.BLOB_put self.BLOB_put.restype = c_int self.BLOB_put.argtypes = [ISC_SCHAR, POINTER(BSTREAM)] self.BLOB_close = fb_library.BLOB_close self.BLOB_close.restype = c_int self.BLOB_close.argtypes = [POINTER(BSTREAM)] self.BLOB_get = fb_library.BLOB_get self.BLOB_get.restype = c_int self.BLOB_get.argtypes = [POINTER(BSTREAM)] self.BLOB_display = fb_library.BLOB_display 
self.BLOB_display.restype = c_int self.BLOB_display.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.BLOB_dump = fb_library.BLOB_dump self.BLOB_dump.restype = c_int self.BLOB_dump.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.BLOB_edit = fb_library.BLOB_edit self.BLOB_edit.restype = c_int self.BLOB_edit.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.BLOB_load = fb_library.BLOB_load self.BLOB_load.restype = c_int self.BLOB_load.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.BLOB_text_dump = fb_library.BLOB_text_dump self.BLOB_text_dump.restype = c_int self.BLOB_text_dump.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.BLOB_text_load = fb_library.BLOB_text_load self.BLOB_text_load.restype = c_int self.BLOB_text_load.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.Bopen = fb_library.Bopen self.Bopen.restype = POINTER(BSTREAM) self.Bopen.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] self.isc_ftof = fb_library.isc_ftof self.isc_ftof.restype = ISC_LONG self.isc_ftof.argtypes = [STRING, c_ushort, STRING, c_ushort] self.isc_print_blr = fb_library.isc_print_blr self.isc_print_blr.restype = ISC_STATUS self.isc_print_blr.argtypes = [STRING, ISC_PRINT_CALLBACK, c_void_p, c_short] self.isc_set_debug = fb_library.isc_set_debug self.isc_set_debug.restype = None self.isc_set_debug.argtypes = [c_int] self.isc_qtoq = fb_library.isc_qtoq self.isc_qtoq.restype = None self.isc_qtoq.argtypes = [POINTER(ISC_QUAD), POINTER(ISC_QUAD)] self.isc_vtof = fb_library.isc_vtof self.isc_vtof.restype = None self.isc_vtof.argtypes = [STRING, STRING, c_ushort] self.isc_vtov = fb_library.isc_vtov self.isc_vtov.restype = None self.isc_vtov.argtypes = [STRING, STRING, c_short] self.isc_version = fb_library.isc_version self.isc_version.restype = c_int self.isc_version.argtypes = [POINTER(isc_db_handle), ISC_VERSION_CALLBACK, 
c_void_p] # deprecated #self.isc_reset_fpe = fb_library.isc_reset_fpe #self.isc_reset_fpe.restype = ISC_LONG #self.isc_reset_fpe.argtypes = [ISC_USHORT] self.isc_service_attach = fb_library.isc_service_attach self.isc_service_attach.restype = ISC_STATUS self.isc_service_attach.argtypes = [POINTER(ISC_STATUS), c_ushort, STRING, POINTER(isc_svc_handle), c_ushort, STRING] self.isc_service_detach = fb_library.isc_service_detach self.isc_service_detach.restype = ISC_STATUS self.isc_service_detach.argtypes = [POINTER(ISC_STATUS), POINTER(isc_svc_handle)] self.isc_service_query = fb_library.isc_service_query self.isc_service_query.restype = ISC_STATUS self.isc_service_query.argtypes = [POINTER(ISC_STATUS), POINTER(isc_svc_handle), POINTER(isc_resv_handle), c_ushort, STRING, c_ushort, STRING, c_ushort, STRING] self.isc_service_start = fb_library.isc_service_start self.isc_service_start.restype = ISC_STATUS self.isc_service_start.argtypes = [POINTER(ISC_STATUS), POINTER(isc_svc_handle), POINTER(isc_resv_handle), c_ushort, STRING] self.isc_get_client_version = fb_library.isc_get_client_version self.isc_get_client_version.restype = None self.isc_get_client_version.argtypes = [STRING] self.isc_get_client_major_version = fb_library.isc_get_client_major_version self.isc_get_client_major_version.restype = c_int self.isc_get_client_major_version.argtypes = [] self.isc_get_client_minor_version = fb_library.isc_get_client_minor_version self.isc_get_client_minor_version.restype = c_int self.isc_get_client_minor_version.argtypes = [] #self.imaxabs = fb_library.imaxabs #self.imaxabs.restype = intmax_t #self.imaxabs.argtypes = [intmax_t] #self.imaxdiv = fb_library.imaxdiv #self.imaxdiv.restype = imaxdiv_t #self.imaxdiv.argtypes = [intmax_t, intmax_t] #self.strtoimax = fb_library.strtoimax #self.strtoimax.restype = intmax_t #self.strtoimax.argtypes = [STRING, POINTER(STRING), c_int] #self.strtoumax = fb_library.strtoumax #self.strtoumax.restype = uintmax_t #self.strtoumax.argtypes = 
        #    [STRING, POINTER(STRING), c_int]
        #self.wcstoimax = fb_library.wcstoimax
        #self.wcstoimax.restype = intmax_t
        #self.wcstoimax.argtypes = [WSTRING, POINTER(WSTRING), c_int]
        #self.wcstoumax = fb_library.wcstoumax
        #self.wcstoumax.restype = uintmax_t
        #self.wcstoumax.argtypes = [WSTRING, POINTER(WSTRING), c_int]

        self.P_isc_event_block = CFUNCTYPE(ISC_LONG, POINTER(POINTER(ISC_UCHAR)),
                                           POINTER(POINTER(ISC_UCHAR)), ISC_USHORT)
        self.C_isc_event_block = self.P_isc_event_block(('isc_event_block', fb_library))
        self.P_isc_event_block_args = self.C_isc_event_block.argtypes

    def isc_event_block(self, event_buffer, result_buffer, *args):
        # isc_event_block is variadic: the fixed argtypes are extended with one
        # STRING entry per event name before each call.
        if len(args) > 15:
            raise Exception("isc_event_block takes no more than 15 event names")
        newargs = list(self.P_isc_event_block_args)
        for x in args:
            newargs.append(STRING)
        self.C_isc_event_block.argtypes = newargs
        result = self.C_isc_event_block(event_buffer, result_buffer, len(args), *args)
        return result

fdb-1.6.1/fdb/services.py
#coding:utf-8
#
#   PROGRAM/MODULE: fdb
#   FILE:           services.py
#   DESCRIPTION:    Python driver for Firebird
#   CREATED:        19.11.2011
#
#  Software distributed under the License is distributed AS IS,
#  WITHOUT WARRANTY OF ANY KIND, either express or implied.
#  See the License for the specific language governing rights
#  and limitations under the License.
#
#  The Original Code was created by Pavel Cisar
#
#  Copyright (c) 2011 Pavel Cisar
#  and all contributors signed below.
#
#  All Rights Reserved.
#  Contributor(s): Philippe Makowski
#                  ______________________________________.
#
#  See LICENSE.TXT for details.
import fdb
import sys
import os
import fdb.ibase as ibase
import ctypes
import struct
import warnings
import datetime
import types

api = None

# The following SHUT_* constants are to be passed as the `shutdown_mode`
# parameter to Connection.shutdown:
SHUT_LEGACY = -1
SHUT_NORMAL = ibase.isc_spb_prp_sm_normal
SHUT_MULTI = ibase.isc_spb_prp_sm_multi
SHUT_SINGLE = ibase.isc_spb_prp_sm_single
SHUT_FULL = ibase.isc_spb_prp_sm_full

# The following SHUT_* constants are to be passed as the `shutdown_method`
# parameter to Connection.shutdown:
SHUT_FORCE = ibase.isc_spb_prp_shutdown_db
SHUT_DENY_NEW_TRANSACTIONS = ibase.isc_spb_prp_deny_new_transactions
SHUT_DENY_NEW_ATTACHMENTS = ibase.isc_spb_prp_deny_new_attachments

# The following WRITE_* constants are to be passed as the `mode` parameter
# to Connection.set_write_mode:
WRITE_FORCED = ibase.isc_spb_prp_wm_sync
WRITE_BUFFERED = ibase.isc_spb_prp_wm_async

# The following ACCESS_* constants are to be passed as the `mode` parameter
# to Connection.set_access_mode:
ACCESS_READ_WRITE = ibase.isc_spb_prp_am_readwrite
ACCESS_READ_ONLY = ibase.isc_spb_prp_am_readonly

# The following CAPABILITY_* constants are bit flags in the value returned
# by `get_server_capabilities`:
CAPABILITY_MULTI_CLIENT = 0x2
CAPABILITY_REMOTE_HOP = 0x4
CAPABILITY_SERVER_CONFIG = 0x200
CAPABILITY_QUOTED_FILENAME = 0x400
CAPABILITY_NO_SERVER_SHUTDOWN = 0x100

# The following STATS_* constants are options for the backup/restore 'stats' parameter.
STATS_TOTAL_TIME = 'T'
STATS_TIME_DELTA = 'D'
STATS_PAGE_READS = 'R'
STATS_PAGE_WRITES = 'W'

def _checkString(st):
    if ibase.PYTHON_MAJOR_VER == 3:
        try:
            if isinstance(st, str):
                # In str instances, Python allows any character. Since Firebird's
                # Services API only works (properly) with ASCII, we need to make
                # sure there are no non-ASCII characters in st.
                st.encode('ASCII')
            else:
                if not isinstance(st, ibase.mybytes):
                    raise TypeError('String argument to Services API must be'
                                    ' of type %s, not %s.'
                                    % (type(ibase.mybytes), type(st)))
        except UnicodeEncodeError:
            raise TypeError("The database engine's Services API only works"
                            " properly with ASCII string parameters, so str instances that"
                            " contain non-ASCII characters are disallowed.")
    else:
        try:
            if isinstance(st, ibase.UnicodeType):
                # In unicode instances, Python allows any character in the "default
                # encoding", which is typically not ASCII. Since Firebird's
                # Services API only works (properly) with ASCII, we need to make
                # sure there are no non-ASCII characters in st, even though we
                # already know st is a unicode instance.
                st.encode('ASCII')
            else:
                if not isinstance(st, ibase.mybytes):
                    raise TypeError('String argument to Services API must be'
                                    ' of type %s, not %s.'
                                    % (type(ibase.mybytes), type(st)))
        except UnicodeError:
            raise TypeError("The database engine's Services API only works"
                            " properly with ASCII string parameters, so str instances that"
                            " contain non-ASCII characters, and all unicode instances, are"
                            " disallowed.")

def _string2spb(spb, code, st):
    myslen = len(st)
    _numeric2spb(spb, code, myslen, numCType='H')
    myformat = str(myslen) + 's'  # The length, then 's'.
    spb.append(struct.pack(myformat, st))

def _numeric2spb(spb, code, num, numCType='I'):
    # numCType is one of the pack format characters specified by the Python
    # standard library module 'struct'.
    _code2spb(spb, code)
    (numericFormat, numericBytes) = _renderSizedIntegerForSPB(num, numCType)
    spb.append(struct.pack(numericFormat, numericBytes))

def _code2spb(spb, code):
    (myformat, mybytes) = _renderSizedIntegerForSPB(code, 'b')
    spb.append(struct.pack(myformat, mybytes))

def _vax_inverse(i, myformat):
    # Apply the inverse of api.isc_vax_integer to a Python integer; return
    # the raw bytes of the resulting value.
    iRaw = struct.pack(myformat, i)
    iConv = api.isc_vax_integer(iRaw, len(iRaw))
    iConvRaw = struct.pack(myformat, iConv)
    return iConvRaw


def _renderSizedIntegerForSPB(i, myformat):
    # In order to prepare the Python integer i for inclusion in a Services
    # API action request buffer, the byte sequence of i must be reversed,
    # which will make i unrepresentable as a normal Python integer.
    # Therefore, the rendered version of i must be stored in a raw byte
    # buffer.
    # This function returns a 2-tuple containing:
    #   1. the calculated struct.pack-compatible format string for i
    #   2. the Python string containing the SPB-compatible raw binary
    #      rendering of i
    #
    # Example:
    # To prepare the Python integer 12345 for storage as an unsigned int in
    # a SPB, use code such as this:
    #   (iPackFormat, iRawBytes) = _renderSizedIntegerForSPB(12345, 'I')
    #   spbBytes = struct.pack(iPackFormat, iRawBytes)
    destFormat = '%ds' % struct.calcsize(myformat)
    destVal = _vax_inverse(i, myformat)
    return (destFormat, destVal)


def connect(host='service_mgr',
            user=os.environ.get('ISC_USER', 'sysdba'),
            password=os.environ.get('ISC_PASSWORD', None)):
    """Establishes a connection to the Services Manager.

    :param string host: (optional) Host machine specification. Local by
        default.
    :param string user: (optional) Administrative user name. Defaults to
        content of environment variable `'ISC_USER'` or `'SYSDBA'`.
    :param string password: Administrative user password. Default is content
        of environment variable `'ISC_PASSWORD'`.

    .. note::

       By definition, a Services Manager connection is bound to a particular
       host.  Therefore, the database specified as a parameter to methods
       such as `getStatistics` MUST NOT include the host name of the
       database server.
""" setattr(sys.modules[__name__],'api',fdb.load_api()) if password is None: raise fdb.ProgrammingError('A password is required to use' ' the Services Manager.') _checkString(host) _checkString(user) _checkString(password) # The database engine's Services API requires that connection strings # conform to one of the following formats: # 1. 'service_mgr' - Connects to the Services Manager on localhost. # 2. 'hostname:service_mgr' - Connects to the Services Manager on the # server named hostname. # # This Python function glosses over the database engine's rules as follows: # - If the $host parameter is not supplied, the connection defaults to # the local host. # - If the $host parameter is supplied, the ':service_mgr' suffix is # optional (the suffix will be appended automatically if necessary). # # Of course, this scheme would collapse if someone actually had a host # named 'service_mgr', and supplied the connection string 'service_mgr' # with the intent of connecting to that host. In that case, the connection # would be attempted to the local host, not to the host named # 'service_mgr'. An easy workaround would be to supply the following # connection string: # 'service_mgr:service_mgr'. if not host.endswith('service_mgr'): if host and not host.endswith(':'): host += ':' host += 'service_mgr' return Connection(host, user, password) class Connection(object): """ Represents a sevice connection between the database client (the Python process) and the database server. .. important:: DO NOT create instances of this class directly! Use only :func:`connect` to get Connection instances. .. tip:: Connection supports the iterator protocol, yielding lines of result like :meth:`readline`. 
""" QUERY_TYPE_PLAIN_INTEGER = 1 QUERY_TYPE_PLAIN_STRING = 2 QUERY_TYPE_RAW = 3 def __init__(self, host, user, password, charset=None): self.__fetching = False self._svc_handle = ibase.isc_svc_handle(0) self._isc_status = ibase.ISC_STATUS_ARRAY() self._result_buffer = ctypes.create_string_buffer(ibase.USHRT_MAX) self._line_buffer = [] self.__eof = False self.charset = charset self.host = ibase.b(host) self.user = ibase.b(user) self.password = ibase.b(password) if len(self.host) + len(self.user) + len(self.password) > 118: raise fdb.ProgrammingError("The combined length of host, user and" " password can't exceed 118 bytes.") # spb_length = 2 + 1 + 1 + len(self.user) + 1 + 1 + len(self.password) spb = fdb.bs([ibase.isc_spb_version, ibase.isc_spb_current_version, ibase.isc_spb_user_name, len(self.user)]) + self.user + \ fdb.bs([ibase.isc_spb_password, len(self.password)]) + self.password api.isc_service_attach(self._isc_status, len(self.host), self.host, self._svc_handle, len(spb), spb) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_attach:") # Get Firebird engine version verstr = self.get_server_version() x = verstr.split() if x[0].find('V') > 0: (x,self.__version) = x[0].split('V') elif x[0].find('T') > 0: (x,self.__version) = x[0].split('T') else: # Unknown version self.__version = '0.0.0.0' x = self.__version.split('.') self.__engine_version = float('%s.%s' % (x[0],x[1])) def __del__(self): self.close() def next(self): """Return the next result line from service manager. Part of *iterator protocol*. :raises StopIteration: If there are no further lines. 
""" line = self.readline() if line is not None: return line else: self.__fetching = False raise StopIteration __next__ = next def __iter__(self): return self def __check_active(self): if not self._svc_handle: raise ProgrammingError("Connection object is detached from service manager") def __get_closed(self): return True if self._svc_handle else False def __get_fetching(self): return self.__fetching def __read_buffer(self, init=''): request = fdb.bs([ibase.isc_info_svc_to_eof]) spb = ibase.b('') api.isc_service_query(self._isc_status, self._svc_handle, None, len(spb), spb, len(request), request, ibase.USHRT_MAX, self._result_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_query:") (result, _) = self._extract_string(self._result_buffer, 1) if ord(self._result_buffer[_]) == ibase.isc_info_end: self.__eof = True if init: result = init + result self._line_buffer = result.split('\n') def __fetchline(self): if self._line_buffer: if len(self._line_buffer) == 1 and not self.__eof: self.__read_buffer(self._line_buffer.pop(0)) return self._line_buffer.pop(0) else: if not self.__eof: self.__read_buffer() if self._line_buffer: return self._line_buffer.pop(0) self.__fetching = False return None def __get_version(self): return self.__version def __get_engine_version(self): return self.__engine_version def _bytes_to_str(self, sb): ### Todo: verify handling of P version differences, refactor if ibase.PYTHON_MAJOR_VER == 3: return sb.decode(ibase.charset_map.get(self.charset, self.charset)) else: return sb.encode(ibase.charset_map.get(self.charset, self.charset)) def _str_to_bytes(self, st): ### Todo: verify handling of P version differences, refactor if ibase.PYTHON_MAJOR_VER == 3: return st.encode(ibase.charset_map.get(self.charset, self.charset)) else: return st def _extract_int(self, raw, index): new_index = index + ctypes.sizeof(ctypes.c_ushort) return 
(fdb.bytes_to_int(raw[index:new_index]), new_index) def _extract_string(self, raw, index): (size, index) = self._extract_int(raw, index) new_index = index + size ### Todo: verify handling of P version differences, refactor if ibase.PYTHON_MAJOR_VER == 3: return (str(raw[index:new_index], ibase.charset_map.get(self.charset, self.charset)), new_index) else: return (str(raw[index:new_index]), new_index) def _Q(self, code, result_type, timeout=-1): if code < 0 or code > ibase.USHRT_MAX: raise fdb.ProgrammingError("The service query request_buf code" " must fall between 0 and %d," " inclusive." % ibase.USHRT_MAX) result = None result_size = 1024 request = fdb.bs([code]) if timeout == -1: spb = ibase.b('') else: spb = fdb.bs(ibase.isc_info_svc_timeout, timeout) while True: result_buffer = ctypes.create_string_buffer(result_size) api.isc_service_query(self._isc_status, self._svc_handle, None, len(spb), spb, len(request), request, result_size, result_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_query:") if ord(result_buffer[0]) == ibase.isc_info_truncated: if result_size == ibase.USHRT_MAX: raise fdb.InternalError("Database C API constraints maximum" "result buffer size to %d" % ibase.USHRT_MAX) else: result_size = result_size * 4 if result_size > ibase.USHRT_MAX: result_size = ibase.USHRT_MAX continue break if result_type == self.QUERY_TYPE_PLAIN_INTEGER: (result, _) = self._extract_int(result_buffer, 1) elif result_type == self.QUERY_TYPE_PLAIN_STRING: (result, _) = self._extract_string(result_buffer, 1) elif result_type == self.QUERY_TYPE_RAW: size = result_size - 1 while result_buffer[size] == '\0': size -= 1 result = ibase.s(result_buffer[:size]) return result def _get_isc_info_svc_svr_db_info(self): num_attachments = -1 databases = [] raw = self._QR(ibase.isc_info_svc_svr_db_info) # assert raw[-1] == api.int2byte(ibase.isc_info_flag_end) pos = 1 # Ignore raw[0]. 
        upper_limit = len(raw) - 1
        while pos < upper_limit:
            cluster = ibase.ord2(raw[pos])
            pos += 1
            if cluster == ibase.isc_spb_num_att:
                # Number of attachments.
                (num_attachments, pos) = self._extract_int(raw, pos)
            elif cluster == ibase.isc_spb_num_db:
                # Number of databases attached to.
                # Do nothing except to advance pos; the number of databases
                # can be had from len(databases).
                (_, pos) = self._extract_int(raw, pos)
            elif cluster == ibase.isc_spb_dbname:
                (db_name, pos) = self._extract_string(raw, pos)
                databases.append(db_name)
        return (num_attachments, databases)

    def _QI(self, code):
        return self._Q(code, self.QUERY_TYPE_PLAIN_INTEGER)

    def _QS(self, code):
        return self._Q(code, self.QUERY_TYPE_PLAIN_STRING)

    def _QR(self, code):
        return self._Q(code, self.QUERY_TYPE_RAW)

    def _action_thin(self, request_buffer):
        if len(request_buffer) > ibase.USHRT_MAX:
            raise fdb.ProgrammingError("The size of the request buffer"
                                       " must not exceed %d."
                                       % ibase.USHRT_MAX)
        api.isc_service_start(self._isc_status, self._svc_handle, None,
                              len(request_buffer), request_buffer)
        if fdb.db_api_error(self._isc_status):
            raise fdb.exception_from_status(
                fdb.OperationalError, self._isc_status,
                "Unable to perform the requested Service API action:")
        return None

    def _act(self, request_buffer):
        return self._action_thin(request_buffer.render())

    def _act_and_return_textual_results(self, request_buffer):
        self._act(request_buffer)
        return self._collect_unformatted_results()

    def _collect_unformatted_results(self, line_separator='\n'):
        # YYY: It might be desirable to replace this function with a more
        # performant version based on ibase.isc_info_svc_to_eof rather than
        # ibase.isc_info_svc_line; the function's interface is transparent
        # either way.
        # This enhancement should be a very low priority; the Service
        # Manager API is not typically used for performance-intensive
        # operations.
        resultLines = []
        while 1:
            try:
                line = self._QS(ibase.isc_info_svc_line)
            except fdb.OperationalError:
                # YYY: It is routine for actions such as RESTORE to raise an
                # exception at the end of their output.  We ignore any such
                # exception and assume that it was expected, which is
                # somewhat risky.  For example, suppose the network
                # connection is broken while the client is receiving the
                # action's output...
                break
            if not line:
                break
            resultLines.append(line)
        return line_separator.join(resultLines)

    def _repair_action(self, database, partial_req_buf, line_separator='\n'):
        # Begin constructing the request buffer (incorporate the one passed
        # as param $partial_req_buf).
        full_req_buf = _ServiceActionRequestBuilder(
            ibase.isc_action_svc_repair)
        # The filename of the database must be specified regardless of the
        # sub-action being performed.
        full_req_buf.add_database_name(database)
        # Incorporate the caller's partial request buffer.
        full_req_buf.extend(partial_req_buf)
        self._action_thin(full_req_buf.render())
        # Return the results to the caller synchronously (in this case,
        # there won't be any textual results, but issuing this call will
        # helpfully cause the program to block until the Services Manager is
        # finished with the action).
        return self._collect_unformatted_results(
            line_separator=line_separator)

    def _validate_companion_string_numeric_sequences(self, strings, numbers,
                                                     string_caption,
                                                     number_caption):
        # The core constraint here is that len(numbers) must equal
        # len(strings) - 1
        stringsCount = len(strings)
        numbersCount = len(numbers)
        requiredNumbersCount = stringsCount - 1
        if numbersCount != requiredNumbersCount:
            raise ValueError(
                'Since you passed %d %s, you must %s corresponding %s.'
                % (stringsCount, string_caption,
                   ('pass %d' % requiredNumbersCount
                    if requiredNumbersCount > 0 else 'not pass any'),
                   number_caption))

    def _exclude_elements_of_types(self, seq, types_to_exclude):
        if not isinstance(types_to_exclude, tuple):
            types_to_exclude = tuple(types_to_exclude)
        return [element for element in seq
                if not isinstance(element, types_to_exclude)]

    def _require_str_or_tuple_of_str(self, x):
        if isinstance(x, str):
            x = (x,)
        elif isinstance(x, ibase.myunicode):
            # We know the following call to _checkString will raise an
            # exception, but calling it anyway allows us to centralize the
            # error message generation:
            _checkString(x)
        ### Todo: verify handling of P version differences, refactor?
        for el in x:
            _checkString(el)
        return x

    def _property_action(self, database, partial_req_buf):
        # Begin constructing the request buffer (incorporate the one passed
        # as param $partialReqBuf).
        full_req_buf = _ServiceActionRequestBuilder(
            ibase.isc_action_svc_properties)
        # The filename of the database must be specified regardless of the
        # sub-action being performed.
        full_req_buf.add_database_name(database)
        # Incorporate the caller's partial request buffer.
        full_req_buf.extend(partial_req_buf)
        self._action_thin(full_req_buf.render())
        # Return the results to the caller synchronously
        # because it blocks until there's been some resolution of the
        # action.
        return self._collect_unformatted_results()

    def _property_action_with_one_num_code(self, database, code, num,
                                           num_ctype='I'):
        reqBuf = _ServiceActionRequestBuilder()
        reqBuf.add_numeric(code, num, numCType=num_ctype)
        return self._property_action(database, reqBuf)

    def close(self):
        """Close the connection now (rather than whenever `__del__` is
        called). The connection will be unusable from this point forward; an
        :exc:`Error` (or subclass) exception will be raised if any operation
        is attempted with the connection.
""" if self._svc_handle: api.isc_service_detach(self._isc_status, self._svc_handle) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_detach:") self._svc_handle = None self.__fetching = False def readline(self): """Get next line of textual output from last service query. :returns string: Output line. """ if self.__fetching: return self.__fetchline() else: return None def readlines(self): """Get list of remaining output lines from last service query. :returns list: Service output. :raises ProgrammingError: When service is not in :attr:`fetching` mode. """ return [line for line in self] def isrunning(self): """Returns True if service is running. .. note:: Some services like :meth:`backup` or :meth:`sweep` may take time to comlete, so they're called asynchronously. Until they're finished, no other async service could be started. """ return self._QI(ibase.isc_info_svc_running) > 0 def wait(self): """Wait until running service completes. """ if self.isrunning: x = 1 while x: x = self.__fetchline() self.__fetching = False def get_service_manager_version(self): """Get Firebird Service Manager version number. :returns integer: Version number. """ self.__check_active() return self._QI(ibase.isc_info_svc_version) def get_server_version(self): """Get Firebird version. :returns string: Firebird version (example: 'LI-V2.5.2.26536 Firebird 2.5'). """ self.__check_active() return self._QS(ibase.isc_info_svc_server_version) def get_architecture(self): """Get Firebird Server architecture. :returns string: Architecture (example: 'Firebird/linux AMD64'). """ self.__check_active() return self._QS(ibase.isc_info_svc_implementation) def get_home_directory(self): """Get Firebird Home (installation) Directory. :returns string: Directory path. """ self.__check_active() return self._QS(ibase.isc_info_svc_get_env) def get_security_database_path(self): """Get full path to Firebird security database. 
        :returns string: Path (path+filename) to security database.
        """
        self.__check_active()
        return self._QS(ibase.isc_info_svc_user_dbpath)

    def get_lock_file_directory(self):
        """Get directory location for Firebird lock files.

        :returns string: Directory path.
        """
        self.__check_active()
        return self._QS(ibase.isc_info_svc_get_env_lock)

    def get_server_capabilities(self):
        """Get list of Firebird capabilities.

        :returns tuple: Capability info codes for each capability reported
            by server.

        The following fdb.services constants define the possible info codes
        returned::

            CAPABILITY_MULTI_CLIENT
            CAPABILITY_REMOTE_HOP
            CAPABILITY_SERVER_CONFIG
            CAPABILITY_QUOTED_FILENAME
            CAPABILITY_NO_SERVER_SHUTDOWN

        Example::

            >>> fdb.services.CAPABILITY_REMOTE_HOP in svc.get_server_capabilities()
            True
        """
        self.__check_active()
        capabilities = self._QI(ibase.isc_info_svc_capabilities)
        return tuple([x for x in (CAPABILITY_MULTI_CLIENT,
                                  CAPABILITY_REMOTE_HOP,
                                  CAPABILITY_SERVER_CONFIG,
                                  CAPABILITY_QUOTED_FILENAME,
                                  CAPABILITY_NO_SERVER_SHUTDOWN)
                      if capabilities & x])

    def get_message_file_directory(self):
        """Get directory with Firebird message file.

        :returns string: Directory path.
        """
        self.__check_active()
        return self._QS(ibase.isc_info_svc_get_env_msg)

    def get_connection_count(self):
        """Get number of attachments to server.

        :returns integer: Number of attachments.
        """
        self.__check_active()
        return self._get_isc_info_svc_svr_db_info()[0]

    def get_attached_database_names(self):
        """Get list of attached databases.

        :returns list: Filenames of attached databases.
        """
        self.__check_active()
        return self._get_isc_info_svc_svr_db_info()[1]

    def get_log(self, callback=None):
        """Request content of Firebird Server log. **(ASYNC service)**

        :param function callback: Function to call back with each output
            line. Function must accept only one parameter: line of output.

        If `callback` is not specified, log content could be retrieved
        through :meth:`readline`, :meth:`readlines`, iteration over
        `Connection` or ignored via call to :meth:`wait`.

        .. note::

           Until the log content is fully fetched from the service (or
           ignored via :meth:`wait`), any attempt to start another
           asynchronous service will fail with an exception.
        """
        self.__check_active()
        self._act(_ServiceActionRequestBuilder(
            ibase.isc_action_svc_get_ib_log))
        self.__fetching = True
        self.__eof = False
        if callback:
            for line in self:
                callback(line)

    def get_limbo_transaction_ids(self, database):
        """Get list of transactions in limbo.

        :param string database: Database filename or alias.

        :returns list: Transaction IDs.

        :raises InternalError: When can't process the result buffer.
        """
        self.__check_active()
        _checkString(database)
        reqBuf = _ServiceActionRequestBuilder()
        reqBuf.add_option_mask(ibase.isc_spb_rpr_list_limbo_trans)
        raw = self._repair_action(database, reqBuf, line_separator='')
        raw = ibase.b(raw)
        nBytes = len(raw)
        transIDs = []
        i = 0
        while i < nBytes:
            byte = ibase.ord2(raw[i])
            if byte in (ibase.isc_spb_single_tra_id,
                        ibase.isc_spb_multi_tra_id):
                # The transaction ID is a 32-bit integer that begins
                # immediately after position i.
                transID = struct.unpack('i', raw[i + 1:i + 5])[0]
                i += 5  # Advance past the marker byte and the 32-bit
                        # integer.
                transIDs.append(transID)
            else:
                raise fdb.InternalError('Unable to process buffer contents'
                                        ' beginning at position %d.' % i)
        return transIDs

    def _resolve_limbo_transaction(self, resolution, database,
                                   transaction_id):
        _checkString(database)
        reqBuf = _ServiceActionRequestBuilder()
        reqBuf.add_numeric(resolution, transaction_id)
        self._repair_action(database, reqBuf)

    def commit_limbo_transaction(self, database, transaction_id):
        """Resolve limbo transaction with commit.

        :param string database: Database filename or alias.
        :param integer transaction_id: ID of Transaction to resolve.
        """
        self.__check_active()
        self._resolve_limbo_transaction(ibase.isc_spb_rpr_commit_trans,
                                        database, transaction_id)

    def rollback_limbo_transaction(self, database, transaction_id):
        """Resolve limbo transaction with rollback.
        :param string database: Database filename or alias.
        :param integer transaction_id: ID of Transaction to resolve.
        """
        self.__check_active()
        self._resolve_limbo_transaction(ibase.isc_spb_rpr_rollback_trans,
                                        database, transaction_id)

    def get_statistics(self, database,
                       show_only_db_log_pages=0,
                       show_only_db_header_pages=0,
                       show_user_data_pages=1,
                       show_user_index_pages=1,
                       # 2004.06.06: False by default b/c gstat behaves
                       # that way:
                       show_system_tables_and_indexes=0,
                       show_record_versions=0,
                       callback=None,
                       tables=None):
        """Request database statistics. **(ASYNC service)**

        :param string database: Database specification.
        :param integer show_only_db_log_pages: `1` to analyze only log
            pages.
        :param integer show_only_db_header_pages: `1` to analyze only
            database header. When set, all other parameters are ignored.
        :param integer show_user_data_pages: `0` to skip user data analysis.
        :param integer show_user_index_pages: `0` to skip user index
            analysis.
        :param integer show_system_tables_and_indexes: `1` to analyze system
            tables and indices.
        :param integer show_record_versions: `1` to analyze record versions.
        :param function callback: Function to call back with each output
            line. Function must accept only one parameter: line of output.
        :param string_or_list tables: table name or iterable of table names.

        If `callback` is not specified, the statistical report could be
        retrieved through :meth:`readline`, :meth:`readlines`, iteration
        over `Connection` or ignored via call to :meth:`wait`.

        .. note::

           Until the report is fully fetched from the service (or ignored
           via :meth:`wait`), any attempt to start another asynchronous
           service will fail with an exception.
""" self.__check_active() _checkString(database) reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_db_stats) # Request for header negates all other options if show_only_db_header_pages: show_only_db_log_pages = show_user_data_pages = 0 show_user_index_pages = show_system_tables_and_indexes =0 show_record_versions = 0 optionMask = 0 if show_user_data_pages: optionMask |= ibase.isc_spb_sts_data_pages if show_only_db_log_pages: optionMask |= ibase.isc_spb_sts_db_log if show_only_db_header_pages: optionMask |= ibase.isc_spb_sts_hdr_pages if show_user_index_pages: optionMask |= ibase.isc_spb_sts_idx_pages if show_system_tables_and_indexes: optionMask |= ibase.isc_spb_sts_sys_relations if show_record_versions: optionMask |= ibase.isc_spb_sts_record_versions reqBuf.add_database_name(database) reqBuf.add_option_mask(optionMask) if tables is not None: if isinstance(tables, types.StringTypes): tables = (tables,) cmdline = ['-t'] cmdline.extend(tables) reqBuf.add_string(ibase.isc_spb_command_line, ' '.join(cmdline)) self._act(reqBuf) self.__fetching = True self.__eof = False if callback: for line in self: callback(line) # Backup and Restore methods: def backup(self, source_database, dest_filenames, dest_file_sizes=(), #factor=None, # YYY:What is this? # Backup operation optionMask: ignore_checksums=0, ignore_limbo_transactions=0, metadata_only=0, collect_garbage=1, transportable=1, convert_external_tables_to_internal=0, compressed=1, no_db_triggers=0, callback=None, stats=None): """Request logical (GBAK) database backup. **(ASYNC service)** :param string source_database: Source database specification. :param dest_filenames: Backup file(s) specification. :type dest_filenames: string or tuple of strings :param dest_file_sizes: (optional) specification of backup file max. sizes. :type dest_file_sizes: tuple of integers :param integer ignore_checksums: `1` to ignore checksums. :param integer ignore_limbo_transactions: `1` to ignore limbo transactions. 
        :param integer metadata_only: `1` to create only metadata backup.
        :param integer collect_garbage: `0` to skip garbage collection.
        :param integer transportable: `0` to not create transportable
            backup.
        :param integer convert_external_tables_to_internal: `1` to convert
            external tables to internal ones.
        :param integer compressed: `0` to create uncompressed backup.
        :param integer no_db_triggers: `1` to disable database triggers
            temporarily.
        :param function callback: Function to call back with each output
            line. Function must accept only one parameter: line of output.
        :param list stats: List of arguments for run-time statistics, see
            STATS_* constants.

        If `callback` is not specified, the backup log could be retrieved
        through :meth:`readline`, :meth:`readlines`, iteration over
        `Connection` or ignored via call to :meth:`wait`.

        .. note::

           Until the backup report is fully fetched from the service (or
           ignored via :meth:`wait`), any attempt to start another
           asynchronous service will fail with an exception.
        """
        self.__check_active()
        # Begin parameter validation section.
        _checkString(source_database)
        dest_filenames = self._require_str_or_tuple_of_str(dest_filenames)
        destFilenamesCount = len(dest_filenames)
        # 2004.07.17: YYY: Temporary warning:
        # Current (1.5.1) versions of the database engine appear to hang the
        # Services API request when it contains more than 11 destFilenames
        if destFilenamesCount > 11:
            warnings.warn(
                'Current versions of the database engine appear to hang when'
                ' passed a request to generate a backup with more than 11'
                ' constituents.',
                RuntimeWarning)
        if destFilenamesCount > 9999:
            raise fdb.ProgrammingError("The database engine cannot output a"
                                       " single source database to more than"
                                       " 9999 backup files.")
        self._validate_companion_string_numeric_sequences(
            dest_filenames, dest_file_sizes,
            'destination filenames', 'destination file sizes')
        if len(self._exclude_elements_of_types(dest_file_sizes,
                                               (int, ibase.mylong))) > 0:
            raise TypeError("Every element of dest_file_sizes must be an int"
                            " or long.")
        destFileSizesCount = len(dest_file_sizes)
        # The following should have already been checked by
        # _validate_companion_string_numeric_sequences.
        assert destFileSizesCount == destFilenamesCount - 1
        # End parameter validation section.
        # Begin option bitmask setup section.
        optionMask = 0
        if ignore_checksums:
            optionMask |= ibase.isc_spb_bkp_ignore_checksums
        if ignore_limbo_transactions:
            optionMask |= ibase.isc_spb_bkp_ignore_limbo
        if metadata_only:
            optionMask |= ibase.isc_spb_bkp_metadata_only
        if not collect_garbage:
            optionMask |= ibase.isc_spb_bkp_no_garbage_collect
        if not transportable:
            optionMask |= ibase.isc_spb_bkp_non_transportable
        if convert_external_tables_to_internal:
            optionMask |= ibase.isc_spb_bkp_convert
        if not compressed:
            optionMask |= ibase.isc_spb_bkp_expand
        if no_db_triggers:
            optionMask |= ibase.isc_spb_bkp_no_triggers
        # End option bitmask setup section.
        # Construct the request buffer.
        request = _ServiceActionRequestBuilder(ibase.isc_action_svc_backup)
        # Source database filename:
        request.add_database_name(source_database)
        # Backup filenames and sizes:
        request.add_string_numeric_pairs_sequence(
            ibase.isc_spb_bkp_file, dest_filenames,
            ibase.isc_spb_bkp_length, dest_file_sizes)
        # Options bitmask:
        request.add_numeric(ibase.isc_spb_options, optionMask)
        # Tell the service to make its output available to us.
        request.add_code(ibase.isc_spb_verbose)
        # Handle request for run-time statistics.
        if stats:
            request.add_string(ibase.isc_spb_bkp_stat, ''.join(stats))
        # Done constructing the request buffer.
        self._act(request)
        self.__fetching = True
        self.__eof = False
        if callback:
            for line in self:
                callback(line)

    def restore(self, source_filenames, dest_filenames, dest_file_pages=(),
                page_size=None,
                cache_buffers=None,
                access_mode_read_only=0,
                replace=0,
                deactivate_indexes=0,
                do_not_restore_shadows=0,
                do_not_enforce_constraints=0,
                commit_after_each_table=0,
                use_all_page_space=0,
                no_db_triggers=0,
                metadata_only=0,
                callback=None,
                stats=None):
        """Request database restore from logical (GBAK) backup.
        **(ASYNC service)**

        :param source_filenames: Backup file(s) specification.
        :type source_filenames: string or tuple of strings
        :param dest_filenames: Database file(s) specification.
        :type dest_filenames: string or tuple of strings
        :param dest_file_pages: (optional) specification of database file
            max. # of pages.
        :type dest_file_pages: tuple of integers
        :param integer page_size: (optional) Page size.
        :param integer cache_buffers: (optional) Size of page-cache for this
            database.
        :param integer access_mode_read_only: `1` to create R/O database.
        :param integer replace: `1` to replace existing database.
        :param integer deactivate_indexes: `1` to not activate indexes.
        :param integer do_not_restore_shadows: `1` to not restore shadows.
        :param integer do_not_enforce_constraints: `1` to not enforce
            constraints during restore.
        :param integer commit_after_each_table: `1` to commit after each
            table is restored.
        :param integer use_all_page_space: `1` to use all space on data
            pages.
        :param integer no_db_triggers: `1` to disable database triggers
            temporarily.
        :param integer metadata_only: `1` to restore only database metadata.
        :param function callback: Function to call back with each output
            line. Function must accept only one parameter: line of output.
        :param list stats: List of arguments for run-time statistics, see
            STATS_* constants.
        If `callback` is not specified, the restore log could be retrieved
        through :meth:`readline`, :meth:`readlines`, iteration over
        `Connection` or ignored via call to :meth:`wait`.

        .. note::

           Until the restore report is fully fetched from the service (or
           ignored via :meth:`wait`), any attempt to start another
           asynchronous service will fail with an exception.
        """
        self.__check_active()
        # Begin parameter validation section.
        source_filenames = self._require_str_or_tuple_of_str(
            source_filenames)
        dest_filenames = self._require_str_or_tuple_of_str(dest_filenames)
        self._validate_companion_string_numeric_sequences(
            dest_filenames, dest_file_pages,
            'destination filenames', 'destination file page counts')
        # End parameter validation section.
        # Begin option bitmask setup section.
        optionMask = 0
        if replace:
            optionMask |= ibase.isc_spb_res_replace
        else:
            optionMask |= ibase.isc_spb_res_create
        if deactivate_indexes:
            optionMask |= ibase.isc_spb_res_deactivate_idx
        if do_not_restore_shadows:
            optionMask |= ibase.isc_spb_res_no_shadow
        if do_not_enforce_constraints:
            optionMask |= ibase.isc_spb_res_no_validity
        if commit_after_each_table:
            optionMask |= ibase.isc_spb_res_one_at_a_time
        if use_all_page_space:
            optionMask |= ibase.isc_spb_res_use_all_space
        if no_db_triggers:
            optionMask |= ibase.isc_spb_bkp_no_triggers
        if metadata_only:
            optionMask |= ibase.isc_spb_bkp_metadata_only
        # End option bitmask setup section.
        # Construct the request buffer.
        request = _ServiceActionRequestBuilder(ibase.isc_action_svc_restore)
        # Backup filenames:
        request.add_string_sequence(ibase.isc_spb_bkp_file, source_filenames)
        # Database filenames:
        request.add_string_numeric_pairs_sequence(
            ibase.isc_spb_dbname, dest_filenames,
            ibase.isc_spb_res_length, dest_file_pages)
        # Page size of the restored database:
        if page_size:
            request.add_numeric(ibase.isc_spb_res_page_size, page_size)
        # cache_buffers is the number of default cache buffers to configure
        # for attachments to the restored database:
        if cache_buffers:
            request.add_numeric(ibase.isc_spb_res_buffers, cache_buffers)
        # access_mode_read_only controls whether the restored database is
        # "mounted" in read only or read-write mode:
        if access_mode_read_only:
            accessMode = ibase.isc_spb_prp_am_readonly
        else:
            accessMode = ibase.isc_spb_prp_am_readwrite
        request.add_numeric(ibase.isc_spb_res_access_mode, accessMode,
                            numCType='B')
        # Options bitmask:
        request.add_numeric(ibase.isc_spb_options, optionMask)
        # Tell the service to make its output available to us.
        request.add_code(ibase.isc_spb_verbose)
        # Handle request for run-time statistics.
        if stats:
            request.add_string(ibase.isc_spb_res_stat, ''.join(stats))
        # Done constructing the request buffer.
        self._act(request)
        self.__fetching = True
        self.__eof = False
        if callback:
            for line in self:
                callback(line)

    # nbackup methods:
    def nbackup(self, source_database, dest_filename, nbackup_level=0,
                no_db_triggers=0):
        """Perform physical (NBACKUP) database backup.

        :param string source_database: Source database specification.
        :param dest_filename: Backup file specification.
        :param integer nbackup_level: Incremental backup level.
        :param integer no_db_triggers: `1` to disable database triggers
            temporarily.

        .. note::

           Method call will not return until the action is finished.
        """
        self.__check_active()
        # Begin parameter validation section.
        _checkString(source_database)
        _checkString(dest_filename)
        dest_filename = ibase.b(dest_filename)
        # Begin option bitmask setup section.
        optionMask = 0
        if no_db_triggers:
            optionMask |= ibase.isc_spb_bkp_no_triggers
        # End option bitmask setup section.
        # Construct the request buffer.
        request = _ServiceActionRequestBuilder(ibase.isc_action_svc_nbak)
        # Source database filename:
        request.add_database_name(source_database)
        # Backup filename:
        request.add_string(ibase.isc_spb_nbk_file, dest_filename)
        # Backup level:
        request.add_numeric(ibase.isc_spb_nbk_level, nbackup_level)
        # Options bitmask:
        request.add_numeric(ibase.isc_spb_options, optionMask)
        # Done constructing the request buffer.
        self._act(request)
        self.wait()

    def nrestore(self, source_filenames, dest_filename, no_db_triggers=0):
        """Perform restore from physical (NBACKUP) database backup.

        :param source_filenames: Backup file(s) specification.
        :type source_filenames: string or tuple of strings
        :param dest_filename: Database file specification.
        :param integer no_db_triggers: `1` to disable database triggers temporarily.

        .. note:: Method call will not return until action is finished.
        """
        self.__check_active()
        # Begin parameter validation section.
        source_filenames = self._require_str_or_tuple_of_str(source_filenames)
        _checkString(dest_filename)
        dest_filename = ibase.b(dest_filename)
        # Begin option bitmask setup section.
        optionMask = 0
        if no_db_triggers:
            optionMask |= ibase.isc_spb_bkp_no_triggers
        # End option bitmask setup section.
        # Construct the request buffer.
        request = _ServiceActionRequestBuilder(ibase.isc_action_svc_nrest)
        # Destination database filename:
        request.add_string(ibase.isc_spb_dbname, dest_filename)
        # Backup filenames:
        request.add_string_sequence(ibase.isc_spb_nbk_file, source_filenames)
        # Options bitmask:
        request.add_numeric(ibase.isc_spb_options, optionMask)
        # Done constructing the request buffer.
        self._act(request)
        self.wait()

    # Trace
    def trace_start(self, config, name=None):
        """Start new trace session. **(ASYNC service)**

        :param string config: Trace session configuration.
        :param string name: (optional) Trace session name.
        :returns integer: Trace session ID.
        :raises DatabaseError: When session ID is not returned on start.

        Trace session output could be retrieved through :meth:`readline`,
        :meth:`readlines`, iteration over `Connection` or ignored via call to
        :meth:`wait`.

        .. note:: Until session output is fully fetched from the service (or
           ignored via :meth:`wait`), any attempt to start another
           asynchronous service, including a call to any `trace_` method,
           will fail with an exception.
        """
        self.__check_active()
        if name is not None:
            _checkString(name)
        _checkString(config)
        # Construct the request buffer.
        reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_start)
        # Trace name:
        if name is not None:
            reqBuf.add_string(ibase.isc_spb_trc_name, name)
        # Trace configuration:
        reqBuf.add_string(ibase.isc_spb_trc_cfg, config)
        self._act(reqBuf)
        self.__fetching = True
        self.__eof = False
        response = self._Q(ibase.isc_info_svc_line,
                           self.QUERY_TYPE_PLAIN_STRING)
        if response.startswith('Trace session ID'):
            return int(response.split()[3])
        else:
            # The response should contain the error message.
            raise fdb.DatabaseError(response)

    def trace_stop(self, trace_id):
        """Stop trace session.

        :param integer trace_id: Trace session ID.
        :returns string: Text with confirmation that session was stopped.
        :raises DatabaseError: When trace session is not stopped.
        :raises OperationalError: When server can't perform requested operation.
        """
        self.__check_active()
        # Construct the request buffer.
        reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_stop)
        reqBuf.add_numeric(ibase.isc_spb_trc_id, trace_id)
        response = self._act_and_return_textual_results(reqBuf)
        if not response.startswith("Trace session ID %i stopped" % trace_id):
            # The response should contain the error message.
            raise fdb.DatabaseError(response)
        return response

    def trace_suspend(self, trace_id):
        """Suspend trace session.

        :param integer trace_id: Trace session ID.
        :returns string: Text with confirmation that session was paused.
        :raises DatabaseError: When trace session is not paused.
        :raises OperationalError: When server can't perform requested operation.
        """
        self.__check_active()
        # Construct the request buffer.
        reqBuf = _ServiceActionRequestBuilder(
            ibase.isc_action_svc_trace_suspend)
        reqBuf.add_numeric(ibase.isc_spb_trc_id, trace_id)
        response = self._act_and_return_textual_results(reqBuf)
        if not response.startswith("Trace session ID %i paused" % trace_id):
            # The response should contain the error message.
            raise fdb.DatabaseError(response)
        return response

    def trace_resume(self, trace_id):
        """Resume trace session.

        :param integer trace_id: Trace session ID.
        :returns string: Text with confirmation that session was resumed.
        :raises DatabaseError: When trace session is not resumed.
        :raises OperationalError: When server can't perform requested operation.
        """
        self.__check_active()
        # Construct the request buffer.
        reqBuf = _ServiceActionRequestBuilder(
            ibase.isc_action_svc_trace_resume)
        reqBuf.add_numeric(ibase.isc_spb_trc_id, trace_id)
        response = self._act_and_return_textual_results(reqBuf)
        if not response.startswith("Trace session ID %i resumed" % trace_id):
            # The response should contain the error message.
            raise fdb.DatabaseError(response)
        return response

    def trace_list(self):
        """Get information about existing trace sessions.

        :returns dictionary: Mapping `SESSION_ID -> SESSION_PARAMS`

           Session parameters is another dictionary with the following keys:

           :name: (string) (optional) Session name if specified.
           :date: (datetime.datetime) Session start date and time.
           :user: (string) Trace user name.
           :flags: (list of strings) Session flags.
        :raises OperationalError: When server can't perform requested operation.
        """
        self.__check_active()
        # Construct the request buffer.
        reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_list)
        # Get and parse the returned list.
        session_list = self._act_and_return_textual_results(reqBuf)
        result = {}
        session_id = None
        for line in session_list.split('\n'):
            if not line.strip():
                session_id = None
            elif line.startswith("Session ID:"):
                session_id = int(line.split(':')[1].strip())
                result[session_id] = dict()
            elif line.lstrip().startswith("name:"):
                result[session_id]["name"] = line.split(':')[1].strip()
            elif line.lstrip().startswith("user:"):
                result[session_id]["user"] = line.split(':')[1].strip()
            elif line.lstrip().startswith("date:"):
                result[session_id]["date"] = datetime.datetime.strptime(
                    line.split(':', 1)[1].strip(), '%Y-%m-%d %H:%M:%S')
            elif line.lstrip().startswith("flags:"):
                result[session_id]["flags"] = line.split(':')[1].strip().split(',')
            else:
                raise fdb.OperationalError(
                    "Unexpected line in trace session list:`%s`" % line)
        return result

    # Database property alteration methods:
    def set_default_page_buffers(self, database, n):
        """Set individual page cache size for Database.

        :param string database: Database filename or alias.
        :param integer n: Number of pages.
        """
        self.__check_active()
        _checkString(database)
        self._property_action_with_one_num_code(
            database, ibase.isc_spb_prp_page_buffers, n)

    def set_sweep_interval(self, database, n):
        """Set threshold for automatic sweep.

        :param string database: Database filename or alias.
        :param integer n: Sweep threshold, or `0` to disable automatic sweep.
        """
        self.__check_active()
        _checkString(database)
        self._property_action_with_one_num_code(
            database, ibase.isc_spb_prp_sweep_interval, n)

    def set_reserve_page_space(self, database, reserve_space):
        """Set data page space reservation policy.

        :param string database: Database filename or alias.
        :param boolean reserve_space: `True` to reserve space, `False` to not.
""" self.__check_active() _checkString(database) if reserve_space: reserveCode = ibase.isc_spb_prp_res else: reserveCode = ibase.isc_spb_prp_res_use_full self._property_action_with_one_num_code(database, ibase.isc_spb_prp_reserve_space, reserveCode, num_ctype='b') def set_write_mode(self, database, mode): """Set Disk Write Mode: Sync (forced writes) or Async (buffered). :param string database: Database filename or alias. :param integer mode: One from following constants: :data:`~fdb.services.WRITE_FORCED` or :data:`~fdb.services.WRITE_BUFFERED` """ self.__check_active() _checkString(database) if mode not in (WRITE_FORCED, WRITE_BUFFERED): raise ValueError('mode must be one of the following constants:' ' fdb.services.WRITE_FORCED, fdb.services.WRITE_BUFFERED.') self._property_action_with_one_num_code(database, ibase.isc_spb_prp_write_mode, mode, num_ctype='b') def set_access_mode(self, database, mode): """Set Database Access mode: Read Only or Read/Write :param string database: Database filename or alias. :param integer mode: One from following constants: :data:`~fdb.services.ACCESS_READ_WRITE` or :data:`~fdb.services.ACCESS_READ_ONLY` """ self.__check_active() _checkString(database) if mode not in (ACCESS_READ_WRITE, ACCESS_READ_ONLY): raise ValueError('mode must be one of the following constants:' ' fdb.services.ACCESS_READ_WRITE, fdb.services.ACCESS_READ_ONLY.') self._property_action_with_one_num_code(database, ibase.isc_spb_prp_access_mode, mode, num_ctype='b') def set_sql_dialect(self, database, dialect): """Set SQL Dialect for Database. :param string database: Database filename or alias. :param integer dialect: `1` or `3`. 
""" self.__check_active() _checkString(database) # The IB 6 API Guide says that dialect "must be 1 or 3", but other # dialects may become valid in future versions, so don't require # dialect in (1, 3) self._property_action_with_one_num_code(database, ibase.isc_spb_prp_set_sql_dialect, dialect) def activate_shadow(self, database): """Activate Database Shadow(s). :param string database: Database filename or alias. """ self.__check_active() _checkString(database) reqBuf = _ServiceActionRequestBuilder() reqBuf.add_option_mask(ibase.isc_spb_prp_activate) self._property_action(database, reqBuf) def no_linger(self, database): """Set one-off override for database linger. :param string database: Database filename or alias. """ self.__check_active() _checkString(database) reqBuf = _ServiceActionRequestBuilder() reqBuf.add_option_mask(ibase.isc_spb_prp_nolinger) self._property_action(database, reqBuf) # Database repair/maintenance methods: def shutdown(self, database, shutdown_mode, shutdown_method, timeout): """Database shutdown. :param string database: Database filename or alias. :param integer shutdown_mode: One from following constants: :data:`~fdb.services.SHUT_LEGACY`, :data:`~fdb.services.SHUT_SINGLE`, :data:`~fdb.services.SHUT_MULTI` or :data:`~fdb.services.SHUT_FULL`. :param integer shutdown_method: One from following constants: :data:`~fdb.services.SHUT_FORCE`, :data:`~fdb.services.SHUT_DENY_NEW_TRANSACTIONS` or :data:`~fdb.services.SHUT_DENY_NEW_ATTACHMENTS`. :param integer timeout: Time in seconds, that the shutdown must complete in. .. seealso:: See also :meth:`~Connection.bring_online` method. 
""" self.__check_active() _checkString(database) if shutdown_mode not in (SHUT_LEGACY, SHUT_SINGLE, SHUT_MULTI, SHUT_FULL): raise ValueError('shutdown_mode must be one of the following' ' constants: fdb.services.SHUT_LEGACY, fdb.services.SHUT_SINGLE,' ' fdbfdb.services.SHUT_MULTI,' ' fdb.services.SHUT_FULL.') if shutdown_method not in (SHUT_FORCE, SHUT_DENY_NEW_TRANSACTIONS, SHUT_DENY_NEW_ATTACHMENTS): raise ValueError('shutdown_method must be one of the following' ' constants: fdb.services.SHUT_FORCE,' ' fdb.services.SHUT_DENY_NEW_TRANSACTIONS,' ' fdb.services.SHUT_DENY_NEW_ATTACHMENTS.') reqBuf = _ServiceActionRequestBuilder() if shutdown_mode != SHUT_LEGACY: reqBuf.add_numeric(ibase.isc_spb_prp_shutdown_mode, shutdown_mode, numCType='B') reqBuf.add_numeric(shutdown_method, timeout, numCType='I') self._property_action(database, reqBuf) def bring_online(self, database, online_mode=SHUT_NORMAL): """Bring previously shut down database back online. :param string database: Database filename or alias. :param integer online_mode: (Optional) One from following constants: :data:`~fdb.services.SHUT_LEGACY`, :data:`~fdb.services.SHUT_SINGLE`, :data:`~fdb.services.SHUT_MULTI` or :data:`~fdb.services.SHUT_NORMAL` (**Default**). .. seealso:: See also :meth:`~Connection.shutdown` method. """ self.__check_active() _checkString(database) if online_mode not in (SHUT_LEGACY, SHUT_NORMAL,SHUT_SINGLE, SHUT_MULTI): raise ValueError('online_mode must be one of the following' ' constants: fdb.services.SHUT_LEGACY, fdb.services.SHUT_NORMAL,' ' fdbfdb.services.SHUT_SINGLE,' ' fdb.services.SHUT_MULTI.') reqBuf = _ServiceActionRequestBuilder() if online_mode == SHUT_LEGACY: reqBuf.add_option_mask(ibase.isc_spb_prp_db_online) else: reqBuf.add_numeric(ibase.isc_spb_prp_online_mode, online_mode, numCType='B') self._property_action(database, reqBuf) def sweep(self, database): """Perform Database Sweep. .. note:: Method call will not return until sweep is finished. 
        :param string database: Database filename or alias.
        """
        self.__check_active()
        _checkString(database)
        reqBuf = _ServiceActionRequestBuilder()
        optionMask = ibase.isc_spb_rpr_sweep_db
        reqBuf.add_option_mask(optionMask)
        return self._repair_action(database, reqBuf)

    def repair(self, database, read_only_validation=0, ignore_checksums=0,
               kill_unavailable_shadows=0, mend_database=0,
               validate_database=1, validate_record_fragments=1):
        """Database Validation and Repair.

        :param string database: Database filename or alias.
        :param integer read_only_validation: `1` to prevent any database changes.
        :param integer ignore_checksums: `1` to ignore page checksum errors.
        :param integer kill_unavailable_shadows: `1` to kill unavailable shadows.
        :param integer mend_database: `1` to fix database for backup.
        :param integer validate_database: `0` to skip database validation.
        :param integer validate_record_fragments: `0` to skip validation of
           record fragments.

        .. note:: Method call will not return until action is finished.
        """
        self.__check_active()
        _checkString(database)
        # YYY: With certain option combinations, this method raises errors
        # that may not be very comprehensible to a Python programmer who's not
        # well versed with IB/FB. Should option combination filtering be
        # done right here instead of leaving that responsibility to the
        # database engine?
        #   I think not, since any filtering done in this method is liable to
        # become outdated, or to inadvertently enforce an unnecessary,
        # crippling constraint on a certain option combination that the
        # database engine would have allowed.
        reqBuf = _ServiceActionRequestBuilder()
        optionMask = 0
        if read_only_validation:
            optionMask |= ibase.isc_spb_rpr_check_db
        if ignore_checksums:
            optionMask |= ibase.isc_spb_rpr_ignore_checksum
        if kill_unavailable_shadows:
            optionMask |= ibase.isc_spb_rpr_kill_shadows
        if mend_database:
            optionMask |= ibase.isc_spb_rpr_mend_db
        if validate_database:
            optionMask |= ibase.isc_spb_rpr_validate_db
        if validate_record_fragments:
            optionMask |= ibase.isc_spb_rpr_full
        reqBuf.add_option_mask(optionMask)
        return self._repair_action(database, reqBuf)

    # 2003.07.12: Removed method resolveLimboTransactions (dropped plans to
    # support that operation from kinterbasdb since transaction IDs are not
    # exposed at the Python level and I don't consider limbo transaction
    # resolution compelling enough to warrant exposing transaction IDs).

    def validate(self, database, include_tables=None, exclude_tables=None,
                 include_indices=None, exclude_indices=None,
                 lock_timeout=None, callback=None):
        """On-line database validation.

        :param string database: Database filename or alias.
        :param string include_tables: Pattern for table names to include
           in validation run.
        :param string exclude_tables: Pattern for table names to exclude
           from validation run.
        :param string include_indices: Pattern for index names to include
           in validation run.
        :param string exclude_indices: Pattern for index names to exclude
           from validation run.
        :param integer lock_timeout: Lock timeout (in seconds), used to
           acquire locks for tables to validate; default is 10 secs.
           0 is no-wait, -1 is infinite wait.
        :param function callback: Function to call back with each output line.
           Function must accept only one parameter: line of output.

        .. note::

           Patterns are regular expressions, processed by the same rules as
           SIMILAR TO expressions. All patterns are case-sensitive, regardless
           of database dialect. If the pattern for tables is omitted then all
           user tables will be validated.
           If the pattern for indexes is omitted then all indexes of the
           appointed tables will be validated. System tables are not validated.

        If `callback` is not specified, validation log could be retrieved
        through :meth:`readline`, :meth:`readlines`, iteration over
        `Connection` or ignored via call to :meth:`wait`.

        .. note:: Until validation report is fully fetched from the service
           (or ignored via :meth:`wait`), any attempt to start another
           asynchronous service will fail with an exception.
        """
        self.__check_active()
        _checkString(database)
        request = _ServiceActionRequestBuilder(ibase.isc_action_svc_validate)
        request.add_database_name(database)
        if include_tables is not None:
            request.add_string(ibase.isc_spb_val_tab_incl, include_tables)
        if exclude_tables is not None:
            request.add_string(ibase.isc_spb_val_tab_excl, exclude_tables)
        if include_indices is not None:
            request.add_string(ibase.isc_spb_val_idx_incl, include_indices)
        if exclude_indices is not None:
            request.add_string(ibase.isc_spb_val_idx_excl, exclude_indices)
        if lock_timeout is not None:
            request.add_numeric(ibase.isc_spb_val_lock_timeout, lock_timeout,
                                numCType='i')
        # Done constructing the request buffer.
        self._act(request)
        self.__fetching = True
        self.__eof = False
        if callback:
            for line in self:
                callback(line)

    # User management methods:
    def get_users(self, user_name=None):
        """Get information about user(s).

        :param string user_name: (Optional) When specified, returns
           information only about user with specified user name.
        :returns list: :class:`User` instances.
        """
        self.__check_active()
        if user_name is not None:
            if isinstance(user_name, ibase.myunicode):
                _checkString(user_name)
                user_name = ibase.b(user_name)
        reqBuf = _ServiceActionRequestBuilder(
            ibase.isc_action_svc_display_user)
        if user_name:
            user_name = user_name.upper()
            reqBuf.add_string(ibase.isc_spb_sec_username, user_name)
        self._act(reqBuf)
        raw = self._QR(ibase.isc_info_svc_get_users)
        users = []
        curUser = None
        pos = 1  # Ignore raw[0].
        upper_limit = len(raw) - 1
        while pos < upper_limit:
            cluster = ibase.ord2(raw[pos])
            pos += 1
            if cluster == ibase.isc_spb_sec_username:
                if curUser is not None:
                    users.append(curUser)
                    curUser = None
                (user_name, pos) = self._extract_string(raw, pos)
                curUser = User(user_name)
            elif cluster == ibase.isc_spb_sec_password:
                (password, pos) = self._extract_string(raw, pos)
                curUser.password = password
            elif cluster == ibase.isc_spb_sec_firstname:
                (firstName, pos) = self._extract_string(raw, pos)
                curUser.first_name = firstName
            elif cluster == ibase.isc_spb_sec_middlename:
                (middleName, pos) = self._extract_string(raw, pos)
                curUser.middle_name = middleName
            elif cluster == ibase.isc_spb_sec_lastname:
                (lastName, pos) = self._extract_string(raw, pos)
                curUser.last_name = lastName
            elif cluster == ibase.isc_spb_sec_groupid:
                (groupId, pos) = self._extract_int(raw, pos)
                curUser.group_id = groupId
            elif cluster == ibase.isc_spb_sec_userid:
                (userId, pos) = self._extract_int(raw, pos)
                curUser.user_id = userId
        # Handle the last user:
        if curUser is not None:
            users.append(curUser)
            curUser = None
        return users

    def add_user(self, user):
        """Add new user.

        :param user: Instance of :class:`User` with **at least** its
           :attr:`~User.name` and :attr:`~User.password` attributes specified
           as non-empty values. All other attributes are optional.
        :type user: :class:`User`

        .. note:: This method ignores the :attr:`~User.user_id`
           and :attr:`~User.group_id` attributes of :class:`~User` regardless
           of their values.
""" self.__check_active() if not user.name: raise fdb.ProgrammingError('You must specify a username.') else: _checkString(user.name) user.name = ibase.b(user.name) if not user.password: raise fdb.ProgrammingError('You must specify a password.') else: _checkString(user.password) user.password = ibase.b(user.password) reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_add_user) reqBuf.add_string(ibase.isc_spb_sec_username, user.name) reqBuf.add_string(ibase.isc_spb_sec_password, user.password) if user.first_name: user.first_name = ibase.b(user.first_name) reqBuf.add_string(ibase.isc_spb_sec_firstname, user.first_name) if user.middle_name: user.middle_name = ibase.b(user.middle_name) reqBuf.add_string(ibase.isc_spb_sec_middlename, user.middle_name) if user.last_name: user.last_name = ibase.b(user.last_name) reqBuf.add_string(ibase.isc_spb_sec_lastname, user.last_name) self._act_and_return_textual_results(reqBuf) def modify_user(self, user): """Modify user information. :param user: Instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. :type user: :class:`User` .. note:: This method sets :attr:`~User.first_name`, :attr:`~User.middle_name` and :attr:`~User.last_name` to their actual values, and ignores the :attr:`~User.user_id` and :attr:`~User.group_id` attributes regardless of their values. :attr:`~User.password` is set **only** when it has value. """ self.__check_active() reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_modify_user) if isinstance(user.name, str): user.name = ibase.b(user.name) reqBuf.add_string(ibase.isc_spb_sec_username, user.name) if isinstance(user.password, str): user.password = ibase.b(user.password) reqBuf.add_string(ibase.isc_spb_sec_password, user.password) # Change the optional attributes whether they're empty or not. 
        if isinstance(user.first_name, str):
            user.first_name = ibase.b(user.first_name)
        reqBuf.add_string(ibase.isc_spb_sec_firstname, user.first_name)
        if isinstance(user.middle_name, str):
            user.middle_name = ibase.b(user.middle_name)
        reqBuf.add_string(ibase.isc_spb_sec_middlename, user.middle_name)
        if isinstance(user.last_name, str):
            user.last_name = ibase.b(user.last_name)
        reqBuf.add_string(ibase.isc_spb_sec_lastname, user.last_name)
        self._act_and_return_textual_results(reqBuf)

    def remove_user(self, user):
        """Remove user.

        :param user: User name or Instance of :class:`User` with **at least**
           its :attr:`~User.name` attribute specified as non-empty value.
        :type user: string or :class:`User`
        """
        self.__check_active()
        if isinstance(user, User):
            username = user.name
        else:
            _checkString(user)
            user = ibase.b(user)
            username = user
        reqBuf = _ServiceActionRequestBuilder(ibase.isc_action_svc_delete_user)
        reqBuf.add_string(ibase.isc_spb_sec_username, username)
        self._act_and_return_textual_results(reqBuf)

    def user_exists(self, user):
        """Check for user's existence.

        :param user: User name or Instance of :class:`User` with **at least**
           its :attr:`~User.name` attribute specified as non-empty value.
        :type user: string or :class:`User`
        :returns boolean: `True` when the specified user exists.
        """
        self.__check_active()
        if isinstance(user, User):
            username = user.name
        else:
            username = user
        return len(self.get_users(user_name=username)) > 0

    #: (Read Only) `True` if connection is closed.
    closed = property(__get_closed)
    #: (Read Only) `True` if connection is fetching result.
    fetching = property(__get_fetching)
    #: (Read Only) (string) Firebird version number string of connected server.
    #: Uses Firebird version numbers in form: major.minor.subrelease.build
    version = property(__get_version)
    #: (Read Only) (float) Firebird version number of connected server.
    #: Only major.minor version.
    engine_version = property(__get_engine_version)


class User(object):
    def __init__(self, name=None):
        if name:
            _checkString(name)
            self.name = name.upper()
        else:
            #: User `login` name (username).
            self.name = None
        #: Password. Not returned by user output methods, but must be
        #: specified to add a user.
        self.password = None
        #: First name.
        self.first_name = None
        #: Middle name
        self.middle_name = None
        #: Last name
        self.last_name = None
        # The user id and group id are not fully supported. For details, see
        # the documentation of the "User Management Methods" of
        # services.Connection.
        #: User ID
        self.user_id = None
        #: User group ID
        self.group_id = None

    def __str__(self):
        return '<fdb.services.User %s>' % (
            (self.name is None and 'without a name')
            or 'named "%s"' % self.name)

    def load_information(self, svc):
        """Load information about user from server.

        :param svc: Open service connection.
        :type svc: :class:`Connection`
        :returns: True if information was successfully retrieved,
           False otherwise.
        :raises ProgrammingError: If user name is not defined.
        """
        if self.name is None:
            raise fdb.ProgrammingError("Can't load information about user"
                                       " without name.")
        users = svc.get_users(self.name)
        if len(users) > 0:
            user = users[0]
            self.first_name = user.first_name
            self.middle_name = user.middle_name
            self.last_name = user.last_name
            self.user_id = user.user_id
            self.group_id = user.group_id
        return len(users) > 0


class _ServiceActionRequestBuilder(object):
    # This private class helps public facilities in this module to build
    # the binary action request buffers required by the database Services API
    # using high-level, easily comprehensible syntax.
    def __init__(self, clusterIdentifier=None):
        self._buffer = []
        if clusterIdentifier:
            self.add_code(clusterIdentifier)

    def __str__(self):
        return self.render()

    def extend(self, otherRequestBuilder):
        self._buffer.append(otherRequestBuilder.render())

    def add_code(self, code):
        _code2spb(self._buffer, code)

    def add_string(self, code, s):
        _checkString(s)
        _string2spb(self._buffer, code, ibase.b(s))

    def add_string_sequence(self, code, stringSequence):
        for s in stringSequence:
            self.add_string(code, s)

    def add_string_numeric_pairs_sequence(self, stringCode, stringSequence,
                                          numericCode, numericSequence):
        stringCount = len(stringSequence)
        numericCount = len(numericSequence)
        if numericCount != stringCount - 1:
            raise ValueError("Numeric sequence must contain exactly one less"
                             " element than its companion string sequence.")
        i = 0
        while i < stringCount:
            self.add_string(stringCode, stringSequence[i])
            if i < numericCount:
                self.add_numeric(numericCode, numericSequence[i])
            i += 1

    def add_numeric(self, code, n, numCType='I'):
        _numeric2spb(self._buffer, code, n, numCType=numCType)

    def add_option_mask(self, optionMask):
        self.add_numeric(ibase.isc_spb_options, optionMask)

    def add_database_name(self, databaseName):
        # 2003.07.20: Issue a warning for a hostname-containing databaseName
        # because it will cause isc_service_start to raise an inscrutable error
        # message with Firebird 1.5 (though it would not have raised an error
        # at all with Firebird 1.0 and earlier).
        ### Todo: verify handling of P version differences, refactor
        databaseName = ibase.b(databaseName, fdb.fbcore._FS_ENCODING)
        if ibase.PYTHON_MAJOR_VER == 3:
            colonIndex = (databaseName.decode(
                fdb.fbcore._FS_ENCODING)).find(':')
        else:
            colonIndex = databaseName.find(':')
        if colonIndex != -1:
            # This code makes no provision for platforms other than Windows
            # that allow colons in paths (such as MacOS). Some of
            # kinterbasdb's current implementation (e.g., event handling) is
            # constrained to Windows or POSIX anyway.
            if not sys.platform.lower().startswith('win') or (
                # This client process is running on Windows.
                #
                # Files that don't exist might still be valid if the
                # connection is to a server other than the local machine.
                not os.path.exists(ibase.nativestr(databaseName,
                                                   fdb.fbcore._FS_ENCODING))
                # "Guess" that if the colon falls within the first two
                # characters of the string, the pre-colon portion refers to a
                # Windows drive letter rather than to a remote host.
                # This isn't guaranteed to be correct.
                and colonIndex > 1
            ):
                warnings.warn(
                    ' Unlike conventional DSNs, Services API database names'
                    ' must not include the host name; remove the "%s" from'
                    ' your database name.'
                    ' (Firebird 1.0 will accept this, but Firebird 1.5 will'
                    ' raise an error.)' % databaseName[:colonIndex + 1],
                    UserWarning
                )
        self.add_string(ibase.isc_spb_dbname, databaseName)

    def render(self):
        return ibase.b('').join(self._buffer)
fdb-1.6.1/fdb/__init__.py
#coding:utf-8
#
#   PROGRAM/MODULE: fdb
#   FILE:           __init__.py
#   DESCRIPTION:    Python driver for Firebird
#   CREATED:        8.10.2011
#
#  Software distributed under the License is distributed AS IS,
#  WITHOUT WARRANTY OF ANY KIND, either express or implied.
#  See the License for the specific language governing rights
#  and limitations under the License.
#
#  The Original Code was created by Pavel Cisar
#
#  Copyright (c) 2011 Pavel Cisar
#  and all contributors signed below.
#
#  All Rights Reserved.
#  Contributor(s): ______________________________________.
#
#  See LICENSE.TXT for details.
from fdb.fbcore import * from fdb.fbcore import __version__ from fdb import services __all__ = (# Common with KInterbasDB 'BINARY', 'Binary', 'BlobReader', 'Connection', 'ConnectionGroup', 'Cursor', 'DATETIME', 'DBAPITypeObject', 'DESCRIPTION_DISPLAY_SIZE', 'DESCRIPTION_INTERNAL_SIZE', 'DESCRIPTION_NAME', 'DESCRIPTION_NULL_OK', 'DESCRIPTION_PRECISION', 'DESCRIPTION_SCALE', 'DESCRIPTION_TYPE_CODE', 'DIST_TRANS_MAX_DATABASES', 'DataError', 'DatabaseError', 'Date', 'DateFromTicks', 'Error', 'EventConduit', 'IntegrityError', 'InterfaceError', 'InternalError', 'NUMBER', 'NotSupportedError', 'OperationalError', 'PreparedStatement', 'ProgrammingError', 'ROWID', 'STRING', 'TPB', 'TableReservation', 'Time', 'TimeFromTicks', 'TimestampFromTicks', 'Transaction', 'TransactionConflict', 'Warning', '__version__', 'apilevel', 'connect', 'create_database', 'frb_info_att_charset', 'isc_dpb_activate_shadow', 'isc_dpb_address_path', 'isc_dpb_allocation', 'isc_dpb_begin_log', 'isc_dpb_buffer_length', 'isc_dpb_cache_manager', 'isc_dpb_cdd_pathname', 'isc_dpb_connect_timeout', 'isc_dpb_damaged', 'isc_dpb_dbkey_scope', 'isc_dpb_debug', 'isc_dpb_delete_shadow', 'isc_dpb_disable_journal', 'isc_dpb_disable_wal', 'isc_dpb_drop_walfile', 'isc_dpb_dummy_packet_interval', 'isc_dpb_enable_journal', 'isc_dpb_encrypt_key', 'isc_dpb_force_write', 'isc_dpb_garbage_collect', 'isc_dpb_gbak_attach', 'isc_dpb_gfix_attach', 'isc_dpb_gsec_attach', 'isc_dpb_gstat_attach', 'isc_dpb_interp', 'isc_dpb_journal', 'isc_dpb_lc_ctype', 'isc_dpb_lc_messages', 'isc_dpb_license', 'isc_dpb_no_garbage_collect', 'isc_dpb_no_reserve', 'isc_dpb_num_buffers', 'isc_dpb_number_of_users', 'isc_dpb_old_dump_id', 'isc_dpb_old_file', 'isc_dpb_old_file_size', 'isc_dpb_old_num_files', 'isc_dpb_old_start_file', 'isc_dpb_old_start_page', 'isc_dpb_old_start_seqno', 'isc_dpb_online', 'isc_dpb_online_dump', 'isc_dpb_overwrite', 'isc_dpb_page_size', 'isc_dpb_password', 'isc_dpb_password_enc', 'isc_dpb_quit_log', 'isc_dpb_reserved', 
'isc_dpb_sec_attach', 'isc_dpb_set_db_charset', 'isc_dpb_set_db_readonly', 'isc_dpb_set_db_sql_dialect', 'isc_dpb_set_page_buffers', 'isc_dpb_shutdown', 'isc_dpb_shutdown_delay', 'isc_dpb_sql_dialect', 'isc_dpb_sql_role_name', 'isc_dpb_sweep', 'isc_dpb_sweep_interval', 'isc_dpb_sys_user_name', 'isc_dpb_sys_user_name_enc', 'isc_dpb_trace', 'isc_dpb_user_name', 'isc_dpb_verify', 'isc_dpb_version1', 'isc_dpb_wal_backup_dir', 'isc_dpb_wal_bufsize', 'isc_dpb_wal_chkptlen', 'isc_dpb_wal_grp_cmt_wait', 'isc_dpb_wal_numbufs', 'isc_dpb_working_directory', 'isc_info_active_tran_count', 'isc_info_active_transactions', 'isc_info_allocation', 'isc_info_attachment_id', 'isc_info_backout_count', 'isc_info_base_level', 'isc_info_bpage_errors', 'isc_info_creation_date', 'isc_info_cur_log_part_offset', 'isc_info_cur_logfile_name', 'isc_info_current_memory', 'isc_info_db_class', 'fb_info_page_contents', 'isc_info_db_id', 'isc_info_db_provider', 'isc_info_db_read_only', 'isc_info_db_size_in_pages', 'isc_info_db_sql_dialect', 'isc_info_delete_count', 'isc_info_dpage_errors', 'isc_info_expunge_count', 'isc_info_fetches', 'isc_info_firebird_version', 'isc_info_forced_writes', 'isc_info_implementation', 'isc_info_insert_count', 'isc_info_ipage_errors', 'isc_info_isc_version', 'isc_info_license', 'isc_info_limbo', 'isc_info_logfile', 'isc_info_marks', 'isc_info_max_memory', 'isc_info_next_transaction', 'isc_info_no_reserve', 'isc_info_num_buffers', 'isc_info_num_wal_buffers', 'isc_info_ods_minor_version', 'isc_info_ods_version', 'isc_info_oldest_active', 'isc_info_oldest_snapshot', 'isc_info_oldest_transaction', 'isc_info_page_errors', 'isc_info_page_size', 'isc_info_ppage_errors', 'isc_info_purge_count', 'isc_info_read_idx_count', 'isc_info_read_seq_count', 'isc_info_reads', 'isc_info_record_errors', 'isc_info_set_page_buffers', 'isc_info_sql_stmt_commit', 'isc_info_sql_stmt_ddl', 'isc_info_sql_stmt_delete', 'isc_info_sql_stmt_exec_procedure', 'isc_info_sql_stmt_get_segment', 
'isc_info_sql_stmt_insert', 'isc_info_sql_stmt_put_segment', 'isc_info_sql_stmt_rollback', 'isc_info_sql_stmt_savepoint', 'isc_info_sql_stmt_select', 'isc_info_sql_stmt_select_for_upd', 'isc_info_sql_stmt_set_generator', 'isc_info_sql_stmt_start_trans', 'isc_info_sql_stmt_update', 'isc_info_sweep_interval', 'isc_info_tpage_errors', 'isc_info_tra_access', 'isc_info_tra_concurrency', 'isc_info_tra_consistency', 'isc_info_tra_id', 'isc_info_tra_isolation', 'isc_info_tra_lock_timeout', 'isc_info_tra_no_rec_version', 'isc_info_tra_oldest_active', 'isc_info_tra_oldest_interesting', 'isc_info_tra_oldest_snapshot', 'isc_info_tra_read_committed', 'isc_info_tra_readonly','fb_info_tra_dbpath', 'isc_info_tra_readwrite', 'isc_info_tra_rec_version', 'isc_info_update_count', 'isc_info_user_names', 'isc_info_version', 'isc_info_wal_avg_grpc_size', 'isc_info_wal_avg_io_size', 'isc_info_wal_buffer_size', 'isc_info_wal_ckpt_length', 'isc_info_wal_cur_ckpt_interval', 'isc_info_wal_grpc_wait_usecs', 'isc_info_wal_num_commits', 'isc_info_wal_num_io', 'isc_info_wal_prv_ckpt_fname', 'isc_info_wal_prv_ckpt_poffset', 'isc_info_wal_recv_ckpt_fname', 'isc_info_wal_recv_ckpt_poffset', 'isc_info_window_turns', 'isc_info_writes', 'isc_tpb_autocommit', 'isc_tpb_commit_time', 'isc_tpb_concurrency', 'isc_tpb_consistency', 'isc_tpb_exclusive', 'isc_tpb_ignore_limbo', 'isc_tpb_lock_read', 'isc_tpb_lock_timeout', 'isc_tpb_lock_write', 'isc_tpb_no_auto_undo', 'isc_tpb_no_rec_version', 'isc_tpb_nowait', 'isc_tpb_protected', 'isc_tpb_read', 'isc_tpb_read_committed', 'isc_tpb_rec_version', 'isc_tpb_restart_requests', 'isc_tpb_shared', 'isc_tpb_verb_time', 'isc_tpb_version3', 'isc_tpb_wait', 'isc_tpb_write', 'paramstyle', 'threadsafety', # New in FDB 'ISOLATION_LEVEL_READ_COMMITED', 'ISOLATION_LEVEL_READ_COMMITED_LEGACY', 'ISOLATION_LEVEL_REPEATABLE_READ', 'ISOLATION_LEVEL_SERIALIZABLE', 'ISOLATION_LEVEL_SNAPSHOT', 'ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY', 'ISOLATION_LEVEL_READ_COMMITED_RO', 
'MAX_BLOB_SEGMENT_SIZE', 'SQL_ARRAY', 'SQL_BLOB', 'SQL_DOUBLE', 'SQL_D_FLOAT', 'SQL_FLOAT', 'SQL_INT64', 'SQL_LONG', 'SQL_QUAD', 'SQL_SHORT', 'SQL_TEXT', 'SQL_TIMESTAMP', 'SQL_TYPE_DATE', 'SQL_TYPE_TIME', 'SQL_VARYING', 'SUBTYPE_DECIMAL', 'SUBTYPE_NUMERIC', 'SQL_BOOLEAN', 'build_dpb', 'charset_map', 'load_api', 'isc_info_end', 'isc_sqlcode', 'bs', 'ConnectionWithSchema', 'ODS_FB_20','ODS_FB_21','ODS_FB_25','ODS_FB_30', ) # Present in KDB but missing in FDB # 'BASELINE_TYPE_TRANSLATION_FACILITIES', # 'CT_COMMIT', 'CT_DEFAULT', 'CT_NONTRANSPARENT', 'CT_ROLLBACK', 'CT_VETO', # 'ConduitWasClosed', 'ConnectionTimedOut', 'DEFAULT_CONCURRENCY_LEVEL', # 'ExternallyVisibleMainTransaction', 'FB_API_VER', 'Timestamp', # '_ALL_EXCEPTION_CLASSES', '_CONNECTION_TIMEOUT_SUPPORTED', # '_Cursor_execute_exception_type_filter', '_DATABASE_INFO_CODES_WITH_COUNT_RESULTS', # '_DATABASE_INFO_CODES_WITH_INT_RESULT', '_DATABASE_INFO_CODES_WITH_TIMESTAMP_RESULT', # '_DATABASE_INFO__KNOWN_LOW_LEVEL_EXCEPTIONS', '_DPBBuilder', # '_DPB_CODES_WITH_STRING_VALUE', '_DPB_CODE_WITH_INT_VALUE', # '_EVENT_HANDLING_SUPPORTED', '_FS_ENCODING', '_MINIMAL_TYPE_TRANS_TYPES', # '_NORMAL_TYPE_TRANS_IN', '_NORMAL_TYPE_TRANS_OUT', '_OUT_TRANS_FUNC_SAMPLE_ARGS', # '_TRANSACTION_CONFLICT_RAW_CODES', # '__timestamp__', '_addDatabaseInfoCodeIfPresent', '_connection_timeout', # '_ensureInitialized', '_extractDatabaseInfoCounts', # '_guessTextualBlobEncodingWhenUsingFB20AndEarlier', '_k', '_kinterbasdb', # '_look_up_array_descriptor', '_look_up_array_subtype', # '_make_output_translator_return_type_dict_from_trans_dict', # '_normalizeDatabaseIdentifier', '_request_buffer_builder', '_trans_info', # '_trans_require_dict', '_validateTPB', 'default_tpb', 'get_concurrency_level', # 'init', 'initialized', 'k_exceptions', 'portable_int', 'raw_byte_to_int', # 'raw_timestamp_to_tuple']fdb-1.6.1/fdb/monitor.py #coding:utf-8 # # PROGRAM: fdb #
MODULE: monitor.py # DESCRIPTION: Database monitoring # CREATED: 10.5.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) 2013 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. import sys import os import fdb from fdb.utils import LateBindingProperty import weakref # Current shutdown mode SHUTDOWN_MODE_ONLINE = 0 SHUTDOWN_MODE_MULTI = 1 SHUTDOWN_MODE_SINGLE = 2 SHUTDOWN_MODE_FULL = 3 # Current physical backup state BACKUP_STATE_NORMAL = 0 BACKUP_STATE_STALLED = 1 BACKUP_STATE_MERGE = 2 # State STATE_IDLE = 0 STATE_ACTIVE = 1 # Flags FLAG_NOT_SET = 0 FLAG_SET = 1 # Transaction isolation mode ISOLATION_CONSISTENCY = 0 ISOLATION_CONCURRENCY = 1 ISOLATION_READ_COMMITTED_RV = 2 ISOLATION_READ_COMMITTED_NO_RV = 3 # Wait mode INFINITE_WAIT = -1 NO_WAIT = 0 # Statistics group STAT_DATABASE = 0 STAT_ATTACHMENT = 1 STAT_TRANSACTION = 2 STAT_STATEMENT = 3 STAT_CALL = 4 # Security database SEC_DEFAULT = 'Default' SEC_SELF = 'Self' SEC_OTHER = 'Other' class Monitor(object): """Class for access to Firebird monitoring tables. """ def __init__(self): self._con = None self._ic = None self.__internal = False def __del__(self): if not self.closed: self._close() def __get_closed(self): return self._con is None def __fail_if_closed(self): if self.closed: raise fdb.ProgrammingError("Monitor is not bound to a connection.") def _close(self): self._ic.close() self._con = None self._ic = None def _set_as_internal(self): """Mark this instance as `internal` (embedded).
This blocks calls to :meth:`bind` and :meth:`close`.""" self.__internal = True self._con = weakref.proxy(self._con) #--- protected def _get_database(self): if self.__database is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$database") self.__database = DatabaseInfo(self,self._ic.fetchonemap()) else: self.__database = [] return self.__database def _get_attachments(self): if self.__attachments is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$attachments") self.__attachments = [AttachmentInfo(self,row) for row in self._ic.itermap()] else: self.__attachments = [] return self.__attachments def _get_this_attachment(self): return self.get_attachment(self._con.db_info(fdb.isc_info_attachment_id)) def _get_transactions(self): if self.__transactions is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$transactions") self.__transactions = [TransactionInfo(self,row) for row in self._ic.itermap()] else: self.__transactions = [] return self.__transactions def _get_statements(self): if self.__statements is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$statements") self.__statements = [StatementInfo(self,row) for row in self._ic.itermap()] else: self.__statements = [] return self.__statements def _get_callstack(self): if self.__callstack is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$call_stack") self.__callstack = [CallStackInfo(self,row) for row in self._ic.itermap()] else: self.__callstack = [] return self.__callstack def _get_iostats(self): if self.__iostats is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_30: self._ic.execute("""SELECT r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, 
r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, r.MON$RECORD_LOCKS, r.MON$RECORD_WAITS, r.MON$RECORD_CONFLICTS, r.MON$BACKVERSION_READS, r.MON$FRAGMENT_READS, r.MON$RECORD_RPT_READS, io.MON$PAGE_FETCHES, io.MON$PAGE_MARKS, io.MON$PAGE_READS, io.MON$PAGE_WRITES, m.MON$MEMORY_ALLOCATED, m.MON$MEMORY_USED, m.MON$MAX_MEMORY_ALLOCATED, m.MON$MAX_MEMORY_USED FROM MON$RECORD_STATS r join MON$IO_STATS io on r.MON$STAT_ID = io.MON$STAT_ID and r.MON$STAT_GROUP = io.MON$STAT_GROUP join MON$MEMORY_USAGE m on r.MON$STAT_ID = m.MON$STAT_ID and r.MON$STAT_GROUP = m.MON$STAT_GROUP""") elif self._con.ods >= fdb.ODS_FB_25: self._ic.execute("""SELECT r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, io.MON$PAGE_FETCHES, io.MON$PAGE_MARKS, io.MON$PAGE_READS, io.MON$PAGE_WRITES, m.MON$MEMORY_ALLOCATED, m.MON$MEMORY_USED, m.MON$MAX_MEMORY_ALLOCATED, m.MON$MAX_MEMORY_USED FROM MON$RECORD_STATS r join MON$IO_STATS io on r.MON$STAT_ID = io.MON$STAT_ID and r.MON$STAT_GROUP = io.MON$STAT_GROUP join MON$MEMORY_USAGE m on r.MON$STAT_ID = m.MON$STAT_ID and r.MON$STAT_GROUP = m.MON$STAT_GROUP""") elif self._con.ods >= fdb.ODS_FB_21: self._ic.execute("""SELECT r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, io.MON$PAGE_FETCHES, io.MON$PAGE_MARKS, io.MON$PAGE_READS, io.MON$PAGE_WRITES FROM MON$RECORD_STATS r join MON$IO_STATS io on r.MON$STAT_ID = io.MON$STAT_ID and r.MON$STAT_GROUP = io.MON$STAT_GROUP""") if self._con.ods >= fdb.ODS_FB_21: self.__iostats = [IOStatsInfo(self,row) for row in self._ic.itermap()] else: self.__iostats = [] return self.__iostats def _get_variables(self): if self.__variables is None: self.__fail_if_closed() if self._con.ods >= 
fdb.ODS_FB_25: self._ic.execute("select * from mon$context_variables") self.__variables = [ContextVariableInfo(self,row) for row in self._ic.itermap()] else: self.__variables = [] return self.__variables def _get_tablestats(self): if self.__tablestats is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_30: self._ic.execute("""SELECT ts.MON$STAT_ID, ts.MON$STAT_GROUP, ts.MON$TABLE_NAME, ts.MON$RECORD_STAT_ID, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, r.MON$RECORD_LOCKS, r.MON$RECORD_WAITS, r.MON$RECORD_CONFLICTS, r.MON$BACKVERSION_READS, r.MON$FRAGMENT_READS, r.MON$RECORD_RPT_READS FROM MON$TABLE_STATS ts join MON$RECORD_STATS r on ts.MON$RECORD_STAT_ID = r.MON$STAT_ID""") self.__tablestats = [TableStatsInfo(self,row) for row in self._ic.itermap()] else: self.__tablestats = [] return self.__tablestats #--- Properties #: True if link to :class:`~fdb.Connection` is closed. 
closed = property(__get_closed) db = LateBindingProperty(_get_database,None,None, ":class:`DatabaseInfo` object for attached database.") attachments = LateBindingProperty(_get_attachments,None,None, "List of all attachments.\nItems are :class:`AttachmentInfo` objects.") this_attachment = LateBindingProperty(_get_this_attachment,None,None, ":class:`AttachmentInfo` object for current connection.") transactions = LateBindingProperty(_get_transactions,None,None, "List of all transactions.\nItems are :class:`TransactionInfo` objects.") statements = LateBindingProperty(_get_statements,None,None, "List of all statements.\nItems are :class:`StatementInfo` objects.") callstack = LateBindingProperty(_get_callstack,None,None, "List with complete call stack.\nItems are :class:`CallStackInfo` objects.") iostats = LateBindingProperty(_get_iostats,None,None, "List of all I/O statistics.\nItems are :class:`IOStatsInfo` objects.") variables = LateBindingProperty(_get_variables,None,None, "List of all context variables.\nItems are :class:`ContextVariableInfo` objects.") # FB 3.0 tablestats = LateBindingProperty(_get_tablestats,None,None, "List of all table record I/O statistics.\nItems are :class:`TableStatsInfo` objects.") #--- Public def bind(self, connection): """Bind this instance to specified :class:`~fdb.Connection`. :param connection: :class:`~fdb.Connection` instance. :raises ProgrammingError: If Monitor object was set as internal (via :meth:`_set_as_internal`) or database has ODS lower than 11.1. """ if self.__internal: raise fdb.ProgrammingError("Call to 'bind' not allowed for embedded Monitor.") if self._con: self.close() if connection.ods < fdb.ODS_FB_21: raise fdb.ProgrammingError("Monitoring tables are available only " \ "for databases with ODS 11.1 and higher.") self._con = connection self._ic = self._con.trans(fdb.ISOLATION_LEVEL_READ_COMMITED_RO).cursor() self.clear() def close(self): """Sever link to :class:`~fdb.Connection`. 
:raises ProgrammingError: If Monitor object was set as internal (via :meth:`_set_as_internal`). """ if self.__internal: raise fdb.ProgrammingError("Call to 'close' not allowed for embedded Monitor.") self._close() self.clear() def clear(self): """Drop all cached information objects. Force reload of fresh monitoring information on next reference.""" self.__database = None self.__attachments = None self.__transactions = None self.__statements = None self.__callstack = None self.__iostats = None self.__variables = None self.__tablestats = None if not self.closed: self._ic.transaction.commit() def refresh(self): "Reloads fresh monitoring information." self.__fail_if_closed() self._ic.transaction.commit() self.clear() self._get_database() def get_attachment(self,id): """Get :class:`AttachmentInfo` by ID. :param int id: Attachment ID. :returns: :class:`AttachmentInfo` with specified ID or `None`. """ for attachment in self.attachments: if attachment.id == id: return attachment else: return None def get_transaction(self,id): """Get :class:`TransactionInfo` by ID. :param int id: Transaction ID. :returns: :class:`TransactionInfo` with specified ID or `None`. """ for transaction in self.transactions: if transaction.id == id: return transaction else: return None def get_statement(self,id): """Get :class:`StatementInfo` by ID. :param int id: Statement ID. :returns: :class:`StatementInfo` with specified ID or `None`. """ for statement in self.statements: if statement.id == id: return statement else: return None def get_call(self,id): """Get :class:`CallStackInfo` by ID. :param int id: Callstack ID. :returns: :class:`CallStackInfo` with specified ID or `None`. """ for call in self.callstack: if call.id == id: return call else: return None class BaseInfoItem(object): "Base class for all database monitoring objects." #: Weak reference to parent :class:`Monitor` instance. 
monitor = None def __init__(self,monitor,attributes): self.monitor = monitor if type(monitor) == weakref.ProxyType else weakref.proxy(monitor) self._attributes = dict(attributes) #--- protected def _strip_attribute(self,attr): if self._attributes.get(attr): self._attributes[attr] = self._attributes[attr].strip() #--- Protected def _get_stat_id(self): return self._attributes.get('MON$STAT_ID') #--- properties stat_id = LateBindingProperty(_get_stat_id,None,None,"Internal ID.") class DatabaseInfo(BaseInfoItem): "Information about attached database." def __init__(self,monitor,attributes): super(DatabaseInfo,self).__init__(monitor,attributes) self._strip_attribute('MON$DATABASE_NAME') self._strip_attribute('MON$OWNER') self._strip_attribute('MON$SEC_DATABASE') #--- Protected def __get_name(self): return self._attributes['MON$DATABASE_NAME'] def __get_page_size(self): return self._attributes['MON$PAGE_SIZE'] def __get_ods(self): return float('%d.%d' % (self._attributes['MON$ODS_MAJOR'], self._attributes['MON$ODS_MINOR'])) def __get_oit(self): return self._attributes['MON$OLDEST_TRANSACTION'] def __get_oat(self): return self._attributes['MON$OLDEST_ACTIVE'] def __get_ost(self): return self._attributes['MON$OLDEST_SNAPSHOT'] def __get_next_transaction(self): return self._attributes['MON$NEXT_TRANSACTION'] def __get_cache_size(self): return self._attributes['MON$PAGE_BUFFERS'] def __get_sql_dialect(self): return self._attributes['MON$SQL_DIALECT'] def __get_shutdown_mode(self): return self._attributes['MON$SHUTDOWN_MODE'] def __get_sweep_interval(self): return self._attributes['MON$SWEEP_INTERVAL'] def __get_read_only(self): return bool(self._attributes['MON$READ_ONLY']) def __get_forced_writes(self): return bool(self._attributes['MON$FORCED_WRITES']) def __get_reserve_space(self): return bool(self._attributes['MON$RESERVE_SPACE']) def __get_created(self): return self._attributes['MON$CREATION_DATE'] def __get_pages(self): return self._attributes['MON$PAGES'] def 
__get_backup_state(self): return self._attributes['MON$BACKUP_STATE'] def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_DATABASE): return io return None def __get_crypt_page(self): return self._attributes.get('MON$CRYPT_PAGE') def __get_owner(self): return self._attributes.get('MON$OWNER') def __get_security_database(self): return self._attributes.get('MON$SEC_DATABASE') def __get_tablestats(self): res = {} for io in self.monitor.tablestats: if (io.stat_id == self.stat_id) and (io.group == STAT_DATABASE): res[io.table_name] = io return res #--- properties name = property(__get_name,None,None,"Database pathname or alias.") page_size = property(__get_page_size,None,None,"Size of database page in bytes.") ods = property(__get_ods,None,None,"On-Disk Structure (ODS) version number.") oit = property(__get_oit,None,None, "Transaction ID of the oldest [interesting] transaction.") oat = property(__get_oat,None,None,"Transaction ID of the oldest active transaction.") ost = property(__get_ost,None,None, "Transaction ID of the Oldest Snapshot, i.e., the number of the OAT " \ "when the last garbage collection was done.") next_transaction = property(__get_next_transaction,None,None, "Transaction ID of the next transaction that will be started.") cache_size = property(__get_cache_size,None,None, "Number of pages allocated in the page cache.") sql_dialect = property(__get_sql_dialect,None,None,"SQL dialect of the database.") shutdown_mode = property(__get_shutdown_mode,None,None,"Current shutdown mode.") sweep_interval = property(__get_sweep_interval,None,None, "The sweep interval configured in the database header. 
" \ "Value 0 indicates that sweeping is disabled.") read_only = property(__get_read_only,None,None,"True if database is Read Only.") forced_writes = property(__get_forced_writes,None,None, "True if database uses synchronous writes.") reserve_space = property(__get_reserve_space,None,None, "True if database reserves space on data pages.") created = property(__get_created,None,None, "Creation date and time, i.e., when the database was created or last restored.") pages = property(__get_pages,None,None,"Number of pages allocated on disk.") backup_state = property(__get_backup_state,None,None, "Current state of database with respect to nbackup physical backup.") iostats = property(__get_iostats,None,None,":class:`IOStatsInfo` for this object.") # FB 3.0 crypt_page = property(__get_crypt_page,None,None,"Number of page being encrypted.") owner = property(__get_owner,None,None,"User name of database owner.") security_database = property(__get_security_database,None,None, "TYpe of security database (Default, Self or Other).") tablestats = property(__get_tablestats,None,None, "Dictionary of :class:`TableStatsInfo` instances for this object.") class AttachmentInfo(BaseInfoItem): "Information about attachment (connection) to database." 
def __init__(self,monitor,attributes): super(AttachmentInfo,self).__init__(monitor,attributes) self._strip_attribute('MON$ATTACHMENT_NAME') self._strip_attribute('MON$USER') self._strip_attribute('MON$ROLE') self._strip_attribute('MON$REMOTE_PROTOCOL') self._strip_attribute('MON$REMOTE_ADDRESS') self._strip_attribute('MON$REMOTE_PROCESS') self._strip_attribute('MON$CLIENT_VERSION') self._strip_attribute('MON$REMOTE_VERSION') self._strip_attribute('MON$REMOTE_HOST') self._strip_attribute('MON$REMOTE_OS_USER') self._strip_attribute('MON$AUTH_METHOD') #--- Protected def __get_id(self): return self._attributes['MON$ATTACHMENT_ID'] def __get_server_pid(self): return self._attributes['MON$SERVER_PID'] def __get_state(self): return self._attributes['MON$STATE'] def __get_name(self): return self._attributes['MON$ATTACHMENT_NAME'] def __get_user(self): return self._attributes['MON$USER'] def __get_role(self): return self._attributes['MON$ROLE'] def __get_remote_protocol(self): return self._attributes['MON$REMOTE_PROTOCOL'] def __get_remote_address(self): return self._attributes['MON$REMOTE_ADDRESS'] def __get_remote_pid(self): return self._attributes['MON$REMOTE_PID'] def __get_remote_process(self): return self._attributes['MON$REMOTE_PROCESS'] def __get_character_set(self): return self.monitor._con.schema.get_character_set_by_id(self._attributes['MON$CHARACTER_SET_ID']) def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def _get_transactions(self): return [t for t in self.monitor.transactions if t._attributes['MON$ATTACHMENT_ID'] == self.id] def _get_statements(self): return [s for s in self.monitor.statements if s._attributes['MON$ATTACHMENT_ID'] == self.id] def _get_variables(self): return [s for s in self.monitor.variables if s._attributes['MON$ATTACHMENT_ID'] == self.id] def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_ATTACHMENT): return io return None def __get_auth_method(self): return 
self._attributes.get('MON$AUTH_METHOD') def __get_client_version(self): return self._attributes.get('MON$CLIENT_VERSION') def __get_remote_version(self): return self._attributes.get('MON$REMOTE_VERSION') def __get_remote_os_user(self): return self._attributes.get('MON$REMOTE_OS_USER') def __get_remote_host(self): return self._attributes.get('MON$REMOTE_HOST') def __get_tablestats(self): res = {} for io in self.monitor.tablestats: if (io.stat_id == self.stat_id) and (io.group == STAT_ATTACHMENT): res[io.table_name] = io return res #--- properties id = property(__get_id,None,None,"Attachment ID.") server_pid = property(__get_server_pid,None,None,"Server process ID.") state = property(__get_state,None,None,"Attachment state (idle/active).") name = property(__get_name,None,None,"Database pathname or alias.") user = property(__get_user,None,None,"User name.") role = property(__get_role,None,None,"Role name.") remote_protocol = property(__get_remote_protocol,None,None,"Remote protocol name.") remote_address = property(__get_remote_address,None,None,"Remote address.") remote_pid = property(__get_remote_pid,None,None,"Remote client process ID.") remote_process = property(__get_remote_process,None,None,"Remote client process pathname.") character_set = property(__get_character_set,None,None, ":class:`~fdb.schema.CharacterSet` for this attachment.") timestamp = property(__get_timestamp,None,None,"Attachment date/time.") transactions = LateBindingProperty(_get_transactions,None,None, "List of transactions associated with attachment.\nItems are :class:`TransactionInfo` objects.") statements = LateBindingProperty(_get_statements,None,None, "List of statements associated with attachment.\nItems are :class:`StatementInfo` objects.") variables = LateBindingProperty(_get_variables,None,None, "List of variables associated with attachment.\nItems are :class:`ContextVariableInfo` objects.") iostats = property(__get_iostats,None,None,":class:`IOStatsInfo` for this object.") # FB 3.0 
auth_method = property(__get_auth_method,None,None,"Authentication method.") client_version = property(__get_client_version,None,None,"Client library version.") remote_version = property(__get_remote_version,None,None,"Remote protocol version.") remote_os_user = property(__get_remote_os_user,None,None,"OS user name of client process.") remote_host = property(__get_remote_host,None,None,"Name of remote host.") tablestats = property(__get_tablestats,None,None, "Dictionary of :class:`TableStatsInfo` instances for this object.") #--- Public def isactive(self): "Returns True if attachment is active." return self.state == STATE_ACTIVE def isidle(self): "Returns True if attachment is idle." return self.state == STATE_IDLE def isgcallowed(self): "Returns True if Garbage Collection is enabled for this attachment." return bool(self._attributes['MON$GARBAGE_COLLECTION']) def isinternal(self): "Returns True if attachment is internal system attachment." return bool(self._attributes.get('MON$SYSTEM_FLAG')) def terminate(self): """Terminates client session associated with this attachment. :raises ProgrammingError: If database has ODS lower than 11.2 or this attachment is the current session. """ if self.monitor._con.ods < fdb.ODS_FB_25: raise fdb.ProgrammingError("Attachments can be terminated only " \ "for databases with ODS 11.2 and higher.") elif self is self.monitor.this_attachment: raise fdb.ProgrammingError("Can't terminate current session.") else: self.monitor._ic.execute('delete from mon$attachments where mon$attachment_id = ?', (self.id,)) class TransactionInfo(BaseInfoItem): "Information about transaction."
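Methods like `_get_iostats` and `AttachmentInfo.terminate` branch on the connection's On-Disk Structure version, choosing the richest mon$ query the server supports and falling back to an empty result (or raising) below the minimum ODS. A standalone sketch of that gating, assuming the fdb convention that `ODS_FB_21`/`ODS_FB_25`/`ODS_FB_30` are the floats 11.1/11.2/12.0 (the `select_iostats_query` helper and its return strings are hypothetical):

```python
# Sketch of the ODS-version gating used throughout monitor.py: pick the
# richest mon$ query the database's On-Disk Structure supports, falling
# back to "no monitoring" below ODS 11.1 (Firebird 2.1).
ODS_FB_21, ODS_FB_25, ODS_FB_30 = 11.1, 11.2, 12.0  # assumed fdb values

def select_iostats_query(ods):  # hypothetical helper, not part of fdb
    if ods >= ODS_FB_30:
        return 'fb30-query-with-record-lock-stats'
    elif ods >= ODS_FB_25:
        return 'fb25-query-with-memory-usage'
    elif ods >= ODS_FB_21:
        return 'fb21-basic-query'
    return None  # monitoring tables unavailable; caller returns []

print(select_iostats_query(12.0))  # richest query
print(select_iostats_query(11.0))  # None: no monitoring tables
```

Testing from most recent to oldest version keeps the branches mutually exclusive without explicit upper bounds, which is exactly how the `if`/`elif` chains in `_get_iostats` are arranged.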
def __init__(self,monitor,attributes): super(TransactionInfo,self).__init__(monitor,attributes) #--- Protected def __get_id(self): return self._attributes['MON$TRANSACTION_ID'] def __get_attachment(self): return self.monitor.get_attachment(self._attributes['MON$ATTACHMENT_ID']) def __get_state(self): return self._attributes['MON$STATE'] def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def __get_top(self): return self._attributes['MON$TOP_TRANSACTION'] def __get_oldest(self): return self._attributes['MON$OLDEST_TRANSACTION'] def __get_oldest_active(self): return self._attributes['MON$OLDEST_ACTIVE'] def __get_isolation_mode(self): return self._attributes['MON$ISOLATION_MODE'] def __get_lock_timeout(self): return self._attributes['MON$LOCK_TIMEOUT'] def _get_statements(self): return [s for s in self.monitor.statements if s._attributes['MON$TRANSACTION_ID'] == self.id] def _get_variables(self): return [s for s in self.monitor.variables if s._attributes['MON$TRANSACTION_ID'] == self.id] def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_TRANSACTION): return io return None def __get_tablestats(self): res = {} for io in self.monitor.tablestats: if (io.stat_id == self.stat_id) and (io.group == STAT_TRANSACTION): res[io.table_name] = io return res #--- properties id = property(__get_id,None,None,"Transaction ID.") attachment = property(__get_attachment,None,None, ":class:`AttachmentInfo` instance to which this transaction belongs.") state = property(__get_state,None,None,"Transaction state (idle/active).") timestamp = property(__get_timestamp,None,None,"Transaction start date/time.") top = property(__get_top,None,None,"Top transaction.") oldest = property(__get_oldest,None,None,"Oldest transaction (local OIT).") oldest_active = property(__get_oldest_active,None,None,"Oldest active transaction (local OAT).") isolation_mode = property(__get_isolation_mode,None,None,"Transaction isolation mode 
code.") lock_timeout = property(__get_lock_timeout,None,None,"Lock timeout.") statements = LateBindingProperty(_get_statements,None,None, "List of statements associated with transaction.\nItems are :class:`StatementInfo` objects.") variables = LateBindingProperty(_get_variables,None,None, "List of variables associated with transaction.\nItems are :class:`ContextVariableInfo` objects.") iostats = property(__get_iostats,None,None,":class:`IOStatsInfo` for this object.") # FB 3.0 tablestats = property(__get_tablestats,None,None, "Dictionary of :class:`TableStatsInfo` instances for this object.") #--- Public def isactive(self): "Returns True if transaction is active." return self.state == STATE_ACTIVE def isidle(self): "Returns True if transaction is idle." return self.state == STATE_IDLE def isreadonly(self): "Returns True if transaction is Read Only." return self._attributes['MON$READ_ONLY'] == FLAG_SET #return bool(self._attributes['MON$READ_ONLY']) def isautocommit(self): "Returns True for autocommited transaction." return self._attributes['MON$AUTO_COMMIT'] == FLAG_SET #return bool(self._attributes['MON$AUTO_COMMIT']) def isautoundo(self): "Returns True for transaction with automatic undo." return self._attributes['MON$AUTO_UNDO'] == FLAG_SET #return bool(self._attributes['MON$AUTO_UNDO']) class StatementInfo(BaseInfoItem): "Information about executed SQL statement." 
def __init__(self,monitor,attributes): super(StatementInfo,self).__init__(monitor,attributes) self._strip_attribute('MON$SQL_TEXT') self._strip_attribute('MON$EXPLAINED_PLAN') #--- Protected def __get_id(self): return self._attributes['MON$STATEMENT_ID'] def __get_attachment(self): return self.monitor.get_attachment(self._attributes['MON$ATTACHMENT_ID']) def __get_transaction(self): tr = self._attributes['MON$TRANSACTION_ID'] return None if tr is None else self.monitor.get_transaction(tr) def __get_state(self): return self._attributes['MON$STATE'] def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def __get_sql_text(self): return self._attributes['MON$SQL_TEXT'] def __get_callstack(self): callstack = [x for x in self.monitor.callstack if ((x._attributes['MON$STATEMENT_ID'] == self.id) and (x._attributes['MON$CALLER_ID'] is None))] if len(callstack) > 0: item = callstack[0] while item is not None: caller_id = item.id item = None for x in self.monitor.callstack: if x._attributes['MON$CALLER_ID'] == caller_id: callstack.append(x) item = x break return callstack def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_STATEMENT): return io return None def __get_plan(self): return self._attributes.get('MON$EXPLAINED_PLAN') def __get_tablestats(self): res = {} for io in self.monitor.tablestats: if (io.stat_id == self.stat_id) and (io.group == STAT_STATEMENT): res[io.table_name] = io return res #--- properties id = property(__get_id,None,None,"Statement ID.") attachment = property(__get_attachment,None,None, ":class:`AttachmentInfo` instance to which this statement belongs.") transaction = property(__get_transaction,None,None, ":class:`TransactionInfo` instance to which this statement belongs or None.") state = property(__get_state,None,None,"Statement state (idle/active).") timestamp = property(__get_timestamp,None,None,"Statement start date/time.") sql_text = 
property(__get_sql_text,None,None,"Statement text, if appropriate.") callstack = property(__get_callstack,None,None, "List with call stack for statement.\nItems are :class:`CallStackInfo` objects.") iostats = property(__get_iostats,None,None,":class:`IOStatsInfo` for this object.") # FB 3.0 plan = property(__get_plan,None,None,"Explained execution plan.") tablestats = property(__get_tablestats,None,None, "Dictionary of :class:`TableStatsInfo` instances for this object.") #--- Public def isactive(self): "Returns True if statement is active." return self.state == STATE_ACTIVE def isidle(self): "Returns True if statement is idle." return self.state == STATE_IDLE def terminate(self): """Terminates execution of statement. :raises ProgrammingError: If this statement belongs to the current session. """ if self.attachment == self.monitor.this_attachment: raise fdb.ProgrammingError("Can't terminate statement from current session.") else: self.monitor._ic.execute('delete from mon$statements where mon$statement_id = ?', (self.id,)) class CallStackInfo(BaseInfoItem): "Information about PSQL call (stack frame)."
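StatementInfo's `__get_callstack` above rebuilds an ordered call chain from the flat mon$call_stack rows: it starts from the frame whose MON$CALLER_ID is null (the root), then repeatedly appends the frame that names the current one as its caller. A standalone sketch of the same walk over plain dicts (the `build_call_chain` helper and the `id`/`caller_id` keys are illustrative, not the fdb API):

```python
# Standalone sketch of the call-chain walk in StatementInfo.__get_callstack:
# begin with the root frame (caller_id is None), then follow the chain by
# finding the frame that lists the current frame as its caller.
def build_call_chain(frames):  # frames: list of dicts (hypothetical shape)
    chain = [f for f in frames if f['caller_id'] is None]
    if chain:
        item = chain[0]
        while item is not None:
            caller_id = item['id']
            item = None
            for f in frames:
                if f['caller_id'] == caller_id:
                    chain.append(f)    # next frame down the stack
                    item = f
                    break              # follow only one callee per frame
    return chain

frames = [
    {'id': 2, 'caller_id': 1},     # called from frame 1
    {'id': 1, 'caller_id': None},  # root frame
    {'id': 3, 'caller_id': 2},     # deepest call
]
print([f['id'] for f in build_call_chain(frames)])  # [1, 2, 3]
```

Like the original, this follows only the first matching callee at each level, which is sufficient because a single statement's active call stack forms one linear chain at any instant.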
def __init__(self,monitor,attributes): super(CallStackInfo,self).__init__(monitor,attributes) self._strip_attribute('MON$OBJECT_NAME') self._strip_attribute('MON$PACKAGE_NAME') #--- Protected def __get_id(self): return self._attributes['MON$CALL_ID'] def __get_statement(self): return self.monitor.get_statement(self._attributes['MON$STATEMENT_ID']) def __get_caller(self): return self.monitor.get_call(self._attributes['MON$CALLER_ID']) def __get_dbobject(self): obj_name = self._attributes['MON$OBJECT_NAME'] obj_type = self._attributes['MON$OBJECT_TYPE'] if obj_type == 5: # procedure return self.monitor._con.schema.get_procedure(obj_name) elif obj_type == 2: # trigger return self.monitor._con.schema.get_trigger(obj_name) else: raise fdb.ProgrammingError("Unrecognized object type '%d'" % obj_type) def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def __get_line(self): return self._attributes['MON$SOURCE_LINE'] def __get_column(self): return self._attributes['MON$SOURCE_COLUMN'] def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_CALL): return io return None def __get_package_name(self): return self._attributes.get('MON$PACKAGE_NAME') #--- properties id = property(__get_id,None,None,"Call ID.") statement = property(__get_statement,None,None, "Top-level :class:`StatementInfo` instance to which this call stack entry belongs.") caller = property(__get_caller,None,None, "Call stack entry (:class:`CallStackInfo`) of the caller.") dbobject = property(__get_dbobject,None,None, "PSQL object. 
:class:`~fdb.schema.Procedure` or :class:`~fdb.schema.Trigger` instance.") timestamp = property(__get_timestamp,None,None,"Request start date/time.") line = property(__get_line,None,None,"SQL source line number.") column = property(__get_column,None,None,"SQL source column number.") iostats = property(__get_iostats,None,None,":class:`IOStatsInfo` for this object.") # FB 3.0 package_name = property(__get_package_name,None,None,"Package name.") #--- Public class IOStatsInfo(BaseInfoItem): "Information about page and row level I/O operations, and about memory consumption." def __init__(self,monitor,attributes): super(IOStatsInfo,self).__init__(monitor,attributes) #--- Protected def __get_owner(self): def find(seq): for x in seq: if x.stat_id == self.stat_id: return x return None obj_type = self.group if obj_type == STAT_DATABASE: return self.monitor.db elif obj_type == STAT_ATTACHMENT: return find(self.monitor.attachments) elif obj_type == STAT_TRANSACTION: return find(self.monitor.transactions) elif obj_type == STAT_STATEMENT: return find(self.monitor.statements) elif obj_type == STAT_CALL: return find(self.monitor.callstack) else: raise fdb.ProgrammingError("Unrecognized stat group '%d'" % obj_type) def __get_group(self): return self._attributes['MON$STAT_GROUP'] def __get_reads(self): return self._attributes['MON$PAGE_READS'] def __get_writes(self): return self._attributes['MON$PAGE_WRITES'] def __get_fetches(self): return self._attributes['MON$PAGE_FETCHES'] def __get_marks(self): return self._attributes['MON$PAGE_MARKS'] def __get_seq_reads(self): return self._attributes['MON$RECORD_SEQ_READS'] def __get_idx_reads(self): return self._attributes['MON$RECORD_IDX_READS'] def __get_inserts(self): return self._attributes['MON$RECORD_INSERTS'] def __get_updates(self): return self._attributes['MON$RECORD_UPDATES'] def __get_deletes(self): return self._attributes['MON$RECORD_DELETES'] def __get_backouts(self): return self._attributes['MON$RECORD_BACKOUTS'] def 
__get_purges(self): return self._attributes['MON$RECORD_PURGES'] def __get_expunges(self): return self._attributes['MON$RECORD_EXPUNGES'] def __get_memory_used(self): return self._attributes.get('MON$MEMORY_USED') def __get_memory_allocated(self): return self._attributes.get('MON$MEMORY_ALLOCATED') def __get_max_memory_used(self): return self._attributes.get('MON$MAX_MEMORY_USED') def __get_max_memory_allocated(self): return self._attributes.get('MON$MAX_MEMORY_ALLOCATED') def __get_locks(self): return self._attributes.get('MON$RECORD_LOCKS') def __get_waits(self): return self._attributes.get('MON$RECORD_WAITS') def __get_conflits(self): return self._attributes.get('MON$RECORD_CONFLICTS') def __get_backversion_reads(self): return self._attributes.get('MON$BACKVERSION_READS') def __get_fragment_reads(self): return self._attributes.get('MON$FRAGMENT_READS') def __get_repeated_reads(self): return self._attributes.get('MON$RECORD_RPT_READS') #--- properties owner = property(__get_owner,None,None, """Object that owns this IOStats instance. 
Could be either :class:`DatabaseInfo`, :class:`AttachmentInfo`, :class:`TransactionInfo`, :class:`StatementInfo` or :class:`CallStackInfo` instance.""") group = property(__get_group,None,None,"Object group code.") reads = property(__get_reads,None,None,"Number of page reads.") writes = property(__get_writes,None,None,"Number of page writes.") fetches = property(__get_fetches,None,None,"Number of page fetches.") marks = property(__get_marks,None,None,"Number of pages with changes pending.") seq_reads = property(__get_seq_reads,None,None,"Number of records read sequentially.") idx_reads = property(__get_idx_reads,None,None,"Number of records read via an index.") inserts = property(__get_inserts,None,None,"Number of inserted records.") updates = property(__get_updates,None,None,"Number of updated records.") deletes = property(__get_deletes,None,None,"Number of deleted records.") backouts = property(__get_backouts,None,None, "Number of records where a new primary record version or a change to " \ "an existing primary record version is backed out due to rollback or " \ "savepoint undo.") purges = property(__get_purges,None,None, "Number of records where record version chain is being purged of " \ "versions no longer needed by OAT or younger transactions.") expunges = property(__get_expunges,None,None, "Number of records where record version chain is being deleted due to " \ "deletions by transactions older than OAT.") memory_used = property(__get_memory_used,None,None,"Number of bytes currently in use.") memory_allocated = property(__get_memory_allocated,None,None, "Number of bytes currently allocated at the OS level.") max_memory_used = property(__get_max_memory_used,None,None, "Maximum number of bytes used by this object.") max_memory_allocated = property(__get_max_memory_allocated,None,None, "Maximum number of bytes allocated from the operating system by this object.") # FB 3.0 locks = property(__get_locks,None,None,"Number of record locks.") waits = 
property(__get_waits,None,None,"Number of record waits.") conflits = property(__get_conflits,None,None,"Number of record conflits.") backversion_reads = property(__get_backversion_reads,None,None, "Number of record backversion reads.") fragment_reads = property(__get_fragment_reads,None,None,"Number of record fragment reads.") repeated_reads = property(__get_repeated_reads,None,None,"Number of repeated record reads.") #--- Public class TableStatsInfo(BaseInfoItem): "Information about row level I/O operations on single table." def __init__(self,monitor,attributes): super(TableStatsInfo,self).__init__(monitor,attributes) self._strip_attribute('MON$TABLE_NAME') #--- Protected def __get_owner(self): def find(seq): for x in seq: if x.stat_id == self.stat_id: return x return None obj_type = self.group if obj_type == STAT_DATABASE: return self.monitor.db elif obj_type == STAT_ATTACHMENT: return find(self.monitor.attachments) elif obj_type == STAT_TRANSACTION: return find(self.monitor.transactions) elif obj_type == STAT_STATEMENT: return find(self.monitor.statements) elif obj_type == STAT_CALL: return find(self.monitor.callstack) else: raise fdb.ProgrammingError("Unrecognized table stat group '%d'" % obj_type) def __get_row_stat_id(self): return self._attributes['MON$RECORD_STAT_ID'] def __get_table_name(self): return self._attributes['MON$TABLE_NAME'] def __get_group(self): return self._attributes['MON$STAT_GROUP'] def __get_seq_reads(self): return self._attributes['MON$RECORD_SEQ_READS'] def __get_idx_reads(self): return self._attributes['MON$RECORD_IDX_READS'] def __get_inserts(self): return self._attributes['MON$RECORD_INSERTS'] def __get_updates(self): return self._attributes['MON$RECORD_UPDATES'] def __get_deletes(self): return self._attributes['MON$RECORD_DELETES'] def __get_backouts(self): return self._attributes['MON$RECORD_BACKOUTS'] def __get_purges(self): return self._attributes['MON$RECORD_PURGES'] def __get_expunges(self): return 
self._attributes['MON$RECORD_EXPUNGES'] def __get_locks(self): return self._attributes['MON$RECORD_LOCKS'] def __get_waits(self): return self._attributes['MON$RECORD_WAITS'] def __get_conflits(self): return self._attributes['MON$RECORD_CONFLICTS'] def __get_backversion_reads(self): return self._attributes['MON$BACKVERSION_READS'] def __get_fragment_reads(self): return self._attributes['MON$FRAGMENT_READS'] def __get_repeated_reads(self): return self._attributes['MON$RECORD_RPT_READS'] #--- properties owner = property(__get_owner,None,None, """Object that owns this TableStats instance. Could be either :class:`DatabaseInfo`, :class:`AttachmentInfo`, :class:`TransactionInfo`, :class:`StatementInfo` or :class:`CallStackInfo` instance.""") row_stat_id = property(__get_row_stat_id,None,None,"Internal ID.") table_name = property(__get_table_name,None,None,"Table name.") group = property(__get_group,None,None,"Object group code.") seq_reads = property(__get_seq_reads,None,None,"Number of records read sequentially.") idx_reads = property(__get_idx_reads,None,None,"Number of records read via an index.") inserts = property(__get_inserts,None,None,"Number of inserted records.") updates = property(__get_updates,None,None,"Number of updated records.") deletes = property(__get_deletes,None,None,"Number of deleted records.") backouts = property(__get_backouts,None,None, "Number of records where a new primary record version or a change to " \ "an existing primary record version is backed out due to rollback or " \ "savepoint undo.") purges = property(__get_purges,None,None, "Number of records where record version chain is being purged of " \ "versions no longer needed by OAT or younger transactions.") expunges = property(__get_expunges,None,None, "Number of records where record version chain is being deleted due to " \ "deletions by transactions older than OAT.") locks = property(__get_locks,None,None,"Number of record locks.") waits = property(__get_waits,None,None,"Number of 
record waits.") conflits = property(__get_conflits,None,None,"Number of record conflits.") backversion_reads = property(__get_backversion_reads,None,None, "Number of record backversion reads.") fragment_reads = property(__get_fragment_reads,None,None,"Number of record fragment reads.") repeated_reads = property(__get_repeated_reads,None,None,"Number of repeated record reads.") #--- Public class ContextVariableInfo(BaseInfoItem): "Information about context variable." def __init__(self,monitor,attributes): super(ContextVariableInfo,self).__init__(monitor,attributes) self._strip_attribute('MON$VARIABLE_NAME') self._strip_attribute('MON$VARIABLE_VALUE') #--- Protected def __get_attachment(self): return self.monitor.get_attachment(self._attributes['MON$ATTACHMENT_ID']) def __get_transaction(self): tr = self._attributes['MON$TRANSACTION_ID'] return None if tr is None else self.monitor.get_transaction(tr) def __get_name(self): return self._attributes['MON$VARIABLE_NAME'] def __get_value(self): return self._attributes['MON$VARIABLE_VALUE'] #--- properties attachment = property(__get_attachment,None,None, ":class:`AttachmentInfo` instance to which this context variable belongs or None.") transaction = property(__get_transaction,None,None, ":class:`TransactionInfo` instance to which this context variable belongs or None.") name = property(__get_name,None,None,"Context variable name.") value = property(__get_value,None,None,"Value of context variable.") #--- Public def isattachmentvar(self): "Returns True if variable is associated to attachment context." return self._attributes['MON$ATTACHMENT_ID'] is not None def istransactionvar(self): "Returns True if variable is associated to transaction context." return self._attributes['MON$TRANSACTION_ID'] is not None fdb-1.6.1/PKG-INFO0000644000175000001440000000117113017556716014022 0ustar pcisarusers00000000000000Metadata-Version: 1.1 Name: fdb Version: 1.6.1 Summary: Firebird RDBMS bindings for Python. 
Home-page: http://www.firebirdsql.org/en/devel-python-driver/ Author: Pavel Cisar Author-email: pcisar@users.sourceforge.net License: BSD Description: fdb package is a set of Firebird RDBMS bindings for python. It works on Python 2.6+ and Python 3.x. Keywords: Firebird Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: License :: OSI Approved :: BSD License Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Topic :: Database fdb-1.6.1/README0000644000175000001440000000052312570610716013577 0ustar pcisarusers00000000000000FDB package is a set of Firebird RDBMS bindings for Python. It works on Python 2.6+ and Python 3.x. FDB is replacement for discontinued KInterbasDB library, and as such it's designed to be as much compatible with KInterbasDB as possible, but there are some differences. See FDB documentation for full description of these differences. fdb-1.6.1/setup.cfg0000644000175000001440000000016013017556716014543 0ustar pcisarusers00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [build_sphinx] source-dir = sphinx all-files = True fdb-1.6.1/setup.py0000755000175000001440000000227212573516500014436 0ustar pcisarusers00000000000000#!/usr/bin/env python """fdb package is a set of Firebird RDBMS bindings for python. It works on Python 2.6+ and Python 3.x. 
""" from setuptools import setup, find_packages from fdb import __version__ classifiers = [ 'Development Status :: 5 - Production/Stable', 'License :: OSI Approved :: BSD License', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Topic :: Database', ] setup(name='fdb', version=__version__, description = 'Firebird RDBMS bindings for Python.', url='http://www.firebirdsql.org/en/devel-python-driver/', classifiers=classifiers, keywords=['Firebird'], license='BSD', author='Pavel Cisar', author_email='pcisar@users.sourceforge.net', long_description=__doc__, install_requires=[], setup_requires=[], packages=find_packages(exclude=['ez_setup']), test_suite='nose.collector', #include_package_data=True, package_data={'': ['*.txt'], 'test':'fbtest.fdb'}, #message_extractors={'fdb': [ #('**.py', 'python', None), #('public/**', 'ignore', None)]}, zip_safe=False, entry_points=""" """, ) fdb-1.6.1/LICENSE.TXT0000644000175000001440000000452512431143160014376 0ustar pcisarusers00000000000000======= LICENSE ======= The following contributors hold Copyright (c) over their respective portions of code and documentation: The main portion of initial code (~95%); Current maintainer: Pavel Cisar Contributors: [Initial trace API & nbackup support; Python 3 support] Philippe Makowski Some code in inital version is Python code from KInterbasDB 3.3.0. As it's very hard to find out who exactly was the original author of used code, here is the full list of KInterbasDB contributors: [Author of original version; maintained through version 2.0:] 1998-2001 [alex] Alexander Kuznetsov [Author of ~90% of current code, most of current documentation; maintained through version 3.3:] 2002-2007 [dsr] David S. Rushby [Finishing touch to v3.3; New Documentation; Current maintainer:] 2008-2011 [paci] Pavel Cisar [Significant Contributors:] 2001-2002 [maz] Marek Isalski Marek made important first steps in removing the limitations of version 2.0 in preparation for version 3.0. 
2001 [eac] Evgeny A. Cherkashin Evgeny wrote the first version of the distutils build script, which was included in a 2.x point release. 2001-2002 [janez] Janez Jere Janez contributed several bugfixes, including fixes for the date and time parameter conversion code in preparation for version 3.0. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee or royalty is hereby granted, provided that the above copyright notice appears in all copies and that both the copyright notice and this permission notice appear in supporting documentation or portions thereof, including modifications, that you make. The authors disclaim all warranties with regard to this software, including all implied warranties of merchantability and fitness. In no event shall any author be liable for any special, indirect or consequential damages or any damages whatsoever resulting from loss of use, data or profits, whether in an action of contract, negligence or other tortious action, arising out of or in connection with the use or performance of this software. fdb-1.6.1/sphinx/0000755000175000001440000000000013017556716014236 5ustar pcisarusers00000000000000fdb-1.6.1/sphinx/index.txt0000644000175000001440000000304413017532323016073 0ustar pcisarusers00000000000000.. FDB documentation master file, created by sphinx-quickstart on Wed Jan 7 12:29:48 2009. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. ############################### Welcome to FDB's documentation! ############################### FDB is a `Python `__ library package that implements `Python Database API 2.0`-compliant support for the open source relational database `Firebird® `__. In addition to the minimal feature set of the standard Python DB API, FDB also exposes nearly the entire native client API of the database engine. 
FDB is free -- covered by a permissive BSD-style `license `__ that both commercial and noncommercial users should find agreeable. This documentation set is not a tutorial on Python, SQL, or Firebird; rather, it is a topical presentation of FDB's feature set, with example code to demonstrate basic usage patterns. For detailed information about Firebird features, see the `Firebird documentation `__, and especially the excellent `The Firebird Book `__ written by Helen Borrie and published by APress. Documentation Contents: *********************** .. toctree:: :maxdepth: 2 getting-started usage-guide python-db-api-compliance differences-from-kdb reference changelog license Indices and tables ****************** * :ref:`genindex` * :ref:`modindex` * :ref:`search` fdb-1.6.1/sphinx/getting-started.txt0000644000175000001440000002162713017532446020106 0ustar pcisarusers00000000000000 ######################## Getting Started with FDB ######################## Installation ************ FDB is written as pure-Python module on top of Firebird client library (fbclient.so/dll) using ctypes_, so *make sure you have Firebird client properly installed before you try to install FDB*, otherwise the installation will fail. FDB supports Firebird version 2.0 and higher. FDB is distributed as `setuptools`_ package, so you'll need setuptools or `compatible package `_ installed to install FDB properly. Installation from PYPI_ ======================= Run easy_install or pip:: $ pip install fdb or:: $ easy_install fdb Installation from source ======================== Download the source tarball, uncompress it, then run the install command:: $ curl -O http://pypi.python.org/packages/source/f/fdb/fdb-1.2.tar.gz $ tar -xzvf fdb-1.2.tar.gz $ cd fdb-1.2 $ python setup.py install .. _setuptools: https://bitbucket.org/pypa/setuptools .. _PYPI: http://pypi.python.org .. 
_ctypes: http://docs.python.org/library/ctypes.html Quick-start Guide ***************** This brief tutorial aims to get the reader started by demonstrating elementary usage of FDB. It is not a comprehensive Python Database API tutorial, nor is it comprehensive in its coverage of anything else. The numerous advanced features of FDB are covered in another section of this documentation, which is not in a tutorial format, though it is replete with examples. Connecting to a Database ======================== **Example 1** A database connection is typically established with code such as this: .. sourcecode:: python import fdb # The server is named 'bison'; the database file is at '/temp/test.db'. con = fdb.connect(dsn='bison:/temp/test.db', user='sysdba', password='pass') # Or, equivalently: con = fdb.connect( host='bison', database='/temp/test.db', user='sysdba', password='pass' ) **Example 2** Suppose we want to connect to the database in SQL Dialect 1 and specifying UTF-8 as the character set of the connection: .. sourcecode:: python import fdb con = fdb.connect( dsn='bison:/temp/test.db', user='sysdba', password='pass', dialect=1, # necessary for all dialect 1 databases charset='UTF8' # specify a character set for the connection ) Executing SQL Statements ======================== For this section, suppose we have a table defined and populated by the following SQL code: .. sourcecode:: sql create table languages ( name varchar(20), year_released integer ); insert into languages (name, year_released) values ('C', 1972); insert into languages (name, year_released) values ('Python', 1991); **Example 1** This example shows the *simplest* way to print the entire contents of the `languages` table: .. 
sourcecode:: python import fdb con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') # Create a Cursor object that operates in the context of Connection con: cur = con.cursor() # Execute the SELECT statement: cur.execute("select * from languages order by year_released") # Retrieve all rows as a sequence and print that sequence: print cur.fetchall() Sample output: .. sourcecode:: python [('C', 1972), ('Python', 1991)] **Example 2** Here's another trivial example that demonstrates various ways of fetching a single row at a time from a `SELECT`-cursor: .. sourcecode:: python import fdb con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') cur = con.cursor() SELECT = "select name, year_released from languages order by year_released" # 1. Iterate over the rows available from the cursor, unpacking the # resulting sequences to yield their elements (name, year_released): cur.execute(SELECT) for (name, year_released) in cur: print '%s has been publicly available since %d.' % (name, year_released) # 2. Equivalently: cur.execute(SELECT) for row in cur: print '%s has been publicly available since %d.' % (row[0], row[1]) # 3. Using mapping-iteration rather than sequence-iteration: cur.execute(SELECT) for row in cur.itermap(): print '%(name)s has been publicly available since %(year_released)d.' % row Sample output: .. sourcecode:: python C has been publicly available since 1972. Python has been publicly available since 1991. C has been publicly available since 1972. Python has been publicly available since 1991. C has been publicly available since 1972. Python has been publicly available since 1991. **Example 3** The following program is a simplistic table printer (applied in this example to `languages`): .. 
sourcecode:: python import fdb TABLE_NAME = 'languages' SELECT = 'select * from %s order by year_released' % TABLE_NAME con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') cur = con.cursor() cur.execute(SELECT) # Print a header. for fieldDesc in cur.description: print fieldDesc[fdb.DESCRIPTION_NAME].ljust(fieldDesc[fdb.DESCRIPTION_DISPLAY_SIZE]) , print # Finish the header with a newline. print '-' * 78 # For each row, print the value of each field left-justified within # the maximum possible width of that field. fieldIndices = range(len(cur.description)) for row in cur: for fieldIndex in fieldIndices: fieldValue = str(row[fieldIndex]) fieldMaxWidth = cur.description[fieldIndex][fdb.DESCRIPTION_DISPLAY_SIZE] print fieldValue.ljust(fieldMaxWidth) , print # Finish the row with a newline. Sample output: .. sourcecode:: python NAME YEAR_RELEASED ------------------------------------------------------------------------------ C 1972 Python 1991 **Example 4** Let's insert more languages: .. sourcecode:: python import fdb con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') cur = con.cursor() newLanguages = [ ('Lisp', 1958), ('Dylan', 1995), ] cur.executemany("insert into languages (name, year_released) values (?, ?)", newLanguages ) # The changes will not be saved unless the transaction is committed explicitly: con.commit() Note the use of a *parameterized* SQL statement above. When dealing with repetitive statements, this is much faster and less error-prone than assembling each SQL statement manually. (You can read more about parameterized SQL statements in the section on :ref:`Prepared Statements `.) After running Example 4, the table printer from Example 3 would print: .. 
sourcecode:: python NAME YEAR_RELEASED ------------------------------------------------------------------------------ Lisp 1958 C 1972 Python 1991 Dylan 1995 Calling Stored Procedures ========================= Firebird supports stored procedures written in a proprietary procedural SQL language. Firebird stored procedures can have *input* parameters and/or *output* parameters. Some databases support *input/output* parameters, where the same parameter is used for both input and output; Firebird does not support this. It is important to distinguish between procedures that *return a result set* and procedures that *populate and return their output parameters exactly once*. Conceptually, the latter "return their output parameters" like a Python function, whereas the former "yield result rows" like a Python generator. Firebird's *server-side* procedural SQL syntax makes no such distinction, but *client-side* SQL code (and C API code) must. A result set is retrieved from a stored procedure by `SELECT`ing from the procedure, whereas output parameters are retrieved with an `EXECUTE PROCEDURE` statement. To *retrieve a result set* from a stored procedure with FDB, use code such as this: .. sourcecode:: python cur.execute("select output1, output2 from the_proc(?, ?)", (input1, input2)) # Ordinary fetch code here, such as: for row in cur: ... # process row con.commit() # If the procedure had any side effects, commit them. To *execute* a stored procedure and *access its output parameters*, use code such as this: .. sourcecode:: python cur.callproc("the_proc", (input1, input2)) # If there are output parameters, retrieve them as though they were the # first row of a result set. For example: outputParams = cur.fetchone() con.commit() # If the procedure had any side effects, commit them. This latter is not very elegant; it would be preferable to access the procedure's output parameters as the return value of `Cursor.callproc()`. 
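Since the output parameters arrive as though they were the first row of a result set, the fetch-and-commit boilerplate can be wrapped in a small helper. This is a sketch only — `call_and_fetch_outputs` is a hypothetical convenience function, not part of FDB; it relies only on standard DB API `callproc()`/`fetchone()`/`commit()` behavior:

```python
def call_and_fetch_outputs(con, proc_name, inputs=()):
    """Execute a stored procedure via callproc() and return its output
    parameters as a single row tuple (or None if nothing was returned)."""
    cur = con.cursor()
    cur.callproc(proc_name, inputs)
    # Output parameters are retrieved as the first (and only) result row.
    outputs = cur.fetchone()
    con.commit()  # commit any side effects of the procedure
    return outputs
```

With FDB this would be called as `call_and_fetch_outputs(con, "the_proc", (input1, input2))`, mirroring the snippet above.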
The Python DB API specification requires the current behavior, however.
fdb-1.6.1/sphinx/usage-guide.txt0000644000175000001440000044145713017532262017203 0ustar pcisarusers00000000000000
===========
Usage Guide
===========

.. currentModule:: fdb

Driver structure
================

Source code is currently divided into the following submodules:

* :mod:`~fdb.ibase` - Python :ref:`ctypes ` interface to Firebird client library.
* :mod:`~fdb.fbcore` - Main driver source code.
* :mod:`~fdb.services` - Driver code to work with Firebird Services.
* :mod:`~fdb.schema` - Driver code to work with Firebird database schema (metadata).
* :mod:`~fdb.utils` - Various classes and functions used by driver that are generally useful.

All important data, functions, classes and constants are available directly in the fdb
namespace, so there is no need to import or use the :mod:`~fdb.fbcore` and :mod:`~fdb.ibase`
submodules directly. The exception is the :mod:`fdb.services` submodule, which contains
functions and classes for work with Firebird Services. Because Services are an optional and
less frequently used Firebird facility, all service-related code was isolated in a separate
submodule rather than exposed directly through the main module namespace. Because the
:mod:`~fdb.services` submodule contains names also used by the main driver
(:func:`~fdb.services.connect`, :class:`~fdb.services.Connection`), it's advised to use
fully qualified names when referring to them instead of importing them via
`from fdb.services import ...`.

.. index:: Database

Databases
=========

Access to the database is made available through :class:`Connection` objects. FDB provides
two constructors for these:

* :func:`connect` - Returns `Connection` to database that already exists.
* :func:`create_database` - Returns `Connection` to newly created database.

..
index:: pair: Database; connect

Using `connect`
---------------

This constructor has a number of keyword parameters that can be divided into several groups:

* Database specification (parameters `dsn`, `host`, `database` and `port`)
* User specification (parameters `user`, `password` and `role`)
* Connection options (parameters `sql_dialect`, `charset`, `isolation_level`, `buffers`,
  `force_writes`, `no_reserve` and `db_key_scope`)

To establish a connection to the database, you must always specify the database, either as
`connection string` parameter `dsn`, or as the required combination of parameters `host`,
`database` and `port`.

.. important::

   The current driver version ignores the value of the `port` parameter. If you need to
   specify the port number, you have to use the `dsn` parameter instead.

Although specification of the `user` and `password` parameters is optional (if environment
variables `ISC_USER` and `ISC_PASSWORD` are set, their values are used if these parameters
are omitted), it's recommended practice to use them. Parameter `role` is needed only when
you use Firebird roles. Connection options are optional (see `Firebird Documentation`_ for
details). However you may often want to specify `charset`, as it directs automatic
conversions of string data between client and server, and automatic conversions from/to
unicode performed by FDB driver (see `Data handling and conversions`_ for details).

**Examples:**

..
code-block:: python # Connecting via 'dsn' # # Local database (local protocol, if supported) con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass') # Local database (TCP/IP) con = fdb.connect(dsn='localhost:/path/database.fdb', user='sysdba', password='pass') # Local database (TCP/IP with port specification) con = fdb.connect(dsn='localhost/3050:/path/database.fdb', user='sysdba', password='pass') # Remote database con = fdb.connect(dsn='host:/path/database.db', user='sysdba', password='pass') # Remote database with port specification con = fdb.connect(dsn='host/3050:/path/database.db', user='sysdba', password='pass') # # Connecting via 'database', 'host' and 'port' # # Local database (local protocol, if supported) con = fdb.connect(database='/path/database.db', user='sysdba', password='pass') # Local database (TCP/IP) con = fdb.connect(host='localhost', database='/path/database.db', user='sysdba', password='pass') # Local database (TCP/IP with port specification) con = fdb.connect(host='localhost', port=3050, database='/path/database.db', user='sysdba', password='pass') # Remote database con = fdb.connect(host='myhost', database='/path/database.db', user='sysdba', password='pass') Since version 1.2 FDB supports additional `Connection` class(es) that extend :class:`Connection` functionality in optional (opt-in) way. For example :class:`ConnectionWithSchema` extends Connection interface with methods and attributes provided by :class:`~fdb.schema.Schema`. New `connection_class` parameter was introduced to `connect` and `create_database` to connect to/create database using different class than descends from `Connection`. **Example:** .. code-block:: python # Connecting through ConnectionWithSchema # con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass', connection_class=fdb.ConnectionWithSchema) .. 
index:: pair: Database; create Using `create_database` ----------------------- The Firebird engine supports dynamic database creation via the SQL statement `CREATE DATABASE`. FDB wraps it into :func:`create_database`, that returns `Connection` instance attached to newly created database. **Example:** .. code-block:: python con = fdb.create_database("create database 'host:/temp/db.db' user 'sysdba' password 'pass'") .. note:: Since version 1.2 FDB supports additional method for database creation. Instead `CREATE DATABASE` SQL statement you can use number of optional keyword parameters introduced to :func:`create_database`. **Example:** .. code-block:: python con = fdb.create_database(dsn='/temp/db.fdb',user='sysdba',password='pass',page_size=8192) .. index:: pair: Database; delete Deleting databases ------------------ The Firebird engine also supports dropping (deleting) databases dynamically, but dropping is a more complicated operation than creating, for several reasons: an existing database may be in use by users other than the one who requests the deletion, it may have supporting objects such as temporary sort files, and it may even have dependent shadow databases. Although the database engine recognizes a `DROP DATABASE` SQL statement, support for that statement is limited to the `isql` command-line administration utility. However, the engine supports the deletion of databases via an API call, which FDB exposes as :meth:`~Connection.drop_database` method in :class:`Connection` class. So, to drop a database you need to connect to it first. **Examples:** .. code-block:: python import fdb con = fdb.create_database("create database '/temp/db.db' user 'sysdba' password 'pass'") con.drop_database() con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass') con.drop_database() .. 
index::
   pair: Connection; usage

Connection object
-----------------

:class:`Connection` object represents a direct link to a database, and serves
as a gateway for further operations with it:

* `Executing SQL Statements`_: methods :meth:`~Connection.execute_immediate`
  and :meth:`~Connection.cursor`.
* Dropping database: method :meth:`~Connection.drop_database`.
* `Transaction management`_: methods :meth:`~Connection.begin`,
  :meth:`~Connection.commit`, :meth:`~Connection.rollback`,
  :meth:`~Connection.savepoint`, :meth:`~Connection.trans`,
  :meth:`~Connection.trans_info` and :meth:`~Connection.transaction_info`, and
  attributes :attr:`~Connection.main_transaction`,
  :attr:`~Connection.transactions`, :attr:`~Connection.default_tpb` and
  :attr:`~Connection.group`.
* Work with `Database Events`_: method :meth:`~Connection.event_conduit`.
* `Getting information about Firebird version`_: attributes
  :attr:`~Connection.server_version`, :attr:`~Connection.firebird_version`,
  :attr:`~Connection.version` and :attr:`~Connection.engine_version`.
* `Getting information about database`_: methods :meth:`~Connection.db_info`
  and :meth:`~Connection.database_info`.
* :ref:`Getting information about database metadata `: attributes
  :attr:`~Connection.schema` and :attr:`~Connection.ods`.

.. index::
   pair: Firebird; information about

Getting information about Firebird version
------------------------------------------

Because functionality and some features depend on the actual Firebird
version, it can be important for FDB users to check it. This (otherwise)
simple task can be confusing for new Firebird users, because Firebird uses
two different version lineages. This abomination was introduced to Firebird
by its InterBase legacy (Firebird 1.0 is a fork of InterBase 6.0), as
applications designed to work with InterBase can often work with Firebird
without problems (and vice versa).
However, legacy applications designed to work with InterBase may stop working
properly if they detect an unexpectedly low server version, so the default
version number returned by Firebird (and FDB) is based on the InterBase
version number. For example, this version for Firebird 2.5.2 is 6.3.2, so the
condition for legacy applications that require at least IB 6.0 is met.

FDB provides these version strings as two `Connection` properties:

- :attr:`~Connection.server_version` - Legacy InterBase-friendly version
  string.
- :attr:`~Connection.firebird_version` - Firebird's own version string.

However, this version string contains more information than the version
number alone. For example, for Linux Firebird 2.5.2 it's
'LI-V2.5.2.26540 Firebird 2.5'. So FDB provides two more properties for your
convenience:

- :attr:`~Connection.version` - Only the Firebird version number. It's a
  string with format: major.minor.subrelease.build
- :attr:`~Connection.engine_version` - Engine (major.minor) version as a
  (float) number.

FDB also provides convenient constants for supported engine versions:
`ODS_FB_20`, `ODS_FB_21` and `ODS_FB_25`.

Database On-Disk Structure
--------------------------

Particular Firebird features may also depend on specific support in the
database (for example, monitoring tables introduced in Firebird 2.1). These
required structures are present automatically when the database is created by
the particular engine version that needs them, but the Firebird engine can
typically work with databases created by older versions (and thus with older
structures), so it may be necessary to consult the On-Disk Structure (ODS for
short) version as well. FDB provides this number as the
:attr:`Connection.ods` (float) property.

**Example**:

..
code-block:: python

   con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')
   print 'Firebird version:', con.version
   print 'ODS version:', con.ods

::

   Firebird version: 2.5.2.26540
   ODS version: 11.1

In the example above, although the connected Firebird engine is version 2.5,
the connected database has ODS 11.1, which came with Firebird 2.1, so some
Firebird 2.5 features will not be available on this database.

.. index::
   pair: Database; information about

Getting information about database
----------------------------------

Firebird provides various information about the server and the connected
database via the `database_info` API call. FDB surfaces this API through the
:meth:`~Connection.db_info` and :meth:`~Connection.database_info` methods on
the Connection object.

:meth:`Connection.database_info` is a `very thin` wrapper around the function
`isc_database_info()`. This method does not attempt to interpret its results
except with regard to whether they are a string or an integer. For example,
requesting `isc_info_user_names` with the call::

   con.database_info(fdb.isc_info_user_names, 's')

will return a binary string containing a raw succession of length-name pairs.

**Example program:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass')

   # Retrieving an integer info item is quite simple.
   bytesInUse = con.database_info(fdb.isc_info_current_memory, 'i')
   print 'The server is currently using %d bytes of memory.' % bytesInUse

   # Retrieving a string info item is somewhat more involved, because the
   # information is returned in a raw binary buffer that must be parsed
   # according to the rules defined in the InterBase® 6 API Guide section
   # entitled "Requesting buffer items and result buffer values" (page 51).
   #
   # Often, the buffer contains a succession of length-string pairs
   # (one byte telling the length of s, followed by s itself).
   # Function fdb.ibase.ord2 is provided to convert a raw
   # byte to a Python integer (see examples below).
   buf = con.database_info(fdb.isc_info_db_id, 's')
   # Parse the filename from the buffer.
   beginningOfFilename = 2
   # The second byte in the buffer contains the size of the database filename
   # in bytes.
   lengthOfFilename = fdb.ibase.ord2(buf[1])
   filename = buf[beginningOfFilename:beginningOfFilename + lengthOfFilename]
   # Parse the host name from the buffer.
   beginningOfHostName = (beginningOfFilename + lengthOfFilename) + 1
   # The first byte after the end of the database filename contains the size
   # of the host name in bytes.
   lengthOfHostName = fdb.ibase.ord2(buf[beginningOfHostName - 1])
   host = buf[beginningOfHostName:beginningOfHostName + lengthOfHostName]
   print 'We are connected to the database at %s on host %s.' % (filename, host)

Sample output::

   The server is currently using 8931328 bytes of memory.
   We are connected to the database at C:\TEMP\TEST.DB on host WEASEL.

A more convenient way to access the same functionality is via the
:meth:`~Connection.db_info` method, which is a high-level convenience wrapper
around the `database_info()` method that parses the output of
`database_info` into Python-friendly objects instead of returning raw binary
buffers in the case of complex result types. For example, requesting
`isc_info_user_names` with the call::

   con.db_info(fdb.isc_info_user_names)

returns a dictionary that maps (username -> number of open connections). If
SYSDBA has one open connection to the database to which `con` is connected,
and TEST_USER_1 has three open connections to that same database, the return
value would be::

   {'SYSDBA': 1, 'TEST_USER_1': 3}

**Example program:**

..
code-block:: python

   import fdb
   import os.path

   DB_FILENAME = '/temp/test.db'

   ###############################################################################
   # Querying an isc_info_* item that has a complex result:
   ###############################################################################
   # Establish three connections to the test database as TEST_USER_1, and one
   # connection as SYSDBA. Then use the Connection.db_info method to query the
   # number of attachments by each user to the test database.
   testUserCons = []
   for i in range(3):
       tcon = fdb.connect(dsn='localhost:' + DB_FILENAME, user='TEST_USER_1', password='pass')
       testUserCons.append(tcon)

   con = fdb.connect(dsn='localhost:' + DB_FILENAME, user='sysdba', password='pass')

   print 'Open connections to this database:'
   print con.db_info(fdb.isc_info_user_names)

   ###############################################################################
   # Querying multiple isc_info_* items at once:
   ###############################################################################
   # Request multiple db_info items at once, specifically the page size of the
   # database and the number of pages currently allocated. Compare the size
   # computed by that method with the size reported by the file system.
   # The advantages of using db_info instead of the file system to compute
   # database size are:
   #  - db_info works seamlessly on connections to remote databases that reside
   #    in file systems to which the client program lacks access.
   #  - If the database is split across multiple files, db_info includes all of
   #    them.
   res = con.db_info([fdb.isc_info_page_size, fdb.isc_info_allocation])
   pagesAllocated = res[fdb.isc_info_allocation]
   pageSize = res[fdb.isc_info_page_size]
   print '\ndb_info indicates database size is', pageSize * pagesAllocated, 'bytes'
   print 'os.path.getsize indicates size is ', os.path.getsize(DB_FILENAME), 'bytes'

Sample output::

   Open connections to this database:
   {'SYSDBA': 1, 'TEST_USER_1': 3}

   db_info indicates database size is 20684800 bytes
   os.path.getsize indicates size is  20684800 bytes

.. index::
   SQL Statement

Executing SQL Statements
========================

FDB implements two ways to execute SQL commands against a connected database:

* :meth:`~Connection.execute_immediate` - for execution of SQL commands that
  don't return any result.
* :class:`Cursor` objects that offer a rich interface for execution of SQL
  commands and fetching of their results.

.. index::
   pair: Cursor; usage

Cursor object
-------------

Because `Cursor` objects always operate in the context of a single
:class:`Connection` (and :class:`Transaction`), `Cursor` instances are not
created directly, but by a constructor method. Python DB API 2.0 assumes that
if a database engine supports transactions, it supports only one transaction
per connection, hence it defines the constructor method
:meth:`~Connection.cursor` (and other transaction-related methods) as part of
the :class:`Connection` interface. However, Firebird supports multiple
independent transactions per connection. To conform to the Python DB API, FDB
uses the concept of an internal :attr:`~Connection.main_transaction` and
secondary :attr:`~Connection.transactions`. The Cursor constructor is
primarily defined by :class:`Transaction`, and the Cursor constructor on
`Connection` is therefore a shortcut for `main_transaction.cursor()`.

`Cursor` objects are used for the following operations:

* Execution of SQL Statements: methods :meth:`~Cursor.execute`,
  :meth:`~Cursor.executemany` and :meth:`~Cursor.callproc`.
* Creating :class:`PreparedStatement` objects for efficient repeated
  execution of SQL statements, and to obtain additional information about SQL
  statements (like the execution :attr:`~PreparedStatement.plan`): method
  :meth:`~Cursor.prep`.
* `Fetching results `_: methods :meth:`~Cursor.fetchone`,
  :meth:`~Cursor.fetchmany`, :meth:`~Cursor.fetchall`,
  :meth:`~Cursor.fetchonemap`, :meth:`~Cursor.fetchmanymap`,
  :meth:`~Cursor.fetchallmap`, :meth:`~Cursor.iter`, :meth:`~Cursor.itermap`
  and :meth:`~Cursor.next`.

.. index::
   pair: SQL Statement; execution

SQL Execution Basics
--------------------

There are three methods to execute SQL commands:

* :meth:`Connection.execute_immediate` or
  :meth:`Transaction.execute_immediate` for SQL commands that don't return
  any result, and are not executed frequently. This method also **doesn't**
  support either `parametrized statements`_ or `prepared statements`_.

  .. tip::

     This method is efficient for `administrative` and `DDL`_ SQL commands,
     like `DROP`, `CREATE` or `ALTER` commands, `SET STATISTICS` etc.

* :meth:`Cursor.execute` or :meth:`Cursor.executemany` for commands that
  return result sets, i.e. a sequence of `rows` of the same structure, where
  the sequence has an unknown number of `rows` (including zero).

  .. tip::

     This method is preferred for all `SELECT` and other `DML`_ statements,
     or any statement that is executed frequently, either `as is` or in
     `parametrized` form.

* :meth:`Cursor.callproc` for execution of `Stored procedures` that always
  return exactly one set of values.

  .. note::

     This method of SP invocation is equivalent to the
     `"EXECUTE PROCEDURE ..."` SQL statement.

.. index::
   pair: SQL Statement; parametrized

.. _parametrized-statements:

Parametrized statements
-----------------------

When the SQL command you want to execute contains data `values`, you can
either:

* Embed them `directly` or via `string formatting` into the command `string`,
  e.g.:

  .. code-block:: python

     cur.execute("insert into the_table (a,b,c) values ('aardvark', 1, 0.1)")
     # or
     cur.execute("select * from the_table where col = 'aardvark'")
     # or
     cur.execute("insert into the_table (a,b,c) values ('%s', %i, %f)" % ('aardvark',1,0.1))
     # or
     cur.execute("select * from the_table where col = '%s'" % 'aardvark')

* Use a parameter marker (`?`) in the command `string` in the slots where
  values are expected, then supply those values as a Python list or tuple:

  .. code-block:: python

     cur.execute("insert into the_table (a,b,c) values (?,?,?)", ('aardvark', 1, 0.1))
     # or
     cur.execute("select * from the_table where col = ?", ('aardvark',))

While both methods have the same result, the second one (called
`parametrized`) has several important advantages:

* You don't need to handle conversions from Python data types to strings.
* FDB will handle all data type conversions (if necessary) from Python data
  types to Firebird ones, including `None/NULL` conversion and conversion
  from `unicode` to `byte strings` in the encoding expected by the server.
* You may pass BLOB values as open `file-like` objects, and FDB will handle
  the transfer of the BLOB value.
* If you pass exactly the same command `string` again to a particular
  :class:`Cursor` instance, it will be executed more efficiently (see the
  section about `Prepared Statements`_ for details).

Parametrized statements also have some limitations. Currently:

* `DATE`, `TIME` and `DATETIME` values must be relevant :mod:`datetime`
  objects.
* `NUMERIC` and `DECIMAL` values must be :mod:`decimal` objects.

.. index::
   pair: Cursor; fetching data

Fetching data from server
-------------------------

The result of SQL statement execution consists of a sequence of zero to an
unknown number of `rows`, where each `row` is a set of exactly the same
number of values.
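As a self-contained sketch of this row-stream model (no database or fdb required), consider a stand-in cursor that honors the standard DB API `fetchmany` contract; the names `FakeCursor` and `batched` below are illustrative helpers, not part of FDB:

.. code-block:: python

   def batched(cursor, size=100):
       """Yield rows one by one from any DB API cursor, fetching in fixed-size batches."""
       while True:
           rows = cursor.fetchmany(size)
           if not rows:
               break
           for row in rows:
               yield row

   class FakeCursor(object):
       """Stand-in that mimics the fetchmany contract, for illustration only."""
       def __init__(self, rows):
           self._rows = list(rows)
       def fetchmany(self, size):
           chunk, self._rows = self._rows[:size], self._rows[size:]
           return chunk

   cur = FakeCursor([('Python', 1991), ('SQL', 1974), ('C', 1972)])
   for name, year in batched(cur, size=2):
       print('%s has been publicly available since %d.' % (name, year))

The same `batched` loop would work unchanged against a real FDB Cursor, since it relies only on the `fetchmany` behavior described below.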
:class:`Cursor` objects offer a number of different methods for fetching
these `rows` that should satisfy all your specific needs:

* :meth:`~Cursor.fetchone` - Returns the next row of a query result set, or
  `None` when no more data is available.

  .. tip::

     Cursor supports the :ref:`iterator protocol `, yielding tuples of values
     like :meth:`~Cursor.fetchone`.

* :meth:`~Cursor.fetchmany` - Returns the next set of rows of a query result,
  returning a sequence of sequences (e.g. a list of tuples). An empty
  sequence is returned when no more rows are available. The number of rows to
  fetch per call is specified by the parameter. If it is not given, the
  cursor's :attr:`~Cursor.arraysize` determines the number of rows to be
  fetched. The method tries to fetch as many rows as indicated by the size
  parameter. If this is not possible because the specified number of rows is
  not available, fewer rows may be returned.

  .. note::

     The default value of :attr:`~Cursor.arraysize` is `1`, so without a
     parameter it's equivalent to :meth:`~Cursor.fetchone`, but returns a
     list of `rows` instead of the actual `row` directly.

* :meth:`~Cursor.fetchall` - Returns all (remaining) rows of a query result
  as a list of tuples, where each tuple is one row of returned values.

  .. tip::

     This method can potentially return a huge amount of data that may
     exhaust available memory. If you need just `iteration` over a
     potentially big result set, use loops with :meth:`~Cursor.fetchone`,
     Cursor's built-in support for the :ref:`iterator protocol ` or a call to
     :meth:`~Cursor.iter` instead of this method.

* :meth:`~Cursor.fetchonemap` - Returns the next row like
  :meth:`~Cursor.fetchone`, but returns a mapping of `field name` to
  `field value`, rather than a tuple.
* :meth:`~Cursor.fetchmanymap` - Returns the next set of rows of a query
  result like :meth:`~Cursor.fetchmany`, but returns a list of mappings of
  `field name` to `field value`, rather than tuples.
* :meth:`~Cursor.fetchallmap` - Returns all (remaining) rows of a query
  result like :meth:`~Cursor.fetchall`, but returns a list of mappings of
  `field name` to `field value`, rather than tuples.

  .. tip::

     This method can potentially return a huge amount of data that may
     exhaust available memory. If you need just `iteration` over a
     potentially big result set with mapping support, use
     :meth:`~Cursor.itermap` instead of this method.

* :meth:`~Cursor.iter` - Equivalent to :meth:`~Cursor.fetchall`, except that
  it returns an :ref:`iterator ` rather than a materialized list.
* :meth:`~Cursor.itermap` - Equivalent to :meth:`~Cursor.fetchallmap`, except
  that it returns an :ref:`iterator ` rather than a materialized list.
* A call to :meth:`~Cursor.execute` returns self (the Cursor instance), which
  itself supports the :ref:`iterator protocol `, yielding tuples of values
  like :meth:`~Cursor.fetchone`.

.. important::

   FDB makes absolutely no guarantees about the return value of the
   `fetchone` / `fetchmany` / `fetchall` methods except that it is a sequence
   indexed by field position. FDB makes absolutely no guarantees about the
   return value of the `fetchonemap` / `fetchmanymap` / `fetchallmap` methods
   except that it is a mapping of field name to field value.

   Therefore, client programmers should not rely on the return value being an
   instance of a particular class or type.

**Examples:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey')

   cur = con.cursor()
   SELECT = "select name, year_released from languages order by year_released"

   # 1. Using built-in support for the iteration protocol to iterate over the
   # rows available from the cursor, unpacking the resulting sequences to
   # yield their elements (name, year_released):
   cur.execute(SELECT)
   for (name, year_released) in cur:
       print '%s has been publicly available since %d.' % (name, year_released)

   # or alternatively you can take advantage of cur.execute returning self.
   for (name, year_released) in cur.execute(SELECT):
       print '%s has been publicly available since %d.' % (name, year_released)

   # 2. Equivalently using fetchall():
   # This is potentially dangerous if the result set is huge, as the whole
   # result set is first materialized as a list and then used for iteration.
   cur.execute(SELECT)
   for row in cur.fetchall():
       print '%s has been publicly available since %d.' % (row[0], row[1])

   # 3. Using mapping-iteration rather than sequence-iteration:
   cur.execute(SELECT)
   for row in cur.itermap():
       print '%(name)s has been publicly available since %(year_released)d.' % row

.. tip::

   :meth:`Cursor.execute` and :meth:`Cursor.executemany` return self, so you
   can use calls to them as iterators (see the example above).

.. index::
   pair: SQL Statement; prepared

.. _prepared-statements:

Prepared Statements
-------------------

Execution of any SQL statement has three phases:

* *Preparation*: the command is analyzed and validated, an execution plan is
  determined by the optimizer, and all necessary data structures (for example
  for input and output parameters) are initialized.
* *Execution*: input parameters (if any) are passed to the server and the
  previously prepared statement is actually executed by the database engine.
* *Fetching*: the result of execution and data (if any) are transferred from
  the server to the client, and allocated resources are then released.

The preparation phase consumes some amount of server resources (memory and
CPU). Although preparation and release of resources typically take only a
small amount of CPU time, it builds up as the number of executed statements
grows. Firebird (like most database engines) allows this time to be spared
for subsequent executions if a particular statement should be executed
repeatedly - by reusing the once-prepared statement for repeated execution.
This may save a significant amount of server processing time, and result in
better overall performance.
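The reuse idea can be sketched independently of fdb as a small cache keyed by the SQL text; `prepare` below is a hypothetical stand-in for the (relatively expensive) preparation step, not an FDB API:

.. code-block:: python

   class StatementCache(object):
       """Illustrative cache that prepares each distinct SQL string only once."""
       def __init__(self, prepare):
           self._prepare = prepare   # expensive preparation step (stand-in)
           self._cache = {}
           self.prepared_count = 0   # how many real preparations happened

       def get(self, sql):
           if sql not in self._cache:
               self._cache[sql] = self._prepare(sql)
               self.prepared_count += 1
           return self._cache[sql]

   # Pretend "preparation" just wraps the SQL; a real engine would parse and plan.
   cache = StatementCache(prepare=lambda sql: ('prepared', sql))

   for i in range(1000):
       # Prepared once on the first pass, then reused 999 times:
       stmt = cache.get("insert into t (a,b) values (?,?)")

   print(cache.prepared_count)   # -> 1

FDB's Cursor applies essentially this caching automatically when the same command string is passed to it repeatedly.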
FDB builds on this by encapsulating all statement-related code into a
separate :class:`PreparedStatement` class, and implementing the
:class:`Cursor` class as a wrapper around it.

.. warning::

   FDB's implementation of Cursor somewhat violates the Python DB API 2.0,
   which requires that a cursor be unusable after a call to `close`, and that
   an Error (or subclass) exception be raised if any operation is attempted
   with the cursor.

   If you take advantage of this `anomaly`, your code will be less portable
   to other Python DB API 2.0 compliant drivers.

Besides SQL command strings, `Cursor` also allows you to acquire and use
`PreparedStatement` instances explicitly. `PreparedStatement` instances are
acquired by calling the :meth:`~Cursor.prep` method and can then be passed to
:meth:`~Cursor.execute` or :meth:`~Cursor.executemany` instead of the
`command string`.

**Example:**

.. code-block:: python

   insertStatement = cur.prep("insert into the_table (a,b,c) values (?,?,?)")
   inputRows = [
       ('aardvark', 1, 0.1),
       ('zymurgy', 2147483647, 99999.999),
       ('foobar', 2000, 9.9)
   ]
   for row in inputRows:
       cur.execute(insertStatement, row)
   #
   # or you can use executemany
   #
   cur.executemany(insertStatement, inputRows)

Prepared statements are bound to the `Cursor` instance that created them, and
can't be used with any other `Cursor` instance. Besides repeated execution,
they are also useful for getting information about a statement (like its
output :attr:`~PreparedStatement.description`, execution
:attr:`~PreparedStatement.plan` or
:attr:`~PreparedStatement.statement_type`) before its execution.

**Example Program:**

The following program demonstrates the explicit use of `PreparedStatements`.
It also benchmarks explicit `PreparedStatement` reuse against normal
execution that prepares statements on each execution.

..
code-block:: python

   import time
   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')

   cur = con.cursor()

   # Create supporting database entities:
   cur.execute("recreate table t (a int, b varchar(50))")
   con.commit()
   cur.execute("create unique index unique_t_a on t(a)")
   con.commit()

   # Explicitly prepare the insert statement:
   psIns = cur.prep("insert into t (a,b) values (?,?)")
   print 'psIns.sql: "%s"' % psIns.sql
   print 'psIns.statement_type == fdb.isc_info_sql_stmt_insert:', (
       psIns.statement_type == fdb.isc_info_sql_stmt_insert
       )
   print 'psIns.n_input_params: %d' % psIns.n_input_params
   print 'psIns.n_output_params: %d' % psIns.n_output_params
   print 'psIns.plan: %s' % psIns.plan
   print

   N = 50000
   iStart = 0

   # The client programmer uses a PreparedStatement explicitly:
   startTime = time.time()
   for i in xrange(iStart, iStart + N):
       cur.execute(psIns, (i, str(i)))
   print (
       'With explicit prepared statement, performed'
       '\n %0.2f insertions per second.' % (N / (time.time() - startTime))
       )
   con.commit()

   iStart += N

   # A new SQL string containing the inputs is submitted every time. Also, in a
   # more complicated scenario where the end user supplied the string input
   # values, the program would risk SQL injection attacks:
   startTime = time.time()
   for i in xrange(iStart, iStart + N):
       cur.execute("insert into t (a,b) values (%d,'%s')" % (i, str(i)))
   print (
       'When unable to reuse prepared statement, performed'
       '\n %0.2f insertions per second.' % (N / (time.time() - startTime))
       )
   con.commit()

   # Prepare a SELECT statement and examine its properties. The optimizer's plan
   # should use the unique index that we created at the beginning of this program.
   print
   psSel = cur.prep("select * from t where a = ?")
   print 'psSel.sql: "%s"' % psSel.sql
   print 'psSel.statement_type == fdb.isc_info_sql_stmt_select:', (
       psSel.statement_type == fdb.isc_info_sql_stmt_select
       )
   print 'psSel.n_input_params: %d' % psSel.n_input_params
   print 'psSel.n_output_params: %d' % psSel.n_output_params
   print 'psSel.plan: %s' % psSel.plan

   # The current implementation does not allow PreparedStatements to be prepared
   # on one Cursor and executed on another:
   print
   print 'Note that PreparedStatements are not transferrable from one cursor to another:'
   cur2 = con.cursor()
   cur2.execute(psSel)

Sample output::

   psIns.sql: "insert into t (a,b) values (?,?)"
   psIns.statement_type == fdb.isc_info_sql_stmt_insert: True
   psIns.n_input_params: 2
   psIns.n_output_params: 0
   psIns.plan: None

   With explicit prepared statement, performed
    4276.00 insertions per second.
   When unable to reuse prepared statement, performed
    2037.70 insertions per second.

   psSel.sql: "select * from t where a = ?"
   psSel.statement_type == fdb.isc_info_sql_stmt_select: True
   psSel.n_input_params: 1
   psSel.n_output_params: 2
   psSel.plan: PLAN (T INDEX (UNIQUE_T_A))

   Note that PreparedStatements are not transferrable from one cursor to another:
   Traceback (most recent call last):
     File "pstest.py", line 85, in <module>
       cur2.execute(psSel)
     File "/home/job/python/envs/pyfirebird/fdb/fdb/fbcore.py", line 2623, in execute
       raise ValueError("PreparedStatement was created by different Cursor.")
   ValueError: PreparedStatement was created by different Cursor.

As you can see, the version that prevents the reuse of prepared statements is
about two times slower *for a trivial statement*. In a real application, SQL
statements are likely to be far more complicated, so the speed advantage of
using prepared statements would only increase.

.. index::
   pair: Cursor; named

Named Cursors
-------------

To allow the Python programmer to perform scrolling `UPDATE` or `DELETE` via
the "SELECT ... FOR UPDATE" syntax, FDB provides the read/write property
:attr:`Cursor.name`.

**Example Program:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass')
   curScroll = con.cursor()
   curUpdate = con.cursor()

   curScroll.execute("select city from addresses for update")
   curScroll.name = 'city_scroller'
   update = "update addresses set city=? where current of " + curScroll.name

   for (city,) in curScroll:
       city = ... # make some changes to city
       curUpdate.execute(update, (city,))

   con.commit()

.. index::
   pair: Stored procedure; execution

Working with stored procedures
------------------------------

Firebird stored procedures can have `input` parameters and/or `output`
parameters. Some databases support `input/output` parameters, where the same
parameter is used for both input and output; Firebird does not support this.

It is important to distinguish between procedures that `return a result set`
and procedures that `populate and return their output parameters` exactly
once. Conceptually, the latter "return their output parameters" like a Python
function, whereas the former "yield result rows" like a Python generator.

Firebird's `server-side` procedural SQL syntax makes no such distinction, but
client-side SQL code (and C API code) must. A result set is retrieved from a
stored procedure by `SELECT`'ing from the procedure, whereas output
parameters are retrieved with an `EXECUTE PROCEDURE` statement.

To **retrieve a result set** from a stored procedure with FDB, use code such
as this:

.. code-block:: python

   cur.execute("select output1, output2 from the_proc(?, ?)", (input1, input2))

   # Ordinary fetch code here, such as:
   for row in cur:
       ... # process row

   con.commit() # If the procedure had any side effects, commit them.

To **execute** a stored procedure and **access its output parameters**, use
code such as this:

..
code-block:: python

   cur.callproc("the_proc", (input1, input2))

   # If there are output parameters, retrieve them as though they were the
   # first row of a result set. For example:
   outputParams = cur.fetchone()

   con.commit() # If the procedure had any side effects, commit them.

This latter is not very elegant; it would be preferable to access the
procedure's output parameters as the return value of :meth:`Cursor.callproc`.
The Python DB API specification requires the current behavior, however.

.. index::
   pair: Data; conversion

Data handling and conversions
=============================

.. index::
   pair: parameter; conversion

Implicit Conversion of Input Parameters from Strings
----------------------------------------------------

The database engine treats most SQL data types in a weakly typed fashion: the
engine may attempt to convert the raw value to a different type, as
appropriate for the current context. For instance, the SQL expressions `123`
(integer) and `'123'` (string) are treated equivalently when the value is to
be inserted into an `integer` field; the same applies when `'123'` and `123`
are to be inserted into a `varchar` field.

This weak typing model is quite unlike Python's dynamic yet strong typing.
Although weak typing is regarded with suspicion by most experienced Python
programmers, the database engine is in certain situations so aggressive about
its typing model that FDB must compromise in order to remain an elegant means
of programming the database engine.

An example is the handling of "magic values" for date and time fields. The
database engine interprets certain string values such as `'yesterday'` and
`'now'` as having special meaning in a date/time context. If FDB did not
accept strings as the values of parameters destined for storage in date/time
fields, the resulting code would be awkward.
Consider the difference between the two Python snippets below, which insert a
row containing an integer and a timestamp into a table defined with the
following DDL statement:

.. code-block:: sql

   create table test_table (i integer, t timestamp)

.. code-block:: python

   i = 1
   t = 'now'
   sqlWithMagicValues = "insert into test_table (i, t) values (?, '%s')" % t
   cur.execute(sqlWithMagicValues, (i,))

.. code-block:: python

   i = 1
   t = 'now'
   cur.execute("insert into test_table (i, t) values (?, ?)", (i, t))

If FDB did not support weak parameter typing, string parameters that the
database engine is to interpret as "magic values" would have to be rolled
into the SQL statement in a separate operation from the binding of the rest
of the parameters, as in the first Python snippet above. Implicit conversion
of parameter values from strings allows the consistency evident in the second
snippet, which is both more readable and more general.

It should be noted that FDB does not perform the conversion from string
itself. Instead, it passes that responsibility to the database engine by
changing the parameter metadata structure dynamically at the last moment,
then restoring the original state of the metadata structure after the
database engine has performed the conversion.

A secondary benefit is that when one uses FDB to import large amounts of data
from flat files into the database, the incoming values need not necessarily
be converted to their proper Python types before being passed to the database
engine. Eliminating this intermediate step may accelerate the import process
considerably, although other factors such as the chosen connection protocol
and the deactivation of indexes during the import are more consequential. For
bulk import tasks, the database engine's external tables also deserve
consideration. External tables can be used to suck semi-structured data from
flat files directly into the relational database without the intervention of
an ad hoc conversion program.

..
index::
   pair: unicode; conversion

Automatic conversion from/to unicode
------------------------------------

In Firebird, every `CHAR`, `VARCHAR` or textual `BLOB` field can (or, better:
must) have a `character set` assigned. While it's possible to define a single
character set for the whole database, it's also possible to define a
different character set for each textual field. This information is used to
correctly store the bytes that make up the character string, and together
with collation information (which defines the sort ordering and uppercase
conversions for a string) is vital for correct data manipulation, including
automatic transliteration between character sets when necessary.

.. important::

   Because data also flows between server and client application, it's vital
   that the client sends data encoded only in the character set(s) that the
   server expects. While it's possible to leave this responsibility
   completely to the client application, it's better when client and server
   settle on a single character set they will use for communication,
   especially when the database operates with multiple character sets, or
   uses a character set that is not `native` for the client application.

The character set for communication is specified using the `charset`
parameter in the :func:`connection ` call. When a `connection charset` is
defined, all textual data returned from the server is encoded in this
charset, and the client application must ensure that all textual data sent to
the server is encoded only in this charset as well.

FDB helps with the client side of this character set bargain by automatically
converting `unicode` strings into `bytes/strings` encoded in the connection
character set, and vice versa. However, developers are still responsible for
ensuring that `non-unicode` strings passed to the server are in the correct
encoding (because FDB makes no assumption about the encoding of non-unicode
strings, it can't recode them to the connection charset).

..
important:: If the `connection charset` is NOT defined at all, or the `NONE` charset is specified, FDB uses :func:`locale.getpreferredencoding` to determine the encoding for conversions from/to `unicode`. .. important:: There is one exception to automatic conversion: when the character set OCTETS is defined for a data column. Values assigned to OCTETS columns are always passed `as is`, because they're basically binary streams. This has specific implications regarding the Python version you use. Python 2.x `native strings` are `bytes`, suitable for such binary streams, but Python 3 native strings are `unicode`, and you would probably want to use the `bytes` type instead. However, FDB in this case doesn't check the value type at all, so you won't be warned if you make a mistake and pass `unicode` to an OCTETS column (unless you pass more bytes than the column may hold, or you intend to store unicode that way). Rules for automatic conversion depend on the Python version you use: * Native Python 2.x `strings` are passed to the server as is, and developers must explicitly use `unicode` strings to take advantage of automatic conversion. String values coming from the server are converted to `unicode` **only**: * for data stored in the database (i.e. not for string values returned by Firebird Service and `info` calls etc.). * when a `connection charset` is specified. * Native Python 3 strings are `unicode`, so conversion is fully automatic in both directions for all textual data, i.e. including for string values returned by Firebird Service and `info` calls etc. When a `connection charset` is not specified, FDB uses :func:`locale.getpreferredencoding` to determine the encoding for conversions from/to `unicode`. .. tip:: Except for legacy databases that don't have a `character set` defined, **always** define a character set for your databases and specify the `connection charset`. It will make your life much easier. .. index:: BLOB pair: Data; BLOB ..
_working_with_blobs: Working with BLOBs ------------------ FDB uses two types of BLOB values: * **Materialized** BLOB values are Python strings. This is the **default** type. * **Streamed** BLOB values are `file-like` objects. Materialized BLOBs are easy to work with, but are not suitable for: * **deferred loading** of BLOBs. They're called `materialized` because they're always fetched from the server as part of the row fetch. Fetching a BLOB value means separate API calls (and network roundtrips), which may slow down your application considerably. * **large values**, as they are always stored in memory in full size. These drawbacks are addressed by `stream` BLOBs. Using BLOBs in `stream` mode is easy: * For **input** values, simply use a parametrized statement and pass any `file-like` object in place of the BLOB parameter. The `file-like` object must implement only the :meth:`~file.read` method, as no other method is used. * For **output** values, you have to call the :meth:`Cursor.set_stream_blob` (or :meth:`PreparedStatement.set_stream_blob`) method with a specification of the column name(s) that should be returned as `file-like` objects. FDB then returns a :class:`BlobReader` instance instead of a string for the returned BLOB value of these column(s). The :class:`BlobReader` instance is bound to the particular BLOB value returned by the server, so its lifetime is limited. The actual BLOB value is not opened initially, so no additional API calls to the server are made if you decide to ignore the value completely. You also don't need to open the BLOB value explicitly, as the BLOB is opened automatically on the first call to :meth:`~BlobReader.next`, :meth:`~BlobReader.read`, :meth:`~BlobReader.readline`, :meth:`~BlobReader.readlines` or :meth:`~BlobReader.seek`.
However, it's good practice to :meth:`~BlobReader.close` the reader once you're finished reading, as it's likely that Python's garbage collector would call the `__del__` method too late, when the fetch context is already gone, and closing the reader would cause an error. .. warning:: If the BLOB was NOT created as a `stream` BLOB, calling the :meth:`BlobReader.seek` method will raise a :exc:`DatabaseError` exception. **This constraint is set by Firebird.** .. important:: When working with BLOB values, always have memory efficiency in mind, especially when you're processing a huge quantity of rows with BLOB values at once. Materialized BLOB values may exhaust your memory quickly, but using stream BLOBs may have an impact on performance too, as a new `BlobReader` instance is created for each value fetched. **Example program:** .. code-block:: python import os.path from cStringIO import StringIO import fdb con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') cur = con.cursor() cur.execute("recreate table blob_test (a blob)") con.commit() # --- Materialized mode (str objects for both input and output) --- # Insertion: cur.execute("insert into blob_test values (?)", ('abcdef',)) cur.execute("insert into blob_test values (?)", ('ghijklmnop',)) # Retrieval: cur.execute("select * from blob_test") print 'Materialized retrieval (as str):' print cur.fetchall() cur.execute("delete from blob_test") # --- Streaming mode (file-like objects for input; fdb.BlobReader objects for output) --- # Insertion: cur.execute("insert into blob_test values (?)", (StringIO('abcdef'),)) cur.execute("insert into blob_test values (?)", (StringIO('ghijklmnop'),)) f = file(os.path.abspath(__file__), 'rb') cur.execute("insert into blob_test values (?)", (f,)) f.close() # Retrieval using the "file-like" methods of BlobReader: cur.execute("select * from blob_test") cur.set_stream_blob('A') # Note the capital letter readerA = cur.fetchone()[0] print '\nStreaming retrieval (via fdb.BlobReader):' #
Python "file-like" interface: print 'readerA.mode: "%s"' % readerA.mode print 'readerA.closed: %s' % readerA.closed print 'readerA.tell(): %d' % readerA.tell() print 'readerA.read(2): "%s"' % readerA.read(2) print 'readerA.tell(): %d' % readerA.tell() print 'readerA.read(): "%s"' % readerA.read() print 'readerA.tell(): %d' % readerA.tell() print 'readerA.read(): "%s"' % readerA.read() readerA.close() print 'readerA.closed: %s' % readerA.closed Output:: Materialized retrieval (as str): [('abcdef',), ('ghijklmnop',)] Streaming retrieval (via fdb.BlobReader): readerA.mode: "rb" readerA.closed: False readerA.tell(): 0 readerA.read(2): "ab" readerA.tell(): 2 readerA.read(): "cdef" readerA.tell(): 6 readerA.read(): "" readerA.closed: True .. index:: ARRAY pair: Data; ARRAY Firebird ARRAY type ------------------- FDB supports the Firebird ARRAY data type. ARRAY values are represented as Python lists. On input, the Python sequence (list or tuple) must be nested appropriately if the array field is multi-dimensional, and the incoming sequence must not fall short of its maximum possible length (it will not be “padded” implicitly–see below). On output, the lists will be nested if the database array has multiple dimensions. .. note:: Database arrays have no place in a purely relational data model, which requires that data values be atomized (that is, every value stored in the database must be reduced to elementary, non-decomposable parts). The Firebird implementation of database arrays, like that of most relational database engines that support this data type, is fraught with limitations. Database arrays are of fixed size, with a predeclared number of dimensions (max. 16) and number of elements per dimension. Individual array elements cannot be set to NULL / None, so the mapping between Python lists (which have dynamic length and are therefore not normally “padded” with dummy values) and non-trivial database arrays is clumsy. Stored procedures cannot have array parameters.
Finally, many interface libraries, GUIs, and even the isql command line utility do not support database arrays. In general, it is preferable to avoid using database arrays unless you have a compelling reason. **Example:** .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') >>> cur = con.cursor() >>> cur.execute("select LANGUAGE_REQ from job where job_code='Eng' and job_grade=3 and job_country='Japan'") >>> cur.fetchone() (['Japanese\n', 'Mandarin\n', 'English\n', '\n', '\n'],) **Example program:** .. code-block:: python import fdb con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass') con.execute_immediate("recreate table array_table (a int[3,4])") con.commit() cur = con.cursor() arrayIn = [ [1, 2, 3, 4], [5, 6, 7, 8], [9,10,11,12] ] print 'arrayIn: %s' % arrayIn cur.execute("insert into array_table values (?)", (arrayIn,)) cur.execute("select a from array_table") arrayOut = cur.fetchone()[0] print 'arrayOut: %s' % arrayOut con.commit() Output:: arrayIn: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]] arrayOut: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]] .. index:: pair: Transaction; management Transaction management ======================= For the sake of simplicity, FDB lets the Python programmer ignore transaction management to the greatest extent allowed by the Python Database API Specification 2.0. The specification says, “if the database supports an auto-commit feature, this must be initially off”. At a minimum, therefore, it is necessary to call the commit method of the connection in order to persist any changes made to the database. Remember that because of ACID, every data manipulation operation in the Firebird database engine takes place in the context of a transaction, including operations that are conceptually “read-only”, such as a typical SELECT.
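The minimal pattern this implies (execute, then an explicit commit) can be sketched as follows; the `employee` example database and credentials are the same illustrative values used throughout this guide:

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')
   cur = con.cursor()
   # A transaction is started implicitly by execute(); even this
   # conceptually "read-only" SELECT runs inside it.
   cur.execute("select * from country")
   print cur.fetchall()
   # Changes become persistent only after an explicit commit;
   # closing the connection without one would roll the insert back.
   cur.execute("insert into country values ('Oz','Crowns')")
   con.commit()
   con.close()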
The client programmer of FDB establishes a transaction implicitly by using any SQL execution method, such as :meth:`~Connection.execute_immediate`, :meth:`Cursor.execute()`, or :meth:`Cursor.callproc`. Although FDB allows the programmer to pay little attention to transactions, it also exposes the full complement of the database engine’s advanced transaction control features: `transaction parameters`_, `retaining transactions`_, `savepoints`_, and `distributed transactions`_. Basics ------ When it comes to transactions, Python Database API 2.0 specifies that the :class:`Connection` object has to respond to the following methods: :meth:`Connection.commit()` Commit any pending transaction to the database. Note that if the database supports an auto-commit feature, this must be initially off. An interface method may be provided to turn it back on. Database modules that do not support transactions should implement this method with void functionality. :meth:`Connection.rollback()` (optional) In case a database does provide transactions this method causes the database to roll back to the start of any pending transaction. **Closing a connection without committing the changes first will cause an implicit rollback to be performed.** In addition to the implicit transaction initiation required by the Python Database API, FDB allows the programmer to start transactions explicitly via the :meth:`Connection.begin` method. The :meth:`Connection.savepoint` method was also added to provide support for `Firebird SAVEPOINTs`_. But Python Database API 2.0 was created with the assumption that a connection can support only one transaction at a time. However, Firebird can support multiple independent transactions that can run simultaneously within a single connection / attachment to the database.
This feature is very important, as applications may require multiple transactions opened simultaneously to perform various tasks, which would otherwise require opening multiple connections and thus consume more resources than necessary. FDB surfaces this Firebird feature by separating transaction management out from :class:`Connection` into separate :class:`Transaction` objects. To comply with Python DB API 2.0 requirements, the `Connection` object uses one `Transaction` instance as the :attr:`~Connection.main_transaction`, and delegates :meth:`~Connection.begin`, :meth:`~Connection.savepoint`, :meth:`~Connection.commit`, :meth:`~Connection.rollback`, :meth:`~Connection.trans_info` and :meth:`~Connection.transaction_info` calls to it. .. seealso:: More about using multiple transactions with the same connection in the separate section :ref:`multiple_transactions`. **Example:** .. code-block:: python import fdb con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') cur = con.cursor() # Most minimalistic transaction management -> implicit start, only commit() and rollback() # ======================================================================================== # # Transaction is started implicitly cur.execute("insert into country values ('Oz','Crowns')") con.commit() # commits active transaction # Again, transaction is started implicitly cur.execute("insert into country values ('Barsoom','XXX')") con.rollback() # rolls back active transaction cur.execute("insert into country values ('Pellucidar','Shells')") # This will roll back the transaction # because Python DB API 2.0 requires that closing connection # with pending transaction must cause an implicit rollback con.close() .. index:: pair: Transaction; auto-commit Auto-commit ----------- FDB doesn't support the `auto-commit` feature directly, but developers may achieve a similar result using an `explicit` transaction start, taking advantage of :attr:`~Transaction.default_action` and its default value (`commit`). **Example:** ..
code-block:: python import fdb con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') cur = con.cursor() con.begin() cur.execute("insert into country values ('Oz','Crowns')") con.begin() # commits active transaction and starts new one cur.execute("insert into country values ('Barsoom','XXX')") con.begin() # commits active transaction and starts new one cur.execute("insert into country values ('Pellucidar','Shells')") # However, commit is required before connection is closed, # because Python DB API 2.0 requires that closing connection # with pending transaction must cause an implicit rollback con.commit() con.close() .. index:: pair: Transaction; parameters Transaction parameters ---------------------- The database engine offers the client programmer an optional facility called `transaction parameter buffers` (TPBs) for tweaking the operating characteristics of the transactions he initiates. These include characteristics such as whether the transaction has read and write access to tables, or read-only access, and whether or not other simultaneously active transactions can share table access with the transaction. Connections have a :attr:`~Connection.default_tpb` attribute that can be changed to set the default TPB for all transactions subsequently started on the connection. Alternatively, if the programmer only wants to set the TPB for a single transaction, he can start a transaction explicitly via the :meth:`~Connection.begin()` method and pass a TPB for that single transaction. For details about TPB construction, see the `Firebird API documentation`_. In particular, the `ibase.h` supplied with Firebird contains all possible TPB elements – single bytes that the C API defines as constants whose names begin with `isc_tpb_`. FDB makes all of those TPB constants available (under the same names) as module-level constants.
A transaction parameter buffer is handled in C as a character array; FDB requires that TPBs be constructed as Python `strings` (or `bytes` for Python 3). Since the constants in the `fdb.isc_tpb_*` family are numbers, they can't be simply concatenated to create a TPB, but you may use the utility function **fdb.bs(byte_array)**, which accepts a sequence of numbers and returns a `string` (Python 2) or `bytes` (Python 3). For example, the next call returns a TPB for a typical `READ COMMITTED` transaction: .. code-block:: python from fdb import * TPB = bs([isc_tpb_version3, isc_tpb_write, isc_tpb_wait, isc_tpb_read_committed, isc_tpb_rec_version]) .. warning:: This method requires good knowledge of the `tpb` block structure and the proper order of various parameters, as the Firebird engine will raise an error when a badly structured block is used. Also the definition of `table reservation` parameters is uncomfortable, as you'll need to mix binary codes with table names passed as Pascal strings (characters preceded by string length). FDB provides several predefined TPB's for convenience: * **ISOLATION_LEVEL_READ_COMMITED** Read/Write READ COMMITTED with `record version` and WAIT option. Isolation level with greatest concurrent throughput. This is the **default** TPB. .. tip:: This isolation level is optimal for transactions that write data and don't require a stable snapshot of the database for their operations (i.e. most operations are limited to individual rows). * **ISOLATION_LEVEL_READ_COMMITED_LEGACY** Read/Write READ COMMITTED with `NO record version` and WAIT option. .. warning:: This isolation level emulates RDBMSes that use locks instead of multiversion concurrency control (MVCC). It's not recommended to use it at all, except for legacy applications lazily ported from such RDBMSes to Firebird. * **ISOLATION_LEVEL_READ_COMMITED_RO** Like `ISOLATION_LEVEL_READ_COMMITED`, but **Read Only**. .. tip:: Firebird treats these transactions as `pre-committed`, so they are the best option **for long running transactions that only read data**.
Internally FDB uses such a transaction to read metadata from the connected database. This internal transaction is also available to developers for convenience as :attr:`Connection.query_transaction`. * **ISOLATION_LEVEL_REPEATABLE_READ** or **ISOLATION_LEVEL_SNAPSHOT** Read/Write SNAPSHOT (concurrency) with WAIT option. .. tip:: This isolation level is necessary for transactions that process data in bulk, like reporting, recalculations etc. * **ISOLATION_LEVEL_SERIALIZABLE** or **ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY** Read/Write SNAPSHOT TABLE STABILITY (consistency) with WAIT option. Like REPEATABLE_READ/SNAPSHOT, but locks whole tables for writes from other transactions. Isolation level with lowest concurrent throughput. .. warning:: Because tables are locked for `protected write` (i.e. no other transaction can write until the lock is released) **at time of first access**, there is a great risk of `deadlock` between transactions. .. tip:: To prevent deadlocks and increase concurrent throughput it's recommended to use custom TPB's with `fine-grained table access reservation`. **Example:** .. code-block:: python import fdb con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') cur = con.cursor() # Start transaction with default_tpb (ISOLATION_LEVEL_READ_COMMITED) con.begin() cur.execute('select * from JOB') con.commit() # Start using transactions in REPEATABLE READ (SNAPSHOT) isolation con.default_tpb = fdb.ISOLATION_LEVEL_REPEATABLE_READ con.begin() cur.execute('select * from JOB') con.commit() # Start THIS transaction as R/O READ COMMITTED con.begin(fdb.ISOLATION_LEVEL_READ_COMMITED_RO) cur.execute('select * from JOB') con.commit() For cases when the predefined transaction parameter blocks are not suitable for your needs, FDB offers the utility class :class:`TPB` for convenient and safe construction of custom `tpb blocks`.
Simply create an instance of this class, set member attributes to the required values and use either the rendered binary tpb block (see :meth:`TPB.render`) or the `TPB` instance itself to set `default_tpb`, or as a parameter to `begin()`. **Example:** .. code-block:: python import fdb con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') # Use TPB to construct valid transaction parameter block # from the fdb.isc_tpb_* family. customTPB = fdb.TPB() customTPB.isolation_level = fdb.isc_tpb_consistency # SERIALIZABLE customTPB.table_reservation["MY_TABLE"] = (fdb.isc_tpb_protected, fdb.isc_tpb_lock_write) # Explicitly start a transaction with the custom TPB: con.begin(tpb=customTPB) # For frequent use, it's better to use an already assembled version of the TPB customTPB = fdb.TPB() customTPB.access_mode = fdb.isc_tpb_read # read only customTPB.isolation_level = fdb.isc_tpb_concurrency # SNAPSHOT customTPB = customTPB.render() # Create valid block according to current values of member attributes. for x in range(1000): con.begin(tpb=customTPB) If you want to build only the `table reservation` part of a `tpb` (for example to add to various custom built parameter blocks), you can use the class :class:`TableReservation` instead of `TPB`. .. index:: pair: Transaction; information about Getting information about transaction ------------------------------------- The :class:`Transaction` object exposes two methods that return information about the currently managed active transaction (the same methods are exposed also by the :class:`Connection` object for :attr:`~Connection.main_transaction`): :meth:`~Transaction.transaction_info` is a very thin wrapper around the function `isc_transaction_info()`. This method does not attempt to interpret its results except with regard to whether they are a string or an integer.
A more convenient way to access the same functionality is via the :meth:`~Transaction.trans_info` method, which is a high-level convenience wrapper around the `transaction_info` method that parses the output of `transaction_info` into Python-friendly objects instead of returning raw binary buffers in the case of complex result types. **Example program:** .. code-block:: python import fdb con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey') # Start transaction, so we can get information about it con.begin() info = con.trans_info([fdb.isc_info_tra_id, fdb.isc_info_tra_oldest_interesting, fdb.isc_info_tra_oldest_snapshot, fdb.isc_info_tra_oldest_active, fdb.isc_info_tra_isolation, fdb.isc_info_tra_access, fdb.isc_info_tra_lock_timeout]) print info print "TransactionID:", info[fdb.isc_info_tra_id] print "Oldest Interesting (OIT):",info[fdb.isc_info_tra_oldest_interesting] print "Oldest Snapshot:",info[fdb.isc_info_tra_oldest_snapshot] print "Oldest Active (OAT):",info[fdb.isc_info_tra_oldest_active] print "Isolation Level:",info[fdb.isc_info_tra_isolation] print "Access Mode:",info[fdb.isc_info_tra_access] print "Lock Timeout:",info[fdb.isc_info_tra_lock_timeout] Output:: {4: 459, 5: 430, 6: 459, 7: 459, 8: (3, 1), 9: 1, 10: -1} TransactionID: 459 Oldest Interesting (OIT): 430 Oldest Snapshot: 459 Oldest Active (OAT): 459 Isolation Level: (3, 1) Access Mode: 1 Lock Timeout: -1 .. note:: Isolation level info values are available as FDB constants `isc_info_tra_consistency`, `isc_info_tra_concurrency` and `isc_info_tra_read_committed`. For `read committed`, a tuple of two values is returned instead of a single value, where the second value is the record version flag `isc_info_tra_no_rec_version` or `isc_info_tra_rec_version`. Access mode values are available as FDB constants `isc_info_tra_readonly` and `isc_info_tra_readwrite`. ..
index:: pair: Transaction; retaining Retaining transactions ---------------------- The :meth:`~Transaction.commit` and :meth:`~Transaction.rollback` methods accept an optional boolean parameter `retaining` (**default False**) to indicate whether to recycle the transactional context of the transaction being resolved by the method call. If retaining is `True`, the infrastructural support for the transaction active at the time of the method call will be “retained” (efficiently and transparently recycled) after the database server has committed or rolled back the conceptual transaction. .. important:: In code that commits or rolls back frequently, “retaining” the transaction yields considerably better performance. However, retaining transactions must be used cautiously because they can interfere with the server’s ability to garbage collect old record versions. For details about this issue, read the “Garbage” section of `this document `_ by Ann Harrison. For more information about retaining transactions, see `Firebird documentation`. .. index:: pair: Transaction; SAVEPOINT Savepoints ---------- Savepoints are named, intermediate control points within an open transaction that can later be rolled back to, without affecting the preceding work. Multiple savepoints can exist within a single unresolved transaction, providing “multi-level undo” functionality. Although Firebird savepoints are fully supported from SQL alone via the `SAVEPOINT ‘name’` and `ROLLBACK TO ‘name’` statements, FDB also exposes savepoints at the Python API level for the sake of convenience. A call to the method :meth:`~Transaction.savepoint` establishes a savepoint with the specified `name`. To roll back to a specific savepoint, call the :meth:`~Transaction.rollback` method and provide a value (the name of the savepoint) for the optional `savepoint` parameter.
If the savepoint parameter of `rollback()` is not specified, the active transaction is cancelled in its entirety, as required by the Python Database API Specification. The following program demonstrates savepoint manipulation via the FDB API, rather than raw SQL. .. code-block:: python import fdb con = fdb.connect(dsn='employee', user='sysdba', password='pass') cur = con.cursor() cur.execute("recreate table test_savepoints (a integer)") con.commit() print 'Before the first savepoint, the contents of the table are:' cur.execute("select * from test_savepoints") print ' ', cur.fetchall() cur.execute("insert into test_savepoints values (?)", [1]) con.savepoint('A') print 'After savepoint A, the contents of the table are:' cur.execute("select * from test_savepoints") print ' ', cur.fetchall() cur.execute("insert into test_savepoints values (?)", [2]) con.savepoint('B') print 'After savepoint B, the contents of the table are:' cur.execute("select * from test_savepoints") print ' ', cur.fetchall() cur.execute("insert into test_savepoints values (?)", [3]) con.savepoint('C') print 'After savepoint C, the contents of the table are:' cur.execute("select * from test_savepoints") print ' ', cur.fetchall() con.rollback(savepoint='A') print 'After rolling back to savepoint A, the contents of the table are:' cur.execute("select * from test_savepoints") print ' ', cur.fetchall() con.rollback() print 'After rolling back entirely, the contents of the table are:' cur.execute("select * from test_savepoints") print ' ', cur.fetchall() The output of the example program is shown below:: Before the first savepoint, the contents of the table are: [] After savepoint A, the contents of the table are: [(1,)] After savepoint B, the contents of the table are: [(1,), (2,)] After savepoint C, the contents of the table are: [(1,), (2,), (3,)] After rolling back to savepoint A, the contents of the table are: [(1,)] After rolling back entirely, the contents of the table are: [] .. 
index:: pair: Transaction; multiple .. _multiple_transactions: Using multiple transactions with the same connection ---------------------------------------------------- To use additional transactions that could run simultaneously with the :attr:`~Connection.main_transaction` managed by `Connection`, create a new :class:`Transaction` object by calling the :meth:`Connection.trans` method. If you don't specify the optional `default_tpb` parameter, this new `Transaction` inherits the :attr:`~Connection.default_tpb` from `Connection`. The physical transaction is not started when the `Transaction` instance is created, but implicitly when the first SQL statement is executed, or explicitly via a :meth:`Transaction.begin` call. To execute statements in the context of this additional transaction you have to use `cursors` obtained directly from this `Transaction` instance by calling its :meth:`~Transaction.cursor` method, or call the :meth:`Transaction.execute_immediate` method. **Example:** .. code-block:: python import fdb con = fdb.connect(dsn='employee', user='sysdba', password='pass') # Cursor for main_transaction context cur = con.cursor() # Create new READ ONLY READ COMMITTED transaction ro_transaction = con.trans(fdb.ISOLATION_LEVEL_READ_COMMITED_RO) # and cursor ro_cur = ro_transaction.cursor() cur.execute("insert into country values ('Oz','Crowns')") con.commit() # commits main transaction # Read data created by main transaction from second one ro_cur.execute("select * from COUNTRY where COUNTRY = 'Oz'") print ro_cur.fetchall() # Insert more data, but don't commit cur.execute("insert into country values ('Barsoom','XXX')") # Try to read the uncommitted data from the second transaction ro_cur.execute("select * from COUNTRY where COUNTRY = 'Barsoom'") print ro_cur.fetchall() .. index:: pair: Transaction; distributed .. _distributed_transactions: Distributed Transactions ------------------------ Distributed transactions are transactions that span multiple databases.
FDB provides this Firebird feature through the :class:`ConnectionGroup` class. Instances of this class act as managers for a :class:`Transaction` object that is bound to multiple connections, and for the :class:`Cursor` instances bound to it and to the connections participating in the group. That is, the distributed transaction is fully independent of all other transactions, main or secondary, of member connections. To assemble a group of connections, you can either pass a sequence of :class:`Connection` instances to the :class:`ConnectionGroup` constructor, or add connections later by calling the :meth:`ConnectionGroup.add` method. Any :class:`Connection` can be a member of only one group, and an attempt to add it to another one raises an exception. Also, a `Connection` participating in a group cannot be closed before it's removed from the group or the whole group is disbanded. .. warning:: Never add more than one connection to the same database to the same `ConnectionGroup`! Similarly to :class:`Transaction`, distributed transactions are managed through the :meth:`ConnectionGroup.begin`, :meth:`ConnectionGroup.savepoint`, :meth:`ConnectionGroup.commit` and :meth:`ConnectionGroup.rollback` methods. Additionally, `ConnectionGroup` exposes the method :meth:`~ConnectionGroup.prepare` that explicitly initiates the first phase of the `Two-Phase Commit Protocol`. Transaction parameters are defined similarly to :class:`Transaction` using :attr:`ConnectionGroup.default_tpb` or as an optional parameter to the :meth:`~ConnectionGroup.begin` call. SQL statements that should belong to the context of the distributed transaction are executed via :class:`Cursor` instances acquired through the :meth:`ConnectionGroup.cursor` method, or by calling the :meth:`ConnectionGroup.execute_immediate` method. .. note:: Because :class:`Cursor` instances can belong to only one :class:`Connection`, the :meth:`~ConnectionGroup.cursor` method has a mandatory parameter `connection`, to specify to which member connection the cursor should belong. **Example program:** ..
code-block:: python import fdb # First database con1 = fdb.create_database("CREATE DATABASE 'testdb-1.fdb' USER 'SYSDBA' PASSWORD 'masterkey'") con1.execute_immediate("recreate table T (PK integer, C1 integer)") con1.commit() # Second database con2 = fdb.create_database("CREATE DATABASE 'testdb-2.fdb' USER 'SYSDBA' PASSWORD 'masterkey'") con2.execute_immediate("recreate table T (PK integer, C1 integer)") con2.commit() # Create connection group cg = fdb.ConnectionGroup((con1,con2)) # Prepare Group cursors for each connection gc1 = cg.cursor(con1) gc2 = cg.cursor(con2) # Connection cursors to check content of databases q = 'select * from T order by pk' cc1 = con1.cursor() p1 = cc1.prep(q) cc2 = con2.cursor() p2 = cc2.prep(q) print "Distributed transaction: COMMIT" # =============================== gc1.execute('insert into t (pk) values (1)') gc2.execute('insert into t (pk) values (1)') cg.commit() # check it con1.commit() cc1.execute(p1) print 'db1:',cc1.fetchall() con2.commit() cc2.execute(p2) print 'db2:',cc2.fetchall() print "Distributed transaction: PREPARE + COMMIT" # ========================================= gc1.execute('insert into t (pk) values (2)') gc2.execute('insert into t (pk) values (2)') cg.prepare() cg.commit() # check it con1.commit() cc1.execute(p1) print 'db1:',cc1.fetchall() con2.commit() cc2.execute(p2) print 'db2:',cc2.fetchall() print "Distributed transaction: SAVEPOINT + ROLLBACK to it" # =================================================== gc1.execute('insert into t (pk) values (3)') cg.savepoint('CG_SAVEPOINT') gc2.execute('insert into t (pk) values (3)') cg.rollback(savepoint='CG_SAVEPOINT') # check it - via group cursors, as transaction is still active gc1.execute(q) print 'db1:',gc1.fetchall() gc2.execute(q) print 'db2:',gc2.fetchall() print "Distributed transaction: ROLLBACK" # ================================= cg.rollback() # check it con1.commit() cc1.execute(p1) print 'db1:',cc1.fetchall() con2.commit() cc2.execute(p2) print
'db2:',cc2.fetchall() print "Distributed transaction: EXECUTE_IMMEDIATE" # ========================================== cg.execute_immediate('insert into t (pk) values (3)') cg.commit() # check it con1.commit() cc1.execute(p1) print 'db1:',cc1.fetchall() con2.commit() cc2.execute(p2) print 'db2:',cc2.fetchall() # Finalize con1.drop_database() con1.close() con2.drop_database() con2.close() Output:: Distributed transaction: COMMIT db1: [(1, None)] db2: [(1, None)] Distributed transaction: PREPARE + COMMIT db1: [(1, None), (2, None)] db2: [(1, None), (2, None)] Distributed transaction: SAVEPOINT + ROLLBACK to it db1: [(1, None), (2, None), (3, None)] db2: [(1, None), (2, None)] Distributed transaction: ROLLBACK db1: [(1, None), (2, None)] db2: [(1, None), (2, None)] Distributed transaction: EXECUTE_IMMEDIATE db1: [(1, None), (2, None), (3, None)] db2: [(1, None), (2, None), (3, None)] .. _transaction-context-manager: Transaction Context Manager --------------------------- FDB provides context manager :class:`TransactionContext` that allows automatic transaction management using :ref:`WITH`. It can work with any object that supports `begin()`, `commit()` and `rollback()` methods, i.e. :class:`Connection`, :class:`ConnectionGroup` or :class:`Transaction`. It starts transaction when WITH block is entered and commits it if no exception occurst within it, or calls `rollback()` otherwise. Exceptions raised in WITH block are never suppressed. Examples: .. 
.. sourcecode:: python

   con = fdb.connect(dsn='employee',user='sysdba',password='masterkey')

   # Uses default main transaction
   with TransactionContext(con):
       cur = con.cursor()
       cur.execute("insert into T (PK,C1) values (1,'TXT')")

   # Uses separate transaction
   with TransactionContext(con.trans()) as tr:
       cur = tr.cursor()
       cur.execute("insert into T (PK,C1) values (2,'AAA')")

   # Uses connection group (distributed transaction)
   con2 = fdb.connect(dsn='remote:employee',user='sysdba',password='masterkey')
   cg = fdb.ConnectionGroup((con,con2))
   with TransactionContext(cg):
       cur1 = cg.cursor(con)
       cur2 = cg.cursor(con2)
       cur1.execute("insert into T (PK,C1) values (3,'Local')")
       cur2.execute("insert into T (PK,C1) values (3,'Remote')")

.. index::
   pair: Database; events

Database Events
===============

What they are
-------------

The Firebird engine features a distributed, interprocess communication mechanism based on messages called `database events`. A database event is a message passed from a trigger or stored procedure to an application to announce the occurrence of a specified condition or action, usually a database change such as an insertion, modification, or deletion of a record. The Firebird event mechanism enables applications to respond to actions and database changes made by other, concurrently running applications without the need for those applications to communicate directly with one another, and without incurring the expense of CPU time required for periodic polling to determine if an event has occurred.

Why use them
------------

Anything that can be accomplished with database events can also be implemented using other techniques, so why bother with events? Since you've chosen to write database-centric programs in Python rather than assembly language, you probably already know the answer to this question, but let's illustrate. A typical application for database events is the handling of administrative messages.
Suppose you have an administrative message database with a `messages` table, into which various applications insert timestamped status reports. It may be desirable to react to these messages in diverse ways, depending on the status they indicate: to ignore them, to initiate the update of dependent databases upon their arrival, to forward them by e-mail to a remote administrator, or even to set off an alarm so that on-site administrators will know a problem has occurred.

It is undesirable to tightly couple the program whose status is being reported (the `message producer`) to the program that handles the status reports (the `message handler`). There are obvious losses of flexibility in doing so. For example, the message producer may run on a separate machine from the administrative message database and may lack access rights to the downstream reporting facilities (e.g., network access to the SMTP server, in the case of forwarded e-mail notifications). Additionally, the actions required to handle status reports may themselves be time-consuming and error-prone, as in accessing a remote network to transmit e-mail.

In the absence of database event support, the message handler would probably be implemented via `polling`. Polling is simply the repetition of a check for a condition at a specified interval. In this case, the message handler would check in an infinite loop to see whether the most recent record in the `messages` table was more recent than the last message it had handled. If so, it would handle the fresh message(s); if not, it would go to sleep for a specified interval, then loop.

The `polling-based` implementation of the message handler is fundamentally flawed. Polling is a form of busy-wait_; the check for new messages is performed at the specified interval, regardless of the actual activity level of the message producers.
If the polling interval is lengthy, messages might not be handled within a reasonable time period after their arrival; if the polling interval is brief, the message handler program (and there may be many such programs) will waste a large amount of CPU time on unnecessary checks.

The database server is necessarily aware of the exact moment when a new message arrives. Why not let the message handler program request that the database server send it a notification when a new message arrives? The message handler can then efficiently sleep until the moment its services are needed. Under this `event-based` scheme, the message handler becomes aware of new messages at the instant they arrive, yet it does not waste CPU time checking in vain for new messages when there are none available.

How events are exposed
----------------------

#. Server Process ("An event just occurred!")

   To notify any interested listeners that a specific event has occurred, issue the `POST_EVENT` statement from a Stored Procedure or Trigger. The `POST_EVENT` statement has one parameter: the name of the event to post. In the preceding example of the administrative message database, `POST_EVENT` might be used from an `after insert` trigger on the `messages` table, like this:

   .. code-block:: sql

      create trigger trig_messages_handle_insert
        for messages
          after insert
      as
      begin
        POST_EVENT 'new_message';
      end

   .. note:: The physical notification of the client process does not occur until the transaction in which the `POST_EVENT` took place is actually committed. Therefore, multiple events may *conceptually* occur before the client process is *physically* informed of even one occurrence. Furthermore, the database engine makes no guarantee that clients will be informed of events in the same groupings in which they conceptually occurred. If, within a single transaction, an event named `event_a` is posted once and an event named `event_b` is posted once, the client may receive those posts in separate "batches", despite the fact that they occurred in the same conceptual unit (a single transaction). This also applies to multiple occurrences of *the same* event within a single conceptual unit: the physical notifications may arrive at the client separately.

#. Client Process ("Send me a message when an event occurs.")

   .. note:: If you don't care about the gory details of event notification, skip to the section that describes FDB's Python-level event handling API.

   The Firebird C client library offers two forms of event notification. The first form is *synchronous* notification, by way of the function :c:func:`isc_wait_for_event()`. This form is admirably simple for a C programmer to use, but is inappropriate as a basis for FDB's event support, chiefly because it's not sophisticated enough to serve as the basis for a comfortable Python-level API. The other form of event notification offered by the database client library is *asynchronous*, by way of the functions :c:func:`isc_que_events()` (note that the name of that function is misspelled), :c:func:`isc_cancel_events()`, and others. The details are as nasty as they are numerous, but the essence of using asynchronous notification from C is as follows:

   #. Call :c:func:`isc_event_block()` to create a formatted binary buffer that will tell the server which events the client wants to listen for.
   #. Call :c:func:`isc_que_events()` (passing the buffer created in the previous step) to inform the server that the client is ready to receive event notifications, and provide a callback that will be asynchronously invoked when one or more of the registered events occurs.
   #. [The thread that called :c:func:`isc_que_events()` to initiate event listening must now do something else.]
   #. When the callback is invoked (the database client library starts a thread dedicated to this purpose), it can use the :c:func:`isc_event_counts()` function to determine how many times each of the registered events has occurred since the last call to :c:func:`isc_event_counts()` (if any).
   #. [The callback thread should now "do its thing", which may include communicating with the thread that called :c:func:`isc_que_events()`.]
   #. When the callback thread is finished handling an event notification, it must call :c:func:`isc_que_events()` again in order to receive future notifications. Future notifications will invoke the callback again, effectively "looping" the callback thread back to Step 4.

API for Python developers
-------------------------

The FDB database event API is comprised of the following: the method :meth:`Connection.event_conduit` and the class :class:`EventConduit`.

The :class:`EventConduit` class serves as a "conduit" through which database event notifications flow into the Python program. It is not designed to be instantiated directly by the Python programmer. Instead, use the :meth:`Connection.event_conduit` method to create `EventConduit` instances. `event_conduit` is a method of `Connection` rather than a module-level function or a class constructor because the database engine deals with events in the context of a particular database (after all, `POST_EVENT` must be issued by a stored procedure or a trigger). :meth:`Connection.event_conduit` takes a sequence of string event names as its parameter, and returns an :class:`EventConduit` instance.

.. important:: To start listening for events it's necessary (starting from FDB version 1.4.2) to call the :meth:`EventConduit.begin` method or use EventConduit's context manager interface.
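The :meth:`~EventConduit.wait` method described below returns a dictionary that maps each registered event name to its occurrence count, including zero counts for registered events that did not fire. The following helper, a purely illustrative sketch (the `fired_events` function is hypothetical, not part of FDB), keeps only the events that actually occurred:

.. code-block:: python

   def fired_events(counts):
       # counts is the dictionary returned by EventConduit.wait(),
       # or None when the wait timed out without any event.
       if counts is None:
           return {}
       # Keep only entries with a non-zero occurrence count.
       return dict((name, n) for name, n in counts.items() if n)

For example, ``fired_events({'event_a': 0, 'event_b': 2})`` yields ``{'event_b': 2}``.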
Immediately when the :meth:`~EventConduit.begin` method is called, the EventConduit starts to accumulate, in its internal queue, notifications of any events that occur, until the conduit is closed either explicitly (via the :meth:`~EventConduit.close` method) or implicitly (via garbage collection). Notifications about events are acquired through a call to the :meth:`~EventConduit.wait` method, which blocks the calling thread until at least one of the events occurs or the specified `timeout` (if any) expires, and returns `None` if the wait timed out, or a dictionary that maps `event_name -> event_occurrence_count`.

.. important:: `EventConduit` can act as a context manager that ensures execution of the :meth:`~EventConduit.begin` and :meth:`~EventConduit.close` methods. It's strongly advised to use `EventConduit` with the `with` statement.

**Example:**

.. code-block:: python

   with connection.event_conduit( ('event_a', 'event_b') ) as conduit:
       events = conduit.wait()
       process_events(events)

If you want to drop the notifications accumulated so far by the conduit, call the :meth:`EventConduit.flush` method.

**Example program:**
.. code-block:: python

   import fdb
   import threading
   import time

   # Prepare database
   con = fdb.create_database("CREATE DATABASE 'event_test.fdb' USER 'SYSDBA' PASSWORD 'masterkey'")
   con.execute_immediate("CREATE TABLE T (PK Integer, C1 Integer)")
   con.execute_immediate("""CREATE TRIGGER EVENTS_AU FOR T ACTIVE
   BEFORE UPDATE POSITION 0
   AS
   BEGIN
     if (old.C1 <> new.C1) then
       post_event 'c1_updated' ;
   END""")
   con.execute_immediate("""CREATE TRIGGER EVENTS_AI FOR T ACTIVE
   AFTER INSERT POSITION 0
   AS
   BEGIN
     if (new.c1 = 1) then
       post_event 'insert_1' ;
     else if (new.c1 = 2) then
       post_event 'insert_2' ;
     else if (new.c1 = 3) then
       post_event 'insert_3' ;
     else
       post_event 'insert_other' ;
   END""")
   con.commit()
   cur = con.cursor()

   # Utility function
   def send_events(command_list):
       for cmd in command_list:
           cur.execute(cmd)
       con.commit()

   print "One event"
   # =========
   timed_event = threading.Timer(3.0,send_events,args=[["insert into T (PK,C1) values (1,1)",]])
   events = con.event_conduit(['insert_1'])
   events.begin()
   timed_event.start()
   e = events.wait()
   events.close()
   print e

   print "Multiple events"
   # ===============
   cmds = ["insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",
           "insert into T (PK,C1) values (1,3)",
           "insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",]
   timed_event = threading.Timer(3.0,send_events,args=[cmds])
   events = con.event_conduit(['insert_1','insert_3'])
   events.begin()
   timed_event.start()
   e = events.wait()
   events.close()
   print e

   print "20 events"
   # =========
   cmds = ["insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",
           "insert into T (PK,C1) values (1,3)",
           "insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",]
   timed_event = threading.Timer(1.0,send_events,args=[cmds])
   events = con.event_conduit(['insert_1','A','B','C','D',
                               'E','F','G','H','I','J','K','L','M',
                               'N','O','P','Q','R','insert_3'])
   events.begin()
   timed_event.start()
   time.sleep(3)
   e = events.wait()
   events.close()
   print e
   print "Flush events"
   # ============
   timed_event = threading.Timer(3.0,send_events,args=[["insert into T (PK,C1) values (1,1)",]])
   events = con.event_conduit(['insert_1'])
   events.begin()
   send_events(["insert into T (PK,C1) values (1,1)",
                "insert into T (PK,C1) values (1,1)"])
   time.sleep(2)
   events.flush()
   timed_event.start()
   e = events.wait()
   events.close()
   print e

   # Finalize
   con.drop_database()
   con.close()

Output::

   One event
   {'insert_1': 1}
   Multiple events
   {'insert_3': 1, 'insert_1': 2}
   20 events
   {'A': 0, 'C': 0, 'B': 0, 'E': 0, 'D': 0, 'G': 0, 'insert_1': 2, 'I': 0, 'H': 0, 'K': 0, 'J': 0, 'M': 0, 'L': 0, 'O': 0, 'N': 0, 'Q': 0, 'P': 0, 'R': 0, 'insert_3': 1, 'F': 0}
   Flush events
   {'insert_1': 1}

.. currentmodule:: fdb.services

.. index:: Services

.. _working_with_services:

Working with Services
=====================

Database server maintenance tasks such as user management, load monitoring, and database backup have traditionally been automated by scripting the command-line tools :program:`gbak`, :program:`gfix`, :program:`gsec`, and :program:`gstat`. The API presented to the client programmer by these utilities is inelegant because they are, after all, command-line tools rather than native components of the client language. To address this problem, Firebird has a facility called the `Services API`, which exposes a uniform interface to the administrative functionality of the traditional command-line tools.

The native Services API, though consistent, is much lower-level than a Pythonic API. If the native version were exposed directly, accomplishing a given task would probably require more Python code than scripting the traditional command-line tools. For this reason, FDB presents its own abstraction over the native API via the :mod:`fdb.services` module.

Services API Connections
------------------------

All Services API operations are performed in the context of a `connection` to a specific database server, represented by the :class:`fdb.services.Connection` class.
Similarly to database connections, FDB provides the :func:`~fdb.services.connect()` constructor function to create such connections. This constructor has three keyword parameters:

:host: The network name of the computer on which the database server is running.
:user: The name of the database user under whose authority the maintenance tasks are to be performed.
:password: User's password.

Since maintenance operations are most often initiated by an administrative user on the same computer as the database server, `host` defaults to the local computer, and `user` defaults to `SYSDBA`. The three calls to :func:`fdb.services.connect()` in the following program are equivalent:

.. code-block:: python

   from fdb import services

   con = services.connect(password='masterkey')
   con = services.connect(user='sysdba', password='masterkey')
   con = services.connect(host='localhost', user='sysdba', password='masterkey')

.. note:: Like database connections, it's good practice to :meth:`~Connection.close` them when you don't need them anymore.

The :class:`~fdb.services.Connection` object provides a number of methods that can be divided into several groups:

* `Server Configuration and State`_: Get information about server configuration, active attachments or users, or the content of the server log.
* `Database options`_: Set various database parameters, such as the size of the page cache, access mode or SQL dialect.
* `Database maintenance`_: Perform backup, restore, validation or other database maintenance tasks.
* `User maintenance`_: Get or change information about users defined in the security database, create new users or remove them.
* `Trace service`_: Start, stop, pause/resume or list Firebird `trace sessions`.
* `Text ouput from Services`_: Some services, like `backup` or `trace`, may return a significant amount of text. This output is not returned directly by the method that starts the service, but through separate methods that emulate reading from a text file, or through `iterator protocol` support on `Connection`.

Server Configuration and State
------------------------------

:meth:`~Connection.get_service_manager_version()`
   To help client programs adapt to version changes, the service manager exposes its version number as an integer.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_service_manager_version()
      2

   `fdb.services` is a thick wrapper of the Services API that can shield its users from changes in the underlying C API, so this method is unlikely to be useful to the typical Python client programmer.

:meth:`~Connection.get_server_version()`
   Returns the server's version string.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_server_version()
      LI-V2.5.2.26536 Firebird 2.5

   At first glance, this method appears to duplicate the functionality of the :attr:`fdb.Connection.server_version` property, but when working with Firebird, there is a difference. :attr:`fdb.Connection.server_version` is based on a C API call (`isc_database_info()`) that existed long before the introduction of the Services API. Some programs written before the advent of Firebird test the version number in the return value of `isc_database_info()`, and refuse to work if it indicates that the server is too old. Since the first stable version of Firebird was labeled 1.0, this pre-Firebird version testing scheme incorrectly concludes that (e.g.) Firebird 1.0 is older than Interbase 5.0. Firebird addresses this problem by making `isc_database_info()` return a "pseudo-InterBase" version number, whereas the Services API returns the true Firebird version, as shown:

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      import fdb

      con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
      print 'Interbase-compatible version string:', con.server_version
      svcCon = fdb.services.connect(password='masterkey')
      print 'Actual Firebird version string:     ', svcCon.get_server_version()

   Output (on Firebird 2.5.1/Linux64)::

      Interbase-compatible version string: LI-V6.3.1.26351 Firebird 2.5
      Actual Firebird version string:      LI-V2.5.1.26351 Firebird 2.5

:meth:`~Connection.get_architecture()`
   Returns platform information for the server, including hardware architecture and operating system family.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_architecture()
      Firebird/linux AMD64

:meth:`~Connection.get_home_directory()`
   Returns the equivalent of the RootDirectory setting from :file:`firebird.conf`.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_home_directory()
      /opt/firebird/

:meth:`~Connection.get_security_database_path()`
   Returns the location of the server's core security database, which contains user definitions and such. The name of this database is :file:`security2.fdb` (Firebird 2.0 and later) or :file:`security.fdb` (Firebird 1.5).
   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_security_database_path()
      /opt/firebird/security2.fdb

:meth:`~Connection.get_lock_file_directory()`
   Returns the directory location for Firebird lock files.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_lock_file_directory()
      /tmp/firebird/

:meth:`~Connection.get_server_capabilities()`
   Returns a tuple of capability info codes for each capability reported by the Firebird server. The following constants are defined in :mod:`fdb.services` for convenience:

   * :data:`CAPABILITY_MULTI_CLIENT`
   * :data:`CAPABILITY_REMOTE_HOP`
   * :data:`CAPABILITY_SERVER_CONFIG`
   * :data:`CAPABILITY_QUOTED_FILENAME`
   * :data:`CAPABILITY_NO_SERVER_SHUTDOWN`

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_server_capabilities()
      (2L, 4L, 512L, 256L)
      >>> fdb.services.CAPABILITY_MULTI_CLIENT in con.get_server_capabilities()
      True
      >>> fdb.services.CAPABILITY_QUOTED_FILENAME in con.get_server_capabilities()
      False

:meth:`~Connection.get_message_file_directory()`
   To support internationalized error messages/prompts, the database engine stores its messages in a file named :file:`firebird.msg`. The directory in which this file resides can be determined with this method.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> print con.get_message_file_directory()
      /opt/firebird/

:meth:`~Connection.get_connection_count()`
   Returns the number of active connections to databases managed by the server. This count only includes database connections (such as open instances of :class:`fdb.Connection`), not services manager connections (such as open instances of :class:`fdb.services.Connection`).

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> db1 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
      >>> db2 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
      >>> print con.get_connection_count()
      2

:meth:`~Connection.get_attached_database_names()`
   Returns a list of the names of all databases to which the server is maintaining at least one connection. The database names are not guaranteed to be in any particular order.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> db1 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
      >>> db2 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
      >>> print con.get_attached_database_names()
      ['/opt/firebird/examples/empbuild/employee.fdb']

:meth:`~Connection.get_log()`
   Requests the contents of the server's log file (:file:`firebird.log`). This is a so-called `Async method` that only initiates the log transfer. The actual log content can be read by one of the many methods for `text ouput from Services`_ that `Connection` provides.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.get_log()
      >>> log = con.readlines()

Database options
----------------

:meth:`~Connection.set_default_page_buffers`
   Sets the individual page cache size for a database.
   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.set_default_page_buffers('employee',100)

:meth:`~Connection.set_sweep_interval`
   Sets the threshold for automatic sweep.

   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.set_sweep_interval('employee',100000)

:meth:`~Connection.set_reserve_page_space`
   Sets the data page space reservation policy.

   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      # Use all space
      >>> con.set_reserve_page_space('employee',False)

:meth:`~Connection.set_write_mode`
   Sets the disk write mode: Sync (forced writes) or Async (buffered). The following constants are defined in :mod:`fdb.services` for convenience:

   * :data:`~fdb.services.WRITE_FORCED`
   * :data:`~fdb.services.WRITE_BUFFERED`

   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      # Disable Forced Writes
      >>> con.set_write_mode('employee',services.WRITE_BUFFERED)

:meth:`~Connection.set_access_mode`
   Sets the database access mode: Read Only or Read/Write. The following constants are defined in :mod:`fdb.services` for convenience:

   * :data:`~fdb.services.ACCESS_READ_WRITE`
   * :data:`~fdb.services.ACCESS_READ_ONLY`

   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      # Set database to R/O mode
      >>> con.set_access_mode('employee',services.ACCESS_READ_ONLY)

:meth:`~Connection.set_sql_dialect`
   Sets the SQL dialect for a database.

   .. warning:: Changing the SQL dialect on an existing database is not recommended. Only newly created database objects would respect the new dialect setting, while objects created under the previous dialect remain unchanged. That may have dire consequences.
   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      # Use SQL dialect 1
      >>> con.set_sql_dialect('employee',1)

Database maintenance
--------------------

:meth:`~Connection.get_limbo_transaction_ids`
   Returns a list of transactions in limbo.

:meth:`~Connection.commit_limbo_transaction`
   Resolves a limbo transaction with commit.

:meth:`~Connection.rollback_limbo_transaction`
   Resolves a limbo transaction with rollback.

:meth:`~Connection.get_statistics`
   Requests database statistics. The report is in the same format as the output of the gstat command-line utility. This method has one required parameter, the location of the database on which to compute statistics, and six optional boolean parameters for controlling the domain of the statistics. This is a so-called `Async method` that only initiates report processing. The actual report can be read by one of the many methods for `text ouput from Services`_ that `Connection` provides.

   .. note:: Until the statistical report is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service will fail with an exception.

   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.get_statistics('employee')
      >>> stat_report = con.readlines()

:meth:`~Connection.backup`
   Requests a logical (GBAK) database backup. Produces a report about the backup process. This is a so-called `Async method` that only initiates the backup process. The actual report can be read by one of the many methods for `text ouput from Services`_ that `Connection` provides.

   .. note:: Until the backup report is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service will fail with an exception.
   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.backup('employee', '/home/data/employee.fbk', metadata_only=True, collect_garbage=False)
      >>> backup_report = con.readlines()

:meth:`~Connection.restore`
   Requests a database restore from a logical (GBAK) backup. Produces a report about the restore process. This is a so-called `Async method` that only initiates the restore process. The actual report can be read by one of the many methods for `text ouput from Services`_ that `Connection` provides.

   .. note:: Until the restore report is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service will fail with an exception.

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.restore('/home/data/employee.fbk', '/home/data/empcopy.fdb')
      >>> restore_report = con.readlines()

:meth:`~Connection.nbackup`
   Performs a physical (NBACKUP) database backup.

   .. note:: The method call will not return until the backup is finished.

:meth:`~Connection.nrestore`
   Performs a restore from a physical (NBACKUP) database backup.

   .. note:: The method call will not return until the restore is finished.

:meth:`~Connection.shutdown`
   Database shutdown. The following constants are defined in :mod:`fdb.services` for convenience:

   For shutdown mode:

   * :data:`~fdb.services.SHUT_SINGLE`
   * :data:`~fdb.services.SHUT_MULTI`
   * :data:`~fdb.services.SHUT_FULL`

   For shutdown method:

   * :data:`~fdb.services.SHUT_FORCE`
   * :data:`~fdb.services.SHUT_DENY_NEW_TRANSACTIONS`
   * :data:`~fdb.services.SHUT_DENY_NEW_ATTACHMENTS`
   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      # Shutdown database to single-user maintenance mode
      >>> con.shutdown('employee', services.SHUT_SINGLE, services.SHUT_FORCE, 0)
      # Go to full shutdown mode, disabling new attachments during 5 seconds
      >>> con.shutdown('employee', services.SHUT_FULL, services.SHUT_DENY_NEW_ATTACHMENTS, 5)

:meth:`~Connection.bring_online`
   Brings a previously shut down database back online. The following constants are defined in :mod:`fdb.services` for convenience:

   For on-line mode:

   * :data:`~fdb.services.SHUT_NORMAL`
   * :data:`~fdb.services.SHUT_SINGLE`
   * :data:`~fdb.services.SHUT_MULTI`

   .. code-block:: python

      # 64-bit Linux Firebird 2.5.1 SuperServer
      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      # Enable multi-user maintenance
      >>> con.bring_online('employee', services.SHUT_MULTI)
      # Enable single-user maintenance
      >>> con.bring_online('employee', services.SHUT_SINGLE)
      # Return to normal state
      >>> con.bring_online('employee')

:meth:`~Connection.activate_shadow`
   Activates database shadow(s).

:meth:`~Connection.sweep`
   Performs a database sweep.

   .. note:: The method call will not return until the sweep is finished.

   .. code-block:: python

      >>> from fdb import services
      >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
      >>> con.sweep('employee')

:meth:`~Connection.repair`
   Database validation and repair.

   .. note:: The method call will not return until the action is finished.
code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') # Just validate >>> con.repair('employee', ignore_checksums=True, read_only_validation=True) # Mend the database >>> con.repair('employee', ignore_checksums=True, mend_database=True) User maintenance ---------------- :meth:`~Connection.get_users` Returns information about specified user or all users as a list of :class:`User` instances. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> users = con.get_users() >>> for user in users: ... print user.name ... print user.first_name, user.middle_name, user.last_name ... print user.user_id, user.group_id SYSDBA Sql Server Administrator 0 0 :meth:`~Connection.add_user` Adds a new user. Requires instance of :class:`User` with **at least** its :attr:`~User.name` and :attr:`~User.password` attributes specified as non-empty values. All other attributes are optional. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> user = services.User('NewUser') >>> user.password = 'secret' >>> user.first_name = 'John' >>> user.last_name = 'Doe' >>> con.add_user(user) :meth:`~Connection.modify_user` Modification of user information. Requires instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. .. note:: Sets :attr:`~User.first_name`, :attr:`~User.middle_name` and :attr:`~User.last_name` to their actual values, and ignores the :attr:`~User.user_id` and :attr:`~User.group_id` attributes regardless of their values. :attr:`~User.password` is set **only** when it has a value. ..
code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> user = services.User('SYSDBA') # Change password >>> user.password = 'Pa$$w0rd' >>> con.modify_user(user) :meth:`~Connection.remove_user` Removes user. Requires User name or instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> con.remove_user('NewUser') :meth:`~Connection.user_exists` Checks for user's existence. Requires User name or instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> con.user_exists('NewUser') False Trace service ------------- :meth:`~Connection.trace_start` Starts a new trace session. Requires trace `configuration` and returns `Session ID`. Trace session output can be retrieved through :meth:`~Connection.readline`, :meth:`~Connection.readlines`, iteration over `Connection` or ignored via call to :meth:`~Connection.wait`. .. note:: Until the session output is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service, including a call to any `trace_` method, will fail with an exception. .. code-block:: python import fdb svc = fdb.services.connect(password='masterkey') # Because trace session blocks the connection, we need another one to stop trace session!
svc_aux = fdb.services.connect(password='masterkey') trace_config = """ enabled true log_statement_finish true print_plan true include_filter %%SELECT%% exclude_filter %%RDB$%% time_threshold 0 max_sql_length 2048 """ trace_id = svc.trace_start(trace_config,'test_trace_1') trace_log = [] # Get first 10 lines of trace output for i in range(10): trace_log.append(svc.readline()) # Stop trace session svc_aux.trace_stop(trace_id) :meth:`~Connection.trace_stop` Stops trace session specified by ID. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> con.trace_stop(15) Trace session ID 15 stopped :meth:`~Connection.trace_suspend` Suspends trace session specified by ID. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> con.trace_suspend(15) Trace session ID 15 paused :meth:`~Connection.trace_resume` Resumes trace session specified by ID. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> con.trace_resume(15) Trace session ID 15 resumed :meth:`~Connection.trace_list` Returns information about existing trace sessions as dictionary mapping `SESSION_ID -> SESSION_PARAMS`. Session parameters are stored in another dictionary with the following keys: :name: (string) (optional) Session name if specified. :date: (datetime.datetime) Session start date and time. :user: (string) Trace user name. :flags: (list of strings) Session flags. .. code-block:: python >>> from fdb import services >>> con = services.connect(host='localhost', user='sysdba', password='masterkey') >>> con.trace_list() {53: {'date': datetime.datetime(2012, 10, 5, 10, 45, 4), 'flags': ['active', ' admin', ' trace'], 'user': 'SYSDBA'}} Text output from Services ------------------------- Some services like `backup` or `trace` may return a significant amount of text.
Rather than returning the whole text as a single string value from the methods that provide access to these services, FDB isolates the transfer process into separate methods: * :meth:`~Connection.readline` - Similar to :meth:`file.readline`, returns the next line of output from the Service. * :meth:`~Connection.readlines` - Like :meth:`file.readlines`, returns list of output lines. * Iteration over `Connection` object, because :class:`Connection` has built-in support for :ref:`iterator protocol `. * Using `callback` method provided by developer. Each `Connection` method that returns its result asynchronously accepts an optional parameter `callback`, which must be a function that accepts one string parameter. This method is then called with each output line coming from the service. * :meth:`~Connection.wait` - Waits for the Service to finish, ignoring the rest of the output it may produce. .. warning:: Until the output is fully fetched from the service, any attempt to start another asynchronous service will fail with an exception! This constraint is imposed by the Firebird Service API. You may check the status of asynchronous Services using the :attr:`Connection.fetching` attribute or the :meth:`Connection.isrunning` method. When you are not interested in the output produced by a Service, call :meth:`~Connection.wait` to wait for the service to complete. **Examples:** ..
code-block:: python import fdb svc = fdb.services.connect(password='masterkey') print "Fetch materialized" print "==================" print "Start backup" svc.backup('employee', 'employee.fbk') print "svc.fetching is", svc.fetching print "svc.running is", svc.isrunning() report = svc.readlines() print "%i lines returned" % len(report) print "First 5 lines from output:" for i in xrange(5): print i,report[i] print "svc.fetching is", svc.fetching print "svc.running is", svc.isrunning() print print "Iterate over result" print "===================" svc.backup('employee', 'employee.fbk') output = [] for line in svc: output.append(line) print "%i lines returned" % len(output) print "Last 5 lines from output:" for line in output[-5:]: print line print print "Callback" print "========" output = [] # Callback function def fetchline(line): output.append(line) svc.backup('employee', 'employee.fbk', callback=fetchline) print "%i lines returned" % len(output) print "Last 5 lines from output:" for line in output[-5:]: print line Output:: Fetch materialized ================== Start backup svc.fetching is True svc.running is True 558 lines returned First 5 lines from output: 0 gbak:readied database employee for backup 1 gbak:creating file employee.fbk 2 gbak:starting transaction 3 gbak:database employee has a page size of 4096 bytes. 4 gbak:writing domains svc.fetching is False svc.running is False Iterate over result =================== 558 lines returned Last 5 lines from output: gbak:writing referential constraints gbak:writing check constraints gbak:writing SQL roles gbak:writing names mapping gbak:closing file, committing, and finishing. 74752 bytes written Callback ======== 558 lines returned Last 5 lines from output: gbak:writing referential constraints gbak:writing check constraints gbak:writing SQL roles gbak:writing names mapping gbak:closing file, committing, and finishing. 74752 bytes written .. index:: pair: Database; schema pair: Database schema; working with .. 
_working-with-schema: Working with database schema ============================ Description of database objects like tables, views, stored procedures, triggers or UDF functions that represent database schema is stored in a set of system tables present in every database. Firebird users can query these tables to get information about these objects and their relations. But querying system tables is inconvenient, as it requires good knowledge of how this information is structured and requires a significant amount of Python code. Changes in system tables between Firebird versions further add to this complexity. Hence FDB provides a set of classes (isolated in separate module :mod:`fdb.schema`) that transform information stored in system tables into a set of Python objects that surface the vital information in a meaningful way, and additionally provide a set of methods for most commonly used operations or checks. Database schema can be accessed in three different ways, each suitable for a different use case: - By direct creation of :class:`fdb.schema.Schema` instances that are then :meth:`bound ` to a particular :class:`Connection` instance. This method is best if you want to work with schema only occasionally, or you want to keep connections as lightweight as possible. - Accessing the :attr:`fdb.Connection.schema` property. This method is more convenient than the previous one, and represents a compromise between convenience and resource consumption because the :class:`~fdb.schema.Schema` instance is not created until first reference and is managed by the connection itself. Individual metadata objects are not loaded from system tables until first reference. - Using :class:`~fdb.ConnectionWithSchema` instead of :class:`Connection` by specifying the `connection_class=ConnectionWithSchema` parameter to :func:`~fdb.connect` or :func:`~fdb.create_database`.
This `Connection` descendant loads all database metadata immediately and directly provides all attributes and methods of the :class:`~fdb.schema.Schema` class. This method is most suitable when you want to work with database metadata extensively. **Examples:** 1. Using Schema instance: .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') >>> schema = fdb.schema.Schema() >>> schema.bind(con) >>> [t.name for t in schema.tables] ['COUNTRY', 'JOB', 'DEPARTMENT', 'EMPLOYEE', 'SALES', 'PROJECT', 'EMPLOYEE_PROJECT', 'PROJ_DEPT_BUDGET', 'SALARY_HISTORY', 'CUSTOMER'] 2. Using Connection.schema: .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') >>> [t.name for t in con.schema.tables] ['COUNTRY', 'JOB', 'DEPARTMENT', 'EMPLOYEE', 'SALES', 'PROJECT', 'EMPLOYEE_PROJECT', 'PROJ_DEPT_BUDGET', 'SALARY_HISTORY', 'CUSTOMER'] 3. Using ConnectionWithSchema: .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey', connection_class=fdb.ConnectionWithSchema) >>> [t.name for t in con.tables] ['COUNTRY', 'JOB', 'DEPARTMENT', 'EMPLOYEE', 'SALES', 'PROJECT', 'EMPLOYEE_PROJECT', 'PROJ_DEPT_BUDGET', 'SALARY_HISTORY', 'CUSTOMER'] .. note:: Individual metadata information (i.e. information about :attr:`domains `, :attr:`tables ` etc.) is loaded on first access and cached for further reference until it's :meth:`cleared ` or :meth:`reload ` is requested. Because loaded information is cached, it's good to :meth:`clear ` it when it's no longer needed, to conserve memory. Available information --------------------- The :class:`~fdb.schema.Schema` provides information about: - **Database:** :attr:`Owner name `, :attr:`default character set `, :attr:`description `, :attr:`security class ` and whether the database consists of :meth:`single or multiple files `.
- **Facilities:** Available :attr:`character sets `, :attr:`collations `, database :attr:`files ` and :attr:`shadows `. - **User database objects:** :attr:`exceptions `, :attr:`generators `, :attr:`domains `, :attr:`tables ` and their :attr:`constraints `, :attr:`indices `, :attr:`views `, :attr:`triggers `, :attr:`procedures `, user :attr:`roles ` and :attr:`user defined functions `. - **System database objects:** :attr:`generators `, :attr:`domains `, :attr:`tables ` and their constraints, :attr:`indices `, :attr:`views `, :attr:`triggers `, :attr:`procedures ` and :attr:`functions `. - **Relations between objects:** Through direct links between metadata objects and :attr:`dependencies `. - **Privileges:** :attr:`All ` privileges, or privileges granted for specific :attr:`table `, :attr:`table column `, :attr:`view `, :attr:`view column `, :attr:`procedure ` or :attr:`role `. It's also possible to get all privileges :meth:`granted to ` specific user, role, procedure, trigger or view. Metadata objects ---------------- Schema information is presented as Python objects of various classes with common parent class :class:`~fdb.schema.BaseSchemaItem` (except :class:`~fdb.schema.Schema` itself), that defines several common attributes and methods: Attributes: - :attr:`~fdb.schema.BaseSchemaItem.name`: Name of database object or None if object doesn't have a name. - :attr:`~fdb.schema.BaseSchemaItem.description`: Description (documentation text) for object or None if object doesn't have a description. - :attr:`~fdb.schema.BaseSchemaItem.actions`: List of supported SQL operations on schema object instance. Methods: - :meth:`~fdb.schema.BaseSchemaItem.accept_visitor`: :ref:`Visitor Pattern support `. - :meth:`~fdb.schema.BaseSchemaItem.issystemobject`: Returns True if this database object is system object. - :meth:`~fdb.schema.BaseSchemaItem.get_quoted_name`: Returns quoted (if necessary) name of database object. 
- :meth:`~fdb.schema.BaseSchemaItem.get_dependents`: Returns list of all database objects that :ref:`depend ` on this one. - :meth:`~fdb.schema.BaseSchemaItem.get_dependencies`: Returns list of database objects that this object :ref:`depends ` on. - :meth:`~fdb.schema.BaseSchemaItem.get_sql_for`: Returns :ref:`SQL command string ` for specified action on database object. The following schema classes are defined: :class:`~fdb.schema.Collation`, :class:`~fdb.schema.CharacterSet`, :class:`~fdb.schema.DatabaseException`, :class:`~fdb.schema.Sequence` (Generator), :class:`~fdb.schema.Domain`, :class:`~fdb.schema.Index`, :class:`~fdb.schema.Table`, :class:`~fdb.schema.TableColumn`, :class:`~fdb.schema.Constraint`, :class:`~fdb.schema.View`, :class:`~fdb.schema.ViewColumn`, :class:`~fdb.schema.Trigger`, :class:`~fdb.schema.Procedure`, :class:`~fdb.schema.ProcedureParameter`, :class:`~fdb.schema.Function`, :class:`~fdb.schema.FunctionArgument`, :class:`~fdb.schema.Role`, :class:`~fdb.schema.Dependency`, :class:`~fdb.schema.DatabaseFile` and :class:`~fdb.schema.Shadow`. .. _visitor-pattern-support: Visitor Pattern support ----------------------- `Visitor Pattern`_ is particularly useful when you need to process various objects that need special handling in a common algorithm (for example to display information about them or to generate SQL commands to create them in a new database). Each metadata object (including :class:`~fdb.schema.Schema`) supports the :meth:`~fdb.schema.BaseSchemaItem.accept_visitor` method that calls a `class-specific` method on the object passed to it as parameter. This method name always starts with 'visit'. For example, a :class:`~fdb.schema.Table` object calls `visitTable`. This 'visit*' method has one parameter: the object instance that calls it. FDB provides a skeleton implementation for a :class:`schema Visitor object ` that implements all `visit*` methods called by schema classes as calls to the method `default_action`, which does nothing.
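The double dispatch behind `accept_visitor` can be sketched with plain Python. The classes below are simplified stand-ins that only mimic the shape of the fdb.schema classes; they are not the driver's actual implementation:

```python
# Simplified stand-ins illustrating the 'accept_visitor' double
# dispatch; NOT the real fdb.schema classes.
class SchemaVisitor(object):
    def default_action(self, obj):
        pass  # skeleton visitor: ignore objects by default
    def visitTable(self, table):
        self.default_action(table)

class Table(object):
    def __init__(self, name):
        self.name = name
    def accept_visitor(self, visitor):
        # dispatch to the class-specific 'visit*' method
        visitor.visitTable(self)

class NameCollector(SchemaVisitor):
    def __init__(self):
        self.names = []
    def visitTable(self, table):  # override only what we care about
        self.names.append(table.name)

collector = NameCollector()
Table('EMPLOYEE').accept_visitor(collector)
print(collector.names)
```

A concrete visitor therefore only overrides the `visit*` methods it cares about; everything else falls through to `default_action`.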
The following code uses the visitor pattern to print all DROP SQL statements necessary to drop a database object, taking its dependencies into account, i.e. it may be necessary to first drop other, dependent objects before the object itself can be dropped. .. code-block:: python import fdb # Object dropper class ObjectDropper(fdb.schema.SchemaVisitor): def __init__(self): self.seen = [] def drop(self,obj): self.seen = [] obj.accept_visitor(self) def default_action(self,obj): if not obj.issystemobject() and 'drop' in obj.actions: for dependency in obj.get_dependents(): d = dependency.dependent if d and d not in self.seen: d.accept_visitor(self) if obj not in self.seen: print obj.get_sql_for('drop') self.seen.append(obj) def visitSchema(self,schema): pass def visitMetadataItem(self,item): pass def visitTableColumn(self,column): column.table.accept_visitor(self) def visitViewColumn(self,column): column.view.accept_visitor(self) def visitDependency(self,dependency): pass def visitConstraint(self,constraint): pass def visitProcedureParameter(self,param): param.procedure.accept_visitor(self) def visitFunctionArgument(self,arg): arg.function.accept_visitor(self) def visitDatabaseFile(self,dbfile): pass def visitShadow(self,shadow): pass # Sample use: con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') table = con.schema.get_table('JOB') dropper = ObjectDropper() dropper.drop(table) This will produce the following result:: DROP PROCEDURE ALL_LANGS DROP PROCEDURE SHOW_LANGS DROP TABLE JOB .. _object-dependencies: Object dependencies ------------------- Close relations between metadata objects like `ownership` (Table vs. TableColumn, Index or Trigger) or `cooperation` (like FK Index vs. partner UQ/PK Index) are defined directly using properties of particular schema objects. Besides close relations, Firebird also uses `dependencies` that describe functional dependency between otherwise independent metadata objects.
For example, a stored procedure can call other stored procedures, define its parameters using domains, or work with tables or views. Removing or changing these objects may cause the procedure to stop working correctly, so Firebird tracks these dependencies. FDB surfaces these dependencies as :class:`~fdb.schema.Dependency` schema objects, and all schema objects have :meth:`~fdb.schema.BaseSchemaItem.get_dependents` and :meth:`~fdb.schema.BaseSchemaItem.get_dependencies` methods to get list of :class:`~fdb.schema.Dependency` instances that describe these dependencies. The :class:`~fdb.schema.Dependency` object provides names and types of dependent/depended-on database objects, and access to their respective schema Python objects as well. .. _sql-oprations: SQL operations -------------- FDB doesn't allow you to change database metadata directly using schema objects. Instead, it supports generation of DDL SQL commands from schema objects using the :meth:`~fdb.schema.BaseSchemaItem.get_sql_for` method present on all schema objects except Schema itself. DDL commands that can be generated depend on object type and context (for example, it's not possible to generate DDL commands for system database objects), and the list of DDL commands that can be generated for a particular schema object can be obtained from its :attr:`~fdb.schema.BaseSchemaItem.actions` attribute. Possible `actions` are: create, recreate, create_or_alter, alter, drop, activate, deactivate, recompute and declare. Some actions require/allow additional parameters. .. list-table:: SQL actions :widths: 20 15 15 5 45 :header-rows: 1 * - Schema class - Action - Parameter - Required - Description * - :class:`~fdb.schema.Collation` - create - - - * - - drop - - - * - :class:`~fdb.schema.CharacterSet` - alter - collation - Yes - :class:`~fdb.schema.Collation` instance or collation name * - :class:`~fdb.schema.DatabaseException` - create - - - * - - recreate - - - * - - alter - message - Yes - string.
* - - create_or_alter - - - * - - drop - - - * - :class:`~fdb.schema.Sequence` - create - - - * - - alter - value - Yes - integer * - - drop - - - * - :class:`~fdb.schema.Domain` - create - - - * - - alter - - - One from next parameters required * - - - name - No - string * - - - default - No - string definition or None to drop default * - - - check - No - string definition or None to drop check * - - - datatype - No - string SQL datatype definition * - - drop - - - * - :class:`~fdb.schema.Constraint` - create - - - * - - drop - - - * - :class:`~fdb.schema.Index` - create - - - * - - activate - - - * - - deactivate - - - * - - recompute - - - * - - drop - - - * - :class:`~fdb.schema.Table` - create - - - * - - recreate - - - * - - drop - - - * - :class:`~fdb.schema.TableColumn` - alter - - - One from next parameters required * - - - name - No - string * - - - datatype - No - string SQL type definition * - - - position - No - integer * - - - expression - No - string with COMPUTED BY expression * - :class:`~fdb.schema.View` - create - - - * - - recreate - - - * - - alter - columns - No - string or list of strings * - - - query - Yes - string * - - - check - No - True for WITH CHECK OPTION clause * - - create_or_alter - - - * - - drop - - - * - :class:`~fdb.schema.Trigger` - create - - - * - - recreate - - - * - - create_or_alter - - - * - - alter - - - Requires parameters for either header or body definition. 
* - - - fire_on - No - string * - - - active - No - bool * - - - sequence - No - integer * - - - declare - No - string or list of strings * - - - code - No - string or list of strings * - - drop - - - * - :class:`~fdb.schema.Procedure` - create - no_code - No - True to suppress procedure body from output * - - recreate - no_code - No - True to suppress procedure body from output * - - create_or_alter - no_code - No - True to suppress procedure body from output * - - drop - - - * - :class:`~fdb.schema.Role` - create - - - * - - drop - - - * - :class:`~fdb.schema.Function` - declare - - - * - - drop - - - * - :class:`~fdb.schema.DatabaseFile` - create - - - * - :class:`~fdb.schema.Shadow` - create - - - * - - drop - - - * - :class:`~fdb.schema.Privilege` - grant - grantors - No - List of grantor names. Generates GRANTED BY clause if grantor is not in list. * - - revoke - grantors - No - List of grantor names. Generates GRANTED BY clause if grantor is not in list. * - - - grant_option - No - True to get REVOKE of GRANT/ADMIN OPTION only. Raises ProgrammingError if privilege doesn't have such option. **Examples:** .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') >>> t = con.schema.get_table('EMPLOYEE') >>> print t.get_sql_for('create') CREATE TABLE EMPLOYEE ( EMP_NO EMPNO NOT NULL, FIRST_NAME "FIRSTNAME" NOT NULL, LAST_NAME "LASTNAME" NOT NULL, PHONE_EXT VARCHAR(4), HIRE_DATE TIMESTAMP DEFAULT 'NOW' NOT NULL, DEPT_NO DEPTNO NOT NULL, JOB_CODE JOBCODE NOT NULL, JOB_GRADE JOBGRADE NOT NULL, JOB_COUNTRY COUNTRYNAME NOT NULL, SALARY SALARY NOT NULL, FULL_NAME COMPUTED BY (last_name || ', ' || first_name), PRIMARY KEY (EMP_NO) ) >>> for i in t.indices: ... if 'create' in i.actions: ... print i.get_sql_for('create') ... CREATE ASCENDING INDEX NAMEX ON EMPLOYEE (LAST_NAME,FIRST_NAME) >>> for c in [x for x in t.constraints if x.ischeck() or x.isfkey()]: ... print c.get_sql_for('create') ...
ALTER TABLE EMPLOYEE ADD FOREIGN KEY (DEPT_NO) REFERENCES DEPARTMENT (DEPT_NO) ALTER TABLE EMPLOYEE ADD FOREIGN KEY (JOB_CODE,JOB_GRADE,JOB_COUNTRY) REFERENCES JOB (JOB_CODE,JOB_GRADE,JOB_COUNTRY) ALTER TABLE EMPLOYEE ADD CHECK ( salary >= (SELECT min_salary FROM job WHERE job.job_code = employee.job_code AND job.job_grade = employee.job_grade AND job.job_country = employee.job_country) AND salary <= (SELECT max_salary FROM job WHERE job.job_code = employee.job_code AND job.job_grade = employee.job_grade AND job.job_country = employee.job_country)) >>> p = con.schema.get_procedure('GET_EMP_PROJ') >>> print p.get_sql_for('recreate',no_code=True) RECREATE PROCEDURE GET_EMP_PROJ (EMP_NO SMALLINT) RETURNS (PROJ_ID CHAR(5)) AS BEGIN END >>> print p.get_sql_for('create_or_alter') CREATE OR ALTER PROCEDURE GET_EMP_PROJ (EMP_NO SMALLINT) RETURNS (PROJ_ID CHAR(5)) AS BEGIN FOR SELECT proj_id FROM employee_project WHERE emp_no = :emp_no INTO :proj_id DO SUSPEND; END >>> print p.get_sql_for('alter',input=['In1 INTEGER','In2 VARCHAR(5)'], ... output='Out1 INTEGER,\nOut2 VARCHAR(10)',declare=['declare variable i integer = 1;'], ... code=['/* body */','Out1 = i',"Out2 = 'Value'"]) ALTER PROCEDURE GET_EMP_PROJ ( In1 INTEGER, In2 VARCHAR(5) ) RETURNS (Out1 INTEGER, Out2 VARCHAR(10)) AS declare variable i integer = 1; BEGIN /* body */ Out1 = i Out2 = 'Value' END Working with user privileges ============================ User or database object privileges are part of database metadata accessible through FDB :class:`~fdb.schema.Schema` support. Each discrete privilege is represented by a :class:`~fdb.schema.Privilege` instance. You can access either :attr:`all ` privileges, or privileges granted for specific :attr:`table `, :attr:`table column `, :attr:`view `, :attr:`view column `, :attr:`procedure ` or :attr:`role `. It's also possible to get all privileges :meth:`granted to ` specific user, role, procedure, trigger or view.
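The idea of consolidating many discrete privileges into a minimal set of GRANT statements can be illustrated with plain Python. The records and the grouping logic below are hypothetical stand-ins, not the driver's implementation:

```python
from collections import defaultdict

# Hypothetical discrete privileges as (grantee, object, privilege)
# triples; NOT real fdb.schema.Privilege objects.
privileges = [
    ('SYSDBA', 'EMPLOYEE', 'SELECT'),
    ('SYSDBA', 'EMPLOYEE', 'INSERT'),
    ('PUBLIC', 'EMPLOYEE', 'SELECT'),
    ('PUBLIC', 'EMPLOYEE', 'UPDATE'),
]

# Group by (grantee, object) so each pair yields one combined GRANT.
grouped = defaultdict(list)
for grantee, obj, priv in privileges:
    grouped[(grantee, obj)].append(priv)

grants = ['GRANT %s ON %s TO %s' % (', '.join(sorted(privs)), obj, grantee)
          for (grantee, obj), privs in sorted(grouped.items())]
for stmt in grants:
    print(stmt)
```

Four discrete records collapse into two statements, one per grantee, which is the same kind of reduction the driver performs over real privilege metadata.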
:class:`~fdb.schema.Privilege` class supports the :meth:`~fdb.schema.Privilege.get_sql_for` method to generate GRANT and REVOKE SQL statements for a given privilege. If you want to generate grant/revoke statements for a set of privileges (for example, all privileges granted on a specific object or granted to a specific user), it's more convenient to use the function :func:`~fdb.schema.get_grants` that returns the minimal list of SQL commands required for the task. **Examples:** .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') >>> t = con.schema.get_table('EMPLOYEE') >>> for p in t.privileges: ... print p.get_sql_for('grant') ... GRANT SELECT ON EMPLOYEE TO SYSDBA WITH GRANT OPTION GRANT INSERT ON EMPLOYEE TO SYSDBA WITH GRANT OPTION GRANT UPDATE ON EMPLOYEE TO SYSDBA WITH GRANT OPTION GRANT DELETE ON EMPLOYEE TO SYSDBA WITH GRANT OPTION GRANT REFERENCES ON EMPLOYEE TO SYSDBA WITH GRANT OPTION GRANT SELECT ON EMPLOYEE TO PUBLIC WITH GRANT OPTION GRANT INSERT ON EMPLOYEE TO PUBLIC WITH GRANT OPTION GRANT UPDATE ON EMPLOYEE TO PUBLIC WITH GRANT OPTION GRANT DELETE ON EMPLOYEE TO PUBLIC WITH GRANT OPTION GRANT REFERENCES ON EMPLOYEE TO PUBLIC WITH GRANT OPTION >>> for p in fdb.schema.get_grants(t.privileges): ... print p ... GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE ON EMPLOYEE TO PUBLIC WITH GRANT OPTION GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE ON EMPLOYEE TO SYSDBA WITH GRANT OPTION Normally, generated GRANT/REVOKE statements don't contain the grantor's name, because the GRANTED BY clause is supported only since Firebird 2.5. If you want to get GRANT/REVOKE statements including this clause, use the `grantors` parameter of `get_sql_for` and `get_grants`. This parameter is a list of grantor names, and the GRANTED BY clause is generated only for privileges not granted by a user from this list. It's useful to suppress the GRANTED BY clause for SYSDBA or the database owner.
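The grantor-suppression rule described above can be sketched as a small helper; `grant_sql` is a hypothetical function for illustration only, not part of the fdb API:

```python
# Hypothetical helper (not part of fdb) sketching the rule described
# above: emit GRANTED BY only when the privilege's grantor is absent
# from the `grantors` suppression list.
def grant_sql(privilege, subject, grantee, grantor, grantors=None):
    sql = 'GRANT %s ON %s TO %s' % (privilege, subject, grantee)
    if grantors is not None and grantor not in grantors:
        sql += ' GRANTED BY %s' % grantor
    return sql

# SYSDBA is in the suppression list, so no GRANTED BY clause:
print(grant_sql('SELECT', 'EMPLOYEE', 'ALICE', 'SYSDBA', grantors=['SYSDBA']))
# A privilege granted by someone else gets the explicit clause:
print(grant_sql('SELECT', 'EMPLOYEE', 'BOB', 'JOE', grantors=['SYSDBA']))
```

Passing `grantors=None` (the default) models the pre-2.5-compatible behavior where the clause is never emitted.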
Working with monitoring tables ============================== Starting from Firebird 2.1 (ODS 11.1) it's possible to monitor server-side activity happening inside a particular database. The engine offers a set of "virtual" tables (so-called "monitoring tables") that provides the user with a snapshot of the current activity within the given database. FDB provides access to this information through a set of classes (isolated in separate module :mod:`fdb.monitor`) that transform information stored in monitoring tables into a set of Python objects that surface the information in a meaningful way, and additionally provide a set of methods for available operations or checks. Like database schema, monitoring tables can be accessed in two different ways, each suitable for a different use case: - By direct creation of :class:`fdb.monitor.Monitor` instances that are then bound to a particular :class:`~fdb.Connection` instance. This method is best if you want to work with monitoring data only occasionally, or you want to keep connections as lightweight as possible. - Accessing the :attr:`fdb.Connection.monitor` property. This method is more convenient than the previous one, and represents a compromise between convenience and resource consumption because the :class:`~fdb.monitor.Monitor` instance is not created until first reference and is managed by the connection itself. **Examples:** 1. Using Monitor instance: .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') >>> monitor = fdb.monitor.Monitor() >>> monitor.bind(con) >>> monitor.db.name '/opt/firebird/examples/empbuild/employee.fdb' 2. Using Connection.monitor: .. code-block:: python >>> import fdb >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey') >>> con.monitor.db.name '/opt/firebird/examples/empbuild/employee.fdb' Available information --------------------- The :class:`~fdb.monitor.Monitor` provides information about: - :attr:`Database `.
- :attr:`Connections ` to database and :attr:`current ` connection. - :attr:`Transactions `. - Executed :attr:`SQL statements `. - PSQL :attr:`callstack `. - Page and row :attr:`I/O statistics `, including memory usage. - :attr:`Context variables `. Activity snapshot ----------------- The key term of the monitoring feature is an `activity snapshot`. It represents the current state of the database, comprising a variety of information about the database itself, active attachments and users, transactions, prepared and running statements, and more. A snapshot is created the first time any of the monitoring information is accessed in the given `Monitor` instance and it is preserved until :attr:`~fdb.monitor.Monitor.closed`, :meth:`cleared ` or :meth:`refreshed `, so that the accessed information is always consistent. There are two ways to refresh the snapshot: 1. Call the :meth:`Monitor.clear() ` method. A new snapshot will be taken on next access to monitoring information. 2. Call the :meth:`Monitor.refresh() ` method to take the new snapshot immediately. .. important:: In both cases, any instances of information objects your application may hold would be obsolete. Using them may result in an error, or (more likely) provide outdated information. .. note:: Individual monitoring information (i.e. information about :attr:`connections `, :attr:`transactions ` etc.) is loaded from the activity snapshot on first access and cached for further reference until it's :meth:`cleared ` or the snapshot is :meth:`refreshed `. Because loaded information is cached, it's good to :meth:`clear ` it when it's no longer needed, to conserve memory. .. _Firebird Documentation: http://www.firebirdsql.org/en/reference-manuals/ .. _DDL: http://en.wikipedia.org/wiki/Data_Definition_Language .. _DML: http://en.wikipedia.org/wiki/Data_Manipulation_Language .. _Firebird SAVEPOINTs: http://www.firebirdsql.org/refdocs/langrefupd15-savepoint.html ..
_Firebird API documentation: http://www.ibphoenix.com/files/60ApiGuide.zip .. _busy-wait: http://www.catb.org/jargon/html/B/busy-wait.html .. _Visitor Pattern: http://en.wikipedia.org/wiki/Visitor_pattern ============= FDB Reference ============= .. module:: fdb :synopsis: Python Database API 2.0 Compliant driver for Firebird Module globals ============== .. data:: __version__ Current driver version, string. .. data:: apilevel String constant stating the supported DB API level (2.0). .. data:: threadsafety Integer constant stating the level of thread safety the interface supports. Currently `1` = Threads may share the module, but not connections. .. data:: paramstyle String constant stating the type of parameter marker formatting expected by the interface. `'qmark'` = Question mark style, e.g. '...WHERE name=?' .. data:: DESCRIPTION_NAME .. data:: DESCRIPTION_TYPE_CODE .. data:: DESCRIPTION_DISPLAY_SIZE .. data:: DESCRIPTION_INTERNAL_SIZE .. data:: DESCRIPTION_PRECISION .. data:: DESCRIPTION_SCALE .. data:: DESCRIPTION_NULL_OK Helper constants for work with :attr:`Cursor.description` content. .. data:: ISOLATION_LEVEL_READ_COMMITED_LEGACY TPB for R/W WAIT READ COMMITTED NO RECORD VERSION transaction. .. data:: ISOLATION_LEVEL_READ_COMMITED TPB for R/W WAIT READ COMMITTED RECORD VERSION transaction. .. data:: ISOLATION_LEVEL_REPEATABLE_READ .. data:: ISOLATION_LEVEL_SNAPSHOT TPBs for R/W WAIT SNAPSHOT transaction. .. data:: ISOLATION_LEVEL_SERIALIZABLE .. data:: ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY TPBs for R/W WAIT SNAPSHOT TABLE STABILITY transaction. .. data:: ISOLATION_LEVEL_READ_COMMITED_RO TPB for R/O WAIT READ COMMITTED RECORD VERSION transaction. .. data:: MAX_BLOB_SEGMENT_SIZE .. data:: charset_map Python dictionary that maps Firebird character set names (key) to Python character sets (value). .. data:: ODS_FB_20 .. data:: ODS_FB_21 ..
data:: ODS_FB_25 ODS version numbers introduced by Firebird engine version. .. data:: IMPLEMENTATION_NAMES .. data:: PROVIDER_NAMES .. data:: DB_CLASS_NAMES Exceptions ========== .. autoexception:: Warning :show-inheritance: .. autoexception:: Error :show-inheritance: .. autoexception:: InterfaceError :show-inheritance: .. autoexception:: DatabaseError :show-inheritance: .. autoexception:: DataError :show-inheritance: .. autoexception:: OperationalError :show-inheritance: .. autoexception:: IntegrityError :show-inheritance: .. autoexception:: InternalError :show-inheritance: .. autoexception:: ProgrammingError :show-inheritance: .. autoexception:: NotSupportedError :show-inheritance: .. autoexception:: TransactionConflict :show-inheritance: This is the exception inheritance layout:: StandardError |__Warning |__Error |__InterfaceError |__DatabaseError |__DataError |__OperationalError |__IntegrityError |__InternalError |__ProgrammingError |__NotSupportedError Functions ========= .. autofunction:: connect .. autofunction:: create_database .. autofunction:: load_api Classes ======= Connection ---------- .. autoclass:: Connection :member-order: groupwise :members: :inherited-members: .. autoclass:: ConnectionWithSchema :member-order: groupwise :members: :undoc-members: .. note:: For list of methods see :class:`fdb.schema.Schema`. Cursor ------ .. autoclass:: Cursor :member-order: groupwise :members: :inherited-members: :undoc-members: Transaction ----------- .. autoclass:: Transaction :member-order: groupwise :members: :inherited-members: :undoc-members: PreparedStatement ----------------- .. autoclass:: PreparedStatement :member-order: groupwise :members: :inherited-members: :undoc-members: ConnectionGroup --------------- .. autoclass:: ConnectionGroup :member-order: groupwise :members: :inherited-members: :undoc-members: TransactionContext ------------------ .. 
autoclass:: TransactionContext :member-order: groupwise :members: :inherited-members: :undoc-members: EventConduit ------------ .. autoclass:: EventConduit :member-order: groupwise :members: :inherited-members: :undoc-members: BlobReader ---------- .. autoclass:: BlobReader :member-order: groupwise :members: :inherited-members: :undoc-members: TPB --- .. autoclass:: TPB :member-order: groupwise :members: :undoc-members: TableReservation ---------------- .. autoclass:: TableReservation :member-order: groupwise :members: :inherited-members: :undoc-members: Internally used classes exposed to driver users =============================================== .. module:: fdb.fbcore :synopsis: Implementation of Firebird driver RowMapping ---------- .. autoclass:: _RowMapping :member-order: groupwise :members: :inherited-members: :undoc-members: EventBlock ---------- .. autoclass:: EventBlock :member-order: groupwise :members: :inherited-members: :undoc-members: TableAccessStats ---------------- .. autoclass:: _TableAccessStats :member-order: groupwise :members: :inherited-members: :undoc-members: .. _services_api: Services ======== .. module:: fdb.services :synopsis: Submodule for work with Firebird Services Module globals -------------- .. data:: SHUT_LEGACY .. data:: SHUT_NORMAL .. data:: SHUT_MULTI .. data:: SHUT_SINGLE .. data:: SHUT_FULL These constants are to be passed as the `shutdown_mode` parameter to :meth:`Connection.shutdown` and :meth:`Connection.bring_online`. .. data:: SHUT_FORCE .. data:: SHUT_DENY_NEW_TRANSACTIONS .. data:: SHUT_DENY_NEW_ATTACHMENTS These constants are to be passed as the `shutdown_method` parameter to :meth:`Connection.shutdown` .. data:: WRITE_FORCED .. data:: WRITE_BUFFERED These constants are to be passed as the `mode` parameter to :meth:`Connection.setWriteMode` .. data:: ACCESS_READ_WRITE .. data:: ACCESS_READ_ONLY These constants are to be passed as the `mode` parameter to :meth:`Connection.setAccessMode` .. 
data:: CAPABILITY_MULTI_CLIENT .. data:: CAPABILITY_REMOTE_HOP .. data:: CAPABILITY_SERVER_CONFIG .. data:: CAPABILITY_QUOTED_FILENAME .. data:: CAPABILITY_NO_SERVER_SHUTDOWN These constants are return values of :meth:`Connection.get_server_capabilities` .. data:: STATS_TOTAL_TIME .. data:: STATS_TIME_DELTA .. data:: STATS_PAGE_READS .. data:: STATS_PAGE_WRITES These constants are options for :meth:`Connection.backup`/:meth:`Connection.restore` 'stats' parameter. Functions --------- .. autofunction:: connect Connection ---------- .. autoclass:: Connection :member-order: groupwise :members: :inherited-members: :undoc-members: User ---- .. autoclass:: User :member-order: groupwise :members: :inherited-members: :undoc-members: Database schema =============== .. module:: fdb.schema :synopsis: Submodule for work with database metadata (schema) Functions --------- .. autofunction:: get_grants .. autofunction:: isKeyword Schema ------ .. autoclass:: Schema :member-order: groupwise :members: :inherited-members: :undoc-members: BaseSchemaItem -------------- .. autoclass:: BaseSchemaItem :member-order: groupwise :members: :inherited-members: :undoc-members: Collation --------- .. autoclass:: Collation :member-order: groupwise :members: :inherited-members: :undoc-members: CharacterSet ------------ .. autoclass:: CharacterSet :member-order: groupwise :members: :inherited-members: :undoc-members: DatabaseException ----------------- .. autoclass:: DatabaseException :member-order: groupwise :members: :inherited-members: :undoc-members: Sequence -------- .. autoclass:: Sequence :member-order: groupwise :members: :inherited-members: :undoc-members: Index ----- .. autoclass:: Index :member-order: groupwise :members: :inherited-members: :undoc-members: TableColumn ----------- .. autoclass:: TableColumn :member-order: groupwise :members: :inherited-members: :undoc-members: ViewColumn ---------- .. 
autoclass:: ViewColumn :member-order: groupwise :members: :inherited-members: :undoc-members: Domain ------ .. autoclass:: Domain :member-order: groupwise :members: :inherited-members: :undoc-members: Dependency ---------- .. autoclass:: Dependency :member-order: groupwise :members: :inherited-members: :undoc-members: Constraint ---------- .. autoclass:: Constraint :member-order: groupwise :members: :inherited-members: :undoc-members: Table ----- .. autoclass:: Table :member-order: groupwise :members: :inherited-members: :undoc-members: View ---- .. autoclass:: View :member-order: groupwise :members: :inherited-members: :undoc-members: Trigger ------- .. autoclass:: Trigger :member-order: groupwise :members: :inherited-members: :undoc-members: ProcedureParameter ------------------ .. autoclass:: ProcedureParameter :member-order: groupwise :members: :inherited-members: :undoc-members: Procedure --------- .. autoclass:: Procedure :member-order: groupwise :members: :inherited-members: :undoc-members: Role ---- .. autoclass:: Role :member-order: groupwise :members: :inherited-members: :undoc-members: FunctionArgument ---------------- .. autoclass:: FunctionArgument :member-order: groupwise :members: :inherited-members: :undoc-members: Function -------- .. autoclass:: Function :member-order: groupwise :members: :inherited-members: :undoc-members: DatabaseFile ------------ .. autoclass:: DatabaseFile :member-order: groupwise :members: :inherited-members: :undoc-members: Shadow ------ .. autoclass:: Shadow :member-order: groupwise :members: :inherited-members: :undoc-members: Privilege ------------- .. autoclass:: Privilege :member-order: groupwise :members: :inherited-members: :undoc-members: SchemaVisitor ------------- .. autoclass:: SchemaVisitor :member-order: groupwise :members: :inherited-members: :undoc-members: Monitoring information ====================== .. module:: fdb.monitor :synopsis: Submodule for work with database monitoring tables Constants --------- .. 
data:: SHUTDOWN_MODE_ONLINE .. data:: SHUTDOWN_MODE_MULTI .. data:: SHUTDOWN_MODE_SINGLE .. data:: SHUTDOWN_MODE_FULL Shutdown modes for :attr:`DatabaseInfo.shutdown_mode`. .. data:: BACKUP_STATE_NORMAL .. data:: BACKUP_STATE_STALLED .. data:: BACKUP_STATE_MERGE Backup states for :attr:`DatabaseInfo.backup_state`. .. data:: STATE_IDLE .. data:: STATE_ACTIVE States for :attr:`AttachmentInfo.state`, :attr:`TransactionInfo.state`. and :attr:`StatementInfo.state`. .. data:: ISOLATION_CONSISTENCY .. data:: ISOLATION_CONCURRENCY .. data:: ISOLATION_READ_COMMITTED_RV .. data:: ISOLATION_READ_COMMITTED_NO_RV Isolation modes for :attr:`TransactionInfo.isolation_mode`. .. data:: INFINITE_WAIT .. data:: NO_WAIT Special timeout values for :attr:`TransactionInfo.lock_timeout`. .. data:: STAT_DATABASE .. data:: STAT_ATTACHMENT .. data:: STAT_TRANSACTION .. data:: STAT_STATEMENT .. data:: STAT_CALL Group codes for :attr:`IOStatsInfo.group`. Monitor ------- .. autoclass:: Monitor :member-order: groupwise :members: :inherited-members: :undoc-members: BaseInfoItem ------------ .. autoclass:: BaseInfoItem :member-order: groupwise :members: :inherited-members: :undoc-members: DatabaseInfo ------------ .. autoclass:: DatabaseInfo :member-order: groupwise :members: :inherited-members: :undoc-members: AttachmentInfo -------------- .. autoclass:: AttachmentInfo :member-order: groupwise :members: :inherited-members: :undoc-members: TransactionInfo --------------- .. autoclass:: TransactionInfo :member-order: groupwise :members: :inherited-members: :undoc-members: StatementInfo ------------- .. autoclass:: StatementInfo :member-order: groupwise :members: :inherited-members: :undoc-members: CallStackInfo ------------- .. autoclass:: CallStackInfo :member-order: groupwise :members: :inherited-members: :undoc-members: IOStatsInfo ----------- .. autoclass:: IOStatsInfo :member-order: groupwise :members: :inherited-members: :undoc-members: ContextVariableInfo ------------------- .. 
autoclass:: ContextVariableInfo :member-order: groupwise :members: :inherited-members: :undoc-members: Utilities ========= .. module:: fdb.utils :synopsis: Submodule with various classes and functions Functions --------- .. autofunction:: update_meta .. autofunction:: iter_class_properties .. autofunction:: iter_class_variables .. autofunction:: embed_attributes LateBindingProperty ------------------- .. autoclass:: LateBindingProperty :member-order: groupwise :members: Iterator -------- .. autoclass:: Iterator :member-order: groupwise :members: __iter__, next, __next__ :inherited-members: :undoc-members: EmbeddedProperty ---------------- .. autoclass:: EmbeddedProperty :member-order: groupwise :members: EmbeddedAttribute ----------------- .. autoclass:: EmbeddedAttribute :member-order: groupwise :members: .. module:: fdb.blr :synopsis: Python ctypes interface to Firebird client library (BLR) .. module:: fdb.ibase :synopsis: Python ctypes interface to Firebird client library ctypes interface to Firebird client library =========================================== :ref:`ctypes ` interface to `fbclient.so/dll` is defined in submodule :mod:`fdb.ibase` and :mod:`fdb.blr`. .. autoclass:: fbclient_API :member-order: groupwise :members: fdb-1.6.1/sphinx/differences-from-kdb.txt0000644000175000001440000001223013017532521020735 0ustar pcisarusers00000000000000============================ Differences from KInterbasDB ============================ No need for initialization ========================== FDB doesn't support various configurations of automatic type translations like KDB, so it's no longer necessary to initialize the driver before any feature is used. Distributed transactions ======================== Support for :ref:`Distributed Transactions ` works slightly differently than in KDB. FDB uses :class:`~fdb.ConnectionGroup` class like KDB with the same interface, but DT is not bound to main transaction of individual connections managed by group. 
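In code, the group-based arrangement can be sketched as a small helper. The `add()`, `cursor()`, `commit()`, `rollback()` and `disband()` calls are the documented `ConnectionGroup` methods; the helper itself, its SQL arguments, and the connection setup in the usage comment are illustrative only:

```python
def distributed_update(group, con1, con2, sql1, sql2):
    """Run one statement against each of two databases inside a single
    distributed transaction managed by a ConnectionGroup.

    Cursors taking part in the distributed transaction must be obtained
    from the group (group.cursor(con)), not from the connections.
    """
    group.add(con1)
    group.add(con2)
    try:
        group.cursor(con1).execute(sql1)
        group.cursor(con2).execute(sql2)
        group.commit()      # two-phase commit across both databases
    except Exception:
        group.rollback()
        raise
    finally:
        group.disband()

# Typical use (requires two live Firebird connections; DSNs are placeholders):
# group = fdb.ConnectionGroup()
# distributed_update(group, con_a, con_b,
#                    'update T1 set C1 = 1', 'update T2 set C1 = 1')
```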
That means that :class:`~fdb.Cursor` instances obtained from :class:`~fdb.Connection` don't work in DT if connection is part of ConnectionGroup, but work normally in connection context. To get Cursor for specific connection that works in DT, use :meth:`fdb.ConnectionGroup.cursor()` method and pass the connection as parameter. We believe that this arrangement is more logical and flexible than KDB's way. Transaction context for cursor objects depends on how cursor is obtained/created: a) :meth:`fdb.Connection.cursor()` - Works in context of "main" transaction for connection. b) :meth:`fdb.Transaction.cursor()` - Works in context of this transaction. c) :meth:`fdb.ConnectionGroup.cursor()` - Works in context of Distributed Transaction Stream BLOBs ============ Firebird supports two types of BLOBs, stream and segmented. The database stores segmented BLOBs in chunks. Each chunk starts with a two byte length indicator followed by however many bytes of data were passed as a segment. Stream BLOBs are stored as a continuous array of data bytes with no length indicators included. Both types of BLOBs could be accessed by the same API functions, but only stream BLOBs support seek operation (via `isc_seek_blob function`). FDB implements stream BLOBs as file-like objects. On input, you can simply pass any file-like object (only 'read' method is required) as parameter value for BLOB column. For example: .. code-block:: python f = open('filename.ext', 'rb') cur.execute('insert into T (MyBLOB) values (?)',[f]) f.close() On output, stream BLOBs are represented by BlobReader instances on request. To request streamed access to BLOB, you have to use prepared statement for your query and call its `set_stream_blob(column_name)` method. Stream access is not allowed for cursors because it would lead to dangerous situations (BlobReader life-time management) and anomalies (stream access when it's not required). Example: .. 
code-block:: python p = cur.prep('select first 1 MyBLOB from T') p.set_stream_blob('MyBLOB') cur.execute(p) row = cur.fetchone() blob_reader = row[0] print(blob_reader.readlines()) blob_reader.close() Whenever you use stream access to a BLOB, FDB opens or creates the underlying BLOB value as a stream one. On input this means that a true stream BLOB is created in the database, but on output it depends on how the BLOB value was actually created. If the BLOB was created as a stream one, you can use the seek method of BlobReader, but if it was created as a regular BLOB, any call to seek will raise an error:: SQLCODE: -685 - invalid ARRAY or BLOB operation - invalid BLOB type for operation You can read BLOBs created as stream ones as fully materialized, and regular ones in stream mode (without seek) without any problems, and the same applies to input - you can create values in the same column as stream or regular ones interchangeably. From your point of view, stream BLOBs are just a different interface to BLOB values, with a single exception - `BlobReader.seek()` will throw an exception if you call it on a BLOB value that was not created as a stream BLOB. To work with stream BLOBs, you don't need to use `cursor.set_type_trans_in/out` methods like in KDB, i.e. calls to: .. code-block:: python cur.set_type_trans_in ({'BLOB': {'mode': 'stream'}}) cur.set_type_trans_out({'BLOB': {'mode': 'stream'}}) To write (create) a stream BLOB value, simply pass a file-like object as the parameter to your INSERT/UPDATE statements where a BLOB value is expected. To read a BLOB value as a stream, use a prepared statement and register interest in getting a BlobReader instead of a fully materialized value via set_stream_blob() calls for each BLOB value (column name) you want to get this way. :class:`~fdb.BlobReader` supports the iteration protocol, and read(), readline(), readlines(), seek(), tell(), flush() (as a no-op) and close() methods. It does NOT support the chunks() method of KInterbasDB.BlobReader.
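The input and output sides described above can be wrapped in two small helpers. `prep()`, `set_stream_blob()` and the file-like input protocol are taken from the text; the table and column names, and the use of `io.BytesIO` as the file-like source, are illustrative assumptions:

```python
import io

def write_stream_blob(cur, data):
    """Insert raw bytes as a stream BLOB. On input, fdb only requires
    a read() method, so an in-memory buffer works like a real file."""
    cur.execute('insert into T (MyBLOB) values (?)', [io.BytesIO(data)])

def read_stream_blob(cur):
    """Fetch MyBLOB as a BlobReader through a prepared statement and
    return its full content, closing the reader explicitly."""
    p = cur.prep('select first 1 MyBLOB from T')
    p.set_stream_blob('MyBLOB')   # request a BlobReader for this column
    cur.execute(p)
    reader = cur.fetchone()[0]
    try:
        return reader.read()
    finally:
        reader.close()
```

Closing the reader in a `finally` block follows the advice below about not relying on `__del__` for cleanup.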
It is not strictly necessary to close BlobReader instances explicitly. A BlobReader object will be automatically closed by its __del__ method when it goes out of scope, or when its Connection or PreparedStatement closes, whichever comes first. However, it is always a better idea to close resources explicitly (via try...finally) than to rely on artifacts of the Python implementation. You will also encounter errors if the BLOB value was deleted from the database before the BlobReader is closed, so the odds that this may happen are higher if you do not close it explicitly. Services API ============ Support for Firebird Services was :ref:`completely reworked ` in FDB. fdb-1.6.1/sphinx/python-db-api-compliance.txt0000644000175000001440000002102712431143162021556 0ustar pcisarusers00000000000000##################################### Compliance to Python Database API 2.0 ##################################### .. currentmodule:: fdb Full text of Python Database API 2.0 (PEP 249) is available at `http://www.python.org/dev/peps/pep-0249/ `__ Unsupported Optional Features ============================= :meth:`fdb.Cursor.nextset` This method is not implemented because the database engine does not support opening multiple result sets simultaneously with a single cursor. Supported Optional Features =========================== - `Connection.Error`, `Connection.ProgrammingError`, etc. All exception classes defined by the DB API standard are exposed on the Connection objects as attributes (in addition to being available at module scope). - `Cursor.connection` This read-only attribute returns a reference to the Connection object on which the cursor was created. Nominally Supported Optional Features ===================================== :class:`fdb.Cursor` :attr:`~fdb.Cursor.arraysize` As required by the spec, the value of this attribute is observed with respect to the `fetchmany` method.
However, changing the value of this attribute does not make any difference in fetch efficiency because the database engine only supports fetching a single row at a time. :meth:`~fdb.Cursor.setinputsizes` Although this method is present, it does nothing, as allowed by the spec. :meth:`~fdb.Cursor.setoutputsize` Although this method is present, it does nothing, as allowed by the spec. Extensions and Caveats ====================== FDB offers a large feature set beyond the minimal requirements of the Python DB API. This section attempts to document only those features that overlap with the DB API, or are too insignificant to warrant their own subsection elsewhere. :func:`fdb.connect` This function supports the following optional keyword arguments in addition to those required by the spec: :role: For connecting to a database with a specific SQL role. *Example:* .. sourcecode:: python fdb.connect(dsn='host:/path/database.db', user='limited_user', password='pass', role='MORE_POWERFUL_ROLE') :charset: For explicitly specifying the character set of the connection. See Firebird Documentation for a list of available character sets, and `Unicode Fields and FDB` section for information on handling extended character sets with FDB. *Example:* .. sourcecode:: python fdb.connect(dsn='host:/path/database.db', user='sysdba', password='pass', charset='UTF8') :sql_dialect: The SQL dialect is feature for backward compatibility with Interbase® 5.5 or earlier. The default dialect is `3` (the most featureful dialect, default for Firebird). If you want to connect to `legacy` databases, you must explicitly set this argument's value to `1`. Dialect `2` is a transitional dialect that is normally used only during ports from IB < 6 to IB >= 6 or Firebird. See Firebird documentation for more information about SQL Dialects. *Example:* .. 
sourcecode:: python fdb.connect(dsn='host:/path/database.db', user='sysdba', password='pass', sql_dialect=1) :class:`fdb.Connection` :attr:`~fdb.Connection.charset` *(read-only)* The character set of the connection (set via the `charset` parameter of :func:`fdb.connect()`). See Firebird Documentation for a list of available character sets, and `Unicode Fields and FDB` section for information on handling extended character sets with FDB. :attr:`~fdb.Connection.sql_dialect` This integer attribute indicates which SQL dialect the connection is using. You should not change a connection's dialect; instead, discard the connection and establish a new one with the desired dialect. For more information, see the documentation of the `sql_dialect` argument of the `connect` function. :attr:`~fdb.Connection.server_version` *(read-only)* The version string of the database server to which this connection is connected. :meth:`~fdb.Connection.execute_immediate` Executes a statement without caching its prepared form. The statement must *not* be of a type that returns a result set. In most cases (especially cases in which the same statement -- perhaps a parameterized statement -- is executed repeatedly), it is better to create a cursor using the connection's `cursor` method, then execute the statement using one of the cursor's execute methods. Arguments: :sql: String containing the SQL statement to execute. :meth:`~fdb.Connection.commit(retaining=False)` :meth:`~fdb.Connection.rollback(retaining=False, savepoint=None)` The `commit` and `rollback` methods accept an optional boolean parameter `retaining` (default `False`) that indicates whether the transactional context of the transaction being resolved should be recycled. For details, see the Advanced Transaction Control: Retaining Operations section of this document.
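The retaining behaviour is typically used in polling loops; a hedged sketch, where `commit(retaining=True)` is the documented call and the query, table and connection are placeholders:

```python
def poll_once(con, sql, params=()):
    """Run one polling query, then commit with retaining=True so the
    transactional context is recycled instead of torn down between
    polls (see Advanced Transaction Control: Retaining Operations)."""
    cur = con.cursor()
    cur.execute(sql, params)
    rows = cur.fetchall()
    con.commit(retaining=True)   # keep the transaction context alive
    return rows

# Hypothetical polling loop against a live connection:
# while True:
#     for row in poll_once(con, 'select * from QUEUE where STATUS = ?', ('NEW',)):
#         handle(row)
#     time.sleep(1)
```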
The `rollback` method accepts an optional string parameter `savepoint` that causes the transaction to roll back only as far as the designated savepoint, rather than rolling back entirely. For details, see the Advanced Transaction Control: Savepoints section of this document. :class:`fdb.Cursor` :attr:`~fdb.Cursor.description` FDB makes absolutely no guarantees about `description` except those required by the Python Database API Specification 2.0 (that is, `description` is either `None` or a sequence of 7-element sequences). Therefore, client programmers should *not* rely on `description` being an instance of a particular class or type. FDB provides several named positional constants to be used as indices into a given element of `description` . The contents of all `description` elements are defined by the DB API spec; these constants are provided merely for convenience. .. sourcecode:: python DESCRIPTION_NAME DESCRIPTION_TYPE_CODE DESCRIPTION_DISPLAY_SIZE DESCRIPTION_INTERNAL_SIZE DESCRIPTION_PRECISION DESCRIPTION_SCALE DESCRIPTION_NULL_OK Here is an example of accessing the *name* of the first field in the `description` of cursor `cur`: .. sourcecode:: python nameOfFirstField = cur.description[0][fdb.DESCRIPTION_NAME] For more information, see the documentation of Cursor.description in the `DB API Specification `__. :attr:`~fdb.Cursor.rowcount` Although FDB's `Cursor`s implement this attribute, the database engine's own support for the determination of "rows affected"/"rows selected" is quirky. The database engine only supports the determination of rowcount for `INSERT`, `UPDATE`, `DELETE`, and `SELECT` statements. When stored procedures become involved, row count figures are usually not available to the client. Determining rowcount for `SELECT` statements is problematic: the rowcount is reported as zero until at least one row has been fetched from the result set, and the rowcount is misreported if the result set is larger than 1302 rows. 
The server apparently marshals result sets internally in batches of 1302, and will misreport the rowcount for result sets larger than 1302 rows until the 1303rd row is fetched, result sets larger than 2604 rows until the 2605th row is fetched, and so on, in increments of 1302. As required by the Python DB API Spec, the rowcount attribute "is -1 in case no executeXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface". :meth:`~fdb.Cursor.fetchone` :meth:`~fdb.Cursor.fetchmany` :meth:`~fdb.Cursor.fetchall` FDB makes absolutely no guarantees about the return value of the `fetchone` / `fetchmany` / `fetchall` methods except that it is a sequence indexed by field position. FDB makes absolutely no guarantees about the return value of the `fetchonemap` / `fetchmanymap` / `fetchallmap` methods (documented below) except that it is a mapping of field name to field value. Therefore, client programmers should *not* rely on the return value being an instance of a particular class or type. fdb-1.6.1/sphinx/license.txt0000644000175000001440000000003512431143162016401 0ustar pcisarusers00000000000000.. include:: ../LICENSE.TXT fdb-1.6.1/sphinx/Makefile0000644000175000001440000000446712431143162015673 0ustar pcisarusers00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d .build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
.PHONY: help clean html web pickle htmlhelp latex changes linkcheck help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview over all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" clean: -rm -rf .build/* html: mkdir -p .build/html .build/doctrees $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) ../docs @echo @echo "Build finished. The HTML pages are in ../docs." pickle: mkdir -p .build/pickle .build/doctrees $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) .build/pickle @echo @echo "Build finished; now you can process the pickle files." web: pickle json: mkdir -p .build/json .build/doctrees $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) .build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: mkdir -p .build/htmlhelp .build/doctrees $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) .build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in .build/htmlhelp." latex: mkdir -p .build/latex .build/doctrees $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) .build/latex @echo @echo "Build finished; the LaTeX files are in .build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: mkdir -p .build/changes .build/doctrees $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) .build/changes @echo @echo "The overview file is in .build/changes." linkcheck: mkdir -p .build/linkcheck .build/doctrees $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) .build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in .build/linkcheck/output.txt." 
fdb-1.6.1/sphinx/changelog.txt0000644000175000001440000002770113017040446016721 0ustar pcisarusers00000000000000######### Changelog ######### * `Version 1.6.1`_ (30.11.2016) * `Version 1.6`_ (30.3.2016) * `Version 1.5.1`_ (22.2.2016) * `Version 1.5`_ (7.1.2016) * `Version 1.4.11`_ (12.11.2015) * `Version 1.4.10`_ (31.8.2015) * `Version 1.4.9`_ (30.6.2015) * `Version 1.4.8`_ (23.6.2015) * `Version 1.4.7`_ (4.6.2015) * `Version 1.4.6`_ (29.5.2015) * `Version 1.4.5`_ (31.3.2015) * `Version 1.4.4`_ (27.2.2015) * `Version 1.4.3`_ (14.11.2014) * `Version 1.4.2`_ (13.11.2014) * `Version 1.4.1`_ (25.6.2014) * `Version 1.4`_ (24.6.2013) * `Version 1.3`_ (7.6.2013) * `Version 1.2`_ (31.5.2013) * `Version 1.1.1`_ (14.5.2013) * `Version 1.1`_ (9.4.2013) * `Version 1.0`_ (7.3.2013) * `Version 0.9.9`_ (30.11.2012) * `Version 0.9.1`_ (8. 9. 2012) * `Version 0.9`_ (5. 9. 2012) * `Version 0.8.5`_ (28. 6. 2012) * `Version 0.8`_ (31. 5. 2012) * `Version 0.7.2`_ (27. 3. 2012) * `Version 0.7.1`_ (31. 1. 2012) * `Version 0.7`_ (21. 12. 2011) Version 1.6.1 ============= - (PYFB-68) - Add support for isc_spb_sts_table option - (PYFB-67) - Cursor fails after use with executemany(). ReferenceError: weakly-referenced object no longer exists Version 1.6 =========== - New: Extended support for database and transaction info (new attributes and functions on :class:`~fdb.Connection` and :class:`~fdb.Transaction`, fixes and improvements to :meth:`~fdb.Connection.db_info` and :meth:`~fdb.Connection.database_info`). - Fix: Missing character sets for automatic translations. - (PYFB-64) - cursor.description throws ReferenceError after executemany INSERT Version 1.5.1 ============= - New :func:`~fdb.connect()` parameters: no_gc, no_db_triggers and no_linger. - Direct support for *with* statement (PEP8) in :class:`~fdb.Connection` class. 
Version 1.5 =========== - Initial support for Firebird 3.0 - BOOLEAN datatype - IDENTITY datatype (in schema) - Database linger - Preserve SHADOW on DROP - DDL triggers - New and extended system tables - New and extended monitoring tables - GBAK statistics (service) - On-line validation (service) - (PYFB-60) Cursor: executemany(operation, seq_of_parameters) does PREPARE of for each parameter from Version 1.4.11 ============== - (PYFB-58) Severe performance loss and minor memory leak Version 1.4.10 ============== - (PYFB-54) Windows 7x64 and FB2.5.2x32 Python2.7: Error in Registry Path. FDB driver does not find the library fbclient.dll - (PYFB-55) get_sql_for incorrect generate sql query for Views - (PYFB-56) schema.reload typing mistake for views - (PYFB-57) role.privileges does not return correct privilege list Version 1.4.9 ============= - (PYFB-51) .get_sql_for('create') returns invalid output parameters - (PYFB-52) isc_info* types which are _DATABASE_INFO_CODES_WITH_COUNT_RESULTS raises TypeError: 'float' object cannot be interpreted as an integer Version 1.4.8 ============= - Enhancement to automatic client library location detection on POSIX. Now it also looks at LD_LIBRARY_PATH dir if specified. Version 1.4.7 ============= - Forgotten debug printout removed. Annoying for Python 2.x users, fatal for 3.x users. Version 1.4.6 ============= Bugs Fixed ---------- - (PYFB-50) Exception ReferenceError: 'weakly-referenced object no longer exists' in PreparedStatement and Cursor Version 1.4.5 ============= Bugs Fixed ---------- - (PYFB-49) Memory and DB resource leak due to circular references. Version 1.4.4 ============= Improvements ------------ - (PYFB-47) Firebird client library path added as optional parameter to :func:`fdb.connect` and :func:`fdb.create_database`. Bugs Fixed ---------- - Additional fix related to PYFB-43 - Additional correction for unregistered problem with circular ref. between PS and Cursor when explicit PS is executed.
Version 1.4.3 ============= Bugs Fixed ---------- - Previous fix for PYFB-43 was incomplete, corrected. Version 1.4.2 ============= Improvements ------------ - In relation to PYFB-43 I had to make a **backward incompatible change** to event processing API. Starting from this version `EventConduit` does not automatically starts collection of events upon creation, but it's now necessary to call :meth:`~fdb.EventConduit.begin` method. To mitigate the inconvenience, EventConduit now supports context manager protocol that ensures calls to begin() and close() via `with` statement. - In relation to PYFB-39 I have decided to drop support for implicitly cached and reused prepared statements. I never liked this feature as I think it's a sneaky method how to put some performance to badly written applications that in worst case may lead to significant resource consumption on server side when developers are not only lazy but also stupid. It was implemented for the sake of compatibility with KInterbasDB. **This change has no impact on API, but may affect performance of your applications.** Bugs Fixed ---------- - PYFB-44 - Inserting a datetime.date into a TIMESTAMP column does not work - PYFB-42 - Python 3.4 and FDB - backup throws an exception - Unregistered - Fixes in monitor.TransactionInfo Version 1.4.1 ============= Improvements ------------ - PYFB-40 - fbclient.dll is not found if not in path. Aside from registry lookup, client library isn't loaded until first call to :func:`fdb.connect`, :func:`fdb.create_database` or :func:`fdb.load_api` (which now supports optional specification of Firebird Client Library to load). - Adjustments for Firebird 3.0 (Alpha1) - Properties :attr:`~fdb.services.Connection.version` and :attr:`~fdb.services.Connection.engine_version` added to :class:`fdb.services.Connection` Bugs Fixed ---------- - Unregistered - isolation_level parameter for fdb.connection has no effect. 
- Unregistered - Information gathered from monitoring tables is not properly
  dropped upon refresh request.

Version 1.4
===========

New Features
------------

- :mod:`fdb.schema` submodule extended with support for user privileges.

Improvements
------------

- :meth:`fdb.services.User.load_information` method to load information about
  a user from the server.
- :mod:`fdb.ibase` content cleanup and additions.
- :mod:`fdb.blr` submodule with BLR definitions.

Bugs Fixed
----------

- PYFB-37 - Unicode strings incorrectly not allowed for insertion into BLOB
  SubType 1.

Version 1.3
===========

New Features
------------

- :mod:`fdb.monitor` submodule for access to / work with monitoring tables.
- New :attr:`fdb.Connection.monitor` property for access to monitoring tables.

Improvements
------------

- :attr:`~fdb.schema.Schema.closed` property and
  :meth:`~fdb.schema.Schema.clear` method for Schema.
- Unit tests reworked.

Bugs Fixed
----------

- Unregistered: Bug in :meth:`fdb.schema.Schema.close` and
  :meth:`fdb.schema.Schema.bind`.

Version 1.2
===========

New Features
------------

- :mod:`fdb.schema` submodule for access to / work with database metadata.
- :mod:`fdb.utils` submodule with various helper classes and functions.
- New :attr:`fdb.Connection.schema` property for access to database schema.
- New :class:`~fdb.ConnectionWithSchema` connection class that provides more
  direct access to database schema than `Connection`.
- New :attr:`fdb.Connection.firebird_version`, :attr:`fdb.Connection.version`
  and :attr:`fdb.Connection.engine_version` properties.
- New `Connection.ods` read-only property that returns the ODS version number
  of the connected database. There are also new module-level constants
  `ODS_FB_20`, `ODS_FB_21` and `ODS_FB_25`.
- New :attr:`fdb.Connection.query_transaction` property. This is a ReadOnly
  ReadCommitted transaction that could be active indefinitely without blocking
  garbage collection. It’s used internally to query metadata, but it’s
  generally useful.
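The `Connection.schema` property introduced above can be used, for example, to enumerate user tables. A minimal hedged sketch: it assumes `con` is an already open `fdb.Connection` and that `Schema.tables` yields objects exposing a `name` attribute (an assumption about the schema API, not stated in this changelog).

```python
def list_user_tables(con):
    """Return the names of user tables via the Connection.schema property.

    Sketch only: `con` is a caller-supplied open fdb.Connection; the
    exact shape of Schema.tables items is assumed here.
    """
    return [table.name for table in con.schema.tables]
```

A `ConnectionWithSchema` instance would work the same way, since it merges the schema interface directly into the connection object.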
Improvements
------------

- Optional PEP 249 (Python DB API 2.0) Extensions

  - `Connection.Error`, `Connection.ProgrammingError`, etc.
    All exception classes defined by the DB API standard are exposed on the
    Connection objects as attributes (in addition to being available at
    module scope).
  - `Cursor.connection`
    This read-only attribute returns a reference to the Connection object on
    which the cursor was created.
  - `Cursor.transaction` read-only attribute returns a reference to the
    Transaction object on which the cursor was created.

- Optimized weakref management, especially for `PreparedStatement`.
- `create_database` now supports two methods for database creation. You can
  specify a CREATE DATABASE statement (as before) or provide a set of named
  database parameters (the SQL statement is created automatically from them).
- Functions `connect` and `create_database` now take an optional keyword
  parameter `connection_class` to obtain instances of a different class
  instead of `Connection`.
- Support for legacy (pre-2.5) shutdown mode with mode
  `fdb.services.SHUT_LEGACY`.
- :meth:`fdb.Cursor.executemany` returns `self`, so it could be used directly
  as an iterator.
- Documentation improvements.

Bugs Fixed
----------

- Unregistered: `buffers` parameter of `fdb.connect` doesn't support values
  greater than 255.
- Unregistered: Lowercase character set name passed to `fdb.connect` may
  result in wrong function of automatic data conversions and other failures
  (exceptions raised).

Version 1.1.1
=============

Bugs Fixed
----------

- PYFB-35 - Call to fetch after a sql statement without a result should raise
  exception
- PYFB-34 - Server resources not released on PreparedStatement destruction

Version 1.1
===========

New Features
------------

- :ref:`Context Manager ` for transactions.
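The transaction context manager can be sketched as below. Hedged assumptions: `fdb.TransactionContext` wraps `con.main_transaction`, commits when the block finishes normally, and rolls back if it raises; `con`, `sql` and `params` are caller-supplied placeholders, and a live connection is required to run it.

```python
def insert_row(con, sql, params):
    """Run one statement inside a transaction context manager.

    Sketch only: assumes `con` is an open fdb.Connection and that
    TransactionContext commits on success / rolls back on exception.
    """
    from fdb import TransactionContext  # imported here; no server needed to define

    with TransactionContext(con.main_transaction) as tr:
        cur = tr.cursor()
        cur.execute(sql, params)
```

The point of the design is that commit/rollback handling moves out of the application code and into the `with` block's exit logic.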
Bugs Fixed
----------

- PYFB-30 - BLOBs are truncated at first zero byte

Version 1.0
===========

Improvements
------------

- Removed dependency on fbclient library to be present at import time (PYFB-24)

Bugs Fixed
----------

- PYFB-25 - Truncate long text from VARCHAR(5000)

Version 0.9.9
=============

New Features
------------

- Firebird ARRAY support.

Other changes
-------------

- Cursor.execute() returns `self`, so it could be used as an iterator.
- Reading output from Services now uses a more efficient method to get data
  from the server.

Bugs Fixed
----------

- Fix: precision_cache in Connection works as intended.

Version 0.9.1
=============

Just bugfixes to make FDB work on P3K again.

Version 0.9
===========

New Features
------------

- Documentation; both in-source (in Sphinx autodoc format) and Sphinx (html)
- Services API completely reworked

Other changes
-------------

- Unregistered bugs fixed.
- Various optimizations and cleanup
- Object reference graph optimizations
- Many new tests in test suite

Version 0.8.5
=============

New Features
------------

- Support for Firebird stream BLOBs (see ReleaseNotes for details)
- Documentation (stub, from KInterbasDB 3.3.0)

Bugs Fixed
----------

- Fix for PYFB-17 and PYFB-18 (see our JIRA tracker for details)
- Fixes for automatic unicode conversions + refactoring
- Some optimizations

Version 0.8
===========

New Features
------------

- Support for Firebird Event Notifications - `JIRA entry `

Bugs Fixed
----------

- database_info (isc_info_firebird_version) fails on amd64 Linux -
  `JIRA entry `
- more than 2 consecutive cursor open, execute and iter fail -
  `JIRA entry `

Version 0.7.2
=============

New Features
------------

- Python 3 Support (thanks to Philippe Makowski)
- Support for Distributed Transactions

And as always, some (unregistered) bugs fixed.

Version 0.7.1
=============

Bug fixes.

Version 0.7
===========

Initial release.
Almost feature-complete (ready for 95% of users), but it could still be buggy
(it’s beta!), and the code wasn’t optimized for size and speed. In all other
ways it’s ready for wide testing.

What’s missing
--------------

- Distributed transactions
- ARRAY support
- EVENTs support
- Stream BLOBs
- TRACE service
- Documentation (but you can use the KInterbasDB one, as FDB is as close to
  it as possible).
- Python 3.x support (haven’t had time to test it, but it shouldn’t be hard
  to make it work there)

fdb-1.6.1/sphinx/requirements.txt
fdb

fdb-1.6.1/sphinx/conf.py
# -*- coding: utf-8 -*-
#
# FDB documentation build configuration file, created by
# sphinx-quickstart on Wed Jan 7 12:29:48 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os

# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.append(os.path.abspath('..'))

# General configuration
# ---------------------

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']

# The suffix of source filenames.
source_suffix = '.txt'

# The encoding of source files.
#source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'FDB'
copyright = u'2009-2016, David Rushby, Pavel Cisar'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.6.1'
# The full version, including alpha/beta/rc tags.
release = '1.6.1'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
unused_docs = ['usage']

# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['.build','requirements.txt']

# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# Autodoc options
# ---------------
autoclass_content = 'both'

# Intersphinx options
intersphinx_mapping = {'python': ('http://docs.python.org/', None)}

# Options for HTML output
# -----------------------

html_theme = "fdbtheme"
html_theme_path = ["."]

# The style sheet to use for HTML and HTML Help pages.
# A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
#html_style = 'pyramid.css'

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['.static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_use_modindex = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, the reST sources are included in the HTML build as _sources/.
#html_copy_source = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'FDBdoc'

# Options for LaTeX output
# ------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
latex_documents = [
  ('index', 'FDB.tex', u'FDB Documentation',
   u'Pavel Cisar', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_use_modindex = True

fdb-1.6.1/sphinx/fdbtheme/layout.html
{% extends "basic/layout.html" %}
{%- block extrahead %}
{% endblock %}
{% block header %}
{%- if logo %}
{%- endif %}
{% endblock %}
{%- block sidebarlogo %}{%- endblock %}
{%- block sidebarsourcelink %}{%- endblock %}

fdb-1.6.1/sphinx/fdbtheme/static/ie6.css
* html img, * html .png{position:relative;behavior:expression((this.runtimeStyle.behavior="none")&&(this.pngSet?this.pngSet=true:(this.nodeName == "IMG" && this.src.toLowerCase().indexOf('.png')>-1?(this.runtimeStyle.backgroundImage = "none", this.runtimeStyle.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" + this.src + "',sizingMethod='image')", this.src = "_static/transparent.gif"):(this.origBg = this.origBg? this.origBg :this.currentStyle.backgroundImage.toString().replace('url("','').replace('")',''), this.runtimeStyle.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" + this.origBg + "',sizingMethod='crop')", this.runtimeStyle.backgroundImage = "none")),this.pngSet=true) );}

fdb-1.6.1/sphinx/fdbtheme/static/footerbg.png [binary PNG data]
fdb-1.6.1/sphinx/fdbtheme/static/dialog-warning.png [binary PNG data]
fdb-1.6.1/sphinx/fdbtheme/static/headerbg.png [binary PNG data]
fdb-1.6.1/sphinx/fdbtheme/static/dialog-note.png [binary PNG data]
fdb-1.6.1/sphinx/fdbtheme/static/dialog-seealso.png [binary PNG data]