fdb-2.0.0/fdb.egg-info/dependency_links.txt

fdb-2.0.0/fdb.egg-info/requires.txt
future>=0.16.0

fdb-2.0.0/fdb.egg-info/PKG-INFO
Metadata-Version: 2.1
Name: fdb
Version: 2.0.0
Summary: The Python driver for Firebird
Home-page: http://www.firebirdsql.org/en/devel-python-driver/
Author: Pavel Císař
Author-email: pcisar@users.sourceforge.net
License: UNKNOWN
Project-URL: Funding, https://www.firebirdsql.org/en/donate/
Project-URL: Documentation, http://fdb2.readthedocs.io/en/latest/
Project-URL: Bug Reports, http://tracker.firebirdsql.org/browse/PYFB
Project-URL: Say Thanks!, https://saythanks.io/to/pcisar
Project-URL: Source, https://github.com/FirebirdSQL/fdb
Description: see README.rst below (same text)
Keywords: Firebird
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS
Classifier: Topic :: Database
Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4
Description-Content-Type: text/x-rst

fdb-2.0.0/fdb.egg-info/top_level.txt
fdb

fdb-2.0.0/fdb.egg-info/SOURCES.txt
.gitignore
LICENSE.TXT
README.rst
requirements.txt
setup.cfg
setup.py
fdb/__init__.py
fdb/blr.py
fdb/fbcore.py
fdb/gstat.py
fdb/ibase.py
fdb/log.py
fdb/monitor.py
fdb/schema.py
fdb/services.py
fdb/trace.py
fdb/utils.py
fdb.egg-info/PKG-INFO
fdb.egg-info/SOURCES.txt
fdb.egg-info/dependency_links.txt
fdb.egg-info/requires.txt
fdb.egg-info/top_level.txt
sphinx/Makefile
sphinx/changelog.txt
sphinx/conf.py
sphinx/getting-started.txt
sphinx/index.txt
sphinx/license.txt
sphinx/python-db-api-compliance.txt
sphinx/reference.txt
sphinx/usage-guide.txt
sphinx/.static/basic.css
sphinx/.static/dialog-note.png
sphinx/.static/dialog-seealso.png
sphinx/.static/dialog-topic.png
sphinx/.static/dialog-warning.png
sphinx/.static/fdbtheme.css
sphinx/fdbtheme/layout.html
sphinx/fdbtheme/theme.conf
sphinx/fdbtheme/static/dialog-note.png
sphinx/fdbtheme/static/dialog-seealso.png
sphinx/fdbtheme/static/dialog-topic.png
sphinx/fdbtheme/static/dialog-warning.png
sphinx/fdbtheme/static/epub.css
sphinx/fdbtheme/static/fdbtheme.css_t
sphinx/fdbtheme/static/footerbg.png
sphinx/fdbtheme/static/headerbg.png
sphinx/fdbtheme/static/ie6.css
sphinx/fdbtheme/static/middlebg.png
sphinx/fdbtheme/static/transparent.gif
test/fbtest.fdb
test/fbtest20.fdb
test/fbtest21.fdb
test/fbtest25.fdb
test/fbtest30.fdb
test/gstat25-a.out
test/gstat25-d.out
test/gstat25-f.out
test/gstat25-h.out
test/gstat25-i.out
test/gstat25-r.out
test/gstat25-s.out
test/gstat30-a.out
test/gstat30-d.out
test/gstat30-e.out
test/gstat30-f.out
test/gstat30-h.out
test/gstat30-i.out
test/gstat30-r.out
test/gstat30-s.out
test/testfdb.py

fdb-2.0.0/README.rst
====================================
FDB - The Python driver for Firebird
====================================

|docs| || Home_ || `Bug Reports`_ || Source_ || `Say Thanks!`_

FDB is a `Python`_ library package that implements `Python Database API 2.0`_-compliant
support for the open source relational database `Firebird`_ ®. In addition to the minimal
feature set of the standard Python DB API, FDB also exposes the entire native (old-style)
client API of the database engine. Notably:

* Automatic data conversion from strings on input.
* Automatic input/output conversions of textual data between UNICODE and database
  character sets.
* Support for prepared SQL statements.
* Multiple independent transactions per single connection.
* All transaction parameters that Firebird supports, including table access specifications.
* Distributed transactions.
* Firebird BLOB support, including support for stream BLOBs.
* Support for Firebird Events.
* Support for Firebird ARRAY data type.
* Support for all Firebird Services.

FDB also contains an extensive collection of submodules that simplify various
Firebird-related tasks. Notably:

* Database schema
* Firebird monitoring tables
* Parsing Firebird trace & audit logs
* Parsing Firebird server log
* Parsing Firebird gstat utility output

FDB is implemented on top of the Firebird client library using ctypes, and currently
uses only the traditional Firebird API. FDB works with Firebird 2.0 and newer, and
Python 2.7 and 3.4+.

FDB is free – covered by a permissive BSD-style license that both commercial and
noncommercial users should find agreeable.

FDB is a replacement for the discontinued KInterbasDB library, and as such it is
designed to be as compatible with KInterbasDB as possible, but there are some
differences. See the FDB documentation for a full description of these differences.
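Usage example
-------------

A minimal sketch, assuming a Firebird server on ``localhost`` with the standard
``employee`` example database and default SYSDBA credentials (adjust the DSN and
credentials to your environment):

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='localhost:employee',
                      user='sysdba', password='masterkey')
    cur = con.cursor()
    cur.execute("select first 3 emp_no, full_name from employee")
    for emp_no, full_name in cur:
        print(emp_no, full_name)
    con.close()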
|donate|

.. _Python: http://python.org
.. _Python Database API 2.0: http://www.python.org/dev/peps/pep-0249/
.. _Firebird: http://www.firebirdsql.org
.. _Bug Reports: http://tracker.firebirdsql.org/browse/PYFB
.. _Home: http://www.firebirdsql.org/en/devel-python-driver/
.. _Source: https://github.com/FirebirdSQL/fdb
.. _Say Thanks!: https://saythanks.io/to/pcisar

.. |donate| image:: https://www.firebirdsql.org/img/donate/donate_to_firebird.gif
    :alt: Contribute to the development
    :scale: 100%
    :target: https://www.firebirdsql.org/en/donate/

.. |docs| image:: https://readthedocs.org/projects/fdb/badge/?version=v2.0
    :alt: Documentation Status
    :scale: 100%
    :target: http://fdb.readthedocs.io/en/v2.0/

fdb-2.0.0/fdb/utils.py
#coding:utf-8
#
#   PROGRAM:     fdb
#   MODULE:      utils.py
#   DESCRIPTION: Python driver for Firebird - Various utility classes and functions
#   CREATED:     10.5.2013
#
#  Software distributed under the License is distributed AS IS,
#  WITHOUT WARRANTY OF ANY KIND, either express or implied.
#  See the License for the specific language governing rights
#  and limitations under the License.
#
#  The Original Code was created by Pavel Cisar
#
#  Copyright (c) Pavel Cisar
#  and all contributors signed below.
#
#  All Rights Reserved.
#  Contributor(s): ______________________________________.

from operator import attrgetter


def safe_int(str_value, base=10):
    """Always returns integer value from string/None argument.
    Returns 0 if argument is None.
    """
    if str_value:
        return int(str_value, base)
    else:
        return 0

def safe_str(str_value):
    """Always returns string value from string/None argument.
    Returns empty string if argument is None.
    """
    if str_value is None:
        return ''
    else:
        return str_value

def update_meta(self, other):
    "Helper function for :class:`LateBindingProperty` class."
    self.__name__ = other.__name__
    self.__doc__ = other.__doc__
    self.__dict__.update(other.__dict__)
    return self
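# A doctest-style sketch of the None-tolerant helpers above (values are
# illustrative):
#
# >>> safe_int(None), safe_int('10'), safe_int('ff', 16)
# (0, 10, 255)
# >>> safe_str(None), safe_str('text')
# ('', 'text')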
class LateBindingProperty(property):
    """Property class that binds to getter/setter/deleter methods when an
    **instance** of the class that defines the property is created. This allows
    you to override these methods in descendant classes (if they are not private)
    without the necessity to redeclare the property itself in the descendant class.

    Recipe from Tim Delaney, 2005/03/31
    http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/408713

    ::

        class C(object):
            def getx(self):
                print('C.getx')
                return self._x
            def setx(self, x):
                print('C.setx')
                self._x = x
            def delx(self):
                print('C.delx')
                del self._x
            x = LateBindingProperty(getx, setx, delx)

        class D(C):
            def setx(self, x):
                print('D.setx')
                super(D, self).setx(x)
            def delx(self):
                print('D.delx')
                super(D, self).delx()

        c = C()
        c.x = 1
        c.x
        c.x
        del c.x

        print()

        d = D()
        d.x = 1
        d.x
        d.x
        del d.x

    This has the advantages that:

    a. You get back an actual property object (with attendant memory savings,
       performance increases, etc);
    b. It's the same syntax as using property(fget, fset, fdel, doc) except for
       the name;
    c. It will fail earlier (when you define the class as opposed to when you
       use it).
    d. It's shorter ;)
    e. If you inspect the property you will get back functions with the correct
       __name__, __doc__, etc.
    """
    def __new__(cls, fget=None, fset=None, fdel=None, doc=None):
        if fget is not None:
            def __get__(obj, objtype=None, name=fget.__name__):
                fget = getattr(obj, name)
                return fget()
            fget = update_meta(__get__, fget)
        if fset is not None:
            def __set__(obj, value, name=fset.__name__):
                fset = getattr(obj, name)
                return fset(value)
            fset = update_meta(__set__, fset)
        if fdel is not None:
            def __delete__(obj, name=fdel.__name__):
                fdel = getattr(obj, name)
                return fdel()
            fdel = update_meta(__delete__, fdel)
        return property(fget, fset, fdel, doc)

class Iterator(object):
    """Generic iterator implementation.
    """
    def __init__(self, method, sentinel=None):
        """
        :param method: Callable without parameters that returns next item.
        :param sentinel: Value that when returned by `method` indicates the end
                         of sequence.
        """
        self.getnext = method
        self.sentinel = sentinel
        self.exhausted = False
    def __iter__(self):
        return self
    def next(self):
        if self.exhausted:
            raise StopIteration
        else:
            result = self.getnext()
            self.exhausted = (result == self.sentinel)
            if self.exhausted:
                raise StopIteration
            else:
                return result
    __next__ = next

class EmbeddedProperty(property):
    """Property class that forwards calls to getter/setter/deleter methods to
    respective property methods of another object. This class allows you to
    "inject" properties from an embedded object into the class definition of
    the parent object."""
    def __init__(self, obj, prop):
        """
        :param string obj: Attribute name with embedded object.
        :param property prop: Property instance from embedded object.
        """
        self.obj = obj
        self.prop = prop
        self.__doc__ = prop.__doc__
    def __get__(self, obj, objtype):
        if obj is None:
            return self
        return self.prop.__get__(getattr(obj, self.obj))
    def __set__(self, obj, val):
        self.prop.__set__(getattr(obj, self.obj), val)
    def __delete__(self, obj):
        self.prop.__delete__(getattr(obj, self.obj))

class EmbeddedAttribute(property):
    """Property class that gets/sets attribute of another object. This class
    allows you to "inject" attributes from an embedded object into the class
    definition of the parent object."""
    def __init__(self, obj, attr):
        """
        :param string obj: Attribute name with embedded object.
        :param string attr: Attribute name from embedded object.
        """
        self.obj = obj
        self.attr = attr
        # `attr` is only the attribute *name* (a string), so there is no
        # docstring to take over from it.
        self.__doc__ = None
    def __get__(self, obj, objtype):
        if obj is None:
            return self
        return getattr(getattr(obj, self.obj), self.attr)
    def __set__(self, obj, val):
        setattr(getattr(obj, self.obj), self.attr, val)

def iter_class_properties(cls):
    """Iterator that yields `name, property` pairs for all properties in class.
:param class cls: Class object.""" for varname in vars(cls): value = getattr(cls, varname) if isinstance(value, property): yield varname, value def iter_class_variables(cls): """Iterator that yields names of all non-callable attributes in class. :param class cls: Class object.""" for varname in vars(cls): value = getattr(cls, varname) if not (isinstance(value, property) or callable(value)) and not varname.startswith('_'): yield varname def embed_attributes(from_class, attr): """Class decorator that injects properties and attributes from another class instance embedded in class instances. Only attributes and properties that are not already defined in decorated class are injected. :param class from_class: Class that should extend decorated class. :param string attr: Attribute name that holds instance of embedded class within decorated class instance.""" def d(class_): for pname, prop in iter_class_properties(from_class): if not hasattr(class_, pname): setattr(class_, pname, EmbeddedProperty(attr, prop)) for attrname in iter_class_variables(from_class): if not hasattr(class_, attrname): setattr(class_, attrname, EmbeddedAttribute(attr, attrname)) return class_ return d def make_lambda(expr, params='item', context=None): """Make lambda function from expression. .. versionadded:: 2.0 """ if context: return eval('lambda %s:%s' % (params, expr), context) else: return eval('lambda %s:%s' % (params, expr)) class ObjectList(list): """List of objects with additional functionality. .. versionadded:: 2.0 """ def __init__(self, items=None, _cls=None, key_expr=None): """ :param iterable items: Sequence to initialize the collection. :param _cls: Class or list/tuple of classes. Only instances of these classes would be allowed in collection. :param str key_expr: Key expression. Must contain item referrence as `item`, for example `item.attribute_name`. :raises ValueError: When initialization sequence contains invalid instance. """ if items: super(ObjectList, self).__init__(items) else: super(ObjectList, self).__init__() self.__key_expr = key_expr self.__frozen = False self._cls = _cls self.__map = None def __check_value(self, value): if self._cls and not isinstance(value, self._cls): raise TypeError("Value is not an instance of allowed class") def __check_mutability(self): if self.__frozen: raise TypeError("list is frozen") def __setitem__(self, index, value): self.__check_mutability() self.__check_value(value) super(ObjectList, self).__setitem__(index, value) def __setslice__(self, i, j, y): self.__check_mutability() super(ObjectList, self).__setslice__(i, j, y) def __delitem__(self, index): self.__check_mutability() super(ObjectList, self).__delitem__(index) def __delslice__(self, i, j): self.__check_mutability() super(ObjectList, self).__delslice__(i, j) def insert(self, index, item): """Insert item before index. :raises TypeError: When list is frozen or item is not an instance of allowed class""" self.__check_mutability() self.__check_value(item) super(ObjectList, self).insert(index, item) def append(self, item): """Add an item to the end of the list. :raises TypeError: When list is frozen or item is not an instance of allowed class""" self.__check_mutability() self.__check_value(item) super(ObjectList, self).append(item) def extend(self, iterable): """Extend the list by appending all the items in the given iterable. 
        :raises TypeError: When list is frozen or item is not an instance of allowed class"""
        for item in iterable:
            self.append(item)
    def sort(self, attrs=None, expr=None, reverse=False):
        """Sort items in-place, optionally using attribute values as key or key expression.

        :param list attrs: List of attribute names.
        :param expr: Key expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        .. important::

           Only one parameter (`attrs` or `expr`) could be specified.
           If none is present then uses default list sorting rule.

        :raises TypeError: When list is frozen.

        Examples::

            sort(attrs=['name','degree'])       # Sort by item.name, item.degree
            sort(expr=lambda x: x.name.upper()) # Sort by upper item.name
            sort(expr='item.name.upper()')      # Sort by upper item.name
        """
        self.__check_mutability()
        if attrs:
            super(ObjectList, self).sort(key=attrgetter(*attrs), reverse=reverse)
        elif expr:
            super(ObjectList, self).sort(key=expr if callable(expr) else make_lambda(expr),
                                         reverse=reverse)
        else:
            super(ObjectList, self).sort(reverse=reverse)
    def reverse(self):
        """Reverse the elements of the list, in place.

        :raises TypeError: When list is frozen."""
        self.__check_mutability()
        super(ObjectList, self).reverse()
    def clear(self):
        """Remove all items from the list.

        :raises TypeError: When list is frozen."""
        self.__check_mutability()
        while len(self) > 0:
            del self[0]
    def freeze(self):
        """Set list to immutable (frozen) state."""
        self.__frozen = True
        if self.__key_expr:
            fce = make_lambda(self.__key_expr)
            self.__map = dict(((key, index) for index, key
                               in enumerate((fce(item) for item in self))))
    def filter(self, expr):
        """Return new ObjectList of items for which `expr` is evaluated as True.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            filter(lambda x: x.name.startswith("ABC"))
            filter('item.name.startswith("ABC")')
        """
        return ObjectList(self.ifilter(expr), self._cls, self.__key_expr)
    def ifilter(self, expr):
        """Return generator that yields items for which `expr` is evaluated as True.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            ifilter(lambda x: x.name.startswith("ABC"))
            ifilter('item.name.startswith("ABC")')
        """
        fce = expr if callable(expr) else make_lambda(expr)
        return (item for item in self if fce(item))
    def ifilterfalse(self, expr):
        """Return generator that yields items for which `expr` is evaluated as False.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            ifilterfalse(lambda x: x.name.startswith("ABC"))
            ifilterfalse('item.name.startswith("ABC")')
        """
        fce = expr if callable(expr) else make_lambda(expr)
        return (item for item in self if not fce(item))
    def report(self, *args):
        """Return list of data produced by expression(s) evaluated on list items.

        Parameter(s) could be one from:

        - A callable accepting one parameter and returning data for output
        - One or more expressions as string referencing item as `item`.

        Examples::

            # returns list of tuples with item.name and item.size
            report(lambda x: (x.name, x.size))
            report('item.name','item.size')
            # returns list of item names
            report(lambda x: x.name)
            report('item.name')
        """
        if len(args) == 1 and callable(args[0]):
            fce = args[0]
        else:
            attrs = "(%s)" % ",".join(args) if len(args) > 1 else args[0]
            fce = make_lambda(attrs)
        return [fce(item) for item in self]
    def ireport(self, *args):
        """Return generator that yields data produced by expression(s) evaluated
        on list items.

        Parameter(s) could be one from:

        - A callable accepting one parameter and returning data for output
        - One or more expressions as string referencing item as `item`.

        Examples::

            # generator of tuples with item.name and item.size
            ireport(lambda x: (x.name, x.size))
            ireport('item.name','item.size')
            # generator of item names
            ireport(lambda x: x.name)
            ireport('item.name')
        """
        if len(args) == 1 and callable(args[0]):
            fce = args[0]
        else:
            attrs = "(%s)" % ",".join(args) if len(args) > 1 else args[0]
            fce = make_lambda(attrs)
        return (fce(item) for item in self)
    def ecount(self, expr):
        """Return number of items for which `expr` is evaluated as True.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            ecount(lambda x: x.name.startswith("ABC"))
            ecount('item.name.startswith("ABC")')
        """
        return sum(1 for item in self.ifilter(expr))
    def split(self, expr):
        """Return two new ObjectLists, first with items for which `expr` is
        evaluated as True and second for `expr` evaluated as False.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            split(lambda x: x.size > 100)
            split('item.size > 100')
        """
        return (ObjectList(self.ifilter(expr), self._cls, self.__key_expr),
                ObjectList(self.ifilterfalse(expr), self._cls, self.__key_expr))
    def extract(self, expr):
        """Move items for which `expr` is evaluated as True into a new ObjectList.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        :raises TypeError: When list is frozen.

        Example::

            extract(lambda x: x.name.startswith("ABC"))
            extract('item.name.startswith("ABC")')
        """
        self.__check_mutability()
        fce = expr if callable(expr) else make_lambda(expr)
        l = ObjectList(_cls=self._cls, key_expr=self.__key_expr)
        i = 0
        while len(self) > i:
            item = self[i]
            if fce(item):
                l.append(item)
                del self[i]
            else:
                i += 1
        return l
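    # Note the contrast between the two methods above: split() returns two new
    # lists and leaves the source list untouched, while extract() physically
    # removes the matching items from it. A sketch (`size` is a hypothetical
    # item attribute):
    #
    #     big, small = items.split('item.size > 100')   # items unchanged
    #     big = items.extract('item.size > 100')        # big items removed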
    def get(self, value, expr=None):
        """Return item with given key value using default or specified key
        expression, or None if there is no such item.

        Uses very fast method to look up value of default key expression in
        `frozen` list, otherwise it uses slower list traversal.

        :param value: Searched value.
        :param expr: Key value expression, a callable accepting two parameters
                     (item, value) or expression as string referencing list
                     item as `item`.

        :raises TypeError: If key expression is not defined.

        Examples::

            # Search using default key expression
            get('ITEM_NAME')
            # Search using callable key expression
            get('ITEM_NAME', lambda item, value: item.name.upper() == value)
            # Search using string key expression
            get('ITEM_NAME', 'item.name.upper()')
        """
        if self.__map and not expr:
            i = self.__map.get(value)
            return self[i] if i is not None else None
        if not (self.__key_expr or expr):
            raise TypeError("Key expression required")
        if callable(expr):
            fce = expr
        else:
            s = '%s == value' % (self.__key_expr if expr is None else expr)
            fce = make_lambda(s, 'item,value')
        for item in self:
            if fce(item, value):
                return item
        return None
    def contains(self, value, expr=None):
        """Return True if list has any item with default or specified key
        expression equal to given value.

        :param value: Tested key value.
        :param expr: Key value expression, a callable accepting two parameters
                     (item, value) or expression as string referencing list
                     item as `item`.

        Examples::

            # Search using default key expression
            contains('ITEM_NAME')
            # Search using callable key expression
            contains('ITEM_NAME', lambda item, value: item.name.upper() == value)
            # Search using string key expression
            contains('ITEM_NAME', 'item.name.upper()')
        """
        return False if self.get(value, expr) is None else True
    def all(self, expr):
        """Return True if `expr` is evaluated as True for all list elements.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            all(lambda x: x.name.startswith("ABC"))
            all('item.name.startswith("ABC")')
        """
        fce = expr if callable(expr) else make_lambda(expr)
        for item in self:
            if not fce(item):
                return False
        return True
    def any(self, expr):
        """Return True if `expr` is evaluated as True for any list element.

        :param expr: Boolean expression, a callable accepting one parameter or
                     expression as string referencing list item as `item`.

        Example::

            any(lambda x: x.name.startswith("ABC"))
            any('item.name.startswith("ABC")')
        """
        fce = expr if callable(expr) else make_lambda(expr)
        for item in self:
            if fce(item):
                return True
        return False
    #
    frozen = property(fget=lambda self: self.__frozen, doc='True if list is immutable')
    key = property(fget=lambda self: self.__key_expr, doc='Key expression')
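# A small, self-contained sketch of the ObjectList workflow described above.
# The Point class and all values are illustrative only; it is wrapped in a
# function so that importing this module stays side-effect free.
def _object_list_demo():
    class Point(object):
        def __init__(self, name, x):
            self.name = name
            self.x = x
    points = ObjectList([Point('A', 1), Point('B', 5), Point('C', 10)],
                        _cls=Point, key_expr='item.name')
    big = points.filter('item.x > 3')    # new ObjectList with B and C
    names = points.report('item.name')   # ['A', 'B', 'C']
    points.freeze()                      # builds the fast key lookup map
    return points.get('B'), big, names   # get() now uses the frozen map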
class Visitable(object):
    """Base class for Visitor Pattern support.

    .. versionadded:: 2.0
    """
    def accept(self, visitor):
        """Visitor Pattern support. Calls `visit(self)` on parameter object.

        :param visitor: Visitor object of Visitor Pattern.
        """
        visitor.visit(self)

class Visitor(object):
    """Base class for Visitor Pattern visitors.

    .. versionadded:: 2.0

    Descendants may implement methods to handle individual object types that
    follow naming pattern `visit_<class_name>`. Calls :meth:`default_action`
    if appropriate special method is not defined.

    .. important::

       This implementation uses Python Method Resolution Order (__mro__) to
       find special handling method, so special method for given class is used
       also for its descendants.

    Example::

        class Node(object): pass
        class A(Node): pass
        class B(Node): pass
        class C(A, B): pass

        class MyVisitor(Visitor):
            def default_action(self, obj):
                print('default_action ' + obj.__class__.__name__)
            def visit_B(self, obj):
                print('visit_B ' + obj.__class__.__name__)

        a = A()
        b = B()
        c = C()
        visitor = MyVisitor()
        visitor.visit(a)
        visitor.visit(b)
        visitor.visit(c)

    Will create output::

        default_action A
        visit_B B
        visit_B C
    """
    def visit(self, obj):
        """Dispatch to method that handles `obj`.

        First traverses the `obj.__mro__` to try to find a method with name
        following the `visit_<class_name>` pattern and calls it with `obj`.
        Otherwise it calls :meth:`default_action`.

        :param object obj: Object to be handled by visitor.
        """
        meth = None
        for cls in obj.__class__.__mro__:
            meth = getattr(self, 'visit_'+cls.__name__, None)
            if meth:
                break
        if not meth:
            meth = self.default_action
        return meth(obj)
    def default_action(self, obj):
        """Default handler for visited objects.

        :param object obj: Object to be handled.

        .. note:: This implementation does nothing!
        """
        pass

fdb-2.0.0/fdb/fbcore.py
#coding:utf-8
#
#   PROGRAM/MODULE: fdb
#   FILE:           fbcore.py
#   DESCRIPTION:    Python driver for Firebird - Core
#   CREATED:        8.10.2011
#
#  Software distributed under the License is distributed AS IS,
#  WITHOUT WARRANTY OF ANY KIND, either express or implied.
#  See the License for the specific language governing rights
#  and limitations under the License.
#
#  The Original Code was created by Pavel Cisar
#
#  Copyright (c) Pavel Cisar
#  and all contributors signed below.
#
#  All Rights Reserved.
#  Contributor(s): Philippe Makowski
#                  ______________________________________.
#
#  See LICENSE.TXT for details.

import sys
import os
import ctypes
import struct
import time
import datetime
import decimal
import weakref
import threading
from builtins import dict
from . import ibase
from . import schema
from . import monitor
from . import utils

try:
    # Python 2
    from itertools import izip_longest
except ImportError:
    # Python 3
    from itertools import zip_longest as izip_longest

from fdb.ibase import (frb_info_att_charset, isc_dpb_activate_shadow,
                       isc_dpb_address_path, isc_dpb_allocation, isc_dpb_begin_log,
                       isc_dpb_buffer_length, isc_dpb_cache_manager, isc_dpb_cdd_pathname,
                       isc_dpb_connect_timeout, isc_dpb_damaged, isc_dpb_dbkey_scope,
                       isc_dpb_debug, isc_dpb_delete_shadow, isc_dpb_dummy_packet_interval,
                       isc_dpb_encrypt_key, isc_dpb_force_write, isc_dpb_garbage_collect,
                       isc_dpb_gbak_attach, isc_dpb_gfix_attach, isc_dpb_gsec_attach,
                       isc_dpb_gstat_attach, isc_dpb_interp, isc_dpb_lc_ctype,
                       isc_dpb_lc_messages, isc_dpb_no_garbage_collect, isc_dpb_no_reserve,
                       isc_dpb_num_buffers, isc_dpb_number_of_users, isc_dpb_old_dump_id,
                       isc_dpb_old_file, isc_dpb_old_file_size, isc_dpb_old_num_files,
                       isc_dpb_old_start_file, isc_dpb_old_start_page, isc_dpb_old_start_seqno,
                       isc_dpb_online, isc_dpb_online_dump, isc_dpb_overwrite,
                       isc_dpb_page_size, isc_dpb_password, isc_dpb_password_enc,
                       isc_dpb_reserved, isc_dpb_sec_attach, isc_dpb_set_db_charset,
                       isc_dpb_set_db_readonly, isc_dpb_set_db_sql_dialect,
                       isc_dpb_set_page_buffers, isc_dpb_shutdown, isc_dpb_shutdown_delay,
                       isc_dpb_sql_dialect, isc_dpb_sql_role_name, isc_dpb_sweep,
                       isc_dpb_sweep_interval, isc_dpb_sys_user_name,
                       isc_dpb_sys_user_name_enc, isc_dpb_trace, isc_dpb_user_name,
                       isc_dpb_verify, isc_dpb_version1, isc_dpb_working_directory,
                       isc_dpb_no_db_triggers, isc_dpb_nolinger,
                       isc_info_active_tran_count, isc_info_end, isc_info_truncated,
                       isc_info_sql_stmt_type, isc_info_sql_get_plan, isc_info_sql_records,
                       isc_info_req_select_count, isc_info_req_insert_count,
                       isc_info_req_update_count, isc_info_req_delete_count,
                       isc_info_blob_total_length, isc_info_blob_max_segment,
                       isc_info_blob_type, isc_info_blob_num_segments, fb_info_page_contents,
                       isc_info_active_transactions, isc_info_allocation,
                       isc_info_attachment_id, isc_info_backout_count, isc_info_base_level,
                       isc_info_bpage_errors, isc_info_creation_date, isc_info_current_memory,
                       isc_info_db_class, isc_info_db_id, isc_info_db_provider,
                       isc_info_db_read_only, isc_info_db_size_in_pages,
                       isc_info_db_sql_dialect, isc_info_delete_count, isc_info_dpage_errors,
isc_info_expunge_count, isc_info_fetches, isc_info_firebird_version, isc_info_forced_writes, isc_info_implementation, isc_info_insert_count, isc_info_ipage_errors, isc_info_isc_version, isc_info_limbo, isc_info_marks, isc_info_max_memory, isc_info_next_transaction, isc_info_no_reserve, isc_info_num_buffers, isc_info_ods_minor_version, isc_info_ods_version, isc_info_oldest_active, isc_info_oldest_snapshot, isc_info_oldest_transaction, isc_info_page_errors, isc_info_page_size, isc_info_ppage_errors, isc_info_purge_count, isc_info_read_idx_count, isc_info_read_seq_count, isc_info_reads, isc_info_record_errors, isc_info_set_page_buffers, isc_info_sql_stmt_commit, isc_info_sql_stmt_ddl, isc_info_sql_stmt_delete, isc_info_sql_stmt_exec_procedure, isc_info_sql_stmt_get_segment, isc_info_sql_stmt_insert, isc_info_sql_stmt_put_segment, isc_info_sql_stmt_rollback, isc_info_sql_stmt_savepoint, isc_info_sql_stmt_select, isc_info_sql_stmt_select_for_upd, isc_info_sql_stmt_set_generator, isc_info_sql_stmt_start_trans, isc_info_sql_stmt_update, isc_info_sweep_interval, isc_info_tpage_errors, isc_info_tra_access, isc_info_tra_concurrency, isc_info_tra_consistency, isc_info_tra_id, isc_info_tra_isolation, isc_info_tra_lock_timeout, isc_info_tra_no_rec_version, isc_info_tra_oldest_active, isc_info_tra_oldest_interesting, isc_info_tra_oldest_snapshot, isc_info_tra_read_committed, isc_info_tra_readonly, isc_info_tra_readwrite, isc_info_tra_rec_version, fb_info_tra_dbpath, isc_info_update_count, isc_info_user_names, isc_info_version, isc_info_writes, isc_tpb_autocommit, # FB 3 isc_dpb_version2, fb_info_implementation, fb_info_page_warns, fb_info_record_warns, fb_info_bpage_warns, fb_info_dpage_warns, fb_info_ipage_warns, fb_info_ppage_warns, fb_info_tpage_warns, fb_info_pip_errors, fb_info_pip_warns, # isc_tpb_commit_time, isc_tpb_concurrency, isc_tpb_consistency, isc_tpb_exclusive, isc_tpb_ignore_limbo, isc_tpb_lock_read, isc_tpb_lock_timeout, isc_tpb_lock_write, isc_tpb_no_auto_undo, isc_tpb_no_rec_version, isc_tpb_nowait, isc_tpb_protected, isc_tpb_read, isc_tpb_read_committed, isc_tpb_rec_version, isc_tpb_restart_requests, isc_tpb_shared, isc_tpb_verb_time, isc_tpb_version3, isc_tpb_wait, isc_tpb_write, # b, s, ord2, int2byte, mychr, mybytes, myunicode, mylong, StringType, IntType, LongType, FloatType, ListType, UnicodeType, TupleType, xrange, charset_map, #isc_sqlcode, isc_sql_interprete, fb_interpret, isc_dsql_execute_immediate, XSQLDA_PTR, ISC_SHORT, ISC_LONG, ISC_SCHAR, ISC_UCHAR, ISC_QUAD, ISC_DATE, ISC_TIME, SHRT_MIN, SHRT_MAX, USHRT_MAX, INT_MIN, INT_MAX, LONG_MIN, LONG_MAX, # SQL_TEXT, SQL_VARYING, SQL_SHORT, SQL_LONG, SQL_FLOAT, SQL_DOUBLE, SQL_D_FLOAT, SQL_TIMESTAMP, SQL_BLOB, SQL_ARRAY, SQL_QUAD, SQL_TYPE_TIME, SQL_TYPE_DATE, SQL_INT64, SQL_BOOLEAN, SUBTYPE_NUMERIC, SUBTYPE_DECIMAL, MAX_BLOB_SEGMENT_SIZE, ISC_INT64, # XSQLVAR, ISC_TEB, RESULT_VECTOR, ISC_STATUS, ISC_STATUS_ARRAY, ISC_STATUS_PTR, ISC_EVENT_CALLBACK, ISC_ARRAY_DESC, # blr_varying, blr_varying2, blr_text, blr_text2, blr_short, blr_long, blr_int64, blr_float, blr_d_float, blr_double, blr_timestamp, blr_sql_date, blr_sql_time, blr_cstring, blr_quad, blr_blob, blr_bool, # SQLDA_version1, isc_segment, isc_db_handle, isc_tr_handle, isc_stmt_handle, isc_blob_handle, sys_encoding) PYTHON_MAJOR_VER = sys.version_info[0] #: Current driver version __version__ = '2.0.0' apilevel = '2.0' threadsafety = 1 paramstyle = 'qmark' HOOK_API_LOADED = 1 HOOK_DATABASE_ATTACHED = 2 HOOK_DATABASE_ATTACH_REQUEST = 3 HOOK_DATABASE_DETACH_REQUEST = 4 
HOOK_DATABASE_CLOSED = 5
HOOK_SERVICE_ATTACHED = 6

hooks = {}

def add_hook(hook_type, func):
    """Installs hook function for specified hook_type.

    :param hook_type: One from HOOK_* constants
    :param func: Hook routine to be installed

    .. important::

       Routine must have a signature required for given hook type. However it's
       not checked when hook is installed, and any issue will lead to run-time
       error when hook routine is executed.
    """
    hooks.setdefault(hook_type, list()).append(func)

def remove_hook(hook_type, func):
    """Uninstalls previously installed hook function for specified hook_type.

    :param hook_type: One from HOOK_* constants
    :param func: Hook routine to be uninstalled

    If hook routine wasn't previously installed, it does nothing.
    """
    try:
        hooks.get(hook_type, list()).remove(func)
    except ValueError:
        # Routine was not installed for this hook type.
        pass

def get_hooks(hook_type):
    """Returns list of installed hook routines for specified hook_type.

    :param hook_type: One from HOOK_* constants
    :returns: List of installed hook routines.
    """
    return hooks.get(hook_type, list())

def load_api(fb_library_name=None):
    """Initializes bindings to Firebird Client Library unless they are already
    initialized. Called automatically by :func:`fdb.connect` and
    :func:`fdb.create_database`.

    :param string fb_library_name: (optional) Path to Firebird Client Library.
        When it's not specified, FDB does its best to locate appropriate client
        library.

    :returns: :class:`fdb.ibase.fbclient_API` instance.

    Hooks:
        Event HOOK_API_LOADED: Executed after api is initialized. Hook routine
        must have signature: hook_func(api). Any value returned by hook is
        ignored.
    """
    if not hasattr(sys.modules[__name__], 'api'):
        setattr(sys.modules[__name__], 'api', ibase.fbclient_API(fb_library_name))
        for hook in get_hooks(HOOK_API_LOADED):
            hook(getattr(sys.modules[__name__], 'api'))
    return getattr(sys.modules[__name__], 'api')
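# A short sketch of the hook machinery above; the callback is illustrative
# and only logs (any value returned by an API_LOADED hook is ignored).
def _install_api_loaded_hook():
    def on_api_loaded(api):
        print('Firebird client API initialized:', api)
    add_hook(HOOK_API_LOADED, on_api_loaded)
    return on_api_loaded  # keep the reference for a later remove_hook() call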
# Exceptions required by Python Database API

class Error(Exception):
    """Exception that is the base class of all other error exceptions. You can
    use this to catch all errors with one single 'except' statement. Warnings
    are not considered errors and thus should not use this class as base."""
    pass

class InterfaceError(Error):
    """Exception raised for errors that are related to the database interface
    rather than the database itself."""
    pass

class DatabaseError(Error):
    "Exception raised for errors that are related to the database."
    pass
    #def __init__(self, msg, sqlcode=0, error_code=0):
        #self.sqlcode = sqlcode
        #self.error_code = error_code
        #self.msg = msg

class DataError(DatabaseError):
    """Exception raised for errors that are due to problems with the processed
    data like division by zero, numeric value out of range, etc."""
    pass

class OperationalError(DatabaseError):
    """Exception raised for errors that are related to the database's operation
    and not necessarily under the control of the programmer, e.g. an unexpected
    disconnect occurs, the data source name is not found, a transaction could
    not be processed, a memory allocation error occurred during processing,
    etc."""
    pass

class IntegrityError(DatabaseError):
    """Exception raised when the relational integrity of the database is
    affected, e.g. a foreign key check fails."""
    pass

class InternalError(DatabaseError):
    """Exception raised when the database encounters an internal error, e.g.
    the cursor is not valid anymore, the transaction is out of sync, etc."""
    pass

class ProgrammingError(DatabaseError):
    """Exception raised for programming errors, e.g. table not found or already
    exists, syntax error in the SQL statement, wrong number of parameters
    specified, etc."""
    pass

class NotSupportedError(DatabaseError):
    """Exception raised in case a method or database API was used which is not
    supported by the database."""
    pass

class TransactionConflict(DatabaseError):
    pass

class ParseError(Exception):
    pass

# Named positional constants to be used as indices into the description
# attribute of a cursor (these positions are defined by the DB API spec).
# For example:
#   nameOfFirstField = cursor.description[0][fdb.DESCRIPTION_NAME]

DESCRIPTION_NAME = 0
DESCRIPTION_TYPE_CODE = 1
DESCRIPTION_DISPLAY_SIZE = 2
DESCRIPTION_INTERNAL_SIZE = 3
DESCRIPTION_PRECISION = 4
DESCRIPTION_SCALE = 5
DESCRIPTION_NULL_OK = 6

# Types Required by Python DB-API 2.0

def Date(year, month, day):
    return datetime.date(year, month, day)

def Time(hour, minute, second):
    return datetime.time(hour, minute, second)

def Timestamp(year, month, day, hour, minute, second):
    return datetime.datetime(year, month, day, hour, minute, second)

def DateFromTicks(ticks):
    # `apply` does not exist in Python 3; star-unpacking works in both.
    return Date(*time.localtime(ticks)[:3])

def TimeFromTicks(ticks):
    return Time(*time.localtime(ticks)[3:6])

def TimestampFromTicks(ticks):
    return Timestamp(*time.localtime(ticks)[:6])

def Binary(b):
    return b

class DBAPITypeObject:
    def __init__(self, *values):
        self.values = values
    def __cmp__(self, other):
        if other in self.values:
            return 0
        if other < self.values:
            return 1
        else:
            return -1
    # Python 3 ignores __cmp__, so equality must be defined explicitly.
    def __eq__(self, other):
        return other in self.values
    def __ne__(self, other):
        return other not in self.values
    def __hash__(self):
        return hash(self.values)

STRING = DBAPITypeObject(str)
if PYTHON_MAJOR_VER == 3:
    BINARY = DBAPITypeObject(bytes)
else:
    BINARY = DBAPITypeObject(str)
NUMBER = DBAPITypeObject(int, decimal.Decimal)
DATETIME = DBAPITypeObject(datetime.datetime, datetime.date, datetime.time)
ROWID = DBAPITypeObject()

_FS_ENCODING = sys.getfilesystemencoding()
DIST_TRANS_MAX_DATABASES = 16

def bs(byte_array):
    return bytes(byte_array) if PYTHON_MAJOR_VER == 3 else ''.join((chr(c) for c in byte_array))

ISOLATION_LEVEL_READ_COMMITED_LEGACY = bs([isc_tpb_version3, isc_tpb_write,
                                           isc_tpb_wait, isc_tpb_read_committed,
                                           isc_tpb_no_rec_version])
ISOLATION_LEVEL_READ_COMMITED = bs([isc_tpb_version3, isc_tpb_write,
                                    isc_tpb_wait, isc_tpb_read_committed,
                                    isc_tpb_rec_version])
ISOLATION_LEVEL_REPEATABLE_READ = bs([isc_tpb_version3, isc_tpb_write,
                                      isc_tpb_wait, isc_tpb_concurrency])
ISOLATION_LEVEL_SNAPSHOT = ISOLATION_LEVEL_REPEATABLE_READ
ISOLATION_LEVEL_SERIALIZABLE = bs([isc_tpb_version3, isc_tpb_write,
                                   isc_tpb_wait, isc_tpb_consistency])
ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY = ISOLATION_LEVEL_SERIALIZABLE
ISOLATION_LEVEL_READ_COMMITED_RO = bs([isc_tpb_version3, isc_tpb_read,
                                       isc_tpb_wait, isc_tpb_read_committed,
                                       isc_tpb_rec_version])

# ODS constants

ODS_FB_20 = 11.0
ODS_FB_21 = 11.1
ODS_FB_25 = 11.2
ODS_FB_30 = 12.0

# Translation tables

d = dir(ibase)
s = 'isc_info_db_impl_'
q = [x for x in d if x.startswith(s) and x[len(s):] != 'last_value']
#: Dictionary to map Implementation codes to names
IMPLEMENTATION_NAMES = dict(zip([getattr(ibase, x) for x in q],
                                [x[len(s):] for x in q]))
s = 'isc_info_db_code_'
q = [x for x in d if x.startswith(s) and x[len(s):] != 'last_value']
#: Dictionary to map provider codes to names
PROVIDER_NAMES = dict(zip([getattr(ibase, x) for x in q],
                          [x[len(s):] for x in q]))
s = 'isc_info_db_class_'
q = [x for x in d if x.startswith(s) and x[len(s):] != 'last_value']
#: Dictionary to map database class codes to names
DB_CLASS_NAMES = dict(zip([getattr(ibase, x) for x in q],
                          [x[len(s):] for x in q]))
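# A tiny sketch of the DB API 2.0 type constructors defined above (the tick
# value is illustrative):
def _dbapi_constructors_demo():
    ticks = time.time()
    return Date(2018, 4, 30), Time(12, 30, 0), TimestampFromTicks(ticks)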
# Private constants

_SIZE_OF_SHORT = ctypes.sizeof(ctypes.c_short)
_tenTo = [10 ** x for x in range(20)]
if PYTHON_MAJOR_VER != 3:
    del x

__xsqlda_cache = {}
__tebarray_cache = {}

_DATABASE_INFO_CODES_WITH_INT_RESULT = (
    isc_info_allocation, isc_info_no_reserve, isc_info_db_sql_dialect,
    isc_info_ods_minor_version, isc_info_ods_version, isc_info_page_size,
    isc_info_current_memory, isc_info_forced_writes, isc_info_max_memory,
    isc_info_num_buffers, isc_info_sweep_interval, isc_info_attachment_id,
    isc_info_fetches, isc_info_marks, isc_info_reads, isc_info_writes,
    isc_info_set_page_buffers, isc_info_db_read_only, isc_info_db_size_in_pages,
    isc_info_page_errors, isc_info_record_errors, isc_info_bpage_errors,
    isc_info_dpage_errors, isc_info_ipage_errors, isc_info_ppage_errors,
    isc_info_tpage_errors, frb_info_att_charset, isc_info_oldest_transaction,
    isc_info_oldest_active, isc_info_oldest_snapshot, isc_info_next_transaction,
    isc_info_active_tran_count, isc_info_db_class, isc_info_db_provider,
)
_DATABASE_INFO_CODES_WITH_COUNT_RESULTS = (
    isc_info_backout_count, isc_info_delete_count, isc_info_expunge_count,
    isc_info_insert_count, isc_info_purge_count, isc_info_read_idx_count,
    isc_info_read_seq_count, isc_info_update_count
)
_DATABASE_INFO_CODES_WITH_TIMESTAMP_RESULT = (isc_info_creation_date,)
_DATABASE_INFO__KNOWN_LOW_LEVEL_EXCEPTIONS = (isc_info_user_names,
                                              fb_info_page_contents,
                                              isc_info_active_transactions)

def xsqlda_factory(size):
    if size in __xsqlda_cache:
        cls = __xsqlda_cache[size]
    else:
        class XSQLDA(ctypes.Structure):
            pass
        XSQLDA._fields_ = [
            ('version', ISC_SHORT),
            ('sqldaid', ISC_SCHAR * 8),
            ('sqldabc', ISC_LONG),
            ('sqln', ISC_SHORT),
            ('sqld', ISC_SHORT),
            ('sqlvar', XSQLVAR * size),
        ]
        __xsqlda_cache[size] = XSQLDA
        cls = XSQLDA
    xsqlda = cls()
    xsqlda.version = SQLDA_version1
    xsqlda.sqln = size
    return xsqlda

def tebarray_factory(size):
    if size in __tebarray_cache:
        cls = __tebarray_cache[size]
    else:
        cls = ISC_TEB * size
        __tebarray_cache[size] = cls
    teb_array = cls()
    return teb_array

buf_pointer = ctypes.POINTER(ctypes.c_char)

def is_dead_proxy(obj):
    "Return True if object is a dead :func:`weakref.proxy`."
    return isinstance(obj, weakref.ProxyType) and not dir(obj)

def b2u(st, charset):
    "Decode to unicode if charset is defined. For conversion of result set data."
    if charset:
        return st.decode(charset)
    else:
        return st

def p3fix(st, charset):
    """For P3 convert bytes to string using connection charset, P2 as is.
    For conversion of info results to native strings."""
    if PYTHON_MAJOR_VER == 3:
        return st.decode(charset)
    else:
        return st

def inc_pointer(pointer):
    t = type(pointer)
    p = ctypes.cast(pointer, ctypes.c_void_p)
    p.value += 1
    return ctypes.cast(p, t)

def bytes_to_bint(b):
    # Read as big endian
    len_b = len(b)
    if len_b == 1:
        fmt = 'B'
    elif len_b == 2:
        fmt = '>H'
    elif len_b == 4:
        fmt = '>L'
    elif len_b == 8:
        fmt = '>Q'
    else:
        raise InternalError
    return struct.unpack(fmt, b)[0]
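# A doctest-style sketch of the big-endian reader above:
#
# >>> bytes_to_bint(b'\x01\x00')
# 256
# >>> bytes_to_bint(b'\x00\x00\x00\x01')
# 1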
def bytes_to_int(b):
    # Read as little endian.
    len_b = len(b)
    if len_b == 1:
        fmt = 'b'
    elif len_b == 2:
        fmt = '<h'
    elif len_b == 4:
        fmt = '<l'
    elif len_b == 8:
        fmt = '<q'
    else:
        raise InternalError
    return struct.unpack(fmt, b)[0]

def db_api_error(status_vector):
    return status_vector[0] == 1 and status_vector[1] > 0

def exception_from_status(error, status, preamble=None):
    msglist = []
    msg = ctypes.create_string_buffer(512)
    if preamble:
        msglist.append(preamble)
    sqlcode = api.isc_sqlcode(status)
    error_code = status[1]
    msglist.append('- SQLCODE: %i' % sqlcode)

    pvector = ctypes.cast(ctypes.addressof(status), ISC_STATUS_PTR)

    while True:
        result = api.fb_interpret(msg, 512, pvector)
        if result != 0:
            if PYTHON_MAJOR_VER == 3:
                msglist.append('- ' + (msg.value).decode(sys_encoding))
            else:
                msglist.append('- ' + msg.value)
        else:
            break
    return error('\n'.join(msglist), sqlcode, error_code)

class ParameterBuffer(object):
    """Helper class for construction of Database (and other) parameter buffers.
    Parameters are stored in insertion order."""
    def __init__(self, charset):
        self.items = []
        self.charset = charset
    def add_parameter_code(self, code):
        """Add parameter code to parameter buffer.

        :param code: Firebird code for the parameter
        """
        self.items.append(struct.pack('c', int2byte(code)))
    def add_string_parameter(self, code, value):
        """Add string to parameter buffer.

        :param code: Firebird code for the parameter
        :param string value: Parameter value
        """
        if PYTHON_MAJOR_VER == 3 or isinstance(value, UnicodeType):
            value = value.encode(charset_map.get(self.charset, self.charset))
        slen = len(value)
        if slen >= 256:
            # Because the length is denoted in the DPB by a single byte.
            raise ProgrammingError("Too large parameter buffer component (>255 bytes).")
        self.items.append(struct.pack('cc%ds' % slen, int2byte(code),
                                      int2byte(slen), value))
    def add_byte_parameter(self, code, value):
        """Add byte value to parameter buffer.

        :param code: Firebird code for the parameter
        :param value: Parameter value (0-255)
        """
        if not isinstance(value, (int, mylong)) or value < 0 or value > 255:
            raise ProgrammingError("The value must be an int or long value between 0 and 255.")
        self.items.append(struct.pack('ccc', int2byte(code), b('\x01'), int2byte(value)))
    def add_integer_parameter(self, code, value):
        """Add integer value to parameter buffer.

        :param code: Firebird code for the parameter
        :param int value: Parameter value
        """
        if not isinstance(value, (int, mylong)):
            raise ProgrammingError("The value for an integer DPB code must be an int or long.")
        self.items.append(struct.pack('=ccI', int2byte(code), b('\x04'), value))
    def add_byte(self, byte):
        """Add byte value to buffer.

        :param byte: Value to be added.
        """
        self.items.append(struct.pack('b', byte))
    def add_word(self, word):
        """Add two-byte value to buffer.

        :param word: Value to be added.
        """
        self.items.append(struct.pack('<H', word))
    def add_string(self, value):
        """Add string value to buffer.

        :param string value: Value to be added.
        """
        slen = len(value)
        if slen >= 256:
            # Because the length is denoted in the DPB by a single byte.
            raise ProgrammingError("Individual component of"
                                   " parameter buffer is too large. Components must be less"
                                   " than 256 bytes.")
        self.items.append(struct.pack('c%ds' % slen, int2byte(slen), value))
    def get_buffer(self):
        """Get parameter buffer content.

        :returns: Byte string with all inserted parameters.
        """
        return b('').join(self.items)
    def clear(self):
        "Clear all parameters stored in parameter buffer."
        self.items = []
    def get_length(self):
        "Returns actual total length of parameter buffer."
        return sum((len(x) for x in self.items))
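# A short sketch of how ParameterBuffer is used to assemble a database
# parameter buffer (DPB); the parameter values are illustrative and mirror
# what connect() does in its build_dpb() helper below.
def _dpb_demo():
    dpb = ParameterBuffer('UTF8')
    dpb.add_parameter_code(isc_dpb_version1)
    dpb.add_string_parameter(isc_dpb_user_name, 'sysdba')
    dpb.add_byte_parameter(isc_dpb_sql_dialect, 3)
    return dpb.get_buffer(), dpb.get_length()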
""" def build_dpb(user, password, sql_dialect, role, charset, buffers, force_write, no_reserve, db_key_scope, no_gc, no_db_triggers, no_linger): dpb = ParameterBuffer(charset) dpb.add_parameter_code(isc_dpb_version1) if user: dpb.add_string_parameter(isc_dpb_user_name, user) if password: dpb.add_string_parameter(isc_dpb_password, password) if role: dpb.add_string_parameter(isc_dpb_sql_role_name, role) if sql_dialect: dpb.add_byte_parameter(isc_dpb_sql_dialect, sql_dialect) if charset: dpb.add_string_parameter(isc_dpb_lc_ctype, charset.upper()) if buffers: dpb.add_integer_parameter(isc_dpb_num_buffers, buffers) if force_write: dpb.add_byte_parameter(isc_dpb_force_write, force_write) if no_reserve: dpb.add_byte_parameter(isc_dpb_no_reserve, no_reserve) if db_key_scope: dpb.add_byte_parameter(isc_dpb_dbkey_scope, db_key_scope) if no_gc: dpb.add_byte_parameter(isc_dpb_no_garbage_collect, no_gc) if no_db_triggers: dpb.add_byte_parameter(isc_dpb_no_db_triggers, no_db_triggers) if no_linger: dpb.add_byte_parameter(isc_dpb_nolinger, no_linger) return dpb load_api(fb_library_name) if connection_class is None: connection_class = Connection if not issubclass(connection_class, Connection): raise ProgrammingError("'connection_class' must be subclass of Connection") if not user: user = os.environ.get('ISC_USER', None) if not password: password = os.environ.get('ISC_PASSWORD', None) if sql_dialect not in [1, 2, 3]: raise ProgrammingError("SQl Dialect must be either 1, 2 or 3") if ((not dsn and not host and not database) or (dsn and (host or database)) or (host and not database)): raise ProgrammingError("Must supply one of:\n" " 1. keyword argument dsn='host:/path/to/database'\n" " 2. both keyword arguments host='host' and" " database='/path/to/database'\n" " 3. only keyword argument database='/path/to/database'") if not dsn: if host and host.endswith(':'): raise ProgrammingError("Host must not end with a colon." " You should specify host='%s' rather than host='%s'." % (host[:-1], host)) elif host: if port: dsn = '%s/%d:%s' % (host, port, database) else: dsn = '%s:%s' % (host, database) else: if port: dsn = 'localhost/%d:%s' % (port, database) else: dsn = database dsn = b(dsn, _FS_ENCODING) if charset: charset = charset.upper() # dpb = build_dpb(user, password, sql_dialect, role, charset, buffers, force_write, no_reserve, db_key_scope, no_gc, no_db_triggers, no_linger) # # Pre-attach hook # con = None for hook in get_hooks(HOOK_DATABASE_ATTACH_REQUEST): try: con = hook(dsn, dpb) except Exception as e: raise ProgrammingError("Error in DATABASE_ATTACH_REQUEST hook.", *e.args) if con is not None: break # if con is None: _isc_status = ISC_STATUS_ARRAY() _db_handle = isc_db_handle(0) dpbuf = dpb.get_buffer() api.isc_attach_database(_isc_status, len(dsn), dsn, _db_handle, len(dpbuf), dpbuf) if db_api_error(_isc_status): raise exception_from_status(DatabaseError, _isc_status, "Error while connecting to database:") con = connection_class(_db_handle, dpbuf, sql_dialect, charset, isolation_level) # for hook in get_hooks(HOOK_DATABASE_ATTACHED): hook(con) # return con def create_database(sql='', sql_dialect=3, dsn='', user=None, password=None, host=None, port=None, database=None, page_size=None, length=None, charset=None, files=None, connection_class=None, fb_library_name=None): """ Creates a new database. Parameters could be specified either by supplied "CREATE DATABASE" statement, or set of database parameters. :param sql: "CREATE DATABASE" statement. :param sql_dialect: SQL Dialect for newly created database. 
def create_database(sql='', sql_dialect=3, dsn='', user=None, password=None,
                    host=None, port=None, database=None,
                    page_size=None, length=None, charset=None, files=None,
                    connection_class=None, fb_library_name=None):
    """Creates a new database. Parameters could be specified either by supplied
    "CREATE DATABASE" statement, or set of database parameters.

    :param sql: "CREATE DATABASE" statement.
    :param sql_dialect: SQL Dialect for newly created database.
    :type sql_dialect: 1, 2 or 3
    :param dsn: Connection string in format [host[/port]]:database
    :param string user: User name. If not specified, fdb attempts to use ISC_USER envar.
    :param string password: User password. If not specified, fdb attempts to use ISC_PASSWORD envar.
    :param string host: Server host machine specification.
    :param integer port: Port used by Firebird server.
    :param string database: Database specification (file spec. or alias)
    :param integer page_size: Database page size.
    :param integer length: Database size in pages.
    :param string charset: Character set for connection.
    :param string files: Specification of secondary database files.
    :param connection_class: Custom connection class
    :type connection_class: subclass of :class:`Connection`
    :param string fb_library_name: Full path to Firebird client library. See :func:`~fdb.load_api` for details.

    :returns: Connection to the newly created database.
    :rtype: :class:`Connection` instance.

    :raises `~fdb.ProgrammingError`: For bad parameter values.
    :raises `~fdb.DatabaseError`: When database creation fails.

    **Example:**

    .. code-block:: python

        con = fdb.create_database("create database '/temp/db.fdb' user 'sysdba' password 'pass'")
        con = fdb.create_database(dsn='/temp/db.fdb',user='sysdba',password='pass',page_size=8192)

    **Hooks:**

    Event `HOOK_DATABASE_ATTACHED`: Executed before :class:`Connection` (or
    subclass) instance is returned. Hook must have signature:
    hook_func(connection). Any value returned by hook is ignored.
    """
    load_api(fb_library_name)
    if connection_class is None:
        connection_class = Connection
    if not issubclass(connection_class, Connection):
        raise ProgrammingError("'connection_class' must be subclass of Connection")

    # Database to create must be specified by either `sql` or other parameters.
    if sql:
        if isinstance(sql, myunicode):
            sql = sql.encode(_FS_ENCODING)
    else:
        if not user:
            user = os.environ.get('ISC_USER', None)
        if not password:
            password = os.environ.get('ISC_PASSWORD', None)
        if sql_dialect not in [1, 2, 3]:
            raise ProgrammingError("SQL Dialect must be either 1, 2 or 3")

        if ((not dsn and not host and not database)
                or (dsn and (host or database))
                or (host and not database)):
            raise ProgrammingError("Must supply one of:\n"
                                   " 1. keyword argument dsn='host:/path/to/database'\n"
                                   " 2. both keyword arguments host='host' and"
                                   " database='/path/to/database'\n"
                                   " 3. only keyword argument database='/path/to/database'")
        if not dsn:
            if host and host.endswith(':'):
                raise ProgrammingError("Host must not end with a colon."
                                       " You should specify host='%s' rather than host='%s'."
                                       % (host[:-1], host))
            elif host:
                if port:
                    dsn = '%s/%d:%s' % (host, port, database)
                else:
                    dsn = '%s:%s' % (host, database)
            else:
                if port:
                    dsn = 'localhost/%d:%s' % (port, database)
                else:
                    dsn = database

        # Build the CREATE DATABASE statement from the supplied parameters.
        sql = "create database '%s' user '%s' password '%s'" % (dsn, user, password)
        if page_size:
            sql = '%s page_size %i' % (sql, page_size)
        if length:
            sql = '%s length %i' % (sql, length)
        if charset:
            sql = '%s default character set %s' % (sql, charset.upper())
        if files:
            sql = '%s %s' % (sql, files)
        sql = b(sql, _FS_ENCODING)

    isc_status = ISC_STATUS_ARRAY(0)
    trans_handle = isc_tr_handle(0)
    db_handle = isc_db_handle(0)
    xsqlda = xsqlda_factory(1)
    # For yet unknown reason, the isc_dsql_execute_immediate segfaults when
    # NULL (None) is passed as XSQLDA, so we provide one here
    api.isc_dsql_execute_immediate(isc_status, db_handle, trans_handle,
                                   ctypes.c_ushort(len(sql)), sql, sql_dialect,
                                   ctypes.cast(ctypes.pointer(xsqlda), XSQLDA_PTR))
    if db_api_error(isc_status):
        raise exception_from_status(DatabaseError, isc_status,
                                    "Error while creating database:")

    con = connection_class(db_handle, sql_dialect=sql_dialect, charset=charset)
    for hook in get_hooks(HOOK_DATABASE_ATTACHED):
        hook(con)
    return con
class _cursor_weakref_callback(object):
    """Wraps callback function used in weakrefs so it's called only if it still
    exists.
    """
    def __init__(self, obj):
        self.__obj = weakref.ref(obj)
    def __call__(self, *args, **kwargs):
        self.__obj()._cursors.remove(*args, **kwargs)

class _weakref_callback(object):
    """Wraps callback function used in weakrefs so it's called only if it still
    exists.
    """
    def __init__(self, func):
        self.__funcref = weakref.ref(func)
    def __call__(self, *args, **kwargs):
        func = self.__funcref()
        if func:
            func(*args, **kwargs)

class TransactionContext(object):
    """Context Manager that manages transaction for object passed to constructor.

    Performs `rollback` if exception is thrown inside code block, otherwise
    performs `commit` at the end of block.

    .. note::

       :class:`~fdb.Transaction` acts as context manager and supports `with`
       statement directly.

    Example::

        with TransactionContext(my_transaction):
            cursor.execute('insert into tableA (x,y) values (?,?)', (x, y))
            cursor.execute('insert into tableB (x,y) values (?,?)', (x, y))
    """
    #: Transaction-like object this instance manages.
    transaction = None
    def __init__(self, transaction):
        ":param transaction: Any object that supports `begin()`, `commit()` and `rollback()`."
        self.transaction = transaction
    def __enter__(self):
        self.transaction.begin()
        return self.transaction
    def __exit__(self, exc_type, exc_val, exc_tb):
        if exc_type is None:
            self.transaction.commit()
        else:
            self.transaction.rollback()

class Connection(object):
    """Represents a connection between the database client (the Python process)
    and the database server.

    .. important::

       DO NOT create instances of this class directly! Use only
       :func:`connect` or :func:`create_database` to get Connection instances.
    """
    # PEP 249 (Python DB API 2.0) extensions
    Warning = Warning
    Error = Error
    InterfaceError = InterfaceError
    DatabaseError = DatabaseError
    DataError = DataError
    OperationalError = OperationalError
    IntegrityError = IntegrityError
    InternalError = InternalError
    ProgrammingError = ProgrammingError
    NotSupportedError = NotSupportedError

    def __init__(self, db_handle, dpb=None, sql_dialect=3, charset=None,
                 isolation_level=ISOLATION_LEVEL_READ_COMMITED):
        """
        :param db_handle: Database handle provided by factory function.
        :param dpb: Database Parameter Block associated with database handle.
:param integer sql_dialect: SQL Dialect associated with database handle. :param string charset: Character set associated with database handle. """ if charset: self.__charset = charset.upper() else: self.__charset = None self._python_charset = charset_map.get(self.charset, self.charset) self._default_tpb = isolation_level # Main transaction self._main_transaction = Transaction([self], default_tpb=self._default_tpb) # ReadOnly ReadCommitted transaction self._query_transaction = Transaction([self], default_tpb=ISOLATION_LEVEL_READ_COMMITED_RO) self._transactions = [self._main_transaction, self._query_transaction] self.__precision_cache = {} self.__sqlsubtype_cache = {} self.__conduits = [] self.__group = None self.__schema = None self.__monitor = None self.__ods = None self.__sql_dialect = sql_dialect self._dpb = dpb self._isc_status = ISC_STATUS_ARRAY() self._db_handle = db_handle # Cursor for internal use self.__ic = self.query_transaction.cursor() self.__ic._set_as_internal() # Get Firebird engine version verstr = self.db_info(isc_info_firebird_version) x = verstr.split() if x[0].find('V') > 0: (x, self.__version) = x[0].split('V') elif x[0].find('T') > 0: (x, self.__version) = x[0].split('T') else: # Unknown version self.__version = '0.0.0.0' x = self.__version.split('.') self.__engine_version = float('%s.%s' % (x[0], x[1])) # self.__page_size = self.db_info(isc_info_page_size) def __remove_group(self, group_ref): self.__group = None def __ensure_group_membership(self, must_be_member, err_msg): if must_be_member: if self.group is None: raise ProgrammingError(err_msg) else: if self.group is not None: raise ProgrammingError(err_msg) def __check_attached(self): if self._db_handle is None: raise ProgrammingError("Connection object is detached from database") def __close(self, detach=True): if self._db_handle != None: if self.__schema: self.__schema._close() if self.__monitor: self.__monitor._close() self.__ic.close() del self.__ic try: for conduit in self.__conduits: conduit.close() for transaction in self._transactions: transaction.default_action = 'rollback' # Required by Python DB API 2.0 transaction.close() if detach: api.isc_detach_database(self._isc_status, self._db_handle) finally: self._db_handle = None for hook in get_hooks(HOOK_DATABASE_CLOSED): hook(self) # def __enter__(self): return self def __exit__(self, *args): self.close() def __get_sql_dialect(self): return self.__sql_dialect def __get_main_transaction(self): return self._main_transaction def __get_query_transaction(self): return self._query_transaction def __get_transactions(self): return tuple(self._transactions) def __get_closed(self): return self._db_handle is None def __get_server_version(self): return self.db_info(isc_info_version) def __get_firebird_version(self): return self.db_info(isc_info_firebird_version) def __get_version(self): return self.__version def __get_engine_version(self): return self.__engine_version def __get_default_tpb(self): return self._default_tpb def __set_default_tpb(self, value): self._default_tpb = _validate_tpb(value) def __get_charset(self): return self.__charset def __set_charset(self, value): # More informative error message: raise AttributeError("A connection's 'charset' property can be" " specified upon Connection creation as a keyword argument to" " fdb.connect, but it cannot be modified thereafter.") def __get_group(self): if self.__group: try: return self.__group() except: return None else: return None def __get_ods(self): if not self.__ods: self.__ods = float('%d.%d' % 
(self.ods_version, self.ods_minor_version)) return self.__ods def __get_ods_version(self): return self.db_info(isc_info_ods_version) def __get_ods_minor_version(self): return self.db_info(isc_info_ods_minor_version) def __get_page_size(self): return self.__page_size def __get_page_cache_size(self): return self.db_info(isc_info_num_buffers) def __get_database_name(self): return self.db_info(isc_info_db_id)[0] def __get_site_name(self): return self.db_info(isc_info_db_id)[1] def __get_attachment_id(self): return self.db_info(isc_info_attachment_id) def __get_io_stats(self): return self.db_info([isc_info_reads, isc_info_writes, isc_info_fetches, isc_info_marks]) def __get_current_memory(self): return self.db_info(isc_info_current_memory) def __get_max_memory(self): return self.db_info(isc_info_max_memory) def __get_pages_allocated(self): return self.db_info(isc_info_allocation) def __get_database_sql_dialect(self): return self.db_info(isc_info_db_sql_dialect) def __get_sweep_interval(self): return self.db_info(isc_info_sweep_interval) def __get_space_reservation(self): value = self.db_info(isc_info_no_reserve) return value == 0 def __get_forced_writes(self): value = self.db_info(isc_info_forced_writes) return value == 1 def __get_creation_date(self): return self.db_info(isc_info_creation_date) def __get_implementation_id(self): return self.db_info(isc_info_implementation)[0] def __get_provider_id(self): return self.db_info(isc_info_db_provider) def __get_db_class_id(self): return self.db_info(isc_info_db_class) def __get_oit(self): return self.db_info(isc_info_oldest_transaction) def __get_oat(self): return self.db_info(isc_info_oldest_active) def __get_ost(self): return self.db_info(isc_info_oldest_snapshot) def __get_next_transaction(self): return self.db_info(isc_info_next_transaction) def __parse_date(self, raw_value): "Convert raw data to datetime.date" nday = bytes_to_int(raw_value) + 678882 century = (4 * nday - 1) // 146097 nday = 4 * nday - 1 - 146097 * century day = nday // 4 nday = (4 * day + 3) // 1461 day = 4 * day + 3 - 1461 * nday day = (day + 4) // 4 month = (5 * day - 3) // 153 day = 5 * day - 3 - 153 * month day = (day + 5) // 5 year = 100 * century + nday if month < 10: month += 3 else: month -= 9 year += 1 return year, month, day def __parse_time(self, raw_value): "Convert raw data to datetime.time" n = bytes_to_int(raw_value) s = n // 10000 m = s // 60 h = m // 60 m = m % 60 s = s % 60 return (h, m, s, (n % 10000) * 100) def _get_schema(self): if not self.__schema: self.__schema = schema.Schema() self.__schema.bind(self) self.__schema._set_as_internal() return self.__schema def _get_monitor(self): if not self.__monitor: if self.ods >= ODS_FB_21: self.__monitor = monitor.Monitor() self.__monitor.bind(self) self.__monitor._set_as_internal() else: raise ProgrammingError("Monitoring tables are available only " \ "for databases with ODS 11.1 and higher.") return self.__monitor def _get_array_sqlsubtype(self, relation, column): subtype = self.__sqlsubtype_cache.get((relation, column)) if subtype is not None: return subtype self.__ic.execute("SELECT FIELD_SPEC.RDB$FIELD_SUB_TYPE" " FROM RDB$FIELDS FIELD_SPEC, RDB$RELATION_FIELDS REL_FIELDS" " WHERE" " FIELD_SPEC.RDB$FIELD_NAME = REL_FIELDS.RDB$FIELD_SOURCE" " AND REL_FIELDS.RDB$RELATION_NAME = ?" 
" AND REL_FIELDS.RDB$FIELD_NAME = ?", (p3fix(relation, self._python_charset), p3fix(column, self._python_charset))) result = self.__ic.fetchone() self.__ic.close() if result: self.__sqlsubtype_cache[(relation, column)] = result[0] return result[0] def _determine_field_precision(self, sqlvar): if sqlvar.relname_length == 0 or sqlvar.sqlname_length == 0: # Either or both field name and relation name are not provided, # so we cannot determine field precision. It's normal situation # for example for queries with dynamically computed fields return 0 # Special case for automatic RDB$DB_KEY fields. if ((sqlvar.sqlname_length == 6 and sqlvar.sqlname == 'DB_KEY') or (sqlvar.sqlname_length == 10 and sqlvar.sqlname == 'RDB$DB_KEY')): return 0 precision = self.__precision_cache.get((sqlvar.relname, sqlvar.sqlname)) if precision is not None: return precision # First, try table self.__ic.execute("SELECT FIELD_SPEC.RDB$FIELD_PRECISION" " FROM RDB$FIELDS FIELD_SPEC," " RDB$RELATION_FIELDS REL_FIELDS" " WHERE" " FIELD_SPEC.RDB$FIELD_NAME =" " REL_FIELDS.RDB$FIELD_SOURCE" " AND REL_FIELDS.RDB$RELATION_NAME = ?" " AND REL_FIELDS.RDB$FIELD_NAME = ?", (p3fix(sqlvar.relname, self._python_charset), p3fix(sqlvar.sqlname, self._python_charset))) result = self.__ic.fetchone() self.__ic.close() if result: self.__precision_cache[(sqlvar.relname, sqlvar.sqlname)] = result[0] return result[0] # Next, try stored procedure output parameter self.__ic.execute("SELECT FIELD_SPEC.RDB$FIELD_PRECISION" " FROM RDB$FIELDS FIELD_SPEC," " RDB$PROCEDURE_PARAMETERS REL_FIELDS" " WHERE" " FIELD_SPEC.RDB$FIELD_NAME =" " REL_FIELDS.RDB$FIELD_SOURCE" " AND RDB$PROCEDURE_NAME = ?" " AND RDB$PARAMETER_NAME = ?" " AND RDB$PARAMETER_TYPE = 1", (p3fix(sqlvar.relname, self._python_charset), p3fix(sqlvar.sqlname, self._python_charset))) result = self.__ic.fetchone() self.__ic.close() if result: self.__precision_cache[(sqlvar.relname, sqlvar.sqlname)] = result[0] return result[0] # We ran out of options return 0 def drop_database(self): """Drops the database to which this connection is attached. Unlike plain file deletion, this method behaves responsibly, in that it removes shadow files and other ancillary files for this database. :raises `~fdb.ProgrammingError`: When connection is a member of a :class:`ConnectionGroup`. :raises `~fdb.DatabaseError`: When error is returned from server. """ self.__ensure_group_membership(False, "Cannot drop database via" " connection that is part of a ConnectionGroup.") saved_handle = isc_db_handle(self._db_handle.value) self.__close(detach=False) api.isc_drop_database(self._isc_status, saved_handle) if db_api_error(self._isc_status): self._db_handle = saved_handle raise exception_from_status(DatabaseError, self._isc_status, "Error while dropping database:") def execute_immediate(self, sql): """Executes a statement in context of :attr:`main_transaction` without caching its prepared form. Automatically starts transaction if it's not already started. :param string sql: SQL statement to execute. .. important:: **The statement must not be of a type that returns a result set.** In most cases (especially cases in which the same statement – perhaps a parameterized statement – is executed repeatedly), it is better to create a cursor using the connection’s cursor method, then execute the statement using one of the cursor’s execute methods. :param string sql: SQL statement to execute. :raises `~fdb.ProgrammingError`: When connection is closed. :raises `~fdb.DatabaseError`: When error is returned from server. 
""" self.__check_attached() self.main_transaction.execute_immediate(sql) def database_info(self, info_code, result_type, page_number=None): """Wraps the Firebird C API function `isc_database_info`. For documentation, see the IB 6 API Guide section entitled "Requesting information about an attachment" (p. 51). Note that this method is a VERY THIN wrapper around the FB C API function `isc_database_info`. This method does NOT attempt to interpret its results except with regard to whether they are a string or an integer. For example, requesting `isc_info_user_names` will return a string containing a raw succession of length-name pairs. A thicker wrapper might interpret those raw results and return a Python tuple, but it would need to handle a multitude of special cases in order to cover all possible isc_info_* items. :param integer info_code: One of the `isc_info_*` constants. :param string result_type: Must be either ‘b’ if you expect a binary string result, or ‘i’ if you expect an integer result. :param integer page_number: Page number for `fb_info_page_contents` info code. :raises `~fdb.DatabaseError`: When error is returned from server. :raises `~fdb.OperationalError`: When returned information is bigger than SHRT_MAX. :raises `~fdb.InternalError`: On unexpected processing condition. :raises `ValueError`: On illegal `result_type` value. .. seealso:: Extracting data with the database_info function is rather clumsy. See :meth:`db_info` for higher-level means of accessing the same information. .. note:: Some of the information available through this method would be more easily retrieved with the Services API (see submodule :mod:`fdb.services`). """ self.__check_attached() buf_size = 256 if info_code != fb_info_page_contents else self.page_size + 10 request_buffer = bs([info_code]) if info_code == fb_info_page_contents: request_buffer += int_to_bytes(4, 2) request_buffer += int_to_bytes(page_number, 4) while True: res_buf = int2byte(0) * buf_size api.isc_database_info(self._isc_status, self._db_handle, len(request_buffer), request_buffer, len(res_buf), res_buf) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while requesting database information:") i = buf_size - 1 while i >= 0: if res_buf[i] != mychr(0): break else: i -= 1 if ord2(res_buf[i]) == isc_info_truncated: if buf_size < SHRT_MAX: buf_size *= 2 if buf_size > SHRT_MAX: buf_size = SHRT_MAX continue else: raise OperationalError("Result is too large to fit into" " buffer of size SHRT_MAX, yet underlying info " " function only accepts buffers with size <= SHRT_MAX.") else: break if ord2(res_buf[i]) != isc_info_end: raise InternalError("Exited request loop sucessfuly, but" " res_buf[i] != sc_info_end.") if (request_buffer[0] != res_buf[0]) and (info_code != isc_info_active_transactions): # isc_info_active_transactions with no active transactions returns empty buffer # and does not follow this rule, so we'll report it only for other codes. raise InternalError("Result code does not match request code.") if result_type.upper() == 'I': return bytes_to_int(res_buf[3:3 + bytes_to_int(res_buf[1:3])]) elif (result_type.upper() == 'B' and info_code in _DATABASE_INFO__KNOWN_LOW_LEVEL_EXCEPTIONS): # The result buffers for a few request codes don't follow the generic # conventions, so we need to return their full contents rather than # omitting the initial infrastructural bytes. 
return ctypes.string_at(res_buf, i) #elif result_type.upper() == 'S': #return ctypes.string_at(res_buf[3:], bytes_to_int(res_buf[1:3])) elif result_type.upper() == 'B': return ctypes.string_at(res_buf[3:], bytes_to_int(res_buf[1:3])) else: raise ValueError("Unknown result type requested (must be 'i', 'b' or 's').") def db_info(self, request): """ Higher-level convenience wrapper around the :meth:`database_info` method that parses the output of `database_info` into Python-friendly objects instead of returning raw binary buffers in the case of complex result types. :param request: Single `fdb.isc_info_*` info request code or a sequence of such codes. :returns: Mapping of (info request code -> result). :raises `ValueError`: When requested code is not recognized. :raises `~fdb.OperationalError`: On unexpected processing condition. """ def _extract_database_info_counts(buf): # Extract a raw binary sequence # of (unsigned short, signed int) pairs into # a corresponding Python dictionary. ushort_size = struct.calcsize(' 0: slen = unpack_num(buf, pos) pos += 1 item = buf[pos:pos + slen] pos += slen items.append(p3fix(item, self._python_charset)) count -= 1 results[info_code] = tuple(items) elif info_code == fb_info_implementation: buf = self.database_info(info_code, 'b') pos = 1 cpu_id = unpack_num(buf, pos) pos += 1 os_id = unpack_num(buf, pos) pos += 1 compiler_id = unpack_num(buf, pos) pos += 1 flags = unpack_num(buf, pos) pos += 1 class_number = unpack_num(buf, pos) results[info_code] = (cpu_id, os_id, compiler_id, flags, class_number) elif info_code == isc_info_implementation: buf = self.database_info(info_code, 'b') pos = 1 impl_number = unpack_num(buf, pos) pos += 1 class_number = unpack_num(buf, pos) results[info_code] = (impl_number, class_number) elif info_code in (isc_info_version, isc_info_firebird_version): buf = self.database_info(info_code, 'b') pos = 1 version_string_len = unpack_num(buf, pos) pos += 1 results[info_code] = p3fix(buf[pos:pos + version_string_len], self._python_charset) elif info_code == isc_info_user_names: # The isc_info_user_names results buffer does not exactly match # the format declared on page 54 of the IB 6 API Guide. # The buffer is formatted as a sequence of clusters, each of # which begins with the byte isc_info_user_names, followed by a # two-byte cluster length, followed by a one-byte username # length, followed by a single username. # I don't understand why the lengths are represented # redundantly (the two-byte cluster length is always one # greater than the one-byte username length), but perhaps it's # an attempt to adhere to the general format of an information # cluster declared on page 51 while also [trying, but failing # to] adhere to the isc_info_user_names-specific format # declared on page 54. buf = self.database_info(info_code, 'b') usernames = [] pos = 0 while pos < len(buf): ### Todo: check Python version differences, merge as possible if PYTHON_MAJOR_VER == 3: if buf[pos] != isc_info_user_names: raise OperationalError('While trying to service' ' isc_info_user_names request, found unexpected' ' results buffer contents at position %d of [%s]' % (pos, buf)) pos += 1 # The two-byte cluster length: name_cluster_len = (struct.unpack(' number of connections by that user. 
res = {} for un in usernames: res[un] = res.get(un, 0) + 1 results[info_code] = res elif info_code in (isc_info_active_transactions, isc_info_limbo): buf = self.database_info(info_code, 'b') transactions = [] ushort_size = struct.calcsize('= 0 or attr.startswith('_') or attr in ['close', 'bind'] or hasattr(self, attr)): val = getattr(self.__schema, attr) if callable(val): setattr(self, attr, val) def _get_schema(self): return self.__schema class EventBlock(object): """Represents Firebird API structure for block of events. .. warning:: Internally used class, not intended for direct use. """ #: List of registered event names event_names = [] #: Length of internal event buffer buf_length = 0 #: Event ID event_id = 0 #: Event buffer event_buf = None #: Result buffer result_buf = None def __init__(self, queue, db_handle, event_names): self.__first = True def callback(result, length, updated): ctypes.memmove(result, updated, length) self.__queue.put((ibase.OP_RECORD_AND_REREGISTER, self)) return 0 self.__queue = weakref.proxy(queue) self._db_handle = db_handle self._isc_status = ISC_STATUS_ARRAY(0) self.event_names = list(event_names) self.__results = RESULT_VECTOR(0) self.__closed = False self.__callback = ISC_EVENT_CALLBACK(callback) self.event_buf = ctypes.pointer(ISC_UCHAR(0)) self.result_buf = ctypes.pointer(ISC_UCHAR(0)) self.buf_length = 0 self.event_id = ISC_LONG(0) self.buf_length = api.isc_event_block(ctypes.pointer(self.event_buf), ctypes.pointer(self.result_buf), *[b(x) for x in event_names]) def _begin(self): self.__wait_for_events() def __lt__(self, other): return self.event_id.value < other.event_id.value def __wait_for_events(self): api.isc_que_events(self._isc_status, self._db_handle, self.event_id, self.buf_length, self.event_buf, self.__callback, self.result_buf) if db_api_error(self._isc_status): self.close() raise exception_from_status(DatabaseError, self._isc_status, "Error while waiting for events:") def count_and_reregister(self): "Count event occurrences and reregister interest in further notifications." result = {} api.isc_event_counts(self.__results, self.buf_length, self.event_buf, self.result_buf) if self.__first: # Ignore the first call, it's for setting up the table self.__first = False self.__wait_for_events() return None for i in xrange(len(self.event_names)): result[self.event_names[i]] = int(self.__results[i]) self.__wait_for_events() return result def close(self): "Close this block, canceling managed events." if not self.closed: api.isc_cancel_events(self._isc_status, self._db_handle, self.event_id) self.__closed = True del self.__callback if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while canceling events:") def __get_closed(self): return self.__closed def __del__(self): self.close() #: (ReadOnly) True if block is closed for business closed = property(__get_closed) class EventConduit(object): """Represents a conduit through which database event notifications will flow into the Python program. .. important:: DO NOT create instances of this class directly! Use only :meth:`Connection.event_conduit` to get EventConduit instances. Notifications of any events are not accumulated until the :meth:`begin` method is called. From the moment the :meth:`begin` method is called, notifications of any events that occur will accumulate asynchronously within the conduit’s internal queue until the conduit is closed either explicitly (via the :meth:`close` method) or implicitly (via garbage collection).
`EventConduit` implements context manager protocol to call method :meth:`begin` and :meth:`close` automatically. Example: .. code-block:: python with connection.event_conduit( ('event_a', 'event_b') ) as conduit: events = conduit.wait() process_events(events) """ def __init__(self, db_handle, event_names): """ :param db_handle: Database handle. :param event_names: List of strings that represent event names. """ self._db_handle = db_handle self._isc_status = ISC_STATUS_ARRAY(0) self.__event_names = list(event_names) self.__events = {}.fromkeys(self.__event_names, 0) self.__event_blocks = [] self.__closed = False self.__queue = ibase.PriorityQueue() self.__events_ready = threading.Event() self.__blocks = [[x for x in y if x] for y in izip_longest(*[iter(event_names)]*15)] self.__initialized = False def __enter__(self): self.begin() return self def __exit__(self, exc_type, exc_val, exc_tb): self.close() def begin(self): """Starts listening for events. Must be called directly or through context manager interface.""" def event_process(queue): while True: operation, data = queue.get() if operation == ibase.OP_RECORD_AND_REREGISTER: events = data.count_and_reregister() if events: for key, value in events.items(): self.__events[key] += value self.__events_ready.set() elif operation == ibase.OP_DIE: return self.__initialized = True self.__process_thread = threading.Thread(target=event_process, args=(self.__queue,)) self.__process_thread.start() for block_events in self.__blocks: event_block = EventBlock(self.__queue, self._db_handle, block_events) self.__event_blocks.append(event_block) event_block._begin() def wait(self, timeout=None): """Wait for events. Blocks the calling thread until at least one of the events occurs, or the specified timeout (if any) expires. :param timeout: Number of seconds (use a float to indicate fractions of seconds). If not even one of the relevant events has occurred after timeout seconds, this method will unblock and return None. The default timeout is infinite. :type timeout: integer or float :returns: `None` if the wait timed out, otherwise a dictionary that maps `event_name -> event_occurrence_count`. Example: .. code-block:: python >>>conduit = connection.event_conduit( ('event_a', 'event_b') ) >>>conduit.begin() >>>conduit.wait() { 'event_a': 1, 'event_b': 0 } In the example above `event_a` occurred once and `event_b` did not occur at all. """ if not self.__initialized: raise ProgrammingError("Event collection not initialized. " "It's necessary to call begin().") if not self.closed: self.__events_ready.wait(timeout) return self.__events.copy() def flush(self): """Clear any event notifications that have accumulated in the conduit’s internal queue. """ if not self.closed: self.__events_ready.clear() self.__events = {}.fromkeys(self.__event_names, 0) def close(self): """Cancels the standing request for this conduit to be notified of events. After this method has been called, this EventConduit object is useless, and should be discarded. """ if not self.closed: self.__queue.put((ibase.OP_DIE, self)) self.__process_thread.join() for block in self.__event_blocks: block.close() self.__closed = True def __get_closed(self): return self.__closed def __del__(self): self.close() #: (Read Only) (boolean) True if conduit is closed. closed = property(__get_closed) class PreparedStatement(object): """Represents a prepared statement, an "inner" database cursor, which is used to manage the SQL statement execution and context of a fetch operation. .. 
important:: DO NOT create instances of this class directly! Use only :meth:`Cursor.prep` to get PreparedStatement instances. .. note:: PreparedStatements are bound to the :class:`Cursor` instance that created them, and using them with another Cursor raises an error. #: Constant for internal use by this class. Do not change! RESULT_SET_EXHAUSTED = 100 #: Constant for internal use by this class. Do not change! NO_FETCH_ATTEMPTED_YET = -1 #: :class:`Cursor` instance that manages this PreparedStatement. Do not change! cursor = None #: (integer) An integer code that can be matched against the statement #: type constants in the isc_info_sql_stmt_* series. Do not change! statement_type = 0 #: The number of input parameters the statement requires. Do not change! n_input_params = 0 #: The number of output fields the statement produces. Do not change! n_output_params = 0 def __init__(self, operation, cursor, internal=True): self.__sql = operation self.__internal = internal if internal: self.cursor = weakref.proxy(cursor, _weakref_callback(self.__cursor_deleted)) else: self.cursor = cursor self._stmt_handle = None self._isc_status = ISC_STATUS_ARRAY() # Internal XSQLDA structure for output values. self._out_sqlda = xsqlda_factory(10) # Internal XSQLDA structure for input values. self._in_sqlda = xsqlda_factory(10) # Internal list to save original input SQLDA structures when they have # to be temporarily augmented. self._in_sqlda_save = [] # (integer) An integer code that can be matched against the statement # type constants in the isc_info_sql_stmt_* series. self.statement_type = None self.__streamed_blobs = [] self.__streamed_blob_treshold = 65536 self.__blob_readers = [] self.__executed = False self.__prepared = False self.__closed = False self.__description = None self.__output_cache = None self._last_fetch_status = ISC_STATUS(self.NO_FETCH_ATTEMPTED_YET) connection = self.cursor._connection self.__charset = connection.charset self.__python_charset = connection._python_charset self.__sql_dialect = connection.sql_dialect # allocate statement handle self._stmt_handle = isc_stmt_handle(0) api.isc_dsql_allocate_statement(self._isc_status, connection._db_handle, self._stmt_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while allocating SQL statement:") # prepare statement op = b(operation, self.__python_charset) api.isc_dsql_prepare(self._isc_status, self.cursor._transaction._tr_handle, self._stmt_handle, len(op), op, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while preparing SQL statement:") # Determine statement type info = b(' ') * 20 api.isc_dsql_sql_info(self._isc_status, self._stmt_handle, 1, bs([isc_info_sql_stmt_type]), len(info), info) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement type:") if ord2(info[0]) != isc_info_sql_stmt_type: raise InternalError("Cursor.execute, determine statement type:\n" "first byte must be 'isc_info_sql_stmt_type'") self.statement_type = bytes_to_int(info[3:3 + bytes_to_int(info[1:3])]) # Init XSQLDA for input parameters api.isc_dsql_describe_bind(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._in_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL
statement parameters:") if self._in_sqlda.sqld > self._in_sqlda.sqln: self._in_sqlda = xsqlda_factory(self._in_sqlda.sqld) api.isc_dsql_describe_bind(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._in_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement parameters:") # The number of input parameters the statement requires. self.n_input_params = self._in_sqlda.sqld # record original type and size information so it can be restored for # subsequent executions (mind the implicit string conversions!) for sqlvar in self._in_sqlda.sqlvar[:self.n_input_params]: self._in_sqlda_save.append((sqlvar.sqltype, sqlvar.sqllen)) # Init output XSQLDA api.isc_dsql_describe(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement output:") if self._out_sqlda.sqld > self._out_sqlda.sqln: self._out_sqlda = xsqlda_factory(self._out_sqlda.sqld) api.isc_dsql_describe(self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL statement output:") # The number of output fields the statement produces. self.n_output_params = self._out_sqlda.sqld self.__coerce_xsqlda(self._out_sqlda) self.__prepared = True self._name = None def __cursor_deleted(self, obj): self.cursor = None def __get_name(self): return self._name def __set_name(self, name): if self._name: raise ProgrammingError("Cursor's name has already been declared") self._set_cursor_name(name) def __get_closed(self): return self.__closed def __get_plan(self): buf_size = 256 while True: info = b(' ') * buf_size api.isc_dsql_sql_info(self._isc_status, self._stmt_handle, 2, bs([isc_info_sql_get_plan, isc_info_end]), len(info), info) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining SQL plan:") if ord2(info[0]) == isc_info_truncated: if buf_size < SHRT_MAX: buf_size *= 2 if buf_size > SHRT_MAX: buf_size = SHRT_MAX continue else: return "Plan is too big" else: break if ord2(info[0]) == isc_info_end: return None if ord2(info[0]) != isc_info_sql_get_plan: raise InternalError("Unexpected code in result buffer while" " querying SQL plan.") size = bytes_to_uint(info[1:_SIZE_OF_SHORT + 1]) # Skip first byte: a new line ### Todo: Better handling of P version specifics result = ctypes.string_at(info[_SIZE_OF_SHORT + 2:], size - 1) if PYTHON_MAJOR_VER == 3: return b2u(result, self.__python_charset) #return result.decode(charset_map.get(self.__charset,self.__charset)) else: return result def __get_sql(self): return self.__sql def __is_fixed_point(self, dialect, data_type, subtype, scale): return ((data_type in [SQL_SHORT, SQL_LONG, SQL_INT64] and (subtype or scale)) or ((dialect < 3) and scale and (data_type in [SQL_DOUBLE, SQL_D_FLOAT]))) def __get_external_data_type_name(self, dialect, data_type, subtype, scale): if data_type == SQL_TEXT: return 'CHAR' elif data_type == SQL_VARYING: return 'VARCHAR' elif self.__is_fixed_point(dialect, data_type, subtype, scale): if subtype == SUBTYPE_NUMERIC: return 'NUMERIC' elif subtype == SUBTYPE_DECIMAL: return 'DECIMAL' else: return 'NUMERIC/DECIMAL' elif
data_type == SQL_SHORT: return 'SMALLINT' elif data_type == SQL_LONG: return 'INTEGER' elif data_type == SQL_INT64: return 'BIGINT' elif data_type == SQL_FLOAT: return 'FLOAT' elif data_type in [SQL_DOUBLE, SQL_D_FLOAT]: return 'DOUBLE' elif data_type == SQL_TIMESTAMP: return 'TIMESTAMP' elif data_type == SQL_TYPE_DATE: return 'DATE' elif data_type == SQL_TYPE_TIME: return 'TIME' elif data_type == SQL_BLOB: return 'BLOB' elif data_type == SQL_BOOLEAN: return 'BOOLEAN' else: return 'UNKNOWN' def __get_internal_data_type_name(self, data_type): if data_type == SQL_TEXT: return 'SQL_TEXT' elif data_type == SQL_VARYING: return 'SQL_VARYING' elif data_type == SQL_SHORT: return 'SQL_SHORT' elif data_type == SQL_LONG: return 'SQL_LONG' elif data_type == SQL_INT64: return 'SQL_INT64' elif data_type == SQL_FLOAT: return 'SQL_FLOAT' elif data_type in [SQL_DOUBLE, SQL_D_FLOAT]: return 'SQL_DOUBLE' elif data_type == SQL_TIMESTAMP: return 'SQL_TIMESTAMP' elif data_type == SQL_TYPE_DATE: return 'SQL_TYPE_DATE' elif data_type == SQL_TYPE_TIME: return 'SQL_TYPE_TIME' elif data_type == SQL_BLOB: return 'SQL_BLOB' elif data_type == SQL_BOOLEAN: return 'SQL_BOOLEAN' else: return 'UNKNOWN' def __get_description(self): if not self.__description: desc = [] if self.__prepared and (self._out_sqlda.sqld > 0): for sqlvar in self._out_sqlda.sqlvar[:self._out_sqlda.sqld]: # Field name (or alias) sqlname = p3fix(sqlvar.sqlname[:sqlvar.sqlname_length], self.__python_charset) alias = p3fix(sqlvar.aliasname[:sqlvar.aliasname_length], self.__python_charset) if alias != sqlname: sqlname = alias # Type information intsize = sqlvar.sqllen vartype = sqlvar.sqltype & ~1 scale = sqlvar.sqlscale precision = 0 if vartype in [SQL_TEXT, SQL_VARYING]: vtype = StringType # CHAR with multibyte encoding requires special handling if sqlvar.sqlsubtype in (4, 69): # UTF8 and GB18030 dispsize = sqlvar.sqllen // 4 elif sqlvar.sqlsubtype == 3: # UNICODE_FSS dispsize = sqlvar.sqllen // 3 else: dispsize = sqlvar.sqllen elif (vartype in [SQL_SHORT, SQL_LONG, SQL_INT64] and (sqlvar.sqlsubtype or scale)): vtype = decimal.Decimal precision = (self.cursor._connection._determine_field_precision(sqlvar)) dispsize = 20 elif vartype == SQL_SHORT: vtype = IntType dispsize = 6 elif vartype == SQL_LONG: vtype = IntType dispsize = 11 elif vartype == SQL_INT64: vtype = LongType dispsize = 20 elif vartype in [SQL_FLOAT, SQL_DOUBLE, SQL_D_FLOAT]: # Special case, dialect 1 DOUBLE/FLOAT # could be Fixed point if (self.__sql_dialect < 3) and scale: vtype = decimal.Decimal precision = (self.cursor._connection._determine_field_precision(sqlvar)) else: vtype = FloatType dispsize = 17 elif vartype == SQL_BLOB: scale = sqlvar.sqlsubtype vtype = StringType dispsize = 0 elif vartype == SQL_TIMESTAMP: vtype = datetime.datetime dispsize = 22 elif vartype == SQL_TYPE_DATE: vtype = datetime.date dispsize = 10 elif vartype == SQL_TYPE_TIME: vtype = datetime.time dispsize = 11 elif vartype == SQL_ARRAY: vtype = ListType dispsize = -1 elif vartype == SQL_BOOLEAN: vtype = bool dispsize = 5 else: vtype = None dispsize = -1 desc.append(tuple([sqlname, vtype, dispsize, intsize, precision, scale, bool(sqlvar.sqltype & 1)])) self.__description = tuple(desc) return self.__description def __get_rowcount(self): result = -1 if (self.__executed and self.statement_type in [isc_info_sql_stmt_select, isc_info_sql_stmt_insert, isc_info_sql_stmt_update, isc_info_sql_stmt_delete]): info = b(' ') * 64 api.isc_dsql_sql_info(self._isc_status, self._stmt_handle, 2, bs([isc_info_sql_records, 
isc_info_end]), len(info), info) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while determining rowcount:") if ord2(info[0]) != isc_info_sql_records: raise InternalError("Cursor.get_rowcount:\n" "first byte must be 'isc_info_sql_records'") res_walk = 3 short_size = ctypes.sizeof(ctypes.c_short) while ord2(info[res_walk]) != isc_info_end: cur_count_type = ord2(info[res_walk]) res_walk += 1 size = bytes_to_uint(info[res_walk:res_walk + short_size]) res_walk += short_size count = bytes_to_uint(info[res_walk:res_walk + size]) if ((cur_count_type == isc_info_req_select_count and self.statement_type == isc_info_sql_stmt_select) or (cur_count_type == isc_info_req_insert_count and self.statement_type == isc_info_sql_stmt_insert) or (cur_count_type == isc_info_req_update_count and self.statement_type == isc_info_sql_stmt_update) or (cur_count_type == isc_info_req_delete_count and self.statement_type == isc_info_sql_stmt_delete)): result = count res_walk += size return result def _parse_date(self, raw_value): "Convert raw data to datetime.date" nday = bytes_to_int(raw_value) + 678882 century = (4 * nday - 1) // 146097 nday = 4 * nday - 1 - 146097 * century day = nday // 4 nday = (4 * day + 3) // 1461 day = 4 * day + 3 - 1461 * nday day = (day + 4) // 4 month = (5 * day - 3) // 153 day = 5 * day - 3 - 153 * month day = (day + 5) // 5 year = 100 * century + nday if month < 10: month += 3 else: month -= 9 year += 1 return year, month, day def _parse_time(self, raw_value): "Convert raw data to datetime.time" n = bytes_to_int(raw_value) s = n // 10000 m = s // 60 h = m // 60 m = m % 60 s = s % 60 return (h, m, s, (n % 10000) * 100) def _convert_date(self, v): # Convert datetime.date to BLR format data i = v.month + 9 jy = v.year + (i // 12) - 1 jm = i % 12 c = jy // 100 jy -= 100 * c j = ((146097 * c) // 4 + (1461 * jy) // 4 + (153 * jm + 2) // 5 + v.day - 678882) return int_to_bytes(j, 4) def _convert_time(self, v): # Convert datetime.time to BLR format time t = ((v.hour * 3600 + v.minute * 60 + v.second) * 10000 + v.microsecond // 100) return int_to_bytes(t, 4) def _convert_timestamp(self, v): # Convert datetime.datetime or datetime.date # to BLR format timestamp if isinstance(v, datetime.datetime): return self._convert_date(v.date()) + self._convert_time(v.time()) elif isinstance(v, datetime.date): return self._convert_date(v) + self._convert_time(datetime.time()) else: raise ValueError("datetime.datetime or datetime.date expected") def _check_integer_range(self, value, dialect, data_type, subtype, scale): if data_type == SQL_SHORT: vmin = SHRT_MIN vmax = SHRT_MAX elif data_type == SQL_LONG: vmin = INT_MIN vmax = INT_MAX elif data_type == SQL_INT64: vmin = LONG_MIN vmax = LONG_MAX if (value < vmin) or (value > vmax): msg = """numeric overflow: value %s (%s scaled for %d decimal places) is of too great a magnitude to fit into its internal storage type %s, which has range [%s,%s].""" % (str(value), self.__get_external_data_type_name(dialect, data_type, subtype, scale), scale, self.__get_internal_data_type_name(data_type), str(vmin), str(vmax)) raise ProgrammingError(msg, -802) def __coerce_xsqlda(self, xsqlda): """Allocate space for SQLVAR data. 
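Character columns get ``sqllen + 2`` bytes (room for the two-byte length word of VARYING values); all other recognized types get exactly ``sqllen`` bytes cast to a raw buffer pointer, and nullable columns (``sqltype & 1`` set) also get a NULL indicator word.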
""" for sqlvar in xsqlda.sqlvar[:self._out_sqlda.sqld]: if sqlvar.sqltype & 1: sqlvar.sqlind = ctypes.pointer(ISC_SHORT(0)) vartype = sqlvar.sqltype & ~1 if vartype in [SQL_TEXT, SQL_VARYING]: sqlvar.sqldata = ctypes.create_string_buffer(sqlvar.sqllen + 2) elif vartype == SQL_SHORT: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_LONG: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_INT64: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_FLOAT: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_DOUBLE: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_D_FLOAT: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_BLOB: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_TIMESTAMP: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_TYPE_DATE: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_TYPE_TIME: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_ARRAY: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) elif vartype == SQL_BOOLEAN: sqlvar.sqldata = ctypes.cast(ctypes.create_string_buffer( sqlvar.sqllen), buf_pointer) else: pass def __xsqlda2tuple(self, xsqlda): """Move data from output XSQLDA to result tuple. """ values = [] for sqlvar in xsqlda.sqlvar[:xsqlda.sqld]: value = '' vartype = sqlvar.sqltype & ~1 scale = sqlvar.sqlscale # NULL handling if ((sqlvar.sqltype & 1) != 0) and (bool(sqlvar.sqlind) and sqlvar.sqlind.contents.value == -1): value = None elif vartype == SQL_TEXT: value = ctypes.string_at(sqlvar.sqldata, sqlvar.sqllen) #value = sqlvar.sqldata[:sqlvar.sqllen] ### Todo: verify handling of P version differences if ((self.__charset or PYTHON_MAJOR_VER == 3) and sqlvar.sqlsubtype != 1): # non OCTETS value = b2u(value, self.__python_charset) # CHAR with multibyte encoding requires special handling if sqlvar.sqlsubtype in (4, 69): # UTF8 and GB18030 reallength = sqlvar.sqllen // 4 elif sqlvar.sqlsubtype == 3: # UNICODE_FSS reallength = sqlvar.sqllen // 3 else: reallength = sqlvar.sqllen value = value[:reallength] elif vartype == SQL_VARYING: size = bytes_to_uint(sqlvar.sqldata[:2]) #value = ctypes.string_at(sqlvar.sqldata[2],2+size) ### Todo: verify handling of P version differences if PYTHON_MAJOR_VER == 3: value = bytes(sqlvar.sqldata[2:2 + size]) else: value = str(sqlvar.sqldata[2:2 + size]) if ((self.__charset or PYTHON_MAJOR_VER == 3) and sqlvar.sqlsubtype != 1): # non OCTETS value = b2u(value, self.__python_charset) elif vartype == SQL_BOOLEAN: value = bool(bytes_to_int(sqlvar.sqldata.contents.value)) elif vartype in [SQL_SHORT, SQL_LONG, SQL_INT64]: value = bytes_to_int(sqlvar.sqldata[:sqlvar.sqllen]) # It's scalled integer? 
if sqlvar.sqlsubtype or scale: value = decimal.Decimal(value) / _tenTo[abs(scale)] elif vartype == SQL_TYPE_DATE: yyyy, mm, dd = self._parse_date(sqlvar.sqldata[:sqlvar.sqllen]) value = datetime.date(yyyy, mm, dd) elif vartype == SQL_TYPE_TIME: h, m, s, ms = self._parse_time(sqlvar.sqldata[:sqlvar.sqllen]) value = datetime.time(h, m, s, ms) elif vartype == SQL_TIMESTAMP: yyyy, mm, dd = self._parse_date(sqlvar.sqldata[:4]) h, m, s, ms = self._parse_time(sqlvar.sqldata[4:sqlvar.sqllen]) value = datetime.datetime(yyyy, mm, dd, h, m, s, ms) elif vartype == SQL_FLOAT: value = struct.unpack('f', sqlvar.sqldata[:sqlvar.sqllen])[0] elif vartype == SQL_DOUBLE: value = struct.unpack('d', sqlvar.sqldata[:sqlvar.sqllen])[0] elif vartype == SQL_BOOLEAN: value = bytes_to_int(sqlvar.sqldata[:sqlvar.sqllen]) value = value == 1 elif vartype == SQL_BLOB: val = sqlvar.sqldata[:sqlvar.sqllen] blobid = ISC_QUAD(bytes_to_uint(val[:4]), bytes_to_uint(val[4:sqlvar.sqllen])) # Check if stream BLOB is requested instead materialized one use_stream = False if self.__streamed_blobs: # Get the BLOB name sqlname = p3fix(sqlvar.sqlname[:sqlvar.sqlname_length], self.__python_charset) alias = p3fix(sqlvar.aliasname[:sqlvar.aliasname_length], self.__python_charset) if alias != sqlname: sqlname = alias if sqlname in self.__streamed_blobs: use_stream = True if use_stream: # Stream BLOB value = BlobReader(blobid, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.sqlsubtype == 1, self.__charset) self.__blob_readers.append(value) else: # Materialized BLOB blob_handle = isc_blob_handle() api.isc_open_blob2(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, blob_handle, blobid, 0, None) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_open_blob2:") # Get BLOB total length and max. size of segment result = ctypes.cast(ctypes.create_string_buffer(20), buf_pointer) api.isc_blob_info(self._isc_status, blob_handle, 2, bs([isc_info_blob_total_length, isc_info_blob_max_segment]), 20, result) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_blob_info:") offset = 0 while bytes_to_uint(result[offset]) != isc_info_end: code = bytes_to_uint(result[offset]) offset += 1 if code == isc_info_blob_total_length: length = bytes_to_uint(result[offset:offset + 2]) blob_length = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_max_segment: length = bytes_to_uint(result[offset:offset + 2]) segment_size = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 # Does the blob size exceeds treshold for streamed one? 
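# (If it does, the value is handed back as a BlobReader instead of being
# materialized in memory; a negative __streamed_blob_treshold disables
# streaming and always materializes the value.)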
if ((self.__streamed_blob_treshold >= 0) and (blob_length > self.__streamed_blob_treshold)): # Stream BLOB value = BlobReader(blobid, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.sqlsubtype == 1, self.__charset) self.__blob_readers.append(value) else: # Load BLOB allow_incomplete_segment_read = True status = ISC_STATUS(0) blob = ctypes.create_string_buffer(blob_length) bytes_read = 0 bytes_actually_read = ctypes.c_ushort(0) while bytes_read < blob_length: status = api.isc_get_segment(self._isc_status, blob_handle, bytes_actually_read, min(segment_size, blob_length - bytes_read), ctypes.byref(blob, bytes_read)) if status != 0: if (status == isc_segment) and allow_incomplete_segment_read: bytes_read += bytes_actually_read.value else: raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_get_segment:") else: bytes_read += bytes_actually_read.value # Finalize value value = blob.raw if (self.__charset or PYTHON_MAJOR_VER == 3) and sqlvar.sqlsubtype == 1: value = b2u(value, self.__python_charset) # Close blob api.isc_close_blob(self._isc_status, blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_close_blob:") elif vartype == SQL_ARRAY: value = [] val = sqlvar.sqldata[:sqlvar.sqllen] arrayid = ISC_QUAD(bytes_to_uint(val[:4]), bytes_to_uint(val[4:sqlvar.sqllen])) arraydesc = ISC_ARRAY_DESC(0) sqlsubtype = self.cursor._connection._get_array_sqlsubtype(sqlvar.relname, sqlvar.sqlname) api.isc_array_lookup_bounds(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.relname, sqlvar.sqlname, arraydesc) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_array/isc_array_lookup_bounds:") value_type = arraydesc.array_desc_dtype value_scale = arraydesc.array_desc_scale value_size = arraydesc.array_desc_length if value_type in (blr_varying, blr_varying2): value_size += 2 dimensions = [] total_num_elements = 1 for dimension in xrange(arraydesc.array_desc_dimensions): bounds = arraydesc.array_desc_bounds[dimension] dimensions.append((bounds.array_bound_upper+1)-bounds.array_bound_lower) total_num_elements *= dimensions[dimension] total_size = total_num_elements * value_size buf = ctypes.create_string_buffer(total_size) value_buffer = ctypes.cast(buf, buf_pointer) tsize = ISC_LONG(total_size) api.isc_array_get_slice(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, arrayid, arraydesc, value_buffer, tsize) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_array/isc_array_get_slice:") (value, bufpos) = self.__extract_db_array_to_list(value_size, value_type, sqlsubtype, value_scale, 0, dimensions, value_buffer, 0) values.append(value) return tuple(values) def __extract_db_array_to_list(self, esize, dtype, subtype, scale, dim, dimensions, buf, bufpos): """Extracts ARRAY column data from buffer to Python list(s).
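The method walks the dimensions recursively: the innermost dimension decodes individual ``esize``-byte elements according to their BLR type code, while each outer dimension collects the inner lists, so an N-dimensional ARRAY arrives as N levels of nested Python lists.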
""" value = [] if dim == len(dimensions)-1: for i in xrange(dimensions[dim]): if dtype in (blr_text, blr_text2): val = ctypes.string_at(buf[bufpos:bufpos+esize], esize) ### Todo: verify handling of P version differences if ((self.__charset or PYTHON_MAJOR_VER == 3) and subtype != 1): # non OCTETS val = b2u(val, self.__python_charset) # CHAR with multibyte encoding requires special handling if subtype in (4, 69): # UTF8 and GB18030 reallength = esize // 4 elif subtype == 3: # UNICODE_FSS reallength = esize // 3 else: reallength = esize val = val[:reallength] elif dtype in (blr_varying, blr_varying2): val = ctypes.string_at(buf[bufpos:bufpos+esize]) if ((self.__charset or PYTHON_MAJOR_VER == 3) and subtype != 1): # non OCTETS val = b2u(val, self.__python_charset) elif dtype in (blr_short, blr_long, blr_int64): val = bytes_to_int(buf[bufpos:bufpos+esize]) if subtype or scale: val = decimal.Decimal(val) / _tenTo[abs(256-scale)] elif dtype == blr_bool: val = bytes_to_int(buf[bufpos:bufpos+esize]) == 1 elif dtype == blr_float: val = struct.unpack('f', buf[bufpos:bufpos+esize])[0] elif dtype in (blr_d_float, blr_double): val = struct.unpack('d', buf[bufpos:bufpos+esize])[0] elif dtype == blr_timestamp: yyyy, mm, dd = self._parse_date(buf[bufpos:bufpos+4]) h, m, s, ms = self._parse_time(buf[bufpos+4:bufpos+esize]) val = datetime.datetime(yyyy, mm, dd, h, m, s, ms) elif dtype == blr_sql_date: yyyy, mm, dd = self._parse_date(buf[bufpos:bufpos+esize]) val = datetime.date(yyyy, mm, dd) elif dtype == blr_sql_time: h, m, s, ms = self._parse_time(buf[bufpos:bufpos+esize]) val = datetime.time(h, m, s, ms) else: raise OperationalError("Unsupported Firebird ARRAY subtype: %i" % dtype) value.append(val) bufpos += esize else: for i in xrange(dimensions[dim]): (val, bufpos) = self.__extract_db_array_to_list(esize, dtype, subtype, scale, dim+1, dimensions, buf, bufpos) value.append(val) return (value, bufpos) def __copy_list_to_db_array(self, esize, dtype, subtype, scale, dim, dimensions, value, buf, bufpos): """Copies Python list(s) to ARRRAY column data buffer. 
""" valuebuf = None if dtype in (blr_text, blr_text2): valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype in (blr_varying, blr_varying2): valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype in (blr_short, blr_long, blr_int64): if esize == 2: valuebuf = ISC_SHORT(0) elif esize == 4: valuebuf = ISC_LONG(0) elif esize == 8: valuebuf = ISC_INT64(0) else: raise OperationalError("Unsupported number type") elif dtype == blr_float: valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype in (blr_d_float, blr_double): valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype == blr_timestamp: valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype == blr_sql_date: valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype == blr_sql_time: valuebuf = ctypes.create_string_buffer(bs([0]), esize) elif dtype == blr_bool: valuebuf = ctypes.create_string_buffer(bs([0]), esize) #sqlvar.sqldata = ctypes.cast(ctypes.pointer( #ctypes.create_string_buffer( #int_to_bytes(value, sqlvar.sqllen))), buf_pointer) else: raise OperationalError("Unsupported Firebird ARRAY subtype: %i" % dtype) self.__fill_db_array_buffer(esize, dtype, subtype, scale, dim, dimensions, value, valuebuf, buf, bufpos) def __fill_db_array_buffer(self, esize, dtype, subtype, scale, dim, dimensions, value, valuebuf, buf, bufpos): if dim == len(dimensions)-1: for i in xrange(dimensions[dim]): if dtype in (blr_text, blr_text2, blr_varying, blr_varying2): val = value[i] if isinstance(val, UnicodeType): val = val.encode(self.__python_charset) if len(val) > esize: raise ValueError("ARRAY value of parameter is too long," " expected %i, found %i" % (esize, len(val))) valuebuf.value = val ctypes.memmove(ctypes.byref(buf, bufpos), valuebuf, esize) elif dtype in (blr_short, blr_long, blr_int64): if subtype or scale: val = value[i] if isinstance(val, decimal.Decimal): val = int((val * _tenTo[256-abs(scale)]).to_integral()) elif isinstance(val, (int, mylong, float,)): val = int(val * _tenTo[256-abs(scale)]) else: raise TypeError('Objects of type %s are not ' ' acceptable input for' ' a fixed-point column.' 
% str(type(val))) valuebuf.value = val else: if esize == 2: valuebuf.value = value[i] elif esize == 4: valuebuf.value = value[i] elif esize == 8: valuebuf.value = value[i] else: raise OperationalError("Unsupported type") ctypes.memmove(ctypes.byref(buf, bufpos), ctypes.byref(valuebuf), esize) elif dtype == blr_bool: valuebuf.value = int_to_bytes(1 if value[i] else 0, 1) ctypes.memmove(ctypes.byref(buf, bufpos), ctypes.byref(valuebuf), esize) elif dtype == blr_float: valuebuf.value = struct.pack('f', value[i]) ctypes.memmove(ctypes.byref(buf, bufpos), valuebuf, esize) elif dtype in (blr_d_float, blr_double): valuebuf.value = struct.pack('d', value[i]) ctypes.memmove(ctypes.byref(buf, bufpos), valuebuf, esize) elif dtype == blr_timestamp: valuebuf.value = self._convert_timestamp(value[i]) ctypes.memmove(ctypes.byref(buf, bufpos), valuebuf, esize) elif dtype == blr_sql_date: valuebuf.value = self._convert_date(value[i]) ctypes.memmove(ctypes.byref(buf, bufpos), valuebuf, esize) elif dtype == blr_sql_time: valuebuf.value = self._convert_time(value[i]) ctypes.memmove(ctypes.byref(buf, bufpos), valuebuf, esize) else: raise OperationalError("Unsupported Firebird ARRAY subtype: %i" % dtype) bufpos += esize else: for i in xrange(dimensions[dim]): bufpos = self.__fill_db_array_buffer(esize, dtype, subtype, scale, dim+1, dimensions, value[i], valuebuf, buf, bufpos) return bufpos def __validate_array_value(self, dim, dimensions, value_type, sqlsubtype, value_scale, value): """Validates whether Python list(s) passed as ARRAY column value matches column definition (length, structure and value types). """ ok = isinstance(value, (ibase.ListType, ibase.TupleType)) ok = ok and (len(value) == dimensions[dim]) if not ok: return False for i in xrange(dimensions[dim]): if dim == len(dimensions)-1: # leaf: check value type if value_type in (blr_text, blr_text2, blr_varying, blr_varying2): ok = isinstance(value[i], (ibase.StringType, ibase.UnicodeType)) elif value_type in (blr_short, blr_long, blr_int64): if sqlsubtype or value_scale: ok = isinstance(value[i], decimal.Decimal) else: ok = isinstance(value[i], ibase.IntType) elif value_type == blr_float: ok = isinstance(value[i], ibase.FloatType) elif value_type in (blr_d_float, blr_double): ok = isinstance(value[i], ibase.FloatType) elif value_type == blr_timestamp: ok = isinstance(value[i], datetime.datetime) elif value_type == blr_sql_date: ok = isinstance(value[i], datetime.date) elif value_type == blr_sql_time: ok = isinstance(value[i], datetime.time) elif value_type == blr_bool: ok = isinstance(value[i], bool) else: ok = False else: # non-leaf: recurse down ok = ok and self.__validate_array_value(dim+1, dimensions, value_type, sqlsubtype, value_scale, value[i]) if not ok: return False return ok def __tuple2xsqlda(self, xsqlda, parameters): """Move data from parameters to input XSQLDA. """ for i in xrange(xsqlda.sqld): sqlvar = xsqlda.sqlvar[i] value = parameters[i] vartype = sqlvar.sqltype & ~1 scale = sqlvar.sqlscale # NULL handling if value is None: # Set the null flag whether sqlvar definition allows it or not, # to give BEFORE triggers to act on value without # our interference. 
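# (The least significant bit of sqltype marks the parameter as nullable;
# pointing sqlind at -1 is how the actual NULL value is signalled.)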
if (sqlvar.sqltype & 1) == 0: # NULLs were not allowed, so set it allowed or FB will complain sqlvar.sqltype += 1 sqlvar.sqlind = ctypes.pointer(ISC_SHORT(-1)) sqlvar.sqldata = None else: # if sqlvar allows null, allocate the null flag # I don't know whether it's necessary, # but we'll do it anyway for safety if (sqlvar.sqltype & 1) != 0: sqlvar.sqlind = ctypes.pointer(ISC_SHORT(0)) # Fill in value by type if ((vartype != SQL_BLOB and isinstance(value, (StringType, UnicodeType))) or vartype in [SQL_TEXT, SQL_VARYING]): # Place for Implicit Conversion of Input Parameters # to Strings if not isinstance(value, (UnicodeType, StringType, ibase.mybytes)): value = str(value) # Place for Implicit Conversion of Input Parameters # from Strings if isinstance(value, UnicodeType): value = value.encode(self.__python_charset) if vartype in [SQL_TEXT, SQL_VARYING] and len(value) > sqlvar.sqllen: raise ValueError("Value of parameter (%i) is too long," " expected %i, found %i" % (i, sqlvar.sqllen, len(value))) sqlvar.sqltype = SQL_TEXT | (sqlvar.sqltype & 1) sqlvar.sqllen = ctypes.c_short(len(value)) sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer(value)), buf_pointer) elif vartype in [SQL_SHORT, SQL_LONG, SQL_INT64]: # It's scalled integer? if sqlvar.sqlsubtype or scale: if isinstance(value, decimal.Decimal): value = int( (value * _tenTo[abs(scale)]).to_integral()) elif isinstance(value, (int, mylong, float,)): value = int(value * _tenTo[abs(scale)]) else: raise TypeError('Objects of type %s are not ' ' acceptable input for' ' a fixed-point column.' % str(type(value))) self._check_integer_range(value, self.__sql_dialect, vartype, sqlvar.sqlsubtype, sqlvar.sqlscale) sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( int_to_bytes(value, sqlvar.sqllen))), buf_pointer) elif vartype == SQL_TYPE_DATE: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( self._convert_date(value))), buf_pointer) elif vartype == SQL_TYPE_TIME: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( self._convert_time(value))), buf_pointer) elif vartype == SQL_TIMESTAMP: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( self._convert_timestamp(value))), buf_pointer) elif vartype == SQL_FLOAT: sqlvar.sqldata = ctypes.cast( ctypes.pointer(ctypes.create_string_buffer( struct.pack('f', value))), buf_pointer) elif vartype == SQL_DOUBLE: sqlvar.sqldata = ctypes.cast( ctypes.pointer(ctypes.create_string_buffer( struct.pack('d', value))), buf_pointer) elif vartype == SQL_BOOLEAN: sqlvar.sqldata = ctypes.cast(ctypes.pointer( ctypes.create_string_buffer( int_to_bytes(1 if value else 0, sqlvar.sqllen))), buf_pointer) elif vartype == SQL_BLOB: blobid = ISC_QUAD(0, 0) blob_handle = isc_blob_handle() if hasattr(value, 'read'): # It seems we've got file-like object, use stream BLOB api.isc_create_blob2(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, blob_handle, blobid, 4, bs([ibase.isc_bpb_version1, ibase.isc_bpb_type, 1, ibase.isc_bpb_type_stream])) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_create_blob2:") sqlvar.sqldata = ctypes.cast(ctypes.pointer(blobid), buf_pointer) blob = ctypes.create_string_buffer(MAX_BLOB_SEGMENT_SIZE) value_chunk = value.read(MAX_BLOB_SEGMENT_SIZE) blob.raw = ibase.b(value_chunk) while len(value_chunk) > 0: api.isc_put_segment(self._isc_status, blob_handle, len(value_chunk), 
ctypes.byref(blob)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_put_segment:") ctypes.memset(blob, 0, MAX_BLOB_SEGMENT_SIZE) value_chunk = value.read(MAX_BLOB_SEGMENT_SIZE) blob.raw = ibase.b(value_chunk) api.isc_close_blob(self._isc_status, blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_close_blob:") else: # Non-stream BLOB if isinstance(value, myunicode): if sqlvar.sqlsubtype == 1: value = value.encode(self.__python_charset) else: raise TypeError('Unicode strings are not' ' acceptable input for' ' a non-textual BLOB column.') blob = ctypes.create_string_buffer(value) api.isc_create_blob2(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, blob_handle, blobid, 0, None) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_create_blob2:") sqlvar.sqldata = ctypes.cast(ctypes.pointer(blobid), buf_pointer) total_size = len(value) bytes_written_so_far = 0 bytes_to_write_this_time = MAX_BLOB_SEGMENT_SIZE while bytes_written_so_far < total_size: if (total_size - bytes_written_so_far) < MAX_BLOB_SEGMENT_SIZE: bytes_to_write_this_time = (total_size - bytes_written_so_far) api.isc_put_segment(self._isc_status, blob_handle, bytes_to_write_this_time, ctypes.byref(blob, bytes_written_so_far)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_put_segment:") bytes_written_so_far += bytes_to_write_this_time api.isc_close_blob(self._isc_status, blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_input_blob/isc_close_blob:") elif vartype == SQL_ARRAY: arrayid = ISC_QUAD(0, 0) arrayid_ptr = ctypes.pointer(arrayid) arraydesc = ISC_ARRAY_DESC(0) sqlvar.sqldata = ctypes.cast(ctypes.pointer(arrayid), buf_pointer) sqlsubtype = self.cursor._connection._get_array_sqlsubtype(sqlvar.relname, sqlvar.sqlname) api.isc_array_lookup_bounds(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, sqlvar.relname, sqlvar.sqlname, arraydesc) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_output_array/isc_array_lookup_bounds:") value_type = arraydesc.array_desc_dtype value_scale = arraydesc.array_desc_scale value_size = arraydesc.array_desc_length if value_type in (blr_varying, blr_varying2): value_size += 2 dimensions = [] total_num_elements = 1 for dimension in xrange(arraydesc.array_desc_dimensions): bounds = arraydesc.array_desc_bounds[dimension] dimensions.append((bounds.array_bound_upper+1)-bounds.array_bound_lower) total_num_elements *= dimensions[dimension] total_size = total_num_elements * value_size # Validate value to make sure it matches the array structure if not self.__validate_array_value(0, dimensions, value_type, sqlsubtype, value_scale, value): raise ValueError("Incorrect ARRAY field value.") value_buffer = ctypes.create_string_buffer(total_size) tsize = ISC_LONG(total_size) self.__copy_list_to_db_array(value_size, value_type, sqlsubtype, value_scale, 0, dimensions, value, value_buffer, 0) api.isc_array_put_slice(self._isc_status, self.cursor._connection._db_handle, self.cursor._transaction._tr_handle, arrayid_ptr, arraydesc, value_buffer, tsize) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.write_output_array/isc_array_put_slice:") sqlvar.sqldata = ctypes.cast(arrayid_ptr, buf_pointer)
def _free_handle(self): if self._stmt_handle is not None and not self.__closed: self.__executed = False self.__closed = True self.__output_cache = None self._name = None while len(self.__blob_readers) > 0: self.__blob_readers.pop().close() if self.statement_type == isc_info_sql_stmt_select: api.isc_dsql_free_statement(self._isc_status, self._stmt_handle, ibase.DSQL_close) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while releasing SQL statement handle:") def _close(self): if self._stmt_handle is not None: while len(self.__blob_readers) > 0: self.__blob_readers.pop().close() stmt_handle = self._stmt_handle self._stmt_handle = None self.__executed = False self.__prepared = False self.__closed = True self.__description = None self.__output_cache = None self._name = None if is_dead_proxy(self.cursor): self.cursor = None connection = self.cursor._connection if self.cursor else None if (not connection) or (connection and not connection.closed): api.isc_dsql_free_statement(self._isc_status, stmt_handle, ibase.DSQL_drop) if (db_api_error(self._isc_status) and (self._isc_status[1] not in [335544528, 335544485])): raise exception_from_status(DatabaseError, self._isc_status, "Error while closing SQL statement:") def _execute(self, parameters=None): # Bind parameters if parameters: if not isinstance(parameters, (ListType, TupleType)): raise TypeError("parameters must be list or tuple") if len(parameters) > self._in_sqlda.sqln: raise ProgrammingError("Statement parameter sequence contains" " %d parameters, but only %d are allowed" % (len(parameters), self._in_sqlda.sqln)) # Restore original type and size information for input parameters i = 0 for sqlvar in self._in_sqlda.sqlvar[:self.n_input_params]: sqlvar.sqltype, sqlvar.sqllen = self._in_sqlda_save[i] i += 1 self.__tuple2xsqlda(self._in_sqlda, parameters) xsqlda_in = ctypes.cast(ctypes.pointer(self._in_sqlda), XSQLDA_PTR) else: xsqlda_in = None # Execute the statement if ((self.statement_type == isc_info_sql_stmt_exec_procedure) and (self._out_sqlda.sqld > 0)): # NOTE: We have to pass xsqlda_out only for statements that return # a single row xsqlda_out = ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR) api.isc_dsql_execute2(self._isc_status, self.cursor._transaction._tr_handle, self._stmt_handle, self.__sql_dialect, xsqlda_in, xsqlda_out) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while executing Stored Procedure:") # The result was returned immediately, but we have to provide it # via fetch*() calls as Python DB API requires. However, it's not # possible to call fetch on such an open statement, so we'll cache # the result and return it from fetchone instead of calling fetch.
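# Illustration only (a hedged sketch, not part of the driver's control flow):
# for a singleton EXECUTE PROCEDURE, callers still use the standard fetch
# interface and receive the cached row, e.g. with a hypothetical procedure
# GET_TOTALS and an existing cursor `cur`:
#
#     cur.execute("execute procedure GET_TOTALS")
#     row = cur.fetchone()   # served from __output_cache; no isc_dsql_fetch call
#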
self.__output_cache = self.__xsqlda2tuple(self._out_sqlda) else: api.isc_dsql_execute2(self._isc_status, self.cursor._transaction._tr_handle, self._stmt_handle, self.__sql_dialect, xsqlda_in, None) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while executing SQL statement:") self.__output_cache = None self.__executed = True self.__closed = False self._last_fetch_status = ISC_STATUS(self.NO_FETCH_ATTEMPTED_YET) def _fetchone(self): if self._last_fetch_status == self.RESULT_SET_EXHAUSTED and not self.__output_cache: return None if self.__executed: if self.__output_cache: if self._last_fetch_status == self.RESULT_SET_EXHAUSTED: self._free_handle() return None else: self._last_fetch_status = self.RESULT_SET_EXHAUSTED return self.__output_cache else: if self.n_output_params == 0: raise DatabaseError("Attempt to fetch row of results after a statement" " that does not produce a result set.") self._last_fetch_status = api.isc_dsql_fetch( self._isc_status, self._stmt_handle, self.__sql_dialect, ctypes.cast(ctypes.pointer(self._out_sqlda), XSQLDA_PTR)) if self._last_fetch_status == 0: return self.__xsqlda2tuple(self._out_sqlda) elif self._last_fetch_status == self.RESULT_SET_EXHAUSTED: self._free_handle() return None else: if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.fetchone:") elif self.__closed: raise ProgrammingError("Cannot fetch from closed cursor.") else: raise ProgrammingError("Cannot fetch from this cursor because" " it has not executed a statement.") def _set_cursor_name(self, name): api.isc_dsql_set_cursor_name(self._isc_status, self._stmt_handle, b(name), 0) if db_api_error(self._isc_status): raise exception_from_status(OperationalError, self._isc_status, "Could not set cursor name:") self._name = name def set_stream_blob(self, blob_spec): """Specify BLOB column(s) to work in `stream` mode instead of the classic, materialized mode. :param blob_spec: Single name or sequence of column names. Name must be in format as it's stored in database (refer to :attr:`description` for real value). :type blob_spec: string or list .. important:: BLOB name is **permanently** added to the list of BLOBs handled as `stream` BLOBs by this instance. """ if isinstance(blob_spec, ibase.StringType): self.__streamed_blobs.append(blob_spec) else: self.__streamed_blobs.extend(blob_spec) def set_stream_blob_treshold(self, size): """Specify the max. blob size for materialized blobs. If the size of a particular blob exceeds this threshold, a streamed blob (:class:`BlobReader`) is returned instead of a string. Value -1 means no size limit (use at your own risk). Default value is 64K. :param integer size: Max. size for materialized blob. """ self.__streamed_blob_treshold = size def __del__(self): if self._stmt_handle is not None: self._close() def close(self): """Drops the resources associated with the executed prepared statement, but keeps it prepared for another execution. """ self._free_handle() #: (Read Only) (string) SQL command this PreparedStatement executes. sql = property(__get_sql) #: (Read Only) Sequence of 7-item sequences.
#: Each of these sequences contains information describing one result column: #: (name, type_code, display_size, internal_size, precision, scale, null_ok) description = property(__get_description) #: (Read Only) (integer) Specifies the number of rows that the last execution #: produced (for DQL statements like select) or affected (for DML statements #: like update or insert). #: #: The attribute is -1 in case the statement was not yet executed #: or the rowcount of the operation is not determinable by the interface. rowcount = property(__get_rowcount) #: (Read Only) (string) A string representation of the execution plan generated #: for this statement by the database engine’s optimizer. plan = property(__get_plan) #: (Read/Write) (string) Name for the SQL cursor. This property can be #: ignored entirely if you don’t need to use it. name = property(__get_name, __set_name) #: (Read Only) (boolean) True if closed. Note that closed means that the #: statement handle was closed for further fetching, releasing server resources, #: but wasn't dropped, and could still be used for another execution. closed = property(__get_closed) class Cursor(object): """Represents a database cursor, which is used to execute SQL statements and manage the context of a fetch operation. .. important:: DO NOT create instances of this class directly! Use only :meth:`Connection.cursor`, :meth:`Transaction.cursor` and :meth:`ConnectionGroup.cursor` to get Cursor instances that operate in desired context. .. note:: Cursor is actually a high-level wrapper around :class:`PreparedStatement` instance(s) that handle the actual SQL statement execution and result management. .. tip:: Cursor supports the iterator protocol, yielding tuples of values like :meth:`fetchone`. """ #: (Read/Write) As required by the Python DB API 2.0 spec, the value of this #: attribute is observed with respect to the :meth:`fetchmany` method. However, #: changing the value of this attribute does not make any difference in fetch #: efficiency because the database engine only supports fetching a single row #: at a time. arraysize = 1 def __init__(self, connection, transaction): """ .. important:: The association between a Cursor and its :class:`Transaction` and :class:`Connection` is set when the Cursor is created, and cannot be changed during the lifetime of that Cursor. :param connection: :class:`Connection` instance this cursor should be bound to. :param transaction: :class:`Transaction` instance this cursor should be bound to. """ self._connection = connection self._transaction = transaction self._ps = None # current prepared statement def next(self): """Return the next item from the container. Part of *iterator protocol*. :raises StopIteration: If there are no further items.
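        A minimal sketch of the iteration this enables (assumes an existing
        cursor `cur` with an executed SELECT; `process` is a placeholder for
        application code)::

            for row in cur:
                process(row)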
""" row = self.fetchone() if row: return row else: raise StopIteration __next__ = next def __iter__(self): return self def __valid_ps(self): return (self._ps is not None) and not (isinstance(self._ps, weakref.ProxyType) and not dir(self._ps)) def __get_description(self): if self.__valid_ps(): return self._ps.description else: return [] def __get_rowcount(self): if self.__valid_ps(): return self._ps.rowcount else: return -1 def __get_name(self): if self.__valid_ps(): return self._ps._name else: return None def __set_name(self, name): if name is None or not isinstance(name, StringType): raise ProgrammingError("The name attribute can only be set to a" " string, and cannot be deleted") if not self.__valid_ps(): raise ProgrammingError("This cursor has not yet executed a" " statement, so setting its name attribute" " would be meaningless") if self._ps._name: raise ProgrammingError("Cursor's name has already been declared in" " context of currently executed statement") self._ps._set_cursor_name(name) def __get_plan(self): if self.__valid_ps(): return self._ps.plan else: return None def __get_connection(self): return self._connection def __get_transaction(self): return self._transaction def __connection_deleted(self, obj): self._connection = None def __ps_deleted(self, obj): self._ps = None def _set_as_internal(self): self._connection = weakref.proxy(self._connection, _weakref_callback(self.__connection_deleted)) def callproc(self, procname, parameters=None): """Call a stored database procedure with the given name. The result of the call is available through the standard fetchXXX() methods. :param string procname: Stored procedure name. :param parameters: (Optional) Sequence of parameters. Must contain one entry for each argument that the procedure expects. :type parameters: List or Tuple :returns: parameters, as required by Python DB API 2.0 Spec. :raises TypeError: When parameters is not List or Tuple. :raises `~fdb.ProgrammingError`: When more parameters than expected are suplied. :raises `~fdb.DatabaseError`: When error is returned by server. """ if not parameters: params = [] else: if isinstance(parameters, (ListType, TupleType)): params = parameters else: raise TypeError("callproc paremeters must be List or Tuple") sql = ('EXECUTE PROCEDURE ' + procname + ' ' + ','.join('?' * len(params))) self.execute(sql, params) return parameters def close(self): """Close the cursor now (rather than whenever `__del__` is called). Closes any currently open :class:`PreparedStatement`. However, the cursor is still bound to :class:`Connection` and :class:`Transaction`, so it could be still used to execute SQL statements. .. warning:: FDB's implementation of Cursor somewhat violates the Python DB API 2.0, which requires that cursor will be unusable after call to `close`; and an Error (or subclass) exception should be raised if any operation is attempted with the cursor. If you’ll take advantage of this anomaly, your code would be less portable to other Python DB API 2.0 compliant drivers. """ if is_dead_proxy(self._ps): self._ps = None if self._ps != None: self._ps.close() self._ps = None def execute(self, operation, parameters=None): """Prepare and execute a database operation (query or command). .. note:: Execution is handled by :class:`PreparedStatement` that is either supplied as `operation` parameter, or created internally when `operation` is a string. Internally created PreparedStatements are stored in cache for later reuse, when the same `operation` string is used again. 
:returns: self, so the call to execute can be used as an iterator. :param operation: SQL command specification. :type operation: string or :class:`PreparedStatement` instance :param parameters: (Optional) Sequence of parameters. Must contain one entry for each argument that the operation expects. :type parameters: List or Tuple :raises ValueError: When operation PreparedStatement belongs to a different Cursor instance. :raises TypeError: When parameters is not List or Tuple. :raises `~fdb.ProgrammingError`: When more parameters than expected are supplied. :raises `~fdb.DatabaseError`: When error is returned by server. """ if is_dead_proxy(self._ps): self._ps = None if self._ps is not None: # Dirty trick to check whether operation, when it's a # PreparedStatement, is the one we (may) have a weak proxy for if self._ps.__repr__.__self__ is not operation: self._ps.close() if not self._transaction.active: self._transaction.begin() if isinstance(operation, PreparedStatement): if operation.cursor is not self: raise ValueError("PreparedStatement was created by different Cursor.") self._ps = weakref.proxy(operation, _weakref_callback(self.__ps_deleted)) else: self._ps = PreparedStatement(operation, self, True) self._ps._execute(parameters) # Return self so the `execute` call can be used as an iterable return self def prep(self, operation): """Create prepared statement for repeated execution. .. note:: Returned :class:`PreparedStatement` instance is bound to its Cursor instance via strong reference, and is not stored in Cursor's internal cache of prepared statements. :param string operation: SQL command :returns: :class:`PreparedStatement` instance. :raises `~fdb.DatabaseError`: When error is returned by server. :raises `~fdb.InternalError`: On unexpected processing condition. """ if not self._transaction.active: self._transaction.begin() return PreparedStatement(operation, self, False) def executemany(self, operation, seq_of_parameters): """Prepare a database operation (query or command) and then execute it against all parameter sequences or mappings found in the sequence `seq_of_parameters`. .. note:: This function simply calls :meth:`execute` in a loop, feeding it with parameters from `seq_of_parameters`. Because `execute` caches `PreparedStatements`, calling `executemany` is equally effective as direct use of a prepared statement and calling `execute` in a loop directly in the application. :returns: self, so the call to executemany can be used as an iterator. :param operation: SQL command specification. :type operation: string or :class:`PreparedStatement` instance :param seq_of_parameters: Sequence of sequences of parameters. Must contain one sequence of parameters for each execution that has one entry for each argument that the operation expects. :type seq_of_parameters: List or Tuple :raises ValueError: When operation PreparedStatement belongs to a different Cursor instance. :raises TypeError: When seq_of_parameters is not List or Tuple. :raises `~fdb.ProgrammingError`: When there are more parameters in any sequence than expected. :raises `~fdb.DatabaseError`: When error is returned by server. """ if not isinstance(operation, PreparedStatement): operation = self.prep(operation) for parameters in seq_of_parameters: self.execute(operation, parameters) return self def fetchone(self): """Fetch the next row of a query result set. :returns: tuple of returned values, or None when no more data is available. :raises `~fdb.DatabaseError`: When error is returned by server.
:raises `~fdb.ProgrammingError`: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ if self._ps: return self._ps._fetchone() else: raise ProgrammingError("Cannot fetch from this cursor because" " it has not executed a statement.") def fetchmany(self, size=arraysize): """Fetch the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor’s arraysize determines the number of rows to be fetched. The method does try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned. :param integer size: Max. number of rows to fetch. :returns: List of tuples, where each tuple is one row of returned values. :raises `~fdb.DatabaseError`: When error is returned by server. :raises `~fdb.ProgrammingError`: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ i = 0 result = [] while i < size: row = self.fetchone() if row: result.append(row) i += 1 else: return result return result def fetchall(self): """Fetch all (remaining) rows of a query result. :returns: List of tuples, where each tuple is one row of returned values. :raises `~fdb.DatabaseError`: When error is returned by server. :raises `~fdb.ProgrammingError`: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ return [row for row in self] def fetchonemap(self): """Fetch the next row of a query result set like :meth:`fetchone`, except that it returns a mapping of field name to field value, rather than a tuple. :returns: :class:`fbcore._RowMapping` of returned values, or None when no more data is available. :raises `~fdb.DatabaseError`: When error is returned by server. :raises `~fdb.ProgrammingError`: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ row = self.fetchone() if row: row = _RowMapping(self.description, row) return row def fetchmanymap(self, size=arraysize): """Fetch the next set of rows of a query result, like :meth:`fetchmany`, except that it returns a list of mappings of field name to field value, rather than a list of tuples. :param integer size: Max. number of rows to fetch. :returns: List of :class:`fbcore._RowMapping` instances, one such instance for each row. :raises `~fdb.DatabaseError`: When error is returned by server. :raises `~fdb.ProgrammingError`: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ i = 0 result = [] while i < size: row = self.fetchonemap() if row: result.append(row) i += 1 else: return result return result def fetchallmap(self): """Fetch all (remaining) rows of a query result like :meth:`fetchall`, except that it returns a list of mappings of field name to field value, rather than a list of tuples. :returns: List of :class:`fbcore._RowMapping` instances, one such instance for each row. :raises `~fdb.DatabaseError`: When error is returned by server. 
:raises `~fdb.ProgrammingError`: When underlying :class:`PreparedStatement` is closed, statement was not yet executed, or unknown status is returned by fetch operation. """ return [row for row in self.itermap()] def iter(self): """Equivalent to :meth:`fetchall`, except that it returns an iterator rather than a materialized list. :returns: Iterator that yields tuple of values like :meth:`fetchone`. """ return self def itermap(self): """Equivalent to :meth:`fetchallmap`, except that it returns an iterator rather than a materialized list. :returns: Iterator that yields :class:`fbcore._RowMapping` instance like :meth:`fetchonemap`. """ return utils.Iterator(self.fetchonemap, None) def setinputsizes(self, sizes): """Required by Python DB API 2.0, but pointless for Firebird, so it does nothing.""" pass def setoutputsize(self, size, column=None): """Required by Python DB API 2.0, but pointless for Firebird, so it does nothing.""" pass def set_stream_blob(self, blob_name): """Specify BLOB column(s) to work in `stream` mode instead of the classic, materialized mode for an already executed statement. :param blob_name: Single name or sequence of column names. Name must be in format as it's stored in database (refer to :attr:`description` for real value). :type blob_name: string or list .. important:: BLOB name is **permanently** added to the list of BLOBs handled as `stream` BLOBs by current :class:`PreparedStatement` instance. If instance is stored in internal cache of prepared statements, the same command executed repeatedly will retain this setting. :raises `~fdb.ProgrammingError`: When no statement was executed yet. """ if self._ps: self._ps.set_stream_blob(blob_name) else: raise ProgrammingError("Cannot set stream BLOB mode on cursor that" " has not executed a statement.") def set_stream_blob_treshold(self, size): """Specify the max. blob size for materialized blobs. If the size of a particular blob exceeds this threshold, a streamed blob (:class:`BlobReader`) is returned instead of a string. Value -1 means no size limit (use at your own risk). Default value is 64K. :param integer size: Max. size for materialized blob. :raises `~fdb.ProgrammingError`: When no statement was executed yet. """ if self._ps: self._ps.set_stream_blob_treshold(size) else: raise ProgrammingError("Cannot set stream BLOB threshold on cursor" " that has not executed a statement.") def __del__(self): self.close() #: (Read Only) Sequence of 7-item sequences. #: Each of these sequences contains information describing one result column: #: (name, type_code, display_size, internal_size, precision, scale, null_ok) #: #: If cursor doesn't have a prepared statement, the value is None. description = property(__get_description) #: (Read Only) (integer) Specifies the number of rows that the last executeXXX() #: produced (for DQL statements like select) or affected (for DML statements #: like update or insert). #: #: The attribute is -1 in case no executeXXX() has been performed on the cursor #: or the rowcount of the last operation is not determinable by the interface. #: #: .. note:: #: #: The database engine's own support for the determination of #: “rows affected”/”rows selected” is quirky. The database engine only #: supports the determination of rowcount for INSERT, UPDATE, DELETE, #: and SELECT statements. When stored procedures become involved, row #: count figures are usually not available to the client. Determining #: rowcount for SELECT statements is problematic: the rowcount is reported #: as zero until at least one row has been fetched from the result set, #: and the rowcount is misreported if the result set is larger than 1302 #: rows.
The server apparently marshals result sets internally in batches #: of 1302, and will misreport the rowcount for result sets larger than #: 1302 rows until the 1303rd row is fetched, result sets larger than 2604 #: rows until the 2605th row is fetched, and so on, in increments of 1302. rowcount = property(__get_rowcount) #: (Read/Write) (string) Name for the SQL cursor. This property can be #: ignored entirely if you don’t need to use it. name = property(__get_name, __set_name) #: (Read Only) (string) A string representation of the execution plan #: for last executed statement generated by the database engine’s optimizer. #: `None` if no statement was executed. plan = property(__get_plan) #: (Read Only) (:class:`Connection`) PEP 249 Extension. #: Reference to the :class:`Connection` object on which the cursor was created. connection = property(__get_connection) #: (Read Only) (:class:`Transaction`) #: Reference to the :class:`Transaction` object on which the cursor was created. transaction = property(__get_transaction) class Transaction(object): """Represents a transaction context, which is used to execute SQL statement. .. important:: DO NOT create instances of this class directly! :class:`Connection` and :class:`ConnectionGroup` manage Transaction internally, surfacing all important methods directly in their interfaces. If you want additional transactions independent from :attr:`Connection.main_transaction`, use :meth:`Connection.trans` method to obtain such `Transaction` instance. """ #: (Read/Write) Transaction Parameter Block. default_tpb = ISOLATION_LEVEL_READ_COMMITED #: (Read/Write) Default action on active transaction when it's closed. #: Accepted values: commit, rollback default_action = 'commit' def __init__(self, connections, default_tpb=None, default_action='commit'): """ :param iterable connections: Sequence of (up to 16) :class:`Connection` instances. :param default_tpb: Transaction Parameter Block for this transaction. If `None` is specified, uses `ISOLATION_LEVEL_READ_COMMITED`. :type default_tpb: :class:`TPB` instance, list/tuple of `isc_tpb_*` constants or `bytestring` :param default_action: Action taken when active transaction is ended automatically (during :meth:`close` or :meth:`begin`). :type default_action: string 'commit' or 'rollback' :raises `~fdb.ProgrammingError`: When zero or more than 16 connections are given. 
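        Instances are created internally; a hedged sketch of the typical way
        to obtain an additional transaction (assumes an existing
        :class:`Connection` `con`; table/column names are illustrative)::

            tr = con.trans()              # independent from con.main_transaction
            cur = tr.cursor()
            cur.execute("insert into T (ID) values (1)")
            tr.commit()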
""" if len(connections) > 16: raise ProgrammingError("Transaction can't accept more than 16 Connections") elif len(connections) == 0: raise ProgrammingError("Transaction requires at least one Connection") self._connections = [weakref.ref(c) for c in connections] self.__python_charset = connections[0]._python_charset if default_tpb is None: self.default_tpb = ISOLATION_LEVEL_READ_COMMITED else: self.default_tpb = default_tpb self.default_action = default_action self._cursors = [] # Weak references to cursors self._isc_status = ISC_STATUS_ARRAY() self._tr_handle = None self.__closed = False def __enter__(self): return self def __exit__(self, *args): self.close() def __get_closed(self): return self.__closed #return self._tr_handle == None def __get_active(self): return self._tr_handle != None def __get_cursors(self): return [x() for x in self._cursors] def __check_active(self): if not self.active: raise ProgrammingError("Transaction object is not active") def __close_cursors(self): for cursor in self._cursors: c = cursor() if c: c.close() def __con_in_list(self, connection): for con in self._connections: if con() == connection: return True return False def __get_default_action(self): return self.__default_action def __set_default_action(self, action): action = action.lower() if not action in ('commit', 'rollback'): raise ProgrammingError("Transaction's default action must be either" "'commit' or 'rollback'.") else: self.__default_action = action def __get_transaction_id(self): return self.trans_info(isc_info_tra_id) def __get_oit(self): return self.trans_info(isc_info_tra_oldest_interesting) def __get_oat(self): return self.trans_info(isc_info_tra_oldest_active) def __get_ost(self): return self.trans_info(isc_info_tra_oldest_snapshot) def __get_isolation(self): return self.trans_info(isc_info_tra_isolation) def __get_lock_timeout(self): return self.trans_info(isc_info_tra_lock_timeout) def execute_immediate(self, sql): """Executes a statement without caching its prepared form on **all connections** this transaction is bound to. Automatically starts transaction if it's not already started. :param string sql: SQL statement to execute. .. important:: **The statement must not be of a type that returns a result set.** In most cases (especially cases in which the same statement – perhaps a parameterized statement – is executed repeatedly), it is better to create a cursor using the connection’s cursor method, then execute the statement using one of the cursor’s execute methods. :param string sql: SQL statement to execute. :raises `~fdb.DatabaseError`: When error is returned from server. """ if not self.active: self.begin() for connection in self._connections: con = connection() sql = b(sql, con._python_charset) api.isc_execute_immediate(self._isc_status, con._db_handle, self._tr_handle, ctypes.c_short(len(sql)), sql) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while executing SQL statement:") def _finish(self): if self._tr_handle != None: try: if self.default_action == 'commit': self.commit() else: self.rollback() except Exception as e: self._tr_handle = None raise e def begin(self, tpb=None): """Starts a transaction explicitly. :param tpb: (optional) Transaction Parameter Block for newly created Transaction. If not specified, :attr:`default_tpb` is used. :type tpb: :class:`TPB` instance, list/tuple of `isc_tpb_*` constants or `bytestring` .. 
note:: Calling this method directly is never required; a transaction will be started implicitly if necessary. .. important:: If the physical transaction is unresolved when this method is called, a :meth:`commit` or :meth:`rollback` will be performed first, according to the :attr:`default_action` value. :raises `~fdb.DatabaseError`: When error is returned by server. :raises `~fdb.ProgrammingError`: When TPB is in unsupported format, or transaction is permanently :attr:`closed`. """ if self.__closed: raise ProgrammingError("Transaction is permanently closed.") self._finish() # Make sure that previous transaction (if any) is ended self._tr_handle = isc_tr_handle(0) _tpb = tpb if tpb else self.default_tpb if isinstance(_tpb, TPB): _tpb = _tpb.render() elif isinstance(_tpb, (ListType, TupleType)): _tpb = bs(_tpb) elif not isinstance(_tpb, mybytes): raise ProgrammingError("TPB must be either string, list/tuple of" " numeric constants or TPB instance.") ### Todo: verify handling of P version differences if PYTHON_MAJOR_VER == 3: if int2byte(_tpb[0]) != bs([isc_tpb_version3]): _tpb = bs([isc_tpb_version3]) + _tpb else: if _tpb[0] != bs([isc_tpb_version3]): _tpb = bs([isc_tpb_version3]) + _tpb if len(self._connections) == 1: api.isc_start_transaction(self._isc_status, self._tr_handle, 1, self._connections[0]()._db_handle, len(_tpb), _tpb) if db_api_error(self._isc_status): self._tr_handle = None raise exception_from_status(DatabaseError, self._isc_status, "Error while starting transaction:") elif len(self._connections) > 1: cnum = len(self._connections) teb_array = tebarray_factory(cnum) for i in xrange(cnum): teb_array[i].db_ptr = ctypes.pointer(self._connections[i]()._db_handle) teb_array[i].tpb_len = len(_tpb) teb_array[i].tpb_ptr = _tpb api.isc_start_multiple(self._isc_status, self._tr_handle, cnum, teb_array) if db_api_error(self._isc_status): self._tr_handle = None raise exception_from_status(DatabaseError, self._isc_status, "Error while starting transaction:") def commit(self, retaining=False): """Commit any pending transaction to the database. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. :raises `~fdb.DatabaseError`: When error is returned by server as response to commit. """ if not self.active: return if retaining: api.isc_commit_retaining(self._isc_status, self._tr_handle) else: self.__close_cursors() api.isc_commit_transaction(self._isc_status, self._tr_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while committing transaction:") if not retaining: self._tr_handle = None def rollback(self, retaining=False, savepoint=None): """Roll back any pending transaction to the database. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. Mutually exclusive with `savepoint`. :param string savepoint: Savepoint name.
Causes the transaction to roll back only as far as the designated savepoint, rather than rolling back entirely. Mutually exclusive with `retaining`. :raises `~fdb.ProgrammingError`: If both `savepoint` and `retaining` are specified. :raises `~fdb.DatabaseError`: When error is returned by server as response to rollback. """ if not self.active: return if retaining and savepoint: raise ProgrammingError("Can't rollback to savepoint while" " retaining context") if savepoint: self.execute_immediate('rollback to %s' % savepoint) else: if retaining: api.isc_rollback_retaining(self._isc_status, self._tr_handle) else: self.__close_cursors() api.isc_rollback_transaction(self._isc_status, self._tr_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while rolling back transaction:") if not retaining: self._tr_handle = None def close(self): """Permanently closes the Transaction object and severs its associations with other objects (:class:`Cursor` and :class:`Connection` instances). .. important:: If the physical transaction is unresolved when this method is called, a :meth:`commit` or :meth:`rollback` will be performed first, according to the :attr:`default_action` value. """ exc = None try: self._finish() except Exception as e: exc = e del self._cursors[:] del self._connections[:] self.__closed = True if exc: raise exc def savepoint(self, name): """Establishes a savepoint with the specified name. .. note:: If transaction is bound to multiple connections, savepoint is created on all of them. .. important:: Because savepoint is created not through Firebird API (there is no such API call), but by executing `SAVEPOINT <name>` SQL statement, calling this method starts the transaction if it was not yet started. :param string name: Savepoint name. """ self.execute_immediate('SAVEPOINT %s' % name) def cursor(self, connection=None): """Creates a new :class:`Cursor` that will operate in the context of this Transaction. :param connection: **Required only when** Transaction is bound to multiple `Connections`, to specify to which `Connection` the returned Cursor should be bound. :type connection: :class:`Connection` instance :raises `~fdb.ProgrammingError`: When transaction operates on multiple `Connections` and: `connection` parameter is not specified, or specified `connection` is not among `Connections` this Transaction is bound to. """ if len(self._connections) > 1: if not connection: raise ProgrammingError("Transaction.cursor on multiple connections" " requires 'connection' parameter") if not self.__con_in_list(connection): raise ProgrammingError("Transaction.cursor connection not in" " list of connections for this transaction") con = connection else: con = self._connections[0]() c = Cursor(con, self) self._cursors.append(weakref.ref(c, _cursor_weakref_callback(self))) return c def trans_info(self, request): """Pythonic wrapper around :meth:`transaction_info` call. :param request: One or more information request codes (see :meth:`transaction_info` for details). Multiple codes must be passed as tuple. :returns: Decoded response(s) for specified request code(s). When multiple requests are passed, returns a dictionary where key is the request code and value is the response from server. """ # We process request as a sequence of info codes, even if only one code # was supplied by the caller. request_is_singleton = isinstance(request, int) if request_is_singleton: request = (request,) results = {} for info_code in request: # The globals().get(...)
workaround is here because only recent # versions of FB expose constant isc_info_tra_isolation: if info_code == isc_info_tra_isolation: buf = self.transaction_info(info_code, 'b') buf = buf[1 + struct.calcsize('h'):] if len(buf) == 1: results[info_code] = bytes_to_uint(buf) else: # For isolation level isc_info_tra_read_committed, the # first byte indicates the isolation level # (isc_info_tra_read_committed), while the second indicates # the record version flag (isc_info_tra_rec_version or # isc_info_tra_no_rec_version). isolation_level_byte, record_version_byte = struct.unpack('cc', buf) isolation_level = bytes_to_uint(isolation_level_byte) record_version = bytes_to_uint(record_version_byte) results[info_code] = (isolation_level, record_version) else: # At the time of this writing (2006.02.09), # isc_info_tra_isolation is the only known return value of # isc_transaction_info that's not a simple integer. results[info_code] = self.transaction_info(info_code, 'i') if request_is_singleton: return results[request[0]] else: return results def transaction_info(self, info_code, result_type): """Return information about active transaction. This is a very thin wrapper around Firebird API `isc_transaction_info` call. :param integer info_code: One of the `isc_info_tra_*` constants. :param result_type: Code for result type. :type result_type: 'b' for binary string or 'i' for integer :raises `~fdb.ProgrammingError`: If transaction is not active. :raises `~fdb.OperationalError`: When result is too large to fit into buffer of size SHRT_MAX. :raises `~fdb.InternalError`: On unexpected processing condition. :raises ValueError: When illegal result type code is specified. """ self.__check_active() request_buffer = bs([info_code]) buf_size = 256 while True: res_buf = int2byte(0) * buf_size api.isc_transaction_info(self._isc_status, self._tr_handle, len(request_buffer), request_buffer, len(res_buf), res_buf) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Error while requesting transaction information:") i = buf_size - 1 while i >= 0: if res_buf[i] != mychr(0): break else: i -= 1 if ord2(res_buf[i]) == isc_info_truncated: if buf_size < SHRT_MAX: buf_size *= 2 if buf_size > SHRT_MAX: buf_size = SHRT_MAX continue else: raise OperationalError("Result is too large to fit into" " buffer of size SHRT_MAX, yet underlying info" " function only accepts buffers with size <= SHRT_MAX.") else: break if ord2(res_buf[i]) != isc_info_end: raise InternalError("Exited request loop successfully, but" " res_buf[i] != isc_info_end.") if request_buffer[0] != res_buf[0]: raise InternalError("Result code does not match request code.") if result_type.upper() == 'I': return bytes_to_int(res_buf[3:3 + bytes_to_int(res_buf[1:3])]) elif result_type.upper() == 'B': return ctypes.string_at(res_buf, i) else: raise ValueError("Unknown result type requested (must be 'i' or 'b').") def prepare(self): """Manually triggers the first phase of a two-phase commit (2PC). .. note:: Direct use of this method is optional; if preparation is not triggered manually, it will be performed implicitly by `commit()` in a 2PC. """ self.__check_active() api.isc_prepare_transaction(self._isc_status, self._tr_handle) if db_api_error(self._isc_status): self.rollback() raise exception_from_status(DatabaseError, self._isc_status, "Error while preparing transaction:") def __del__(self): if self._tr_handle is not None: self.close() def isreadonly(self): "Returns True if transaction is Read Only."
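# Usage sketch for the trans_info() wrapper above (illustrative only;
# assumes an active Transaction `tr`): a single code returns a bare value,
# while a tuple of codes returns a dict keyed by code:
#
#     tr.trans_info(isc_info_tra_id)    # -> int
#     tr.trans_info((isc_info_tra_id, isc_info_tra_oldest_active))
#     # -> {isc_info_tra_id: ..., isc_info_tra_oldest_active: ...}
#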
return self.trans_info(isc_info_tra_access) == isc_info_tra_readonly #: (Read Only) (int) Internal ID (server-side) for transaction. transaction_id = property(__get_transaction_id) #: (Read Only) True if transaction is closed. closed = property(__get_closed) #: (Read Only) True if transaction is active. active = property(__get_active) #: (Read Only) List of :class:`Cursor` objects associated with this Transaction. cursors = property(__get_cursors) #: (Read/Write) (string) 'commit' or 'rollback', action to be #: taken when physical transaction has to be ended automatically. #: **Default is 'commit'**. default_action = property(__get_default_action, __set_default_action) #: (Read Only) (int) ID of Oldest Interesting Transaction when this transaction started. oit = property(__get_oit) #: (Read Only) (int) ID of Oldest Active Transaction when this transaction started. oat = property(__get_oat) #: (Read Only) (int) ID of Oldest Snapshot Transaction when this transaction started. ost = property(__get_ost) #: (Read Only) (int) or (tuple) Isolation level code [isc_info_tra_consistency, #: isc_info_tra_concurrency or isc_info_tra_read_committed]. For `isc_info_tra_read_committed` #: returns tuple where first item is `isc_info_tra_read_committed` and second one is #: [isc_info_tra_no_rec_version or isc_info_tra_rec_version] isolation = property(__get_isolation) #: (Read Only) (int) Lock timeout (seconds or -1 for unlimited). lock_timeout = property(__get_lock_timeout) class ConnectionGroup(object): """Manager for distributed transactions, i.e. transactions that span multiple databases. .. tip:: ConnectionGroup supports `in` operator to check membership of connections. """ # XXX: ConnectionGroup objects currently are not thread-safe. Since # separate Connections can be manipulated simultaneously by different # threads in kinterbasdb, it would make sense for a container of multiple # connections to be safely manipulable simultaneously by multiple threads. # XXX: Adding two connections to the same database freezes the DB client # library. However, I've no way to detect with certainty whether any given # con1 and con2 are connected to the same database, what with database # aliases, IP host name aliases, remote-vs-local protocols, etc. # Therefore, a warning must be added to the docs. def __init__(self, connections=()): """ :param iterable connections: Sequence of :class:`Connection` instances. .. seealso:: See :meth:`add` for list of exceptions the constructor may throw. """ self._cons = [] self._transaction = None self._default_tpb = ISOLATION_LEVEL_READ_COMMITED for con in connections: self.add(con) def __del__(self): self.disband() def __get_default_tpb(self): return self._default_tpb def __set_default_tpb(self, value): self._default_tpb = _validate_tpb(value) def disband(self): """Forcefully deletes all connections from connection group. .. note:: If transaction is active, it’s canceled (**rollback**). .. note:: Any error during transaction finalization doesn't stop the disband process, however the caught exception is eventually reported. """ exc = None if self._transaction: try: self._transaction.default_action = 'rollback' self._transaction.close() except Exception as e: exc = e self._transaction = None self.clear() if exc: raise exc # Membership methods: def add(self, con): """Adds active connection to the group. :param con: A :class:`Connection` instance to add to this group. :raises TypeError: When `con` is not a :class:`Connection` instance.
:raises `~fdb.ProgrammingError`: When `con` is already member of this or another group, or :attr:`~Connection.closed`. When this group has unresolved transaction or already contains 16 connections. """ ### CONSTRAINTS ON $con: ### # con must be an instance of fdb.Connection: if not isinstance(con, Connection): raise TypeError("con must be an instance of fdb.Connection") # con cannot already be a member of this group: if con in self: raise ProgrammingError("con is already a member of this group.") # con cannot belong to more than one group at a time: if con.group: raise ProgrammingError("con is already a member of another group;" " it cannot belong to more than one group at once.") # con must be connected to a database; it must not have been closed. if con.closed: raise ProgrammingError("con has been closed; it cannot join a group.") #if con._timeout_enabled: #raise ProgrammingError('Connections with timeout enabled cannot' #' participate in distributed transactions.') ### CONSTRAINTS ON $self: ### # self cannot accept new members while self has an unresolved # transaction: self.__require_transaction_state(False, "Cannot add connection to group that has an unresolved transaction.") self.__drop_transaction() # self cannot have more than DIST_TRANS_MAX_DATABASES members: if self.count() >= DIST_TRANS_MAX_DATABASES: raise ProgrammingError("The database engine limits the number of" " database handles that can participate in a single" " distributed transaction to %d or fewer; this group already" " has %d members." % (DIST_TRANS_MAX_DATABASES, self.count())) ### CONSTRAINTS FINISHED ### # Can't set con.group directly (read-only); must use package-private # method. con._set_group(self) self._cons.append(con) def remove(self, con): """Removes specified connection from group. :param con: A :class:`Connection` instance to remove. :raises `~fdb.ProgrammingError`: When `con` doesn't belong to this group or transaction is active. """ if con not in self: raise ProgrammingError("con is not a member of this group.") assert con.group is self self.__require_transaction_state(False, "Cannot remove connection from group that has an unresolved transaction.") self.__drop_transaction() con._set_group(None) self._cons.remove(con) def clear(self): """Removes all connections from group. :raises `~fdb.ProgrammingError`: When transaction is active. """ self.__require_transaction_state(False, "Cannot clear group that has an unresolved transaction.") self.__drop_transaction() for con in self.members(): self.remove(con) assert self.count() == 0 def cursor(self, connection): """Creates a new :class:`Cursor` that will operate in the context of distributed transaction and specific :class:`Connection` that belongs to this group. .. note:: Automatically starts transaction if it's not already started. :param connection: :class:`Connection` instance. :raises `~fdb.ProgrammingError`: When group is empty or specified `connection` doesn't belong to this group. """ if not self._transaction: self.__require_non_empty_group('start') self._transaction = Transaction(self._cons) return self._transaction.cursor(connection) def members(self): "Returns list of connection objects that belong to this group." return self._cons[:] # return a *copy* of the internal list def count(self): "Returns number of :class:`Connection` objects that belong to this group." return len(self._cons) def contains(self, con): """Returns True if specified connection belongs to this group. :param con: :class:`Connection` instance.
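        A membership sketch (assumes existing connections `con1` and `con2`
        to different databases)::

            group = ConnectionGroup([con1])
            con1 in group   # True, via this method
            con2 in group   # False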
""" return con in self._cons __contains__ = contains # alias to support the 'in' operator def __iter__(self): return iter(self._cons) def __drop_transaction(self): if self._transaction: self._transaction.close() self._transaction = None def __require_transaction_state(self, must_be_active, err_msg=''): transaction = self._transaction if ((must_be_active and transaction is None) or (not must_be_active and (transaction is not None and transaction.active))): raise ProgrammingError(err_msg) def __require_non_empty_group(self, operation_name): if self.count() == 0: raise ProgrammingError("Cannot %s distributed transaction with an empty ConnectionGroup." % operation_name) def __ensure_transaction(self): if not self._transaction: self.__require_non_empty_group('start') self._transaction = Transaction(self._cons, default_tpb=self.default_tpb) # Transactional methods: def execute_immediate(self, sql): """Executes a statement on all member connections without caching its prepared form. Automatically starts transaction if it's not already started. :param string sql: SQL statement to execute. .. important:: **The statement must not be of a type that returns a result set.** In most cases (especially cases in which the same statement – perhaps a parameterized statement – is executed repeatedly), it is better to create a cursor using the connection’s cursor method, then execute the statement using one of the cursor’s execute methods. :param string sql: SQL statement to execute. :raises `~fdb.DatabaseError`: When error is returned from server. """ self.__ensure_transaction() self._transaction.execute_immediate(sql) def begin(self, tpb=None): """Starts distributed transaction over member connections. :param tpb: (Optional) Transaction Parameter Buffer for newly started transaction. If not specified, :attr:`default_tpb` is used. :type tpb: :class:`TPB` instance, list/tuple of `isc_tpb_*` constants or `bytestring` :raises `~fdb.ProgrammingError`: When group is empty or has active transaction. """ self.__require_transaction_state(False, "Must resolve current transaction before starting another.") self.__ensure_transaction() self._transaction.begin(tpb) def savepoint(self, name): """Establishes a named SAVEPOINT on all member connections. See :meth:`Transaction.savepoint` for details. :param string name: Name for savepoint. :raises `~fdb.ProgrammingError`: When group is empty. """ self.__require_non_empty_group('savepoint') return self._transaction.savepoint(name) def prepare(self): """Manually triggers the first phase of a two-phase commit (2PC). Use of this method is optional; if preparation is not triggered manually, it will be performed implicitly by commit() in a 2PC.""" self.__require_non_empty_group('prepare') self.__require_transaction_state(True, "This group has no transaction to prepare.") self._transaction.prepare() def commit(self, retaining=False): """Commits distributed transaction over member connections using 2PC. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. :raises `~fdb.ProgrammingError`: When group is empty. 
""" self.__require_non_empty_group('commit') # The consensus among Python DB API experts is that transactions should # always be started implicitly, even if that means allowing a commit() # or rollback() without an actual transaction. if self._transaction is None: return self._transaction.commit(retaining) def rollback(self, retaining=False, savepoint=None): """Rollbacks distributed transaction over member connections. .. note:: If transaction is not active, this method does nothing, because the consensus among Python DB API experts is that transactions should always be started implicitly, even if that means allowing a `commit()` or `rollback()` without an actual transaction. :param boolean retaining: Indicates whether the transactional context of the transaction being resolved should be recycled. :raises `~fdb.ProgrammingError`: When group is empty. """ self.__require_non_empty_group('rollback') # The consensus among Python DB API experts is that transactions should # always be started implicitly, even if that means allowing a commit() # or rollback() without an actual transaction. if self._transaction is None: return self._transaction.rollback(retaining, savepoint) #: (Read/Write) Deafult Transaction Parameter Block used for transactions. default_tpb = property(__get_default_tpb, __set_default_tpb) class BlobReader(object): """BlobReader is a “file-like” class, so it acts much like a file instance opened in `rb` mode. .. important:: DO NOT create instances of this class directly! BlobReader instances are returned automatically in place of output BLOB values when `stream` BLOB access is requested via :meth:`PreparedStatement.set_stream_blob`. .. tip:: BlobReader supports iterator protocol, yielding lines like :meth:`readline`. """ def __init__(self, blobid, db_handle, tr_handle, is_text, charset): self.__closed = False self.__mode = 'rb' self.__bytes_read = 0 self.__pos = 0 self.__index = 0 #self.__bstream = api.Bopen(blobid, db_handle, tr_handle, self.__mode) self.__db_handle = db_handle self.__tr_handle = tr_handle self.__is_text = is_text self.__charset = charset self.__python_charset = charset_map.get(charset, charset) self.__blobid = blobid self.__opened = False self._blob_handle = isc_blob_handle() self._isc_status = ISC_STATUS_ARRAY() def __ensure_open(self): if self.closed: raise ProgrammingError("BlobReader is closed.") if not self.__opened: self.__open() def __open(self): api.isc_open_blob2(self._isc_status, self.__db_handle, self.__tr_handle, self._blob_handle, self.__blobid, 4, bs([ibase.isc_bpb_version1, ibase.isc_bpb_type, 1, ibase.isc_bpb_type_stream])) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_open_blob2:") # Get BLOB total length and max. 
size of segment result = ctypes.cast(ctypes.create_string_buffer(20), buf_pointer) api.isc_blob_info(self._isc_status, self._blob_handle, 2, bs([isc_info_blob_total_length, isc_info_blob_max_segment]), 20, result) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "Cursor.read_output_blob/isc_blob_info:") offset = 0 while bytes_to_uint(result[offset]) != isc_info_end: code = bytes_to_uint(result[offset]) offset += 1 if code == isc_info_blob_total_length: length = bytes_to_uint(result[offset:offset + 2]) self._blob_length = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_max_segment: length = bytes_to_uint(result[offset:offset + 2]) self._segment_size = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 # Create internal buffer self.__buf = ctypes.create_string_buffer(self._segment_size) self.__buf_pos = 0 self.__buf_data = 0 self.__opened = True def __reset_buffer(self): ctypes.memset(self.__buf, 0, self._segment_size) self.__buf_pos = 0 self.__buf_data = 0 def __blob_get(self): self.__reset_buffer() # Load BLOB allow_incomplete_segment_read = True status = ISC_STATUS(0) bytes_read = 0 bytes_actually_read = ctypes.c_ushort(0) status = api.isc_get_segment(self._isc_status, self._blob_handle, bytes_actually_read, self._segment_size, ctypes.byref(self.__buf)) if status != 0: if status == ibase.isc_segstr_eof: self.__buf_data = 0 elif (status == isc_segment) and allow_incomplete_segment_read: self.__buf_data = bytes_actually_read.value else: raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.__BLOB_get/isc_get_segment:") else: self.__buf_data = bytes_actually_read.value def close(self): """Closes the Reader. Like :meth:`file.close`. :raises `~fdb.DatabaseError`: When error is returned by server. """ if self.__opened and not self.closed: self.__closed = True api.isc_close_blob(self._isc_status, self._blob_handle) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.close/isc_close_blob:") def flush(self): """Flush the internal buffer. Like :meth:`file.flush`. Does nothing as it's pointless for reader.""" pass def next(self): """Return the next line from the BLOB. Part of *iterator protocol*. :raises StopIteration: If there are no further lines. """ line = self.readline() if line: return line else: raise StopIteration __next__ = next def __iter__(self): return self def read(self, size=-1): """Read at most size bytes from the file (less if the read hits EOF before obtaining size bytes). If the size argument is negative or omitted, read all data until EOF is reached. The bytes are returned as a string object. An empty string is returned when EOF is encountered immediately. Like :meth:`file.read`. :raises `~fdb.ProgrammingError`: When reader is closed. .. note:: Performs automatic conversion to `unicode` for TEXT BLOBs, if used Python is v3 or `connection charset` is defined. 
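        A reading sketch (assumes a cursor `cur` over a table with a
        stream-mode BLOB column; the column name `MEMO` and table `T` are
        illustrative only)::

            cur.set_stream_blob('MEMO')
            cur.execute('select MEMO from T')
            reader = cur.fetchone()[0]   # a BlobReader
            head = reader.read(1024)     # first KiB
            rest = reader.read()         # remainder until EOF
            reader.close()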
""" self.__ensure_open() if size >= 0: to_read = min(size, self._blob_length - self.__pos) else: to_read = self._blob_length - self.__pos return_size = to_read result = ctypes.create_string_buffer(return_size) pos = 0 while to_read > 0: to_copy = min(to_read, self.__buf_data - self.__buf_pos) if to_copy == 0: self.__blob_get() to_copy = min(to_read, self.__buf_data - self.__buf_pos) if to_copy == 0: # BLOB EOF break ctypes.memmove(ctypes.byref(result, pos), ctypes.byref(self.__buf, self.__buf_pos), to_copy) pos += to_copy self.__pos += to_copy self.__buf_pos += to_copy to_read -= to_copy result = result.raw[:return_size] if (self.__charset or PYTHON_MAJOR_VER == 3) and self.__is_text: result = b2u(result, self.__python_charset) return result def readline(self): """Read one entire line from the file. A trailing newline character is kept in the string (but may be absent when a file ends with an incomplete line). An empty string is returned when EOF is encountered immediately. Like :meth:`file.readline`. :raises `~fdb.ProgrammingError`: When reader is closed. .. note:: Performs automatic conversion to `unicode` for TEXT BLOBs, if used Python is v3 or `connection charset` is defined. """ self.__ensure_open() line = [] to_read = self._blob_length - self.__pos to_copy = 0 found = False while to_read > 0 and not found: to_scan = min(to_read, self.__buf_data - self.__buf_pos) if to_scan == 0: self.__blob_get() to_scan = min(to_read, self.__buf_data - self.__buf_pos) if to_scan == 0: # BLOB EOF break pos = 0 result = '' while pos < to_scan: if self.__buf[self.__buf_pos+pos] == ibase.b('\n'): found = True pos += 1 break pos += 1 result = ctypes.string_at(ctypes.byref(self.__buf, self.__buf_pos), pos) if (self.__charset or PYTHON_MAJOR_VER == 3) and self.__is_text: result = b2u(result, self.__python_charset) line.append(result) self.__buf_pos += pos self.__pos += pos to_read -= pos return ''.join(line) def readlines(self, sizehint=None): """Read until EOF using :meth:`readline` and return a list containing the lines thus read. The optional sizehint argument (if present) is ignored. Like :meth:`file.readlines`. .. note:: Performs automatic conversion to `unicode` for TEXT BLOBs, if used Python is v3 or `connection charset` is defined. """ result = [] line = self.readline() while line: result.append(line) line = self.readline() return result def seek(self, offset, whence=os.SEEK_SET): """Set the file’s current position, like stdio‘s `fseek()`. See :meth:`file.seek` details. :param integer offset: Offset from specified position. :param whence: (Optional) Context for offset. :type whence: os.SEEK_SET, os.SEEK_CUR or os.SEEK_END :raises `~fdb.ProgrammingError`: When reader is closed. .. warning:: If BLOB was NOT CREATED as `stream` BLOB, this method raises :exc:`DatabaseError` exception. This constraint is set by Firebird. """ self.__ensure_open() pos = ISC_LONG(0) api.isc_seek_blob(self._isc_status, self._blob_handle, whence, ISC_LONG(offset), ctypes.byref(pos)) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.seek/isc_blob_info:") self.__pos = pos.value self.__reset_buffer() def tell(self): """Return current position in BLOB, like stdio‘s `ftell()` and :meth:`file.tell`.""" return self.__pos def get_info(self): """Return information about BLOB. 
:returns: Tuple with values: blob_length, segment_size, num_segments, blob_type Meaning of individual values: :blob_length: Total blob length in bytes :segment_size: Size of largest segment :num_segments: Number of segments :blob_type: isc_bpb_type_segmented or isc_bpb_type_stream """ self.__ensure_open() result = ctypes.cast(ctypes.create_string_buffer(30), buf_pointer) api.isc_blob_info(self._isc_status, self._blob_handle, 4, bs([isc_info_blob_total_length, isc_info_blob_max_segment, isc_info_blob_num_segments, isc_info_blob_type]), 30, result) if db_api_error(self._isc_status): raise exception_from_status(DatabaseError, self._isc_status, "BlobReader.get_info/isc_blob_info:") offset = 0 while bytes_to_uint(result[offset]) != isc_info_end: code = bytes_to_uint(result[offset]) offset += 1 if code == isc_info_blob_total_length: length = bytes_to_uint(result[offset:offset + 2]) blob_length = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_max_segment: length = bytes_to_uint(result[offset:offset + 2]) segment_size = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_num_segments: length = bytes_to_uint(result[offset:offset + 2]) num_segments = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 elif code == isc_info_blob_type: length = bytes_to_uint(result[offset:offset + 2]) blob_type = bytes_to_uint(result[ offset + 2:offset + 2 + length]) offset += length + 2 return (blob_length, segment_size, num_segments, blob_type) def __get_closed(self): return self.__closed def __get_mode(self): return self.__mode def __del__(self): self.close() #: (Read Only) (boolean) True if BlobReader is closed. closed = property(__get_closed) #: (Read Only) (string) File mode - always "rb" mode = property(__get_mode) class _RowMapping(object): """An internal dictionary-like class that wraps a row of results in order to map field name to field value. .. warning:: We make ABSOLUTELY NO GUARANTEES about the return value of the `fetch(one|many|all)` methods except that it is a sequence indexed by field position, and no guarantees about the return value of the `fetch(one|many|all)map` methods except that it is a mapping of field name to field value. Therefore, client programmers should NOT rely on the return value being an instance of a particular class or type. """ def __init__(self, description, row): self._description = description fields = self._fields = {} pos = 0 for field_spec in description: # It's possible for a result set from the database engine to return # multiple fields with the same name, but kinterbasdb's key-based # row interface only honors the first (thus setdefault, which won't # store the position if it's already present in self._fields). fields.setdefault(field_spec[DESCRIPTION_NAME], row[pos]) pos += 1 def __len__(self): return len(self._fields) def __getitem__(self, field_name): fields = self._fields # Straightforward, unnormalized lookup will work if the fieldName is # already uppercase and/or if it refers to a database field whose # name is case-sensitive. if field_name in fields: return fields[field_name] else: field_name_normalized = _normalize_db_identifier(field_name) try: return fields[field_name_normalized] except KeyError: raise KeyError('Result set has no field named "%s".
The field' ' name must be one of: (%s)' % (field_name, ', '.join(fields.keys()))) def get(self, field_name, default_value=None): try: return self[field_name] except KeyError: return default_value def __contains__(self, field_name): try: self[field_name] except KeyError: return False else: return True def __str__(self): # Return an easily readable dump of this row's field names and their # corresponding values. return '<%s>' % ', '.join([ '%s = %s' % (field_name, self[field_name]) for field_name in self._fields.keys() ]) def keys(self): # Note that this is an *ordered* list of keys. return [fieldSpec[DESCRIPTION_NAME] for fieldSpec in self._description] def values(self): # Note that this is an *ordered* list of values. return [self[field_name] for field_name in self.keys()] def items(self): return [(field_name, self[field_name]) for field_name in self.keys()] def iterkeys(self): for field_desc in self._description: yield field_desc[DESCRIPTION_NAME] __iter__ = iterkeys def itervalues(self): for field_name in self: yield self[field_name] def iteritems(self): for field_name in self: yield field_name, self[field_name] class _TableAccessStats(object): """An internal class that wraps results from :meth:`~fdb.Connection.get_table_access_stats()`""" def __init__(self, table_id): self.table_id = table_id self.table_name = None self.sequential = None self.indexed = None self.inserts = None self.updates = None self.deletes = None self.backouts = None self.purges = None self.expunges = None def _set_info(self, info_code, value): if info_code == isc_info_read_seq_count: self.sequential = value elif info_code == isc_info_read_idx_count: self.indexed = value elif info_code == isc_info_insert_count: self.inserts = value elif info_code == isc_info_update_count: self.updates = value elif info_code == isc_info_delete_count: self.deletes = value elif info_code == isc_info_backout_count: self.backouts = value elif info_code == isc_info_purge_count: self.purges = value elif info_code == isc_info_expunge_count: self.expunges = value else: raise ProgrammingError("Unsupported info code: %d" % info_code) class _RequestBufferBuilder(object): def __init__(self, clusterIdentifier=None): self.clear() if clusterIdentifier: self._add_code(clusterIdentifier) def render(self): # Convert the RequestBufferBuilder's components to a binary Python str. return b('').join(self._buffer) def clear(self): self._buffer = [] def _extend(self, other_req_builder): self._buffer.append(other_req_builder.render()) def _add_raw(self, raw_buf): assert isinstance(raw_buf, mybytes) self._buffer.append(raw_buf) def _add_code(self, code): self._code2reqbuf(self._buffer, code) def _code2reqbuf(self, req_buf, code): if isinstance(code, str): assert len(code) == 1 code = ord(code) # The database engine considers little-endian integers "portable"; they # need to have been converted to little-endianness before being sent # across the network. req_buf.append(struct.pack('<b', code)) def _set_lock_timeout(self, lock_timeout): if lock_timeout is not None and (lock_timeout < 0 or lock_timeout > UINT_MAX): raise ValueError('Lock timeout must be either None' ' or a non-negative int number of seconds between 0 and' ' %d.' % UINT_MAX) self._lock_timeout = lock_timeout #: (integer) Required lock timeout or None. #: #: **Default:** `None` lock_timeout = property(_get_lock_timeout, _set_lock_timeout) # table_reservation property (an instance of TableReservation): def _get_table_reservation(self): if self._table_reservation is None: self._table_reservation = TableReservation() return self._table_reservation #: (:class:`TableReservation`) Table reservation specification.
#: #: **Default:** `None`. #: #: Instead of changing the value of the TableReservation object itself, you #: must change its elements by manipulating it as though it were a dictionary #: that mapped “TABLE_NAME”: (sharingMode, accessMode). For example: #: #: .. code-block:: python #: #: tpb.table_reservation["MY_TABLE"] = (fdb.isc_tpb_protected, fdb.isc_tpb_lock_write) table_reservation = property(_get_table_reservation) class TableReservation(object): """A dictionary-like helper class that maps “TABLE_NAME”: (sharingMode, accessMode). It performs validation of values assigned to keys. """ _MISSING = object() _SHARING_MODE_STRS = { isc_tpb_shared: 'isc_tpb_shared', isc_tpb_protected: 'isc_tpb_protected', isc_tpb_exclusive: 'isc_tpb_exclusive', } _ACCESS_MODE_STRS = { isc_tpb_lock_read: 'isc_tpb_lock_read', isc_tpb_lock_write: 'isc_tpb_lock_write', } def __init__(self): self._res = {} def copy(self): # A shallow copy is fine. import copy return copy.copy(self) def render(self): """Create valid `table access parameter block` according to current key/value pairs. :returns: (string) Table access definition block. """ if not self: return b('') frags = [] _ = frags.append for table_name, resdefs in self.iteritems(): table_name_len_with_term = len(b(table_name)) + 1 for (sharing_mode, access_mode) in resdefs: _(int2byte(access_mode)) _(struct.pack('<b%ds' % table_name_len_with_term, table_name_len_with_term, b(table_name))) _(int2byte(sharing_mode)) return b('').join(frags) def __str__(self): if not self: return '<TableReservation with no entries>' frags = ['<TableReservation with entries:\n'] for table_name, resdefs in self.iteritems(): frags.append(' "%s":\n' % table_name) for (sharing_mode, access_mode) in resdefs: frags.append(' (%s, %s)\n' % (self._SHARING_MODE_STRS[sharing_mode], self._ACCESS_MODE_STRS[access_mode])) frags.append('>') return ''.join(frags) def keys(self): return self._res.keys() def values(self): return self._res.values() def items(self): return self._res.items() def iterkeys(self): return self._res.iterkeys() def itervalues(self): return self._res.itervalues() def iteritems(self): ### Todo: verify handling of P version differences if PYTHON_MAJOR_VER == 3: return self._res.items() else: return self._res.iteritems() def __setitem__(self, key, value): key = _validateKey(key) key = _normalize_db_identifier(key) # If the += operator is being applied, the form of value will be like: # [(sharingMode0, accessMode0), ..., newSharingMode, newAccessMode] # For the sake of convenience, we detect this situation and handle it # "naturally".
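# Illustrative example of that convenience form (names follow the docs
# above; `tpb` is a TPB instance):
#
#   tpb.table_reservation['MY_TABLE'] = (isc_tpb_shared, isc_tpb_lock_read)
#   tpb.table_reservation['MY_TABLE'] += (isc_tpb_protected, isc_tpb_lock_write)
#
# The second statement delivers value here as
# [(isc_tpb_shared, isc_tpb_lock_read), isc_tpb_protected, isc_tpb_lock_write].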
if isinstance(value, list) and len(value) >= 3: other_values = value[:-2] value = tuple(value[-2:]) else: other_values = None if ((not isinstance(value, tuple)) or len(value) != 2 or value[0] not in (isc_tpb_shared, isc_tpb_protected, isc_tpb_exclusive) or value[1] not in (isc_tpb_lock_read, isc_tpb_lock_write)): raise ValueError("""Table reservation entry must be a 2-tuple of the following form: element 0: sharing mode (one of (isc_tpb_shared, isc_tpb_protected, isc_tpb_exclusive)) element 1: access mode (one of (isc_tpb_lock_read, isc_tpb_lock_write)) %s is not acceptable.""" % str(value)) if other_values is None: value = [value] else: other_values.append(value) value = other_values self._res[key] = value def _validateKey(key): ### Todo: verify handling of P version differences, refactor if PYTHON_MAJOR_VER == 3: acceptable_key = isinstance(key, str) if acceptable_key and isinstance(key, str): try: key.encode('ASCII') except UnicodeEncodeError: acceptable_key = False if not acceptable_key: raise TypeError('Only str keys are allowed.') else: acceptable_key = isinstance(key, basestring) if acceptable_key and isinstance(key, unicode): try: key = key.encode('ASCII') except UnicodeEncodeError: acceptable_key = False if not acceptable_key: raise TypeError('Only str keys are allowed.') return key def _validate_tpb(tpb): if isinstance(tpb, TPB): # TPB's accessor methods perform their own validation, and its # render method takes care of infrastructural trivia. return tpb elif isinstance(tpb, (ListType, TupleType)): return tpb elif not (isinstance(tpb, mybytes) and len(tpb) > 0): raise ProgrammingError('TPB must be non-unicode string of length > 0') # The kinterbasdb documentation promises (or at least strongly implies) # that if the user tries to set a TPB that does not begin with # isc_tpb_version3, kinterbasdb will automatically supply that # infrastructural value. This promise might cause problems in the future, # when isc_tpb_version3 is superseded. A possible solution would be to # check the first byte against all known isc_tpb_versionX version flags, # like this: # if tpb[0] not in (isc_tpb_version3, ..., isc_tpb_versionN): # tpb = isc_tpb_version3 + tpb # That way, compatibility with old versions of the DB server would be # maintained, but client code could optionally specify a newer TPB version. if tpb[0] != isc_tpb_version3: tpb = isc_tpb_version3 + tpb return tpb def _normalize_db_identifier(ident): if ident.startswith('"') and ident.endswith('"'): # Quoted name; leave the case of the field name untouched, but # strip the quotes. return ident[1:-1] else: # Everything else is normalized to uppercase to support case- # insensitive lookup. return ident.upper() fdb-2.0.0/fdb/schema.py0000644000175000001440000062271413270362207015267 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM: fdb # MODULE: schema.py # DESCRIPTION: Python driver for Firebird - Database schema # CREATED: 10.5.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. # # See LICENSE.TXT for details. 
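# Usage sketch (illustrative note, not part of the original header): a Schema
# instance is normally obtained from an established connection rather than
# created directly, e.g.:
#
#   import fdb
#   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
#   schema = con.schema                  # fdb.schema.Schema bound to `con`
#   print([t.name for t in schema.tables])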
import fdb from fdb.utils import LateBindingProperty, ObjectList, Visitable import string import weakref from itertools import groupby import collections # Firebird field type codes FBT_SMALLINT = 7 FBT_INTEGER = 8 FBT_QUAD = 9 FBT_FLOAT = 10 FBT_CHAR = 14 FBT_DOUBLE_PRECISION = 27 FBT_DATE = 35 FBT_VARCHAR = 37 FBT_CSTRING = 40 FBT_BLOB_ID = 45 FBT_BLOB = 261 FBT_SQL_DATE = 12 FBT_SQL_TIME = 13 FBT_SQL_TIMESTAMP = 35 FBT_BIGINT = 16 FBT_BOOLEAN = 23 MAX_INTSUBTYPES = 2 MAX_BLOBSUBTYPES = 8 # Trigger masks TRIGGER_TYPE_SHIFT = 13 TRIGGER_TYPE_MASK = (0x3 << TRIGGER_TYPE_SHIFT) TRIGGER_TYPE_DML = (0 << TRIGGER_TYPE_SHIFT) TRIGGER_TYPE_DB = (1 << TRIGGER_TYPE_SHIFT) TRIGGER_TYPE_DDL = (2 << TRIGGER_TYPE_SHIFT) # Trigger type codes DDL_TRIGGER_ANY = 4611686018427375615 # 9223372036854751229 DDL_TRIGGER_CREATE_TABLE = 1 DDL_TRIGGER_ALTER_TABLE = 2 DDL_TRIGGER_DROP_TABLE = 3 DDL_TRIGGER_CREATE_PROCEDURE = 4 DDL_TRIGGER_ALTER_PROCEDURE = 5 DDL_TRIGGER_DROP_PROCEDURE = 6 DDL_TRIGGER_CREATE_FUNCTION = 7 DDL_TRIGGER_ALTER_FUNCTION = 8 DDL_TRIGGER_DROP_FUNCTION = 9 DDL_TRIGGER_CREATE_TRIGGER = 10 DDL_TRIGGER_ALTER_TRIGGER = 11 DDL_TRIGGER_DROP_TRIGGER = 12 # gap for TRIGGER_TYPE_MASK - 3 bits DDL_TRIGGER_CREATE_EXCEPTION = 16 DDL_TRIGGER_ALTER_EXCEPTION = 17 DDL_TRIGGER_DROP_EXCEPTION = 18 DDL_TRIGGER_CREATE_VIEW = 19 DDL_TRIGGER_ALTER_VIEW = 20 DDL_TRIGGER_DROP_VIEW = 21 DDL_TRIGGER_CREATE_DOMAIN = 22 DDL_TRIGGER_ALTER_DOMAIN = 23 DDL_TRIGGER_DROP_DOMAIN = 24 DDL_TRIGGER_CREATE_ROLE = 25 DDL_TRIGGER_ALTER_ROLE = 26 DDL_TRIGGER_DROP_ROLE = 27 DDL_TRIGGER_CREATE_INDEX = 28 DDL_TRIGGER_ALTER_INDEX = 29 DDL_TRIGGER_DROP_INDEX = 30 DDL_TRIGGER_CREATE_SEQUENCE = 31 DDL_TRIGGER_ALTER_SEQUENCE = 32 DDL_TRIGGER_DROP_SEQUENCE = 33 DDL_TRIGGER_CREATE_USER = 34 DDL_TRIGGER_ALTER_USER = 35 DDL_TRIGGER_DROP_USER = 36 DDL_TRIGGER_CREATE_COLLATION = 37 DDL_TRIGGER_DROP_COLLATION = 38 DDL_TRIGGER_ALTER_CHARACTER_SET = 39 DDL_TRIGGER_CREATE_PACKAGE = 40 DDL_TRIGGER_ALTER_PACKAGE = 41 DDL_TRIGGER_DROP_PACKAGE = 42 DDL_TRIGGER_CREATE_PACKAGE_BODY = 43 DDL_TRIGGER_DROP_PACKAGE_BODY = 44 DDL_TRIGGER_CREATE_MAPPING = 45 DDL_TRIGGER_ALTER_MAPPING = 46 DDL_TRIGGER_DROP_MAPPING = 47 # Lists and disctionary maps COLUMN_TYPES = {None: 'UNKNOWN', FBT_SMALLINT: 'SMALLINT', FBT_INTEGER: 'INTEGER', FBT_QUAD: 'QUAD', FBT_FLOAT: 'FLOAT', FBT_CHAR: 'CHAR', FBT_DOUBLE_PRECISION: 'DOUBLE PRECISION', FBT_VARCHAR: 'VARCHAR', FBT_CSTRING: 'CSTRING', FBT_BLOB_ID: 'BLOB_ID', FBT_BLOB: 'BLOB', FBT_SQL_TIME: 'TIME', FBT_SQL_DATE: 'DATE', FBT_SQL_TIMESTAMP: 'TIMESTAMP', FBT_BIGINT: 'BIGINT', FBT_BOOLEAN: 'BOOLEAN'} INTEGRAL_SUBTYPES = ('UNKNOWN', 'NUMERIC', 'DECIMAL') BLOB_SUBTYPES = ('BINARY', 'TEXT', 'BLR', 'ACL', 'RANGES', 'SUMMARY', 'FORMAT', 'TRANSACTION_DESCRIPTION', 'EXTERNAL_FILE_DESCRIPTION', 'DEBUG_INFORMATION') TRIGGER_PREFIX_TYPES = ['BEFORE', 'AFTER'] TRIGGER_SUFFIX_TYPES = ['', 'INSERT', 'UPDATE', 'DELETE'] TRIGGER_DB_TYPES = ['CONNECT', 'DISCONNECT', 'TRANSACTION START', 'TRANSACTION COMMIT', 'TRANSACTION ROLLBACK'] TRIGGER_DDL_TYPES = [None, "CREATE TABLE", "ALTER TABLE", "DROP TABLE", "CREATE PROCEDURE", "ALTER PROCEDURE", "DROP PROCEDURE", "CREATE FUNCTION", "ALTER FUNCTION", "DROP FUNCTION", "CREATE TRIGGER", "ALTER TRIGGER", "DROP TRIGGER", None, None, None, # gap for TRIGGER_TYPE_MASK - 3 bits "CREATE EXCEPTION", "ALTER EXCEPTION", "DROP EXCEPTION", "CREATE VIEW", "ALTER VIEW", "DROP VIEW", "CREATE DOMAIN", "ALTER DOMAIN", "DROP DOMAIN", "CREATE ROLE", "ALTER ROLE", "DROP ROLE", "CREATE INDEX", "ALTER INDEX", "DROP 
INDEX", "CREATE SEQUENCE", "ALTER SEQUENCE", "DROP SEQUENCE", "CREATE USER", "ALTER USER", "DROP USER", "CREATE COLLATION", "DROP COLLATION", "ALTER CHARACTER SET", "CREATE PACKAGE", "ALTER PACKAGE", "DROP PACKAGE", "CREATE PACKAGE BODY", "DROP PACKAGE BODY", "CREATE MAPPING", "ALTER MAPPING", "DROP MAPPING"] # Collation parameters codes COLLATION_PAD_SPACE = 1 COLLATION_CASE_INSENSITIVE = 2 COLLATION_ACCENT_INSENSITIVE = 4 # Index type names INDEX_TYPE_ASCENDING = 'ASCENDING' INDEX_TYPE_DESCENDING = 'DESCENDING' INDEX_TYPES = [INDEX_TYPE_ASCENDING, INDEX_TYPE_DESCENDING] # Relation type codes RELATION_TYPE_TABLE = 0 RELATION_TYPE_VIEW = 1 RELATION_TYPE_GTT = 5 RELATION_TYPE_GTT_PRESERVE = 4 RELATION_TYPE_GTT_DELETE = 5 # Procedure parameter type codes PROCPAR_DATATYPE = 0 PROCPAR_DOMAIN = 1 PROCPAR_TYPE_OF_DOMAIN = 2 PROCPAR_TYPE_OF_COLUMN = 3 # Section codes for Schema.get_metadata_ddl() SCRIPT_COLLATIONS = 1 SCRIPT_CHARACTER_SETS = 2 SCRIPT_UDFS = 3 SCRIPT_GENERATORS = 4 SCRIPT_EXCEPTIONS = 5 SCRIPT_DOMAINS = 6 SCRIPT_PACKAGE_DEFS = 7 SCRIPT_FUNCTION_DEFS = 8 SCRIPT_PROCEDURE_DEFS = 9 SCRIPT_TABLES = 10 SCRIPT_PRIMARY_KEYS = 11 SCRIPT_UNIQUE_CONSTRAINTS = 12 SCRIPT_CHECK_CONSTRAINTS = 13 SCRIPT_FOREIGN_CONSTRAINTS = 14 SCRIPT_INDICES = 15 SCRIPT_VIEWS = 16 SCRIPT_PACKAGE_BODIES = 17 SCRIPT_PROCEDURE_BODIES = 18 SCRIPT_FUNCTION_BODIES = 19 SCRIPT_TRIGGERS = 20 SCRIPT_ROLES = 21 SCRIPT_GRANTS = 22 SCRIPT_COMMENTS = 23 SCRIPT_SHADOWS = 24 SCRIPT_SET_GENERATORS = 25 SCRIPT_INDEX_DEACTIVATIONS = 26 SCRIPT_INDEX_ACTIVATIONS = 27 SCRIPT_TRIGGER_DEACTIVATIONS = 28 SCRIPT_TRIGGER_ACTIVATIONS = 29 # Schema information collection codes SCHEMA_TABLES = 1 SCHEMA_VIEWS = 2 SCHEMA_DOMAINS = 3 SCHEMA_INDICES = 4 SCHEMA_DEPENDENCIES = 5 SCHEMA_GENERATORS = 6 SCHEMA_SEQUENCES = 6 SCHEMA_TRIGGERS = 7 SCHEMA_PROCEDURES = 8 SCHEMA_CONSTRAINTS = 9 SCHEMA_COLLATIONS = 10 SCHEMA_CHARACTER_SETS = 11 SCHEMA_EXCEPTIONS = 12 SCHEMA_ROLES = 13 SCHEMA_FUNCTIONS = 14 SCHEMA_FILES = 15 SCHEMA_SHADOWS = 16 SCHEMA_PRIVILEGES = 17 SCHEMA_USERS = 18 SCHEMA_PACKAGES = 19 SCHEMA_BACKUP_HISTORY = 20 SCHEMA_FILTERS = 21 # List of default sections (in order) for Schema.get_metadata_ddl() SCRIPT_DEFAULT_ORDER = [SCRIPT_COLLATIONS, SCRIPT_CHARACTER_SETS, SCRIPT_UDFS, SCRIPT_GENERATORS, SCRIPT_EXCEPTIONS, SCRIPT_DOMAINS, SCRIPT_PACKAGE_DEFS, SCRIPT_FUNCTION_DEFS, SCRIPT_PROCEDURE_DEFS, SCRIPT_TABLES, SCRIPT_PRIMARY_KEYS, SCRIPT_UNIQUE_CONSTRAINTS, SCRIPT_CHECK_CONSTRAINTS, SCRIPT_FOREIGN_CONSTRAINTS, SCRIPT_INDICES, SCRIPT_VIEWS, SCRIPT_PACKAGE_BODIES, SCRIPT_PROCEDURE_BODIES, SCRIPT_FUNCTION_BODIES, SCRIPT_TRIGGERS, SCRIPT_GRANTS, SCRIPT_ROLES, SCRIPT_COMMENTS, SCRIPT_SHADOWS, SCRIPT_SET_GENERATORS] # List of reserved Firebird words RESERVED = ['ACTIVE', 'ADD', 'ADMIN', 'AFTER', 'ALL', 'ALTER', 'AND', 'ANY', 'ARE', 'AS', 'ASC', 'ASCENDING', 'AT', 'AUTO', 'AUTODDL', 'AVG', 'BASED', 'BASE_NAME', 'BEFORE', 'BEGIN', 'BETWEEN', 'BIGINT', 'BIT_LENGTH', 'BLOB', 'BLOBEDIT', 'BOTH', 'BUFFER', 'BY', 'BOOLEAN', 'CASE', 'CAST', 'CHAR', 'CHARACTER', 'CHAR_LENGTH', 'CHARACTER_LENGTH', 'CHECK', 'CHECK_POINT_LENGTH', 'CLOSE', 'COALESCE', 'COLLATE', 'COLLATION', 'COLUMN', 'COMMIT', 'COMMITTED', 'COMPILETIME', 'COMPUTED', 'CONDITIONAL', 'CONNECT', 'CONSTRAINT', 'CONTAINING', 'CONTINUE', 'COUNT', 'CREATE', 'CROSS', 'CSTRING', 'CURRENT', 'CURRENT_CONNECTION', 'CURRENT_DATE', 'CURRENT_ROLE', 'CURRENT_TIME', 'CURRENT_TIMESTAMP', 'CURRENT_TRANSACTION', 'CURRENT_USER', 'CORR', 'COVAR_POP', 'COVAR_SAMP', 'DATABASE', 'DATE', 'DAY', 'DB_KEY', 'DEBUG', 
'DEC', 'DECIMAL', 'DECLARE', 'DEFAULT', 'DELETE', 'DELETING', 'DESC', 'DESCENDING', 'DESCRIBE', 'DISCONNECT', 'DISPLAY', 'DISTINCT', 'DO', 'DOMAIN', 'DOUBLE', 'DROP', 'DETERMINISTIC', 'ECHO', 'EDIT', 'ELSE', 'END', 'ENTRY_POINT', 'ESCAPE', 'EVENT', 'EXCEPTION', 'EXECUTE', 'EXISTS', 'EXIT', 'EXTERN', 'EXTERNAL', 'EXTRACT', 'FETCH', 'FILE', 'FILTER', 'FLOAT', 'FOR', 'FOREIGN', 'FOUND', 'FROM', 'FULL', 'FUNCTION', 'FALSE', 'GDSCODE', 'GENERATOR', 'GEN_ID', 'GOTO', 'GRANT', 'GROUP', 'GROUP_COMMIT_WAIT_TIME', 'GLOBAL', 'HAVING', 'HEADING', 'HELP', 'HOUR', 'IF', 'IMMEDIATE', 'IN', 'INACTIVE', 'INDEX', 'INDICATOR', 'INIT', 'INNER', 'INPUT', 'INPUT_TYPE', 'INSERT', 'INSERTING', 'INT', 'INTEGER', 'INTO', 'IS', 'ISOLATION', 'INSENSITIVE', 'JOIN', 'KEY', 'LAST', 'LC_MESSAGES', 'LC_TYPE', 'LEADING', 'LEAVE', 'LEFT', 'LENGTH', 'LEVEL', 'LIKE', 'LOCK', 'LOG_BUFFER_SIZE', 'LONG', 'LOWER', 'MANUAL', 'MAX', 'MAXIMUM', 'MAXIMUM_SEGMENT', 'MAX_SEGMENT', 'MERGE', 'MESSAGE', 'MIN', 'MINIMUM', 'MINUTE', 'MODULE_NAME', 'MONTH', 'NAMES', 'NATIONAL', 'NATURAL', 'NCHAR', 'NO', 'NOAUTO', 'NOT', 'NULL', 'NULLIF', 'NULLS', 'NUM_LOG_BUFFERS', 'NUMERIC', 'OCTET_LENGTH', 'OF', 'ON', 'ONLY', 'OPEN', 'OPTION', 'OR', 'ORDER', 'OUTER', 'OUTPUT', 'OUTPUT_TYPE', 'OVERFLOW', 'OFFSET', 'OVER', 'PAGE', 'PAGELENGTH', 'PAGES', 'PAGE_SIZE', 'PARAMETER', 'PASSWORD', 'PERCENT', 'PLAN', 'POSITION', 'POST_EVENT', 'PRECISION', 'PREPARE', 'PRIMARY', 'PRIVILEGES', 'PROCEDURE', 'PUBLIC', 'QUIT', 'RDB$DB_KEY', 'READ', 'REAL', 'RECORD_VERSION', 'RECREATE', 'REFERENCES', 'RELEASE', 'RESERV', 'RESERVING', 'RETAIN', 'RETURN', 'RETURNING_VALUES', 'RETURNS', 'REVOKE', 'RIGHT', 'ROLLBACK', 'ROW_COUNT', 'ROWS', 'RUNTIME', 'RECURSIVE', 'RDB$RECORD_VERSION', 'REGR_AVGX', 'REGR_AVGY', 'REGR_COUNT', 'REGR_INTERCEPT', 'REGR_R2', 'REGR_SLOPE', 'REGR_SXX', 'REGR_SXY', 'REGR_SYY', 'ROW', 'SAVEPOINT', 'SCHEMA', 'SECOND', 'SELECT', 'SET', 'SHADOW', 'SHARED', 'SHELL', 'SHOW', 'SIMILAR', 'SINGULAR', 'SIZE', 'SMALLINT', 'SNAPSHOT', 'SOME', 'SORT', 'SQL', 'SQLCODE', 'SQLERROR', 'SQLWARNING', 'STABILITY', 'STARTING', 'STARTS', 'STATEMENT', 'STATIC', 'STATISTICS', 'SUB_TYPE', 'SUM', 'SUSPEND', 'SENSITIVE', 'START', 'SCROLL', 'SQLSTATE', 'STDDEV_POP', 'STDDEV_SAMP', 'TABLE', 'TERM', 'TERMINATOR', 'THEN', 'TIES', 'TIME', 'TIMESTAMP', 'TO', 'TRAILING', 'TRANSACTION', 'TRANSLATE', 'TRANSLATION', 'TRIGGER', 'TRIM', 'TRUE', 'UNCOMMITTED', 'UNION', 'UNIQUE', 'UNKNOWN', 'UPDATE', 'UPDATING', 'UPPER', 'USER', 'USING', 'VALUE', 'VALUES', 'VARCHAR', 'VARIABLE', 'VARYING', 'VERSION', 'VIEW', 'VAR_POP', 'VAR_SAMP', 'WAIT', 'WHEN', 'WHENEVER', 'WHERE', 'WHILE', 'WITH', 'WORK', 'WRITE', 'YEAR'] # List of non-reserved Firebird words NON_RESERVED = ['ABS', 'ACCENT', 'ACOS', 'ALWAYS', 'ASCII_CHAR', 'ASCII_VAL', 'ASIN', 'ATAN', 'ATAN2', 'AUTONOMOUS', 'ACTION', 'ABSOLUTE', 'ACOSH', 'ASINH', 'ATANH', 'BIN_AND', 'BIN_OR', 'BIN_NOT', 'BIN_SHL', 'BIN_SHR', 'BIN_XOR', 'BLOCK', 'BACKUP', 'BREAK', 'BODY', #removed 'BASENAME', 'CALLER', 'CEIL', 'CEILING', 'CHAR_TO_UUID', 'CASCADE', 'COMMENT', 'COMMON', 'COS', 'COSH', 'COT', 'CURSOR', 'CONTINUE', #removed 'CACHE','CHECK_POINT_LEN', 'DATEADD', 'DATEDIFF', 'DECODE', 'DIFFERENCE', 'DATA', 'DESCRIPTOR', 'DDL', 'DECRYPT', 'DENSE_RANK', 'EXP', 'ENCRYPT', 'ENGINE', 'FIRSTNAME', 'FLOOR', 'FIRST', 'FREE_IT', 'FIRST_VALUE', 'GEN_UUID', 'GENERATED', 'GRANTED', #removed 'GROUP_COMMIT_WAIT', 'HASH', 'IGNORE', 'IIF', 'IDENTITY', 'INCREMENT', 'LIMBO', 'LIST', 'LN', 'LOG', 'LOG10', 'LPAD', 'LASTNAME', 'LAST_VALUE', 'LAG', 'LEAD', 'LINGER', #removed 
'LOGFILE','LOG_BUF_SEZE', 'MAPPING', 'MATCHED', 'MATCHING', 'MAXVALUE', 'MIDDLENAME', 'MILLISECOND', 'MINVALUE', 'MOD', 'NEXT', 'NAME', 'NTH_VALUE', #removed 'NUM_LOG_BUFS', 'OS_NAME', 'OVERLAY', 'PI', 'PLACING', 'POWER', 'PROTECTED', 'PAD', 'PRESERVE', 'PACKAGE', 'PARTITION', 'PLUGIN', 'PRIOR', 'REPLACE', 'REQUESTS', 'RESTART', 'RETURNING', 'REVERSE', 'ROUND', 'RPAD', 'RAND', 'RESTRICT', 'ROLE', 'RANK', 'RELATIVE', 'ROW_NUMBER', #removed 'RAW_PARTITIONS', 'SEGMENT', 'SEQUENCE', 'SIGN', 'SIN', 'SINH', 'SOURCE', 'SPACE', 'SQLSTATE', 'SQRT', 'SCALAR_ARRAY', 'SKIP', 'SUBSTRING', 'SERVERWIDE', 'TIMEOUT', 'TRUNC', 'TWO_PHASE', 'TAN', 'TANH', 'TYPE', 'TEMPORARY', 'TAGS', 'TRUSTED', 'UUID_TO_CHAR', 'UNDO', 'USAGE', 'WEEK', 'WEEKDAY', 'YEARDAY'] #--- Functions def get_grants(privileges, grantors=None): """Get a list of the minimal set of SQL GRANT statements necessary to grant the specified privileges. :param list privileges: List of :class:`Privilege` instances. :param list grantors: List of standard grantor names. Generates a GRANTED BY clause for privileges granted by a user that's not in the list. """ tp = {'S':'SELECT', 'I':'INSERT', 'U':'UPDATE', 'D':'DELETE', 'R':'REFERENCES'} def skey(item): return (item.user_name, item.user_type, item.grantor_name, item.subject_name, item.subject_type, item.has_grant(), item.privilege in tp, item.privilege, str(item.field_name),) def gkey(item): return (item.user_name, item.user_type, item.grantor_name, item.subject_name, item.subject_type, item.has_grant(), item.privilege in tp,) def gkey2(item): return item.privilege grants = [] p = list(privileges) p.sort(key=skey) for k, g in groupby(p, gkey): g = list(g) item = g[0] if item.has_grant(): admin_option = ' WITH %s OPTION' % ('ADMIN' if item.privilege == 'M' else 'GRANT') else: admin_option = '' uname = item.user_name user = item.user if isinstance(user, Procedure): utype = 'PROCEDURE ' elif isinstance(user, Trigger): utype = 'TRIGGER ' elif isinstance(user, View): utype = 'VIEW ' else: utype = '' sname = item.subject_name if (grantors is not None) and (item.grantor_name not in grantors): granted_by = ' GRANTED BY %s' % item.grantor_name else: granted_by = '' priv_list = [] for k, items in groupby(g, gkey2): items = list(items) item = items[0] if item.privilege in tp: privilege = tp[item.privilege] if len(items) > 1: privilege += '(%s)' % ','.join(i.field_name for i in items if i.field_name) elif item.field_name is not None: privilege += '(%s)' % item.field_name priv_list.append(privilege) elif item.privilege == 'X': # procedure privilege = 'EXECUTE ON PROCEDURE ' else: # role membership privilege = '' if priv_list: privilege = ', '.join(priv_list) privilege += ' ON ' grants.append('GRANT %s%s TO %s%s%s%s' % (privilege, sname, utype, uname, admin_option, granted_by)) return grants def iskeyword(ident): "Return True if `ident` is (any) Firebird keyword." return (ident in RESERVED) or (ident in NON_RESERVED) def escape_single_quotes(text): "Return `text` with any single quotes escaped (doubled)." return text.replace("'", "''") #--- Classes class Schema(Visitable): """This class represents database schema.
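Example (a minimal sketch; assumes an established :class:`~fdb.Connection` `con`):

.. code-block:: python

   schema = con.schema                    # Schema bound to the connection
   table = schema.get_table('EMPLOYEE')   # None if there is no such table
   for index in schema.indices:
       print(index.name)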
""" #: option switch: Always quote db object names on output opt_always_quote = False #: option switch: Keyword for generator/sequence opt_generator_keyword = 'SEQUENCE' #: Datatype declaration methods for procedure parameters: key = numID, value = name enum_param_type_from = {PROCPAR_DATATYPE: 'DATATYPE', PROCPAR_DOMAIN: 'DOMAIN', PROCPAR_TYPE_OF_DOMAIN: 'TYPE OF DOMAIN', PROCPAR_TYPE_OF_COLUMN: 'TYPE OF COLUMN'} #: Object types: key = numID, value = type_name enum_object_types = dict() #: Object type codes: key = type_name, value = numID enum_object_type_codes = dict() #: Character set names: key = numID, value = charset_name enum_character_set_names = dict() #: Field types: key = numID, value = type_name enum_field_types = dict() #: Field sub types: key = numID, value = type_name enum_field_subtypes = dict() #: Function types: key = numID, value = type_name enum_function_types = dict() #: Mechanism Types: key = numID, value = type_name enum_mechanism_types = dict() #: Parameter Mechanism Types: key = numID, value = type_name enum_parameter_mechanism_types = dict() #: Procedure Types: key = numID, value = type_name enum_procedure_types = dict() #: Relation Types: key = numID, value = type_name enum_relation_types = dict() #: System Flag Types: key = numID, value = type_name enum_system_flag_types = dict() #: Transaction State Types: key = numID, value = type_name enum_transaction_state_types = dict() #: Trigger Types: key = numID, value = type_name enum_trigger_types = dict() # Firebird 3.0 #: Parameter Types: key = numID, value = type_name enum_parameter_types = dict() #: Index activity status: key = numID, value = flag_name enum_index_activity_flags = dict() #: Index uniqueness: key = numID, value = flag_name enum_index_unique_flags = dict() #: Trigger activity status: key = numID, value = flag_name_name enum_trigger_activity_flags = dict() #: Grant option: key = numID, value = option_name enum_grant_options = dict() #: Page type: key = numID, value = type_name enum_page_types = dict() #: Privacy flags: numID, value = flag_name enum_privacy_flags = dict() #: Legacy flags: numID, value = flag_name enum_legacy_flags = dict() #: Determinism flags: numID, value = flag_name enum_determinism_flags = dict() def __init__(self): self._con = None self._ic = None self.__internal = False self.__clear() def __del__(self): if not self.closed: self._close() def __fail_if_closed(self): if self.closed: raise fdb.ProgrammingError("Schema is not binded to connection.") def _close(self): self._ic.close() self._con = None self._ic = None def _set_as_internal(self): """Mark this instance as `internal` (embedded). 
This blocks calls to :meth:`bind` and :meth:`close`.""" self.__internal = True self._con = weakref.proxy(self._con) def __object_by_name(self, _list, name): if name is None: return None for o in _list: if o.name == name: return o return None def __clear(self, data=None): if data: if not isinstance(data, collections.Iterable): data = (data, ) else: data = range(1, SCHEMA_FILTERS + 1) for item in data: if item == SCHEMA_TABLES: self.__tables = None elif item == SCHEMA_VIEWS: self.__views = None elif item == SCHEMA_DOMAINS: self.__domains = None elif item == SCHEMA_INDICES: self.__indices = None self.__constraint_indices = None elif item == SCHEMA_DEPENDENCIES: self.__dependencies = None elif item == SCHEMA_GENERATORS: self.__generators = None elif item == SCHEMA_TRIGGERS: self.__triggers = None elif item == SCHEMA_PROCEDURES: self.__procedures = None elif item == SCHEMA_CONSTRAINTS: self.__constraints = None elif item == SCHEMA_COLLATIONS: self.__collations = None elif item == SCHEMA_CHARACTER_SETS: self.__character_sets = None elif item == SCHEMA_EXCEPTIONS: self.__exceptions = None elif item == SCHEMA_ROLES: self.__roles = None elif item == SCHEMA_FUNCTIONS: self.__functions = None elif item == SCHEMA_FILES: self.__files = None elif item == SCHEMA_SHADOWS: self.__shadows = None elif item == SCHEMA_PRIVILEGES: self.__privileges = None elif item == SCHEMA_USERS: self.__users = None elif item == SCHEMA_PACKAGES: self.__packages = None elif item == SCHEMA_BACKUP_HISTORY: self.__backup_history = None elif item == SCHEMA_FILTERS: self.__filters = None else: raise fdb.ProgrammingError("Unknown metadata category '%d'" % item) #--- protected def _select_row(self, cmd, params=None): if params: self._ic.execute(cmd, params) else: self._ic.execute(cmd) return self._ic.fetchonemap() def _select(self, cmd, params=None): if params: self._ic.execute(cmd, params) else: self._ic.execute(cmd) return self._ic.itermap() def _get_field_dimensions(self, field): return [(r[0], r[1]) for r in self._ic.execute("""SELECT r.RDB$LOWER_BOUND, r.RDB$UPPER_BOUND FROM RDB$FIELD_DIMENSIONS r where r.RDB$FIELD_NAME = '%s' order by r.RDB$DIMENSION""" % field.name)] def _get_item(self, name, itype, subname=None): if itype == 0: # Relation return self.get_table(name) elif itype == 1: # View return self.get_view(name) elif itype == 2: # Trigger return self.get_trigger(name) elif itype == 5: # Procedure return self.get_procedure(name) elif itype == 8: # User result = self.__object_by_name(self._get_users(), name) if not result: result = fdb.services.User(name) self.__users.append(result) return result elif itype == 9: # Field return self.get_table(name).get_column(subname) elif itype == 10: # Index return self.get_index(name) elif itype == 13: # Role return self.get_role(name) elif itype == 14: # Generator return self.get_sequence(name) elif itype == 15: # UDF return self.get_function(name) elif itype == 17: # Collation return self.get_collation(name) elif itype in [17, 18]: # Package return self.get_package(name) else: raise fdb.ProgrammingError('Unsupported subject type') #--- special attribute access methods def _get_default_character_set(self): return self.get_character_set(self._default_charset_name) def _get_collations(self): if self.__collations is None: self.__fail_if_closed() self._ic.execute("select * from rdb$collations") self.__collations = ObjectList((Collation(self, row) for row in self._ic.itermap()), Collation, 'item.name') self.__collations.freeze() return self.__collations def _get_character_sets(self): if 
self.__character_sets is None: self.__fail_if_closed() self._ic.execute("select * from rdb$character_sets") self.__character_sets = ObjectList((CharacterSet(self, row) for row in self._ic.itermap()), CharacterSet, 'item.name') self.__character_sets.freeze() return self.__character_sets def _get_exceptions(self): if self.__exceptions is None: self.__fail_if_closed() self._ic.execute("select * from rdb$exceptions") self.__exceptions = ObjectList((DatabaseException(self, row) for row in self._ic.itermap()), DatabaseException, 'item.name') self.__exceptions.freeze() return self.__exceptions def _get_all_domains(self): if self.__domains is None: self.__fail_if_closed() cols = ['RDB$FIELD_NAME', 'RDB$VALIDATION_SOURCE', 'RDB$COMPUTED_SOURCE', 'RDB$DEFAULT_SOURCE', 'RDB$FIELD_LENGTH', 'RDB$FIELD_SCALE', 'RDB$FIELD_TYPE', 'RDB$FIELD_SUB_TYPE', 'RDB$DESCRIPTION', 'RDB$SYSTEM_FLAG', 'RDB$SEGMENT_LENGTH', 'RDB$EXTERNAL_LENGTH', 'RDB$EXTERNAL_SCALE', 'RDB$EXTERNAL_TYPE', 'RDB$DIMENSIONS', 'RDB$NULL_FLAG', 'RDB$CHARACTER_LENGTH', 'RDB$COLLATION_ID', 'RDB$CHARACTER_SET_ID', 'RDB$FIELD_PRECISION'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$SECURITY_CLASS', 'RDB$OWNER_NAME']) self._ic.execute("""select %s from RDB$FIELDS""" % ','.join(cols)) self.__domains = ObjectList((Domain(self, row) for row in self._ic.itermap()), Domain, 'item.name') self.__domains.freeze() return self.__domains def _get_domains(self): return self._get_all_domains().filter(lambda item: not item.issystemobject()) def _get_sysdomains(self): return self._get_all_domains().filter(lambda item: item.issystemobject()) def _get_all_tables(self): if self.__tables is None: self.__fail_if_closed() self._ic.execute("select * from rdb$relations where rdb$view_blr is null") self.__tables = ObjectList((Table(self, row) for row in self._ic.itermap()), Table, 'item.name') self.__tables.freeze() return self.__tables def _get_tables(self): return self._get_all_tables().filter(lambda item: not item.issystemobject()) def _get_systables(self): return self._get_all_tables().filter(lambda item: item.issystemobject()) def _get_all_views(self): if self.__views is None: self.__fail_if_closed() self._ic.execute("select * from rdb$relations where rdb$view_blr is not null") self.__views = ObjectList((View(self, row) for row in self._ic.itermap()), View, 'item.name') self.__views.freeze() return self.__views def _get_views(self): return self._get_all_views().filter(lambda item: not item.issystemobject()) def _get_sysviews(self): return self._get_all_views().filter(lambda item: item.issystemobject()) def _get_constraint_indices(self): if self.__constraint_indices is None: self.__fail_if_closed() self._ic.execute("""select RDB$INDEX_NAME, RDB$CONSTRAINT_NAME from RDB$RELATION_CONSTRAINTS where RDB$INDEX_NAME is not null""") self.__constraint_indices = dict((key.strip(), value.strip()) for key, value in self._ic) return self.__constraint_indices def _get_all_indices(self): if self.__indices is None: self.__fail_if_closed() # Dummy call to _get_constraint_indices() is necessary as # Index.issystemobject() that is called in Index.__init__() will # drop result from internal cursor and we'll not load all indices. 
self._get_constraint_indices() self._ic.execute("""select RDB$INDEX_NAME, RDB$RELATION_NAME, RDB$INDEX_ID, RDB$UNIQUE_FLAG, RDB$DESCRIPTION, RDB$SEGMENT_COUNT, RDB$INDEX_INACTIVE, RDB$INDEX_TYPE, RDB$FOREIGN_KEY, RDB$SYSTEM_FLAG, RDB$EXPRESSION_SOURCE, RDB$STATISTICS from RDB$INDICES""") self.__indices = ObjectList((Index(self, row) for row in self._ic.itermap()), Index, 'item.name') self.__indices.freeze() return self.__indices def _get_indices(self): return self._get_all_indices().filter(lambda item: not item.issystemobject()) def _get_sysindices(self): return self._get_all_indices().filter(lambda item: item.issystemobject()) def _get_all_generators(self): if self.__generators is None: self.__fail_if_closed() cols = ['RDB$GENERATOR_NAME', 'RDB$GENERATOR_ID', 'RDB$DESCRIPTION', 'RDB$SYSTEM_FLAG'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$SECURITY_CLASS', 'RDB$OWNER_NAME', 'RDB$INITIAL_VALUE', 'RDB$GENERATOR_INCREMENT']) self._ic.execute("select %s from rdb$generators" % ','.join(cols)) self.__generators = ObjectList((Sequence(self, row) for row in self._ic.itermap()), Sequence, 'item.name') self.__generators.freeze() return self.__generators def _get_generators(self): return self._get_all_generators().filter(lambda item: not item.issystemobject()) def _get_sysgenerators(self): return self._get_all_generators().filter(lambda item: item.issystemobject()) def _get_all_triggers(self): if self.__triggers is None: self.__fail_if_closed() cols = ['RDB$TRIGGER_NAME', 'RDB$RELATION_NAME', 'RDB$TRIGGER_SEQUENCE', 'RDB$TRIGGER_TYPE', 'RDB$TRIGGER_SOURCE', 'RDB$DESCRIPTION', 'RDB$TRIGGER_INACTIVE', 'RDB$SYSTEM_FLAG', 'RDB$FLAGS'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$VALID_BLR', 'RDB$ENGINE_NAME', 'RDB$ENTRYPOINT']) self._ic.execute("select %s from RDB$TRIGGERS" % ','.join(cols)) self.__triggers = ObjectList((Trigger(self, row) for row in self._ic.itermap()), Trigger, 'item.name') self.__triggers.freeze() return self.__triggers def _get_triggers(self): return self._get_all_triggers().filter(lambda item: not item.issystemobject()) def _get_systriggers(self): return self._get_all_triggers().filter(lambda item: item.issystemobject()) def _get_all_procedures(self): if self.__procedures is None: self.__fail_if_closed() cols = ['RDB$PROCEDURE_NAME', 'RDB$PROCEDURE_ID', 'RDB$PROCEDURE_INPUTS', 'RDB$PROCEDURE_OUTPUTS', 'RDB$DESCRIPTION', 'RDB$PROCEDURE_SOURCE', 'RDB$SECURITY_CLASS', 'RDB$OWNER_NAME', 'RDB$SYSTEM_FLAG'] if self._con.ods >= fdb.ODS_FB_21: cols.extend(['RDB$PROCEDURE_TYPE', 'RDB$VALID_BLR']) if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$ENGINE_NAME', 'RDB$ENTRYPOINT', 'RDB$PACKAGE_NAME', 'RDB$PRIVATE_FLAG']) self._ic.execute("select %s from rdb$procedures" % ','.join(cols)) self.__procedures = ObjectList((Procedure(self, row) for row in self._ic.itermap()), Procedure, 'item.name') self.__procedures.freeze() return self.__procedures def _get_procedures(self): return self._get_all_procedures().filter(lambda item: not item.issystemobject()) def _get_sysprocedures(self): return self._get_all_procedures().filter(lambda item: item.issystemobject()) def _get_constraints(self): if self.__constraints is None: self.__fail_if_closed() # Dummy call to _get_all_tables() is necessary as # Constraint.issystemobject() that is called in Constraint.__init__() # will drop result from internal cursor and we'll not load all constraints. 
self._get_all_tables() self._ic.execute("""select c.RDB$CONSTRAINT_NAME, c.RDB$CONSTRAINT_TYPE, c.RDB$RELATION_NAME, c.RDB$DEFERRABLE, c.RDB$INITIALLY_DEFERRED, c.RDB$INDEX_NAME, r.RDB$CONST_NAME_UQ, r.RDB$MATCH_OPTION,r.RDB$UPDATE_RULE,r.RDB$DELETE_RULE, k.RDB$TRIGGER_NAME from rdb$relation_constraints C left outer join rdb$ref_constraints R on C.rdb$constraint_name = R.rdb$constraint_name left outer join rdb$check_constraints K on (C.rdb$constraint_name = K.rdb$constraint_name) and (c.RDB$CONSTRAINT_TYPE in ('CHECK','NOT NULL'))""") self.__constraints = ObjectList((Constraint(self, row) for row in self._ic.itermap()), Constraint, 'item.name') # Check constrains need special care because they're doubled # (select above returns two records for them with different trigger names) checks = self.__constraints.extract(lambda item: item.ischeck()) dchecks = {} for check in checks: dchecks.setdefault(check.name, list()).append(check) for checklist in dchecks.values(): names = [c._attributes['RDB$TRIGGER_NAME'] for c in checklist] check = checklist[0] check._attributes['RDB$TRIGGER_NAME'] = names self.__constraints.append(check) self.__constraints.freeze() return self.__constraints def _get_roles(self): if self.__roles is None: self.__fail_if_closed() self._ic.execute("select * from rdb$roles") self.__roles = ObjectList((Role(self, row) for row in self._ic.itermap()), Role, 'item.name') self.__roles.freeze() return self.__roles def _get_dependencies(self): if self.__dependencies is None: self.__fail_if_closed() self._ic.execute("select * from rdb$dependencies") self.__dependencies = ObjectList((Dependency(self, row) for row in self._ic.itermap()), Dependency) return self.__dependencies def _get_all_functions(self): if self.__functions is None: self.__fail_if_closed() cols = ['RDB$FUNCTION_NAME', 'RDB$FUNCTION_TYPE', 'RDB$MODULE_NAME', 'RDB$ENTRYPOINT', 'RDB$DESCRIPTION', 'RDB$RETURN_ARGUMENT', 'RDB$SYSTEM_FLAG'] if self._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$ENGINE_NAME', 'RDB$PACKAGE_NAME', 'RDB$PRIVATE_FLAG', 'RDB$FUNCTION_SOURCE', 'RDB$FUNCTION_ID', 'RDB$VALID_BLR', 'RDB$SECURITY_CLASS', 'RDB$OWNER_NAME', 'RDB$LEGACY_FLAG', 'RDB$DETERMINISTIC_FLAG']) self._ic.execute("select %s from rdb$functions" % ','.join(cols)) self.__functions = ObjectList((Function(self, row) for row in self._ic.itermap()), Function, 'item.name') self.__functions.freeze() return self.__functions def _get_functions(self): return self._get_all_functions().filter(lambda item: not item.issystemobject()) def _get_sysfunctions(self): return self._get_all_functions().filter(lambda item: item.issystemobject()) def _get_files(self): if self.__files is None: self.__fail_if_closed() self._ic.execute("""select RDB$FILE_NAME, RDB$FILE_SEQUENCE, RDB$FILE_START, RDB$FILE_LENGTH from RDB$FILES where RDB$SHADOW_NUMBER = 0 order by RDB$FILE_SEQUENCE""") self.__files = ObjectList((DatabaseFile(self, row) for row in self._ic.itermap()), DatabaseFile, 'item.name') self.__files.freeze() return self.__files def _get_shadows(self): if self.__shadows is None: self.__fail_if_closed() self._ic.execute("""select RDB$FILE_FLAGS, RDB$SHADOW_NUMBER from RDB$FILES where RDB$SHADOW_NUMBER > 0 AND RDB$FILE_SEQUENCE = 0 order by RDB$SHADOW_NUMBER""") self.__shadows = ObjectList((Shadow(self, row) for row in self._ic.itermap()), Shadow, 'item.name') self.__shadows.freeze() return self.__shadows def _get_privileges(self): if self.__privileges is None: self.__fail_if_closed() self._ic.execute("""select RDB$USER, RDB$GRANTOR, RDB$PRIVILEGE, 
RDB$GRANT_OPTION, RDB$RELATION_NAME, RDB$FIELD_NAME, RDB$USER_TYPE, RDB$OBJECT_TYPE FROM RDB$USER_PRIVILEGES""") self.__privileges = ObjectList((Privilege(self, row) for row in self._ic.itermap()), Privilege) return self.__privileges def _get_backup_history(self): if self.__backup_history is None: self.__fail_if_closed() self._ic.execute("""SELECT RDB$BACKUP_ID, RDB$TIMESTAMP, RDB$BACKUP_LEVEL, RDB$GUID, RDB$SCN, RDB$FILE_NAME FROM RDB$BACKUP_HISTORY""") self.__backup_history = ObjectList((BackupHistory(self, row) for row in self._ic.itermap()), BackupHistory, 'item.name') self.__backup_history.freeze() return self.__backup_history def _get_filters(self): if self.__filters is None: self.__fail_if_closed() self._ic.execute("""SELECT RDB$FUNCTION_NAME, RDB$DESCRIPTION, RDB$MODULE_NAME, RDB$ENTRYPOINT, RDB$INPUT_SUB_TYPE, RDB$OUTPUT_SUB_TYPE, RDB$SYSTEM_FLAG FROM RDB$FILTERS""") self.__filters = ObjectList((Filter(self, row) for row in self._ic.itermap()), Filter, 'item.name') self.__filters.freeze() return self.__filters def _get_users(self): if self.__users is None: self.__fail_if_closed() self._ic.execute("select distinct(RDB$USER) FROM RDB$USER_PRIVILEGES") self.__users = ObjectList((fdb.services.User(row[0].strip()) for row in self._ic), fdb.services.User, 'item.name') return self.__users def _get_packages(self): if self.__packages is None: if self._con.ods >= fdb.ODS_FB_30: self.__fail_if_closed() self._ic.execute("""select RDB$PACKAGE_NAME, RDB$PACKAGE_HEADER_SOURCE, RDB$PACKAGE_BODY_SOURCE, RDB$VALID_BODY_FLAG, RDB$SECURITY_CLASS, RDB$OWNER_NAME, RDB$SYSTEM_FLAG, RDB$DESCRIPTION FROM RDB$PACKAGES""") self.__packages = ObjectList((Package(self, row) for row in self._ic.itermap()), Package, 'item.name') else: self.__packages = ObjectList(_cls=Package, key_expr='item.name') self.__packages.freeze() return self.__packages #--- Properties #: True if link to :class:`~fdb.Connection` is closed. 
closed = property(lambda self: self._con is None) description = property(lambda self: self.__description, doc="Database description or None if it doesn't have a description.") owner_name = property(lambda self: self.__owner, doc="Database owner name.") default_character_set = LateBindingProperty(_get_default_character_set, doc="Default :class:`CharacterSet` for database") security_class = property(lambda self: self.__security_class, doc="Can refer to the security class applied as databasewide access control limits.") collations = LateBindingProperty(_get_collations, doc=":class:`~fdb.utils.ObjectList` of all collations in database.\nItems are :class:`Collation` objects.") character_sets = LateBindingProperty(_get_character_sets, doc=":class:`~fdb.utils.ObjectList` of all character sets in database.\nItems are :class:`CharacterSet` objects.") exceptions = LateBindingProperty(_get_exceptions, doc=":class:`~fdb.utils.ObjectList` of all exceptions in database.\nItems are :class:`DatabaseException` objects.") generators = LateBindingProperty(_get_generators, doc=":class:`~fdb.utils.ObjectList` of all user generators in database.\nItems are :class:`Sequence` objects.") sysgenerators = LateBindingProperty(_get_sysgenerators, doc=":class:`~fdb.utils.ObjectList` of all system generators in database.\nItems are :class:`Sequence` objects.") sequences = LateBindingProperty(_get_generators, doc=":class:`~fdb.utils.ObjectList` of all user generators in database.\nItems are :class:`Sequence` objects.") syssequences = LateBindingProperty(_get_sysgenerators, doc=":class:`~fdb.utils.ObjectList` of all system generators in database.\nItems are :class:`Sequence` objects.") domains = LateBindingProperty(_get_domains, doc=":class:`~fdb.utils.ObjectList` of all user domains in database.\nItems are :class:`Domain` objects.") sysdomains = LateBindingProperty(_get_sysdomains, doc=":class:`~fdb.utils.ObjectList` of all system domains in database.\nItems are :class:`Domain` objects.") indices = LateBindingProperty(_get_indices, doc=":class:`~fdb.utils.ObjectList` of all user indices in database.\nItems are :class:`Index` objects.") sysindices = LateBindingProperty(_get_sysindices, doc=":class:`~fdb.utils.ObjectList` of all system indices in database.\nItems are :class:`Index` objects.") tables = LateBindingProperty(_get_tables, doc=":class:`~fdb.utils.ObjectList` of all user tables in database.\nItems are :class:`Table` objects.") systables = LateBindingProperty(_get_systables, doc=":class:`~fdb.utils.ObjectList` of all system tables in database.\nItems are :class:`Table` objects.") views = LateBindingProperty(_get_views, doc=":class:`~fdb.utils.ObjectList` of all user views in database.\nItems are :class:`View` objects.") sysviews = LateBindingProperty(_get_sysviews, doc=":class:`~fdb.utils.ObjectList` of all system views in database.\nItems are :class:`View` objects.") triggers = LateBindingProperty(_get_triggers, doc=":class:`~fdb.utils.ObjectList` of all user triggers in database.\nItems are :class:`Trigger` objects.") systriggers = LateBindingProperty(_get_systriggers, doc=":class:`~fdb.utils.ObjectList` of all system triggers in database.\nItems are :class:`Trigger` objects.") procedures = LateBindingProperty(_get_procedures, doc=":class:`~fdb.utils.ObjectList` of all user procedures in database.\nItems are :class:`Procedure` objects.") sysprocedures = LateBindingProperty(_get_sysprocedures, doc=":class:`~fdb.utils.ObjectList` of all system procedures in database.\nItems are :class:`Procedure` objects.") 
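# Example (illustrative): the ObjectList properties below support direct key
# lookup and filtering, e.g.:
#
#   proc = con.schema.procedures.get('MY_PROC')   # None if absent
#   no_desc = con.schema.tables.filter(lambda t: t.description is None)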
constraints = LateBindingProperty(_get_constraints, doc=":class:`~fdb.utils.ObjectList` of all constraints in database.\nItems are :class:`Constraint` objects.") roles = LateBindingProperty(_get_roles, doc=":class:`~fdb.utils.ObjectList` of all roles in database.\nItems are :class:`Role` objects.") dependencies = LateBindingProperty(_get_dependencies, doc=":class:`~fdb.utils.ObjectList` of all dependencies in database.\nItems are :class:`Dependency` objects.") functions = LateBindingProperty(_get_functions, doc=":class:`~fdb.utils.ObjectList` of all user functions defined in database.\nItems are :class:`Function` objects.") sysfunctions = LateBindingProperty(_get_sysfunctions, doc=":class:`~fdb.utils.ObjectList` of all system functions defined in database.\nItems are :class:`Function` objects.") files = LateBindingProperty(_get_files, doc=":class:`~fdb.utils.ObjectList` of all extension files defined for database.\nItems are :class:`DatabaseFile` objects.") shadows = LateBindingProperty(_get_shadows, doc=":class:`~fdb.utils.ObjectList` of all shadows defined for database.\nItems are :class:`Shadow` objects.") privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of all privileges defined for database.\nItems are :class:`Privilege` objects.") backup_history = LateBindingProperty(_get_backup_history, doc=":class:`~fdb.utils.ObjectList` of all nbackup history records.\nItems are :class:`BackupHistory` objects.") filters = LateBindingProperty(_get_filters, doc=":class:`~fdb.utils.ObjectList` of all user-defined BLOB filters.\nItems are :class:`Filter` objects.") # FB 3 packages = LateBindingProperty(_get_packages, doc=":class:`~fdb.utils.ObjectList` of all packages defined for database.\nItems are :class:`Package` objects.") linger = property(lambda self: self.__linger, doc="Database linger value.") #--- Public def bind(self, connection): """Bind this instance to specified :class:`~fdb.Connection`. :param connection: :class:`~fdb.Connection` instance. :raises `~fdb.ProgrammingError`: If Schema object was set as internal (via :meth:`_set_as_internal`). """ if self.__internal: raise fdb.ProgrammingError("Call to 'bind' not allowed for embedded Schema.") if self._con: self.close() self._con = connection self._ic = self._con.query_transaction.cursor() self.__clear() self._ic.execute('select * from RDB$DATABASE') row = self._ic.fetchonemap() self.__description = row['RDB$DESCRIPTION'] self.__linger = row.get('RDB$LINGER') self._default_charset_name = row['RDB$CHARACTER_SET_NAME'].strip() self.__security_class = row['RDB$SECURITY_CLASS'] if self.__security_class: self.__security_class = self.__security_class.strip() self._ic.execute("select RDB$OWNER_NAME from RDB$RELATIONS where RDB$RELATION_NAME = 'RDB$DATABASE'") self.__owner = self._ic.fetchone()[0].strip() # Load enumerated types defined in RDB$TYPES table enum_select = 'select RDB$TYPE, RDB$TYPE_NAME from RDB$TYPES where RDB$FIELD_NAME = ?'
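# enum_dict() below turns the RDB$TYPES rows for one RDB$FIELD_NAME into a
# {numeric_id: 'TYPE_NAME'} mapping; e.g. (illustrative values)
# enum_dict('RDB$OBJECT_TYPE') -> {0: 'RELATION', 1: 'VIEW', 2: 'TRIGGER', ...}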
def enum_dict(enum_type): return dict((key, value.strip()) for key, value in self._ic.execute(enum_select, (enum_type,))) # Object types self.enum_object_types = enum_dict('RDB$OBJECT_TYPE') # Object type codes self.enum_object_type_codes = dict(((value, key) for key, value in self.enum_object_types.items())) # Character set names self.enum_character_set_names = enum_dict('RDB$CHARACTER_SET_NAME') # Field types self.enum_field_types = enum_dict('RDB$FIELD_TYPE') # Field sub types self.enum_field_subtypes = enum_dict('RDB$FIELD_SUB_TYPE') # Function types self.enum_function_types = enum_dict('RDB$FUNCTION_TYPE') # Mechanism Types self.enum_mechanism_types = enum_dict('RDB$MECHANISM') # Parameter Mechanism Types self.enum_parameter_mechanism_types = enum_dict('RDB$PARAMETER_MECHANISM') # Procedure Types self.enum_procedure_types = enum_dict('RDB$PROCEDURE_TYPE') if not self.enum_procedure_types: self.enum_procedure_types = {0: 'LEGACY', 1: 'SELECTABLE', 2: 'EXECUTABLE'} # Relation Types self.enum_relation_types = enum_dict('RDB$RELATION_TYPE') # System Flag Types self.enum_system_flag_types = enum_dict('RDB$SYSTEM_FLAG') # Transaction State Types self.enum_transaction_state_types = enum_dict('RDB$TRANSACTION_STATE') # Trigger Types self.enum_trigger_types = enum_dict('RDB$TRIGGER_TYPE') # Firebird 3.0 # Parameter Types self.enum_parameter_types = enum_dict('RDB$PARAMETER_TYPE') # Index activity self.enum_index_activity_flags = enum_dict('RDB$INDEX_INACTIVE') # Index uniqueness self.enum_index_unique_flags = enum_dict('RDB$UNIQUE_FLAG') # Trigger activity self.enum_trigger_activity_flags = enum_dict('RDB$TRIGGER_INACTIVE') # Grant options self.enum_grant_options = enum_dict('RDB$GRANT_OPTION') # Page types self.enum_page_types = enum_dict('RDB$PAGE_TYPE') # Privacy self.enum_privacy_flags = enum_dict('RDB$PRIVATE_FLAG') # Legacy self.enum_legacy_flags = enum_dict('RDB$LEGACY_FLAG') # Determinism self.enum_determinism_flags = enum_dict('RDB$DETERMINISTIC_FLAG') def close(self): """Sever link to :class:`~fdb.Connection`. :raises `~fdb.ProgrammingError`: If Schema object was set as internal (via :meth:`_set_as_internal`). """ if self.__internal: raise fdb.ProgrammingError("Call to 'close' not allowed for embedded Schema.") self._close() self.__clear() #--- Basic Database manipulation routines def clear(self): "Drop all cached metadata objects." self.__clear() def reload(self, data=None): """Drop all or specified category of cached metadata objects, so they're reloaded from database on next reference. :param string data: `None`, metadata category code or iterable with category codes. .. note:: Category codes are defined by `SCHEMA_*` globals. :raises fdb.ProgrammingError: For undefined metadata category. .. note:: Also commits query transaction. """ self.__clear(data) if not self.closed: self._ic.transaction.commit() def get_metadata_ddl(self, sections=SCRIPT_DEFAULT_ORDER): """Return list of DDL SQL commands for creation of specified categories of database objects. :param list sections: List of section identifiers. :returns: List with SQL commands. Section identifiers are represented by `SCRIPT_*` constants defined in the schema module. Sections are created in the order of occurrence in the list. Uses the `SCRIPT_DEFAULT_ORDER` list when sections are not specified.
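Example (a sketch; assumes a bound schema):

.. code-block:: python

   ddl = con.schema.get_metadata_ddl(sections=[fdb.schema.SCRIPT_TABLES,
                                               fdb.schema.SCRIPT_PRIMARY_KEYS])
   print(';\n'.join(ddl))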
""" def order_by_dependency(items, get_dependencies): ordered = [] wlist = list(items) while len(wlist) > 0: item = wlist.pop(0) add = True for dep in get_dependencies(item): if isinstance(dep.depended_on, View) and dep.depended_on not in ordered: wlist.append(item) add = False break if add: ordered.append(item) return ordered def view_dependencies(item): return [x for x in item.get_dependencies() if x.depended_on_type == 1] # script = [] for section in sections: if section == SCRIPT_COLLATIONS: for collation in self.collations: if not collation.issystemobject(): script.append(collation.get_sql_for('create')) elif section == SCRIPT_CHARACTER_SETS: for charset in self.character_sets: if charset.name != charset.default_collate.name: script.append(charset.get_sql_for('alter', collation=charset.default_collate.name)) elif section == SCRIPT_UDFS: for udf in self.functions: if udf.isexternal(): script.append(udf.get_sql_for('declare')) elif section == SCRIPT_GENERATORS: for generator in self.generators: script.append(generator.get_sql_for('create')) elif section == SCRIPT_EXCEPTIONS: for e in self.exceptions: script.append(e.get_sql_for('create')) elif section == SCRIPT_DOMAINS: for domain in self.domains: script.append(domain.get_sql_for('create')) elif section == SCRIPT_PACKAGE_DEFS: for package in self.packages: script.append(package.get_sql_for('create')) elif section == SCRIPT_FUNCTION_DEFS: for func in (x for x in self.functions if not x.isexternal() and not x.ispackaged()): script.append(func.get_sql_for('create', no_code=True)) elif section == SCRIPT_PROCEDURE_DEFS: for proc in (x for x in self.procedures if not x.ispackaged()): script.append(proc.get_sql_for('create', no_code=True)) elif section == SCRIPT_TABLES: for table in self.tables: script.append(table.get_sql_for('create', no_pk=True, no_unique=True)) elif section == SCRIPT_PRIMARY_KEYS: for constraint in (x for x in self.constraints if x.ispkey()): script.append(constraint.get_sql_for('create')) elif section == SCRIPT_UNIQUE_CONSTRAINTS: for table in self.tables: for constraint in (x for x in table.constraints if x.isunique()): script.append(constraint.get_sql_for('create')) elif section == SCRIPT_CHECK_CONSTRAINTS: for table in self.tables: for constraint in (x for x in table.constraints if x.ischeck()): script.append(constraint.get_sql_for('create')) elif section == SCRIPT_FOREIGN_CONSTRAINTS: for table in self.tables: for constraint in (x for x in table.constraints if x.isfkey()): script.append(constraint.get_sql_for('create')) elif section == SCRIPT_INDICES: for table in self.tables: for index in (x for x in table.indices if not x.isenforcer()): script.append(index.get_sql_for('create')) elif section == SCRIPT_VIEWS: for view in order_by_dependency(self.views, view_dependencies): script.append(view.get_sql_for('create')) elif section == SCRIPT_PACKAGE_BODIES: for package in self.packages: script.append(package.get_sql_for('create', body=True)) elif section == SCRIPT_PROCEDURE_BODIES: for proc in (x for x in self.procedures if not x.ispackaged()): script.append('ALTER' + proc.get_sql_for('create')[6:]) elif section == SCRIPT_FUNCTION_BODIES: for func in (x for x in self.functions if not x.isexternal() and not x.ispackaged()): script.append('ALTER' + func.get_sql_for('create')[6:]) elif section == SCRIPT_TRIGGERS: for trigger in self.triggers: script.append(trigger.get_sql_for('create')) elif section == SCRIPT_ROLES: for role in (x for x in self.roles if not x.issystemobject()): script.append(role.get_sql_for('create')) elif 
section == SCRIPT_GRANTS:
                for priv in (x for x in self.privileges
                             if x.user_name != 'SYSDBA' and not x.subject.issystemobject()):
                    script.append(priv.get_sql_for('grant'))
            elif section == SCRIPT_COMMENTS:
                for objects in [self.character_sets, self.collations, self.exceptions,
                                self.domains, self.generators, self.tables, self.indices,
                                self.views, self.triggers, self.procedures, self.functions,
                                self.roles]:
                    for obj in objects:
                        if obj.description is not None:
                            script.append(obj.get_sql_for('comment'))
                        if isinstance(obj, (Table, View)):
                            for col in obj.columns:
                                if col.description is not None:
                                    script.append(col.get_sql_for('comment'))
                        elif isinstance(obj, Procedure):
                            for par in obj.input_params:
                                if par.description is not None:
                                    script.append(par.get_sql_for('comment'))
                            for par in obj.output_params:
                                if par.description is not None:
                                    script.append(par.get_sql_for('comment'))
            elif section == SCRIPT_SHADOWS:
                for shadow in self.shadows:
                    script.append(shadow.get_sql_for('create'))
            elif section == SCRIPT_INDEX_DEACTIVATIONS:
                for index in self.indices:
                    script.append(index.get_sql_for('deactivate'))
            elif section == SCRIPT_INDEX_ACTIVATIONS:
                for index in self.indices:
                    script.append(index.get_sql_for('activate'))
            elif section == SCRIPT_SET_GENERATORS:
                for generator in self.generators:
                    script.append(generator.get_sql_for('alter', value=generator.value))
            elif section == SCRIPT_TRIGGER_DEACTIVATIONS:
                for trigger in self.triggers:
                    script.append(trigger.get_sql_for('alter', active=False))
            elif section == SCRIPT_TRIGGER_ACTIVATIONS:
                for trigger in self.triggers:
                    script.append(trigger.get_sql_for('alter', active=True))
            else:
                raise ValueError("Unknown section code %s" % section)
        return script

    def ismultifile(self):
        "Returns True if database has multiple files."
        return len(self.files) > 0

    def get_collation(self, name):
        """Get :class:`Collation` by name.

        :param string name: Collation name.

        :returns: :class:`Collation` with specified name or `None`.
        """
        return self.collations.get(name)

    def get_character_set(self, name):
        """Get :class:`CharacterSet` by name.

        :param string name: Character set name.

        :returns: :class:`CharacterSet` with specified name or `None`.
        """
        return self.character_sets.get(name)

    def get_exception(self, name):
        """Get :class:`DatabaseException` by name.

        :param string name: Exception name.

        :returns: :class:`DatabaseException` with specified name or `None`.
        """
        return self.exceptions.get(name)

    def get_generator(self, name):
        """Get :class:`Sequence` by name.

        :param string name: Sequence name.

        :returns: :class:`Sequence` with specified name or `None`.
        """
        return self._get_all_generators().get(name)
    get_sequence = get_generator

    def get_index(self, name):
        """Get :class:`Index` by name.

        :param string name: Index name.

        :returns: :class:`Index` with specified name or `None`.
        """
        return self._get_all_indices().get(name)

    def get_domain(self, name):
        """Get :class:`Domain` by name.

        :param string name: Domain name.

        :returns: :class:`Domain` with specified name or `None`.
        """
        return self._get_all_domains().get(name)

    def get_table(self, name):
        """Get :class:`Table` by name.

        :param string name: Table name.

        :returns: :class:`Table` with specified name or `None`.
        """
        return self._get_all_tables().get(name)

    def get_view(self, name):
        """Get :class:`View` by name.

        :param string name: View name.

        :returns: :class:`View` with specified name or `None`.
        """
        return self._get_all_views().get(name)

    def get_trigger(self, name):
        """Get :class:`Trigger` by name.

        :param string name: Trigger name.
        :returns: :class:`Trigger` with specified name or `None`.
        """
        return self._get_all_triggers().get(name)

    def get_procedure(self, name):
        """Get :class:`Procedure` by name.

        :param string name: Procedure name.

        :returns: :class:`Procedure` with specified name or `None`.
        """
        return self._get_all_procedures().get(name)

    def get_constraint(self, name):
        """Get :class:`Constraint` by name.

        :param string name: Constraint name.

        :returns: :class:`Constraint` with specified name or `None`.
        """
        return self.constraints.get(name)

    def get_role(self, name):
        """Get :class:`Role` by name.

        :param string name: Role name.

        :returns: :class:`Role` with specified name or `None`.
        """
        return self.roles.get(name)

    def get_function(self, name):
        """Get :class:`Function` by name.

        :param string name: Function name.

        :returns: :class:`Function` with specified name or `None`.
        """
        return self._get_all_functions().get(name)

    def get_collation_by_id(self, charset_id, collation_id):
        """Get :class:`Collation` by ID.

        :param integer charset_id: Character set ID.
        :param integer collation_id: Collation ID.

        :returns: :class:`Collation` with specified ID or `None`.
        """
        for collation in self.collations:
            if (collation._attributes['RDB$CHARACTER_SET_ID'] == charset_id) and (collation.id == collation_id):
                return collation
        else:
            return None

    def get_character_set_by_id(self, id):
        """Get :class:`CharacterSet` by ID.

        :param integer id: Character set ID.

        :returns: :class:`CharacterSet` with specified ID or `None`.
        """
        for charset in self.character_sets:
            if charset.id == id:
                return charset
        else:
            return None

    def get_privileges_of(self, user, user_type=None):
        """Get list of all privileges granted to user/database object.

        :param user: User name or instance of class that represents possible user.
            Allowed classes are :class:`~fdb.services.User`, :class:`Table`,
            :class:`View`, :class:`Procedure`, :class:`Trigger` or :class:`Role`.
        :param int user_type: **Required if** `user` is provided as string name.
            Numeric code for user type, see :attr:`Schema.enum_object_types`.

        :returns: List of :class:`Privilege` objects.

        :raises `~fdb.ProgrammingError`: For unknown `user_type` code or
            unsupported `user` object.
        """
        if isinstance(user, (fdb.StringType, fdb.UnicodeType)):
            if (user_type is None) or (user_type not in self.enum_object_types):
                raise fdb.ProgrammingError("Unknown user_type code.")
            else:
                uname = user
                utype = [user_type]
        elif isinstance(user, (Table, View, Procedure, Trigger, Role)):
            uname = user.name
            utype = user._type_code
        elif isinstance(user, fdb.services.User):
            uname = user.name
            utype = [8]
        else:
            raise fdb.ProgrammingError("Unsupported user object.")
        return [p for p in self.privileges
                if (p.user_name == uname) and (p.user_type in utype)]

    def get_package(self, name):
        """Get :class:`Package` by name.

        :param string name: Package name.

        :returns: :class:`Package` with specified name or `None`.
        """
        return self.packages.get(name)


class BaseSchemaItem(Visitable):
    """Base class for all database schema objects."""
    #: Weak reference to parent :class:`Schema` instance.
schema = None def __init__(self, schema, attributes): self.schema = schema if isinstance(schema, weakref.ProxyType) else weakref.proxy(schema) self._type_code = [] self._attributes = dict(attributes) self._actions = [] #--- protected def _strip_attribute(self, attr): if self._attributes.get(attr): self._attributes[attr] = self._attributes[attr].strip() def _check_params(self, params, param_names): p = set(params.keys()) n = set(param_names) if not p.issubset(n): raise fdb.ProgrammingError("Unsupported parameter(s) '%s'" % ','.join(p.difference(n))) def _needs_quoting(self, ident): if not ident: return False if self.schema.opt_always_quote: return True if len(ident) >= 1 and ident[0] not in string.ascii_uppercase: return True for char in ident: if char not in string.ascii_uppercase + string.digits + '$_': return True return iskeyword(ident) def _get_quoted_ident(self, ident): if self._needs_quoting(ident): return '"%s"' % ident else: return ident def _get_name(self): return None def _get_description(self): return self._attributes.get('RDB$DESCRIPTION') def _get_actions(self): return self._actions def _get_recreate_sql(self, **params): return 'RE'+self._get_create_sql(**params) def _get_create_or_alter_sql(self, **params): return 'CREATE OR ALTER' + self._get_create_sql(**params)[6:] #--- properties name = LateBindingProperty(_get_name, doc="Database object name or None if object doesn't have a name.") description = LateBindingProperty(_get_description, doc="Database object description or None if object doesn't have a description.") actions = LateBindingProperty(_get_actions, doc="List of supported SQL operations on metadata object instance.") #--- Public def issystemobject(self): "Returns True if this database object is system object." return True if self._attributes.get('RDB$SYSTEM_FLAG', False) else False def get_quoted_name(self): "Returns quoted (if necessary) name." return self._get_quoted_ident(self.name) def get_dependents(self): "Returns list of all database objects that depend on this one." return [d for d in self.schema.dependencies if d.depended_on_name == self.name and d.depended_on_type in self._type_code] def get_dependencies(self): "Returns list of database objects that this object depend on." return [d for d in self.schema.dependencies if d.dependent_name == self.name and d.dependent_type in self._type_code] def get_sql_for(self, action, **params): """Returns SQL command for specified action on metadata object. Supported actions are defined by :attr:`actions` list. :raises `~fdb.ProgrammingError`: For unsupported action or wrong parameters passed. """ _action = action.lower() if _action in self._actions: _call = getattr(self, '_get_%s_sql' % _action) return _call(**params) else: raise fdb.ProgrammingError("Unsupported action '%s'" % action) class Collation(BaseSchemaItem): """Represents collation. 
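
    Usage sketch (illustrative names, not part of the API; assumes a
    :class:`Schema` instance ``schema`` bound to a connection)::

        coll = schema.get_collation('ES_ES_CI_AI')
        if coll is not None and not coll.issystemobject():
            print(coll.get_sql_for('create'))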
Supported SQL actions: - User collation: create, drop, comment - System collation: comment """ def __init__(self, schema, attributes): super(Collation, self).__init__(schema, attributes) self._type_code = [17,] self._strip_attribute('RDB$COLLATION_NAME') self._strip_attribute('RDB$BASE_COLLATION_NAME') self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'drop']) #--- Protected def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP COLLATION %s' % self.get_quoted_name() def _get_create_sql(self, **params): self._check_params(params, []) base_sql = """CREATE COLLATION %s FOR %s %s %s %s %s %s""" % (self.get_quoted_name(), self.character_set.get_quoted_name(), ("FROM EXTERNAL ('%s')" % self._attributes['RDB$BASE_COLLATION_NAME'] if self.isbasedonexternal() else "FROM %s" % self.base_collation.get_quoted_name()), 'PAD SPACE' if self.ispadded() else 'NO PAD', 'CASE INSENSITIVE' if self.iscaseinsensitive() else 'CASE SENSITIVE', 'ACCENT INSENSITIVE' if self.isaccentinsensitive() else 'ACCENT SENSITIVE', "'%s'" % self.specific_attributes if self.specific_attributes else '') return base_sql.strip() def _get_comment_sql(self, **params): return 'COMMENT ON COLLATION %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$COLLATION_NAME'] def _get_id(self): return self._attributes['RDB$COLLATION_ID'] def _get_character_set(self): """Return :class:`CharacterSet` object to which this collation belongs. """ return self.schema.get_character_set_by_id(self._attributes['RDB$CHARACTER_SET_ID']) def _get_base_collation(self): base_name = self._attributes['RDB$BASE_COLLATION_NAME'] return self.schema.get_collation(base_name) if base_name else None def _get_attributes(self): return self._attributes['RDB$COLLATION_ATTRIBUTES'] def _get_specific_attributes(self): return self._attributes['RDB$SPECIFIC_ATTRIBUTES'] def _get_function_name(self): return self._attributes['RDB$FUNCTION_NAME'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- Properties id = LateBindingProperty(_get_id, doc="Collation ID.") character_set = LateBindingProperty(_get_character_set, doc="Character set object associated with collation.") base_collation = LateBindingProperty(_get_base_collation, doc="Base Collation object that's extended by this one or None.") attributes = LateBindingProperty(_get_attributes, doc="Collation attributes.") specific_attributes = LateBindingProperty(_get_specific_attributes, doc="Collation specific attributes.") function_name = LateBindingProperty(_get_function_name, doc="Not currently used.") # FB 3.0 security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.") owner_name = LateBindingProperty(_get_owner_name, doc="Creator user name.") #--- Public def ispadded(self): """Returns True if collation has PAD SPACE attribute.""" return bool(self.attributes & COLLATION_PAD_SPACE) def iscaseinsensitive(self): "Returns True if collation has CASE INSENSITIVE attribute." return bool(self.attributes & COLLATION_CASE_INSENSITIVE) def isaccentinsensitive(self): "Returns True if collation has ACCENT INSENSITIVE attribute." 
return bool(self.attributes & COLLATION_ACCENT_INSENSITIVE) def isbasedonexternal(self): "Returns True if collation is based on external collation definition." return self._attributes['RDB$BASE_COLLATION_NAME'] and not self.base_collation class CharacterSet(BaseSchemaItem): """Represents character set. Supported SQL actions: alter(collation=Collation instance or collation name), comment """ def __init__(self, schema, attributes): super(CharacterSet, self).__init__(schema, attributes) self._type_code = [11,] self._strip_attribute('RDB$CHARACTER_SET_NAME') self._strip_attribute('RDB$DEFAULT_COLLATE_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self._actions = ['alter', 'comment'] #--- protected def _get_alter_sql(self, **params): self._check_params(params, ['collation']) collation = params.get('collation') if collation: return ('ALTER CHARACTER SET %s SET DEFAULT COLLATION %s' % (self.get_quoted_name(), collation.get_quoted_name() if isinstance(collation, Collation) else collation)) else: raise fdb.ProgrammingError("Missing required parameter: 'collation'.") def _get_comment_sql(self, **params): return 'COMMENT ON CHARACTER SET %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$CHARACTER_SET_NAME'] def _get_id(self): return self._attributes['RDB$CHARACTER_SET_ID'] def _get_bytes_per_character(self): return self._attributes['RDB$BYTES_PER_CHARACTER'] def _get_default_collate(self): return self.get_collation(self._attributes['RDB$DEFAULT_COLLATE_NAME']) def _get_collations(self): return self.schema.collations.filter(lambda item: item._attributes['RDB$CHARACTER_SET_ID'] == self.id) def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- properties id = LateBindingProperty(_get_id, doc="Character set ID.") bytes_per_character = LateBindingProperty(_get_bytes_per_character, doc="Size of characters in bytes.") default_collate = LateBindingProperty(_get_default_collate, doc="Collate object of default collate.") collations = LateBindingProperty(_get_collations, doc=":class:`~fdb.utils.ObjectList` of Collations associated with character set.") # FB 3.0 security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.") owner_name = LateBindingProperty(_get_owner_name, doc="Creator user name.") #--- Public def get_collation(self, name): """Return :class:`Collation` object with specified name that belongs to this character set. """ return self.collations.get(name) def get_collation_by_id(self, id): """Return :class:`Collation` object with specified id that belongs to this character set. """ for col in self.collations: if col.id == id: return col return None class DatabaseException(BaseSchemaItem): """Represents database exception. 
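
    Usage sketch (names are illustrative; assumes a bound :class:`Schema`
    ``schema``)::

        ex = schema.get_exception('E_INVALID_INPUT')
        if ex is not None:
            print(ex.get_sql_for('alter', message='Input value is not valid.'))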
Supported SQL actions: - User exception: create, recreate, alter(message=string), create_or_alter, drop, comment - System exception: comment """ def __init__(self, schema, attributes): super(DatabaseException, self).__init__(schema, attributes) self._type_code = [7,] self._strip_attribute('RDB$EXCEPTION_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'recreate', 'alter', 'create_or_alter', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, []) return "CREATE EXCEPTION %s '%s'" % (self.get_quoted_name(), escape_single_quotes(self.message)) def _get_alter_sql(self, **params): self._check_params(params, ['message']) message = params.get('message') if message: return "ALTER EXCEPTION %s '%s'" % (self.get_quoted_name(), escape_single_quotes(message)) else: raise fdb.ProgrammingError("Missing required parameter: 'message'.") def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP EXCEPTION %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON EXCEPTION %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$EXCEPTION_NAME'] def _get_id(self): return self._attributes['RDB$EXCEPTION_NUMBER'] def _get_message(self): return self._attributes['RDB$MESSAGE'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- Properties id = LateBindingProperty(_get_id, doc="System-assigned unique exception number.") message = LateBindingProperty(_get_message, doc="Custom message text.") # FB 3.0 security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.") owner_name = LateBindingProperty(_get_owner_name, doc="Creator user name.") class Sequence(BaseSchemaItem): """Represents database generator/sequence. 
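
    Usage sketch (illustrative; assumes a bound :class:`Schema` ``schema``)::

        seq = schema.get_sequence('G_CUSTOMER_ID')
        if seq is not None:
            print(seq.value)  # queries GEN_ID(<sequence>, 0)
            print(seq.get_sql_for('alter', value=1000))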
Supported SQL actions: - User sequence: create, alter(value=number), drop, comment - System sequence: comment """ def __init__(self, schema, attributes): super(Sequence, self).__init__(schema, attributes) self._type_code = [14,] self._strip_attribute('RDB$GENERATOR_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'alter', 'drop']) #--- protected def _get_create_sql(self, **params): self._check_params(params, []) return 'CREATE %s %s' % (self.schema.opt_generator_keyword, self.get_quoted_name()) def _get_alter_sql(self, **params): self._check_params(params, ['value']) value = params.get('value') if value is not None: return "ALTER %s %s RESTART WITH %d" % (self.schema.opt_generator_keyword, self.get_quoted_name(), value) else: raise fdb.ProgrammingError("Missing required parameter: 'value'.") def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP %s %s' % (self.schema.opt_generator_keyword, self.get_quoted_name()) def _get_comment_sql(self, **params): return 'COMMENT ON %s %s IS %s' % (self.schema.opt_generator_keyword, self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$GENERATOR_NAME'] def _get_id(self): return self._attributes['RDB$GENERATOR_ID'] def _get_value(self): return self.schema._select_row("select GEN_ID(%s,0) from RDB$DATABASE" % self.get_quoted_name())['GEN_ID'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') def _get_inital_value(self): return self._attributes.get('RDB$INITIAL_VALUE') def _get_increment(self): return self._attributes.get('RDB$GENERATOR_INCREMENT') #--- Properties id = LateBindingProperty(_get_id, doc="Internal ID number of the sequence.") value = LateBindingProperty(_get_value, doc="Current sequence value.") # FB 3.0 security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.") owner_name = LateBindingProperty(_get_owner_name, doc="Creator user name.") inital_value = LateBindingProperty(_get_inital_value, doc="Initial sequence value.") increment = LateBindingProperty(_get_increment, doc="Sequence increment.") #--- Public def isidentity(self): "Returns True for system generators created for IDENTITY columns." return self._attributes['RDB$SYSTEM_FLAG'] == 6 class TableColumn(BaseSchemaItem): """Represents table column. 
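
    Usage sketch (table and column names are illustrative; assumes a bound
    :class:`Schema` ``schema``)::

        col = schema.get_table('EMPLOYEE').get_column('PHONE_EXT')
        if col is not None:
            print(col.get_sql_for('alter', position=4))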
Supported SQL actions:

    - User column: alter(name=string,datatype=string_SQLTypeDef,position=number,
      expression=computed_by_expr,restart=None_or_init_value), drop, comment
    - System column: comment
    """
    def __init__(self, schema, table, attributes):
        super(TableColumn, self).__init__(schema, attributes)
        self._type_code = [3, 9]
        self.__table = weakref.proxy(table)
        self._strip_attribute('RDB$FIELD_NAME')
        self._strip_attribute('RDB$RELATION_NAME')
        self._strip_attribute('RDB$FIELD_SOURCE')
        self._strip_attribute('RDB$SECURITY_CLASS')
        self._strip_attribute('RDB$GENERATOR_NAME')
        self._actions = ['comment']
        if not self.issystemobject():
            self._actions.extend(['alter', 'drop'])

    #--- Protected

    def _get_alter_sql(self, **params):
        self._check_params(params, ['expression', 'datatype', 'name', 'position', 'restart'])
        new_expr = params.get('expression')
        new_type = params.get('datatype')
        new_name = params.get('name')
        new_position = params.get('position')
        if new_expr and not self.iscomputed():
            raise fdb.ProgrammingError("Change from persistent column to computed"
                                       " is not allowed.")
        elif self.iscomputed() and (new_type and not new_expr):
            raise fdb.ProgrammingError("Change from computed column to persistent"
                                       " is not allowed.")
        sql = 'ALTER TABLE %s ALTER COLUMN %s' % (self.table.get_quoted_name(),
                                                  self.get_quoted_name())
        if new_name:
            return '%s TO %s' % (sql, self._get_quoted_ident(new_name))
        elif new_position:
            return '%s POSITION %d' % (sql, new_position)
        elif new_type or new_expr:
            result = sql
            if new_type:
                result += ' TYPE %s' % new_type
            if new_expr:
                result += ' COMPUTED BY %s' % new_expr
            return result
        elif 'restart' in params:
            restart = params.get('restart')
            sql += ' RESTART'
            if restart is not None:
                sql += ' WITH %d' % restart
            return sql
        else:
            raise fdb.ProgrammingError("Parameter required.")
    def _get_drop_sql(self, **params):
        self._check_params(params, [])
        return 'ALTER TABLE %s DROP %s' % (self.table.get_quoted_name(),
                                           self.get_quoted_name())
    def _get_comment_sql(self, **params):
        return 'COMMENT ON COLUMN %s.%s IS %s' % (self.table.get_quoted_name(),
                                                  self.get_quoted_name(),
                                                  'NULL' if self.description is None
                                                  else "'%s'" % escape_single_quotes(self.description))
    def _get_name(self):
        return self._attributes['RDB$FIELD_NAME']
    def _get_id(self):
        return self._attributes['RDB$FIELD_ID']
    def _get_table(self):
        return self.__table
    def _get_domain(self):
        return self.schema.get_domain(self._attributes['RDB$FIELD_SOURCE'])
    def _get_position(self):
        return self._attributes['RDB$FIELD_POSITION']
    def _get_security_class(self):
        return self._attributes['RDB$SECURITY_CLASS']
    def _get_default(self):
        result = self._attributes.get('RDB$DEFAULT_SOURCE')
        if result:
            if result.upper().startswith('DEFAULT '):
                result = result[8:]
        return result
    def _get_collation(self):
        return self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'],
                                               self._attributes['RDB$COLLATION_ID'])
    def _get_datatype(self):
        return self.domain.datatype
    def _get_privileges(self):
        return self.schema.privileges.filter(lambda p: (p.subject_name == self.table.name and
                                                        p.field_name == self.name and
                                                        p.subject_type in self.table._type_code))
    def _get_generator(self):
        return self.schema.get_generator(self._attributes.get('RDB$GENERATOR_NAME'))
    def _get_identity_type(self):
        return self._attributes.get('RDB$IDENTITY_TYPE')

    #--- Properties

    id = LateBindingProperty(_get_id, doc="Internal number ID for the column.")
    table = LateBindingProperty(_get_table, doc="The Table object this column belongs to.")
    domain = LateBindingProperty(_get_domain, doc="Domain object this column is based on.")
    position = LateBindingProperty(_get_position, doc="Column's sequence number in row.")
    security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.")
    default = LateBindingProperty(_get_default, doc="Default value for column or None.")
    collation = LateBindingProperty(_get_collation, doc="Collation object or None.")
    datatype = LateBindingProperty(_get_datatype, doc="Complete SQL datatype definition.")
    privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of :class:`Privilege` objects granted to this object.")
    # FB 3.0
    generator = LateBindingProperty(_get_generator, doc="Identity generator (:class:`Sequence`) for IDENTITY column or None.")
    identity_type = LateBindingProperty(_get_identity_type, doc="Identity type of the column or None.")

    #--- Public

    def get_dependents(self):
        "Return list of all database objects that depend on this one."
        return [d for d in self.schema.dependencies
                if d.depended_on_name == self._attributes['RDB$RELATION_NAME'] and
                d.depended_on_type == 0 and d.field_name == self.name]
    def get_dependencies(self):
        "Return list of database objects that this object depends on."
        return [d for d in self.schema.dependencies
                if d.dependent_name == self._attributes['RDB$RELATION_NAME'] and
                d.dependent_type == 0 and d.field_name == self.name]
    def get_computedby(self):
        "Returns (string) expression for column computation or None."
        return self.domain.expression
    def iscomputed(self):
        "Returns True if column is computed."
        return bool(self.domain.expression)
    def isdomainbased(self):
        "Returns True if column is based on user domain."
        return not self.domain.issystemobject()
    def isnullable(self):
        "Returns True if column can accept NULL values."
        return not self._attributes['RDB$NULL_FLAG']
    def iswritable(self):
        "Returns True if column is writable (i.e. it's not computed etc.)."
        return bool(self._attributes['RDB$UPDATE_FLAG'])
    def isidentity(self):
        "Returns True for identity type column."
        return self._attributes.get('RDB$IDENTITY_TYPE') is not None
    def has_default(self):
        "Returns True if column has default value."
        return bool(self._attributes.get('RDB$DEFAULT_SOURCE'))


class Index(BaseSchemaItem):
    """Represents database index.
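
    Usage sketch (the index name is illustrative; assumes a bound
    :class:`Schema` ``schema``)::

        idx = schema.get_index('CUSTNAMEX')
        if idx is not None:
            print(idx.get_sql_for('recompute'))  # SET STATISTICS INDEX ...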
Supported SQL actions: - User index: create, activate, deactivate, recompute, drop, comment - System index: activate, recompute, comment """ def __init__(self, schema, attributes): super(Index, self).__init__(schema, attributes) self._type_code = [6, 10] self.__segment_names = None self.__segment_statistics = None self._strip_attribute('RDB$INDEX_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FOREIGN_KEY') self._actions = ['activate', 'recompute', 'comment'] if not self.issystemobject(): self._actions.extend(['create', 'deactivate', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, []) return """CREATE %s%s INDEX %s ON %s %s""" % ('UNIQUE ' if self.isunique() else '', self.index_type, self.get_quoted_name(), self.table.name, 'COMPUTED BY %s' % self.expression if self.isexpression() else '(%s)' % ','.join(self.segment_names)) def _get_activate_sql(self, **params): self._check_params(params, []) return 'ALTER INDEX %s ACTIVE' % self.get_quoted_name() def _get_deactivate_sql(self, **params): self._check_params(params, []) return 'ALTER INDEX %s INACTIVE' % self.get_quoted_name() def _get_recompute_sql(self, **params): self._check_params(params, []) return 'SET STATISTICS INDEX %s' % self.get_quoted_name() def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP INDEX %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON INDEX %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$INDEX_NAME'] def _get_table(self): return self.schema.get_table(self._attributes['RDB$RELATION_NAME']) def _get_id(self): return self._attributes['RDB$INDEX_ID'] def _get_index_type(self): return (INDEX_TYPE_DESCENDING if self._attributes['RDB$INDEX_TYPE'] == 1 else INDEX_TYPE_ASCENDING) def _get_partner_index(self): pname = self._attributes['RDB$FOREIGN_KEY'] return self.schema.get_index(pname) if pname else None def _get_expression(self): return self._attributes['RDB$EXPRESSION_SOURCE'] def _get_statistics(self): return self._attributes['RDB$STATISTICS'] def _get_segments(self): return ObjectList(self.table.get_column(colname) for colname in self.segment_names) def _get_segment_names(self): if self.__segment_names is None: if self._attributes['RDB$SEGMENT_COUNT'] > 0: self.__segment_names = [r['RDB$FIELD_NAME'].strip() for r in self.schema._select("""select rdb$field_name from rdb$index_segments where rdb$index_name = ? order by rdb$field_position""", (self.name,))] else: self.__segment_names = [] return self.__segment_names def _get_segment_statistics(self): if self.__segment_statistics is None: if self._attributes['RDB$SEGMENT_COUNT'] > 0: if self.schema._con.ods >= fdb.ODS_FB_21: self.__segment_statistics = [r['RDB$STATISTICS'] for r in self.schema._select("""select RDB$STATISTICS from rdb$index_segments where rdb$index_name = ? 
order by rdb$field_position""", (self.name,))] else: self.__segment_statistics = [None for x in range(self._attributes['RDB$SEGMENT_COUNT'])] else: self.__segment_statistics = [] return self.__segment_statistics def _get_constraint(self): const_name = self.schema._get_constraint_indices().get(self.name) if const_name: return self.schema.get_constraint(const_name) else: return None #--- Properties table = LateBindingProperty(_get_table, doc="The :class:`Table` instance the index applies to.") id = LateBindingProperty(_get_id, doc="Internal number ID of the index.") index_type = LateBindingProperty(_get_index_type, doc="ASCENDING or DESCENDING.") partner_index = LateBindingProperty(_get_partner_index, doc="Associated unique/primary key :class:`Index` instance, or None.") expression = LateBindingProperty(_get_expression, doc="Source of an expression or None.") statistics = LateBindingProperty(_get_statistics, doc="Latest selectivity of the index.") segment_names = LateBindingProperty(_get_segment_names, doc="List of index segment names.") segment_statistics = LateBindingProperty(_get_segment_statistics, doc="List of index segment statistics (for ODS 11.1 and higher).") segments = LateBindingProperty(_get_segments, doc=":class:`~fdb.utils.ObjectList` of index segments as :class:`TableColumn` instances.") constraint = LateBindingProperty(_get_constraint, doc=":class:`Constraint` instance that uses this index or None.") #--- Public def issystemobject(self): "Returns True if this database object is system object." return bool(self._attributes['RDB$SYSTEM_FLAG'] or (self.isenforcer() and self.name.startswith('RDB$'))) def isexpression(self): "Returns True if index is expression index." return not self.segments def isunique(self): "Returns True if index is UNIQUE." return self._attributes['RDB$UNIQUE_FLAG'] == 1 def isinactive(self): "Returns True if index is INACTIVE." return self._attributes['RDB$INDEX_INACTIVE'] == 1 def isenforcer(self): "Returns True if index is used to enforce a constraint." return self.name in self.schema._get_constraint_indices() class ViewColumn(BaseSchemaItem): """Represents view column. 
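
    Usage sketch (the view name is illustrative; assumes a bound
    :class:`Schema` ``schema``)::

        view = schema.get_view('PHONE_LIST')
        for col in view.columns:
            print(col.name, col.datatype, col.base_field)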
Supported SQL actions: comment
    """
    def __init__(self, schema, view, attributes):
        super(ViewColumn, self).__init__(schema, attributes)
        self._type_code = [3, 9]
        self.__view = weakref.proxy(view)
        self._strip_attribute('RDB$FIELD_NAME')
        self._strip_attribute('RDB$BASE_FIELD')
        self._strip_attribute('RDB$RELATION_NAME')
        self._strip_attribute('RDB$FIELD_SOURCE')
        self._strip_attribute('RDB$SECURITY_CLASS')
        self._strip_attribute('BASE_RELATION')
        self._actions = ['comment']

    #--- Protected

    def _get_comment_sql(self, **params):
        return 'COMMENT ON COLUMN %s.%s IS %s' % (self.view.get_quoted_name(),
                                                  self.get_quoted_name(),
                                                  'NULL' if self.description is None
                                                  else "'%s'" % escape_single_quotes(self.description))
    def _get_name(self):
        return self._attributes['RDB$FIELD_NAME']
    def _get_base_field(self):
        bfield = self._attributes['RDB$BASE_FIELD']
        if bfield:
            brel = self._attributes['BASE_RELATION']
            relation = self.schema.get_table(brel)
            if relation:
                return relation.get_column(bfield)
            relation = self.schema.get_view(brel)
            if relation:
                return relation.get_column(bfield)
            relation = self.schema.get_procedure(brel)
            if relation:
                return relation.get_outparam(bfield)
            raise fdb.OperationalError("Can't locate base relation.")
        return None
    def _get_view(self):
        return self.__view
    def _get_domain(self):
        return self.schema.get_domain(self._attributes['RDB$FIELD_SOURCE'])
    def _get_position(self):
        return self._attributes['RDB$FIELD_POSITION']
    def _get_security_class(self):
        return self._attributes['RDB$SECURITY_CLASS']
    def _get_collation(self):
        return self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'],
                                               self._attributes['RDB$COLLATION_ID'])
    def _get_datatype(self):
        return self.domain.datatype
    def _get_privileges(self):
        # Views are logged as Tables in RDB$USER_PRIVILEGES
        return self.schema.privileges.filter(lambda p: (p.subject_name == self.view.name and
                                                        p.field_name == self.name and
                                                        p.subject_type == 0))

    #--- Properties

    base_field = LateBindingProperty(_get_base_field,
                                     doc="The source column from the base relation. Result could be either "
                                         ":class:`TableColumn`, :class:`ViewColumn` or :class:`ProcedureParameter` "
                                         "instance or None.")
    view = LateBindingProperty(_get_view, doc="View object this column belongs to.")
    domain = LateBindingProperty(_get_domain, doc="Domain object this column is based on.")
    position = LateBindingProperty(_get_position, doc="Column's sequence number in row.")
    security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.")
    collation = LateBindingProperty(_get_collation, doc="Collation object or None.")
    datatype = LateBindingProperty(_get_datatype, doc="Complete SQL datatype definition.")
    privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of :class:`Privilege` objects granted to this object.")

    #--- Public

    def get_dependents(self):
        "Return list of all database objects that depend on this one."
        return [d for d in self.schema.dependencies
                if d.depended_on_name == self._attributes['RDB$RELATION_NAME'] and
                d.depended_on_type == 1 and d.field_name == self.name]
    def get_dependencies(self):
        "Return list of database objects that this object depends on."
        return [d for d in self.schema.dependencies
                if d.dependent_name == self._attributes['RDB$RELATION_NAME'] and
                d.dependent_type == 1 and d.field_name == self.name]
    def isnullable(self):
        "Returns True if column is NULLABLE."
        return not self._attributes['RDB$NULL_FLAG']
    def iswritable(self):
        "Returns True if column is writable."
        return bool(self._attributes['RDB$UPDATE_FLAG'])


class Domain(BaseSchemaItem):
    """Represents SQL domain.

    Supported SQL actions:

    - User domain: create, alter(name=string,default=string_definition_or_None,
      check=string_definition_or_None,datatype=string_SQLTypeDef), drop, comment
    - System domain: comment
    """
    def __init__(self, schema, attributes):
        super(Domain, self).__init__(schema, attributes)
        self._type_code = [9]
        self._strip_attribute('RDB$FIELD_NAME')
        self._strip_attribute('RDB$SECURITY_CLASS')
        self._strip_attribute('RDB$OWNER_NAME')
        self._actions = ['comment']
        if not self.issystemobject():
            self._actions.extend(['create', 'alter', 'drop'])

    #--- Protected

    def _get_create_sql(self, **params):
        self._check_params(params, [])
        sql = 'CREATE DOMAIN %s AS %s' % (self.get_quoted_name(), self.datatype)
        if self.has_default():
            sql += ' DEFAULT %s' % self.default
        if not self.isnullable():
            sql += ' NOT NULL'
        if self.isvalidated():
            sql += ' ' + self.validation
        if self._attributes['RDB$COLLATION_ID']:
            # Add COLLATE only when it differs from the character set default
            if self.character_set._attributes['RDB$DEFAULT_COLLATE_NAME'] != self.collation.name:
                sql += ' COLLATE %s' % self.collation.get_quoted_name()
        return sql
    def _get_alter_sql(self, **params):
        self._check_params(params, ['name', 'default', 'check', 'datatype'])
        new_name = params.get('name')
        new_default = params.get('default', '')
        new_constraint = params.get('check', '')
        new_type = params.get('datatype')
        sql = 'ALTER DOMAIN %s' % self.get_quoted_name()
        if len(params) > 1:
            raise fdb.ProgrammingError("Only one parameter allowed.")
        if new_name:
            return '%s TO %s' % (sql, self._get_quoted_ident(new_name))
        elif new_default != '':
            return ('%s SET DEFAULT %s' % (sql, new_default) if new_default
                    else '%s DROP DEFAULT' % sql)
        elif new_constraint != '':
            return ('%s ADD CHECK (%s)' % (sql, new_constraint) if new_constraint
                    else '%s DROP CONSTRAINT' % sql)
        elif new_type:
            return '%s TYPE %s' % (sql, new_type)
        else:
            raise fdb.ProgrammingError("Parameter required.")
    def _get_drop_sql(self, **params):
        self._check_params(params, [])
        return 'DROP DOMAIN %s' % self.get_quoted_name()
    def _get_comment_sql(self, **params):
        return 'COMMENT ON DOMAIN %s IS %s' % (self.get_quoted_name(),
                                               'NULL' if self.description is None
                                               else "'%s'" % escape_single_quotes(self.description))
    def _get_name(self):
        return self._attributes['RDB$FIELD_NAME']
    def _get_expression(self):
        return self._attributes['RDB$COMPUTED_SOURCE']
    def _get_validation(self):
        return self._attributes['RDB$VALIDATION_SOURCE']
    def _get_default(self):
        result = self._attributes.get('RDB$DEFAULT_SOURCE')
        if result:
            if result.upper().startswith('DEFAULT '):
                result = result[8:]
        return result
    def _get_length(self):
        return self._attributes['RDB$FIELD_LENGTH']
    def _get_scale(self):
        return self._attributes['RDB$FIELD_SCALE']
    def _get_field_type(self):
        return self._attributes['RDB$FIELD_TYPE']
    def _get_sub_type(self):
        return self._attributes['RDB$FIELD_SUB_TYPE']
    def _get_segment_length(self):
        return self._attributes['RDB$SEGMENT_LENGTH']
    def _get_external_length(self):
        return self._attributes['RDB$EXTERNAL_LENGTH']
    def _get_external_scale(self):
        return self._attributes['RDB$EXTERNAL_SCALE']
    def _get_external_type(self):
        return self._attributes['RDB$EXTERNAL_TYPE']
    def _get_dimensions(self):
        if self._attributes['RDB$DIMENSIONS']:
            return self.schema._get_field_dimensions(self)
        else:
            return []
    def _get_character_length(self):
        return self._attributes['RDB$CHARACTER_LENGTH']
    def _get_collation(self): return
self.schema.get_collation_by_id(self._attributes['RDB$CHARACTER_SET_ID'], self._attributes['RDB$COLLATION_ID']) def _get_character_set(self): return self.schema.get_character_set_by_id(self._attributes['RDB$CHARACTER_SET_ID']) def _get_precision(self): return self._attributes['RDB$FIELD_PRECISION'] def _get_datatype(self): l = [] precision_known = False if self.field_type in (FBT_SMALLINT, FBT_INTEGER, FBT_BIGINT): if self.precision != None: if (self.sub_type > 0) and (self.sub_type <= MAX_INTSUBTYPES): l.append('%s(%d, %d)' % \ (INTEGRAL_SUBTYPES[self.sub_type], self.precision, -self.scale)) precision_known = True if not precision_known: if (self.field_type == FBT_SMALLINT) and (self.scale < 0): l.append('NUMERIC(4, %d)' % -self.scale) elif (self.field_type == FBT_INTEGER) and (self.scale < 0): l.append('NUMERIC(9, %d)' % -self.scale) elif (self.field_type == FBT_DOUBLE_PRECISION) and (self.scale < 0): l.append('NUMERIC(15, %d)' % -self.scale) else: l.append(COLUMN_TYPES[self.field_type]) if self.field_type in (FBT_CHAR, FBT_VARCHAR): l.append('(%d)' % (self.length if self.character_length is None else self.character_length)) if self._attributes['RDB$DIMENSIONS'] != None: l.append('[%s]' % ', '.join('%d' % u if l == 1 else '%d:%d' % (l, u) for l, u in self.dimensions)) if self.field_type == FBT_BLOB: if self.sub_type >= 0 and self.sub_type <= MAX_BLOBSUBTYPES: l.append(' SUB_TYPE %s' % BLOB_SUBTYPES[self.sub_type]) else: l.append(' SUB_TYPE %d' % self.sub_type) l.append(' SEGMENT SIZE %d' % self.segment_length) if self.field_type in (FBT_CHAR, FBT_VARCHAR, FBT_BLOB): if self._attributes['RDB$CHARACTER_SET_ID'] is not None and \ (self.character_set.name != self.schema.default_character_set.name) or \ self._attributes['RDB$COLLATION_ID']: if self._attributes['RDB$CHARACTER_SET_ID'] is not None: l.append(' CHARACTER SET %s' % self.character_set.name) return ''.join(l) def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') #--- Properties expression = LateBindingProperty(_get_expression, doc="Expression that defines the COMPUTED BY column or None.") validation = LateBindingProperty(_get_validation, doc="CHECK constraint for the domain or None.") default = LateBindingProperty(_get_default, doc="Expression that defines the default value or None.") length = LateBindingProperty(_get_length, doc="Length of the column in bytes.") scale = LateBindingProperty(_get_scale, doc="Negative number representing the scale of NUMBER and DECIMAL column.") field_type = LateBindingProperty(_get_field_type, doc="Number code of the data type defined for the column.") sub_type = LateBindingProperty(_get_sub_type, doc="BLOB subtype.") segment_length = LateBindingProperty(_get_segment_length, doc="For BLOB columns, a suggested length for BLOB buffers.") external_length = LateBindingProperty(_get_external_length, doc="Length of field as it is in an external table. Always 0 for regular tables.") external_scale = LateBindingProperty(_get_external_scale, doc="Scale factor of an integer field as it is in an external table.") external_type = LateBindingProperty(_get_external_type, doc="Data type of the field as it is in an external table.") dimensions = LateBindingProperty(_get_dimensions, doc="List of dimension definition pairs if column is an array type. 
Always empty for non-array columns.")
    character_length = LateBindingProperty(_get_character_length, doc="Length of CHAR and VARCHAR column, in characters (not bytes).")
    collation = LateBindingProperty(_get_collation, doc="Collation object for a character column or None.")
    character_set = LateBindingProperty(_get_character_set, doc="CharacterSet object for a character or text BLOB column, or None.")
    precision = LateBindingProperty(_get_precision, doc="Indicates the number of digits of precision available to the data type of the column.")
    datatype = LateBindingProperty(_get_datatype, doc="Complete SQL datatype definition.")
    # FB 3.0
    security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.")
    owner_name = LateBindingProperty(_get_owner_name, doc="Creator user name.")

    #--- Public

    def issystemobject(self):
        "Return True if this database object is system object."
        return (self._attributes['RDB$SYSTEM_FLAG'] == 1) or self.name.startswith('RDB$')
    def isnullable(self):
        "Returns True if domain is not defined with NOT NULL."
        return not self._attributes['RDB$NULL_FLAG']
    def iscomputed(self):
        "Returns True if domain is computed."
        return bool(self._attributes['RDB$COMPUTED_SOURCE'])
    def isvalidated(self):
        "Returns True if domain has validation constraint."
        return bool(self._attributes['RDB$VALIDATION_SOURCE'])
    def isarray(self):
        "Returns True if domain defines an array."
        return bool(self._attributes['RDB$DIMENSIONS'])
    def has_default(self):
        "Returns True if domain has default value."
        return bool(self._attributes['RDB$DEFAULT_SOURCE'])


class Dependency(BaseSchemaItem):
    """Maps dependency between database objects.

    Supported SQL actions: none
    """
    def __init__(self, schema, attributes):
        super(Dependency, self).__init__(schema, attributes)
        self._strip_attribute('RDB$DEPENDENT_NAME')
        self._strip_attribute('RDB$DEPENDED_ON_NAME')
        self._strip_attribute('RDB$FIELD_NAME')
        self._strip_attribute('RDB$PACKAGE_NAME')

    #--- Protected

    def _get_dependent_name(self):
        return self._attributes['RDB$DEPENDENT_NAME']
    def _get_dependent_type(self):
        return self._attributes['RDB$DEPENDENT_TYPE']
    def _get_field_name(self):
        return self._attributes['RDB$FIELD_NAME']
    def _get_depended_on_name(self):
        return self._attributes['RDB$DEPENDED_ON_NAME']
    def _get_depended_on_type(self):
        return self._attributes['RDB$DEPENDED_ON_TYPE']
    def _get_dependent(self):
        if self.dependent_type == 0: # TABLE
            return self.schema.get_table(self.dependent_name)
        elif self.dependent_type == 1: # VIEW
            return self.schema.get_view(self.dependent_name)
        elif self.dependent_type == 2: # TRIGGER
            return self.schema.get_trigger(self.dependent_name)
        elif self.dependent_type == 3: # COMPUTED FIELD (i.e. DOMAIN)
            return self.schema.get_domain(self.dependent_name)
        elif self.dependent_type == 4:
            ## ToDo: Implement handler for VALIDATION if necessary
            return None
        elif self.dependent_type == 5: # PROCEDURE
            return self.schema.get_procedure(self.dependent_name)
        elif self.dependent_type == 6: # EXPRESSION INDEX
            return self.schema.get_index(self.dependent_name)
        elif self.dependent_type == 7: # EXCEPTION
            return self.schema.get_exception(self.dependent_name)
        elif self.dependent_type == 8:
            ## ToDo: Implement handler for USER if necessary
            return None
        elif self.dependent_type == 9: # FIELD (i.e.
DOMAIN) return self.schema.get_domain(self.dependent_name) elif self.dependent_type == 10: # INDEX return self.schema.get_index(self.dependent_name) elif self.dependent_type == 11: ## ToDo: Implement handler for DEPENDENT COUNT if necessary return None elif self.dependent_type == 12: ## ToDo: Implement handler for USER GROUP if necessary return None elif self.dependent_type == 13: # ROLE return self.schema.get_role(self.dependent_name) elif self.dependent_type == 14: # GENERATOR return self.schema.get_generator(self.dependent_name) elif self.dependent_type == 15: # UDF return self.schema.get_function(self.dependent_name) elif self.dependent_type == 16: ## ToDo: Implement handler for BLOB_FILTER return None elif self.dependent_type == 17: # Collation return self.schema.get_collation(self.dependent_name) elif self.dependent_type in [18, 19]: # Package + package body return self.schema.get_package(self.dependent_name) return None def _get_depended_on(self): if self.depended_on_type == 0: # TABLE t = self.schema.get_table(self.depended_on_name) if self.field_name: return t.get_column(self.field_name) else: return t elif self.depended_on_type == 1: # VIEW t = self.schema.get_view(self.depended_on_name) if self.field_name: return t.get_column(self.field_name) else: return t elif self.depended_on_type == 2: # TRIGGER return self.schema.get_trigger(self.depended_on_name) elif self.depended_on_type == 3: # COMPUTED FIELD (i.e. DOMAIN) return self.schema.get_domain(self.depended_on_name) elif self.depended_on_type == 4: ## ToDo: Implement handler for VALIDATION if necessary return None elif self.depended_on_type == 5: #PROCEDURE return self.schema.get_procedure(self.depended_on_name) elif self.depended_on_type == 6: # EXPRESSION INDEX return self.schema.get_index(self.depended_on_name) elif self.depended_on_type == 7: # EXCEPTION return self.schema.get_exception(self.depended_on_name) elif self.depended_on_type == 8: ## ToDo: Implement handler for USER if necessary return None elif self.depended_on_type == 9: # FIELD (i.e. 
DOMAIN) return self.schema.get_domain(self.depended_on_name) elif self.depended_on_type == 10: # INDEX return self.schema.get_index(self.depended_on_name) elif self.depended_on_type == 11: ## ToDo: Implement handler for DEPENDENT COUNT if necessary return None elif self.depended_on_type == 12: ## ToDo: Implement handler for USER GROUP if necessary return None elif self.depended_on_type == 13: # ROLE return self.schema.get_role(self.depended_on_name) elif self.depended_on_type == 14: # GENERATOR return self.schema.get_generator(self.depended_on_name) elif self.depended_on_type == 15: # UDF return self.schema.get_function(self.depended_on_name) elif self.depended_on_type == 16: ## ToDo: Implement handler for BLOB_FILTER return None return None def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) #--- Properties dependent = LateBindingProperty(_get_dependent, doc="Dependent database object.") dependent_name = LateBindingProperty(_get_dependent_name, doc="Dependent database object name.") dependent_type = LateBindingProperty(_get_dependent_type, doc="Dependent database object type.") field_name = LateBindingProperty(_get_field_name, doc="Name of one column in `depended on` object.") depended_on = LateBindingProperty(_get_depended_on, doc="Database object on which dependent depends.") depended_on_name = LateBindingProperty(_get_depended_on_name, doc="Name of db object on which dependent depends.") depended_on_type = LateBindingProperty(_get_depended_on_type, doc="Type of db object on which dependent depends.") # FB 3.0 package = LateBindingProperty(_get_package, doc=":class:`Package` instance if dependent depends on object in package or None.") #--- Public def issystemobject(self): "Returns True as dependency entries are considered as system objects." return True def get_dependents(self): "Returns empty list because Dependency object never has dependents." return [] def get_dependencies(self): "Returns empty list because Dependency object never has dependencies." return [] def ispackaged(self): "Returns True if dependency is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Constraint(BaseSchemaItem): """Represents table or column constraint. 
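
    Usage sketch (illustrative; assumes a bound :class:`Schema` ``schema``)::

        for con in schema.constraints:
            if con.isfkey() and not con.issystemobject():
                print(con.get_sql_for('create'))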
Supported SQL actions: - Constraint on user table except NOT NULL constraint: create, drop - Constraint on system table: none """ def __init__(self, schema, attributes): super(Constraint, self).__init__(schema, attributes) self._strip_attribute('RDB$CONSTRAINT_NAME') self._strip_attribute('RDB$CONSTRAINT_TYPE') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$DEFERRABLE') self._strip_attribute('RDB$INITIALLY_DEFERRED') self._strip_attribute('RDB$INDEX_NAME') self._strip_attribute('RDB$TRIGGER_NAME') self._strip_attribute('RDB$CONST_NAME_UQ') self._strip_attribute('RDB$MATCH_OPTION') self._strip_attribute('RDB$UPDATE_RULE') self._strip_attribute('RDB$DELETE_RULE') if not (self.issystemobject() or self.isnotnull()): self._actions = ['create', 'drop'] #--- Protected def _get_create_sql(self, **params): self._check_params(params, []) const_def = 'ALTER TABLE %s ADD ' % self.table.get_quoted_name() if not self.name.startswith('INTEG_'): const_def += 'CONSTRAINT %s\n ' % self.get_quoted_name() if self.ischeck(): const_def += self.triggers[0].source elif self.ispkey() or self.isunique(): const_def += 'PRIMARY KEY' if self.ispkey() else 'UNIQUE' i = self.index const_def += ' (%s)' % ','.join(i.segment_names) if not i.issystemobject(): const_def += '\n USING %s INDEX %s' % (i.index_type, i.get_quoted_name()) elif self.isfkey(): const_def += 'FOREIGN KEY (%s)\n ' % ','.join(self.index.segment_names) p = self.partner_constraint const_def += 'REFERENCES %s (%s)' % (p.table.get_quoted_name(), ','.join(p.index.segment_names)) if self.delete_rule != 'RESTRICT': const_def += '\n ON DELETE %s' % self.delete_rule if self.update_rule != 'RESTRICT': const_def += '\n ON UPDATE %s' % self.update_rule i = self.index if not i.issystemobject(): const_def += '\n USING %s INDEX %s' % (i.index_type, i.get_quoted_name()) else: raise fdb.OperationalError("Unrecognized constraint type '%s'" % self.constraint_type) return const_def def _get_drop_sql(self, **params): self._check_params(params, []) return 'ALTER TABLE %s DROP CONSTRAINT %s' % (self.table.get_quoted_name(), self.get_quoted_name()) def _get_name(self): return self._attributes['RDB$CONSTRAINT_NAME'] def _get_constraint_type(self): return self._attributes['RDB$CONSTRAINT_TYPE'] def _get_table(self): return self.schema.get_table(self._attributes['RDB$RELATION_NAME']) def _get_index(self): return self.schema.get_index(self._attributes['RDB$INDEX_NAME']) def _get_trigger_names(self): if self.ischeck(): return self._attributes['RDB$TRIGGER_NAME'] else: return [] def _get_triggers(self): return [self.schema.get_trigger(tname) for tname in self.trigger_names] def _get_column_name(self): if self.isnotnull(): return self._attributes['RDB$TRIGGER_NAME'] else: return None def _get_partner_constraint(self): return self.schema.get_constraint(self._attributes['RDB$CONST_NAME_UQ']) def _get_match_option(self): return self._attributes['RDB$MATCH_OPTION'] def _get_update_rule(self): return self._attributes['RDB$UPDATE_RULE'] def _get_delete_rule(self): return self._attributes['RDB$DELETE_RULE'] #--- Properties constraint_type = LateBindingProperty(_get_constraint_type, doc="primary key/unique/foreign key/check/not null.") table = LateBindingProperty(_get_table, doc=":class:`Table` instance this constraint applies to.") index = LateBindingProperty(_get_index, doc=":class:`Index` instance that enforces the constraint.\n`None` if constraint is not primary key/unique or foreign key.") trigger_names = LateBindingProperty(_get_trigger_names, doc="For a CHECK 
constraint contains trigger names that enforce the constraint.") triggers = LateBindingProperty(_get_triggers, doc="For a CHECK constraint contains :class:`Trigger` instances that enforce the constraint.") column_name = LateBindingProperty(_get_column_name, doc="For a NOT NULL constraint, this is the name of the column to which the constraint applies.") partner_constraint = LateBindingProperty(_get_partner_constraint, doc="For a FOREIGN KEY constraint, this is the unique or primary key :class:`Constraint` referred.") match_option = LateBindingProperty(_get_match_option, doc="For a FOREIGN KEY constraint only. Current value is FULL in all cases.") update_rule = LateBindingProperty(_get_update_rule, doc="For a FOREIGN KEY constraint, this is the action applicable to when primary key is updated.") delete_rule = LateBindingProperty(_get_delete_rule, doc="For a FOREIGN KEY constraint, this is the action applicable to when primary key is deleted.") #--- Public def issystemobject(self): "Returns True if this database object is system object." return self.schema.get_table(self._attributes['RDB$RELATION_NAME']).issystemobject() def isnotnull(self): "Returns True if it's NOT NULL constraint." return self.constraint_type == 'NOT NULL' def ispkey(self): "Returns True if it's PRIMARY KEY constraint." return self.constraint_type == 'PRIMARY KEY' def isfkey(self): "Returns True if it's FOREIGN KEY constraint." return self.constraint_type == 'FOREIGN KEY' def isunique(self): "Returns True if it's UNIQUE constraint." return self.constraint_type == 'UNIQUE' def ischeck(self): "Returns True if it's CHECK constraint." return self.constraint_type == 'CHECK' def isdeferrable(self): "Returns True if it's DEFERRABLE constraint." return self._attributes['RDB$DEFERRABLE'] != 'NO' def isdeferred(self): "Returns True if it's INITIALLY DEFERRED constraint." return self._attributes['RDB$INITIALLY_DEFERRED'] != 'NO' class Table(BaseSchemaItem): """Represents Table in database. 
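
    Usage sketch (the table name is illustrative; assumes a bound
    :class:`Schema` ``schema``)::

        table = schema.get_table('JOB')
        print(table.get_sql_for('create', no_pk=True))
        for index in table.indices:
            print(index.name)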
Supported SQL actions: - User table: create (no_pk=bool,no_unique=bool), recreate (no_pk=bool,no_unique=bool), drop, comment - System table: comment """ def __init__(self, schema, attributes): super(Table, self).__init__(schema, attributes) self._type_code = [0,] self.__columns = None self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$DEFAULT_CLASS') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'recreate', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, ['no_pk', 'no_unique']) no_pk = params.get('no_pk', False) no_unique = params.get('no_unique', False) # tabdef = 'CREATE %sTABLE %s' % ('GLOBAL TEMPORARY ' if self.isgtt() else '', self.get_quoted_name()) if self.isexternal(): tabdef += " EXTERNAL FILE '%s'\n" % self.external_file tabdef += ' (' partdefs = [] for col in self.columns: coldef = '\n %s ' % col.get_quoted_name() collate = '' if col.isdomainbased(): coldef += '%s' % col.domain.get_quoted_name() elif col.iscomputed(): coldef += 'COMPUTED BY %s' % col.get_computedby() else: datatype = col.datatype if datatype.rfind(' COLLATE ') > 0: datatype, collate = datatype.split(' COLLATE ') coldef += '%s' % datatype if col.isidentity(): coldef += ' GENERATED BY DEFAULT AS IDENTITY' if col.generator.inital_value != 0: coldef += ' (START WITH %d)' % col.generator.inital_value else: if col.has_default(): coldef += ' DEFAULT %s' % col.default if not col.isnullable(): coldef += ' NOT NULL' if col._attributes['RDB$COLLATION_ID'] is not None: # Sometimes RDB$COLLATION_ID has a garbage value if col.collation is not None: cname = col.collation.name if col.domain.character_set._attributes['RDB$DEFAULT_COLLATE_NAME'] != cname: collate = cname if collate: coldef += ' COLLATE %s' % collate partdefs.append(coldef) if self.has_pkey() and not no_pk: pk = self.primary_key pkdef = '\n ' if not pk.name.startswith('INTEG_'): pkdef += 'CONSTRAINT %s\n ' % pk.get_quoted_name() i = pk.index pkdef += 'PRIMARY KEY (%s)' % ','.join(i.segment_names) if not i.issystemobject(): pkdef += '\n USING %s INDEX %s' % (i.index_type, i.get_quoted_name()) partdefs.append(pkdef) if not no_unique: for uq in self.constraints: if uq.isunique(): uqdef = '\n ' if not uq.name.startswith('INTEG_'): uqdef += 'CONSTRAINT %s\n ' % uq.get_quoted_name() i = uq.index uqdef += 'UNIQUE (%s)' % ','.join(i.segment_names) if not i.issystemobject(): uqdef += '\n USING %s INDEX %s' % (i.index_type, i.get_quoted_name()) partdefs.append(uqdef) tabdef += ','.join(partdefs) tabdef += '\n)' return tabdef def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP TABLE %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON TABLE %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$RELATION_NAME'] def _get_id(self): return self._attributes['RDB$RELATION_ID'] def _get_dbkey_length(self): return self._attributes['RDB$DBKEY_LENGTH'] def _get_format(self): return self._attributes['RDB$FORMAT'] def _get_table_type(self): return self.schema.enum_relation_types.get(self._attributes.get('RDB$RELATION_TYPE'), 'PERSISTENT') def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_external_file(self): return self._attributes['RDB$EXTERNAL_FILE']
def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_default_class(self): return self._attributes['RDB$DEFAULT_CLASS'] def _get_flags(self): return self._attributes['RDB$FLAGS'] def _get_indices(self): return self.schema._get_all_indices().filter(lambda i: i._attributes['RDB$RELATION_NAME'] == self.name) def _get_triggers(self): return self.schema.triggers.filter(lambda t: t._attributes['RDB$RELATION_NAME'] == self.name) def _get_constraints(self): return self.schema.constraints.filter(lambda c: c._attributes['RDB$RELATION_NAME'] == self.name) def _get_columns(self): if self.__columns is None: cols = ['RDB$FIELD_NAME', 'RDB$RELATION_NAME', 'RDB$FIELD_SOURCE', 'RDB$FIELD_POSITION', 'RDB$UPDATE_FLAG', 'RDB$FIELD_ID', 'RDB$DESCRIPTION', 'RDB$SECURITY_CLASS', 'RDB$SYSTEM_FLAG', 'RDB$NULL_FLAG', 'RDB$DEFAULT_SOURCE', 'RDB$COLLATION_ID'] if self.schema._con.ods >= fdb.ODS_FB_30: cols.extend(['RDB$GENERATOR_NAME', 'RDB$IDENTITY_TYPE']) self.__columns = ObjectList((TableColumn(self.schema, self, row) for row in self.schema._select("""select %s from RDB$RELATION_FIELDS where RDB$RELATION_NAME = ? order by RDB$FIELD_POSITION""" % ','.join(cols), (self.name,))), TableColumn, 'item.name') self.__columns.freeze() return self.__columns def _get_primary_key(self): for const in self.constraints: if const.ispkey(): return const return None def _get_foreign_keys(self): return self.constraints.filter(lambda c: c.isfkey()) def _get_privileges(self): return self.schema.privileges.filter(lambda p: ((p.subject_name == self.name) and (p.subject_type in self._type_code))) #--- Properties id = LateBindingProperty(_get_id, doc="Internal number ID for the table.") dbkey_length = LateBindingProperty(_get_dbkey_length, doc="Length of the RDB$DB_KEY column in bytes.") format = LateBindingProperty(_get_format, doc="Internal format ID for the table.") table_type = LateBindingProperty(_get_table_type, doc="Table type.") security_class = LateBindingProperty(_get_security_class, doc="Security class that defines access limits to the table.") external_file = LateBindingProperty(_get_external_file, doc="Full path to the external data file, if any.") owner_name = LateBindingProperty(_get_owner_name, doc="User name of table's creator.") default_class = LateBindingProperty(_get_default_class, doc="Default security class.") flags = LateBindingProperty(_get_flags, doc="Internal flags.") primary_key = LateBindingProperty(_get_primary_key, doc="PRIMARY KEY :class:`Constraint` for this table or None.") foreign_keys = LateBindingProperty(_get_foreign_keys, doc=":class:`~fdb.utils.ObjectList` of FOREIGN KEY :class:`Constraint` instances for this table.") columns = LateBindingProperty(_get_columns, doc="Returns :class:`~fdb.utils.ObjectList` of columns defined for table.\nItems are :class:`TableColumn` objects.") constraints = LateBindingProperty(_get_constraints, doc="Returns :class:`~fdb.utils.ObjectList` of constraints defined for table.\nItems are :class:`Constraint` objects.") indices = LateBindingProperty(_get_indices, doc="Returns :class:`~fdb.utils.ObjectList` of indices defined for table.\nItems are :class:`Index` objects.") triggers = LateBindingProperty(_get_triggers, doc="Returns :class:`~fdb.utils.ObjectList` of triggers defined for table.\nItems are :class:`Trigger` objects.") privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of :class:`Privilege` objects granted to this object.") #--- Public def get_column(self, name): "Return :class:`TableColumn` object with
specified name." return self.columns.get(name) #for col in self.columns: #if col.name == name: #return col #return None def isgtt(self): "Returns True if table is GLOBAL TEMPORARY table." return self.table_type.startswith('GLOBAL_TEMPORARY') def ispersistent(self): "Returns True if table is persistent one." return self.table_type in ['PERSISTENT', 'EXTERNAL'] def isexternal(self): "Returns True if table is external table." return bool(self.external_file) def has_pkey(self): "Returns True if table has PRIMARY KEY defined." for const in self.constraints: if const.ispkey(): return True return False def has_fkey(self): "Returns True if table has any FOREIGN KEY constraint." for const in self.constraints: if const.isfkey(): return True return False class View(BaseSchemaItem): """Represents database View. Supported SQL actions: - User views: create, recreate, alter(columns=string_or_list,query=string,check=bool), create_or_alter, drop, comment - System views: comment """ def __init__(self, schema, attributes): super(View, self).__init__(schema, attributes) self._type_code = [1,] self.__columns = None self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$VIEW_SOURCE') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$DEFAULT_CLASS') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'recreate', 'alter', 'create_or_alter', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, []) return "CREATE VIEW %s (%s)\n AS\n %s" % (self.get_quoted_name(), ','.join([col.get_quoted_name() for col in self.columns]), self.sql) def _get_alter_sql(self, **params): self._check_params(params, ['columns', 'query', 'check']) columns = params.get('columns') if isinstance(columns, (list, tuple)): columns = ','.join(columns) query = params.get('query') check = params.get('check', False) if query: return "ALTER VIEW %s %s\n AS\n %s" % (self.get_quoted_name(), '(%s)' % columns if columns else '', '%s\n WITH CHECK OPTION' % query if check else query) else: raise fdb.ProgrammingError("Missing required parameter: 'query'.") def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP VIEW %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON VIEW %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$RELATION_NAME'] def _get_sql(self): return self._attributes['RDB$VIEW_SOURCE'] def _get_id(self): return self._attributes['RDB$RELATION_ID'] def _get_dbkey_length(self): return self._attributes['RDB$DBKEY_LENGTH'] def _get_format(self): return self._attributes['RDB$FORMAT'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_default_class(self): return self._attributes['RDB$DEFAULT_CLASS'] def _get_flags(self): return self._attributes['RDB$FLAGS'] def _get_triggers(self): return self.schema.triggers.filter(lambda t: t._attributes['RDB$RELATION_NAME'] == self.name) def _get_columns(self): if self.__columns is None: self.__columns = ObjectList((ViewColumn(self.schema, self, row) for row in self.schema._select("""select r.RDB$FIELD_NAME, r.RDB$RELATION_NAME, r.RDB$FIELD_SOURCE, r.RDB$FIELD_POSITION, r.RDB$UPDATE_FLAG, r.RDB$FIELD_ID, r.RDB$DESCRIPTION, r.RDB$SYSTEM_FLAG, r.RDB$SECURITY_CLASS, r.RDB$NULL_FLAG, 
r.RDB$DEFAULT_SOURCE, r.RDB$COLLATION_ID, r.RDB$BASE_FIELD, v.RDB$RELATION_NAME as BASE_RELATION from RDB$RELATION_FIELDS r left join RDB$VIEW_RELATIONS v on r.RDB$VIEW_CONTEXT = v.RDB$VIEW_CONTEXT and v.rdb$view_name = ? where r.RDB$RELATION_NAME = ? order by RDB$FIELD_POSITION""", (self.name, self.name))), ViewColumn, 'item.name') self.__columns.freeze() return self.__columns def _get_privileges(self): return self.schema.privileges.filter(lambda p: ((p.subject_name == self.name) and (p.subject_type == 0))) # Views are logged as Tables in RDB$USER_PRIVILEGES #--- Properties id = LateBindingProperty(_get_id, doc="Internal number ID for the view.") sql = LateBindingProperty(_get_sql, doc="The query specification.") dbkey_length = LateBindingProperty(_get_dbkey_length, doc="Length of the RDB$DB_KEY column in bytes.") format = LateBindingProperty(_get_format, doc="Internal format ID for the view.") security_class = LateBindingProperty(_get_security_class, doc="Security class that defines access limits to the view.") owner_name = LateBindingProperty(_get_owner_name, doc="User name of view's creator.") default_class = LateBindingProperty(_get_default_class, doc="Default security class.") flags = LateBindingProperty(_get_flags, doc="Internal flags.") columns = LateBindingProperty(_get_columns, doc="Returns :class:`~fdb.utils.ObjectList` of columns defined for view.\nItems are :class:`ViewColumn` objects.") triggers = LateBindingProperty(_get_triggers, doc="Returns :class:`~fdb.utils.ObjectList` of triggers defined for view.\nItems are :class:`Trigger` objects.") privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of :class:`Privilege` objects granted to this object.") #--- Public def get_column(self, name): "Return :class:`ViewColumn` object with specified name." for col in self.columns: if col.name == name: return col return None def get_trigger(self, name): "Return :class:`Trigger` object with specified name." for t in self.triggers: if t.name == name: return t return None def has_checkoption(self): "Returns True if View has WITH CHECK OPTION defined." return "WITH CHECK OPTION" in self.sql.upper() class Trigger(BaseSchemaItem): """Represents trigger.
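
    Quick usage sketch (illustrative only, not part of the original docs;
    assumes an established connection ``con``)::

        for trig in con.schema.triggers:
            if trig.isdbtrigger():
                print(trig.name, trig.get_type_as_string(),
                      'ACTIVE' if trig.isactive() else 'INACTIVE')
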
Supported SQL actions: - User trigger: create(inactive=bool), recreate, create_or_alter, drop, alter(fire_on=string,active=bool,sequence=int,declare=string_or_list, code=string_or_list), comment - System trigger: comment """ def __init__(self, schema, attributes): super(Trigger, self).__init__(schema, attributes) self._type_code = [2,] self._strip_attribute('RDB$TRIGGER_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$ENGINE_NAME') self._strip_attribute('RDB$ENTRYPOINT') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'recreate', 'alter', 'create_or_alter', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, ['inactive']) inactive = params.get('inactive', False) result = 'CREATE TRIGGER %s' % self.get_quoted_name() if self._attributes['RDB$RELATION_NAME']: result += ' FOR %s' % self.relation.get_quoted_name() result += ' %s\n%s POSITION %d\n%s' % ('ACTIVE' if self.isactive() and not inactive else 'INACTIVE', self.get_type_as_string(), self.sequence, self.source) return result def _get_alter_sql(self, **params): self._check_params(params, ['fire_on', 'active', 'sequence', 'declare', 'code']) action = params.get('fire_on') active = params.get('active') sequence = params.get('sequence') declare = params.get('declare') code = params.get('code') # header = '' if active is not None: header += ' ACTIVE' if active else ' INACTIVE' if action is not None: dbaction = action.upper().startswith('ON ') if (dbaction and not self.isdbtrigger()) or (not dbaction and self.isdbtrigger()): raise fdb.ProgrammingError("Trigger type change is not allowed.") header += '\n %s' % action if sequence is not None: header += '\n POSITION %d' % sequence # if code is not None: if declare is None: d = '' elif isinstance(declare, (list, tuple)): d = '' for x in declare: d += ' %s\n' % x else: d = '%s\n' % declare if isinstance(code, (list, tuple)): c = '' for x in code: c += ' %s\n' % x else: c = '%s\n' % code body = '\nAS\n%sBEGIN\n%sEND' % (d, c) else: body = '' # if not (header or body): raise fdb.ProgrammingError("Header or body definition required.") return 'ALTER TRIGGER %s%s%s' % (self.get_quoted_name(), header, body) def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP TRIGGER %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON TRIGGER %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_action_time(self): if self.isddltrigger(): return (self.trigger_type) & 1 else: return (self.trigger_type + 1) & 1 def _get_action_type(self, slot): if self.isddltrigger(): return (self.trigger_type & ~TRIGGER_TYPE_DDL) >> 1 else: return ((self.trigger_type + 1) >> (slot * 2 - 1)) & 3 def _get_name(self): return self._attributes['RDB$TRIGGER_NAME'] def _get_relation(self): relname = self._attributes['RDB$RELATION_NAME'] rel = self.schema.get_table(relname) if not rel: rel = self.schema.get_view(relname) return rel def _get_sequence(self): return self._attributes['RDB$TRIGGER_SEQUENCE'] def _get_trigger_type(self): return self._attributes['RDB$TRIGGER_TYPE'] def _get_source(self): return self._attributes['RDB$TRIGGER_SOURCE'] def _get_flags(self): return self._attributes['RDB$FLAGS'] def _get_valid_blr(self): result = self._attributes.get('RDB$VALID_BLR') return bool(result) if result is not None else None def _get_engine_name(self): return self._attributes.get('RDB$ENGINE_NAME') def 
_get_entrypoint(self): return self._attributes.get('RDB$ENTRYPOINT') def _istype(self, type_code): atype = self._get_action_type(1) if atype == type_code: return True atype = self._get_action_type(2) if atype and atype == type_code: return True atype = self._get_action_type(3) if atype and atype == type_code: return True return False #--- Properties relation = LateBindingProperty(_get_relation, doc=":class:`Table` or :class:`View` that the trigger is for, or None for database triggers.") sequence = LateBindingProperty(_get_sequence, doc="Sequence (position) of trigger. Zero usually means no sequence defined.") trigger_type = LateBindingProperty(_get_trigger_type, doc="Numeric code for trigger type that defines which event and phase are covered by the trigger.") source = LateBindingProperty(_get_source, doc="PSQL source code.") flags = LateBindingProperty(_get_flags, doc="Internal flags.") valid_blr = LateBindingProperty(_get_valid_blr, doc="Trigger BLR invalidation flag. Could be True/False or None.") # FB 3 engine_name = LateBindingProperty(_get_engine_name, doc="Engine name.") entrypoint = LateBindingProperty(_get_entrypoint, doc="Entrypoint.") #--- Public def isactive(self): "Returns True if this trigger is active." return self._attributes['RDB$TRIGGER_INACTIVE'] == 0 def isbefore(self): "Returns True if this trigger is set for BEFORE action." return self._get_action_time() == 0 def isafter(self): "Returns True if this trigger is set for AFTER action." return self._get_action_time() == 1 def isdbtrigger(self): "Returns True if this trigger is database trigger." return (self.trigger_type & TRIGGER_TYPE_MASK) == TRIGGER_TYPE_DB def isddltrigger(self): "Returns True if this trigger is DDL trigger." return (self.trigger_type & TRIGGER_TYPE_MASK) == TRIGGER_TYPE_DDL def isinsert(self): "Returns True if this trigger is set for INSERT operation." return self._istype(1) def isupdate(self): "Returns True if this trigger is set for UPDATE operation." return self._istype(2) def isdelete(self): "Returns True if this trigger is set for DELETE operation." return self._istype(3) def get_type_as_string(self): "Return string with action and operation specification." l = [] if self.isddltrigger(): l.append(TRIGGER_PREFIX_TYPES[self._get_action_time()]) code = self._get_action_type(1) l.append('ANY DDL STATEMENT' if code == DDL_TRIGGER_ANY else TRIGGER_DDL_TYPES[code]) elif self.isdbtrigger(): l.append('ON '+TRIGGER_DB_TYPES[self.trigger_type & ~TRIGGER_TYPE_DB]) else: l.append(TRIGGER_PREFIX_TYPES[self._get_action_time()]) l.append(TRIGGER_SUFFIX_TYPES[self._get_action_type(1)]) suffix = self._get_action_type(2) if suffix: l.append('OR') l.append(TRIGGER_SUFFIX_TYPES[suffix]) suffix = self._get_action_type(3) if suffix: l.append('OR') l.append(TRIGGER_SUFFIX_TYPES[suffix]) return ' '.join(l) class ProcedureParameter(BaseSchemaItem): """Represents procedure parameter.
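
    Quick usage sketch (illustrative only, not part of the original docs;
    GET_EMP_PROJ is a procedure from the Firebird example database)::

        proc = con.schema.get_procedure('GET_EMP_PROJ')
        for par in proc.input_params:
            print(par.name, par.datatype, par.isnullable())
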
Supported SQL actions: comment """ def __init__(self, schema, proc, attributes): super(ProcedureParameter, self).__init__(schema, attributes) self.__proc = proc self._strip_attribute('RDB$PARAMETER_NAME') self._strip_attribute('RDB$PROCEDURE_NAME') self._strip_attribute('RDB$FIELD_SOURCE') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$PACKAGE_NAME') self._actions = ['comment'] #--- Protected def _get_comment_sql(self, **params): return 'COMMENT ON PARAMETER %s.%s IS %s' % (self.procedure.get_quoted_name(), self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$PARAMETER_NAME'] def _get_procedure(self): return self.schema.get_procedure(self._attributes['RDB$PROCEDURE_NAME']) def _get_sequence(self): return self._attributes['RDB$PARAMETER_NUMBER'] def _get_domain(self): return self.schema.get_domain(self._attributes['RDB$FIELD_SOURCE']) def _get_datatype(self): return self.domain.datatype def _get_type_from(self): m = self.mechanism if m is None: return PROCPAR_DATATYPE elif m == 0: return PROCPAR_DATATYPE if self.domain.issystemobject() else PROCPAR_DOMAIN elif m == 1: if self._attributes.get('RDB$RELATION_NAME') is None: return PROCPAR_TYPE_OF_DOMAIN else: return PROCPAR_TYPE_OF_COLUMN else: raise fdb.InternalError("Unknown parameter mechanism code: %d" % m) def _get_default(self): result = self._attributes.get('RDB$DEFAULT_SOURCE') if result: if result.upper().startswith('= '): result = result[2:] elif result.upper().startswith('DEFAULT '): result = result[8:] return result def _get_collation(self): cid = self._attributes.get('RDB$COLLATION_ID') return (None if cid is None else self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'], cid)) def _get_mechanism(self): return self._attributes.get('RDB$PARAMETER_MECHANISM') def _get_column(self): rname = self._attributes.get('RDB$RELATION_NAME') return (None if rname is None else self.schema.get_table(rname).get_column(self._attributes['RDB$FIELD_NAME'])) def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) #--- Properties procedure = LateBindingProperty(_get_procedure, doc=":class:`Procedure` instance this parameter belongs to.") sequence = LateBindingProperty(_get_sequence, doc="Sequence (position) of parameter.") domain = LateBindingProperty(_get_domain, doc=":class:`Domain` for this parameter.") datatype = LateBindingProperty(_get_datatype, doc="Complete SQL datatype definition.") type_from = LateBindingProperty(_get_type_from, doc="Numeric code. See :attr:`Schema.enum_param_type_from`.") # FB 2.1 default = LateBindingProperty(_get_default, doc="Default value.") collation = LateBindingProperty(_get_collation, doc=":class:`Collation` for this parameter.") mechanism = LateBindingProperty(_get_mechanism, doc="Parameter mechanism code.") # FB 2.5 column = LateBindingProperty(_get_column, doc=":class:`TableColumn` for this parameter.") # FB 3.0 package = LateBindingProperty(_get_package, doc="Package this procedure belongs to. \nObject is :class:`Package` instance or None.") #--- Public def get_sql_definition(self): "Returns SQL definition for parameter."
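        # Illustrative output shapes (hypothetical parameter declarations;
        # the actual string depends on the parameter's system-table attributes):
        #   IN_A INTEGER NOT NULL
        #   IN_B TYPE OF COLUMN EMPLOYEE.EMP_NO
        #   IN_C VARCHAR(20) = 'N/A'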
typedef = self.datatype if self.type_from == PROCPAR_DOMAIN: typedef = self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_DOMAIN: typedef = 'TYPE OF %s' % self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_COLUMN: typedef = 'TYPE OF COLUMN %s.%s' % (self.column.table.get_quoted_name(), self.column.get_quoted_name()) result = '%s %s%s' % (self.get_quoted_name(), typedef, '' if self.isnullable() else ' NOT NULL') c = self.collation if c is not None: result += ' COLLATE %s' % c.get_quoted_name() if self.isinput() and self.has_default(): result += ' = %s' % self.default return result def isinput(self): "Returns True if parameter is INPUT parameter." return self._attributes['RDB$PARAMETER_TYPE'] == 0 def isnullable(self): "Returns True if parameter allows NULL." return not bool(self._attributes.get('RDB$NULL_FLAG')) def has_default(self): "Returns True if parameter has default value." return bool(self._attributes.get('RDB$DEFAULT_SOURCE')) def ispackaged(self): "Returns True if procedure parameter is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Procedure(BaseSchemaItem): """Represents stored procedure. Supported SQL actions: - User procedure: create(no_code=bool), recreate(no_code=bool), create_or_alter(no_code=bool), drop, alter(input=string_or_list,output=string_or_list,declare=string_or_list, code=string_or_list), comment - System procedure: comment """ def __init__(self, schema, attributes): super(Procedure, self).__init__(schema, attributes) self._type_code = [5,] self.__input_params = self.__output_params = None self._strip_attribute('RDB$PROCEDURE_NAME') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$ENGINE_NAME') self._strip_attribute('RDB$ENTRYPOINT') self._strip_attribute('RDB$PACKAGE_NAME') self.__ods = schema._con.ods self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'recreate', 'alter', 'create_or_alter', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, ['no_code']) no_code = params.get('no_code') result = 'CREATE PROCEDURE %s' % self.get_quoted_name() if self.has_input(): if self._attributes['RDB$PROCEDURE_INPUTS'] == 1: result += ' (%s)\n' % self.input_params[0].get_sql_definition() else: result += ' (\n' for p in self.input_params: result += ' %s%s\n' % (p.get_sql_definition(), '' if p.sequence+1 == self._attributes['RDB$PROCEDURE_INPUTS'] else ',') result += ')\n' else: result += '\n' if self.has_output(): if self._attributes['RDB$PROCEDURE_OUTPUTS'] == 1: result += 'RETURNS (%s)\n' % self.output_params[0].get_sql_definition() else: result += 'RETURNS (\n' for p in self.output_params: result += ' %s%s\n' % (p.get_sql_definition(), '' if p.sequence+1 == self._attributes['RDB$PROCEDURE_OUTPUTS'] else ',') result += ')\n' return result+'AS\n'+(('BEGIN\nEND' if self.proc_type != 1 else 'BEGIN\n SUSPEND;\nEND') if no_code else self.source) def _get_alter_sql(self, **params): self._check_params(params, ['input', 'output', 'declare', 'code']) inpars = params.get('input') outpars = params.get('output') declare = params.get('declare') code = params.get('code') if 'code' not in params: raise fdb.ProgrammingError("Missing required parameter: 'code'.") # header = '' if inpars is not None: if isinstance(inpars, (list, tuple)): numpars = len(inpars) if numpars == 1: header = ' (%s)\n' % inpars else: header = ' (\n' i = 1 for p in inpars: header += ' %s%s\n' % (p, '' if i == 
numpars else ',') i += 1 header += ')\n' else: header = ' (%s)\n' % inpars # if outpars is not None: if not header: header += '\n' if isinstance(outpars, (list, tuple)): numpars = len(outpars) if numpars == 1: header += 'RETURNS (%s)\n' % outpars else: header += 'RETURNS (\n' i = 1 for p in outpars: header += ' %s%s\n' % (p, '' if i == numpars else ',') i += 1 header += ')\n' else: header += 'RETURNS (%s)\n' % outpars # if code: if declare is None: d = '' elif isinstance(declare, (list, tuple)): d = '' for x in declare: d += ' %s\n' % x else: d = '%s\n' % declare if isinstance(code, (list, tuple)): c = '' for x in code: c += ' %s\n' % x else: c = '%s\n' % code body = '%sAS\n%sBEGIN\n%sEND' % ('' if header else '\n', d, c) else: body = '%sAS\nBEGIN\nEND' % ('' if header else '\n') # return 'ALTER PROCEDURE %s%s%s' % (self.get_quoted_name(), header, body) def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP PROCEDURE %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON PROCEDURE %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def __param_columns(self): cols = ['RDB$PARAMETER_NAME', 'RDB$PROCEDURE_NAME', 'RDB$PARAMETER_NUMBER', 'RDB$PARAMETER_TYPE', 'RDB$FIELD_SOURCE', 'RDB$DESCRIPTION', 'RDB$SYSTEM_FLAG'] if self.__ods >= fdb.ODS_FB_21: cols.extend(['RDB$DEFAULT_SOURCE', 'RDB$COLLATION_ID', 'RDB$NULL_FLAG', 'RDB$PARAMETER_MECHANISM']) if self.__ods >= fdb.ODS_FB_25: cols.extend(['RDB$FIELD_NAME', 'RDB$RELATION_NAME']) if self.__ods >= fdb.ODS_FB_30: cols.extend(['RDB$PACKAGE_NAME']) return ','.join(cols) def _get_name(self): return self._attributes['RDB$PROCEDURE_NAME'] def _get_id(self): return self._attributes['RDB$PROCEDURE_ID'] def _get_source(self): return self._attributes['RDB$PROCEDURE_SOURCE'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_input_params(self): sql = """select %s from rdb$procedure_parameters where rdb$procedure_name = ? and rdb$parameter_type = 0 order by rdb$parameter_number""" if self.__input_params is None: if self.has_input(): self.__input_params = ObjectList((ProcedureParameter(self.schema, self, row) for row in self.schema._select(sql % self.__param_columns(), (self.name,))), ProcedureParameter, 'item.name') else: self.__input_params = ObjectList() self.__input_params.freeze() return self.__input_params def _get_output_params(self): sql = """select %s from rdb$procedure_parameters where rdb$procedure_name = ? 
and rdb$parameter_type = 1 order by rdb$parameter_number""" if self.__output_params is None: if self.has_output(): self.__output_params = ObjectList((ProcedureParameter(self.schema, self, row) for row in self.schema._select(sql % self.__param_columns(), (self.name,))), ProcedureParameter, 'item.name') else: self.__output_params = ObjectList() self.__output_params.freeze() return self.__output_params def _get_proc_type(self): return self._attributes.get('RDB$PROCEDURE_TYPE', 0) def _get_valid_blr(self): result = self._attributes.get('RDB$VALID_BLR') return bool(result) if result is not None else None def _get_privileges(self): return self.schema.privileges.filter(lambda p: ((p.subject_name == self.name) and (p.subject_type in self._type_code))) def _get_engine_name(self): return self._attributes.get('RDB$ENGINE_NAME') def _get_entrypoint(self): return self._attributes.get('RDB$ENTRYPOINT') def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) def _get_privacy(self): return self._attributes.get('RDB$PRIVATE_FLAG') #--- Properties id = LateBindingProperty(_get_id, doc="Internal unique ID number.") source = LateBindingProperty(_get_source, doc="PSQL source code.") security_class = LateBindingProperty(_get_security_class, doc="Security class that defines access limits to the procedure.") owner_name = LateBindingProperty(_get_owner_name, doc="User name of procedure's creator.") input_params = LateBindingProperty(_get_input_params, doc=":class:`~fdb.utils.ObjectList` of input parameters.\nItems are :class:`ProcedureParameter` instances.") output_params = LateBindingProperty(_get_output_params, doc=":class:`~fdb.utils.ObjectList` of output parameters.\nItems are :class:`ProcedureParameter` instances.") privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of :class:`Privilege` objects granted to this object.") # FB 2.1 proc_type = LateBindingProperty(_get_proc_type, doc="Procedure type code. See :attr:`~fdb.schema.Schema.enum_procedure_types`.") valid_blr = LateBindingProperty(_get_valid_blr, doc="Procedure BLR invalidation flag. Could be True/False or None.") # FB 3.0 engine_name = LateBindingProperty(_get_engine_name, doc="Engine name.") entrypoint = LateBindingProperty(_get_entrypoint, doc="Entrypoint.") package = LateBindingProperty(_get_package, doc="Package this procedure belongs to. \nObject is :class:`Package` instance or None.") privacy = LateBindingProperty(_get_privacy, doc="Privacy flag.") #--- Public def get_param(self, name): "Returns :class:`ProcedureParameter` with specified name or None" for p in self.output_params: if p.name == name: return p for p in self.input_params: if p.name == name: return p return None def has_input(self): "Returns True if procedure has any input parameters." return bool(self._attributes['RDB$PROCEDURE_INPUTS']) def has_output(self): "Returns True if procedure has any output parameters." return bool(self._attributes['RDB$PROCEDURE_OUTPUTS']) def ispackaged(self): "Returns True if procedure is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Role(BaseSchemaItem): """Represents user role.
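
    Quick usage sketch (illustrative only, not part of the original docs;
    assumes the :class:`Schema` collection ``roles``)::

        for role in con.schema.roles:
            if not role.issystemobject():
                print(role.get_sql_for('create'))
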
Supported SQL actions: - User role: create, drop, comment - System role: comment """ def __init__(self, schema, attributes): super(Role, self).__init__(schema, attributes) self._type_code = [13,] self._strip_attribute('RDB$ROLE_NAME') self._strip_attribute('RDB$OWNER_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._actions = ['comment'] if not self.issystemobject(): self._actions.extend(['create', 'drop']) #--- Protected def _get_create_sql(self, **params): self._check_params(params, []) return 'CREATE ROLE %s' % self.get_quoted_name() def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP ROLE %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON ROLE %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$ROLE_NAME'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_privileges(self): return self.schema.privileges.filter(lambda p: ((p.user_name == self.name) and (p.user_type in self._type_code))) #--- Properties owner_name = LateBindingProperty(_get_owner_name, doc="User name of role owner.") privileges = LateBindingProperty(_get_privileges, doc=":class:`~fdb.utils.ObjectList` of :class:`Privilege` objects granted to this object.") security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.") class FunctionArgument(BaseSchemaItem): """Represents UDF argument. Supported SQL actions: none. """ def __init__(self, schema, function, attributes): super(FunctionArgument, self).__init__(schema, attributes) self._type_code = [15,] self.__function = function self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$PACKAGE_NAME') self._strip_attribute('RDB$ARGUMENT_NAME') self._strip_attribute('RDB$FIELD_SOURCE') self._strip_attribute('RDB$DEFAULT_SOURCE') self._strip_attribute('RDB$FIELD_NAME') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$DESCRIPTION') #--- Protected def _get_name(self): return self.argument_name if self.argument_name else (self.function.name+ '_'+str(self._get_position())) def _get_function(self): return self.__function def _get_position(self): return self._attributes['RDB$ARGUMENT_POSITION'] def _get_mechanism(self): x = self._attributes['RDB$MECHANISM'] return None if x is None else abs(x) def _get_length(self): return self._attributes['RDB$FIELD_LENGTH'] def _get_scale(self): return self._attributes['RDB$FIELD_SCALE'] def _get_field_type(self): return self._attributes['RDB$FIELD_TYPE'] def _get_sub_type(self): return self._attributes['RDB$FIELD_SUB_TYPE'] def _get_character_length(self): return self._attributes['RDB$CHARACTER_LENGTH'] def _get_character_set(self): return self.schema.get_character_set_by_id(self._attributes['RDB$CHARACTER_SET_ID']) def _get_precision(self): return self._attributes['RDB$FIELD_PRECISION'] def _get_datatype(self): if self.field_type is None: # FB3 PSQL function, datatype defined via internal domain return self.domain.datatype else: # Classic external UDF l = [] precision_known = False if self.field_type in (FBT_SMALLINT, FBT_INTEGER, FBT_BIGINT): if self.precision is not None: if (self.sub_type > 0) and (self.sub_type <= MAX_INTSUBTYPES): l.append('%s(%d, %d)' % \ (INTEGRAL_SUBTYPES[self.sub_type], self.precision,
-self.scale)) precision_known = True if not precision_known: if (self.field_type == FBT_SMALLINT) and (self.scale < 0): l.append('NUMERIC(4, %d)' % -self.scale) elif (self.field_type == FBT_INTEGER) and (self.scale < 0): l.append('NUMERIC(9, %d)' % -self.scale) elif (self.field_type == FBT_DOUBLE_PRECISION) and (self.scale < 0): l.append('NUMERIC(15, %d)' % -self.scale) else: l.append(COLUMN_TYPES[self.field_type]) if self.field_type in (FBT_CHAR, FBT_VARCHAR, FBT_CSTRING): l.append('(%d)' % (self.length if (self.character_length is None) else self.character_length)) if self.field_type == FBT_BLOB: if self.sub_type >= 0 and self.sub_type <= MAX_BLOBSUBTYPES: if self.sub_type > 0: l.append(' SUB_TYPE %s' % BLOB_SUBTYPES[self.sub_type]) else: l.append(' SUB_TYPE %d' % self.sub_type) if self.field_type in (FBT_CHAR, FBT_VARCHAR, FBT_CSTRING, FBT_BLOB): if self._attributes['RDB$CHARACTER_SET_ID'] is not None and \ (self.character_set.name != self.schema.default_character_set.name): l.append(' CHARACTER SET %s' % self.character_set.name) return ''.join(l) def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) def _get_argument_name(self): return self._attributes.get('RDB$ARGUMENT_NAME') def _get_domain(self): return self.schema.get_domain(self._attributes.get('RDB$FIELD_SOURCE')) def _get_default(self): result = self._attributes.get('RDB$DEFAULT_SOURCE') if result: if result.upper().startswith('= '): result = result[2:] elif result.upper().startswith('DEFAULT '): result = result[8:] return result def _get_collation(self): cid = self._attributes.get('RDB$COLLATION_ID') return (None if cid is None else self.schema.get_collation_by_id(self.domain._attributes['RDB$CHARACTER_SET_ID'], cid)) def _get_argument_mechanism(self): return self._attributes.get('RDB$ARGUMENT_MECHANISM') def _get_column(self): rname = self._attributes.get('RDB$RELATION_NAME') return (None if rname is None else self.schema.get_table(rname).get_column(self._attributes['RDB$FIELD_NAME'])) def _get_type_from(self): m = self.argument_mechanism if m is None: return PROCPAR_DATATYPE elif m == 0: return PROCPAR_DATATYPE if self.domain.issystemobject() else PROCPAR_DOMAIN elif m == 1: if self._attributes.get('RDB$RELATION_NAME') is None: return PROCPAR_TYPE_OF_DOMAIN else: return PROCPAR_TYPE_OF_COLUMN else: raise fdb.InternalError("Unknown parameter mechanism code: %d" % m) #--- Properties function = LateBindingProperty(_get_function, doc=":class:`Function` to which this argument belongs.") position = LateBindingProperty(_get_position, doc="Argument position.") mechanism = LateBindingProperty(_get_mechanism, doc="How argument is passed.") field_type = LateBindingProperty(_get_field_type, doc="Number code of the data type defined for the argument.") length = LateBindingProperty(_get_length, doc="Length of the argument in bytes.") scale = LateBindingProperty(_get_scale, doc="Negative number representing the scale of NUMERIC and DECIMAL argument.") precision = LateBindingProperty(_get_precision, doc="Indicates the number of digits of precision available to the data type of the argument.") sub_type = LateBindingProperty(_get_sub_type, doc="BLOB subtype.") character_length = LateBindingProperty(_get_character_length, doc="Length of CHAR and VARCHAR argument, in characters (not bytes).") character_set = LateBindingProperty(_get_character_set, doc=":class:`CharacterSet` for a character/text BLOB argument, or None.") datatype = LateBindingProperty(_get_datatype, doc="Complete SQL datatype definition.") #
FB 3.0 argument_name = LateBindingProperty(_get_argument_name, doc="Argument name.") domain = LateBindingProperty(_get_domain, doc=":class:`Domain` for this parameter.") default = LateBindingProperty(_get_default, doc="Default value.") collation = LateBindingProperty(_get_collation, doc=":class:`Collation` for this parameter.") argument_mechanism = LateBindingProperty(_get_argument_mechanism, doc="Argument mechanism.") column = LateBindingProperty(_get_column, doc=":class:`TableColumn` for this parameter.") type_from = LateBindingProperty(_get_type_from, doc="Numeric code. See :attr:`Schema.enum_param_type_from`.") package = LateBindingProperty(_get_package, doc="Package this function belongs to.\nObject is :class:`Package` instance or None.") #--- Public def get_sql_definition(self): "Returns SQL definition for parameter." if self.function.isexternal(): return '%s%s%s' % (self.datatype, ' BY DESCRIPTOR' if self.isbydescriptor() else '', ' BY VALUE' if self.isbyvalue() and self.isreturning() else '',) else: typedef = self.datatype if self.type_from == PROCPAR_DOMAIN: typedef = self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_DOMAIN: typedef = 'TYPE OF %s' % self.domain.get_quoted_name() elif self.type_from == PROCPAR_TYPE_OF_COLUMN: typedef = 'TYPE OF COLUMN %s.%s' % (self.column.table.get_quoted_name(), self.column.get_quoted_name()) result = '%s%s%s' % (self.get_quoted_name()+' ' if not self.isreturning() else '', typedef, '' if self.isnullable() else ' NOT NULL') c = self.collation if c is not None: result += ' COLLATE %s' % c.get_quoted_name() if not self.isreturning() and self.has_default(): result += ' = %s' % self.default return result def isbyvalue(self): "Returns True if argument is passed by value." return self.mechanism == 0 def isbyreference(self): "Returns True if argument is passed by reference." return self.mechanism in [1, 5] def isbydescriptor(self, any=False): """Returns True if argument is passed by descriptor. :param bool any: If True, method returns True if any kind of descriptor is used (including BLOB and ARRAY descriptors). """ return self.mechanism in [2, 3, 4] if any else self.mechanism == 2 def iswithnull(self): "Returns True if argument is passed by reference with NULL support." return self.mechanism == 5 def isfreeit(self): "Returns True if (return) argument is declared as FREE_IT." return self._attributes['RDB$MECHANISM'] < 0 def isreturning(self): "Returns True if argument represents return value for function." return self.position == self.function._attributes['RDB$RETURN_ARGUMENT'] # Firebird 3.0 def isnullable(self): "Returns True if parameter allows NULL." return not bool(self._attributes.get('RDB$NULL_FLAG')) def has_default(self): "Returns True if parameter has default value." return bool(self._attributes.get('RDB$DEFAULT_SOURCE')) def ispackaged(self): "Returns True if function argument is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class Function(BaseSchemaItem): """Represents user defined function.
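
    Quick usage sketch (illustrative only, not part of the original docs;
    lists external UDFs and their DECLARE statements)::

        for fn in con.schema.functions:
            if fn.isexternal():
                print(fn.name, fn.module_name, fn.entrypoint)
                print(fn.get_sql_for('declare'))
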
Supported SQL actions: - External UDF: declare, drop, comment - PSQL UDF (FB 3, not declared in package): create(no_code=bool), recreate(no_code=bool), create_or_alter(no_code=bool), drop, alter(arguments=string_or_list,returns=string,declare=string_or_list, code=string_or_list) - System UDF: none """ def __init__(self, schema, attributes): super(Function, self).__init__(schema, attributes) self._type_code = [15,] self.__arguments = None self.__returns = None self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$MODULE_NAME') self._strip_attribute('RDB$ENTRYPOINT') self._strip_attribute('RDB$ENGINE_NAME') self._strip_attribute('RDB$PACKAGE_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') self.__ods = schema._con.ods if not self.issystemobject(): if self.isexternal(): self._actions = ['comment', 'declare', 'drop'] else: if self._attributes.get('RDB$PACKAGE_NAME') is None: self._actions = ['create', 'recreate', 'alter', 'create_or_alter', 'drop'] #--- Protected def _get_declare_sql(self, **params): self._check_params(params, []) fdef = 'DECLARE EXTERNAL FUNCTION %s\n' % self.get_quoted_name() for p in self.arguments: fdef += ' %s%s\n' % (p.get_sql_definition(), '' if p.position == len(self.arguments) else ',') if self.has_return(): fdef += 'RETURNS %s%s\n' % ('PARAMETER %d' % self._attributes['RDB$RETURN_ARGUMENT'] if self.has_return_argument() else self.returns.get_sql_definition(), ' FREE_IT' if self.returns.isfreeit() else '') return "%sENTRY_POINT '%s'\nMODULE_NAME '%s'" % (fdef, self.entrypoint, self.module_name) def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP%s FUNCTION %s' % (' EXTERNAL' if self.isexternal() else '', self.get_quoted_name()) def _get_comment_sql(self, **params): return 'COMMENT ON EXTERNAL FUNCTION %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_create_sql(self, **params): self._check_params(params, ['no_code']) no_code = params.get('no_code') result = 'CREATE FUNCTION %s' % self.get_quoted_name() if self.has_arguments(): if len(self.arguments) == 1: result += ' (%s)\n' % self.arguments[0].get_sql_definition() else: result += ' (\n' for p in self.arguments: result += ' %s%s\n' % (p.get_sql_definition(), '' if p.position == len(self.arguments) else ',') result += ')\n' else: result += '\n' result += 'RETURNS %s\n' % self.returns.get_sql_definition() return result+'AS\n'+('BEGIN\nEND' if no_code else self.source) def _get_alter_sql(self, **params): self._check_params(params, ['arguments', 'returns', 'declare', 'code']) arguments = params.get('arguments') returns = params.get('returns') if returns is None: raise fdb.ProgrammingError("Missing required parameter: 'returns'.") declare = params.get('declare') code = params.get('code') if 'code' not in params: raise fdb.ProgrammingError("Missing required parameter: 'code'.") # header = '' if arguments is not None: if isinstance(arguments, (list, tuple)): numpars = len(arguments) if numpars == 1: header = ' (%s)\n' % arguments else: header = ' (\n' i = 1 for p in arguments: header += ' %s%s\n' % (p, '' if i == numpars else ',') i += 1 header += ')\n' else: header = ' (%s)\n' % arguments # if not header: header += '\n' header += 'RETURNS %s\n' % returns # if code: if declare is None: d = '' elif isinstance(declare, (list, tuple)): d = '' for x in declare: d += ' %s\n' % x else: d = '%s\n' % declare if isinstance(code, (list, tuple)): c = '' for x in code: 
c += ' %s\n' % x else: c = '%s\n' % code body = '%sAS\n%sBEGIN\n%sEND' % ('' if header else '\n', d, c) else: body = '%sAS\nBEGIN\nEND' % ('' if header else '\n') # return 'ALTER FUNCTION %s%s%s' % (self.get_quoted_name(), header, body) def _load_arguments(self, mock=None): cols = ['RDB$FUNCTION_NAME', 'RDB$ARGUMENT_POSITION', 'RDB$MECHANISM', 'RDB$FIELD_TYPE', 'RDB$FIELD_SCALE', 'RDB$FIELD_LENGTH', 'RDB$FIELD_SUB_TYPE', 'RDB$CHARACTER_SET_ID', 'RDB$FIELD_PRECISION', 'RDB$CHARACTER_LENGTH'] if self.__ods >= fdb.ODS_FB_30: cols.extend(['RDB$PACKAGE_NAME', 'RDB$ARGUMENT_NAME', 'RDB$FIELD_SOURCE', 'RDB$DEFAULT_SOURCE', 'RDB$COLLATION_ID', 'RDB$NULL_FLAG', 'RDB$ARGUMENT_MECHANISM', 'RDB$FIELD_NAME', 'RDB$RELATION_NAME', 'RDB$SYSTEM_FLAG', 'RDB$DESCRIPTION']) self.__arguments = ObjectList((FunctionArgument(self.schema, self, row) for row in (mock if mock else self.schema._select("""select %s from rdb$function_arguments where rdb$function_name = ? order by rdb$argument_position""" % ','.join(cols), (self.name,)))), FunctionArgument) self.__arguments.freeze() rarg = self._attributes['RDB$RETURN_ARGUMENT'] if rarg is not None: for a in self.__arguments: if a.position == rarg: self.__returns = weakref.ref(a) def _get_name(self): return self._attributes['RDB$FUNCTION_NAME'] def _get_module_name(self): return self._attributes['RDB$MODULE_NAME'] def _get_entrypoint(self): return self._attributes['RDB$ENTRYPOINT'] def _get_returns(self): if self.__arguments is None: self._load_arguments() return self.__returns if self.__returns is None else self.__returns() def _get_arguments(self): if self.__arguments is None: self._load_arguments() return self.__arguments.filter(lambda a: a.position != 0) def _get_engine_mame(self): return self._attributes.get('RDB$ENGINE_NAME') def _get_package(self): return self.schema.get_package(self._attributes.get('RDB$PACKAGE_NAME')) def _get_private_flag(self): return self._attributes.get('RDB$PRIVATE_FLAG') def _get_source(self): return self._attributes.get('RDB$FUNCTION_SOURCE') def _get_id(self): return self._attributes.get('RDB$FUNCTION_ID') def _get_valid_blr(self): result = self._attributes.get('RDB$VALID_BLR') return bool(result) if result is not None else None def _get_security_class(self): return self._attributes.get('RDB$SECURITY_CLASS') def _get_owner_name(self): return self._attributes.get('RDB$OWNER_NAME') def _get_legacy_flag(self): return self._attributes.get('RDB$LEGACY_FLAG') def _get_deterministic_flag(self): return self._attributes.get('RDB$DETERMINISTIC_FLAG') #--- Properties module_name = LateBindingProperty(_get_module_name, doc="Module name.") entrypoint = LateBindingProperty(_get_entrypoint, doc="Entrypoint in module.") returns = LateBindingProperty(_get_returns, doc="Returning :class:`FunctionArgument` or None.") arguments = LateBindingProperty(_get_arguments, doc=":class:`~fdb.utils.ObjectList` of function arguments." " Items are :class:`FunctionArgument` instances.") # Firebird 3.0 engine_mame = LateBindingProperty(_get_engine_mame, doc="Engine name.") package = LateBindingProperty(_get_package, doc="Package this function belongs to. 
\nObject is" " :class:`Package` instance or None.") private_flag = LateBindingProperty(_get_private_flag, doc="Private flag.") source = LateBindingProperty(_get_source, doc="Function source.") id = LateBindingProperty(_get_id, doc="Function ID.") valid_blr = LateBindingProperty(_get_valid_blr, doc="BLR validity flag.") security_class = LateBindingProperty(_get_security_class, doc="Security class.") owner_name = LateBindingProperty(_get_owner_name, doc="Owner name.") legacy_flag = LateBindingProperty(_get_legacy_flag, doc="Legacy flag.") deterministic_flag = LateBindingProperty(_get_deterministic_flag, doc="Deterministic flag.") #--- Public def isexternal(self): "Returns True if function is external UDF, False for PSQL functions." return True if self.module_name else False def has_arguments(self): "Returns True if function has input arguments." return bool(self.arguments) def has_return(self): "Returns True if function returns a value." return self.returns is not None def has_return_argument(self): "Returns True if function returns a value in input argument." return self.returns.position != 0 if self.returns is not None else False def ispackaged(self): "Returns True if function is defined in package." return bool(self._attributes.get('RDB$PACKAGE_NAME')) class DatabaseFile(BaseSchemaItem): """Represents database extension file. Supported SQL actions: create """ def __init__(self, schema, attributes): super(DatabaseFile, self).__init__(schema, attributes) self._type_code = [] self._strip_attribute('RDB$FILE_NAME') #--- Protected def _get_name(self): return 'FILE_%d' % self.sequence def _get_filename(self): return self._attributes['RDB$FILE_NAME'] def _get_sequence(self): return self._attributes['RDB$FILE_SEQUENCE'] def _get_start(self): return self._attributes['RDB$FILE_START'] def _get_length(self): return self._attributes['RDB$FILE_LENGTH'] #--- Properties filename = LateBindingProperty(_get_filename, doc="File name.") sequence = LateBindingProperty(_get_sequence, doc="File sequence number.") start = LateBindingProperty(_get_start, doc="File start page number.") length = LateBindingProperty(_get_length, doc="File length in pages.") #--- Public def issystemobject(self): "Returns True." return True class Shadow(BaseSchemaItem): """Represents database shadow. 
Supported SQL actions: create, drop(preserve=bool) """ SHADOW_INACTIVE = 2 SHADOW_MANUAL = 4 SHADOW_CONDITIONAL = 16 def __init__(self, schema, attributes): super(Shadow, self).__init__(schema, attributes) self._type_code = [] self.__files = None self._actions = ['create', 'drop'] def _get_create_sql(self, **params): self._check_params(params, []) result = 'CREATE SHADOW %d %s%s' % (self.id, 'MANUAL' if self.ismanual() else 'AUTO', ' CONDITIONAL' if self.isconditional() else '') if len(self.files) == 1: result += " '%s'" % self.files[0].filename else: f = self.files[0] result += " '%s'%s\n" % (f.filename, ' LENGTH %d' % f.length if f.length > 0 else '') for f in self.files[1:]: result += " FILE '%s'%s%s" % (f.filename, ' STARTING AT %d' % f.start if f.start > 0 else '', ' LENGTH %d' % f.length if f.length > 0 else '') if f.sequence < len(self.files)-1: result += '\n' return result def _get_drop_sql(self, **params): self._check_params(params, ['preserve']) preserve = params.get('preserve') return 'DROP SHADOW %d%s' % (self.id, ' PRESERVE FILE' if preserve else '') def _get_name(self): return 'SHADOW_%d' % self.id def _get_id(self): return self._attributes['RDB$SHADOW_NUMBER'] def _get_flags(self): return self._attributes['RDB$FILE_FLAGS'] def _get_files(self): if self.__files is None: self.__files = [DatabaseFile(self, row) for row in self.schema._select(""" select RDB$FILE_NAME, RDB$FILE_SEQUENCE, RDB$FILE_START, RDB$FILE_LENGTH from RDB$FILES where RDB$SHADOW_NUMBER = ? order by RDB$FILE_SEQUENCE""", (self._attributes['RDB$SHADOW_NUMBER'],))] return self.__files #--- Properties id = LateBindingProperty(_get_id, doc="Shadow ID number.") flags = LateBindingProperty(_get_flags, doc="Shadow flags.") files = LateBindingProperty(_get_files, doc="List of shadow files. Items are :class:`DatabaseFile` instances.") #--- Public def issystemobject(self): "Returns False." return False def ismanual(self): "Returns True if it's MANUAL shadow." return bool(self.flags & self.SHADOW_MANUAL) def isinactive(self): "Returns True if it's INACTIVE shadow." return bool(self.flags & self.SHADOW_INACTIVE) def isconditional(self): "Returns True if it's CONDITIONAL shadow." return bool(self.flags & self.SHADOW_CONDITIONAL) class Privilege(BaseSchemaItem): """Represents privilege to database object.
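
    Quick usage sketch (illustrative only, not part of the original docs;
    regenerates GRANT statements for all privileges on the EMPLOYEE table)::

        table = con.schema.get_table('EMPLOYEE')
        for priv in table.privileges:
            print(priv.get_sql_for('grant', grantors=['SYSDBA']))
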
Supported SQL actions: grant(grantors),revoke(grantors,grant_option) """ def __init__(self, schema, attributes): super(Privilege, self).__init__(schema, attributes) self._type_code = [] self._actions = ['grant', 'revoke'] self._strip_attribute('RDB$USER') self._strip_attribute('RDB$GRANTOR') self._strip_attribute('RDB$PRIVILEGE') self._strip_attribute('RDB$RELATION_NAME') self._strip_attribute('RDB$FIELD_NAME') def _get_grant_sql(self, **params): self._check_params(params, ['grantors']) grantors = params.get('grantors', ['SYSDBA']) privileges = {'S':'SELECT', 'I':'INSERT', 'U':'UPDATE', 'D':'DELETE', 'R':'REFERENCES'} admin_option = ' WITH GRANT OPTION' if self.has_grant() else '' if self.privilege in privileges: privilege = privileges[self.privilege] if self.field_name is not None: privilege += '(%s)' % self.field_name privilege += ' ON ' elif self.privilege == 'X': # procedure privilege = 'EXECUTE ON PROCEDURE ' else: # role membership privilege = '' admin_option = ' WITH ADMIN OPTION' if self.has_grant() else '' user = self.user if isinstance(user, Procedure): utype = 'PROCEDURE ' elif isinstance(user, Trigger): utype = 'TRIGGER ' elif isinstance(user, View): utype = 'VIEW ' else: utype = '' if (grantors is not None) and (self.grantor_name not in grantors): granted_by = ' GRANTED BY %s' % self.grantor_name else: granted_by = '' return 'GRANT %s%s TO %s%s%s%s' % (privilege, self.subject_name, utype, self.user_name, admin_option, granted_by) def _get_revoke_sql(self, **params): self._check_params(params, ['grant_option', 'grantors']) grantors = params.get('grantors', ['SYSDBA']) option_only = params.get('grant_option', False) if option_only and not self.has_grant(): raise fdb.ProgrammingError("Can't revoke grant option that wasn't granted.") privileges = {'S':'SELECT', 'I':'INSERT', 'U':'UPDATE', 'D':'DELETE', 'R':'REFERENCES'} admin_option = 'GRANT OPTION FOR ' if self.has_grant() and option_only else '' if self.privilege in privileges: privilege = privileges[self.privilege] if self.field_name is not None: privilege += '(%s)' % self.field_name privilege += ' ON ' elif self.privilege == 'X': # procedure privilege = 'EXECUTE ON PROCEDURE ' else: # role membership privilege = '' admin_option = 'ADMIN OPTION FOR ' if self.has_grant() and option_only else '' user = self.user if isinstance(user, Procedure): utype = 'PROCEDURE ' elif isinstance(user, Trigger): utype = 'TRIGGER ' elif isinstance(user, View): utype = 'VIEW ' else: utype = '' if (grantors is not None) and (self.grantor_name not in grantors): granted_by = ' GRANTED BY %s' % self.grantor_name else: granted_by = '' return 'REVOKE %s%s%s FROM %s%s%s' % (admin_option, privilege, self.subject_name, utype, self.user_name, granted_by) def _get_user(self): return self.schema._get_item(self._attributes['RDB$USER'], self._attributes['RDB$USER_TYPE']) def _get_grantor(self): return fdb.services.User(self._attributes['RDB$GRANTOR']) def _get_privilege(self): return self._attributes['RDB$PRIVILEGE'] def _get_subject(self): rname = self._attributes['RDB$RELATION_NAME'] result = self.schema._get_item(rname, self.subject_type, self.field_name) if result is None and self.subject_type == 0: # Views are logged as tables, so try again for view code result = self.schema._get_item(rname, 1, self.field_name) return result def _get_user_name(self): return self._attributes['RDB$USER'] def _get_user_type(self): return self._attributes['RDB$USER_TYPE'] def _get_grantor_name(self): return self._attributes['RDB$GRANTOR'] def _get_subject_name(self): return
self._attributes['RDB$RELATION_NAME'] def _get_subject_type(self): return self._attributes['RDB$OBJECT_TYPE'] def _get_field_name(self): return self._attributes['RDB$FIELD_NAME'] #--- Properties user = LateBindingProperty(_get_user, doc="Grantee. Either :class:`~fdb.services.User`, :class:`Role`," " :class:`Procedure`, :class:`Trigger` or :class:`View` object.") grantor = LateBindingProperty(_get_grantor, doc="Grantor :class:`~fdb.services.User` object.") privilege = LateBindingProperty(_get_privilege, doc="Privilege code.") subject = LateBindingProperty(_get_subject, doc="Privilege subject. Either :class:`Role`, :class:`Table`," " :class:`View` or :class:`Procedure` object.") user_name = LateBindingProperty(_get_user_name, doc="User name.") user_type = LateBindingProperty(_get_user_type, doc="User type.") grantor_name = LateBindingProperty(_get_grantor_name, doc="Grantor name.") subject_name = LateBindingProperty(_get_subject_name, doc="Subject name.") subject_type = LateBindingProperty(_get_subject_type, doc="Subject type.") field_name = LateBindingProperty(_get_field_name, doc="Field name.") #--- Public def has_grant(self): "Returns True if privilege comes with GRANT OPTION." return bool(self._attributes['RDB$GRANT_OPTION']) def issystemobject(self): "Returns True." return True def isselect(self): "Returns True if this is a SELECT privilege." return self.privilege == 'S' def isinsert(self): "Returns True if this is an INSERT privilege." return self.privilege == 'I' def isupdate(self): "Returns True if this is an UPDATE privilege." return self.privilege == 'U' def isdelete(self): "Returns True if this is a DELETE privilege." return self.privilege == 'D' def isexecute(self): "Returns True if this is an EXECUTE privilege." return self.privilege == 'X' def isreference(self): "Returns True if this is a REFERENCES privilege." return self.privilege == 'R' def ismembership(self): "Returns True if this is a ROLE membership privilege." return self.privilege == 'M' class Package(BaseSchemaItem): """Represents a PSQL package.
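    A minimal usage sketch (illustrative only, not part of the driver; it assumes
    ``schema`` is a :class:`Schema` instance attached to a Firebird 3.0 database)::

        for pkg in schema.packages:
            print(pkg.get_sql_for('create'))            # header only
            print(pkg.get_sql_for('create', body=True)) # header + body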
Supported SQL actions: create(body=bool), recreate(body=bool), create_or_alter(body=bool), alter(header=string_or_list), drop(body=bool) """ def __init__(self, schema, attributes): super(Package, self).__init__(schema, attributes) self._type_code = [18, 19] self._actions = ['create', 'recreate', 'create_or_alter', 'alter', 'drop'] self._strip_attribute('RDB$PACKAGE_NAME') self._strip_attribute('RDB$SECURITY_CLASS') self._strip_attribute('RDB$OWNER_NAME') def _get_create_sql(self, **params): self._check_params(params, ['body']) body = params.get('body') cbody = 'BODY ' if body else '' result = 'CREATE PACKAGE %s%s' % (cbody, self.get_quoted_name()) return result+'\nAS\n'+(self.body if body else self.header) def _get_alter_sql(self, **params): self._check_params(params, ['header']) header = params.get('header') if not header: hdr = '' else: hdr = '\n'.join(header) if isinstance(header, fdb.ListType) else header return 'ALTER PACKAGE %s\nAS\nBEGIN\n%s\nEND' % (self.get_quoted_name(), hdr) def _get_drop_sql(self, **params): self._check_params(params, ['body']) body = params.get('body') cbody = 'BODY ' if body else '' return 'DROP PACKAGE %s%s' % (cbody, self.get_quoted_name()) def _get_name(self): return self._attributes['RDB$PACKAGE_NAME'] def _get_security_class(self): return self._attributes['RDB$SECURITY_CLASS'] def _get_owner_name(self): return self._attributes['RDB$OWNER_NAME'] def _get_header(self): return self._attributes['RDB$PACKAGE_HEADER_SOURCE'] def _get_body(self): return self._attributes['RDB$PACKAGE_BODY_SOURCE'] def _get_functions(self): return self.schema.functions.filter(lambda fn: fn._attributes['RDB$PACKAGE_NAME'] == self.name) def _get_procedures(self): return self.schema.procedures.filter(lambda proc: proc._attributes['RDB$PACKAGE_NAME'] == self.name) #--- Properties header = LateBindingProperty(_get_header, doc="Package header source.") body = LateBindingProperty(_get_body, doc="Package body source.") security_class = LateBindingProperty(_get_security_class, doc="Security class name or None.") owner_name = LateBindingProperty(_get_owner_name, doc="User name of package creator.") functions = LateBindingProperty(_get_functions, doc=":class:`~fdb.utils.ObjectList` of package functions." " Items are :class:`Function` instances.") procedures = LateBindingProperty(_get_procedures, doc=":class:`~fdb.utils.ObjectList` of package procedures." " Items are :class:`Procedure` instances.") #--- Public def has_valid_body(self): "Returns True if the package body is valid, False if not, or None when validity is unknown." result = self._attributes.get('RDB$VALID_BODY_FLAG') return bool(result) if result is not None else None class BackupHistory(BaseSchemaItem): """Represents an entry in the history of backups performed using the nBackup utility.
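    A minimal usage sketch (illustrative only, not part of the driver; it assumes
    ``schema`` is a :class:`Schema` instance that exposes these entries through
    its ``backup_history`` collection)::

        for entry in schema.backup_history:
            print(entry.level, entry.created, entry.filename)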
Supported SQL actions: None """ def __init__(self, schema, attributes): super(BackupHistory, self).__init__(schema, attributes) self._type_code = [] self._strip_attribute('RDB$FILE_NAME') #--- Protected def _get_name(self): return 'BCKP_%d' % self.sequence def _get_backup_id(self): return self._attributes['RDB$BACKUP_ID'] def _get_filename(self): return self._attributes['RDB$FILE_NAME'] def _get_created(self): return self._attributes['RDB$TIMESTAMP'] def _get_level(self): return self._attributes['RDB$BACKUP_LEVEL'] def _get_scn(self): return self._attributes['RDB$SCN'] def _get_guid(self): return self._attributes['RDB$GUID'] #--- Properties backup_id = LateBindingProperty(_get_backup_id, doc="The identifier assigned by the engine.") filename = LateBindingProperty(_get_filename, doc="Full path and file name of backup file.") created = LateBindingProperty(_get_created, doc="Backup date and time.") level = LateBindingProperty(_get_level, doc="Backup level.") scn = LateBindingProperty(_get_scn, doc="System (scan) number.") guid = LateBindingProperty(_get_guid, doc="Unique identifier.") #--- Public def issystemobject(self): "Returns True." return True class Filter(BaseSchemaItem): """Represents a user-defined BLOB filter. Supported SQL actions: - BLOB filter: declare, drop, comment - System filter: none """ def __init__(self, schema, attributes): super(Filter, self).__init__(schema, attributes) self._type_code = [16,] self._strip_attribute('RDB$FUNCTION_NAME') self._strip_attribute('RDB$MODULE_NAME') self._strip_attribute('RDB$ENTRYPOINT') self.__ods = schema._con.ods if not self.issystemobject(): self._actions = ['comment', 'declare', 'drop'] #--- Protected def _get_declare_sql(self, **params): self._check_params(params, []) fdef = 'DECLARE FILTER %s\nINPUT_TYPE %d OUTPUT_TYPE %d\n' % (self.get_quoted_name(), self.input_sub_type, self.output_sub_type) return "%sENTRY_POINT '%s' MODULE_NAME '%s'" % (fdef, self.entrypoint, self.module_name) def _get_drop_sql(self, **params): self._check_params(params, []) return 'DROP FILTER %s' % self.get_quoted_name() def _get_comment_sql(self, **params): return 'COMMENT ON FILTER %s IS %s' % (self.get_quoted_name(), 'NULL' if self.description is None else "'%s'" % escape_single_quotes(self.description)) def _get_name(self): return self._attributes['RDB$FUNCTION_NAME'] def _get_module_name(self): return self._attributes['RDB$MODULE_NAME'] def _get_entrypoint(self): return self._attributes['RDB$ENTRYPOINT'] def _get_input_sub_type(self): return self._attributes.get('RDB$INPUT_SUB_TYPE') def _get_output_sub_type(self): return self._attributes.get('RDB$OUTPUT_SUB_TYPE') #--- Properties module_name = LateBindingProperty(_get_module_name, doc="The name of the dynamic library or shared object where the code of the BLOB filter is located.") entrypoint = LateBindingProperty(_get_entrypoint, doc="The exported name of the BLOB filter in the filter library.") input_sub_type = LateBindingProperty(_get_input_sub_type, doc="The BLOB subtype of the data to be converted by the function.") output_sub_type = LateBindingProperty(_get_output_sub_type, doc="The BLOB subtype of the converted data.") fdb-2.0.0/fdb/blr.py0000644000175000001440000001630513270362207014577 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: blr.py # DESCRIPTION: Python driver for Firebird - BLR-related definitions # CREATED: 12.6.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied.
# See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): Philippe Makowski # ______________________________________. # # See LICENSE.TXT for details. # BLR data types are defined in fdb.ibase #blr_text = 14 #blr_text2 = 15 #blr_short = 7 #blr_long = 8 #blr_quad = 9 #blr_float = 10 #blr_double = 27 #blr_d_float = 11 #blr_timestamp = 35 #blr_varying = 37 #blr_varying2 = 38 #blr_blob = 261 #blr_cstring = 40 #blr_cstring2 = 41 #blr_blob_id = 45 #blr_sql_date = 12 #blr_sql_time = 13 #blr_int64 = 16 #blr_blob2 = 17 #blr_domain_name = 18 #blr_domain_name2 = 19 #blr_not_nullable = 20 #blr_column_name = 21 #blr_column_name2 = 22 #blr_bool = 23 # Firebird 3.0 # first sub parameter for blr_domain_name[2] blr_domain_type_of = 0 blr_domain_full = 1 # Historical alias for pre V6 applications blr_date = 35 # blr_timestamp # Other BLR codes blr_inner = 0 blr_left = 1 blr_right = 2 blr_full = 3 blr_gds_code = 0 blr_sql_code = 1 blr_exception = 2 blr_trigger_code = 3 blr_default_code = 4 blr_raise = 5 blr_exception_msg = 6 # Firebird 3.0 blr_exception_params = 7 blr_sql_state = 8 blr_version4 = 4 blr_version5 = 5 blr_eoc = 76 blr_end = 255 blr_assignment = 1 blr_begin = 2 blr_dcl_variable = 3 blr_message = 4 blr_erase = 5 blr_fetch = 6 blr_for = 7 blr_if = 8 blr_loop = 9 blr_modify = 10 blr_handler = 11 blr_receive = 12 blr_select = 13 blr_send = 14 blr_store = 15 blr_label = 17 blr_leave = 18 blr_store2 = 19 blr_post = 20 blr_literal = 21 blr_dbkey = 22 blr_field = 23 blr_fid = 24 blr_parameter = 25 blr_variable = 26 blr_average = 27 blr_count = 28 blr_maximum = 29 blr_minimum = 30 blr_total = 31 # unused codes 32..33 blr_add = 34 blr_subtract = 35 blr_multiply = 36 blr_divide = 37 blr_negate = 38 blr_concatenate = 39 blr_substring = 40 blr_parameter2 = 41 blr_from = 42 blr_via = 43 blr_parameter2_old = 44 blr_user_name = 44 blr_null = 45 blr_equiv = 46 blr_eql = 47 blr_neq = 48 blr_gtr = 49 blr_geq = 50 blr_lss = 51 blr_leq = 52 blr_containing = 53 blr_matching = 54 blr_starting = 55 blr_between = 56 blr_or = 57 blr_and = 58 blr_not = 59 blr_any = 60 blr_missing = 61 blr_unique = 62 blr_like = 63 # unused codes 64..66 blr_rse = 67 blr_first = 68 blr_project = 69 blr_sort = 70 blr_boolean = 71 blr_ascending = 72 blr_descending = 73 blr_relation = 74 blr_rid = 75 blr_union = 76 blr_map = 77 blr_group_by = 78 blr_aggregate = 79 blr_join_type = 80 # unused codes 81..82 blr_agg_count = 83 blr_agg_max = 84 blr_agg_min = 85 blr_agg_total = 86 blr_agg_average = 87 blr_parameter3 = 88 # Unsupported #blr_run_max = 89 #blr_run_min = 90 #blr_run_total = 91 #blr_run_average = 92 blr_agg_count2 = 93 blr_agg_count_distinct = 94 blr_agg_total_distinct = 95 blr_agg_average_distinct = 96 # unused codes 97..99 blr_function = 100 blr_gen_id = 101 blr_prot_mask = 102 blr_upcase = 103 blr_lock_state = 104 blr_value_if = 105 blr_matching2 = 106 blr_index = 107 blr_ansi_like = 108 blr_scrollable = 109 # Firebird 3.0 #blr_seek = 112 # Defined in FB < 3.0 # unused codes 110..117 blr_run_count = 118 blr_rs_stream = 119 blr_exec_proc = 120 # unused codes 121..123 blr_procedure = 124 blr_pid = 125 blr_exec_pid = 126 blr_singular = 127 blr_abort = 128 blr_block = 129 blr_error_handler = 130 blr_cast = 131 # Firebird 3.0 blr_pid2 = 132 blr_procedure2 = 133 # blr_start_savepoint = 134 blr_end_savepoint = 135 #unused codes 136..138 # Access 
plan items blr_plan = 139 blr_merge = 140 blr_join = 141 blr_sequential = 142 blr_navigational = 143 blr_indices = 144 blr_retrieve = 145 blr_relation2 = 146 blr_rid2 = 147 # unused codes 148..149 blr_set_generator = 150 blr_ansi_any = 151 blr_exists = 152 # unused codes 153 blr_record_version = 154 blr_stall = 155 # unused codes 156..157 blr_ansi_all = 158 blr_extract = 159 # these indicate directions for blr_seek and blr_find blr_continue = 0 blr_forward = 1 blr_backward = 2 blr_bof_forward = 3 blr_eof_backward = 4 # sub parameters for blr_extract blr_extract_year = 0 blr_extract_month = 1 blr_extract_day = 2 blr_extract_hour = 3 blr_extract_minute = 4 blr_extract_second = 5 blr_extract_weekday = 6 blr_extract_yearday = 7 # Added in FB 2.1 blr_extract_millisecond = 8 blr_extract_week = 9 blr_current_date = 160 blr_current_timestamp = 161 blr_current_time = 162 # Those codes reuse BLR code space blr_post_arg = 163 blr_exec_into = 164 blr_user_savepoint = 165 blr_dcl_cursor = 166 blr_cursor_stmt = 167 blr_current_timestamp2 = 168 blr_current_time2 = 169 blr_agg_list = 170 blr_agg_list_distinct = 171 blr_modify2 = 172 # unused codes 173 # FB 1.0 specific BLR blr_current_role = 174 blr_skip = 175 # FB 1.5 specific BLR blr_exec_sql = 176 blr_internal_info = 177 blr_nullsfirst = 178 blr_writelock = 179 blr_nullslast = 180 # FB 2.0 specific BLR blr_lowcase = 181 blr_strlen = 182 # sub parameter for blr_strlen blr_strlen_bit = 0 blr_strlen_char = 1 blr_strlen_octet = 2 blr_trim = 183 # first sub parameter for blr_trim blr_trim_both = 0 blr_trim_leading = 1 blr_trim_trailing = 2 # second sub parameter for blr_trim blr_trim_spaces = 0 blr_trim_characters = 1 # These codes are actions for user-defined savepoints blr_savepoint_set = 0 blr_savepoint_release = 1 blr_savepoint_undo = 2 blr_savepoint_release_single = 3 # These codes are actions for cursors blr_cursor_open = 0 blr_cursor_close = 1 blr_cursor_fetch = 2 blr_cursor_fetch_scroll = 3 # Firebird 3.0 # Scroll options (FB 3.0) blr_croll_forward = 0 blr_croll_backward = 1 blr_croll_bof = 2 blr_croll_eof = 3 blr_croll_absolute = 4 blr_croll_relative = 5 # FB 2.1 specific BLR blr_init_variable = 184 blr_recurse = 185 blr_sys_function = 186 # FB 2.5 specific BLR blr_auto_trans = 187 blr_similar = 188 blr_exec_stmt = 189 # subcodes of blr_exec_stmt blr_exec_stmt_inputs = 1 # input parameters count blr_exec_stmt_outputs = 2 # output parameters count blr_exec_stmt_sql = 3 blr_exec_stmt_proc_block = 4 blr_exec_stmt_data_src = 5 blr_exec_stmt_user = 6 blr_exec_stmt_pwd = 7 blr_exec_stmt_tran = 8 # not implemented yet blr_exec_stmt_tran_clone = 9 # make transaction parameters equal to current transaction blr_exec_stmt_privs = 10 blr_exec_stmt_in_params = 11 # not named input parameters blr_exec_stmt_in_params2 = 12 # named input parameters blr_exec_stmt_out_params = 13 # output parameters blr_exec_stmt_role = 14 blr_stmt_expr = 190 blr_derived_expr = 191 # FB 3.0 specific BLR blr_procedure3 = 192 blr_exec_proc2 = 193 blr_function2 = 194 blr_window = 195 blr_partition_by = 196 blr_continue_loop = 197 blr_procedure4 = 198 blr_agg_function = 199 blr_substring_similar = 200 blr_bool_as_value = 201 blr_coalesce = 202 blr_decode = 203 blr_exec_subproc = 204 blr_subproc_decl = 205 blr_subproc = 206 blr_subfunc_decl = 207 blr_subfunc = 208 blr_record_version2 = 209 blr_gen_id2 = 210 # NEXT VALUE FOR generator fdb-2.0.0/fdb/ibase.py0000644000175000001440000030404613270362207015105 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: 
ibase.py # DESCRIPTION: Python driver for Firebird - Python ctypes interface to Firebird client library # CREATED: 6.10.2011 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): Philippe Makowski # ______________________________________. # # See LICENSE.TXT for details. #from ctypes import * from ctypes import c_char_p, c_wchar_p, c_char, c_byte, c_ubyte, c_int, c_uint, c_short, c_ushort, \ c_long, c_ulong, c_longlong, c_ulonglong, c_void_p, c_int8, c_int16, c_int32, c_int64, c_uint8, \ c_uint16, c_uint32, c_uint64, POINTER, Structure, CFUNCTYPE, CDLL from ctypes.util import find_library import sys from locale import getpreferredencoding import types import operator import platform import os PYTHON_MAJOR_VER = sys.version_info[0] #------------------- if PYTHON_MAJOR_VER == 3: from queue import PriorityQueue def nativestr(st, charset="latin-1"): if st is None: return st elif isinstance(st, bytes): return st.decode(charset) else: return st def b(st, charset="latin-1"): if st is None: return st elif isinstance(st, bytes): return st else: try: return st.encode(charset) except UnicodeEncodeError: return st def s(st): return st ord2 = lambda x: x if isinstance(x, IntType) else ord(x) if sys.version_info[1] <= 1: def int2byte(i): return bytes((i,)) else: # This is about 2x faster than the implementation above on 3.2+ int2byte = operator.methodcaller("to_bytes", 1, "big") def mychr(i): return i mybytes = bytes myunicode = str mylong = int StringType = str IntType = int LongType = int FloatType = float ListType = list UnicodeType = str TupleType = tuple xrange = range StringTypes = str else: from Queue import PriorityQueue def nativestr(st, charset="latin-1"): if st is None: return st elif isinstance(st, unicode): return st.encode(charset) else: return st def b(st, charset="latin-1"): if st is None: return st elif isinstance(st, types.StringType): return st else: try: return st.encode(charset) except UnicodeEncodeError: return st int2byte = chr s = str ord2 = ord def mychr(i): return chr(i) mybytes = str myunicode = unicode mylong = long StringType = types.StringType IntType = types.IntType LongType = types.LongType FloatType = types.FloatType ListType = types.ListType UnicodeType = types.UnicodeType TupleType = types.TupleType xrange = xrange StringTypes = (StringType, UnicodeType) # Support routines from ctypesgen generated file. # As of ctypes 1.0, ctypes does not support custom error-checking # functions on callbacks, nor does it support custom datatypes on # callbacks, so we must ensure that all callbacks return # primitive datatypes. # # Non-primitive return values wrapped with UNCHECKED won't be # typechecked, and will be converted to c_void_p. 
def UNCHECKED(type): if (hasattr(type, "_type_") and isinstance(type._type_, str) and type._type_ != "P"): return type else: return c_void_p # ibase.h FB_API_VER = 25 MAX_BLOB_SEGMENT_SIZE = 65535 # Event queue operation (and priority) codes OP_DIE = 1 OP_RECORD_AND_REREGISTER = 2 sys_encoding = getpreferredencoding() charset_map = { # DB CHAR SET NAME : PYTHON CODEC NAME (CANONICAL) # ------------------------------------------------------------------------- None : getpreferredencoding(), 'NONE' : getpreferredencoding(), 'OCTETS' : None, # Allow to pass through unchanged. 'UNICODE_FSS' : 'utf_8', 'UTF8' : 'utf_8', # (Firebird 2.0+) 'ASCII' : 'ascii', 'SJIS_0208' : 'shift_jis', 'EUCJ_0208' : 'euc_jp', 'DOS737' : 'cp737', 'DOS437' : 'cp437', 'DOS850' : 'cp850', 'DOS865' : 'cp865', 'DOS860' : 'cp860', 'DOS863' : 'cp863', 'DOS775' : 'cp775', 'DOS862' : 'cp862', 'DOS864' : 'cp864', 'ISO8859_1' : 'iso8859_1', 'ISO8859_2' : 'iso8859_2', 'ISO8859_3' : 'iso8859_3', 'ISO8859_4' : 'iso8859_4', 'ISO8859_5' : 'iso8859_5', 'ISO8859_6' : 'iso8859_6', 'ISO8859_7' : 'iso8859_7', 'ISO8859_8' : 'iso8859_8', 'ISO8859_9' : 'iso8859_9', 'ISO8859_13' : 'iso8859_13', 'KSC_5601' : 'euc_kr', 'DOS852' : 'cp852', 'DOS857' : 'cp857', 'DOS858' : 'cp858', 'DOS861' : 'cp861', 'DOS866' : 'cp866', 'DOS869' : 'cp869', 'WIN1250' : 'cp1250', 'WIN1251' : 'cp1251', 'WIN1252' : 'cp1252', 'WIN1253' : 'cp1253', 'WIN1254' : 'cp1254', 'BIG_5' : 'big5', 'GB_2312' : 'gb2312', 'WIN1255' : 'cp1255', 'WIN1256' : 'cp1256', 'WIN1257' : 'cp1257', 'GB18030' : 'gb18030', 'GBK' : 'gbk', 'KOI8R' : 'koi8_r', # (Firebird 2.0+) 'KOI8U' : 'koi8_u', # (Firebird 2.0+) 'WIN1258' : 'cp1258', # (Firebird 2.0+) } DB_CHAR_SET_NAME_TO_PYTHON_ENCODING_MAP = charset_map # C integer limit constants SHRT_MIN = -32768 SHRT_MAX = 32767 USHRT_MAX = 65535 INT_MIN = -2147483648 INT_MAX = 2147483647 UINT_MAX = 4294967295 LONG_MIN = -9223372036854775808 LONG_MAX = 9223372036854775807 SSIZE_T_MIN = INT_MIN SSIZE_T_MAX = INT_MAX # Constants DSQL_close = 1 DSQL_drop = 2 DSQL_unprepare = 4 SQLDA_version1 = 1 # Type codes SQL_TEXT = 452 SQL_VARYING = 448 SQL_SHORT = 500 SQL_LONG = 496 SQL_FLOAT = 482 SQL_DOUBLE = 480 SQL_D_FLOAT = 530 SQL_TIMESTAMP = 510 SQL_BLOB = 520 SQL_ARRAY = 540 SQL_QUAD = 550 SQL_TYPE_TIME = 560 SQL_TYPE_DATE = 570 SQL_INT64 = 580 SQL_BOOLEAN = 32764 # Firebird 3 SQL_NULL = 32766 SUBTYPE_NUMERIC = 1 SUBTYPE_DECIMAL = 2 # Internal type codes (for example used by ARRAY descriptor) blr_text = 14 blr_text2 = 15 blr_short = 7 blr_long = 8 blr_quad = 9 blr_float = 10 blr_double = 27 blr_d_float = 11 blr_timestamp = 35 blr_varying = 37 blr_varying2 = 38 blr_blob = 261 blr_cstring = 40 blr_cstring2 = 41 blr_blob_id = 45 blr_sql_date = 12 blr_sql_time = 13 blr_int64 = 16 blr_blob2 = 17 # Added in FB 2.1 blr_domain_name = 18 blr_domain_name2 = 19 blr_not_nullable = 20 # Added in FB 2.5 blr_column_name = 21 blr_column_name2 = 22 # Added in FB 3.0 blr_bool = 23 # Rest of BLR is defined in fdb.blr # Database parameter block stuff isc_dpb_version1 = 1 isc_dpb_version2 = 2 # Firebird 3 isc_dpb_cdd_pathname = 1 isc_dpb_allocation = 2 #isc_dpb_journal = 3 isc_dpb_page_size = 4 isc_dpb_num_buffers = 5 isc_dpb_buffer_length = 6 isc_dpb_debug = 7 isc_dpb_garbage_collect = 8 isc_dpb_verify = 9 isc_dpb_sweep = 10 #isc_dpb_enable_journal = 11 #isc_dpb_disable_journal = 12 isc_dpb_dbkey_scope = 13 isc_dpb_number_of_users = 14 isc_dpb_trace = 15 isc_dpb_no_garbage_collect = 16 isc_dpb_damaged = 17 #isc_dpb_license = 18 isc_dpb_sys_user_name = 19 isc_dpb_encrypt_key = 20 
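# --- Illustrative sketch, not part of the original module: a DPB is a byte
# buffer that starts with a version tag and is followed by (code, length,
# payload) entries. Assuming the caller has already encoded the user name and
# password to bytes, the isc_dpb_* codes in this table (which continues below,
# where isc_dpb_user_name and isc_dpb_password are defined) could be packed
# like this:
def _example_build_dpb(user, password):
    # 'user' and 'password' must be bytes; the isc_dpb_* names resolve at call time.
    dpb = bytearray([isc_dpb_version1])
    for code, value in ((isc_dpb_user_name, user), (isc_dpb_password, password)):
        dpb.append(code)        # 1-byte parameter tag
        dpb.append(len(value))  # 1-byte payload length
        dpb.extend(value)       # payload itself
    return bytes(dpb)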
isc_dpb_activate_shadow = 21 isc_dpb_sweep_interval = 22 isc_dpb_delete_shadow = 23 isc_dpb_force_write = 24 isc_dpb_begin_log = 25 isc_dpb_quit_log = 26 isc_dpb_no_reserve = 27 isc_dpb_user_name = 28 isc_dpb_password = 29 isc_dpb_password_enc = 30 isc_dpb_sys_user_name_enc = 31 isc_dpb_interp = 32 isc_dpb_online_dump = 33 isc_dpb_old_file_size = 34 isc_dpb_old_num_files = 35 isc_dpb_old_file = 36 isc_dpb_old_start_page = 37 isc_dpb_old_start_seqno = 38 isc_dpb_old_start_file = 39 #isc_dpb_drop_walfile = 40 isc_dpb_old_dump_id = 41 #isc_dpb_wal_backup_dir = 42 #isc_dpb_wal_chkptlen = 43 #isc_dpb_wal_numbufs = 44 #isc_dpb_wal_bufsize = 45 #isc_dpb_wal_grp_cmt_wait = 46 isc_dpb_lc_messages = 47 isc_dpb_lc_ctype = 48 isc_dpb_cache_manager = 49 isc_dpb_shutdown = 50 isc_dpb_online = 51 isc_dpb_shutdown_delay = 52 isc_dpb_reserved = 53 isc_dpb_overwrite = 54 isc_dpb_sec_attach = 55 #isc_dpb_disable_wal = 56 isc_dpb_connect_timeout = 57 isc_dpb_dummy_packet_interval = 58 isc_dpb_gbak_attach = 59 isc_dpb_sql_role_name = 60 isc_dpb_set_page_buffers = 61 isc_dpb_working_directory = 62 isc_dpb_sql_dialect = 63 isc_dpb_set_db_readonly = 64 isc_dpb_set_db_sql_dialect = 65 isc_dpb_gfix_attach = 66 isc_dpb_gstat_attach = 67 isc_dpb_set_db_charset = 68 isc_dpb_gsec_attach = 69 # Deprecated in FB3 isc_dpb_address_path = 70 # Added in FB 2.1 isc_dpb_process_id = 71 isc_dpb_no_db_triggers = 72 isc_dpb_trusted_auth = 73 isc_dpb_process_name = 74 # Added in FB 2.5 isc_dpb_trusted_role = 75 isc_dpb_org_filename = 76 isc_dpb_utf8_filename = 77 isc_dpb_ext_call_depth = 78 # Added in FB 3.0 isc_dpb_auth_block = 79 isc_dpb_remote_protocol = 81 isc_dpb_client_version = 80 isc_dpb_host_name = 82 isc_dpb_os_user = 83 isc_dpb_specific_auth_data = 84 isc_dpb_auth_plugin_list = 85 isc_dpb_auth_plugin_name = 86 isc_dpb_config = 87 isc_dpb_nolinger = 88 isc_dpb_reset_icu = 89 isc_dpb_map_attach = 90 # structural codes isc_info_end = 1 isc_info_truncated = 2 isc_info_error = 3 isc_info_data_not_ready = 4 isc_info_length = 126 isc_info_flag_end = 127 isc_info_req_select_count = 13 isc_info_req_insert_count = 14 isc_info_req_update_count = 15 isc_info_req_delete_count = 16 # DB Info item codes isc_info_db_id = 4 # [db_filename,site_name[,site_name...]] isc_info_reads = 5 # number of page reads isc_info_writes = 6 # number of page writes isc_info_fetches = 7 # number of reads from the memory buffer cache isc_info_marks = 8 # number of writes to the memory buffer cache isc_info_implementation = 11 # (implementation code, implementation class) isc_info_isc_version = 12 # interbase server version identification string isc_info_base_level = 13 # capability version of the server isc_info_page_size = 14 isc_info_num_buffers = 15 # number of memory buffers currently allocated isc_info_limbo = 16 isc_info_current_memory = 17 # amount of server memory (in bytes) currently in use isc_info_max_memory = 18 # maximum amount of memory (in bytes) used at one time since the first process attached to the database # Obsolete 19-20 isc_info_allocation = 21 # number of last database page allocated isc_info_attachment_id = 22 # attachment id number # all *_count codes below return {[table_id]=operation_count,...}; table IDs are in the system table RDB$RELATIONS. 
isc_info_read_seq_count = 23 # number of sequential table scans (row reads) done on each table since the database was last attached isc_info_read_idx_count = 24 # number of reads done via an index since the database was last attached isc_info_insert_count = 25 # number of inserts into the database since the database was last attached isc_info_update_count = 26 # number of database updates since the database was last attached isc_info_delete_count = 27 # number of database deletes since the database was last attached isc_info_backout_count = 28 # number of removals of a version of a record isc_info_purge_count = 29 # number of removals of old versions of fully mature records (records that are committed, so that older ancestor versions are no longer needed) isc_info_expunge_count = 30 # number of removals of a record and all of its ancestors, for records whose deletions have been committed isc_info_sweep_interval = 31 # number of transactions that are committed between sweeps to remove database record versions that are no longer needed isc_info_ods_version = 32 # On-disk structure (ODS) major version number isc_info_ods_minor_version = 33 # On-disk structure (ODS) minor version number isc_info_no_reserve = 34 # 20% page space reservation for holding backup versions of modified records: 0=yes, 1=no # Obsolete 35-51 isc_info_forced_writes = 52 # mode in which database writes are performed: 0=sync, 1=async isc_info_user_names = 53 # array of names of all the users currently attached to the database isc_info_page_errors = 54 # number of page level errors validate found isc_info_record_errors = 55 # number of record level errors validate found isc_info_bpage_errors = 56 # number of blob page errors validate found isc_info_dpage_errors = 57 # number of data page errors validate found isc_info_ipage_errors = 58 # number of index page errors validate found isc_info_ppage_errors = 59 # number of pointer page errors validate found isc_info_tpage_errors = 60 # number of transaction page errors validate found isc_info_set_page_buffers = 61 # number of memory buffers that should be allocated isc_info_db_sql_dialect = 62 # dialect of currently attached database isc_info_db_read_only = 63 # whether the database is read-only (1) or not (0) isc_info_db_size_in_pages = 64 # number of allocated pages # Values 65-100 unused to avoid conflict with InterBase frb_info_att_charset = 101 # charset of current attachment isc_info_db_class = 102 # server architecture isc_info_firebird_version = 103 # firebird server version identification string isc_info_oldest_transaction = 104 # ID of oldest transaction isc_info_oldest_active = 105 # ID of oldest active transaction isc_info_oldest_snapshot = 106 # ID of oldest snapshot transaction isc_info_next_transaction = 107 # ID of next transaction isc_info_db_provider = 108 # for firebird is 'isc_info_db_code_firebird' isc_info_active_transactions = 109 # array of active transaction IDs isc_info_active_tran_count = 110 # number of active transactions isc_info_creation_date = 111 # time_t struct representing database creation date & time isc_info_db_file_size = 112 # added in FB 2.1, nbackup-related - size (in pages) of locked db fb_info_page_contents = 113 # added in FB 2.5, get raw page contents; takes page_number as parameter; # Added in Firebird 3.0 fb_info_implementation = 114 # (cpu code, OS code, compiler code, flags, implementation class) fb_info_page_warns = 115 # number of page level warnings validate found fb_info_record_warns = 116 # number of record level
warnings validate found fb_info_bpage_warns = 117 # number of blob page level warnings validate found fb_info_dpage_warns = 118 # number of data page level warnings validate found fb_info_ipage_warns = 119 # number of index page level warnings validate found fb_info_ppage_warns = 120 # number of pointer page level warnings validate found fb_info_tpage_warns = 121 # number of transaction page level warnings validate found fb_info_pip_errors = 122 # number of pip page level errors validate found fb_info_pip_warns = 123 # number of pip page level warnings validate found isc_info_db_last_value = (fb_info_pip_warns + 1) isc_info_version = isc_info_isc_version # flags set in fb_info_crypt_state fb_info_crypt_encrypted = 0x01 fb_info_crypt_process = 0x02 # Blob information items isc_info_blob_num_segments = 4 isc_info_blob_max_segment = 5 isc_info_blob_total_length = 6 isc_info_blob_type = 7 # Transaction information items isc_info_tra_id = 4 # current tran ID number isc_info_tra_oldest_interesting = 5 # oldest interesting tran ID when current tran started isc_info_tra_oldest_snapshot = 6 # min. tran ID of tra_oldest_active isc_info_tra_oldest_active = 7 # oldest active tran ID when current tran started isc_info_tra_isolation = 8 # pair: {one of isc_info_tra_isolation_flags, [one of isc_info_tra_read_committed_flags]} isc_info_tra_access = 9 # 'isc_info_tra_readonly' or 'isc_info_tra_readwrite' isc_info_tra_lock_timeout = 10 # lock timeout value # Firebird 3.0 fb_info_tra_dbpath = 11 # db filename for current transaction # isc_info_tra_isolation responses isc_info_tra_consistency = 1 isc_info_tra_concurrency = 2 isc_info_tra_read_committed = 3 # isc_info_tra_read_committed options isc_info_tra_no_rec_version = 0 isc_info_tra_rec_version = 1 # isc_info_tra_access responses isc_info_tra_readonly = 0 isc_info_tra_readwrite = 1 # SQL information items isc_info_sql_select = 4 isc_info_sql_bind = 5 isc_info_sql_num_variables = 6 isc_info_sql_describe_vars = 7 isc_info_sql_describe_end = 8 isc_info_sql_sqlda_seq = 9 isc_info_sql_message_seq = 10 isc_info_sql_type = 11 isc_info_sql_sub_type = 12 isc_info_sql_scale = 13 isc_info_sql_length = 14 isc_info_sql_null_ind = 15 isc_info_sql_field = 16 isc_info_sql_relation = 17 isc_info_sql_owner = 18 isc_info_sql_alias = 19 isc_info_sql_sqlda_start = 20 isc_info_sql_stmt_type = 21 isc_info_sql_get_plan = 22 isc_info_sql_records = 23 isc_info_sql_batch_fetch = 24 isc_info_sql_relation_alias = 25 # Added in Firebird 3.0 isc_info_sql_explain_plan = 26 isc_info_sql_stmt_flags = 27 # SQL information return values isc_info_sql_stmt_select = 1 isc_info_sql_stmt_insert = 2 isc_info_sql_stmt_update = 3 isc_info_sql_stmt_delete = 4 isc_info_sql_stmt_ddl = 5 isc_info_sql_stmt_get_segment = 6 isc_info_sql_stmt_put_segment = 7 isc_info_sql_stmt_exec_procedure = 8 isc_info_sql_stmt_start_trans = 9 isc_info_sql_stmt_commit = 10 isc_info_sql_stmt_rollback = 11 isc_info_sql_stmt_select_for_upd = 12 isc_info_sql_stmt_set_generator = 13 isc_info_sql_stmt_savepoint = 14 # Transaction parameter block stuff isc_tpb_version1 = 1 isc_tpb_version3 = 3 isc_tpb_consistency = 1 isc_tpb_concurrency = 2 isc_tpb_shared = 3 isc_tpb_protected = 4 isc_tpb_exclusive = 5 isc_tpb_wait = 6 isc_tpb_nowait = 7 isc_tpb_read = 8 isc_tpb_write = 9 isc_tpb_lock_read = 10 isc_tpb_lock_write = 11 isc_tpb_verb_time = 12 isc_tpb_commit_time = 13 isc_tpb_ignore_limbo = 14 isc_tpb_read_committed = 15 isc_tpb_autocommit = 16 isc_tpb_rec_version = 17 isc_tpb_no_rec_version = 18 isc_tpb_restart_requests = 19
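# --- Illustrative sketch, not part of the original module: unlike the DPB, a
# basic TPB is just the version tag followed by bare option codes (no length
# bytes). A READ COMMITTED read/write transaction that does not wait on locks
# could be described with the constants defined above as:
_EXAMPLE_TPB = bytes(bytearray([isc_tpb_version3, isc_tpb_write,
                                isc_tpb_read_committed, isc_tpb_rec_version,
                                isc_tpb_nowait]))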
isc_tpb_no_auto_undo = 20 isc_tpb_lock_timeout = 21 # BLOB parameter buffer isc_bpb_version1 = 1 isc_bpb_source_type = 1 isc_bpb_target_type = 2 isc_bpb_type = 3 isc_bpb_source_interp = 4 isc_bpb_target_interp = 5 isc_bpb_filter_parameter = 6 # Added in FB 2.1 isc_bpb_storage = 7 isc_bpb_type_segmented = 0 isc_bpb_type_stream = 1 # Added in FB 2.1 isc_bpb_storage_main = 0 isc_bpb_storage_temp = 2 # BLOB codes isc_segment = 335544366 isc_segstr_eof = 335544367 # Services API # Service parameter block stuff isc_spb_current_version = 2 isc_spb_version = isc_spb_current_version isc_spb_version3 = 3 # Firebird 3.0 isc_spb_user_name = isc_dpb_user_name isc_spb_sys_user_name = isc_dpb_sys_user_name isc_spb_sys_user_name_enc = isc_dpb_sys_user_name_enc isc_spb_password = isc_dpb_password isc_spb_password_enc = isc_dpb_password_enc isc_spb_command_line = 105 isc_spb_dbname = 106 isc_spb_verbose = 107 isc_spb_options = 108 isc_spb_address_path = 109 # Added in FB 2.1 isc_spb_process_id = 110 isc_spb_trusted_auth = 111 isc_spb_process_name = 112 # Added in FB 2.5 isc_spb_trusted_role = 113 # Added in FB 3.0 isc_spb_verbint = 114 isc_spb_auth_block = 115 isc_spb_auth_plugin_name = 116 isc_spb_auth_plugin_list = 117 isc_spb_utf8_filename = 118 isc_spb_client_version = 119 isc_spb_remote_protocol = 120 isc_spb_host_name = 121 isc_spb_os_user = 122 isc_spb_config = 123 isc_spb_expected_db = 124 # This will not be used in protocol 13, therefore may be reused isc_spb_specific_auth_data = isc_spb_trusted_auth # Service action items isc_action_svc_backup = 1 # Starts database backup process on the server isc_action_svc_restore = 2 # Starts database restore process on the server isc_action_svc_repair = 3 # Starts database repair process on the server isc_action_svc_add_user = 4 # Adds a new user to the security database isc_action_svc_delete_user = 5 # Deletes a user record from the security database isc_action_svc_modify_user = 6 # Modifies a user record in the security database isc_action_svc_display_user = 7 # Displays a user record from the security database isc_action_svc_properties = 8 # Sets database properties isc_action_svc_add_license = 9 # Adds a license to the license file isc_action_svc_remove_license = 10 # Removes a license from the license file isc_action_svc_db_stats = 11 # Retrieves database statistics isc_action_svc_get_ib_log = 12 # Retrieves the InterBase log file from the server isc_action_svc_get_fb_log = 12 # Retrieves the Firebird log file from the server # Added in FB 2.5 isc_action_svc_nbak = 20 isc_action_svc_nrest = 21 isc_action_svc_trace_start = 22 isc_action_svc_trace_stop = 23 isc_action_svc_trace_suspend = 24 isc_action_svc_trace_resume = 25 isc_action_svc_trace_list = 26 isc_action_svc_set_mapping = 27 isc_action_svc_drop_mapping = 28 isc_action_svc_display_user_adm = 29 isc_action_svc_validate = 30 # Firebird 3.0 isc_action_svc_last = 31 # Service information items isc_info_svc_svr_db_info = 50 # Retrieves the number of attachments and databases */ isc_info_svc_get_config = 53 # Retrieves the parameters and values for IB_CONFIG */ isc_info_svc_version = 54 # Retrieves the version of the services manager */ isc_info_svc_server_version = 55 # Retrieves the version of the Firebird server */ isc_info_svc_implementation = 56 # Retrieves the implementation of the Firebird server */ isc_info_svc_capabilities = 57 # Retrieves a bitmask representing the server's capabilities */ isc_info_svc_user_dbpath = 58 # Retrieves the path to the security database in use by the server */ 
isc_info_svc_get_env = 59 # Retrieves the setting of $FIREBIRD */ isc_info_svc_get_env_lock = 60 # Retrieves the setting of $FIREBIRD_LCK */ isc_info_svc_get_env_msg = 61 # Retrieves the setting of $FIREBIRD_MSG */ isc_info_svc_line = 62 # Retrieves 1 line of service output per call */ isc_info_svc_to_eof = 63 # Retrieves as much of the server output as will fit in the supplied buffer */ isc_info_svc_timeout = 64 # Sets / signifies a timeout value for reading service information */ isc_info_svc_limbo_trans = 66 # Retrieve the limbo transactions */ isc_info_svc_running = 67 # Checks to see if a service is running on an attachment */ isc_info_svc_get_users = 68 # Returns the user information from isc_action_svc_display_users */ isc_info_svc_auth_block = 69 # FB 3.0: Sets authentication block for service query() call */ isc_info_svc_stdin = 78 # Returns maximum size of data, needed as stdin for service */ # Parameters for isc_action_{add|del|mod|disp)_user isc_spb_sec_userid = 5 isc_spb_sec_groupid = 6 isc_spb_sec_username = 7 isc_spb_sec_password = 8 isc_spb_sec_groupname = 9 isc_spb_sec_firstname = 10 isc_spb_sec_middlename = 11 isc_spb_sec_lastname = 12 isc_spb_sec_admin = 13 # Parameters for isc_action_svc_backup isc_spb_bkp_file = 5 isc_spb_bkp_factor = 6 isc_spb_bkp_length = 7 isc_spb_bkp_skip_data = 8 # Firebird 3.0 isc_spb_bkp_stat = 15 # Firebird 2.5 isc_spb_bkp_ignore_checksums = 0x01 isc_spb_bkp_ignore_limbo = 0x02 isc_spb_bkp_metadata_only = 0x04 isc_spb_bkp_no_garbage_collect = 0x08 isc_spb_bkp_old_descriptions = 0x10 isc_spb_bkp_non_transportable = 0x20 isc_spb_bkp_convert = 0x40 isc_spb_bkp_expand = 0x80 isc_spb_bkp_no_triggers = 0x8000 # Parameters for isc_action_svc_properties isc_spb_prp_page_buffers = 5 isc_spb_prp_sweep_interval = 6 isc_spb_prp_shutdown_db = 7 isc_spb_prp_deny_new_attachments = 9 isc_spb_prp_deny_new_transactions = 10 isc_spb_prp_reserve_space = 11 isc_spb_prp_write_mode = 12 isc_spb_prp_access_mode = 13 isc_spb_prp_set_sql_dialect = 14 isc_spb_prp_activate = 0x0100 isc_spb_prp_db_online = 0x0200 isc_spb_prp_nolinger = 0x0400 # Firebird 3.0 isc_spb_prp_force_shutdown = 41 isc_spb_prp_attachments_shutdown = 42 isc_spb_prp_transactions_shutdown = 43 isc_spb_prp_shutdown_mode = 44 isc_spb_prp_online_mode = 45 # Parameters for isc_spb_prp_shutdown_mode and isc_spb_prp_online_mode isc_spb_prp_sm_normal = 0 isc_spb_prp_sm_multi = 1 isc_spb_prp_sm_single = 2 isc_spb_prp_sm_full = 3 # Parameters for isc_spb_prp_reserve_space isc_spb_prp_res_use_full = 35 isc_spb_prp_res = 36 # Parameters for isc_spb_prp_write_mode isc_spb_prp_wm_async = 37 isc_spb_prp_wm_sync = 38 # Parameters for isc_action_svc_validate isc_spb_val_tab_incl = 1 isc_spb_val_tab_excl = 2 isc_spb_val_idx_incl = 3 isc_spb_val_idx_excl = 4 isc_spb_val_lock_timeout = 5 # Parameters for isc_spb_prp_access_mode isc_spb_prp_am_readonly = 39 isc_spb_prp_am_readwrite = 40 # Parameters for isc_action_svc_repair isc_spb_rpr_commit_trans = 15 isc_spb_rpr_rollback_trans = 34 isc_spb_rpr_recover_two_phase = 17 isc_spb_tra_id = 18 isc_spb_single_tra_id = 19 isc_spb_multi_tra_id = 20 isc_spb_tra_state = 21 isc_spb_tra_state_limbo = 22 isc_spb_tra_state_commit = 23 isc_spb_tra_state_rollback = 24 isc_spb_tra_state_unknown = 25 isc_spb_tra_host_site = 26 isc_spb_tra_remote_site = 27 isc_spb_tra_db_path = 28 isc_spb_tra_advise = 29 isc_spb_tra_advise_commit = 30 isc_spb_tra_advise_rollback = 31 isc_spb_tra_advise_unknown = 33 # Added in Firebird 3.0 isc_spb_tra_id_64 = 46 isc_spb_single_tra_id_64 = 47 
isc_spb_multi_tra_id_64 = 48 isc_spb_rpr_commit_trans_64 = 49 isc_spb_rpr_rollback_trans_64 = 50 isc_spb_rpr_recover_two_phase_64 = 51 isc_spb_rpr_validate_db = 0x01 isc_spb_rpr_sweep_db = 0x02 isc_spb_rpr_mend_db = 0x04 isc_spb_rpr_list_limbo_trans = 0x08 isc_spb_rpr_check_db = 0x10 isc_spb_rpr_ignore_checksum = 0x20 isc_spb_rpr_kill_shadows = 0x40 isc_spb_rpr_full = 0x80 isc_spb_rpr_icu = 0x0800 # Firebird 3.0 # Parameters for isc_action_svc_restore isc_spb_res_skip_data = isc_spb_bkp_skip_data # Firebird 3.0 isc_spb_res_buffers = 9 isc_spb_res_page_size = 10 isc_spb_res_length = 11 isc_spb_res_access_mode = 12 isc_spb_res_fix_fss_data = 13 isc_spb_res_fix_fss_metadata = 14 isc_spb_res_stat = 15 # Firebird 3.0 isc_spb_res_metadata_only = 0x04 isc_spb_res_deactivate_idx = 0x0100 isc_spb_res_no_shadow = 0x0200 isc_spb_res_no_validity = 0x0400 isc_spb_res_one_at_a_time = 0x0800 isc_spb_res_replace = 0x1000 isc_spb_res_create = 0x2000 isc_spb_res_use_all_space = 0x4000 # Parameters for isc_spb_res_access_mode isc_spb_res_am_readonly = isc_spb_prp_am_readonly isc_spb_res_am_readwrite = isc_spb_prp_am_readwrite # Parameters for isc_info_svc_svr_db_info isc_spb_num_att = 5 isc_spb_num_db = 6 # Parameters for isc_info_svc_db_stats isc_spb_sts_data_pages = 0x01 isc_spb_sts_db_log = 0x02 isc_spb_sts_hdr_pages = 0x04 isc_spb_sts_idx_pages = 0x08 isc_spb_sts_sys_relations = 0x10 isc_spb_sts_record_versions = 0x20 isc_spb_sts_table = 0x40 isc_spb_sts_nocreation = 0x80 isc_spb_sts_encryption = 0x100 # Firebird 3.0 # Parameters for isc_action_svc_nbak isc_spb_nbk_level = 5 isc_spb_nbk_file = 6 isc_spb_nbk_direct = 7 isc_spb_nbk_no_triggers = 0x01 # trace isc_spb_trc_id = 1 isc_spb_trc_name = 2 isc_spb_trc_cfg = 3 #------------------- STRING = c_char_p WSTRING = c_wchar_p blb_got_eof = 0 blb_got_fragment = -1 blb_got_full_segment = 1 blb_seek_relative = 1 blb_seek_from_tail = 2 # Implementation codes isc_info_db_impl_rdb_vms = 1 isc_info_db_impl_rdb_eln = 2 isc_info_db_impl_rdb_eln_dev = 3 isc_info_db_impl_rdb_vms_y = 4 isc_info_db_impl_rdb_eln_y = 5 isc_info_db_impl_jri = 6 isc_info_db_impl_jsv = 7 isc_info_db_impl_isc_apl_68K = 25 isc_info_db_impl_isc_vax_ultr = 26 isc_info_db_impl_isc_vms = 27 isc_info_db_impl_isc_sun_68k = 28 isc_info_db_impl_isc_os2 = 29 isc_info_db_impl_isc_sun4 = 30 isc_info_db_impl_isc_hp_ux = 31 isc_info_db_impl_isc_sun_386i = 32 isc_info_db_impl_isc_vms_orcl = 33 isc_info_db_impl_isc_mac_aux = 34 isc_info_db_impl_isc_rt_aix = 35 isc_info_db_impl_isc_mips_ult = 36 isc_info_db_impl_isc_xenix = 37 isc_info_db_impl_isc_dg = 38 isc_info_db_impl_isc_hp_mpexl = 39 isc_info_db_impl_isc_hp_ux68K = 40 isc_info_db_impl_isc_sgi = 41 isc_info_db_impl_isc_sco_unix = 42 isc_info_db_impl_isc_cray = 43 isc_info_db_impl_isc_imp = 44 isc_info_db_impl_isc_delta = 45 isc_info_db_impl_isc_next = 46 isc_info_db_impl_isc_dos = 47 isc_info_db_impl_m88K = 48 isc_info_db_impl_unixware = 49 isc_info_db_impl_isc_winnt_x86 = 50 isc_info_db_impl_isc_epson = 51 isc_info_db_impl_alpha_osf = 52 isc_info_db_impl_alpha_vms = 53 isc_info_db_impl_netware_386 = 54 isc_info_db_impl_win_only = 55 isc_info_db_impl_ncr_3000 = 56 isc_info_db_impl_winnt_ppc = 57 isc_info_db_impl_dg_x86 = 58 isc_info_db_impl_sco_ev = 59 isc_info_db_impl_i386 = 60 isc_info_db_impl_freebsd = 61 isc_info_db_impl_netbsd = 62 isc_info_db_impl_darwin_ppc = 63 isc_info_db_impl_sinixz = 64 isc_info_db_impl_linux_sparc = 65 isc_info_db_impl_linux_amd64 = 66 isc_info_db_impl_freebsd_amd64 = 67 isc_info_db_impl_winnt_amd64 = 68 
isc_info_db_impl_linux_ppc = 69 isc_info_db_impl_darwin_x86 = 70 isc_info_db_impl_linux_mipsel = 71 # changed in 2.1, it was isc_info_db_impl_sun_amd64 in 2.0 # Added in FB 2.1 isc_info_db_impl_linux_mips = 72 isc_info_db_impl_darwin_x64 = 73 isc_info_db_impl_sun_amd64 = 74 isc_info_db_impl_linux_arm = 75 isc_info_db_impl_linux_ia64 = 76 isc_info_db_impl_darwin_ppc64 = 77 isc_info_db_impl_linux_s390x = 78 isc_info_db_impl_linux_s390 = 79 isc_info_db_impl_linux_sh = 80 isc_info_db_impl_linux_sheb = 81 # Added in FB 2.5 isc_info_db_impl_linux_hppa = 82 isc_info_db_impl_linux_alpha = 83 isc_info_db_impl_linux_arm64 = 84 isc_info_db_impl_linux_ppc64el = 85 isc_info_db_impl_linux_ppc64 = 86 # Firebird 3.0 isc_info_db_impl_last_value = (isc_info_db_impl_linux_ppc64 + 1) # Info DB provider isc_info_db_code_rdb_eln = 1 isc_info_db_code_rdb_vms = 2 isc_info_db_code_interbase = 3 isc_info_db_code_firebird = 4 isc_info_db_code_last_value = (isc_info_db_code_firebird+1) # Info db class isc_info_db_class_access = 1 isc_info_db_class_y_valve = 2 isc_info_db_class_rem_int = 3 isc_info_db_class_rem_srvr = 4 isc_info_db_class_pipe_int = 7 isc_info_db_class_pipe_srvr = 8 isc_info_db_class_sam_int = 9 isc_info_db_class_sam_srvr = 10 isc_info_db_class_gateway = 11 isc_info_db_class_cache = 12 isc_info_db_class_classic_access = 13 isc_info_db_class_server_access = 14 isc_info_db_class_last_value = (isc_info_db_class_server_access+1) # Request information items isc_info_number_messages = 4 isc_info_max_message = 5 isc_info_max_send = 6 isc_info_max_receive = 7 isc_info_state = 8 isc_info_message_number = 9 isc_info_message_size = 10 isc_info_request_cost = 11 isc_info_access_path = 12 isc_info_req_select_count = 13 isc_info_req_insert_count = 14 isc_info_req_update_count = 15 isc_info_req_delete_count = 16 # Access path items isc_info_rsb_end = 0 isc_info_rsb_begin = 1 isc_info_rsb_type = 2 isc_info_rsb_relation = 3 isc_info_rsb_plan = 4 # RecordSource (RSB) types isc_info_rsb_unknown = 1 isc_info_rsb_indexed = 2 isc_info_rsb_navigate = 3 isc_info_rsb_sequential = 4 isc_info_rsb_cross = 5 isc_info_rsb_sort = 6 isc_info_rsb_first = 7 isc_info_rsb_boolean = 8 isc_info_rsb_union = 9 isc_info_rsb_aggregate = 10 isc_info_rsb_merge = 11 isc_info_rsb_ext_sequential = 12 isc_info_rsb_ext_indexed = 13 isc_info_rsb_ext_dbkey = 14 isc_info_rsb_left_cross = 15 isc_info_rsb_select = 16 isc_info_rsb_sql_join = 17 isc_info_rsb_simulate = 18 isc_info_rsb_sim_cross = 19 isc_info_rsb_once = 20 isc_info_rsb_procedure = 21 isc_info_rsb_skip = 22 isc_info_rsb_virt_sequential = 23 isc_info_rsb_recursive = 24 # Firebird 3.0 isc_info_rsb_window = 25 isc_info_rsb_singular = 26 isc_info_rsb_writelock = 27 isc_info_rsb_buffer = 28 isc_info_rsb_hash = 29 # Bitmap expressions isc_info_rsb_and = 1 isc_info_rsb_or = 2 isc_info_rsb_dbkey = 3 isc_info_rsb_index = 4 isc_info_req_active = 2 isc_info_req_inactive = 3 isc_info_req_send = 4 isc_info_req_receive = 5 isc_info_req_select = 6 isc_info_req_sql_stall = 7 # Blob Subtypes isc_blob_untyped = 0 # internal subtypes isc_blob_text = 1 isc_blob_blr = 2 isc_blob_acl = 3 isc_blob_ranges = 4 isc_blob_summary = 5 isc_blob_format = 6 isc_blob_tra = 7 isc_blob_extfile = 8 isc_blob_debug_info = 9 isc_blob_max_predefined_subtype = 10 # Masks for fb_shutdown_callback fb_shut_confirmation = 1 fb_shut_preproviders = 2 fb_shut_postproviders = 4 fb_shut_finish = 8 fb_shut_exit = 16 # Firebird 3.0 # Shutdown reasons, used by engine # Users should provide positive values fb_shutrsn_svc_stopped = -1 
fb_shutrsn_no_connection = -2 fb_shutrsn_app_stopped = -3 fb_shutrsn_device_removed = -4 # Not used by FB 3.0 fb_shutrsn_signal = -5 fb_shutrsn_services = -6 fb_shutrsn_exit_called = -7 # Cancel types for fb_cancel_operation fb_cancel_disable = 1 fb_cancel_enable = 2 fb_cancel_raise = 3 fb_cancel_abort = 4 # Debug information items fb_dbg_version = 1 fb_dbg_end = 255 fb_dbg_map_src2blr = 2 fb_dbg_map_varname = 3 fb_dbg_map_argument = 4 # Firebird 3.0 fb_dbg_subproc = 5 fb_dbg_subfunc = 6 fb_dbg_map_curname = 7 # sub code for fb_dbg_map_argument fb_dbg_arg_input = 0 fb_dbg_arg_output = 1 FB_API_HANDLE = c_uint if platform.architecture() == ('64bit', 'WindowsPE'): intptr_t = c_longlong uintptr_t = c_ulonglong else: intptr_t = c_long uintptr_t = c_ulong ISC_STATUS = intptr_t ISC_STATUS_PTR = POINTER(ISC_STATUS) ISC_STATUS_ARRAY = ISC_STATUS * 20 FB_SQLSTATE_STRING = c_char * (5 + 1) ISC_LONG = c_int ISC_ULONG = c_uint ISC_SHORT = c_short ISC_USHORT = c_ushort ISC_UCHAR = c_ubyte ISC_SCHAR = c_char ISC_INT64 = c_longlong ISC_UINT64 = c_ulonglong ISC_DATE = c_int ISC_TIME = c_uint ISC_TRUE = 1 ISC_FALSE = 0 class ISC_TIMESTAMP(Structure): pass ISC_TIMESTAMP._fields_ = [ ('timestamp_date', ISC_DATE), ('timestamp_time', ISC_TIME), ] class GDS_QUAD_t(Structure): pass GDS_QUAD_t._fields_ = [ ('gds_quad_high', ISC_LONG), ('gds_quad_low', ISC_ULONG), ] GDS_QUAD = GDS_QUAD_t ISC_QUAD = GDS_QUAD_t isc_att_handle = FB_API_HANDLE isc_blob_handle = FB_API_HANDLE isc_db_handle = FB_API_HANDLE isc_req_handle = FB_API_HANDLE isc_stmt_handle = FB_API_HANDLE isc_svc_handle = FB_API_HANDLE isc_tr_handle = FB_API_HANDLE isc_resv_handle = ISC_LONG FB_SHUTDOWN_CALLBACK = CFUNCTYPE(UNCHECKED(c_int), c_int, c_int, POINTER(None)) ISC_CALLBACK = CFUNCTYPE(None) ISC_PRINT_CALLBACK = CFUNCTYPE(None, c_void_p, c_short, STRING) ISC_VERSION_CALLBACK = CFUNCTYPE(None, c_void_p, STRING) ISC_EVENT_CALLBACK = CFUNCTYPE(None, POINTER(ISC_UCHAR), c_ushort, POINTER(ISC_UCHAR)) class ISC_ARRAY_BOUND(Structure): pass ISC_ARRAY_BOUND._fields_ = [ ('array_bound_lower', c_short), ('array_bound_upper', c_short), ] class ISC_ARRAY_DESC(Structure): pass ISC_ARRAY_DESC._fields_ = [ ('array_desc_dtype', ISC_UCHAR), ('array_desc_scale', ISC_UCHAR), ## was ISC_SCHAR), ('array_desc_length', c_ushort), ('array_desc_field_name', ISC_SCHAR * 32), ('array_desc_relation_name', ISC_SCHAR * 32), ('array_desc_dimensions', c_short), ('array_desc_flags', c_short), ('array_desc_bounds', ISC_ARRAY_BOUND * 16), ] class ISC_BLOB_DESC(Structure): pass ISC_BLOB_DESC._fields_ = [ ('blob_desc_subtype', c_short), ('blob_desc_charset', c_short), ('blob_desc_segment_size', c_short), ('blob_desc_field_name', ISC_UCHAR * 32), ('blob_desc_relation_name', ISC_UCHAR * 32), ] class isc_blob_ctl(Structure): pass isc_blob_ctl._fields_ = [ ('ctl_source', CFUNCTYPE(ISC_STATUS)), ('ctl_source_handle', POINTER(isc_blob_ctl)), ('ctl_to_sub_type', c_short), ('ctl_from_sub_type', c_short), ('ctl_buffer_length', c_ushort), ('ctl_segment_length', c_ushort), ('ctl_bpb_length', c_ushort), ('ctl_bpb', STRING), ('ctl_buffer', POINTER(ISC_UCHAR)), ('ctl_max_segment', ISC_LONG), ('ctl_number_segments', ISC_LONG), ('ctl_total_length', ISC_LONG), ('ctl_status', POINTER(ISC_STATUS)), ('ctl_data', c_long * 8), ] ISC_BLOB_CTL = POINTER(isc_blob_ctl) class bstream(Structure): pass bstream._fields_ = [ ('bstr_blob', isc_blob_handle), ('bstr_buffer', POINTER(c_char)), # STRING ('bstr_ptr', POINTER(c_char)), # STRING ('bstr_length', c_short), ('bstr_cnt', c_short), ('bstr_mode', c_char), ] 
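# --- Illustrative sketch, not part of the original module: every isc_* call
# exposed by the fbclient_API class below takes an ISC_STATUS_ARRAY as its
# first argument. By the status-vector convention, the call failed when the
# first element is 1 (isc_arg_gds) and the second element holds a nonzero
# error code, so a caller-side check could look like this:
def _example_status_has_error(status):
    # 'status' is an ISC_STATUS_ARRAY previously passed to an isc_* function.
    return status[0] == 1 and status[1] != 0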
BSTREAM = bstream FB_BLOB_STREAM = POINTER(bstream) # values for enumeration 'blob_lseek_mode' blob_lseek_mode = c_int # enum # values for enumeration 'blob_get_result' blob_get_result = c_int # enum class blobcallback(Structure): pass blobcallback._fields_ = [ ('blob_get_segment', CFUNCTYPE(c_short, c_void_p, POINTER(ISC_UCHAR), c_ushort, POINTER(ISC_USHORT))), ('blob_handle', c_void_p), ('blob_number_segments', ISC_LONG), ('blob_max_segment', ISC_LONG), ('blob_total_length', ISC_LONG), ('blob_put_segment', CFUNCTYPE(None, c_void_p, POINTER(ISC_UCHAR), c_ushort)), ('blob_lseek', CFUNCTYPE(ISC_LONG, c_void_p, c_ushort, c_int)), ] BLOBCALLBACK = POINTER(blobcallback) class paramdsc(Structure): pass paramdsc._fields_ = [ ('dsc_dtype', ISC_UCHAR), ('dsc_scale', c_byte), ('dsc_length', ISC_USHORT), ('dsc_sub_type', c_short), ('dsc_flags', ISC_USHORT), ('dsc_address', POINTER(ISC_UCHAR)), ] PARAMDSC = paramdsc class paramvary(Structure): pass paramvary._fields_ = [ ('vary_length', ISC_USHORT), ('vary_string', ISC_UCHAR * 1), ] PARAMVARY = paramvary class ISC_TEB(Structure): pass ISC_TEB._fields_ = [ ('db_ptr', POINTER(isc_db_handle)), ('tpb_len', ISC_SHORT), ('tpb_ptr', STRING) ] class XSQLVAR(Structure): pass XSQLVAR._fields_ = [ ('sqltype', ISC_SHORT), ('sqlscale', ISC_SHORT), ('sqlsubtype', ISC_SHORT), ('sqllen', ISC_SHORT), ('sqldata', POINTER(c_char)), # STRING), ('sqlind', POINTER(ISC_SHORT)), ('sqlname_length', ISC_SHORT), ('sqlname', ISC_SCHAR * 32), ('relname_length', ISC_SHORT), ('relname', ISC_SCHAR * 32), ('ownname_length', ISC_SHORT), ('ownname', ISC_SCHAR * 32), ('aliasname_length', ISC_SHORT), ('aliasname', ISC_SCHAR * 32), ] class XSQLDA(Structure): pass XSQLDA._fields_ = [ ('version', ISC_SHORT), ('sqldaid', ISC_SCHAR * 8), ('sqldabc', ISC_LONG), ('sqln', ISC_SHORT), ('sqld', ISC_SHORT), ('sqlvar', XSQLVAR * 1), ] XSQLDA_PTR = POINTER(XSQLDA) class USER_SEC_DATA(Structure): pass USER_SEC_DATA._fields_ = [ ('sec_flags', c_short), ('uid', c_int), ('gid', c_int), ('protocol', c_int), ('server', STRING), ('user_name', STRING), ('password', STRING), ('group_name', STRING), ('first_name', STRING), ('middle_name', STRING), ('last_name', STRING), ('dba_user_name', STRING), ('dba_password', STRING), ] RESULT_VECTOR = ISC_ULONG * 15 # values for enumeration 'db_info_types' db_info_types = c_int # enum # values for enumeration 'info_db_implementations' info_db_implementations = c_int # enum # values for enumeration 'info_db_class' info_db_class = c_int # enum # values for enumeration 'info_db_provider' info_db_provider = c_int # enum class imaxdiv_t(Structure): pass imaxdiv_t._fields_ = [ ('quot', c_long), ('rem', c_long), ] intmax_t = c_long int8_t = c_int8 int16_t = c_int16 int32_t = c_int32 int64_t = c_int64 uint8_t = c_uint8 uint16_t = c_uint16 uint32_t = c_uint32 uint64_t = c_uint64 int_least8_t = c_byte int_least16_t = c_short int_least32_t = c_int int_least64_t = c_long uint_least8_t = c_ubyte uint_least16_t = c_ushort uint_least32_t = c_uint uint_least64_t = c_ulong int_fast8_t = c_byte int_fast16_t = c_long int_fast32_t = c_long int_fast64_t = c_long uint_fast8_t = c_ubyte uint_fast16_t = c_ulong uint_fast32_t = c_ulong uint_fast64_t = c_ulong ptrdiff_t = c_long size_t = c_ulong uintmax_t = c_ulong class fbclient_API(object): """Firebird Client API interface object. Loads Firebird Client Library and exposes API functions as member methods. Uses :ref:`ctypes ` for bindings. 
""" def __init__(self, fb_library_name=None): def get_key(key, sub_key): try: return winreg.OpenKey(key, sub_key) except: return None if fb_library_name is None: if sys.platform == 'darwin': fb_library_name = find_library('Firebird') # Next elif is necessary hotfix for ctypes issue # http://bugs.python.org/issue16283 elif sys.platform == 'win32': fb_library_name = find_library('fbclient.dll') if not fb_library_name: # let's try windows registry if PYTHON_MAJOR_VER == 3: import winreg else: import _winreg as winreg # try find via installed Firebird server key = get_key(winreg.HKEY_LOCAL_MACHINE, 'SOFTWARE\\Firebird Project\\Firebird Server\\Instances') if not key: key = get_key(winreg.HKEY_LOCAL_MACHINE, 'SOFTWARE\\Wow6432Node\\Firebird Project\\Firebird Server\\Instances') if key: instFold = winreg.QueryValueEx(key, 'DefaultInstance') fb_library_name = os.path.join(os.path.join(instFold[0], 'bin'), 'fbclient.dll') else: fb_library_name = find_library('fbclient') if not fb_library_name: try: x = CDLL('libfbclient.so') fb_library_name = 'libfbclient.so' except: pass if not fb_library_name: raise Exception("The location of Firebird Client Library could not be determined.") elif not os.path.exists(fb_library_name): path, file_name = os.path.split(fb_library_name) file_name = find_library(file_name) if not file_name: raise Exception("Firebird Client Library '%s' not found" % fb_library_name) else: fb_library_name = file_name if sys.platform in ['win32', 'cygwin', 'os2', 'os2emx']: from ctypes import WinDLL fb_library = WinDLL(fb_library_name) else: fb_library = CDLL(fb_library_name) self.client_library = fb_library #: isc_attach_database(POINTER(ISC_STATUS), c_short, STRING, POINTER(isc_db_handle), c_short, STRING) self.isc_attach_database = fb_library.isc_attach_database self.isc_attach_database.restype = ISC_STATUS self.isc_attach_database.argtypes = [POINTER(ISC_STATUS), c_short, STRING, POINTER(isc_db_handle), c_short, STRING] #: isc_array_gen_sdl(POINTER(ISC_STATUS), POINTER(ISC_ARRAY_DESC), POINTER(ISC_SHORT), POINTER(ISC_UCHAR), POINTER(ISC_SHORT)) self.isc_array_gen_sdl = fb_library.isc_array_gen_sdl self.isc_array_gen_sdl.restype = ISC_STATUS self.isc_array_gen_sdl.argtypes = [POINTER(ISC_STATUS), POINTER(ISC_ARRAY_DESC), POINTER(ISC_SHORT), POINTER(ISC_UCHAR), POINTER(ISC_SHORT)] #: isc_array_get_slice(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), POINTER(ISC_ARRAY_DESC), c_void_p, POINTER(ISC_LONG)) self.isc_array_get_slice = fb_library.isc_array_get_slice self.isc_array_get_slice.restype = ISC_STATUS self.isc_array_get_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), POINTER(ISC_ARRAY_DESC), c_void_p, POINTER(ISC_LONG)] #: isc_array_lookup_bounds(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, STRING, POINTER(ISC_ARRAY_DESC)) self.isc_array_lookup_bounds = fb_library.isc_array_lookup_bounds self.isc_array_lookup_bounds.restype = ISC_STATUS self.isc_array_lookup_bounds.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, STRING, POINTER(ISC_ARRAY_DESC)] #: isc_array_lookup_desc(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, STRING, POINTER(ISC_ARRAY_DESC)) self.isc_array_lookup_desc = fb_library.isc_array_lookup_desc self.isc_array_lookup_desc.restype = ISC_STATUS self.isc_array_lookup_desc.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, STRING, 
POINTER(ISC_ARRAY_DESC)] #: isc_array_set_desc(POINTER(ISC_STATUS), STRING, STRING, POINTER(c_short), POINTER(c_short), POINTER(c_short), POINTER(ISC_ARRAY_DESC)) self.isc_array_set_desc = fb_library.isc_array_set_desc self.isc_array_set_desc.restype = ISC_STATUS self.isc_array_set_desc.argtypes = [POINTER(ISC_STATUS), STRING, STRING, POINTER(c_short), POINTER(c_short), POINTER(c_short), POINTER(ISC_ARRAY_DESC)] #: isc_array_put_slice(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), POINTER(ISC_ARRAY_DESC), c_void_p, POINTER(ISC_LONG)) self.isc_array_put_slice = fb_library.isc_array_put_slice self.isc_array_put_slice.restype = ISC_STATUS self.isc_array_put_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), POINTER(ISC_ARRAY_DESC), c_void_p, POINTER(ISC_LONG)] #: isc_blob_default_desc(POINTER(ISC_BLOB_DESC), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)) self.isc_blob_default_desc = fb_library.isc_blob_default_desc self.isc_blob_default_desc.restype = None self.isc_blob_default_desc.argtypes = [POINTER(ISC_BLOB_DESC), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)] #: isc_blob_gen_bpb(POINTER(ISC_STATUS), POINTER(ISC_BLOB_DESC), POINTER(ISC_BLOB_DESC), c_ushort, POINTER(ISC_UCHAR), POINTER(c_ushort)) self.isc_blob_gen_bpb = fb_library.isc_blob_gen_bpb self.isc_blob_gen_bpb.restype = ISC_STATUS self.isc_blob_gen_bpb.argtypes = [POINTER(ISC_STATUS), POINTER(ISC_BLOB_DESC), POINTER(ISC_BLOB_DESC), c_ushort, POINTER(ISC_UCHAR), POINTER(c_ushort)] #: isc_blob_info(POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_short, STRING, c_short, POINTER(c_char)) self.isc_blob_info = fb_library.isc_blob_info self.isc_blob_info.restype = ISC_STATUS self.isc_blob_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_short, STRING, c_short, POINTER(c_char)] #: isc_blob_lookup_desc(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR), POINTER(ISC_BLOB_DESC), POINTER(ISC_UCHAR)) self.isc_blob_lookup_desc = fb_library.isc_blob_lookup_desc self.isc_blob_lookup_desc.restype = ISC_STATUS self.isc_blob_lookup_desc.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR), POINTER(ISC_BLOB_DESC), POINTER(ISC_UCHAR)] #: isc_blob_set_desc(POINTER(ISC_STATUS), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR), c_short, c_short, c_short, POINTER(ISC_BLOB_DESC)) self.isc_blob_set_desc = fb_library.isc_blob_set_desc self.isc_blob_set_desc.restype = ISC_STATUS self.isc_blob_set_desc.argtypes = [POINTER(ISC_STATUS), POINTER(ISC_UCHAR), POINTER(ISC_UCHAR), c_short, c_short, c_short, POINTER(ISC_BLOB_DESC)] #: isc_cancel_blob(POINTER(ISC_STATUS), POINTER(isc_blob_handle)) self.isc_cancel_blob = fb_library.isc_cancel_blob self.isc_cancel_blob.restype = ISC_STATUS self.isc_cancel_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle)] #: isc_cancel_events(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(ISC_LONG)) self.isc_cancel_events = fb_library.isc_cancel_events self.isc_cancel_events.restype = ISC_STATUS self.isc_cancel_events.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(ISC_LONG)] #: isc_close_blob(POINTER(ISC_STATUS), POINTER(isc_blob_handle)) self.isc_close_blob = fb_library.isc_close_blob self.isc_close_blob.restype = ISC_STATUS self.isc_close_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle)] #: isc_commit_retaining(POINTER(ISC_STATUS), POINTER(isc_tr_handle)) 
self.isc_commit_retaining = fb_library.isc_commit_retaining self.isc_commit_retaining.restype = ISC_STATUS self.isc_commit_retaining.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] #: isc_commit_transaction(POINTER(ISC_STATUS), POINTER(isc_tr_handle)) self.isc_commit_transaction = fb_library.isc_commit_transaction self.isc_commit_transaction.restype = ISC_STATUS self.isc_commit_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] #: isc_create_blob(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD)) self.isc_create_blob = fb_library.isc_create_blob self.isc_create_blob.restype = ISC_STATUS self.isc_create_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD)] #: isc_create_blob2(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD), c_short, STRING) self.isc_create_blob2 = fb_library.isc_create_blob2 self.isc_create_blob2.restype = ISC_STATUS self.isc_create_blob2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD), c_short, STRING] #: isc_create_database(POINTER(ISC_STATUS), c_short, STRING, POINTER(isc_db_handle), c_short, STRING, c_short) self.isc_create_database = fb_library.isc_create_database self.isc_create_database.restype = ISC_STATUS self.isc_create_database.argtypes = [POINTER(ISC_STATUS), c_short, STRING, POINTER(isc_db_handle), c_short, STRING, c_short] #: isc_database_info(POINTER(ISC_STATUS), POINTER(isc_db_handle), c_short, STRING, c_short, STRING) self.isc_database_info = fb_library.isc_database_info self.isc_database_info.restype = ISC_STATUS self.isc_database_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), c_short, STRING, c_short, STRING] #: isc_decode_date(POINTER(ISC_QUAD), c_void_p) self.isc_decode_date = fb_library.isc_decode_date self.isc_decode_date.restype = None self.isc_decode_date.argtypes = [POINTER(ISC_QUAD), c_void_p] #: isc_decode_sql_date(POINTER(ISC_DATE), c_void_p) self.isc_decode_sql_date = fb_library.isc_decode_sql_date self.isc_decode_sql_date.restype = None self.isc_decode_sql_date.argtypes = [POINTER(ISC_DATE), c_void_p] #: isc_decode_sql_time(POINTER(ISC_TIME), c_void_p) self.isc_decode_sql_time = fb_library.isc_decode_sql_time self.isc_decode_sql_time.restype = None self.isc_decode_sql_time.argtypes = [POINTER(ISC_TIME), c_void_p] #: isc_decode_timestamp(POINTER(ISC_TIMESTAMP), c_void_p) self.isc_decode_timestamp = fb_library.isc_decode_timestamp self.isc_decode_timestamp.restype = None self.isc_decode_timestamp.argtypes = [POINTER(ISC_TIMESTAMP), c_void_p] #: isc_detach_database(POINTER(ISC_STATUS), POINTER(isc_db_handle)) self.isc_detach_database = fb_library.isc_detach_database self.isc_detach_database.restype = ISC_STATUS self.isc_detach_database.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle)] #: isc_drop_database(POINTER(ISC_STATUS), POINTER(isc_db_handle)) self.isc_drop_database = fb_library.isc_drop_database self.isc_drop_database.restype = ISC_STATUS self.isc_drop_database.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle)] #: isc_dsql_allocate_statement(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_stmt_handle)) self.isc_dsql_allocate_statement = fb_library.isc_dsql_allocate_statement self.isc_dsql_allocate_statement.restype = ISC_STATUS self.isc_dsql_allocate_statement.argtypes = [POINTER(ISC_STATUS),
POINTER(isc_db_handle), POINTER(isc_stmt_handle)] #: isc_dsql_alloc_statement2(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_stmt_handle)) self.isc_dsql_alloc_statement2 = fb_library.isc_dsql_alloc_statement2 self.isc_dsql_alloc_statement2.restype = ISC_STATUS self.isc_dsql_alloc_statement2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_stmt_handle)] #: isc_dsql_describe(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)) self.isc_dsql_describe = fb_library.isc_dsql_describe self.isc_dsql_describe.restype = ISC_STATUS self.isc_dsql_describe.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] #: isc_dsql_describe_bind(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)) self.isc_dsql_describe_bind = fb_library.isc_dsql_describe_bind self.isc_dsql_describe_bind.restype = ISC_STATUS self.isc_dsql_describe_bind.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] #: isc_dsql_exec_immed2(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)) self.isc_dsql_exec_immed2 = fb_library.isc_dsql_exec_immed2 self.isc_dsql_exec_immed2.restype = ISC_STATUS self.isc_dsql_exec_immed2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] #: isc_dsql_execute(POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)) self.isc_dsql_execute = fb_library.isc_dsql_execute self.isc_dsql_execute.restype = ISC_STATUS self.isc_dsql_execute.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] #: isc_dsql_execute2(POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)) self.isc_dsql_execute2 = fb_library.isc_dsql_execute2 self.isc_dsql_execute2.restype = ISC_STATUS self.isc_dsql_execute2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] #: isc_dsql_execute_immediate(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)) self.isc_dsql_execute_immediate = fb_library.isc_dsql_execute_immediate self.isc_dsql_execute_immediate.restype = ISC_STATUS self.isc_dsql_execute_immediate.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)] #: isc_dsql_fetch(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)) self.isc_dsql_fetch = fb_library.isc_dsql_fetch self.isc_dsql_fetch.restype = ISC_STATUS self.isc_dsql_fetch.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] #: isc_dsql_finish(POINTER(isc_db_handle)) self.isc_dsql_finish = fb_library.isc_dsql_finish self.isc_dsql_finish.restype = ISC_STATUS self.isc_dsql_finish.argtypes = [POINTER(isc_db_handle)] #: isc_dsql_free_statement(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort) self.isc_dsql_free_statement = fb_library.isc_dsql_free_statement self.isc_dsql_free_statement.restype = ISC_STATUS self.isc_dsql_free_statement.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort] #: isc_dsql_insert(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)) self.isc_dsql_insert = fb_library.isc_dsql_insert self.isc_dsql_insert.restype =
ISC_STATUS self.isc_dsql_insert.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, POINTER(XSQLDA)] #: isc_dsql_prepare(POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)) self.isc_dsql_prepare = fb_library.isc_dsql_prepare self.isc_dsql_prepare.restype = ISC_STATUS self.isc_dsql_prepare.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)] #: isc_dsql_set_cursor_name(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), STRING, c_ushort) self.isc_dsql_set_cursor_name = fb_library.isc_dsql_set_cursor_name self.isc_dsql_set_cursor_name.restype = ISC_STATUS self.isc_dsql_set_cursor_name.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), STRING, c_ushort] #: isc_dsql_sql_info(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_short, STRING, c_short, STRING) self.isc_dsql_sql_info = fb_library.isc_dsql_sql_info self.isc_dsql_sql_info.restype = ISC_STATUS self.isc_dsql_sql_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_short, STRING, c_short, STRING] #: isc_encode_date(c_void_p, POINTER(ISC_QUAD)) self.isc_encode_date = fb_library.isc_encode_date self.isc_encode_date.restype = None self.isc_encode_date.argtypes = [c_void_p, POINTER(ISC_QUAD)] #: isc_encode_sql_date(c_void_p, POINTER(ISC_DATE)) self.isc_encode_sql_date = fb_library.isc_encode_sql_date self.isc_encode_sql_date.restype = None self.isc_encode_sql_date.argtypes = [c_void_p, POINTER(ISC_DATE)] #: isc_encode_sql_time(c_void_p, POINTER(ISC_TIME)) self.isc_encode_sql_time = fb_library.isc_encode_sql_time self.isc_encode_sql_time.restype = None self.isc_encode_sql_time.argtypes = [c_void_p, POINTER(ISC_TIME)] #: isc_encode_timestamp(c_void_p, POINTER(ISC_TIMESTAMP)) self.isc_encode_timestamp = fb_library.isc_encode_timestamp self.isc_encode_timestamp.restype = None self.isc_encode_timestamp.argtypes = [c_void_p, POINTER(ISC_TIMESTAMP)] #: isc_event_counts(POINTER(RESULT_VECTOR), c_short, POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)) self.isc_event_counts = fb_library.isc_event_counts self.isc_event_counts.restype = None self.isc_event_counts.argtypes = [POINTER(RESULT_VECTOR), c_short, POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)] #: isc_expand_dpb(POINTER(STRING), POINTER(c_short)) self.isc_expand_dpb = fb_library.isc_expand_dpb self.isc_expand_dpb.restype = None self.isc_expand_dpb.argtypes = [POINTER(STRING), POINTER(c_short)] #: isc_modify_dpb(POINTER(STRING), POINTER(c_short), c_ushort, STRING, c_short) self.isc_modify_dpb = fb_library.isc_modify_dpb self.isc_modify_dpb.restype = c_int self.isc_modify_dpb.argtypes = [POINTER(STRING), POINTER(c_short), c_ushort, STRING, c_short] #: isc_free(STRING) self.isc_free = fb_library.isc_free self.isc_free.restype = ISC_LONG self.isc_free.argtypes = [STRING] #: isc_get_segment(POINTER(ISC_STATUS), POINTER(isc_blob_handle), POINTER(c_ushort), c_ushort, c_void_p) self.isc_get_segment = fb_library.isc_get_segment self.isc_get_segment.restype = ISC_STATUS self.isc_get_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), POINTER(c_ushort), c_ushort, c_void_p] #self.isc_get_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), # POINTER(c_ushort), c_ushort, POINTER(c_char)] #: isc_get_slice(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), c_short, STRING, c_short, POINTER(ISC_LONG), ISC_LONG, c_void_p, POINTER(ISC_LONG)) self.isc_get_slice = fb_library.isc_get_slice
self.isc_get_slice.restype = ISC_STATUS self.isc_get_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), c_short, STRING, c_short, POINTER(ISC_LONG), ISC_LONG, c_void_p, POINTER(ISC_LONG)] #: isc_interprete(STRING, POINTER(POINTER(ISC_STATUS))) self.isc_interprete = fb_library.isc_interprete self.isc_interprete.restype = ISC_LONG self.isc_interprete.argtypes = [STRING, POINTER(POINTER(ISC_STATUS))] #: fb_interpret(STRING, c_uint, POINTER(POINTER(ISC_STATUS))) self.fb_interpret = fb_library.fb_interpret self.fb_interpret.restype = ISC_LONG self.fb_interpret.argtypes = [STRING, c_uint, POINTER(POINTER(ISC_STATUS))] #: isc_open_blob(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD)) self.isc_open_blob = fb_library.isc_open_blob self.isc_open_blob.restype = ISC_STATUS self.isc_open_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD)] #: isc_open_blob2(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD), ISC_USHORT, STRING) self.isc_open_blob2 = fb_library.isc_open_blob2 self.isc_open_blob2.restype = ISC_STATUS self.isc_open_blob2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(isc_blob_handle), POINTER(ISC_QUAD), ISC_USHORT, STRING] # POINTER(ISC_UCHAR) #: isc_prepare_transaction2(POINTER(ISC_STATUS), POINTER(isc_tr_handle), ISC_USHORT, POINTER(ISC_UCHAR)) self.isc_prepare_transaction2 = fb_library.isc_prepare_transaction2 self.isc_prepare_transaction2.restype = ISC_STATUS self.isc_prepare_transaction2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), ISC_USHORT, POINTER(ISC_UCHAR)] #: isc_print_sqlerror(ISC_SHORT, POINTER(ISC_STATUS)) self.isc_print_sqlerror = fb_library.isc_print_sqlerror self.isc_print_sqlerror.restype = None self.isc_print_sqlerror.argtypes = [ISC_SHORT, POINTER(ISC_STATUS)] #: isc_print_status(POINTER(ISC_STATUS)) self.isc_print_status = fb_library.isc_print_status self.isc_print_status.restype = ISC_STATUS self.isc_print_status.argtypes = [POINTER(ISC_STATUS)] #: isc_put_segment(POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_ushort, c_void_p) self.isc_put_segment = fb_library.isc_put_segment self.isc_put_segment.restype = ISC_STATUS self.isc_put_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_ushort, c_void_p] #self.isc_put_segment.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), # c_ushort, STRING] #: isc_put_slice(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), c_short, STRING, c_short, POINTER(ISC_LONG), ISC_LONG, c_void_p) self.isc_put_slice = fb_library.isc_put_slice self.isc_put_slice.restype = ISC_STATUS self.isc_put_slice.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(ISC_QUAD), c_short, STRING, c_short, POINTER(ISC_LONG), ISC_LONG, c_void_p] #: isc_que_events(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(ISC_LONG), c_short, POINTER(ISC_UCHAR), ISC_EVENT_CALLBACK, POINTER(ISC_UCHAR)) self.isc_que_events = fb_library.isc_que_events self.isc_que_events.restype = ISC_STATUS self.isc_que_events.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(ISC_LONG), c_short, POINTER(ISC_UCHAR), ISC_EVENT_CALLBACK, POINTER(ISC_UCHAR)] #: isc_rollback_retaining(POINTER(ISC_STATUS), POINTER(isc_tr_handle)) self.isc_rollback_retaining = 
fb_library.isc_rollback_retaining self.isc_rollback_retaining.restype = ISC_STATUS self.isc_rollback_retaining.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] #: isc_rollback_transaction(POINTER(ISC_STATUS), POINTER(isc_tr_handle)) self.isc_rollback_transaction = fb_library.isc_rollback_transaction self.isc_rollback_transaction.restype = ISC_STATUS self.isc_rollback_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] #: isc_start_multiple(POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, c_void_p) self.isc_start_multiple = fb_library.isc_start_multiple self.isc_start_multiple.restype = ISC_STATUS self.isc_start_multiple.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, c_void_p] if sys.platform in ['win32', 'cygwin', 'os2', 'os2emx']: P_isc_start_transaction = CFUNCTYPE(ISC_STATUS, POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, POINTER(isc_db_handle), c_short, STRING) self.isc_start_transaction = P_isc_start_transaction(('isc_start_transaction', fb_library)) else: #: isc_start_transaction(POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, POINTER(isc_db_handle), c_short, STRING) self.isc_start_transaction = fb_library.isc_start_transaction self.isc_start_transaction.restype = ISC_STATUS self.isc_start_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, POINTER(isc_db_handle), c_short, STRING] #: isc_sqlcode(POINTER(ISC_STATUS)) self.isc_sqlcode = fb_library.isc_sqlcode self.isc_sqlcode.restype = ISC_LONG self.isc_sqlcode.argtypes = [POINTER(ISC_STATUS)] #: isc_sql_interprete(c_short, STRING, c_short) self.isc_sql_interprete = fb_library.isc_sql_interprete self.isc_sql_interprete.restype = None self.isc_sql_interprete.argtypes = [c_short, STRING, c_short] #: isc_transaction_info(POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, STRING, c_short, STRING) self.isc_transaction_info = fb_library.isc_transaction_info self.isc_transaction_info.restype = ISC_STATUS self.isc_transaction_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short, STRING, c_short, STRING] #: isc_transact_request(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, STRING, c_ushort, STRING) self.isc_transact_request = fb_library.isc_transact_request self.isc_transact_request.restype = ISC_STATUS self.isc_transact_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, STRING, c_ushort, STRING] #: isc_vax_integer(STRING, c_short) self.isc_vax_integer = fb_library.isc_vax_integer self.isc_vax_integer.restype = ISC_LONG self.isc_vax_integer.argtypes = [STRING, c_short] #: isc_portable_integer(POINTER(ISC_UCHAR), c_short) self.isc_portable_integer = fb_library.isc_portable_integer self.isc_portable_integer.restype = ISC_INT64 self.isc_portable_integer.argtypes = [POINTER(ISC_UCHAR), c_short] #: isc_add_user(POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)) self.isc_add_user = fb_library.isc_add_user self.isc_add_user.restype = ISC_STATUS self.isc_add_user.argtypes = [POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)] #: isc_delete_user(POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)) self.isc_delete_user = fb_library.isc_delete_user self.isc_delete_user.restype = ISC_STATUS self.isc_delete_user.argtypes = [POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)] #: isc_modify_user(POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)) self.isc_modify_user = fb_library.isc_modify_user self.isc_modify_user.restype = ISC_STATUS self.isc_modify_user.argtypes = 
[POINTER(ISC_STATUS), POINTER(USER_SEC_DATA)] #: isc_compile_request(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_req_handle), c_short, STRING) self.isc_compile_request = fb_library.isc_compile_request self.isc_compile_request.restype = ISC_STATUS self.isc_compile_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_req_handle), c_short, STRING] #: isc_compile_request2(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_req_handle), c_short, STRING) self.isc_compile_request2 = fb_library.isc_compile_request2 self.isc_compile_request2.restype = ISC_STATUS self.isc_compile_request2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_req_handle), c_short, STRING] #: isc_ddl(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_short, STRING) #: This function always returns error since FB 3.0 self.isc_ddl = fb_library.isc_ddl self.isc_ddl.restype = ISC_STATUS self.isc_ddl.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_short, STRING] #: isc_prepare_transaction(POINTER(ISC_STATUS), POINTER(isc_tr_handle)) self.isc_prepare_transaction = fb_library.isc_prepare_transaction self.isc_prepare_transaction.restype = ISC_STATUS self.isc_prepare_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle)] #: isc_receive(POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, c_void_p, c_short) self.isc_receive = fb_library.isc_receive self.isc_receive.restype = ISC_STATUS self.isc_receive.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, c_void_p, c_short] #: isc_reconnect_transaction(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_short, STRING) self.isc_reconnect_transaction = fb_library.isc_reconnect_transaction self.isc_reconnect_transaction.restype = ISC_STATUS self.isc_reconnect_transaction.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_short, STRING] #: isc_release_request(POINTER(ISC_STATUS), POINTER(isc_req_handle)) self.isc_release_request = fb_library.isc_release_request self.isc_release_request.restype = ISC_STATUS self.isc_release_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle)] #: isc_request_info(POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, STRING, c_short, STRING) self.isc_request_info = fb_library.isc_request_info self.isc_request_info.restype = ISC_STATUS self.isc_request_info.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, STRING, c_short, STRING] #: isc_seek_blob(POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_short, ISC_LONG, POINTER(ISC_LONG)) self.isc_seek_blob = fb_library.isc_seek_blob self.isc_seek_blob.restype = ISC_STATUS self.isc_seek_blob.argtypes = [POINTER(ISC_STATUS), POINTER(isc_blob_handle), c_short, ISC_LONG, POINTER(ISC_LONG)] #: isc_send(POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, c_void_p, c_short) self.isc_send = fb_library.isc_send self.isc_send.restype = ISC_STATUS self.isc_send.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), c_short, c_short, c_void_p, c_short] #: isc_start_and_send(POINTER(ISC_STATUS), POINTER(isc_req_handle), POINTER(isc_tr_handle), c_short, c_short, c_void_p, c_short) self.isc_start_and_send = fb_library.isc_start_and_send self.isc_start_and_send.restype = ISC_STATUS self.isc_start_and_send.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), POINTER(isc_tr_handle), c_short, c_short, c_void_p, c_short] #: 
isc_start_request(POINTER(ISC_STATUS), POINTER(isc_req_handle), POINTER(isc_tr_handle), c_short) self.isc_start_request = fb_library.isc_start_request self.isc_start_request.restype = ISC_STATUS self.isc_start_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_req_handle), POINTER(isc_tr_handle), c_short] #: isc_unwind_request(POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short) self.isc_unwind_request = fb_library.isc_unwind_request self.isc_unwind_request.restype = ISC_STATUS self.isc_unwind_request.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), c_short] #: isc_wait_for_event(POINTER(ISC_STATUS), POINTER(isc_db_handle), c_short, POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)) self.isc_wait_for_event = fb_library.isc_wait_for_event self.isc_wait_for_event.restype = ISC_STATUS self.isc_wait_for_event.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), c_short, POINTER(ISC_UCHAR), POINTER(ISC_UCHAR)] #: isc_close(POINTER(ISC_STATUS), STRING) self.isc_close = fb_library.isc_close self.isc_close.restype = ISC_STATUS self.isc_close.argtypes = [POINTER(ISC_STATUS), STRING] #: isc_declare(POINTER(ISC_STATUS), STRING, STRING) self.isc_declare = fb_library.isc_declare self.isc_declare.restype = ISC_STATUS self.isc_declare.argtypes = [POINTER(ISC_STATUS), STRING, STRING] #: isc_describe(POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)) self.isc_describe = fb_library.isc_describe self.isc_describe.restype = ISC_STATUS self.isc_describe.argtypes = [POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)] #: isc_describe_bind(POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)) self.isc_describe_bind = fb_library.isc_describe_bind self.isc_describe_bind.restype = ISC_STATUS self.isc_describe_bind.argtypes = [POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)] #: isc_execute(POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, POINTER(XSQLDA)) self.isc_execute = fb_library.isc_execute self.isc_execute.restype = ISC_STATUS self.isc_execute.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, POINTER(XSQLDA)] #: isc_execute_immediate(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(c_short), STRING) self.isc_execute_immediate = fb_library.isc_execute_immediate self.isc_execute_immediate.restype = ISC_STATUS self.isc_execute_immediate.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), POINTER(c_short), STRING] #: isc_fetch(POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)) self.isc_fetch = fb_library.isc_fetch self.isc_fetch.restype = ISC_STATUS self.isc_fetch.argtypes = [POINTER(ISC_STATUS), STRING, POINTER(XSQLDA)] #: isc_open(POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, POINTER(XSQLDA)) self.isc_open = fb_library.isc_open self.isc_open.restype = ISC_STATUS self.isc_open.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, POINTER(XSQLDA)] #: isc_prepare(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, POINTER(c_short), STRING, POINTER(XSQLDA)) self.isc_prepare = fb_library.isc_prepare self.isc_prepare.restype = ISC_STATUS self.isc_prepare.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, POINTER(c_short), STRING, POINTER(XSQLDA)] #: isc_dsql_execute_m(POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING) self.isc_dsql_execute_m = fb_library.isc_dsql_execute_m self.isc_dsql_execute_m.restype = ISC_STATUS self.isc_dsql_execute_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), 
c_ushort, STRING, c_ushort, c_ushort, STRING] #: isc_dsql_execute2_m(POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING, c_ushort, c_ushort, STRING) self.isc_dsql_execute2_m = fb_library.isc_dsql_execute2_m self.isc_dsql_execute2_m.restype = ISC_STATUS self.isc_dsql_execute2_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING, c_ushort, c_ushort, STRING] #: isc_dsql_execute_immediate_m(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, c_ushort, STRING) self.isc_dsql_execute_immediate_m = fb_library.isc_dsql_execute_immediate_m self.isc_dsql_execute_immediate_m.restype = ISC_STATUS self.isc_dsql_execute_immediate_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, c_ushort, STRING] #: isc_dsql_exec_immed3_m(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING, c_ushort, c_ushort, STRING) self.isc_dsql_exec_immed3_m = fb_library.isc_dsql_exec_immed3_m self.isc_dsql_exec_immed3_m.restype = ISC_STATUS self.isc_dsql_exec_immed3_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING, c_ushort, c_ushort, STRING] #: isc_dsql_fetch_m(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING) self.isc_dsql_fetch_m = fb_library.isc_dsql_fetch_m self.isc_dsql_fetch_m.restype = ISC_STATUS self.isc_dsql_fetch_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING] #: isc_dsql_insert_m(POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING) self.isc_dsql_insert_m = fb_library.isc_dsql_insert_m self.isc_dsql_insert_m.restype = ISC_STATUS self.isc_dsql_insert_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING] #: isc_dsql_prepare_m(POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING) self.isc_dsql_prepare_m = fb_library.isc_dsql_prepare_m self.isc_dsql_prepare_m.restype = ISC_STATUS self.isc_dsql_prepare_m.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), POINTER(isc_stmt_handle), c_ushort, STRING, c_ushort, c_ushort, STRING, c_ushort, STRING] #: isc_dsql_release(POINTER(ISC_STATUS), STRING) self.isc_dsql_release = fb_library.isc_dsql_release self.isc_dsql_release.restype = ISC_STATUS self.isc_dsql_release.argtypes = [POINTER(ISC_STATUS), STRING] #: isc_embed_dsql_close(POINTER(ISC_STATUS), STRING) self.isc_embed_dsql_close = fb_library.isc_embed_dsql_close self.isc_embed_dsql_close.restype = ISC_STATUS self.isc_embed_dsql_close.argtypes = [POINTER(ISC_STATUS), STRING] #: isc_embed_dsql_declare(POINTER(ISC_STATUS), STRING, STRING) self.isc_embed_dsql_declare = fb_library.isc_embed_dsql_declare self.isc_embed_dsql_declare.restype = ISC_STATUS self.isc_embed_dsql_declare.argtypes = [POINTER(ISC_STATUS), STRING, STRING] #: isc_embed_dsql_describe(POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_describe = fb_library.isc_embed_dsql_describe 
self.isc_embed_dsql_describe.restype = ISC_STATUS self.isc_embed_dsql_describe.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_describe_bind(POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_describe_bind = fb_library.isc_embed_dsql_describe_bind self.isc_embed_dsql_describe_bind.restype = ISC_STATUS self.isc_embed_dsql_describe_bind.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_execute(POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_execute = fb_library.isc_embed_dsql_execute self.isc_embed_dsql_execute.restype = ISC_STATUS self.isc_embed_dsql_execute.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_execute2(POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)) self.isc_embed_dsql_execute2 = fb_library.isc_embed_dsql_execute2 self.isc_embed_dsql_execute2.restype = ISC_STATUS self.isc_embed_dsql_execute2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] #: isc_embed_dsql_execute_immed(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_execute_immed = fb_library.isc_embed_dsql_execute_immed self.isc_embed_dsql_execute_immed.restype = ISC_STATUS self.isc_embed_dsql_execute_immed.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), c_ushort, STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_fetch(POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_fetch = fb_library.isc_embed_dsql_fetch self.isc_embed_dsql_fetch.restype = ISC_STATUS self.isc_embed_dsql_fetch.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_fetch_a(POINTER(ISC_STATUS), POINTER(c_int), STRING, ISC_USHORT, POINTER(XSQLDA)) self.isc_embed_dsql_fetch_a = fb_library.isc_embed_dsql_fetch_a self.isc_embed_dsql_fetch_a.restype = ISC_STATUS self.isc_embed_dsql_fetch_a.argtypes = [POINTER(ISC_STATUS), POINTER(c_int), STRING, ISC_USHORT, POINTER(XSQLDA)] #: isc_embed_dsql_open(POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_open = fb_library.isc_embed_dsql_open self.isc_embed_dsql_open.restype = ISC_STATUS self.isc_embed_dsql_open.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_open2(POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)) self.isc_embed_dsql_open2 = fb_library.isc_embed_dsql_open2 self.isc_embed_dsql_open2.restype = ISC_STATUS self.isc_embed_dsql_open2.argtypes = [POINTER(ISC_STATUS), POINTER(isc_tr_handle), STRING, c_ushort, POINTER(XSQLDA), POINTER(XSQLDA)] #: isc_embed_dsql_insert(POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_insert = fb_library.isc_embed_dsql_insert self.isc_embed_dsql_insert.restype = ISC_STATUS self.isc_embed_dsql_insert.argtypes = [POINTER(ISC_STATUS), STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_prepare(POINTER(ISC_STATUS), POINTER(isc_db_handle), POINTER(isc_tr_handle), STRING, c_ushort, STRING, c_ushort, POINTER(XSQLDA)) self.isc_embed_dsql_prepare = fb_library.isc_embed_dsql_prepare self.isc_embed_dsql_prepare.restype = ISC_STATUS self.isc_embed_dsql_prepare.argtypes = [POINTER(ISC_STATUS), POINTER(isc_db_handle), 
POINTER(isc_tr_handle), STRING, c_ushort, STRING, c_ushort, POINTER(XSQLDA)] #: isc_embed_dsql_release(POINTER(ISC_STATUS), STRING) self.isc_embed_dsql_release = fb_library.isc_embed_dsql_release self.isc_embed_dsql_release.restype = ISC_STATUS self.isc_embed_dsql_release.argtypes = [POINTER(ISC_STATUS), STRING] #: BLOB_open(isc_blob_handle, STRING, c_int) self.BLOB_open = fb_library.BLOB_open self.BLOB_open.restype = POINTER(BSTREAM) self.BLOB_open.argtypes = [isc_blob_handle, STRING, c_int] #: BLOB_put(ISC_SCHAR, POINTER(BSTREAM)) self.BLOB_put = fb_library.BLOB_put self.BLOB_put.restype = c_int self.BLOB_put.argtypes = [ISC_SCHAR, POINTER(BSTREAM)] #: BLOB_close(POINTER(BSTREAM)) self.BLOB_close = fb_library.BLOB_close self.BLOB_close.restype = c_int self.BLOB_close.argtypes = [POINTER(BSTREAM)] #: BLOB_get(POINTER(BSTREAM)) self.BLOB_get = fb_library.BLOB_get self.BLOB_get.restype = c_int self.BLOB_get.argtypes = [POINTER(BSTREAM)] #: BLOB_display(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.BLOB_display = fb_library.BLOB_display self.BLOB_display.restype = c_int self.BLOB_display.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: BLOB_dump(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.BLOB_dump = fb_library.BLOB_dump self.BLOB_dump.restype = c_int self.BLOB_dump.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: BLOB_edit(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.BLOB_edit = fb_library.BLOB_edit self.BLOB_edit.restype = c_int self.BLOB_edit.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: BLOB_load(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.BLOB_load = fb_library.BLOB_load self.BLOB_load.restype = c_int self.BLOB_load.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: BLOB_text_dump(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.BLOB_text_dump = fb_library.BLOB_text_dump self.BLOB_text_dump.restype = c_int self.BLOB_text_dump.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: BLOB_text_load(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.BLOB_text_load = fb_library.BLOB_text_load self.BLOB_text_load.restype = c_int self.BLOB_text_load.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: Bopen(POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING) self.Bopen = fb_library.Bopen self.Bopen.restype = POINTER(BSTREAM) self.Bopen.argtypes = [POINTER(ISC_QUAD), isc_db_handle, isc_tr_handle, STRING] #: isc_ftof(STRING, c_ushort, STRING, c_ushort) self.isc_ftof = fb_library.isc_ftof self.isc_ftof.restype = ISC_LONG self.isc_ftof.argtypes = [STRING, c_ushort, STRING, c_ushort] #: isc_print_blr(STRING, ISC_PRINT_CALLBACK, c_void_p, c_short) self.isc_print_blr = fb_library.isc_print_blr self.isc_print_blr.restype = ISC_STATUS self.isc_print_blr.argtypes = [STRING, ISC_PRINT_CALLBACK, c_void_p, c_short] #: isc_set_debug(c_int) self.isc_set_debug = fb_library.isc_set_debug self.isc_set_debug.restype = None self.isc_set_debug.argtypes = [c_int] #: isc_qtoq(POINTER(ISC_QUAD), POINTER(ISC_QUAD)) self.isc_qtoq = fb_library.isc_qtoq self.isc_qtoq.restype = None self.isc_qtoq.argtypes = [POINTER(ISC_QUAD), POINTER(ISC_QUAD)] #: isc_vtof(STRING, STRING, c_ushort) self.isc_vtof = fb_library.isc_vtof self.isc_vtof.restype = None self.isc_vtof.argtypes = [STRING, STRING, c_ushort] #: isc_vtov(STRING, STRING, c_short) self.isc_vtov = fb_library.isc_vtov self.isc_vtov.restype = None 
self.isc_vtov.argtypes = [STRING, STRING, c_short] #: isc_version(POINTER(isc_db_handle), ISC_VERSION_CALLBACK, c_void_p) self.isc_version = fb_library.isc_version self.isc_version.restype = c_int self.isc_version.argtypes = [POINTER(isc_db_handle), ISC_VERSION_CALLBACK, c_void_p] #: isc_service_attach(POINTER(ISC_STATUS), c_ushort, STRING, POINTER(isc_svc_handle), c_ushort, STRING) self.isc_service_attach = fb_library.isc_service_attach self.isc_service_attach.restype = ISC_STATUS self.isc_service_attach.argtypes = [POINTER(ISC_STATUS), c_ushort, STRING, POINTER(isc_svc_handle), c_ushort, STRING] #: isc_service_detach(POINTER(ISC_STATUS), POINTER(isc_svc_handle)) self.isc_service_detach = fb_library.isc_service_detach self.isc_service_detach.restype = ISC_STATUS self.isc_service_detach.argtypes = [POINTER(ISC_STATUS), POINTER(isc_svc_handle)] #: isc_service_query(POINTER(ISC_STATUS), POINTER(isc_svc_handle), POINTER(isc_resv_handle), c_ushort, STRING, c_ushort, STRING, c_ushort, STRING) self.isc_service_query = fb_library.isc_service_query self.isc_service_query.restype = ISC_STATUS self.isc_service_query.argtypes = [POINTER(ISC_STATUS), POINTER(isc_svc_handle), POINTER(isc_resv_handle), c_ushort, STRING, c_ushort, STRING, c_ushort, STRING] #: isc_service_start(POINTER(ISC_STATUS), POINTER(isc_svc_handle), POINTER(isc_resv_handle), c_ushort, STRING) self.isc_service_start = fb_library.isc_service_start self.isc_service_start.restype = ISC_STATUS self.isc_service_start.argtypes = [POINTER(ISC_STATUS), POINTER(isc_svc_handle), POINTER(isc_resv_handle), c_ushort, STRING] #: isc_get_client_version(STRING) self.isc_get_client_version = fb_library.isc_get_client_version self.isc_get_client_version.restype = None self.isc_get_client_version.argtypes = [STRING] #: isc_get_client_major_version() self.isc_get_client_major_version = fb_library.isc_get_client_major_version self.isc_get_client_major_version.restype = c_int self.isc_get_client_major_version.argtypes = [] #: isc_get_client_minor_version() self.isc_get_client_minor_version = fb_library.isc_get_client_minor_version self.isc_get_client_minor_version.restype = c_int self.isc_get_client_minor_version.argtypes = [] #self.imaxabs = fb_library.imaxabs #self.imaxabs.restype = intmax_t #self.imaxabs.argtypes = [intmax_t] #self.imaxdiv = fb_library.imaxdiv #self.imaxdiv.restype = imaxdiv_t #self.imaxdiv.argtypes = [intmax_t, intmax_t] #self.strtoimax = fb_library.strtoimax #self.strtoimax.restype = intmax_t #self.strtoimax.argtypes = [STRING, POINTER(STRING), c_int] #self.strtoumax = fb_library.strtoumax #self.strtoumax.restype = uintmax_t #self.strtoumax.argtypes = [STRING, POINTER(STRING), c_int] #self.wcstoimax = fb_library.wcstoimax #self.wcstoimax.restype = intmax_t #self.wcstoimax.argtypes = [WSTRING, POINTER(WSTRING), c_int] #self.wcstoumax = fb_library.wcstoumax #self.wcstoumax.restype = uintmax_t #self.wcstoumax.argtypes = [WSTRING, POINTER(WSTRING), c_int] self.P_isc_event_block = CFUNCTYPE(ISC_LONG, POINTER(POINTER(ISC_UCHAR)), POINTER(POINTER(ISC_UCHAR)), ISC_USHORT) #: C_isc_event_block(ISC_LONG, POINTER(POINTER(ISC_UCHAR)), POINTER(POINTER(ISC_UCHAR)), ISC_USHORT) self.C_isc_event_block = self.P_isc_event_block(('isc_event_block', fb_library)) self.P_isc_event_block_args = self.C_isc_event_block.argtypes def isc_event_block(self, event_buffer, result_buffer, *args): "Injects variable number of parameters into C_isc_event_block call" if len(args) > 15: raise Exception("isc_event_block takes no more than 15 event names") newargs = 
list(self.P_isc_event_block_args) for x in args: newargs.append(STRING) self.C_isc_event_block.argtypes = newargs result = self.C_isc_event_block(event_buffer, result_buffer, len(args), *args) return result fdb-2.0.0/fdb/services.py0000644000175000001440000026575513270362207015662 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: services.py # DESCRIPTION: Python driver for Firebird - Firebird services # CREATED: 19.11.2011 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) 2011 Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): Philippe Makowski # ______________________________________. # # See LICENSE.TXT for details. import fdb import sys import os import fdb.ibase as ibase import ctypes import struct import warnings import datetime import types api = None #: The following SHUT_* constants are to be passed as the `shutdown_mode` parameter to Connection.shutdown: SHUT_LEGACY = -1 SHUT_NORMAL = ibase.isc_spb_prp_sm_normal SHUT_MULTI = ibase.isc_spb_prp_sm_multi SHUT_SINGLE = ibase.isc_spb_prp_sm_single SHUT_FULL = ibase.isc_spb_prp_sm_full #: The following SHUT_* constants are to be passed as the `shutdown_method` parameter to Connection.shutdown: SHUT_FORCE = ibase.isc_spb_prp_shutdown_db SHUT_DENY_NEW_TRANSACTIONS = ibase.isc_spb_prp_deny_new_transactions SHUT_DENY_NEW_ATTACHMENTS = ibase.isc_spb_prp_deny_new_attachments #: The following WRITE_* constants are to be passed as the `mode` parameter to Connection.set_write_mode: WRITE_FORCED = ibase.isc_spb_prp_wm_sync WRITE_BUFFERED = ibase.isc_spb_prp_wm_async #: The following ACCESS_* constants are to be passed as the `mode` parameter to Connection.set_access_mode: ACCESS_READ_WRITE = ibase.isc_spb_prp_am_readwrite ACCESS_READ_ONLY = ibase.isc_spb_prp_am_readonly #: The following CAPABILITY_* constants are return values of `get_server_capabilities` CAPABILITY_MULTI_CLIENT = 0x2 CAPABILITY_REMOTE_HOP = 0x4 CAPABILITY_SERVER_CONFIG = 0x200 CAPABILITY_QUOTED_FILENAME = 0x400 CAPABILITY_NO_SERVER_SHUTDOWN = 0x100 #: The following STATS_* constants are options for backup/restore 'stats' parameter. STATS_TOTAL_TIME = 'T' STATS_TIME_DELTA = 'D' STATS_PAGE_READS = 'R' STATS_PAGE_WRITES = 'W' def _check_string(st): if ibase.PYTHON_MAJOR_VER == 3: try: if isinstance(st, str): # In str instances, Python allows any character # Since Firebird's # Services API only works (properly) with ASCII, we need to make # sure there are no non-ASCII characters in s. st.encode('ASCII') else: if not isinstance(st, ibase.mybytes): raise TypeError('String argument to Services API must be' ' of type %s, not %s.' % (type(ibase.mybytes), type(st))) except UnicodeEncodeError: raise TypeError("The database engine's Services API only works" " properly with ASCII string parameters, so str instances that" " contain non-ASCII characters are disallowed.") else: try: if isinstance(st, ibase.UnicodeType): # In str instances, Python allows any character in the "default # encoding", which is typically not ASCII. Since Firebird's # Services API only works (properly) with ASCII, we need to make # sure there are no non-ASCII characters in s, even though we # already know s is a str instance. 
st.encode('ASCII') else: if not isinstance(st, ibase.mybytes): raise TypeError('String argument to Services API must be' ' of type %s, not %s.' % (type(ibase.mybytes), type(st))) except UnicodeError: raise TypeError("The database engine's Services API only works" " properly with ASCII string parameters, so str instances that" " contain non-ASCII characters, and all unicode instances, are" " disallowed.") def _string2spb(spb, code, st): myslen = len(st) _numeric2spb(spb, code, myslen, numctype='H') myformat = str(myslen) + 's' # The length, then 's'. spb.append(struct.pack(myformat, st)) def _numeric2spb(spb, code, num, numctype='I'): # numctype is one of the pack format characters specified by the Python # standard library module 'struct'. _code2spb(spb, code) (numeric_format, numeric_bytes) = _render_sized_integer_for_spb(num, numctype) spb.append(struct.pack(numeric_format, numeric_bytes)) def _code2spb(spb, code): (myformat, mybytes) = _render_sized_integer_for_spb(code, 'b') spb.append(struct.pack(myformat, mybytes)) def _vax_inverse(i, myformat): # Apply the inverse of api.isc_vax_integer to a Python integer; return # the raw bytes of the resulting value. iraw = struct.pack(myformat, i) iconv = api.isc_vax_integer(iraw, len(iraw)) iconvraw = struct.pack(myformat, iconv) return iconvraw def _render_sized_integer_for_spb(i, myformat): # In order to prepare the Python integer i for inclusion in a Services # API action request buffer, the byte sequence of i must be reversed, which # will make i unrepresentable as a normal Python integer. # Therefore, the rendered version of i must be stored in a raw byte # buffer. # This function returns a 2-tuple containing: # 1. the calculated struct.pack-compatible format string for i # 2. the Python string containing the SPB-compatible raw binary rendering # of i # # Example: # To prepare the Python integer 12345 for storage as an unsigned int in a # SPB, use code such as this: # (iPackFormat, iRawBytes) = _render_sized_integer_for_spb(12345, 'I') # spbBytes = struct.pack(iPackFormat, iRawBytes) # dest_format = '%ds' % struct.calcsize(myformat) dest_val = _vax_inverse(i, myformat) return (dest_format, dest_val) def connect(host='service_mgr', user=None, password=None): """Establishes a connection to the Services Manager. :param string host: (optional) Host machine specification. Local by default. :param string user: (optional) Administrative user name. Defaults to content of environment variable `'ISC_USER'` or `'SYSDBA'`. :param string password: Administrative user password. Default is content of environment variable `'ISC_PASSWORD'`. .. note:: By definition, a Services Manager connection is bound to a particular host. Therefore, the database specified as a parameter to methods such as `get_statistics` MUST NOT include the host name of the database server. **Hooks:** Event `HOOK_SERVICE_ATTACHED`: Executed before the :class:`Connection` instance is returned. Hook must have signature: hook_func(connection). Any value returned by hook is ignored. """ setattr(sys.modules[__name__], 'api', fdb.load_api()) if not user: user = os.environ.get('ISC_USER', 'SYSDBA') if not password: password = os.environ.get('ISC_PASSWORD', None) if password is None: raise fdb.ProgrammingError('A password is required to use' ' the Services Manager.') _check_string(host) _check_string(user) _check_string(password) # The database engine's Services API requires that connection strings # conform to one of the following formats: # 1.
'service_mgr' - Connects to the Services Manager on localhost. # 2. 'hostname:service_mgr' - Connects to the Services Manager on the # server named hostname. # # This Python function glosses over the database engine's rules as follows: # - If the $host parameter is not supplied, the connection defaults to # the local host. # - If the $host parameter is supplied, the ':service_mgr' suffix is # optional (the suffix will be appended automatically if necessary). # # Of course, this scheme would collapse if someone actually had a host # named 'service_mgr', and supplied the connection string 'service_mgr' # with the intent of connecting to that host. In that case, the connection # would be attempted to the local host, not to the host named # 'service_mgr'. An easy workaround would be to supply the following # connection string: # 'service_mgr:service_mgr'. if not host.endswith('service_mgr'): if host and not host.endswith(':'): host += ':' host += 'service_mgr' con = Connection(host, user, password) for hook in fdb.get_hooks(fdb.HOOK_SERVICE_ATTACHED): hook(con) return con class Connection(object): """ Represents a service connection between the database client (the Python process) and the database server. .. important:: DO NOT create instances of this class directly! Use only :func:`connect` to get Connection instances. .. tip:: Connection supports the iterator protocol, yielding lines of result like :meth:`readline`. """ QUERY_TYPE_PLAIN_INTEGER = 1 QUERY_TYPE_PLAIN_STRING = 2 QUERY_TYPE_RAW = 3 def __init__(self, host, user, password, charset=None): self.__fetching = False self._svc_handle = ibase.isc_svc_handle(0) self._isc_status = ibase.ISC_STATUS_ARRAY() self._result_buffer = ctypes.create_string_buffer(ibase.USHRT_MAX) self._line_buffer = [] self.__eof = False self.charset = charset self.host = ibase.b(host) self.user = ibase.b(user) self.password = ibase.b(password) if len(self.host) + len(self.user) + len(self.password) > 118: raise fdb.ProgrammingError("The combined length of host, user and" " password can't exceed 118 bytes.") # spb_length = 2 + 1 + 1 + len(self.user) + 1 + 1 + len(self.password) spb = fdb.bs([ibase.isc_spb_version, ibase.isc_spb_current_version, ibase.isc_spb_user_name, len(self.user)]) + self.user + \ fdb.bs([ibase.isc_spb_password, len(self.password)]) + self.password api.isc_service_attach(self._isc_status, len(self.host), self.host, self._svc_handle, len(spb), spb) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_attach:") # Get Firebird engine version verstr = self.get_server_version() x = verstr.split() if x[0].find('V') > 0: (x, self.__version) = x[0].split('V') elif x[0].find('T') > 0: (x, self.__version) = x[0].split('T') else: # Unknown version self.__version = '0.0.0.0' x = self.__version.split('.') self.__engine_version = float('%s.%s' % (x[0], x[1])) def __del__(self): self.close() def next(self): """Return the next result line from the service manager. Part of the *iterator protocol*. :raises StopIteration: If there are no further lines.
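        **Example** (a minimal sketch, not from the original docs; it assumes ``svc`` is a :class:`Connection` that is currently in fetching mode, i.e. an asynchronous service action is producing output)::

            >>> for line in svc:        # the iterator protocol invokes next()
            ...     print(line)         # one line of service output per iteration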
""" line = self.readline() if line is not None: return line else: self.__fetching = False raise StopIteration __next__ = next def __iter__(self): return self def __check_active(self): if not self._svc_handle: raise ProgrammingError("Connection object is detached from service manager") def __get_closed(self): return True if self._svc_handle else False def __get_fetching(self): return self.__fetching def __read_buffer(self, init=''): request = fdb.bs([ibase.isc_info_svc_to_eof]) spb = ibase.b('') api.isc_service_query(self._isc_status, self._svc_handle, None, len(spb), spb, len(request), request, ibase.USHRT_MAX, self._result_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_query:") (result, _) = self._extract_string(self._result_buffer, 1) if ord(self._result_buffer[_]) == ibase.isc_info_end: self.__eof = True if init: result = init + result self._line_buffer = result.split('\n') def __fetchline(self): if self._line_buffer: if len(self._line_buffer) == 1 and not self.__eof: self.__read_buffer(self._line_buffer.pop(0)) return self._line_buffer.pop(0) else: if not self.__eof: self.__read_buffer() if self._line_buffer: return self._line_buffer.pop(0) else: self.__fetching = False return None def __get_version(self): return self.__version def __get_engine_version(self): return self.__engine_version def _bytes_to_str(self, sb): ### Todo: verify handling of P version differences, refactor if ibase.PYTHON_MAJOR_VER == 3: return sb.decode(ibase.charset_map.get(self.charset, self.charset)) else: return sb.encode(ibase.charset_map.get(self.charset, self.charset)) def _str_to_bytes(self, st): ### Todo: verify handling of P version differences, refactor if ibase.PYTHON_MAJOR_VER == 3: return st.encode(ibase.charset_map.get(self.charset, self.charset)) else: return st def _extract_int(self, raw, index): new_index = index + ctypes.sizeof(ctypes.c_ushort) return (fdb.bytes_to_uint(raw[index:new_index]), new_index) def _extract_longint(self, raw, index): new_index = index + ctypes.sizeof(ctypes.c_uint) return (fdb.bytes_to_int(raw[index:new_index]), new_index) def _extract_string(self, raw, index): (size, index) = self._extract_int(raw, index) new_index = index + size ### Todo: verify handling of P version differences, refactor if ibase.PYTHON_MAJOR_VER == 3: return (str(raw[index:new_index], ibase.charset_map.get(self.charset, self.charset)), new_index) else: return (str(raw[index:new_index]), new_index) def _extract_bytes(self, raw, index): (size, index) = self._extract_int(raw, index) new_index = index + size return (bytes(raw[index:new_index]), new_index) def _Q(self, code, result_type, timeout=-1): if code < 0 or code > ibase.USHRT_MAX: raise fdb.ProgrammingError("The service query request_buf code must fall between 0 and %d, inclusive." 
% ibase.USHRT_MAX) result = None result_size = 1024 request = fdb.bs([code]) if timeout == -1: spb = ibase.b('') else: spb = fdb.bs([ibase.isc_info_svc_timeout, timeout]) while True: result_buffer = ctypes.create_string_buffer(result_size) api.isc_service_query(self._isc_status, self._svc_handle, None, len(spb), spb, len(request), request, result_size, result_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_query:") if ord(result_buffer[0]) == ibase.isc_info_truncated: if result_size == ibase.USHRT_MAX: raise fdb.InternalError("The database C API constrains the maximum result buffer size to %d" % ibase.USHRT_MAX) else: result_size = result_size * 4 if result_size > ibase.USHRT_MAX: result_size = ibase.USHRT_MAX continue break if result_type == self.QUERY_TYPE_PLAIN_INTEGER: (result, _) = self._extract_int(result_buffer, 1) elif result_type == self.QUERY_TYPE_PLAIN_STRING: (result, _) = self._extract_string(result_buffer, 1) elif result_type == self.QUERY_TYPE_RAW: size = result_size - 1 while result_buffer[size] == '\0': size -= 1 result = ibase.s(result_buffer[:size]) return result def _get_isc_info_svc_svr_db_info(self): num_attachments = -1 databases = [] raw = self._QR(ibase.isc_info_svc_svr_db_info) # assert raw[-1] == api.int2byte(ibase.isc_info_flag_end) pos = 1 # Ignore raw[0]. upper_limit = len(raw) - 1 while pos < upper_limit: cluster = ibase.ord2(raw[pos]) pos += 1 if cluster == ibase.isc_spb_num_att: # Number of attachments. (num_attachments, pos) = self._extract_int(raw, pos) elif cluster == ibase.isc_spb_num_db: # Number of databases # attached to. # Do nothing except to advance pos; the number of databases # can be had from len(databases). (_, pos) = self._extract_int(raw, pos) elif cluster == ibase.isc_spb_dbname: (db_name, pos) = self._extract_string(raw, pos) databases.append(db_name) return (num_attachments, databases) def _QI(self, code): return self._Q(code, self.QUERY_TYPE_PLAIN_INTEGER) def _QS(self, code): return self._Q(code, self.QUERY_TYPE_PLAIN_STRING) def _QR(self, code): return self._Q(code, self.QUERY_TYPE_RAW) def _action_thin(self, request_buffer): if len(request_buffer) > ibase.USHRT_MAX: raise fdb.ProgrammingError("The size of the request buffer must not exceed %d." % ibase.USHRT_MAX) api.isc_service_start(self._isc_status, self._svc_handle, None, len(request_buffer), request_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.OperationalError, self._isc_status, "Unable to perform the requested Service API action:") return None def _act(self, request_buffer): return self._action_thin(request_buffer.render()) def _act_and_return_textual_results(self, request_buffer): self._act(request_buffer) return self._collect_unformatted_results() def _collect_unformatted_results(self, line_separator='\n'): # YYY: It might be desirable to replace this function with a more # performant version based on ibase.isc_info_svc_to_eof rather than # ibase.isc_info_svc_line; the function's interface is transparent # either way. # This enhancement should be a very low priority; the Service Manager # API is not typically used for performance-intensive operations. result_lines = [] while 1: try: line = self._QS(ibase.isc_info_svc_line) except fdb.OperationalError: # YYY: It is routine for actions such as RESTORE to raise an # exception at the end of their output. We ignore any such # exception and assume that it was expected, which is somewhat # risky.
For example, suppose the network connection is broken # while the client is receiving the action's output... break if not line: break result_lines.append(line) return line_separator.join(result_lines) def _repair_action(self, database, partial_req_buf, line_separator='\n'): # Begin constructing the request buffer (incorporate the one passed as # param $partial_req_buf). full_req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_repair) # The filename of the database must be specified regardless of the # sub-action being performed. full_req_buf.add_database_name(database) # Incorporate the caller's partial request buffer. full_req_buf.extend(partial_req_buf) self._action_thin(full_req_buf.render()) # Return the results to the caller synchronously (in this case, there # won't be any textual results, but issuing this call will helpfully # cause the program to block until the Services Manager is finished # with the action). return self._collect_unformatted_results(line_separator=line_separator) def _validate_companion_string_numeric_sequences(self, strings, numbers, string_caption, number_caption): # The core constraint here is that len(numbers) must equal len(strings) - 1 strings_count = len(strings) numbers_count = len(numbers) required_numbers_count = strings_count - 1 if numbers_count != required_numbers_count: raise ValueError( 'Since you passed %d %s, you must %s corresponding %s.' % (strings_count, string_caption, ('pass %d' % required_numbers_count if required_numbers_count > 0 else 'not pass any'), number_caption) ) def _exclude_elements_of_types(self, seq, types_to_exclude): if not isinstance(types_to_exclude, tuple): types_to_exclude = tuple(types_to_exclude) return [element for element in seq if not isinstance(element, types_to_exclude)] def _require_str_or_tuple_of_str(self, x): if isinstance(x, str): x = (x,) elif isinstance(x, ibase.myunicode): # We know the following call to _check_string will raise an exception, # but calling it anyway allows us to centralize the error message # generation: _check_string(x) ### Todo: verify handling of Python version differences, refactor? for el in x: _check_string(el) return x def _property_action(self, database, partial_req_buf): # Begin constructing the request buffer (incorporate the one passed as # param $partial_req_buf). full_req_buf = _ServiceActionRequestBuilder( ibase.isc_action_svc_properties) # The filename of the database must be specified regardless of the # sub-action being performed. full_req_buf.add_database_name(database) # Incorporate the caller's partial request buffer. full_req_buf.extend(partial_req_buf) self._action_thin(full_req_buf.render()) # Return the results to the caller synchronously # because it blocks until there's been some resolution of the action. return self._collect_unformatted_results() def _property_action_with_one_num_code(self, database, code, num, num_ctype='I'): req_buf = _ServiceActionRequestBuilder() req_buf.add_numeric(code, num, numctype=num_ctype) return self._property_action(database, req_buf) def close(self): """Close the connection now (rather than whenever `__del__` is called). The connection will be unusable from this point forward; an :exc:`Error` (or subclass) exception will be raised if any operation is attempted with the connection.
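**Example** (a minimal usage sketch; the host, user and password values are placeholders):: >>> svc = fdb.services.connect(host='localhost', user='sysdba', password='masterkey') >>> svc.close() >>> svc.closed True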
""" if self._svc_handle: api.isc_service_detach(self._isc_status, self._svc_handle) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_detach:") self._svc_handle = None self.__fetching = False def readline(self): """Get next line of textual output from last service query. :returns string: Output line. """ if self.__fetching: return self.__fetchline() else: return None def readlines(self): """Get list of remaining output lines from last service query. :returns list: Service output. :raises `~fdb.ProgrammingError`: When service is not in :attr:`fetching` mode. """ return [line for line in self] def isrunning(self): """Returns True if service is running. .. note:: Some services like :meth:`backup` or :meth:`sweep` may take time to comlete, so they're called asynchronously. Until they're finished, no other async service could be started. """ return self._QI(ibase.isc_info_svc_running) > 0 def wait(self): """Wait until running service completes, i.e. stops sending data. """ while self.isrunning(): for x in self: pass def get_service_manager_version(self): """Get Firebird Service Manager version number. :returns integer: Version number. """ self.__check_active() return self._QI(ibase.isc_info_svc_version) def get_server_version(self): """Get Firebird version. :returns string: Firebird version (example: 'LI-V2.5.2.26536 Firebird 2.5'). """ self.__check_active() return self._QS(ibase.isc_info_svc_server_version) def get_architecture(self): """Get Firebird Server architecture. :returns string: Architecture (example: 'Firebird/linux AMD64'). """ self.__check_active() return self._QS(ibase.isc_info_svc_implementation) def get_home_directory(self): """Get Firebird Home (installation) Directory. :returns string: Directory path. """ self.__check_active() return self._QS(ibase.isc_info_svc_get_env) def get_security_database_path(self): """Get full path to Firebird security database. :returns string: Path (path+filename) to security database. """ self.__check_active() return self._QS(ibase.isc_info_svc_user_dbpath) def get_lock_file_directory(self): """Get directory location for Firebird lock files. :returns string: Directory path. """ self.__check_active() return self._QS(ibase.isc_info_svc_get_env_lock) def get_server_capabilities(self): """Get list of Firebird capabilities. :returns tuple: Capability info codes for each capability reported by server. Next fdb.services constants define possible info codes returned:: CAPABILITY_MULTI_CLIENT CAPABILITY_REMOTE_HOP CAPABILITY_SERVER_CONFIG CAPABILITY_QUOTED_FILENAME CAPABILITY_NO_SERVER_SHUTDOWN **Example**:: >>>fdb.services.CAPABILITY_REMOTE_HOP in svc.get_server_capabilities() True """ self.__check_active() capabilities = self._QI(ibase.isc_info_svc_capabilities) return tuple([x for x in (CAPABILITY_MULTI_CLIENT, CAPABILITY_REMOTE_HOP, CAPABILITY_SERVER_CONFIG, CAPABILITY_QUOTED_FILENAME, CAPABILITY_NO_SERVER_SHUTDOWN) if capabilities & x]) def get_message_file_directory(self): """Get directory with Firebird message file. :returns string: Directory path. """ self.__check_active() return self._QS(ibase.isc_info_svc_get_env_msg) def get_connection_count(self): """Get number of attachments to server. :returns integer: Number of attachments. """ self.__check_active() return self._get_isc_info_svc_svr_db_info()[0] def get_attached_database_names(self): """Get list of attached databases. :returns list: Filenames of attached databases. 
""" self.__check_active() return self._get_isc_info_svc_svr_db_info()[1] def get_log(self, callback=None): """Request content of Firebird Server log. **(ASYNC service)** :param function callback: Function to call back with each output line. Function must accept only one parameter: line of output. If `callback` is not specified, log content could be retrieved through :meth:`readline`, :meth:`readlines`, iteration over `Connection` or ignored via call to :meth:`wait`. .. note:: Until log content is not fully fetched from service (or ignored via :meth:`wait`), any attempt to start another asynchronous service will fail with exception. """ self.__check_active() self._act(_ServiceActionRequestBuilder(ibase.isc_action_svc_get_ib_log)) self.__fetching = True self.__eof = False if callback: for line in self: callback(line) def get_limbo_transaction_ids(self, database): """Get list of transactions in limbo. :param string database: Database filename or alias. :returns list: Transaction IDs. :raises `~fdb.InternalError`: When can't process the result buffer. """ self.__check_active() _check_string(database) req_buf = _ServiceActionRequestBuilder() req_buf.add_option_mask(ibase.isc_spb_rpr_list_limbo_trans) raw = self._repair_action(database, req_buf, line_separator='') raw = ibase.b(raw) nbytes = len(raw) trans_ids = [] i = 0 while i < nbytes: byte = ibase.ord2(raw[i]) if byte in (ibase.isc_spb_single_tra_id, ibase.isc_spb_multi_tra_id): # The transaction ID is a 32-bit integer that begins # immediately after position i. trans_id = struct.unpack('i', raw[i + 1:i + 5])[0] i += 5 # Advance past the marker byte and the 32-bit integer. trans_ids.append(trans_id) else: raise fdb.InternalError('Unable to process buffer contents' ' beginning at position %d.' % i) return trans_ids def _resolve_limbo_transaction(self, resolution, database, transaction_id): _check_string(database) req_buf = _ServiceActionRequestBuilder() req_buf.add_numeric(resolution, transaction_id) self._repair_action(database, req_buf) def commit_limbo_transaction(self, database, transaction_id): """Resolve limbo transaction with commit. :param string database: Database filename or alias. :param integer transaction_id: ID of Transaction to resolve. """ self.__check_active() self._resolve_limbo_transaction(ibase.isc_spb_rpr_commit_trans, database, transaction_id) def rollback_limbo_transaction(self, database, transaction_id): """Resolve limbo transaction with rollback. :param string database: Database filename or alias. :param integer transaction_id: ID of Transaction to resolve. """ self.__check_active() self._resolve_limbo_transaction(ibase.isc_spb_rpr_rollback_trans, database, transaction_id) def get_statistics(self, database, show_only_db_log_pages=0, show_only_db_header_pages=0, show_user_data_pages=1, show_user_index_pages=1, show_system_tables_and_indexes=0, show_record_versions=0, callback=None, tables=None): """Request database statisctics. **(ASYNC service)** :param string database: Database specification. :param integer show_only_db_log_pages: `1` to analyze only log pages. :param integer show_only_db_header_pages: `1` to analyze only database header. When set, all other parameters are ignored. :param integer show_user_data_pages: `0` to skip user data analysis. :param integer show_user_index_pages: `0` to skip user index analysis. :param integer show_system_tables_and_indexes: `1` to analyze system tables and indices. :param integer show_record_versions: `1` to analyze record versions. 
:param function callback: Function to call back with each output line. Function must accept only one parameter: line of output. :param string_or_list tables: table name or iterable of table names. If `callback` is not specified, the statistical report can be retrieved through :meth:`readline`, :meth:`readlines`, iteration over `Connection` or ignored via a call to :meth:`wait`. .. note:: Until the report is fully fetched from the service (or ignored via :meth:`wait`), any attempt to start another asynchronous service will fail with an exception. """ self.__check_active() _check_string(database) req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_db_stats) # Request for header negates all other options if show_only_db_header_pages: show_only_db_log_pages = show_user_data_pages = 0 show_user_index_pages = show_system_tables_and_indexes = 0 show_record_versions = 0 option_mask = 0 if show_user_data_pages: option_mask |= ibase.isc_spb_sts_data_pages if show_only_db_log_pages: option_mask |= ibase.isc_spb_sts_db_log if show_only_db_header_pages: option_mask |= ibase.isc_spb_sts_hdr_pages if show_user_index_pages: option_mask |= ibase.isc_spb_sts_idx_pages if show_system_tables_and_indexes: option_mask |= ibase.isc_spb_sts_sys_relations if show_record_versions: option_mask |= ibase.isc_spb_sts_record_versions req_buf.add_database_name(database) req_buf.add_option_mask(option_mask) if tables is not None: if isinstance(tables, ibase.StringTypes): tables = (tables,) cmdline = ['-t'] cmdline.extend(tables) req_buf.add_string(ibase.isc_spb_command_line, ' '.join(cmdline)) self._act(req_buf) self.__fetching = True self.__eof = False if callback: for line in self: callback(line) # Backup and Restore methods: def backup(self, source_database, dest_filenames, dest_file_sizes=(), #factor=None, # YYY:What is this? # Backup operation optionMask: ignore_checksums=0, ignore_limbo_transactions=0, metadata_only=0, collect_garbage=1, transportable=1, convert_external_tables=0, compressed=1, no_db_triggers=0, callback=None, stats=None): """Request logical (GBAK) database backup. **(ASYNC service)** :param string source_database: Source database specification. :param dest_filenames: Backup file(s) specification. :type dest_filenames: string or tuple of strings :param dest_file_sizes: (optional) specification of backup file max. sizes. :type dest_file_sizes: tuple of integers :param integer ignore_checksums: `1` to ignore checksums. :param integer ignore_limbo_transactions: `1` to ignore limbo transactions. :param integer metadata_only: `1` to create only metadata backup. :param integer collect_garbage: `0` to skip garbage collection. :param integer transportable: `0` to not create a transportable backup. :param integer convert_external_tables: `1` to convert external tables to internal ones. :param integer compressed: `0` to create uncompressed backup. :param integer no_db_triggers: `1` to disable database triggers temporarily. :param function callback: Function to call back with each output line. Function must accept only one parameter: line of output. :param list stats: List of arguments for run-time statistics, see STATS_* constants. If `callback` is not specified, the backup log can be retrieved through :meth:`readline`, :meth:`readlines`, iteration over `Connection` or ignored via a call to :meth:`wait`. .. note:: Until the backup report is fully fetched from the service (or ignored via :meth:`wait`), any attempt to start another asynchronous service will fail with an exception.
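**Example** (a minimal sketch, assuming `svc` is an open :class:`Connection`; the database alias and backup path are placeholders):: >>> svc.backup('employee', '/backups/employee.fbk') >>> svc.wait()  # or consume the log: [line for line in svc]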
""" self.__check_active() # Begin parameter validation section. _check_string(source_database) dest_filenames = self._require_str_or_tuple_of_str(dest_filenames) dest_filenames_count = len(dest_filenames) # 2004.07.17: YYY: Temporary warning: # Current (1.5.1) versions of the database engine appear to hang the # Services API request when it contains more than 11 destFilenames if dest_filenames_count > 11: warnings.warn('Current versions of the database engine appear to hang when' ' passed a request to generate a backup with more than 11' ' constituents.', RuntimeWarning) if dest_filenames_count > 9999: raise fdb.ProgrammingError("The database engine cannot output a" " single source database to more than 9999 backup files.") self._validate_companion_string_numeric_sequences(dest_filenames, dest_file_sizes, 'destination filenames', 'destination file sizes') if len(self._exclude_elements_of_types(dest_file_sizes, (int, ibase.mylong))) > 0: raise TypeError("Every element of dest_file_sizes must be an int or long.") dest_file_sizes_count = len(dest_file_sizes) # The following should have already been checked by # _validateCompanionStringNumericSequences. assert dest_file_sizes_count == dest_filenames_count - 1 # End parameter validation section. # Begin option bitmask setup section. option_mask = 0 if ignore_checksums: option_mask |= ibase.isc_spb_bkp_ignore_checksums if ignore_limbo_transactions: option_mask |= ibase.isc_spb_bkp_ignore_limbo if metadata_only: option_mask |= ibase.isc_spb_bkp_metadata_only if not collect_garbage: option_mask |= ibase.isc_spb_bkp_no_garbage_collect if not transportable: option_mask |= ibase.isc_spb_bkp_non_transportable if convert_external_tables: option_mask |= ibase.isc_spb_bkp_convert if not compressed: option_mask |= ibase.isc_spb_bkp_expand if no_db_triggers: option_mask |= ibase.isc_spb_bkp_no_triggers # End option bitmask setup section. # Construct the request buffer. request = _ServiceActionRequestBuilder(ibase.isc_action_svc_backup) # Source database filename: request.add_database_name(source_database) # Backup filenames and sizes: request.add_string_numeric_pairs_sequence(ibase.isc_spb_bkp_file, dest_filenames, ibase.isc_spb_bkp_length, dest_file_sizes) # Options bitmask: request.add_numeric(ibase.isc_spb_options, option_mask) # Tell the service to make its output available to us. request.add_code(ibase.isc_spb_verbose) # handle request for run-time statistics if stats: request.add_string(ibase.isc_spb_bkp_stat, ''.join(stats)) # Done constructing the request buffer. self._act(request) self.__fetching = True self.__eof = False if callback: for line in self: callback(line) def local_backup(self, source_database, backup_stream, # Backup operation optionMask: ignore_checksums=0, ignore_limbo_transactions=0, metadata_only=0, collect_garbage=1, transportable=1, convert_external_tables=0, compressed=1, no_db_triggers=0): """Request logical (GBAK) database backup into local byte stream. **(SYNC service)** :param string source_database: Source database specification. :param backup_stream: Backup stream. :param integer ignore_checksums: `1` to ignore checksums. :param integer ignore_limbo_transactions: `1` to ignore limbo transactions. :param integer metadata_only: `1` to create only metadata backup. :param integer collect_garbage: `0` to skip garbage collection. :param integer transportable: `0` to do not create transportable backup. :param integer convert_external_tables: `1` to convert external table to internal ones. 
:param integer compressed: `0` to create uncompressed backup. :param integer no_db_triggers: `1` to disable database triggers temporarily. """ self.__check_active() # Begin parameter validation section. _check_string(source_database) # Begin option bitmask setup section. option_mask = 0 if ignore_checksums: option_mask |= ibase.isc_spb_bkp_ignore_checksums if ignore_limbo_transactions: option_mask |= ibase.isc_spb_bkp_ignore_limbo if metadata_only: option_mask |= ibase.isc_spb_bkp_metadata_only if not collect_garbage: option_mask |= ibase.isc_spb_bkp_no_garbage_collect if not transportable: option_mask |= ibase.isc_spb_bkp_non_transportable if convert_external_tables: option_mask |= ibase.isc_spb_bkp_convert if not compressed: option_mask |= ibase.isc_spb_bkp_expand if no_db_triggers: option_mask |= ibase.isc_spb_bkp_no_triggers # End option bitmask setup section. # Construct the request buffer. request = _ServiceActionRequestBuilder(ibase.isc_action_svc_backup) # Source database filename: request.add_database_name(source_database) # Backup file transported via stdout: request.add_string(ibase.isc_spb_bkp_file, 'stdout') # Options bitmask: request.add_numeric(ibase.isc_spb_options, option_mask) # Done constructing the request buffer. self._act(request) eof = False request = fdb.bs([ibase.isc_info_svc_to_eof]) spb = ibase.b('') while not eof: api.isc_service_query(self._isc_status, self._svc_handle, None, len(spb), spb, len(request), request, ibase.USHRT_MAX, self._result_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_query:") (result, _) = self._extract_bytes(self._result_buffer, 1) if ord(self._result_buffer[_]) == ibase.isc_info_end: eof = True backup_stream.write(result) def restore(self, source_filenames, dest_filenames, dest_file_pages=(), page_size=None, cache_buffers=None, access_mode_read_only=0, replace=0, deactivate_indexes=0, do_not_restore_shadows=0, do_not_enforce_constraints=0, commit_after_each_table=0, use_all_page_space=0, no_db_triggers=0, metadata_only=0, callback=None, stats=None): """Request database restore from logical (GBAK) backup. **(ASYNC service)** :param source_filenames: Backup file(s) specification. :type source_filenames: string or tuple of strings :param dest_filenames: Database file(s) specification. :type dest_filenames: string or tuple of strings :param dest_file_pages: (optional) specification of database file max. # of pages. :type dest_file_pages: tuple of integers :param integer page_size: (optional) Page size. :param integer cache_buffers: (optional) Size of page-cache for this database. :param integer access_mode_read_only: `1` to create R/O database. :param integer replace: `1` to replace existing database. :param integer deactivate_indexes: `1` to do not activate indices. :param integer do_not_restore_shadows: `1` to do not restore shadows. :param integer do_not_enforce_constraints: `1` to do not enforce constraints during restore. :param integer commit_after_each_table: `1` to commit after each table is restored. :param integer use_all_page_space: `1` to use all space on data pages. :param integer no_db_triggers: `1` to disable database triggers temporarily. :param integer metadata_only: `1` to restore only database metadata. :param function callback: Function to call back with each output line. Function must accept only one parameter: line of output. :param list stats: List of arguments for run-time statistics, see STATS_* constants. 
If `callback` is not specified, restore log could be retrieved through :meth:`readline`, :meth:`readlines`, iteration over `Connection` or ignored via call to :meth:`wait`. .. note:: Until restore report is not fully fetched from service (or ignored via :meth:`wait`), any attempt to start another asynchronous service will fail with exception. """ self.__check_active() # Begin parameter validation section. source_filenames = self._require_str_or_tuple_of_str(source_filenames) dest_filenames = self._require_str_or_tuple_of_str(dest_filenames) self._validate_companion_string_numeric_sequences( dest_filenames, dest_file_pages, 'destination filenames', 'destination file page counts' ) # End parameter validation section. # Begin option bitmask setup section. option_mask = 0 if replace: option_mask |= ibase.isc_spb_res_replace else: option_mask |= ibase.isc_spb_res_create if deactivate_indexes: option_mask |= ibase.isc_spb_res_deactivate_idx if do_not_restore_shadows: option_mask |= ibase.isc_spb_res_no_shadow if do_not_enforce_constraints: option_mask |= ibase.isc_spb_res_no_validity if commit_after_each_table: option_mask |= ibase.isc_spb_res_one_at_a_time if use_all_page_space: option_mask |= ibase.isc_spb_res_use_all_space if no_db_triggers: option_mask |= ibase.isc_spb_bkp_no_triggers if metadata_only: option_mask |= ibase.isc_spb_bkp_metadata_only # End option bitmask setup section. # Construct the request buffer. request = _ServiceActionRequestBuilder(ibase.isc_action_svc_restore) # Backup filenames: request.add_string_sequence(ibase.isc_spb_bkp_file, source_filenames) # Database filenames: request.add_string_numeric_pairs_sequence(ibase.isc_spb_dbname, dest_filenames, ibase.isc_spb_res_length, dest_file_pages) # Page size of the restored database: if page_size: request.add_numeric(ibase.isc_spb_res_page_size, page_size) # cacheBuffers is the number of default cache buffers to configure for # attachments to the restored database: if cache_buffers: request.add_numeric(ibase.isc_spb_res_buffers, cache_buffers) # accessModeReadOnly controls whether the restored database is # "mounted" in read only or read-write mode: if access_mode_read_only: access_mode = ibase.isc_spb_prp_am_readonly else: access_mode = ibase.isc_spb_prp_am_readwrite request.add_numeric(ibase.isc_spb_res_access_mode, access_mode, numctype='B') # Options bitmask: request.add_numeric(ibase.isc_spb_options, option_mask) # Tell the service to make its output available to us. request.add_code(ibase.isc_spb_verbose) # handle request for run-time statistics if stats: request.add_string(ibase.isc_spb_res_stat, ''.join(stats)) # Done constructing the request buffer. self._act(request) self.__fetching = True self.__eof = False if callback: for line in self: callback(line) def local_restore(self, backup_stream, dest_filenames, dest_file_pages=(), page_size=None, cache_buffers=None, access_mode_read_only=0, replace=0, deactivate_indexes=0, do_not_restore_shadows=0, do_not_enforce_constraints=0, commit_after_each_table=0, use_all_page_space=0, no_db_triggers=0, metadata_only=0): """Request database restore from logical (GBAK) backup stored in local byte stream. **(SYNC service)** :param backup_stream: Backup stream. :param dest_filenames: Database file(s) specification. :type dest_filenames: string or tuple of strings :param dest_file_pages: (optional) specification of database file max. # of pages. :type dest_file_pages: tuple of integers :param integer page_size: (optional) Page size. 
:param integer cache_buffers: (optional) Size of page-cache for this database. :param integer access_mode_read_only: `1` to create R/O database. :param integer replace: `1` to replace existing database. :param integer deactivate_indexes: `1` to do not activate indices. :param integer do_not_restore_shadows: `1` to do not restore shadows. :param integer do_not_enforce_constraints: `1` to do not enforce constraints during restore. :param integer commit_after_each_table: `1` to commit after each table is restored. :param integer use_all_page_space: `1` to use all space on data pages. :param integer no_db_triggers: `1` to disable database triggers temporarily. :param integer metadata_only: `1` to restore only database metadata. """ self.__check_active() # Begin parameter validation section. dest_filenames = self._require_str_or_tuple_of_str(dest_filenames) self._validate_companion_string_numeric_sequences( dest_filenames, dest_file_pages, 'destination filenames', 'destination file page counts' ) # End parameter validation section. # Begin option bitmask setup section. option_mask = 0 if replace: option_mask |= ibase.isc_spb_res_replace else: option_mask |= ibase.isc_spb_res_create if deactivate_indexes: option_mask |= ibase.isc_spb_res_deactivate_idx if do_not_restore_shadows: option_mask |= ibase.isc_spb_res_no_shadow if do_not_enforce_constraints: option_mask |= ibase.isc_spb_res_no_validity if commit_after_each_table: option_mask |= ibase.isc_spb_res_one_at_a_time if use_all_page_space: option_mask |= ibase.isc_spb_res_use_all_space if no_db_triggers: option_mask |= ibase.isc_spb_bkp_no_triggers if metadata_only: option_mask |= ibase.isc_spb_bkp_metadata_only # End option bitmask setup section. # Construct the request buffer. request = _ServiceActionRequestBuilder(ibase.isc_action_svc_restore) # Backup stream: request.add_string(ibase.isc_spb_bkp_file, 'stdin') # Database filenames: request.add_string_numeric_pairs_sequence(ibase.isc_spb_dbname, dest_filenames, ibase.isc_spb_res_length, dest_file_pages) # Page size of the restored database: if page_size: request.add_numeric(ibase.isc_spb_res_page_size, page_size) # cacheBuffers is the number of default cache buffers to configure for # attachments to the restored database: if cache_buffers: request.add_numeric(ibase.isc_spb_res_buffers, cache_buffers) # accessModeReadOnly controls whether the restored database is # "mounted" in read only or read-write mode: if access_mode_read_only: access_mode = ibase.isc_spb_prp_am_readonly else: access_mode = ibase.isc_spb_prp_am_readwrite request.add_numeric(ibase.isc_spb_res_access_mode, access_mode, numctype='B') # Options bitmask: request.add_numeric(ibase.isc_spb_options, option_mask) # Done constructing the request buffer. 
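# What follows is the stdin pump for the restore: because the request # named 'stdin' as the backup file, the server asks for chunks of backup # data through isc_info_svc_stdin clusters. Each isc_service_query # response reports how many bytes it wants next (request_length); we read # that many bytes from backup_stream, wrap them in an isc_info_svc_line # cluster and query again, until the server stops requesting data and # produces no further output lines.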
self._act(request) request_length = 0 stop = False pos = backup_stream.tell() backup_stream.seek(0, 2) bytes_available = backup_stream.tell() - pos backup_stream.seek(pos) spb = ctypes.create_string_buffer(16) spb[0] = ibase.int2byte(ibase.isc_info_svc_timeout) spb[1:3] = fdb.uint_to_bytes(4, 2) spb[3:7] = fdb.uint_to_bytes(1, 4) spb[7] = ibase.int2byte(ibase.isc_info_end) wait = True while not stop: if request_length > 0: request_length = min([request_length, 65500]) raw = backup_stream.read(request_length) if len(spb) < request_length+4: spb = ctypes.create_string_buffer(request_length+4) spb[0] = ibase.int2byte(ibase.isc_info_svc_line) spb[1:3] = fdb.uint_to_bytes(len(raw), 2) spb[3:3+len(raw)] = raw spb[3+len(raw)] = ibase.int2byte(ibase.isc_info_end) bytes_available -= len(raw) req = fdb.bs([ibase.isc_info_svc_stdin, ibase.isc_info_svc_line]) api.isc_service_query(self._isc_status, self._svc_handle, None, len(spb), spb, len(req), req, ibase.USHRT_MAX, self._result_buffer) if fdb.db_api_error(self._isc_status): raise fdb.exception_from_status(fdb.DatabaseError, self._isc_status, "Services/isc_service_query:") i = 0 request_length = 0 while self._result_buffer[i] != ibase.int2byte(ibase.isc_info_end): code = ibase.ord2(self._result_buffer[i]) i += 1 if code == ibase.isc_info_svc_stdin: (request_length, i) = self._extract_longint(self._result_buffer, i) elif code == ibase.isc_info_svc_line: (line, i) = self._extract_string(self._result_buffer, i) else: pass if not wait: stop = (request_length == 0) and (len(line) == 0) elif request_length != 0: wait = False # nbackup methods: def nbackup(self, source_database, dest_filename, nbackup_level=0, no_db_triggers=0): """Perform physical (NBACKUP) database backup. :param string source_database: Source database specification. :param dest_filename: Backup file specification. :param integer nbackup_level: Incremental backup level. :param integer no_db_triggers: `1` to disable database triggers temporarily. .. note:: Method call will not return until action is finished. """ self.__check_active() # Begin parameter validation section. _check_string(source_database) _check_string(dest_filename) dest_filename = ibase.b(dest_filename) # Begin option bitmask setup section. option_mask = 0 if no_db_triggers: option_mask |= ibase.isc_spb_bkp_no_triggers # End option bitmask setup section. # Construct the request buffer. request = _ServiceActionRequestBuilder(ibase.isc_action_svc_nbak) # Source database filename: request.add_database_name(source_database) # Backup filename: request.add_string(ibase.isc_spb_nbk_file, dest_filename) # backup level request.add_numeric(ibase.isc_spb_nbk_level, nbackup_level) # Options bitmask: request.add_numeric(ibase.isc_spb_options, option_mask) # Done constructing the request buffer. self._act(request) self.wait() def nrestore(self, source_filenames, dest_filename, no_db_triggers=0): """Perform restore from physical (NBACKUP) database backup. :param source_filenames: Backup file(s) specification. :type source_filenames: string or tuple of strings :param dest_filename: Database file specification. :param integer no_db_triggers: `1` to disable database triggers temporarily. .. note:: Method call will not return until action is finished. """ self.__check_active() # Begin parameter validation section. source_filenames = self._require_str_or_tuple_of_str(source_filenames) _check_string(dest_filename) dest_filename = ibase.b(dest_filename) # Begin option bitmask setup section. 
option_mask = 0 if no_db_triggers: option_mask |= ibase.isc_spb_bkp_no_triggers # End option bitmask setup section. # Construct the request buffer. request = _ServiceActionRequestBuilder(ibase.isc_action_svc_nrest) # Source database filename: request.add_string(ibase.isc_spb_dbname, dest_filename) # Backup filenames: request.add_string_sequence(ibase.isc_spb_nbk_file, source_filenames) # Options bitmask: request.add_numeric(ibase.isc_spb_options, option_mask) # Done constructing the request buffer. self._act(request) self.wait() # Trace def trace_start(self, config, name=None): """Start new trace session. **(ASYNC service)** :param string config: Trace session configuration. :param string name: (optional) Trace session name. :returns integer: Trace session ID. :raises fdb.DatabaseError: When session ID is not returned on start. Trace session output could be retrieved through :meth:`readline`, :meth:`readlines`, iteration over `Connection` or ignored via call to :meth:`wait`. .. note:: Until session output is not fully fetched from service (or ignored via :meth:`wait`), any attempt to start another asynchronous service including call to any `trace_` method will fail with exception. """ self.__check_active() if not name is None: _check_string(name) _check_string(config) # Construct the request buffer. req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_start) # trace name: if not name is None: req_buf.add_string(ibase.isc_spb_trc_name, name) # trace configuration: req_buf.add_string(ibase.isc_spb_trc_cfg, config) self._act(req_buf) self.__fetching = True self.__eof = False response = self._Q(ibase.isc_info_svc_line, self.QUERY_TYPE_PLAIN_STRING) if response.startswith('Trace session ID'): return int(response.split()[3]) else: # response should contain the error message raise fdb.DatabaseError(response) def trace_stop(self, trace_id): """Stop trace session. :param integer trace_id: Trace session ID. :returns string: Text with confirmation that session was stopped. :raises `~fdb.DatabaseError`: When trace session is not stopped. :raises `~fdb.OperationalError`: When server can't perform requested operation. """ self.__check_active() # Construct the request buffer. req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_stop) req_buf.add_numeric(ibase.isc_spb_trc_id, trace_id) response = self._act_and_return_textual_results(req_buf) if not response.startswith("Trace session ID %i stopped" % trace_id): # response should contain the error message raise fdb.DatabaseError(response) return response def trace_suspend(self, trace_id): """Suspend trace session. :param integer trace_id: Trace session ID. :returns string: Text with confirmation that session was paused. :raises `~fdb.DatabaseError`: When trace session is not paused. :raises `~fdb.OperationalError`: When server can't perform requested operation. """ self.__check_active() # Construct the request buffer. req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_suspend) req_buf.add_numeric(ibase.isc_spb_trc_id, trace_id) response = self._act_and_return_textual_results(req_buf) if not response.startswith("Trace session ID %i paused" % trace_id): # response should contain the error message raise fdb.DatabaseError(response) return response def trace_resume(self, trace_id): """Resume trace session. :param integer trace_id: Trace session ID. :returns string: Text with confirmation that session was resumed. :raises `~fdb.DatabaseError`: When trace session is not resumed. 
:raises `~fdb.OperationalError`: When server can't perform requested operation. """ self.__check_active() # Construct the request buffer. req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_resume) req_buf.add_numeric(ibase.isc_spb_trc_id, trace_id) response = self._act_and_return_textual_results(req_buf) if not response.startswith("Trace session ID %i resumed" % trace_id): # response should contain the error message raise fdb.DatabaseError(response) return response def trace_list(self): """Get information about existing trace sessions. :returns dictionary: Mapping `SESSION_ID -> SESSION_PARAMS` Session parameters are another dictionary with the following keys: :name: (string) (optional) Session name if specified. :date: (datetime.datetime) Session start date and time. :user: (string) Trace user name. :flags: (list of strings) Session flags. :raises `~fdb.OperationalError`: When server can't perform requested operation. """ self.__check_active() # Construct the request buffer. req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_trace_list) # Get and parse the returned list. session_list = self._act_and_return_textual_results(req_buf) result = {} session_id = None for line in session_list.split('\n'): if not line.strip(): session_id = None elif line.startswith("Session ID:"): session_id = int(line.split(':')[1].strip()) result[session_id] = dict() elif line.lstrip().startswith("name:"): result[session_id]["name"] = line.split(':')[1].strip() elif line.lstrip().startswith("user:"): result[session_id]["user"] = line.split(':')[1].strip() elif line.lstrip().startswith("date:"): result[session_id]["date"] = datetime.datetime.strptime( line.split(':', 1)[1].strip(), '%Y-%m-%d %H:%M:%S') elif line.lstrip().startswith("flags:"): result[session_id]["flags"] = line.split(':')[1].strip().split(',') else: raise fdb.OperationalError("Unexpected line in trace session list:`%s`" % line) return result # Database property alteration methods: def set_default_page_buffers(self, database, n): """Set individual page cache size for Database. :param string database: Database filename or alias. :param integer n: Number of pages. """ self.__check_active() _check_string(database) self._property_action_with_one_num_code(database, ibase.isc_spb_prp_page_buffers, n) def set_sweep_interval(self, database, n): """Set threshold for automatic sweep. :param string database: Database filename or alias. :param integer n: Sweep threshold, or `0` to disable automatic sweep. """ self.__check_active() _check_string(database) self._property_action_with_one_num_code(database, ibase.isc_spb_prp_sweep_interval, n) def set_reserve_page_space(self, database, reserve_space): """Set data page space reservation policy. :param string database: Database filename or alias. :param boolean reserve_space: `True` to reserve space, `False` to use all space. """ self.__check_active() _check_string(database) if reserve_space: reserve_code = ibase.isc_spb_prp_res else: reserve_code = ibase.isc_spb_prp_res_use_full self._property_action_with_one_num_code(database, ibase.isc_spb_prp_reserve_space, reserve_code, num_ctype='b') def set_write_mode(self, database, mode): """Set Disk Write Mode: Sync (forced writes) or Async (buffered). :param string database: Database filename or alias.
:param integer mode: One from following constants: :data:`~fdb.services.WRITE_FORCED` or :data:`~fdb.services.WRITE_BUFFERED` """ self.__check_active() _check_string(database) if mode not in (WRITE_FORCED, WRITE_BUFFERED): raise ValueError('mode must be one of the following constants:' ' fdb.services.WRITE_FORCED, fdb.services.WRITE_BUFFERED.') self._property_action_with_one_num_code(database, ibase.isc_spb_prp_write_mode, mode, num_ctype='b') def set_access_mode(self, database, mode): """Set Database Access mode: Read Only or Read/Write :param string database: Database filename or alias. :param integer mode: One from following constants: :data:`~fdb.services.ACCESS_READ_WRITE` or :data:`~fdb.services.ACCESS_READ_ONLY` """ self.__check_active() _check_string(database) if mode not in (ACCESS_READ_WRITE, ACCESS_READ_ONLY): raise ValueError('mode must be one of the following constants:' ' fdb.services.ACCESS_READ_WRITE, fdb.services.ACCESS_READ_ONLY.') self._property_action_with_one_num_code(database, ibase.isc_spb_prp_access_mode, mode, num_ctype='b') def set_sql_dialect(self, database, dialect): """Set SQL Dialect for Database. :param string database: Database filename or alias. :param integer dialect: `1` or `3`. """ self.__check_active() _check_string(database) # The IB 6 API Guide says that dialect "must be 1 or 3", but other # dialects may become valid in future versions, so don't require # dialect in (1, 3) self._property_action_with_one_num_code(database, ibase.isc_spb_prp_set_sql_dialect, dialect) def activate_shadow(self, database): """Activate Database Shadow(s). :param string database: Database filename or alias. """ self.__check_active() _check_string(database) req_buf = _ServiceActionRequestBuilder() req_buf.add_option_mask(ibase.isc_spb_prp_activate) self._property_action(database, req_buf) def no_linger(self, database): """Set one-off override for database linger. :param string database: Database filename or alias. """ self.__check_active() _check_string(database) req_buf = _ServiceActionRequestBuilder() req_buf.add_option_mask(ibase.isc_spb_prp_nolinger) self._property_action(database, req_buf) # Database repair/maintenance methods: def shutdown(self, database, shutdown_mode, shutdown_method, timeout): """Database shutdown. :param string database: Database filename or alias. :param integer shutdown_mode: One from following constants: :data:`~fdb.services.SHUT_LEGACY`, :data:`~fdb.services.SHUT_SINGLE`, :data:`~fdb.services.SHUT_MULTI` or :data:`~fdb.services.SHUT_FULL`. :param integer shutdown_method: One from following constants: :data:`~fdb.services.SHUT_FORCE`, :data:`~fdb.services.SHUT_DENY_NEW_TRANSACTIONS` or :data:`~fdb.services.SHUT_DENY_NEW_ATTACHMENTS`. :param integer timeout: Time in seconds, that the shutdown must complete in. .. seealso:: See also :meth:`~Connection.bring_online` method. 
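**Example** (a minimal sketch, assuming `svc` is an open :class:`Connection`; the database alias is a placeholder):: >>> svc.shutdown('employee', fdb.services.SHUT_FULL, fdb.services.SHUT_FORCE, 10)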
""" self.__check_active() _check_string(database) if shutdown_mode not in (SHUT_LEGACY, SHUT_SINGLE, SHUT_MULTI, SHUT_FULL): raise ValueError('shutdown_mode must be one of the following' ' constants: fdb.services.SHUT_LEGACY, fdb.services.SHUT_SINGLE,' ' fdbfdb.services.SHUT_MULTI,' ' fdb.services.SHUT_FULL.') if shutdown_method not in (SHUT_FORCE, SHUT_DENY_NEW_TRANSACTIONS, SHUT_DENY_NEW_ATTACHMENTS): raise ValueError('shutdown_method must be one of the following' ' constants: fdb.services.SHUT_FORCE,' ' fdb.services.SHUT_DENY_NEW_TRANSACTIONS,' ' fdb.services.SHUT_DENY_NEW_ATTACHMENTS.') req_buf = _ServiceActionRequestBuilder() if shutdown_mode != SHUT_LEGACY: req_buf.add_numeric(ibase.isc_spb_prp_shutdown_mode, shutdown_mode, numctype='B') req_buf.add_numeric(shutdown_method, timeout, numctype='I') self._property_action(database, req_buf) def bring_online(self, database, online_mode=SHUT_NORMAL): """Bring previously shut down database back online. :param string database: Database filename or alias. :param integer online_mode: (Optional) One from following constants: :data:`~fdb.services.SHUT_LEGACY`, :data:`~fdb.services.SHUT_SINGLE`, :data:`~fdb.services.SHUT_MULTI` or :data:`~fdb.services.SHUT_NORMAL` (**Default**). .. seealso:: See also :meth:`~Connection.shutdown` method. """ self.__check_active() _check_string(database) if online_mode not in (SHUT_LEGACY, SHUT_NORMAL, SHUT_SINGLE, SHUT_MULTI): raise ValueError('online_mode must be one of the following' ' constants: fdb.services.SHUT_LEGACY, fdb.services.SHUT_NORMAL,' ' fdbfdb.services.SHUT_SINGLE,' ' fdb.services.SHUT_MULTI.') req_buf = _ServiceActionRequestBuilder() if online_mode == SHUT_LEGACY: req_buf.add_option_mask(ibase.isc_spb_prp_db_online) else: req_buf.add_numeric(ibase.isc_spb_prp_online_mode, online_mode, numctype='B') self._property_action(database, req_buf) def sweep(self, database): """Perform Database Sweep. .. note:: Method call will not return until sweep is finished. :param string database: Database filename or alias. """ self.__check_active() _check_string(database) req_buf = _ServiceActionRequestBuilder() option_mask = ibase.isc_spb_rpr_sweep_db req_buf.add_option_mask(option_mask) return self._repair_action(database, req_buf) def repair(self, database, read_only_validation=0, ignore_checksums=0, kill_unavailable_shadows=0, mend_database=0, validate_database=1, validate_record_fragments=1): """Database Validation and Repair. :param string database: Database filename or alias. :param integer read_only_validation: `1` to prevent any database changes. :param integer ignore_checksums: `1` to ignore page checksum errors. :param integer kill_unavailable_shadows: `1` to kill unavailable shadows. :param integer mend_database: `1` to fix database for backup. :param integer validate_database: `0` to skip database validation. :param integer validate_record_fragments: `0` to skip validation of record fragments. .. note:: Method call will not return until action is finished. """ self.__check_active() _check_string(database) # YYY: With certain option combinations, this method raises errors # that may not be very comprehensible to a Python programmer who's not # well versed with IB/FB. Should option combination filtering be # done right here instead of leaving that responsibility to the # database engine? 
# I think not, since any filtering done in this method is liable to # become outdated, or to inadvertently enforce an unnecessary, # crippling constraint on a certain option combination that the # database engine would have allowed. req_buf = _ServiceActionRequestBuilder() option_mask = 0 if read_only_validation: option_mask |= ibase.isc_spb_rpr_check_db if ignore_checksums: option_mask |= ibase.isc_spb_rpr_ignore_checksum if kill_unavailable_shadows: option_mask |= ibase.isc_spb_rpr_kill_shadows if mend_database: option_mask |= ibase.isc_spb_rpr_mend_db if validate_database: option_mask |= ibase.isc_spb_rpr_validate_db if validate_record_fragments: option_mask |= ibase.isc_spb_rpr_full req_buf.add_option_mask(option_mask) return self._repair_action(database, req_buf) # 2003.07.12: Removed method resolveLimboTransactions (dropped plans to # support that operation from kinterbasdb since transactions IDs are not # exposed at the Python level and I don't consider limbo transaction # resolution compelling enough to warrant exposing transaction IDs). def validate(self, database, include_tables=None, exclude_tables=None, include_indices=None, exclude_indices=None, lock_timeout=None, callback=None): """On-line database validation. :param string database: Database filename or alias. :param string include_tables: Pattern for table names to include in validation run. :param string exclude_tables: Pattern for table names to exclude from validation run. :param string include_indices: Pattern for index names to include in validation run. :param string exclude_indices: Pattern for index names to exclude from validation run. :param integer lock_timeout: lock timeout, used to acquire locks for table to validate, in seconds, default is 10 secs. 0 is no-wait, -1 is infinite wait. :param function callback: Function to call back with each output line. Function must accept only one parameter: line of output. .. note:: Patterns are regular expressions, processed by the same rules as SIMILAR TO expressions. All patterns are case-sensitive, regardless of database dialect. If the pattern for tables is omitted then all user tables will be validated. If the pattern for indexes is omitted then all indexes of the appointed tables will be validated. System tables are not validated. If `callback` is not specified, validation log could be retrieved through :meth:`readline`, :meth:`readlines`, iteration over `Connection` or ignored via call to :meth:`wait`. .. note:: Until validate report is not fully fetched from service (or ignored via :meth:`wait`), any attempt to start another asynchronous service will fail with exception. """ self.__check_active() _check_string(database) request = _ServiceActionRequestBuilder(ibase.isc_action_svc_validate) request.add_database_name(database) if include_tables is not None: request.add_string(ibase.isc_spb_val_tab_incl, include_tables) if exclude_tables is not None: request.add_string(ibase.isc_spb_val_tab_excl, exclude_tables) if include_indices is not None: request.add_string(ibase.isc_spb_val_idx_incl, include_indices) if exclude_indices is not None: request.add_string(ibase.isc_spb_val_idx_excl, exclude_indices) if lock_timeout is not None: request.add_numeric(ibase.isc_spb_val_lock_timeout, lock_timeout, numctype='i') # Done constructing the request buffer. self._act(request) self.__fetching = True self.__eof = False if callback: for line in self: callback(line) # User management methods: def get_users(self, user_name=None): """Get information about user(s). 
:param string user_name: (Optional) When specified, returns information only about user with specified user name. :returns list: :class:`User` instances. """ self.__check_active() if user_name is not None: if isinstance(user_name, ibase.myunicode): _check_string(user_name) user_name = ibase.b(user_name) req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_display_user) if user_name: user_name = user_name.upper() req_buf.add_string(ibase.isc_spb_sec_username, user_name) self._act(req_buf) raw = self._QR(ibase.isc_info_svc_get_users) users = [] cur_user = None pos = 1 # Ignore raw[0]. upper_limit = len(raw) - 1 while pos < upper_limit: cluster = ibase.ord2(raw[pos]) pos += 1 if cluster == ibase.isc_spb_sec_username: if cur_user is not None: users.append(cur_user) cur_user = None (user_name, pos) = self._extract_string(raw, pos) cur_user = User(user_name) elif cluster == ibase.isc_spb_sec_password: (password, pos) = self._extract_string(raw, pos) cur_user.password = password elif cluster == ibase.isc_spb_sec_firstname: (first_name, pos) = self._extract_string(raw, pos) cur_user.first_name = first_name elif cluster == ibase.isc_spb_sec_middlename: (middle_name, pos) = self._extract_string(raw, pos) cur_user.middle_name = middle_name elif cluster == ibase.isc_spb_sec_lastname: (last_name, pos) = self._extract_string(raw, pos) cur_user.last_name = last_name elif cluster == ibase.isc_spb_sec_groupid: (group_id, pos) = self._extract_int(raw, pos) cur_user.group_id = group_id elif cluster == ibase.isc_spb_sec_userid: (user_id, pos) = self._extract_int(raw, pos) cur_user.user_id = user_id # Handle the last user: if cur_user is not None: users.append(cur_user) cur_user = None return users def add_user(self, user): """Add new user. :param user: Instance of :class:`User` with **at least** its :attr:`~User.name` and :attr:`~User.password` attributes specified as non-empty values. All other attributes are optional. :type user: :class:`User` .. note:: This method ignores the :attr:`~User.user_id` and :attr:`~User.group_id` attributes of :class:`~User` regardless of their values. """ self.__check_active() if not user.name: raise fdb.ProgrammingError('You must specify a username.') else: _check_string(user.name) user.name = ibase.b(user.name) if not user.password: raise fdb.ProgrammingError('You must specify a password.') else: _check_string(user.password) user.password = ibase.b(user.password) req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_add_user) req_buf.add_string(ibase.isc_spb_sec_username, user.name) req_buf.add_string(ibase.isc_spb_sec_password, user.password) if user.first_name: user.first_name = ibase.b(user.first_name) req_buf.add_string(ibase.isc_spb_sec_firstname, user.first_name) if user.middle_name: user.middle_name = ibase.b(user.middle_name) req_buf.add_string(ibase.isc_spb_sec_middlename, user.middle_name) if user.last_name: user.last_name = ibase.b(user.last_name) req_buf.add_string(ibase.isc_spb_sec_lastname, user.last_name) self._act_and_return_textual_results(req_buf) def modify_user(self, user): """Modify user information. :param user: Instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. :type user: :class:`User` .. note:: This method sets :attr:`~User.first_name`, :attr:`~User.middle_name` and :attr:`~User.last_name` to their actual values, and ignores the :attr:`~User.user_id` and :attr:`~User.group_id` attributes regardless of their values. :attr:`~User.password` is set **only** when it has value. 
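**Example** (a minimal sketch, assuming `svc` is an open :class:`Connection` and that user BOB exists):: >>> user = svc.get_users('bob')[0] >>> user.first_name = 'Robert' >>> svc.modify_user(user)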
""" self.__check_active() req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_modify_user) if isinstance(user.name, str): user.name = ibase.b(user.name) req_buf.add_string(ibase.isc_spb_sec_username, user.name) if isinstance(user.password, str): user.password = ibase.b(user.password) req_buf.add_string(ibase.isc_spb_sec_password, user.password) # Change the optional attributes whether they're empty or not. if isinstance(user.first_name, str): user.first_name = ibase.b(user.first_name) req_buf.add_string(ibase.isc_spb_sec_firstname, user.first_name) if isinstance(user.middle_name, str): user.middle_name = ibase.b(user.middle_name) req_buf.add_string(ibase.isc_spb_sec_middlename, user.middle_name) if isinstance(user.last_name, str): user.last_name = ibase.b(user.last_name) req_buf.add_string(ibase.isc_spb_sec_lastname, user.last_name) self._act_and_return_textual_results(req_buf) def remove_user(self, user): """Remove user. :param user: User name or Instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. :type user: string or :class:`User` """ self.__check_active() if isinstance(user, User): username = user.name else: _check_string(user) user = ibase.b(user) username = user req_buf = _ServiceActionRequestBuilder(ibase.isc_action_svc_delete_user) req_buf.add_string(ibase.isc_spb_sec_username, username) self._act_and_return_textual_results(req_buf) def user_exists(self, user): """Check for user's existence. :param user: User name or Instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as non-empty value. :type user: string or :class:`User` :returns boolean: `True` when the specified user exists. """ self.__check_active() if isinstance(user, User): username = user.name else: username = user return len(self.get_users(user_name=username)) > 0 #: (Read Only) `True` if connection is closed. closed = property(__get_closed) #: (Read Only) `True` if connection is fetching result. fetching = property(__get_fetching) #: (Read Only) (string) Firebird version number string of connected server. #: Uses Firebird version numbers in form: major.minor.subrelease.build version = property(__get_version) #: (Read Only) (float) Firebird version number of connected server. Only major.minor version. engine_version = property(__get_engine_version) class User(object): def __init__(self, name=None): if name: _check_string(name) self.name = name.upper() else: #: User `login` name (username). self.name = None #: Password. Not returned by user output methods, but must be #: specified to add a user. self.password = None #: First name. self.first_name = None #: Middle name self.middle_name = None #: Last name self.last_name = None # The user id and group id are not fully supported. For details, see # the documentation of the "User Management Methods" of # services.Connection. #: User ID self.user_id = None #: User group ID self.group_id = None def __str__(self): return '' % ( (self.name is None and 'without a name') or 'named "%s"' % self.name) def load_information(self, svc): """Load information about user from server. :param svc: Open service connection. :type svc: :class:`Connection` :returns: True if information was successfuly retrieved, False otherwise. :raises `~fdb.ProgrammingError`: If user name is not defined. 
""" if self.name is None: raise fdb.ProgrammingError("Can't load information about user without name.") user = svc.get_users(self.name) if len(user) > 0: self.first_name = user.first_name self.middle_name = user.middle_name self.last_name = user.last_name self.user_id = user.user_id self.group_id = user.group_id return len(user) > 0 class _ServiceActionRequestBuilder(object): # This private class helps public facilities in this module to build # the binary action request buffers required by the database Services API # using high-level, easily comprehensible syntax. def __init__(self, clusterIdentifier=None): self.ci = clusterIdentifier self.clear() def __str__(self): return self.render() def clear(self): self._buffer = [] if self.ci: self.add_code(self.ci) def extend(self, other_request_builder): self._buffer.append(other_request_builder.render()) def add_code(self, code): _code2spb(self._buffer, code) def add_string(self, code, s): _check_string(s) _string2spb(self._buffer, code, ibase.b(s)) def add_string_sequence(self, code, string_sequence): for s in string_sequence: self.add_string(code, s) def add_string_numeric_pairs_sequence(self, string_code, string_sequence, numeric_code, numeric_sequence): string_count = len(string_sequence) numeric_count = len(numeric_sequence) if numeric_count != string_count - 1: raise ValueError("Numeric sequence must contain exactly one less" " element than its companion string sequence.") i = 0 while i < string_count: self.add_string(string_code, string_sequence[i]) if i < numeric_count: self.add_numeric(numeric_code, numeric_sequence[i]) i += 1 def add_numeric(self, code, n, numctype='I'): _numeric2spb(self._buffer, code, n, numctype=numctype) def add_option_mask(self, option_mask): self.add_numeric(ibase.isc_spb_options, option_mask) def add_database_name(self, database_name): # 2003.07.20: Issue a warning for a hostname-containing databaseName # because it will cause isc_service_start to raise an inscrutable error # message with Firebird 1.5 (though it would not have raised an error # at all with Firebird 1.0 and earlier). ### Todo: verify handling of P version differences, refactor database_name = ibase.b(database_name, fdb.fbcore._FS_ENCODING) if ibase.PYTHON_MAJOR_VER == 3: colon_index = (database_name.decode(fdb.fbcore._FS_ENCODING)).find(':') else: colon_index = database_name.find(':') if colon_index != -1: # This code makes no provision for platforms other than Windows # that allow colons in paths (such as MacOS). Some of # kinterbasdb's current implementation (e.g., event handling) is # constrained to Windows or POSIX anyway. if not sys.platform.lower().startswith('win') or ( # This client process is running on Windows. # # Files that don't exist might still be valid if the connection # is to a server other than the local machine. not os.path.exists(ibase.nativestr(database_name, fdb.fbcore._FS_ENCODING)) # "Guess" that if the colon falls within the first two # characters of the string, the pre-colon portion refers to a # Windows drive letter rather than to a remote host. # This isn't guaranteed to be correct. and colon_index > 1): warnings.warn(' Unlike conventional DSNs, Services API database names' ' must not include the host name; remove the "%s" from' ' your database name.' 
                              ' (Firebird 1.0 will accept this, but Firebird 1.5 will'
                              ' raise an error.)' % database_name[:colon_index + 1],
                              UserWarning)
        self.add_string(ibase.isc_spb_dbname, database_name)

    def render(self):
        return ibase.b('').join(self._buffer)
fdb-2.0.0/fdb/log.py0000644000175000001440000000557213270362207014605 0ustar pcisarusers00000000000000#coding:utf-8
#
#   PROGRAM/MODULE: fdb
#   FILE:           log.py
#   DESCRIPTION:    Python driver for Firebird - Firebird server log parser
#   CREATED:        11.4.2018
#
#  Software distributed under the License is distributed AS IS,
#  WITHOUT WARRANTY OF ANY KIND, either express or implied.
#  See the License for the specific language governing rights
#  and limitations under the License.
#
#  The Original Code was created by Pavel Cisar
#
#  Copyright (c) Pavel Cisar <pcisar@users.sourceforge.net>
#  and all contributors signed below.
#
#  All Rights Reserved.
#  Contributor(s): ______________________________________.
#
# See LICENSE.TXT for details.

from fdb import ParseError
from datetime import datetime
from collections import namedtuple
from locale import LC_ALL, getlocale, setlocale, resetlocale
import sys

LogEntry = namedtuple('LogEntry', 'source_id,timestamp,message')

def parse(lines):
    """Parse Firebird server log and yield named tuples describing individual
    log entries/events.

    :param lines: Iterable of lines from Firebird server log.

    :raises `~fdb.ParseError`: When any problem is found in input stream.
    """
    line_no = 0
    locale = getlocale(LC_ALL)
    if sys.platform == 'win32':
        setlocale(LC_ALL, 'English_United States')
    else:
        setlocale(LC_ALL, 'en_US')
    try:
        clean = (line.strip() for line in lines)
        entry_lines = []
        timestamp = None
        source_id = 'UNKNOWN'
        for line in clean:
            line_no += 1
            if line == '':
                continue
            items = line.split()
            if len(items) > 5:  # It's potentially a new entry
                try:
                    new_timestamp = datetime.strptime(' '.join(items[len(items)-5:]),
                                                      '%a %b %d %H:%M:%S %Y')
                except ValueError:
                    new_timestamp = None
                if new_timestamp is not None:
                    if entry_lines:
                        yield LogEntry(source_id=source_id, timestamp=timestamp,
                                       message='\n'.join(entry_lines))
                        entry_lines = []
                    # Init new entry
                    timestamp = new_timestamp
                    source_id = ' '.join(items[:len(items)-5])
                else:
                    entry_lines.append(line)
            else:
                entry_lines.append(line)
        if entry_lines:
            yield LogEntry(source_id=source_id, timestamp=timestamp,
                           message='\n'.join(entry_lines))
    except Exception as e:
        # Exception instances have no 'message' attribute on Python 3,
        # so use str(e) to get the error text.
        raise ParseError("Can't parse line %d\n%s" % (line_no, str(e)))
    finally:
        if locale[0] is None:
            if sys.platform == 'win32':
                setlocale(LC_ALL, '')
            else:
                resetlocale(LC_ALL)
        else:
            setlocale(LC_ALL, locale)
fdb-2.0.0/fdb/trace.py0000644000175000001440000013343613270362207015123 0ustar pcisarusers00000000000000#coding:utf-8
#
#   PROGRAM/MODULE: fdb
#   FILE:           trace.py
#   DESCRIPTION:    Python driver for Firebird - Firebird Trace & Audit
#   CREATED:        10.12.2017
#
#  Software distributed under the License is distributed AS IS,
#  WITHOUT WARRANTY OF ANY KIND, either express or implied.
#  See the License for the specific language governing rights
#  and limitations under the License.
#
#  The Original Code was created by Pavel Cisar
#
#  Copyright (c) Pavel Cisar <pcisar@users.sourceforge.net>
#  and all contributors signed below.
#
#  All Rights Reserved.
#  Contributor(s): ______________________________________.
#
# See LICENSE.TXT for details.

import fdb
import datetime
import decimal
import collections
from . import utils

try:
    from sys import intern
except ImportError:
    pass

#: Trace event status codes
STATUS_OK = ' '
STATUS_FAILED = 'F'
STATUS_UNAUTHORIZED = 'U'
STATUS_UNKNOWN = '?'
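# A minimal usage sketch for this module (the file name 'fbtrace.log' below is
# illustrative, not part of the API): TraceParser consumes an iterable of lines
# captured from a Firebird trace session and yields the named tuples defined in
# this module.
#
#     from fdb.trace import TraceParser
#
#     parser = TraceParser()
#     with open('fbtrace.log') as f:
#         for item in parser.parse(f):
#             print(item)
#
# Note that parse() yields the supporting AttachmentInfo/TransactionInfo/
# ServiceInfo/SQLInfo/ParamInfo tuples as they are first seen, interleaved
# with the EventXXX tuples that reference them.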
#: Trace event codes, also works as index to EVENTS list EVENT_TRACE_INIT = 0 EVENT_TRACE_SUSPEND = 1 EVENT_TRACE_END = 2 EVENT_CREATE_DATABASE = 3 EVENT_DROP_DATABASE = 4 EVENT_ATTACH = 5 EVENT_DETACH = 6 EVENT_TRANSACTION_START = 7 EVENT_COMMIT = 8 EVENT_ROLLBACK = 9 EVENT_COMMIT_R = 10 EVENT_ROLLBACK_R = 11 EVENT_STMT_PREPARE = 12 EVENT_STMT_START = 13 EVENT_STMT_END = 14 EVENT_STMT_FREE = 15 EVENT_STMT_CLOSE = 16 EVENT_TRG_START = 17 EVENT_TRG_END = 18 EVENT_PROC_START = 19 EVENT_PROC_END = 20 EVENT_SVC_START = 21 EVENT_SVC_ATTACH = 22 EVENT_SVC_DETACH = 23 EVENT_SVC_QUERY = 24 EVENT_SET_CONTEXT = 25 EVENT_ERROR = 26 EVENT_WARNING = 27 EVENT_SWEEP_START = 28 EVENT_SWEEP_PROGRESS = 29 EVENT_SWEEP_FINISH = 30 EVENT_SWEEP_FAILED = 31 EVENT_BLR_COMPILE = 32 EVENT_BLR_EXECUTE = 33 EVENT_DYN_EXECUTE = 34 EVENT_UNKNOWN = 35 #: List of trace event names in order matching their numeric codes EVENTS = ['TRACE_INIT', 'TRACE_SUSPENDED', 'TRACE_FINI', 'CREATE_DATABASE', 'DROP_DATABASE', 'ATTACH_DATABASE', 'DETACH_DATABASE', 'START_TRANSACTION', 'COMMIT_TRANSACTION', 'ROLLBACK_TRANSACTION', 'COMMIT_RETAINING', 'ROLLBACK_RETAINING', 'PREPARE_STATEMENT', 'EXECUTE_STATEMENT_START', 'EXECUTE_STATEMENT_FINISH', 'FREE_STATEMENT', 'CLOSE_CURSOR', 'EXECUTE_TRIGGER_START', 'EXECUTE_TRIGGER_FINISH', 'EXECUTE_PROCEDURE_START', 'EXECUTE_PROCEDURE_FINISH', 'START_SERVICE', 'ATTACH_SERVICE', 'DETACH_SERVICE', 'QUERY_SERVICE', 'SET_CONTEXT', 'ERROR', 'WARNING', 'SWEEP_START', 'SWEEP_PROGRESS', 'SWEEP_FINISH', 'SWEEP_FAILED', 'COMPILE_BLR', 'EXECUTE_BLR', 'EXECUTE_DYN', 'UNKNOWN'] # # Named tuples for individual trace events AttachmentInfo = collections.namedtuple('AttachmentInfo', 'attachment_id,database,charset,protocol,address,user,role,remote_process,remote_pid') TransactionInfo = collections.namedtuple('TransactionInfo', 'attachment_id,transaction_id,options') ServiceInfo = collections.namedtuple('ServiceInfo', 'service_id,user,protocol,address,remote_process,remote_pid') SQLInfo = collections.namedtuple('SQLInfo', 'sql_id,sql,plan') ParamInfo = collections.namedtuple('ParamInfo', 'par_id,params') # AccessTuple = collections.namedtuple('AccessTuple', 'table,natural,index,update,insert,delete,backout,purge,expunge') # EventTraceInit = collections.namedtuple('EventTraceInit', 'event_id,timestamp,session_name') EventTraceSuspend = collections.namedtuple('EventTraceSuspend', 'event_id,timestamp,session_name') EventTraceFinish = collections.namedtuple('EventTraceFinish', 'event_id,timestamp,session_name') # EventCreate = collections.namedtuple('EventCreate', 'event_id,timestamp,status,attachment_id,database,charset,protocol,address,user,role,remote_process,remote_pid') EventDrop = collections.namedtuple('EventDrop', 'event_id,timestamp,status,attachment_id,database,charset,protocol,address,user,role,remote_process,remote_pid') EventAttach = collections.namedtuple('EventAttach', 'event_id,timestamp,status,attachment_id,database,charset,protocol,address,user,role,remote_process,remote_pid') EventDetach = collections.namedtuple('EventDetach', 'event_id,timestamp,status,attachment_id,database,charset,protocol,address,user,role,remote_process,remote_pid') # EventTransactionStart = collections.namedtuple('EventTransactionStart', 'event_id,timestamp,status,attachment_id,transaction_id,options') EventCommit = collections.namedtuple('EventCommit', 'event_id,timestamp,status,attachment_id,transaction_id,options,run_time,reads,writes,fetches,marks') EventRollback = collections.namedtuple('EventRollback', 
'event_id,timestamp,status,attachment_id,transaction_id,options,run_time,reads,writes,fetches,marks') EventCommitRetaining = collections.namedtuple('EventCommitRetaining', 'event_id,timestamp,status,attachment_id,transaction_id,options,run_time,reads,writes,fetches,marks') EventRollbackRetaining = collections.namedtuple('EventRollbackRetaining', 'event_id,timestamp,status,attachment_id,transaction_id,options,run_time,reads,writes,fetches,marks') # EventPrepareStatement = collections.namedtuple('EventPrepareStatement', 'event_id,timestamp,status,attachment_id,transaction_id,statement_id,sql_id,prepare_time') EventStatementStart = collections.namedtuple('EventStatementStart', 'event_id,timestamp,status,attachment_id,transaction_id,statement_id,sql_id,param_id') EventStatementFinish = collections.namedtuple('EventStatementFinish', 'event_id,timestamp,status,attachment_id,transaction_id,statement_id,sql_id,param_id,records,run_time,reads,writes,fetches,marks,access') EventFreeStatement = collections.namedtuple('EventFreeStatement', 'event_id,timestamp,attachment_id,transaction_id,statement_id,sql_id') EventCloseCursor = collections.namedtuple('EventCloseCursor', 'event_id,timestamp,attachment_id,transaction_id,statement_id,sql_id') # EventTriggerStart = collections.namedtuple('EventTriggerStart', 'event_id,timestamp,status,attachment_id,transaction_id,trigger,table,event') EventTriggerFinish = collections.namedtuple('EventTriggerFinish', 'event_id,timestamp,status,attachment_id,transaction_id,trigger,table,event,run_time,reads,writes,fetches,marks,access') # EventProcedureStart = collections.namedtuple('EventProcedureStart', 'event_id,timestamp,status,attachment_id,transaction_id,procedure,param_id') EventProcedureFinish = collections.namedtuple('EventProcedureFinish', 'event_id,timestamp,status,attachment_id,transaction_id,procedure,param_id,run_time,reads,writes,fetches,marks,access') # EventServiceAttach = collections.namedtuple('EventServiceAttach', 'event_id,timestamp,status,service_id') EventServiceDetach = collections.namedtuple('EventServiceDetach', 'event_id,timestamp,status,service_id') EventServiceStart = collections.namedtuple('EventServiceStart', 'event_id,timestamp,status,service_id,action,parameters') EventServiceQuery = collections.namedtuple('EventServiceQuery', 'event_id,timestamp,status,service_id,action,parameters') # EventSetContext = collections.namedtuple('EventSetContext', 'event_id,timestamp,attachment_id,transaction_id,context,key,value') # EventError = collections.namedtuple('EventError', 'event_id,timestamp,attachment_id,place,details') EventServiceError = collections.namedtuple('EventServiceError', 'event_id,timestamp,service_id,place,details') EventWarning = collections.namedtuple('EventWarning', 'event_id,timestamp,attachment_id,place,details') EventServiceWarning = collections.namedtuple('EventServiceWarning', 'event_id,timestamp,service_id,place,details') # EventSweepStart = collections.namedtuple('EventSweepStart', 'event_id,timestamp,attachment_id,oit,oat,ost,next') EventSweepProgress = collections.namedtuple('EventSweepProgress', 'event_id,timestamp,attachment_id,run_time,reads,writes,fetches,marks,access') EventSweepFinish = collections.namedtuple('EventSweepFinish', 'event_id,timestamp,attachment_id,oit,oat,ost,next,run_time,reads,writes,fetches,marks') EventSweepFailed = collections.namedtuple('EventSweepFailed', 'event_id,timestamp,attachment_id') # EventBLRCompile = collections.namedtuple('EventBLRCompile', 
'event_id,timestamp,status,attachment_id,statement_id,content,prepare_time') EventBLRExecute = collections.namedtuple('EventBLRExecute', 'event_id,timestamp,status,attachment_id,transaction_id,statement_id,content,run_time,reads,writes,fetches,marks,access') EventDYNExecute = collections.namedtuple('EventDYNExecute', 'event_id,timestamp,status,attachment_id,transaction_id,content,run_time') # EventUnknown = collections.namedtuple('EventUnknown', 'event_id,timestamp,data') class TraceParser(object): """Parser for standard textual trace log. Produces named tuples describing individual trace log entries/events. Attributes: :seen_attachments: Set of attachment ids that were already processed. :seen_transactions: Set of transaction ids that were already processed. :seen_services: Set of service ids that were already processed. :sqlinfo_map: Dictionary that maps (sql_cmd,plan) keys to internal ids :param_map: Dictionary that maps parameters (statement or procedure) keys to internal ids :next_event_id: Sequence id that would be assigned to next parsed event (starts with 1). :next_sql_id: Sequence id that would be assigned to next parsed unique SQL command (starts with 1). :next_param_id: Sequence id that would be assigned to next parsed unique parameter (starts with 1). """ def __init__(self): self.seen_attachments = set() self.seen_transactions = set() self.seen_services = set() self.sqlinfo_map = {} self.param_map = {} self.next_event_id = 1 self.next_sql_id = 1 self.next_param_id = 1 # self.__buffer = [] self.__current_event = None self.__current_block = None self.__last_timestamp = None self.__event_values = {} self.__parse_map = {EVENT_TRACE_INIT: self.__parser_trace_init, EVENT_TRACE_END: self.__parser_trace_finish, EVENT_TRANSACTION_START: self.__parser_start_transaction, EVENT_COMMIT: self.__parser_commit_transaction, EVENT_ROLLBACK: self.__parser_rollback_transaction, EVENT_COMMIT_R: self.__parser_commit_retaining, EVENT_ROLLBACK_R: self.__parser_rollback_retaining, EVENT_STMT_PREPARE: self.__parser_prepare_statement, EVENT_STMT_START: self.__parser_execute_statement_start, EVENT_STMT_END: self.__parser_execute_statement_finish, EVENT_STMT_FREE: self.__parser_free_statement, EVENT_STMT_CLOSE: self.__parser_close_cursor, EVENT_TRG_START: self.__parser_trigger_start, EVENT_TRG_END: self.__parser_trigger_finish, EVENT_PROC_START: self.__parser_procedure_start, EVENT_PROC_END: self.__parser_procedure_finish, EVENT_CREATE_DATABASE: self.__parser_create_db, EVENT_DROP_DATABASE: self.__parser_drop_db, EVENT_ATTACH: self.__parser_attach, EVENT_DETACH: self.__parser_detach, EVENT_SVC_START: self.__parser_service_start, EVENT_SVC_ATTACH: self.__parser_service_attach, EVENT_SVC_DETACH: self.__parser_service_detach, EVENT_SVC_QUERY: self.__parser_service_query, EVENT_SET_CONTEXT: self.__parser_set_context, EVENT_ERROR: self.__parser_error, EVENT_WARNING: self.__parser_warning, EVENT_SWEEP_START: self.__parser_sweep_start, EVENT_SWEEP_PROGRESS: self.__parser_sweep_progress, EVENT_SWEEP_FINISH: self.__parser_sweep_finish, EVENT_SWEEP_FAILED: self.__parser_sweep_failed, EVENT_BLR_COMPILE: self.__parser_blr_compile, EVENT_BLR_EXECUTE: self.__parser_blr_execute, EVENT_DYN_EXECUTE: self.__parser_dyn_execute, EVENT_UNKNOWN: self.__parser_unknown} def _is_entry_header(self, line): """Returns True if parameter is trace log entry header. This version only checks that first item is a timestamp in valid format. :param string line: Line of text to be checked. 
""" items = line.split() try: timestamp = datetime.datetime.strptime(items[0], '%Y-%m-%dT%H:%M:%S.%f') return True except: return False def _is_session_suspended(self, line): """Returns True if parameter is trace log message that trace session was suspended due to full log. :param string line: Line of text to be checked. """ return line.rfind('is suspended as its log is full ---') >= 0 def _is_plan_separator(self, line): """Returns True if parameter is statement plan separator. :param string line: Line of text to be checked. """ return line == '^'*79 def _is_perf_start(self, line): """Returns True if parameter is first item of statement performance information. :param string line: Line of text to be checked. """ return line.endswith(' records fetched') def _is_blr_perf_start(self, line): """Returns True if parameter is first item of BLR/DYN performance information. :param string line: Line of text to be checked. """ parts = line.split() return 'ms' in parts or 'fetch(es)' in parts or 'mark(s)' in parts or 'read(s)' in parts or 'write(s)' in parts def _is_param_start(self, line): """Returns True if parameter is first item in list of parameters. :param string line: Line of text to be checked. """ return line.startswith('param0 = ') def _iter_trace_blocks(self, ilines): lines = [] for line in ilines: line = line.strip() if line: if not lines: if self._is_entry_header(line): lines.append(line) else: if self._is_entry_header(line) or self._is_session_suspended(line): yield lines lines = [line] else: lines.append(line) if lines: yield lines def _parse_header(self, line): """Parses trace entry header into 3-item tuple. :param string line: Line of text to be parsed. :returns: Tuple with items: (timestamp, status, trace_entry_type_id) :raises `~fdb.ParseError`: When event is not recognized """ items = line.split() timestamp = datetime.datetime.strptime(items[0], '%Y-%m-%dT%H:%M:%S.%f') if (len(items) == 3) or (items[2] in ['ERROR', 'WARNING']): return (timestamp, STATUS_OK, EVENTS.index(items[2]) if items[2] in EVENTS else EVENT_UNKNOWN) else: if items[2] == 'UNAUTHORIZED': return (timestamp, STATUS_UNAUTHORIZED, EVENTS.index(items[3])) elif items[2] == 'FAILED': return (timestamp, STATUS_FAILED, EVENTS.index(items[3])) elif items[2] == 'Unknown': return (timestamp, STATUS_UNKNOWN, EVENT_UNKNOWN) # ' '.join(items[3:])) else: raise fdb.ParseError('Unrecognized event header: "%s"' % line) def _parse_attachment_info(self, values, check=True): line = self.__current_block.popleft() database, sep, attachment = line.partition(' (') values['database'] = database attachment_id, user_role, charset, protocol_address = attachment.strip('()').split(',') pad, s = attachment_id.split('_') values['attachment_id'] = int(s) values['charset'] = charset.strip() # protocol_address = protocol_address.strip() if protocol_address == '': protocol = address = protocol_address else: protocol, address = protocol_address.split(':') values['protocol'] = protocol values['address'] = address if ':' in user_role: a, b = user_role.strip().split(':') else: a = user_role.strip() b = 'NONE' values['user'] = a values['role'] = b if protocol_address == '': values['remote_process'] = None values['remote_pid'] = None elif len(self.__current_block) > 0 and not (self.__current_block[0].startswith('(TRA') or ' ms,' in self.__current_block[0] or 'Transaction counters:' in self.__current_block[0]): remote_process_id = self.__current_block.popleft() remote_process, remote_pid = remote_process_id.rsplit(':', 1) values['remote_process'] = 
remote_process values['remote_pid'] = int(remote_pid) else: values['remote_process'] = None values['remote_pid'] = None # if check and values['attachment_id'] not in self.seen_attachments: self.__buffer.append(AttachmentInfo(**values)) self.seen_attachments.add(values['attachment_id']) def _parse_transaction_info(self, values, check=True): # Transaction parameters transaction_id, transaction_options = self.__current_block.popleft().strip('\t ()').split(',') pad, s = transaction_id.split('_') values['attachment_id'] = values['attachment_id'] values['transaction_id'] = int(s) values['options'] = [intern(x.strip()) for x in transaction_options.split('|')] if check and values['transaction_id'] not in self.seen_transactions: self.__buffer.append(TransactionInfo(**values)) self.seen_transactions.add(values['transaction_id']) def _parse_transaction_performance(self): self.__event_values['run_time'] = None self.__event_values['reads'] = None self.__event_values['writes'] = None self.__event_values['fetches'] = None self.__event_values['marks'] = None if self.__current_block: values = self.__current_block.popleft().split(',') while values: value, val_type = values.pop().split() if 'ms' in val_type: self.__event_values['run_time'] = int(value) elif 'read' in val_type: self.__event_values['reads'] = int(value) elif 'write' in val_type: self.__event_values['writes'] = int(value) elif 'fetch' in val_type: self.__event_values['fetches'] = int(value) elif 'mark' in val_type: self.__event_values['marks'] = int(value) else: raise fdb.ParseError("Unhandled performance parameter %s" % val_type) def _parse_attachment_and_transaction(self): # Attachment att_values = {} self._parse_attachment_info(att_values) # Transaction tr_values = {} tr_values['attachment_id'] = att_values['attachment_id'] self._parse_transaction_info(tr_values) self.__event_values['attachment_id'] = tr_values['attachment_id'] self.__event_values['transaction_id'] = tr_values['transaction_id'] def _parse_statement_id(self): self.__event_values['plan'] = None self.__event_values['sql'] = None pad, s = self.__current_block.popleft().split() self.__event_values['statement_id'] = int(s[:-1]) if self.__current_block.popleft() != '-'*79: raise fdb.ParseError("Separator '-'*79 line expected") def _parse_blr_statement_id(self): line = self.__current_block[0].strip() if line.startswith('Statement ') and line[-1] == ':': pad, s = self.__current_block.popleft().split() self.__event_values['statement_id'] = int(s[:-1]) else: self.__event_values['statement_id'] = None def _parse_blrdyn_content(self): if self.__current_block[0] == '-'*79: self.__current_block.popleft() content = [] line = self.__current_block.popleft() while line and not self._is_blr_perf_start(line): content.append(line) if self.__current_block: line = self.__current_block.popleft() else: line = None if line: self.__current_block.appendleft(line) self.__event_values['content'] = '\n'.join(content) else: self.__event_values['content'] = None def _parse_prepare_time(self): if self.__current_block and self.__current_block[-1].endswith(' ms'): run_time = self.__current_block.pop() time, measure = run_time.split() self.__event_values['prepare_time'] = int(time) else: self.__event_values['prepare_time'] = None def _parse_sql_statement(self): if not self.__current_block: return line = self.__current_block.popleft() sql = [] while line and not (self._is_plan_separator(line) or self._is_perf_start(line) or self._is_param_start(line)): sql.append(line) if self.__current_block: line = 
self.__current_block.popleft() else: line = None if line: self.__current_block.appendleft(line) self.__event_values['sql'] = '\n'.join(sql) def _parse_plan(self): if not self.__current_block: return line = self.__current_block.popleft() if self._is_perf_start(line): self.__current_block.appendleft(line) return if self._is_param_start(line): self.__current_block.appendleft(line) return if not self._is_plan_separator(line): raise fdb.ParseError("Separator '^'*79 line expected") line = self.__current_block.popleft() plan = [] while line and not (self._is_perf_start(line) or self._is_param_start(line)): plan.append(line) if self.__current_block: line = self.__current_block.popleft() else: line = None if line: self.__current_block.appendleft(line) self.__event_values['plan'] = '\n'.join(plan) def _parse_parameters(self, for_procedure=False): parameters = [] while self.__current_block and self.__current_block[0].startswith('param'): line = self.__current_block.popleft() param_id, param_def = line.split(' = ') param_type, param_value = param_def.rsplit(',', 1) param_value = param_value.strip(' "') if param_value == '': param_value = None elif param_type in ['smallint', 'integer', 'bigint']: param_value = int(param_value) elif param_type == 'timestamp': param_value = datetime.datetime.strptime(param_value, '%Y-%m-%dT%H:%M:%S.%f') elif param_type == 'date': param_value = datetime.datetime.strptime(param_value, '%Y-%m-%d') elif param_type == 'time': param_value = datetime.datetime.strptime(param_value, '%H:%M:%S.%f') elif param_type in ['float', 'double precision']: param_value = decimal.Decimal(param_value) parameters.append((param_type, param_value,)) while self.__current_block and self.__current_block[0].endswith('more arguments skipped...'): self.__current_block.popleft() # param_id = None if len(parameters) > 0: key = tuple(parameters) if key in self.param_map: param_id = self.param_map[key] else: param_id = self.next_param_id self.next_param_id += 1 self.param_map[key] = param_id self.__buffer.append(ParamInfo(**{'par_id': param_id, 'params': parameters})) # self.__event_values['param_id'] = param_id def _parse_performance(self): self.__event_values['run_time'] = None self.__event_values['reads'] = None self.__event_values['writes'] = None self.__event_values['fetches'] = None self.__event_values['marks'] = None self.__event_values['access'] = None if not self.__current_block: return if 'records fetched' in self.__current_block[0]: line = self.__current_block.popleft() self.__event_values['records'] = int(line.split()[0]) values = self.__current_block.popleft().split(',') while values: value, val_type = values.pop().split() if 'ms' in val_type: self.__event_values['run_time'] = int(value) elif 'read' in val_type: self.__event_values['reads'] = int(value) elif 'write' in val_type: self.__event_values['writes'] = int(value) elif 'fetch' in val_type: self.__event_values['fetches'] = int(value) elif 'mark' in val_type: self.__event_values['marks'] = int(value) else: raise fdb.ParseError("Unhandled performance parameter %s" % val_type) if self.__current_block: self.__event_values['access'] = [] if self.__current_block.popleft() != "Table Natural Index Update Insert Delete Backout Purge Expunge": raise fdb.ParseError("Performance table header expected") if self.__current_block.popleft() != "*"*111: raise fdb.ParseError("Performance table header separator expected") while self.__current_block: entry = self.__current_block.popleft() 
self.__event_values['access'].append(AccessTuple._make((intern(entry[:32].strip()), utils.safe_int(entry[32:41].strip()), utils.safe_int(entry[41:51].strip()), utils.safe_int(entry[51:61].strip()), utils.safe_int(entry[61:71].strip()), utils.safe_int(entry[71:81].strip()), utils.safe_int(entry[81:91].strip()), utils.safe_int(entry[91:101].strip()), utils.safe_int(entry[101:111].strip())))) def _parse_sql_info(self): plan = self.__event_values['plan'] sql = self.__event_values['sql'] key = (sql, plan) # if key in self.sqlinfo_map: sql_id = self.sqlinfo_map[key] else: sql_id = self.next_sql_id self.next_sql_id += 1 self.sqlinfo_map[key] = sql_id self.__buffer.append(SQLInfo(**{'sql_id': sql_id, 'sql': sql, 'plan': plan,})) # del self.__event_values['plan'] del self.__event_values['sql'] self.__event_values['sql_id'] = sql_id def _parse_trigger(self): trigger, event = self.__current_block.popleft().split('(') if ' FOR ' in trigger: a, b = trigger.split(' FOR ') self.__event_values['trigger'] = a self.__event_values['table'] = b.strip() else: self.__event_values['trigger'] = trigger.strip() self.__event_values['table'] = None self.__event_values['event'] = event.strip('()') def _parse_service(self): line = self.__current_block.popleft() if 'service_mgr' not in line: raise fdb.ParseError("Service connection description expected.") pad, sep, s = line.partition(' (') svc_id, user, protocol_address, remote_process_id = s.strip('()').split(',') pad, svc_id = svc_id.split(' ') svc_id = int(svc_id if svc_id.startswith('0x') else '0x%s' % svc_id, 0) if svc_id not in self.seen_services: svc_values = {} svc_values['service_id'] = svc_id svc_values['user'] = user.strip() protocol_address = protocol_address.strip() if protocol_address == '': protocol = address = protocol_address else: protocol, address = protocol_address.split(':') svc_values['protocol'] = protocol svc_values['address'] = address remote_process_id = remote_process_id.strip() remote_process, remote_pid = remote_process_id.rsplit(':', 1) svc_values['remote_process'] = remote_process svc_values['remote_pid'] = int(remote_pid) self.__buffer.append(ServiceInfo(**svc_values)) self.seen_services.add(svc_id) self.__event_values['service_id'] = svc_id def _parse_sweep_attachment(self): att_values = {} self._parse_attachment_info(att_values) self.__event_values['attachment_id'] = att_values['attachment_id'] #values = {'remote_process': None, 'remote_pid': None,} #line = self.__current_block.popleft() #database, sep, attachment = line.partition(' (') #values['database'] = database #attachment_id, user_role, charset, protocol_address = attachment.strip('()').split(',') #pad, s = attachment_id.split('_') #self.__event_values['attachment_id'] = values['attachment_id'] = int(s) #values['charset'] = charset.strip() ## #protocol_address = protocol_address.strip() #if protocol_address == '': #protocol = address = protocol_address #else: #protocol, address = protocol_address.split(':') #values['protocol'] = protocol #values['address'] = address #if ':' in user_role: #a, b = user_role.strip().split(':') #else: #a = user_role.strip() #b = 'NONE' #values['user'] = a #values['role'] = b #if values['attachment_id'] not in self.seen_attachments: #self.__writer.write(AttachmentInfo(**values)) #self.seen_attachments.add(values['attachment_id']) def _parse_sweep_tr_counters(self): line = self.__current_block.popleft() if not line: line = self.__current_block.popleft() if 'Transaction counters:' not in line: raise fdb.ParseError("Transaction counters expected") while 
len(self.__current_block) > 0: line = self.__current_block.popleft() if 'Oldest interesting' in line: self.__event_values['oit'] = int(line.rsplit(' ', 1)[1]) elif 'Oldest active' in line: self.__event_values['oat'] = int(line.rsplit(' ', 1)[1]) elif 'Oldest snapshot' in line: self.__event_values['ost'] = int(line.rsplit(' ', 1)[1]) elif 'Next transaction' in line: self.__event_values['next'] = int(line.rsplit(' ', 1)[1]) elif 'ms' in line and len(self.__current_block) == 0: # Put back performance counters self.__current_block.appendleft(line) break def __parse_trace_header(self): self.__last_timestamp, status, self.__current_event = self._parse_header(self.__current_block.popleft()) self.__event_values['event_id'] = self.next_event_id self.next_event_id += 1 self.__event_values['status'] = status self.__event_values['timestamp'] = self.__last_timestamp def __parser_trace_suspend(self): # Session was suspended because log was full, so we will create fake event to note that line = self.__current_block.popleft() self.__event_values['timestamp'] = self.__last_timestamp self.__event_values['event_id'] = self.next_event_id session_name = line[4:line.find(' is suspended')] self.__event_values['session_name'] = session_name.replace(' ', '_').upper() self.next_event_id += 1 return EventTraceSuspend(**self.__event_values) def __parser_trace_init(self): self.__parse_trace_header() del self.__event_values['status'] self.__event_values['session_name'] = self.__current_block.popleft() return EventTraceInit(**self.__event_values) def __parser_trace_finish(self): self.__parse_trace_header() del self.__event_values['status'] self.__event_values['session_name'] = self.__current_block.popleft() return EventTraceFinish(**self.__event_values) def __parser_start_transaction(self): self.__parse_trace_header() # Attachment values = {} self._parse_attachment_info(values) self.__event_values['attachment_id'] = values['attachment_id'] # Transaction parameters self._parse_transaction_info(self.__event_values, check=False) return EventTransactionStart(**self.__event_values) def __parser_commit_transaction(self): self.__parse_trace_header() # Attachment values = {} self._parse_attachment_info(values) self.__event_values['attachment_id'] = values['attachment_id'] # Transaction parameters self._parse_transaction_info(self.__event_values, check=False) self._parse_transaction_performance() return EventCommit(**self.__event_values) def __parser_rollback_transaction(self): self.__parse_trace_header() # Attachment values = {} self._parse_attachment_info(values) self.__event_values['attachment_id'] = values['attachment_id'] # Transaction parameters self._parse_transaction_info(self.__event_values, check=False) self._parse_transaction_performance() return EventRollback(**self.__event_values) def __parser_commit_retaining(self): self.__parse_trace_header() # Attachment values = {} self._parse_attachment_info(values) self.__event_values['attachment_id'] = values['attachment_id'] # Transaction parameters self._parse_transaction_info(self.__event_values, check=False) self._parse_transaction_performance() return EventCommitRetaining(**self.__event_values) def __parser_rollback_retaining(self): self.__parse_trace_header() # Attachment values = {} self._parse_attachment_info(values) self.__event_values['attachment_id'] = values['attachment_id'] # Transaction parameters self._parse_transaction_info(self.__event_values, check=False) self._parse_transaction_performance() return EventRollbackRetaining(**self.__event_values) def 
__parser_prepare_statement(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_statement_id() self._parse_prepare_time() self._parse_sql_statement() self._parse_plan() self._parse_sql_info() return EventPrepareStatement(**self.__event_values) def __parser_execute_statement_start(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_statement_id() self._parse_sql_statement() self._parse_plan() self._parse_parameters() self._parse_sql_info() return EventStatementStart(**self.__event_values) def __parser_execute_statement_finish(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_statement_id() self._parse_sql_statement() self._parse_plan() self._parse_parameters() self.__event_values['records'] = None self._parse_performance() self._parse_sql_info() return EventStatementFinish(**self.__event_values) def __parser_free_statement(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_statement_id() self._parse_sql_statement() self._parse_plan() self._parse_sql_info() del self.__event_values['status'] return EventFreeStatement(**self.__event_values) def __parser_close_cursor(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_statement_id() self._parse_sql_statement() self._parse_plan() self._parse_sql_info() del self.__event_values['status'] return EventCloseCursor(**self.__event_values) def __parser_trigger_start(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_trigger() return EventTriggerStart(**self.__event_values) def __parser_trigger_finish(self): self.__parse_trace_header() self._parse_attachment_and_transaction() self._parse_trigger() self._parse_performance() return EventTriggerFinish(**self.__event_values) def __parser_procedure_start(self): self.__parse_trace_header() self._parse_attachment_and_transaction() pad, s = self.__current_block.popleft().split() self.__event_values['procedure'] = s[:-1] self._parse_parameters(for_procedure=True) return EventProcedureStart(**self.__event_values) def __parser_procedure_finish(self): self.__parse_trace_header() self._parse_attachment_and_transaction() pad, s = self.__current_block.popleft().split() self.__event_values['procedure'] = s[:-1] self._parse_parameters(for_procedure=True) self._parse_performance() return EventProcedureFinish(**self.__event_values) def __parser_create_db(self): self.__parse_trace_header() # Attachment parameters self._parse_attachment_info(self.__event_values, check=False) return EventCreate(**self.__event_values) def __parser_drop_db(self): self.__parse_trace_header() # Attachment parameters self._parse_attachment_info(self.__event_values, check=False) return EventDrop(**self.__event_values) def __parser_attach(self): self.__parse_trace_header() # Attachment parameters self._parse_attachment_info(self.__event_values, check=False) #self.__event_values['unauthorized'] = False return EventAttach(**self.__event_values) def __parser_detach(self): self.__parse_trace_header() # Attachment parameters self._parse_attachment_info(self.__event_values, check=False) return EventDetach(**self.__event_values) def __parser_service_start(self): self.__parse_trace_header() self._parse_service() # service parameters action = self.__current_block.popleft().strip('"') self.__event_values['action'] = action parameters = [] while len(self.__current_block) > 0: parameters.append(self.__current_block.popleft()) 
self.__event_values['parameters'] = parameters # return EventServiceStart(**self.__event_values) def __parser_service_attach(self): self.__parse_trace_header() self._parse_service() return EventServiceAttach(**self.__event_values) def __parser_service_detach(self): self.__parse_trace_header() self._parse_service() return EventServiceDetach(**self.__event_values) def __parser_service_query(self): self.__parse_trace_header() self._parse_service() # service parameters line = self.__current_block.popleft().strip() if line[0] == '"' and line[-1] == '"': action = line.strip('"') self.__event_values['action'] = action else: self.__event_values['action'] = None parameters = [] while len(self.__current_block) > 0: parameters.append(self.__current_block.popleft()) self.__event_values['parameters'] = parameters # return EventServiceQuery(**self.__event_values) def __parser_set_context(self): self.__parse_trace_header() self._parse_attachment_and_transaction() line = self.__current_block.popleft() context, line = line.split(']', 1) key, value = line.split('=', 1) self.__event_values['context'] = context[1:] self.__event_values['key'] = key.strip() self.__event_values['value'] = value.strip() del self.__event_values['status'] return EventSetContext(**self.__event_values) def __parser_error(self): self.__event_values['place'] = self.__current_block[0].split(' AT ')[1] self.__parse_trace_header() att_values = {} if 'service_mgr' in self.__current_block[0]: event_class = EventServiceError self._parse_service() else: event_class = EventError self._parse_attachment_info(att_values) self.__event_values['attachment_id'] = att_values['attachment_id'] details = [] while len(self.__current_block) > 0: details.append(self.__current_block.popleft()) self.__event_values['details'] = details del self.__event_values['status'] return event_class(**self.__event_values) def __parser_warning(self): self.__event_values['place'] = self.__current_block[0].split(' AT ')[1] self.__parse_trace_header() att_values = {} if 'service_mgr' in self.__current_block[0]: event_class = EventServiceWarning self._parse_service() else: event_class = EventWarning self._parse_attachment_info(att_values) self.__event_values['attachment_id'] = att_values['attachment_id'] details = [] while len(self.__current_block) > 0: details.append(self.__current_block.popleft()) self.__event_values['details'] = details del self.__event_values['status'] return event_class(**self.__event_values) def __parser_sweep_start(self): self.__parse_trace_header() self._parse_sweep_attachment() self._parse_sweep_tr_counters() del self.__event_values['status'] return EventSweepStart(**self.__event_values) def __parser_sweep_progress(self): self.__parse_trace_header() self._parse_sweep_attachment() self._parse_performance() del self.__event_values['status'] return EventSweepProgress(**self.__event_values) def __parser_sweep_finish(self): self.__parse_trace_header() self._parse_sweep_attachment() self._parse_sweep_tr_counters() self._parse_performance() del self.__event_values['status'] del self.__event_values['access'] return EventSweepFinish(**self.__event_values) def __parser_sweep_failed(self): self.__parse_trace_header() self._parse_sweep_attachment() del self.__event_values['status'] return EventSweepFailed(**self.__event_values) def __parser_blr_compile(self): self.__parse_trace_header() # Attachment values = {} self._parse_attachment_info(values) self.__event_values['attachment_id'] = values['attachment_id'] # BLR self._parse_blr_statement_id() 
self._parse_blrdyn_content() self._parse_prepare_time() return EventBLRCompile(**self.__event_values) def __parser_blr_execute(self): self.__parse_trace_header() self._parse_attachment_and_transaction() # BLR self._parse_blr_statement_id() self._parse_blrdyn_content() self._parse_performance() return EventBLRExecute(**self.__event_values) def __parser_dyn_execute(self): self.__parse_trace_header() self._parse_attachment_and_transaction() # DYN self._parse_blrdyn_content() value, ms = self.__current_block.popleft().split() self.__event_values['run_time'] = int(value) return EventDYNExecute(**self.__event_values) def __parser_unknown(self): items = self.__current_block[0].split() self.__parse_trace_header() self.__current_block.appendleft(' '.join(items[2:])) del self.__event_values['status'] self.__event_values['data'] = '\n'.join(self.__current_block) return EventUnknown(**self.__event_values) def _parse_block(self, parser): self.__event_values.clear() result = parser() return result def parse_event(self, trace_block): """Parse single trace event. :param list trace_block: List with trace entry lines for single trace event. :returns: Named tuple with parsed event. """ self.__current_block = collections.deque(trace_block) if self._is_session_suspended(self.__current_block[0]): record_parser = self.__parser_trace_suspend else: timestamp, status, trace_event = self._parse_header(self.__current_block[0]) record_parser = self.__parse_map[trace_event] # return self._parse_block(record_parser) def parse(self, lines): """Parse output from Firebird trace session and yield named tuples describing individual trace log entries/events. :param lines: Iterable that return lines produced by firebird trace session. :raises `~fdb.ParseError`: When any problem is found in input stream. """ for rec in (self.parse_event(x) for x in self._iter_trace_blocks(lines)): while len(self.__buffer) > 0: yield self.__buffer.pop(0) yield rec fdb-2.0.0/fdb/gstat.py0000644000175000001440000006714213270362207015147 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: gstat.py # DESCRIPTION: Python driver for Firebird - Firebird gstat output processing # CREATED: 8.11.2017 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. # # See LICENSE.TXT for details. 
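# A minimal usage sketch for this module (the file name 'gstat.out' below is
# illustrative; any file holding saved gstat output works): parse() consumes
# an iterable of lines and returns a StatDatabase instance.
#
#     from fdb import gstat
#
#     with open('gstat.out') as f:
#         db = gstat.parse(f)
#     print(db.filename, db.page_size, db.oit, db.oat)
#     if db.has_table_stats():
#         for table in db.tables:
#             print(table.name, table.total_records)
#
# Use the has_*_stats() methods to check which sections were present in the
# parsed output before relying on the corresponding attributes.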
from fdb import ParseError
from fdb.utils import ObjectList
import datetime
import weakref
from collections import namedtuple
from locale import LC_ALL, getlocale, setlocale, resetlocale
import sys

GSTAT_25 = 2
GSTAT_30 = 3

ATTRIBUTES = ['force write', 'no reserve', 'shared cache disabled',
              'active shadow', 'multi-user maintenance',
              'single-user maintenance', 'full shutdown', 'read only',
              'backup lock', 'backup merge', 'wrong backup state']
ATTR_FORCE_WRITE = 0      #'force write'
ATTR_NO_RESERVE = 1       #'no reserve'
ATTR_NO_SHARED_CACHE = 2  #'shared cache disabled'
ATTR_ACTIVE_SHADOW = 3    #'active shadow'
ATTR_SHUTDOWN_MULTI = 4   #'multi-user maintenance'
ATTR_SHUTDOWN_SINGLE = 5  #'single-user maintenance'
ATTR_SHUTDOWN_FULL = 6    #'full shutdown'
ATTR_READ_ONLY = 7        #'read only'
ATTR_BACKUP_LOCK = 8      #'backup lock'
ATTR_BACKUP_MERGE = 9     #'backup merge'
ATTR_BACKUP_WRONG = 10    #'wrong backup state %d'

FillDistribution = namedtuple('FillDistribution', 'd20,d40,d50,d80,d100')
Encryption = namedtuple('Encryption', 'pages,encrypted,unencrypted')

def empty_str(str_):
    "Return True if string is empty (whitespace doesn't count) or None"
    return True if str_ is None else str_.strip() == ''

class StatTable(object):
    "Statistics for single database table."
    def __init__(self):
        #: (str) Table name
        self.name = None
        #: (int) Table ID
        self.table_id = None
        #: (int) Primary Pointer Page for table
        self.primary_pointer_page = None
        #: (int) Index Root Page for table
        self.index_root_page = None
        #: (float) Average record length
        self.avg_record_length = None
        #: (int) Total number of records in table
        self.total_records = None
        #: (float) Average record version length
        self.avg_version_length = None
        #: (int) Total number of record versions
        self.total_versions = None
        #: (int) Max number of versions for single record
        self.max_versions = None
        #: (int) Number of data pages for table
        self.data_pages = None
        #: (int) Number of data page slots for table
        self.data_page_slots = None
        #: (float) Average data page fill ratio
        self.avg_fill = None
        #: (:class:`FillDistribution`) Data page fill distribution statistics
        self.distribution = None
        #: (:class:`~fdb.utils.ObjectList`) Indices belonging to table
        self.indices = []

class StatTable3(StatTable):
    "Statistics for single database table (Firebird 3 and above)."
    def __init__(self):
        super(StatTable3, self).__init__()
        #: (int) Number of Pointer Pages
        self.pointer_pages = None
        #: (int) Number of record formats
        self.total_formats = None
        #: (int) Number of actually used record formats
        self.used_formats = None
        #: (float) Average length of record fragments
        self.avg_fragment_length = None
        #: (int) Total number of record fragments
        self.total_fragments = None
        #: (int) Max number of fragments for single record
        self.max_fragments = None
        #: (float) Average length of unpacked record
        self.avg_unpacked_length = None
        #: (float) Record compression ratio
        self.compression_ratio = None
        #: (int) Number of Primary Data Pages
        self.primary_pages = None
        #: (int) Number of Secondary Data Pages
        self.secondary_pages = None
        #: (int) Number of swept data pages
        self.swept_pages = None
        #: (int) Number of empty data pages
        self.empty_pages = None
        #: (int) Number of full data pages
        self.full_pages = None
        #: (int) Number of BLOB values
        self.blobs = None
        #: (int) Total length of BLOB values (bytes)
        self.blobs_total_length = None
        #: (int) Number of BLOB pages
        self.blob_pages = None
        #: (int) Number of Level 0 BLOB values
        self.level_0 = None
        #: (int) Number of Level 1 BLOB values
        self.level_1 = None
        #: (int) Number of Level 2 BLOB values
        self.level_2 = None

class StatIndex(object):
    "Statistics for single database index."
    def __init__(self, table):
        #: (weakref.proxy) Proxy to parent :class:`StatTable`
        self.table = weakref.proxy(table)
        table.indices.append(weakref.proxy(self))
        #: (str) Index name
        self.name = None
        #: (int) Index ID
        self.index_id = None
        #: (int) Depth of index tree
        self.depth = None
        #: (int) Number of leaf index tree buckets
        self.leaf_buckets = None
        #: (int) Number of index tree nodes
        self.nodes = None
        #: (float) Average data length
        self.avg_data_length = None
        #: (int) Total number of duplicate keys
        self.total_dup = None
        #: (int) Max number of occurrences for single duplicate key
        self.max_dup = None
        #: (:class:`FillDistribution`) Index page fill distribution statistics
        self.distribution = None

class StatIndex3(StatIndex):
    "Statistics for single database index (Firebird 3 and above)."
    def __init__(self, table):
        super(StatIndex3, self).__init__(table)
        #: (int) Index Root page
        self.root_page = None
        #: (float) Average node length
        self.avg_node_length = None
        #: (float) Average key length
        self.avg_key_length = None
        #: (float) Index key compression ratio
        self.compression_ratio = None
        #: (float) Average key prefix length
        self.avg_prefix_length = None
        #: (float) Index clustering factor
        self.clustering_factor = None
        #: (float) Ratio
        self.ratio = None

class StatDatabase(object):
    """Firebird database statistics (produced by gstat).
    """
    def __init__(self):
        #: (int) GSTAT version
        self.gstat_version = None
        #: (int) System change number (v3 only)
        self.system_change_number = None  # ver3
        #: (datetime) gstat execution timestamp
        self.executed = None
        #: (datetime) gstat completion timestamp
        self.completed = None  # ver3
        #: (str) Database filename
        self.filename = None
        #: (int) Database flags
        self.flags = 0
        #: (int) Checksum (v2 only)
        self.checksum = 12345  # ver2
        #: (int) Database header generation
        self.generation = 0
        #: (int) Database page size
        self.page_size = 0
        #self.ods_version = None
        #: (int) Oldest Interesting Transaction
        self.oit = 0
        #: (int) Oldest Active Transaction
        self.oat = 0
        #: (int) Oldest Snapshot Transaction
        self.ost = 0
        #: (int) Next Transaction
        self.next_transaction = 0
        #: (int) Bumped Transaction (v2 only)
        self.bumped_transaction = None  # ver2
        #self.sequence_number = 0
        #: (int) Next attachment ID
        self.next_attachment_id = 0
        #: (int) Implementation ID (v2 only)
        self.implementation_id = 0  # ver2
        #: (str) Implementation (v3 only)
        self.implementation = None  # ver3
        #: (int) Number of shadows
        self.shadow_count = 0
        #: (int) Number of page buffers
        self.page_buffers = 0
        #: (int) Next header page
        self.next_header_page = 0
        #: (int) SQL Dialect
        self.database_dialect = 0
        #: (datetime) Database creation timestamp
        self.creation_date = None
        #: (list) Database attributes
        self.attributes = []
        # Variable data
        #: (int) Sweep interval (variable hdr item)
        self.sweep_interval = None
        #: (str) Continuation file (variable hdr item)
        self.continuation_file = None
        #: (int) Last logical page (variable hdr item)
        self.last_logical_page = None
        #: (str) Backup GUID (variable hdr item)
        self.backup_guid = None
        #: (str) Root file name (variable hdr item)
        self.root_filename = None
        #: (str) Replay logging file (variable hdr item)
        self.replay_logging_file = None
        #: (str) Backup difference file (variable hdr item)
        self.backup_diff_file = None
        #: (Encryption) Stats for encrypted data pages
        self.encrypted_data_pages = None
        #: (Encryption) Stats for encrypted index pages
        self.encrypted_index_pages = None
        #: (Encryption) Stats for encrypted blob pages
        self.encrypted_blob_pages = None
        #: List of database file names
        self.continuation_files = []
        #: :class:`~fdb.utils.ObjectList` with :class:`StatTable` or :class:`StatTable3` instances
        self.tables = None
        #: :class:`~fdb.utils.ObjectList` with :class:`StatIndex` or :class:`StatIndex3` instances
        self.indices = None

    def has_table_stats(self):
        """Return True if instance contains information about tables.

        .. important::

           This is not the same as check for empty :data:`tables` list.
           When gstat is run with `-i` without `-d` option, :data:`tables`
           list contains instances that do not have any other information
           about table but table name and its indices.
        """
        return self.tables[0].primary_pointer_page is not None if len(self.tables) > 0 else False

    def has_row_stats(self):
        "Return True if instance contains information about table rows."
        return self.has_table_stats() and self.tables[0].avg_version_length is not None

    def has_index_stats(self):
        "Return True if instance contains information about indices."
        return self.indices[0].depth is not None if len(self.indices) > 0 else False

    def has_encryption_stats(self):
        "Return True if instance contains information about database encryption."
        return self.encrypted_data_pages is not None

    def has_system(self):
        "Return True if instance contains information about system tables."
return self.tables.contains('RDB$DATABASE', 'item.name') def parse(lines): """Parse output from Firebird gstat utility. :param lines: Iterable of lines produced by Firebird gstat utility. :returns: :class:`~fdb.gstat.StatDatabase` instance with parsed results. :raises `~fdb.ParseError`: When any problem is found in input stream. """ def parse_hdr(line): "Parse line from header" for key, valtype, name in items_hdr: if line.startswith(key): # Check for GSTAT_VERSION if db.gstat_version is None: if key == 'Checksum': db.gstat_version = GSTAT_25 db.tables = ObjectList(_cls=StatTable, key_expr='item.name') db.indices = ObjectList(_cls=StatIndex, key_expr='item.name') elif key == 'System Change Number': db.gstat_version = GSTAT_30 db.tables = ObjectList(_cls=StatTable3, key_expr='item.name') db.indices = ObjectList(_cls=StatIndex3, key_expr='item.name') # value = line[len(key):].strip() if valtype == 'i': # integer value = int(value) elif valtype == 's': # string pass elif valtype == 'd': # date time value = datetime.datetime.strptime(value, '%b %d, %Y %H:%M:%S') elif valtype == 'l': # list if value == '': value = [] else: value = [x.strip() for x in value.split(',')] value = tuple([ATTRIBUTES.index(x) for x in value]) else: raise ParseError("Unknown value type %s" % valtype) if name is None: name = key.lower().replace(' ', '_') setattr(db, name, value) return raise ParseError('Unknown information (line %i)' % line_no) def parse_var(line): "Parse line from variable header data" if line == '*END*': return for key, valtype, name in items_var: if line.startswith(key): value = line[len(key):].strip() if valtype == 'i': # integer value = int(value) elif valtype == 's': # string pass elif valtype == 'd': # date time value = datetime.datetime.strptime(value, '%b %d, %Y %H:%M:%S') else: raise ParseError("Unknown value type %s" % valtype) if name is None: name = key.lower().strip(':').replace(' ', '_') setattr(db, name, value) return raise ParseError('Unknown information (line %i)' % line_no) def parse_fseq(line): "Parse line from file sequence" if not line.startswith('File '): raise ParseError("Bad file specification (line %i)" % line_no) if 'is the only file' in line: return if ' is the ' in line: db.continuation_files.append(line[5:line.index(' is the ')]) elif ' continues as' in line: db.continuation_files.append(line[5:line.index(' continues as')]) else: raise ParseError("Bad file specification (line %i)" % line_no) def parse_table(line, table): "Parse line from table data" if table.name is None: # We should parse header tname, tid = line.split(' (') table.name = tname.strip(' "') table.table_id = int(tid.strip('()')) else: if ',' in line: # Data values for item in line.split(','): item = item.strip() found = False items = items_tbl2 if db.gstat_version == GSTAT_25 else items_tbl3 for key, valtype, name in items: if item.startswith(key): value = item[len(key):].strip() if valtype == 'i': # integer value = int(value) elif valtype == 'f': # float value = float(value) elif valtype == 'p': # % value = int(value.strip('%')) else: raise ParseError("Unknown value type %s" % valtype) if name is None: name = key.lower().strip(':').replace(' ', '_') setattr(table, name, value) found = True break if not found: raise ParseError('Unknown information (line %i)' % line_no) else: # Fill distribution if '=' in line: fill_range, fill_value = line.split('=') i = items_fill.index(fill_range.strip()) if table.distribution is None: table.distribution = [0, 0, 0, 0, 0] table.distribution[i] = int(fill_value.strip()) elif 
line.startswith('Fill distribution:'): pass else: raise ParseError('Unknown information (line %i)' % line_no) def parse_index(line, index): "Parse line from index data" if index.name is None: # We should parse header iname, iid = line[6:].split(' (') index.name = iname.strip(' "') index.index_id = int(iid.strip('()')) else: if ',' in line: # Data values for item in line.split(','): item = item.strip() found = False items = items_idx2 if db.gstat_version == GSTAT_25 else items_idx3 for key, valtype, name in items: if item.startswith(key): value = item[len(key):].strip() if valtype == 'i': # integer value = int(value) elif valtype == 'f': # float value = float(value) elif valtype == 'p': # % value = int(value.strip('%')) else: raise ParseError("Unknown value type %s" % valtype) if name is None: name = key.lower().strip(':').replace(' ', '_') setattr(index, name, value) found = True break if not found: raise ParseError('Unknown information (line %i)' % line_no) else: # Fill distribution if '=' in line: fill_range, fill_value = line.split('=') i = items_fill.index(fill_range.strip()) if index.distribution is None: index.distribution = [0, 0, 0, 0, 0] index.distribution[i] = int(fill_value.strip()) elif line.startswith('Fill distribution:'): pass else: raise ParseError('Unknown information (line %i)' % line_no) def parse_encryption(line): "Parse line from encryption data" try: total, encrypted, unencrypted = line.split(',') pad, total = total.rsplit(' ', 1) total = int(total) pad, encrypted = encrypted.rsplit(' ', 1) encrypted = int(encrypted) pad, unencrypted = unencrypted.rsplit(' ', 1) unencrypted = int(unencrypted) data = Encryption(total, encrypted, unencrypted) except: raise ParseError('Malformed encryption information (line %i)' % line_no) if 'Data pages:' in line: db.encrypted_data_pages = data elif 'Index pages:' in line: db.encrypted_index_pages = data elif 'Blob pages:' in line: db.encrypted_blob_pages = data else: raise ParseError('Unknown encryption information (line %i)' % line_no) # items_hdr = [('Flags', 'i', None), ('Checksum', 'i', None), ('Generation', 'i', None), ('System Change Number', 'i', 'system_change_number'), ('Page size', 'i', None), ('ODS version', 's', None), ('Oldest transaction', 'i', 'oit'), ('Oldest active', 'i', 'oat'), ('Oldest snapshot', 'i', 'ost'), ('Next transaction', 'i', None), ('Bumped transaction', 'i', None), ('Sequence number', 'i', None), ('Next attachment ID', 'i', None), ('Implementation ID', 'i', None), ('Implementation', 's', None), ('Shadow count', 'i', None), ('Page buffers', 'i', None), ('Next header page', 'i', None), ('Database dialect', 'i', None), ('Creation date', 'd', None), ('Attributes', 'l', None)] items_var = [('Sweep interval:', 'i', None), ('Continuation file:', 's', None), ('Last logical page:', 'i', None), ('Database backup GUID:', 's', 'backup_guid'), ('Root file name:', 's', 'root_filename'), ('Replay logging file:', 's', None), ('Backup difference file:', 's', 'backup_diff_file')] items_tbl2 = [('Primary pointer page:', 'i', None), ('Index root page:', 'i', None), ('Pointer pages:', 'i', 'pointer_pages'), ('Average record length:', 'f', 'avg_record_length'), ('total records:', 'i', None), ('Average version length:', 'f', 'avg_version_length'), ('total versions:', 'i', None), ('max versions:', 'i', None), ('Data pages:', 'i', None), ('data page slots:', 'i', None), ('average fill:', 'p', 'avg_fill'), ('Primary pages:', 'i', None), ('secondary pages:', 'i', None), ('swept pages:', 'i', None), ('Empty pages:', 'i', None), 
('full pages:', 'i', None)] items_tbl3 = [('Primary pointer page:', 'i', None), ('Index root page:', 'i', None), ('Total formats:', 'i', None), ('used formats:', 'i', None), ('Average record length:', 'f', 'avg_record_length'), ('total records:', 'i', None), ('Average version length:', 'f', 'avg_version_length'), ('total versions:', 'i', None), ('max versions:', 'i', None), ('Average fragment length:', 'f', 'avg_fragment_length'), ('total fragments:', 'i', None), ('max fragments:', 'i', None), ('Average unpacked length:', 'f', 'avg_unpacked_length'), ('compression ratio:', 'f', None), ('Pointer pages:', 'i', 'pointer_pages'), ('data page slots:', 'i', None), ('Data pages:', 'i', None), ('average fill:', 'p', 'avg_fill'), ('Primary pages:', 'i', None), ('secondary pages:', 'i', None), ('swept pages:', 'i', None), ('Empty pages:', 'i', None), ('full pages:', 'i', None), ('Blobs:', 'i', None), ('total length:', 'i', 'blobs_total_length'), ('blob pages:', 'i', None), ('Level 0:', 'i', None), ('Level 1:', 'i', None), ('Level 2:', 'i', None)] items_idx2 = [('Depth:', 'i', None), ('leaf buckets:', 'i', None), ('nodes:', 'i', None), ('Average data length:', 'f', 'avg_data_length'), ('total dup:', 'i', None), ('max dup:', 'i', None)] items_idx3 = [('Root page:', 'i', None), ('depth:', 'i', None), ('leaf buckets:', 'i', None), ('nodes:', 'i', None), ('Average node length:', 'f', 'avg_node_length'), ('total dup:', 'i', None), ('max dup:', 'i', None), ('Average key length:', 'f', 'avg_key_length'), ('compression ratio:', 'f', None), ('Average prefix length:', 'f', 'avg_prefix_length'), ('average data length:', 'f', 'avg_data_length'), ('Clustering factor:', 'f', None), ('ratio:', 'f', None)] items_fill = ['0 - 19%', '20 - 39%', '40 - 59%', '60 - 79%', '80 - 99%'] # db = StatDatabase() line_no = 0 table = None index = None new_block = True in_table = False # line_no = 0 step = 0 # Look for sections and skip empty lines try: locale = getlocale(LC_ALL) if sys.platform == 'win32': setlocale(LC_ALL, 'English_United States') else: setlocale(LC_ALL, 'en_US') # Skip empty lines at start for line in (x.strip() for x in lines): line_no += 1 if line.startswith('Gstat completion time'): db.completed = datetime.datetime.strptime(line[22:], '%a %b %d %H:%M:%S %Y') elif step == 0: # Looking for section or db name if line.startswith('Gstat execution time'): db.executed = datetime.datetime.strptime(line[21:], '%a %b %d %H:%M:%S %Y') elif line.startswith('Database header page information:'): step = 1 elif line.startswith('Variable header data:'): step = 2 elif line.startswith('Database file sequence:'): step = 3 elif 'encrypted' in line and 'non-crypted' in line: parse_encryption(line) elif line.startswith('Analyzing database pages ...'): step = 4 elif empty_str(line): pass elif line.startswith('Database "'): x, s = line.split(' ') db.filename = s.strip('"') step = 0 else: raise ParseError("Unrecognized data (line %i)" % line_no) elif step == 1: # Header if empty_str(line): # section ends with empty line step = 0 else: parse_hdr(line) elif step == 2: # Variable data if empty_str(line): # section ends with empty line step = 0 else: parse_var(line) elif step == 3: # File sequence if empty_str(line): # section ends with empty line step = 0 else: parse_fseq(line) elif step == 4: # Tables and indices if empty_str(line): # section ends with empty line new_block = True else: if new_block: new_block = False if not line.startswith('Index '): # Should be table table = StatTable() if db.gstat_version == GSTAT_25 else StatTable3() 
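# A non-empty line that follows a blank line opens a new block: lines
# starting with 'Index ' belong to an index, anything else to a table.
# The fresh StatTable/StatTable3 instance is registered before parsing
# so that parse_table() can fill it from this and the following
# continuation lines.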
db.tables.append(table) in_table = True parse_table(line, table) else: # It's index index = StatIndex(table) if db.gstat_version == GSTAT_25 else StatIndex3(table) db.indices.append(index) in_table = False parse_index(line, index) else: if in_table: parse_table(line, table) else: parse_index(line, index) # Final touch if db.has_table_stats(): for table in db.tables: table.distribution = FillDistribution(*table.distribution) if db.has_index_stats(): for index in db.indices: index.distribution = FillDistribution(*index.distribution) db.tables.freeze() db.indices.freeze() finally: if locale[0] is None: if sys.platform == 'win32': setlocale(LC_ALL, '') else: resetlocale(LC_ALL) else: setlocale(LC_ALL, locale) return db fdb-2.0.0/fdb/__init__.py0000644000175000001440000001606113270362207015556 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM/MODULE: fdb # FILE: __init__.py # DESCRIPTION: Python driver for Firebird # CREATED: 8.10.2011 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. # # See LICENSE.TXT for details. from fdb.fbcore import * from fdb.fbcore import __version__ from fdb import services from fdb import blr from fdb import trace from fdb import gstat __all__ = (# Common with KInterbasDB 'BINARY', 'Binary', 'BlobReader', 'Connection', 'ConnectionGroup', 'Cursor', 'DATETIME', 'DBAPITypeObject', 'DESCRIPTION_DISPLAY_SIZE', 'DESCRIPTION_INTERNAL_SIZE', 'DESCRIPTION_NAME', 'DESCRIPTION_NULL_OK', 'DESCRIPTION_PRECISION', 'DESCRIPTION_SCALE', 'DESCRIPTION_TYPE_CODE', 'DIST_TRANS_MAX_DATABASES', 'DataError', 'DatabaseError', 'Date', 'DateFromTicks', 'Error', 'EventConduit', 'IntegrityError', 'InterfaceError', 'InternalError', 'NUMBER', 'NotSupportedError', 'OperationalError', 'PreparedStatement', 'ProgrammingError', 'ROWID', 'STRING', 'TPB', 'TableReservation', 'ParameterBuffer', 'Time', 'TimeFromTicks', 'TimestampFromTicks', 'Transaction', 'TransactionConflict', '__version__', 'apilevel', 'connect', 'create_database', 'frb_info_att_charset', 'isc_dpb_activate_shadow', 'isc_dpb_address_path', 'isc_dpb_allocation', 'isc_dpb_begin_log', 'isc_dpb_buffer_length', 'isc_dpb_cache_manager', 'isc_dpb_cdd_pathname', 'isc_dpb_connect_timeout', 'isc_dpb_damaged', 'isc_dpb_dbkey_scope', 'isc_dpb_debug', 'isc_dpb_delete_shadow', 'isc_dpb_dummy_packet_interval', 'isc_dpb_encrypt_key', 'isc_dpb_force_write', 'isc_dpb_garbage_collect', 'isc_dpb_gbak_attach', 'isc_dpb_gfix_attach', 'isc_dpb_gsec_attach', 'isc_dpb_gstat_attach', 'isc_dpb_interp', 'isc_dpb_lc_ctype', 'isc_dpb_lc_messages', 'isc_dpb_no_garbage_collect', 'isc_dpb_no_reserve', 'isc_dpb_num_buffers', 'isc_dpb_number_of_users', 'isc_dpb_old_dump_id', 'isc_dpb_old_file', 'isc_dpb_old_file_size', 'isc_dpb_old_num_files', 'isc_dpb_old_start_file', 'isc_dpb_old_start_page', 'isc_dpb_old_start_seqno', 'isc_dpb_online', 'isc_dpb_online_dump', 'isc_dpb_overwrite', 'isc_dpb_page_size', 'isc_dpb_password', 'isc_dpb_password_enc', 'isc_dpb_quit_log', 'isc_dpb_reserved', 'isc_dpb_sec_attach', 'isc_dpb_set_db_charset', 'isc_dpb_set_db_readonly', 'isc_dpb_set_db_sql_dialect', 'isc_dpb_set_page_buffers', 'isc_dpb_shutdown', 'isc_dpb_shutdown_delay', 'isc_dpb_sql_dialect', 
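# A hedged usage sketch for fdb.gstat.parse() shown above; the file name
# is a placeholder and assumes output saved from 'gstat -a' or similar:
#
#     import fdb.gstat
#     with open('employee.gstat') as f:
#         stats = fdb.gstat.parse(f)
#     print(stats.filename, stats.gstat_version)
#     for t in stats.tables:
#         print(t.name, t.avg_fill)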
'isc_dpb_sql_role_name', 'isc_dpb_sweep', 'isc_dpb_sweep_interval', 'isc_dpb_sys_user_name', 'isc_dpb_sys_user_name_enc', 'isc_dpb_trace', 'isc_dpb_user_name', 'isc_dpb_verify', 'isc_dpb_version1', 'isc_dpb_working_directory', 'isc_info_active_tran_count', 'isc_info_active_transactions', 'isc_info_allocation', 'isc_info_attachment_id', 'isc_info_backout_count', 'isc_info_base_level', 'isc_info_bpage_errors', 'isc_info_creation_date', 'isc_info_cur_log_part_offset', 'isc_info_cur_logfile_name', 'isc_info_current_memory', 'isc_info_db_class', 'fb_info_page_contents', 'isc_info_db_id', 'isc_info_db_provider', 'isc_info_db_read_only', 'isc_info_db_size_in_pages', 'isc_info_db_sql_dialect', 'isc_info_delete_count', 'isc_info_dpage_errors', 'isc_info_expunge_count', 'isc_info_fetches', 'isc_info_firebird_version', 'isc_info_forced_writes', 'isc_info_implementation', 'isc_info_insert_count', 'isc_info_ipage_errors', 'isc_info_isc_version', 'isc_info_license', 'isc_info_limbo', 'isc_info_logfile', 'isc_info_marks', 'isc_info_max_memory', 'isc_info_next_transaction', 'isc_info_no_reserve', 'isc_info_num_buffers', 'isc_info_num_wal_buffers', 'isc_info_ods_minor_version', 'isc_info_ods_version', 'isc_info_oldest_active', 'isc_info_oldest_snapshot', 'isc_info_oldest_transaction', 'isc_info_page_errors', 'isc_info_page_size', 'isc_info_ppage_errors', 'isc_info_purge_count', 'isc_info_read_idx_count', 'isc_info_read_seq_count', 'isc_info_reads', 'isc_info_record_errors', 'isc_info_set_page_buffers', 'isc_info_sql_stmt_commit', 'isc_info_sql_stmt_ddl', 'isc_info_sql_stmt_delete', 'isc_info_sql_stmt_exec_procedure', 'isc_info_sql_stmt_get_segment', 'isc_info_sql_stmt_insert', 'isc_info_sql_stmt_put_segment', 'isc_info_sql_stmt_rollback', 'isc_info_sql_stmt_savepoint', 'isc_info_sql_stmt_select', 'isc_info_sql_stmt_select_for_upd', 'isc_info_sql_stmt_set_generator', 'isc_info_sql_stmt_start_trans', 'isc_info_sql_stmt_update', 'isc_info_sweep_interval', 'isc_info_tpage_errors', 'isc_info_tra_access', 'isc_info_tra_concurrency', 'isc_info_tra_consistency', 'isc_info_tra_id', 'isc_info_tra_isolation', 'isc_info_tra_lock_timeout', 'isc_info_tra_no_rec_version', 'isc_info_tra_oldest_active', 'isc_info_tra_oldest_interesting', 'isc_info_tra_oldest_snapshot', 'isc_info_tra_read_committed', 'isc_info_tra_readonly', 'fb_info_tra_dbpath', 'isc_info_tra_readwrite', 'isc_info_tra_rec_version', 'isc_info_update_count', 'isc_info_user_names', 'isc_info_version', 'isc_info_wal_avg_grpc_size', 'isc_info_wal_avg_io_size', 'isc_info_wal_buffer_size', 'isc_info_wal_ckpt_length', 'isc_info_wal_cur_ckpt_interval', 'isc_info_wal_grpc_wait_usecs', 'isc_info_wal_num_commits', 'isc_info_wal_num_io', 'isc_info_wal_prv_ckpt_fname', 'isc_info_wal_prv_ckpt_poffset', 'isc_info_wal_recv_ckpt_fname', 'isc_info_wal_recv_ckpt_poffset', 'isc_info_window_turns', 'isc_info_writes', 'isc_tpb_autocommit', 'isc_tpb_commit_time', 'isc_tpb_concurrency', 'isc_tpb_consistency', 'isc_tpb_exclusive', 'isc_tpb_ignore_limbo', 'isc_tpb_lock_read', 'isc_tpb_lock_timeout', 'isc_tpb_lock_write', 'isc_tpb_no_auto_undo', 'isc_tpb_no_rec_version', 'isc_tpb_nowait', 'isc_tpb_protected', 'isc_tpb_read', 'isc_tpb_read_committed', 'isc_tpb_rec_version', 'isc_tpb_restart_requests', 'isc_tpb_shared', 'isc_tpb_verb_time', 'isc_tpb_version3', 'isc_tpb_wait', 'isc_tpb_write', 'paramstyle', 'threadsafety', # New in FDB 'ISOLATION_LEVEL_READ_COMMITED', 'ISOLATION_LEVEL_READ_COMMITED_LEGACY', 'ISOLATION_LEVEL_REPEATABLE_READ', 'ISOLATION_LEVEL_SERIALIZABLE', 
'ISOLATION_LEVEL_SNAPSHOT', 'ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY', 'ISOLATION_LEVEL_READ_COMMITED_RO', 'MAX_BLOB_SEGMENT_SIZE', 'SQL_ARRAY', 'SQL_BLOB', 'SQL_DOUBLE', 'SQL_D_FLOAT', 'SQL_FLOAT', 'SQL_INT64', 'SQL_LONG', 'SQL_QUAD', 'SQL_SHORT', 'SQL_TEXT', 'SQL_TIMESTAMP', 'SQL_TYPE_DATE', 'SQL_TYPE_TIME', 'SQL_VARYING', 'SUBTYPE_DECIMAL', 'SUBTYPE_NUMERIC', 'SQL_BOOLEAN', 'charset_map', 'load_api', 'isc_info_end', 'bs', 'ConnectionWithSchema', # 'isc_sqlcode', 'ODS_FB_20', 'ODS_FB_21', 'ODS_FB_25', 'ODS_FB_30') fdb-2.0.0/fdb/monitor.py0000644000175000001440000013277313270362207015517 0ustar pcisarusers00000000000000#coding:utf-8 # # PROGRAM: fdb # MODULE: monitor.py # DESCRIPTION: Python driver for Firebird - Database monitoring # CREATED: 10.5.2013 # # Software distributed under the License is distributed AS IS, # WITHOUT WARRANTY OF ANY KIND, either express or implied. # See the License for the specific language governing rights # and limitations under the License. # # The Original Code was created by Pavel Cisar # # Copyright (c) Pavel Cisar # and all contributors signed below. # # All Rights Reserved. # Contributor(s): ______________________________________. # # See LICENSE.TXT for details. import fdb from fdb.utils import LateBindingProperty, ObjectList import weakref # Current shutdown mode SHUTDOWN_MODE_ONLINE = 0 SHUTDOWN_MODE_MULTI = 1 SHUTDOWN_MODE_SINGLE = 2 SHUTDOWN_MODE_FULL = 3 # Current physical backup state BACKUP_STATE_NORMAL = 0 BACKUP_STATE_STALLED = 1 BACKUP_STATE_MERGE = 2 # State STATE_IDLE = 0 STATE_ACTIVE = 1 # Flags FLAG_NOT_SET = 0 FLAG_SET = 1 # Transaction solation mode ISOLATION_CONSISTENCY = 0 ISOLATION_CONCURRENCY = 1 ISOLATION_READ_COMMITTED_RV = 2 ISOLATION_READ_COMMITTED_NO_RV = 3 # Wait mode INFINITE_WAIT = -1 NO_WAIT = 0 # Statistics group STAT_DATABASE = 0 STAT_ATTACHMENT = 1 STAT_TRANSACTION = 2 STAT_STATEMENT = 3 STAT_CALL = 4 # Security database SEC_DEFAULT = 'Default' SEC_SELF = 'Self' SEC_OTHER = 'Other' class Monitor(object): """Class for access to Firebird monitoring tables. """ def __init__(self): self._con = None self._ic = None self.__internal = False self.clear() def __del__(self): if not self.closed: self._close() def __get_closed(self): return self._con is None def __fail_if_closed(self): if self.closed: raise fdb.ProgrammingError("Monitor is not binded to connection.") def _close(self): self._ic.close() self._con = None self._ic = None def _set_as_internal(self): """Mark this instance as `internal` (embedded). 
This blocks calls to :meth:`bind` and :meth:`close`.""" self.__internal = True self._con = weakref.proxy(self._con) #--- protected def _get_database(self): if self.__database is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$database") self.__database = DatabaseInfo(self, self._ic.fetchonemap()) return self.__database def _get_attachments(self): if self.__attachments is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$attachments") self.__attachments = ObjectList((AttachmentInfo(self, row) for row in self._ic.itermap()), AttachmentInfo, 'item.id') else: self.__attachments = ObjectList() self.__attachments.freeze() return self.__attachments def _get_this_attachment(self): return self.get_attachment(self._con.db_info(fdb.isc_info_attachment_id)) def _get_transactions(self): if self.__transactions is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$transactions") self.__transactions = ObjectList((TransactionInfo(self, row) for row in self._ic.itermap()), TransactionInfo, 'item.id') else: self.__transactions = ObjectList() self.__transactions.freeze() return self.__transactions def _get_statements(self): if self.__statements is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$statements") self.__statements = ObjectList((StatementInfo(self, row) for row in self._ic.itermap()), StatementInfo, 'item.id') else: self.__statements = ObjectList() self.__statements.freeze() return self.__statements def _get_callstack(self): if self.__callstack is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_21: self._ic.execute("select * from mon$call_stack") self.__callstack = ObjectList((CallStackInfo(self, row) for row in self._ic.itermap()), CallStackInfo, 'item.id') else: self.__callstack = ObjectList() self.__callstack.freeze() return self.__callstack def _get_iostats(self): if self.__iostats is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_30: self._ic.execute("""SELECT r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, r.MON$RECORD_LOCKS, r.MON$RECORD_WAITS, r.MON$RECORD_CONFLICTS, r.MON$BACKVERSION_READS, r.MON$FRAGMENT_READS, r.MON$RECORD_RPT_READS, io.MON$PAGE_FETCHES, io.MON$PAGE_MARKS, io.MON$PAGE_READS, io.MON$PAGE_WRITES, m.MON$MEMORY_ALLOCATED, m.MON$MEMORY_USED, m.MON$MAX_MEMORY_ALLOCATED, m.MON$MAX_MEMORY_USED FROM MON$RECORD_STATS r join MON$IO_STATS io on r.MON$STAT_ID = io.MON$STAT_ID and r.MON$STAT_GROUP = io.MON$STAT_GROUP join MON$MEMORY_USAGE m on r.MON$STAT_ID = m.MON$STAT_ID and r.MON$STAT_GROUP = m.MON$STAT_GROUP""") elif self._con.ods >= fdb.ODS_FB_25: self._ic.execute("""SELECT r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, io.MON$PAGE_FETCHES, io.MON$PAGE_MARKS, io.MON$PAGE_READS, io.MON$PAGE_WRITES, m.MON$MEMORY_ALLOCATED, m.MON$MEMORY_USED, m.MON$MAX_MEMORY_ALLOCATED, m.MON$MAX_MEMORY_USED FROM MON$RECORD_STATS r join MON$IO_STATS io on r.MON$STAT_ID = io.MON$STAT_ID and r.MON$STAT_GROUP = io.MON$STAT_GROUP join MON$MEMORY_USAGE m on r.MON$STAT_ID = m.MON$STAT_ID and r.MON$STAT_GROUP = m.MON$STAT_GROUP""") elif self._con.ods 
>= fdb.ODS_FB_21: self._ic.execute("""SELECT r.MON$STAT_ID, r.MON$STAT_GROUP, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, io.MON$PAGE_FETCHES, io.MON$PAGE_MARKS, io.MON$PAGE_READS, io.MON$PAGE_WRITES FROM MON$RECORD_STATS r join MON$IO_STATS io on r.MON$STAT_ID = io.MON$STAT_ID and r.MON$STAT_GROUP = io.MON$STAT_GROUP""") if self._con.ods >= fdb.ODS_FB_21: self.__iostats = ObjectList((IOStatsInfo(self, row) for row in self._ic.itermap()), IOStatsInfo, 'item.stat_id') else: self.__iostats = ObjectList() self.__iostats.freeze() return self.__iostats def _get_variables(self): if self.__variables is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_25: self._ic.execute("select * from mon$context_variables") self.__variables = ObjectList((ContextVariableInfo(self, row) for row in self._ic.itermap()), ContextVariableInfo, 'item.stat_id') else: self.__variables = ObjectList() self.__variables.freeze() return self.__variables def _get_tablestats(self): if self.__tablestats is None: self.__fail_if_closed() if self._con.ods >= fdb.ODS_FB_30: self._ic.execute("""SELECT ts.MON$STAT_ID, ts.MON$STAT_GROUP, ts.MON$TABLE_NAME, ts.MON$RECORD_STAT_ID, r.MON$RECORD_SEQ_READS, r.MON$RECORD_IDX_READS, r.MON$RECORD_INSERTS, r.MON$RECORD_UPDATES, r.MON$RECORD_DELETES, r.MON$RECORD_BACKOUTS, r.MON$RECORD_PURGES, r.MON$RECORD_EXPUNGES, r.MON$RECORD_LOCKS, r.MON$RECORD_WAITS, r.MON$RECORD_CONFLICTS, r.MON$BACKVERSION_READS, r.MON$FRAGMENT_READS, r.MON$RECORD_RPT_READS FROM MON$TABLE_STATS ts join MON$RECORD_STATS r on ts.MON$RECORD_STAT_ID = r.MON$STAT_ID""") self.__tablestats = ObjectList((TableStatsInfo(self, row) for row in self._ic.itermap()), TableStatsInfo, 'item.stat_id') else: self.__tablestats = ObjectList() self.__tablestats.freeze() return self.__tablestats #--- Properties #: True if link to :class:`~fdb.Connection` is closed. closed = property(__get_closed) db = LateBindingProperty(_get_database, doc=":class:`DatabaseInfo` object for attached database.") attachments = LateBindingProperty(_get_attachments, doc=":class:`~fdb.utils.ObjectList` of all attachments.\nItems are :class:`AttachmentInfo` objects.") this_attachment = LateBindingProperty(_get_this_attachment, doc=":class:`AttachmentInfo` object for current connection.") transactions = LateBindingProperty(_get_transactions, doc=":class:`~fdb.utils.ObjectList` of all transactions.\nItems are :class:`TransactionInfo` objects.") statements = LateBindingProperty(_get_statements, doc=":class:`~fdb.utils.ObjectList` of all statements.\nItems are :class:`StatementInfo` objects.") callstack = LateBindingProperty(_get_callstack, doc=":class:`~fdb.utils.ObjectList` with complete call stack.\nItems are :class:`CallStackInfo` objects.") iostats = LateBindingProperty(_get_iostats, doc=":class:`~fdb.utils.ObjectList` of all I/O statistics.\nItems are :class:`IOStatsInfo` objects.") variables = LateBindingProperty(_get_variables, doc=":class:`~fdb.utils.ObjectList` of all context variables.\nItems are :class:`ContextVariableInfo` objects.") # FB 3.0 tablestats = LateBindingProperty(_get_tablestats, doc=":class:`~fdb.utils.ObjectList` of all table record I/O statistics.\nItems are :class:`TableStatsInfo` objects.") #--- Public def bind(self, connection): """Bind this instance to specified :class:`~fdb.Connection`. :param connection: :class:`~fdb.Connection` instance. 
:raises fdb.ProgrammingError: If Monitor object was set as internal (via :meth:`_set_as_internal`) or database has ODS lower than 11.1. """ if self.__internal: raise fdb.ProgrammingError("Call to 'bind' not allowed for embedded Monitor.") if self._con: self.close() if connection.ods < fdb.ODS_FB_21: raise fdb.ProgrammingError("Monitoring tables are available only " \ "for databases with ODS 11.1 and higher.") self._con = connection self._ic = self._con.trans(fdb.ISOLATION_LEVEL_READ_COMMITED_RO).cursor() self.clear() def close(self): """Sever link to :class:`~fdb.Connection`. :raises fdb.ProgrammingError: If Monitor object was set as internal (via :meth:`_set_as_internal`). """ if self.__internal: raise fdb.ProgrammingError("Call to 'close' not allowed for embedded Monitor.") self._close() self.clear() def clear(self): """Drop all cached information objects. Force reload of fresh monitoring information on next reference.""" self.__database = None self.__attachments = None self.__transactions = None self.__statements = None self.__callstack = None self.__iostats = None self.__variables = None self.__tablestats = None if not self.closed: self._ic.transaction.commit() def refresh(self): "Reloads fresh monitoring information." self.__fail_if_closed() self._ic.transaction.commit() self.clear() self._get_database() def get_attachment(self, id): """Get :class:`AttachmentInfo` by ID. :param int id: Attachment ID. :returns: :class:`AttachmentInfo` with specified ID or `None`. """ for attachment in self.attachments: if attachment.id == id: return attachment else: return None def get_transaction(self, id): """Get :class:`TransactionInfo` by ID. :param int id: Transaction ID. :returns: :class:`TransactionInfo` with specified ID or `None`. """ for transaction in self.transactions: if transaction.id == id: return transaction else: return None def get_statement(self, id): """Get :class:`StatementInfo` by ID. :param int id: Statement ID. :returns: :class:`StatementInfo` with specified ID or `None`. """ for statement in self.statements: if statement.id == id: return statement else: return None def get_call(self, id): """Get :class:`CallStackInfo` by ID. :param int id: Callstack ID. :returns: :class:`CallStackInfo` with specified ID or `None`. """ for call in self.callstack: if call.id == id: return call else: return None class BaseInfoItem(object): "Base class for all database monitoring objects." #: Weak reference to parent :class:`Monitor` instance. monitor = None def __init__(self, monitor, attributes): self.monitor = monitor if isinstance(monitor, weakref.ProxyType) else weakref.proxy(monitor) self._attributes = dict(attributes) #--- protected def _strip_attribute(self, attr): if self._attributes.get(attr): self._attributes[attr] = self._attributes[attr].strip() #--- Protected def _get_stat_id(self): return self._attributes.get('MON$STAT_ID') #--- properties stat_id = LateBindingProperty(_get_stat_id, doc="Internal ID.") class DatabaseInfo(BaseInfoItem): "Information about attached database." 
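# A minimal usage sketch; 'con' stands for an already opened
# fdb.Connection and is a placeholder, not part of the original source:
#
#     from fdb.monitor import Monitor
#     mon = Monitor()
#     mon.bind(con)
#     print(mon.db.name, mon.db.page_size, mon.db.ods)
#     print('OIT/OAT/OST:', mon.db.oit, mon.db.oat, mon.db.ost)
#     mon.close()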
def __init__(self, monitor, attributes): super(DatabaseInfo, self).__init__(monitor, attributes) self._strip_attribute('MON$DATABASE_NAME') self._strip_attribute('MON$OWNER') self._strip_attribute('MON$SEC_DATABASE') #--- Protected def __get_name(self): return self._attributes['MON$DATABASE_NAME'] def __get_page_size(self): return self._attributes['MON$PAGE_SIZE'] def __get_ods(self): return float('%d.%d' % (self._attributes['MON$ODS_MAJOR'], self._attributes['MON$ODS_MINOR'])) def __get_oit(self): return self._attributes['MON$OLDEST_TRANSACTION'] def __get_oat(self): return self._attributes['MON$OLDEST_ACTIVE'] def __get_ost(self): return self._attributes['MON$OLDEST_SNAPSHOT'] def __get_next_transaction(self): return self._attributes['MON$NEXT_TRANSACTION'] def __get_cache_size(self): return self._attributes['MON$PAGE_BUFFERS'] def __get_sql_dialect(self): return self._attributes['MON$SQL_DIALECT'] def __get_shutdown_mode(self): return self._attributes['MON$SHUTDOWN_MODE'] def __get_sweep_interval(self): return self._attributes['MON$SWEEP_INTERVAL'] def __get_read_only(self): return bool(self._attributes['MON$READ_ONLY']) def __get_forced_writes(self): return bool(self._attributes['MON$FORCED_WRITES']) def __get_reserve_space(self): return bool(self._attributes['MON$RESERVE_SPACE']) def __get_created(self): return self._attributes['MON$CREATION_DATE'] def __get_pages(self): return self._attributes['MON$PAGES'] def __get_backup_state(self): return self._attributes['MON$BACKUP_STATE'] def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_DATABASE): return io return None def __get_crypt_page(self): return self._attributes.get('MON$CRYPT_PAGE') def __get_owner(self): return self._attributes.get('MON$OWNER') def __get_security_database(self): return self._attributes.get('MON$SEC_DATABASE') def __get_tablestats(self): return dict(((io.table_name, io) for io in self.monitor.tablestats if (io.stat_id == self.stat_id) and (io.group == STAT_DATABASE))) #--- properties name = property(__get_name, doc="Database pathname or alias.") page_size = property(__get_page_size, doc="Size of database page in bytes.") ods = property(__get_ods, doc="On-Disk Structure (ODS) version number.") oit = property(__get_oit, doc="Transaction ID of the oldest [interesting] transaction.") oat = property(__get_oat, doc="Transaction ID of the oldest active transaction.") ost = property(__get_ost, doc="Transaction ID of the Oldest Snapshot, i.e., the number of the OAT when the last garbage collection was done.") next_transaction = property(__get_next_transaction, doc="Transaction ID of the next transaction that will be started.") cache_size = property(__get_cache_size, doc="Number of pages allocated in the page cache.") sql_dialect = property(__get_sql_dialect, doc="SQL dialect of the database.") shutdown_mode = property(__get_shutdown_mode, doc="Current shutdown mode.") sweep_interval = property(__get_sweep_interval, doc="The sweep interval configured in the database header. 
" \ "Value 0 indicates that sweeping is disabled.") read_only = property(__get_read_only, doc="True if database is Read Only.") forced_writes = property(__get_forced_writes, doc="True if database uses synchronous writes.") reserve_space = property(__get_reserve_space, doc="True if database reserves space on data pages.") created = property(__get_created, doc="Creation date and time, i.e., when the database was created or last restored.") pages = property(__get_pages, doc="Number of pages allocated on disk.") backup_state = property(__get_backup_state, doc="Current state of database with respect to nbackup physical backup.") iostats = property(__get_iostats, doc=":class:`IOStatsInfo` for this object.") # FB 3.0 crypt_page = property(__get_crypt_page, doc="Number of page being encrypted.") owner = property(__get_owner, doc="User name of database owner.") security_database = property(__get_security_database, doc="Type of security database (Default, Self or Other).") tablestats = property(__get_tablestats, doc="Dictionary of :class:`TableStatsInfo` instances for this object.") class AttachmentInfo(BaseInfoItem): "Information about attachment (connection) to database." def __init__(self, monitor, attributes): super(AttachmentInfo, self).__init__(monitor, attributes) self._strip_attribute('MON$ATTACHMENT_NAME') self._strip_attribute('MON$USER') self._strip_attribute('MON$ROLE') self._strip_attribute('MON$REMOTE_PROTOCOL') self._strip_attribute('MON$REMOTE_ADDRESS') self._strip_attribute('MON$REMOTE_PROCESS') self._strip_attribute('MON$CLIENT_VERSION') self._strip_attribute('MON$REMOTE_VERSION') self._strip_attribute('MON$REMOTE_HOST') self._strip_attribute('MON$REMOTE_OS_USER') self._strip_attribute('MON$AUTH_METHOD') #--- Protected def __get_id(self): return self._attributes['MON$ATTACHMENT_ID'] def __get_server_pid(self): return self._attributes['MON$SERVER_PID'] def __get_state(self): return self._attributes['MON$STATE'] def __get_name(self): return self._attributes['MON$ATTACHMENT_NAME'] def __get_user(self): return self._attributes['MON$USER'] def __get_role(self): return self._attributes['MON$ROLE'] def __get_remote_protocol(self): return self._attributes['MON$REMOTE_PROTOCOL'] def __get_remote_address(self): return self._attributes['MON$REMOTE_ADDRESS'] def __get_remote_pid(self): return self._attributes['MON$REMOTE_PID'] def __get_remote_process(self): return self._attributes['MON$REMOTE_PROCESS'] def __get_character_set(self): return self.monitor._con.schema.get_character_set_by_id(self._attributes['MON$CHARACTER_SET_ID']) def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def _get_transactions(self): return self.monitor.transactions.filter(lambda s: s._attributes['MON$ATTACHMENT_ID'] == self.id) def _get_statements(self): return self.monitor.statements.filter(lambda s: s._attributes['MON$ATTACHMENT_ID'] == self.id) def _get_variables(self): return self.monitor.variables.filter(lambda s: s._attributes['MON$ATTACHMENT_ID'] == self.id) def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_ATTACHMENT): return io return None def __get_auth_method(self): return self._attributes.get('MON$AUTH_METHOD') def __get_client_version(self): return self._attributes.get('MON$CLIENT_VERSION') def __get_remote_version(self): return self._attributes.get('MON$REMOTE_VERSION') def __get_remote_os_user(self): return self._attributes.get('MON$REMOTE_OS_USER') def __get_remote_host(self): return self._attributes.get('MON$REMOTE_HOST') 
def __get_system(self): return bool(self._attributes.get('MON$SYSTEM_FLAG')) def __get_tablestats(self): return dict(((io.table_name, io) for io in self.monitor.tablestats if (io.stat_id == self.stat_id) and (io.group == STAT_ATTACHMENT))) #--- properties id = property(__get_id, doc="Attachment ID.") server_pid = property(__get_server_pid, doc="Server process ID.") state = property(__get_state, doc="Attachment state (idle/active).") name = property(__get_name, doc="Database pathname or alias.") user = property(__get_user, doc="User name.") role = property(__get_role, doc="Role name.") remote_protocol = property(__get_remote_protocol, doc="Remote protocol name.") remote_address = property(__get_remote_address, doc="Remote address.") remote_pid = property(__get_remote_pid, doc="Remote client process ID.") remote_process = property(__get_remote_process, doc="Remote client process pathname.") character_set = property(__get_character_set, doc=":class:`~fdb.schema.CharacterSet` for this attachment.") timestamp = property(__get_timestamp, doc="Attachment date/time.") transactions = LateBindingProperty(_get_transactions, doc=":class:`~fdb.utils.ObjectList` of transactions associated with attachment.\nItems are :class:`TransactionInfo` objects.") statements = LateBindingProperty(_get_statements, doc=":class:`~fdb.utils.ObjectList` of statements associated with attachment.\nItems are :class:`StatementInfo` objects.") variables = LateBindingProperty(_get_variables, doc=":class:`~fdb.utils.ObjectList` of variables associated with attachment.\nItems are :class:`ContextVariableInfo` objects.") iostats = property(__get_iostats, doc=":class:`IOStatsInfo` for this object.") # FB 3.0 auth_method = property(__get_auth_method, doc="Authentication method.") client_version = property(__get_client_version, doc="Client library version.") remote_version = property(__get_remote_version, doc="Remote protocol version.") remote_os_user = property(__get_remote_os_user, doc="OS user name of client process.") remote_host = property(__get_remote_host, doc="Name of remote host.") system = property(__get_system, None, None, "True for system attachments.") tablestats = property(__get_tablestats, doc="Dictionary of :class:`TableStatsInfo` instances for this object.") #--- Public def isactive(self): "Returns True if attachment is active." return self.state == STATE_ACTIVE def isidle(self): "Returns True if attachment is idle." return self.state == STATE_IDLE def isgcallowed(self): "Returns True if Garbage Collection is enabled for this attachment." return bool(self._attributes['MON$GARBAGE_COLLECTION']) def isinternal(self): "Returns True if attachment is internal system attachment." return bool(self._attributes.get('MON$SYSTEM_FLAG')) def terminate(self): """Terminates client session associated with this attachment. :raises fdb.ProgrammingError: If database has ODS lower than 11.2 or this attachement is current session. """ if self.monitor._con.ods < fdb.ODS_FB_25: raise fdb.ProgrammingError("Attachments could be terminated only " \ "for databases with ODS 11.2 and higher.") elif self is self.monitor.this_attachment: raise fdb.ProgrammingError("Can't terminate current session.") else: self.monitor._ic.execute('delete from mon$attachments where mon$attachment_id = ?', (self.id,)) class TransactionInfo(BaseInfoItem): "Information about transaction." 
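# Sketch: inspecting transactions of the current session through a
# bound Monitor 'mon':
#
#     for tra in mon.this_attachment.transactions:
#         mode = 'read-only' if tra.isreadonly() else 'read/write'
#         print(tra.id, mode, tra.timestamp)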
def __init__(self, monitor, attributes): super(TransactionInfo, self).__init__(monitor, attributes) #--- Protected def __get_id(self): return self._attributes['MON$TRANSACTION_ID'] def __get_attachment(self): return self.monitor.get_attachment(self._attributes['MON$ATTACHMENT_ID']) def __get_state(self): return self._attributes['MON$STATE'] def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def __get_top(self): return self._attributes['MON$TOP_TRANSACTION'] def __get_oldest(self): return self._attributes['MON$OLDEST_TRANSACTION'] def __get_oldest_active(self): return self._attributes['MON$OLDEST_ACTIVE'] def __get_isolation_mode(self): return self._attributes['MON$ISOLATION_MODE'] def __get_lock_timeout(self): return self._attributes['MON$LOCK_TIMEOUT'] def _get_statements(self): return self.monitor.statements.filter(lambda s: s._attributes['MON$TRANSACTION_ID'] == self.id) def _get_variables(self): return self.monitor.variables.filter(lambda s: s._attributes['MON$TRANSACTION_ID'] == self.id) def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_TRANSACTION): return io return None def __get_tablestats(self): return dict(((io.table_name, io) for io in self.monitor.tablestats if (io.stat_id == self.stat_id) and (io.group == STAT_TRANSACTION))) #--- properties id = property(__get_id, doc="Transaction ID.") attachment = property(__get_attachment, doc=":class:`AttachmentInfo` instance to which this transaction belongs.") state = property(__get_state, doc="Transaction state (idle/active).") timestamp = property(__get_timestamp, doc="Transaction start date/time.") top = property(__get_top, doc="Top transaction.") oldest = property(__get_oldest, doc="Oldest transaction (local OIT).") oldest_active = property(__get_oldest_active, doc="Oldest active transaction (local OAT).") isolation_mode = property(__get_isolation_mode, doc="Transaction isolation mode code.") lock_timeout = property(__get_lock_timeout, doc="Lock timeout.") statements = LateBindingProperty(_get_statements, doc=":class:`~fdb.utils.ObjectList` of statements associated with transaction.\nItems are :class:`StatementInfo` objects.") variables = LateBindingProperty(_get_variables, doc=":class:`~fdb.utils.ObjectList` of variables associated with transaction.\nItems are :class:`ContextVariableInfo` objects.") iostats = property(__get_iostats, doc=":class:`IOStatsInfo` for this object.") # FB 3.0 tablestats = property(__get_tablestats, doc="Dictionary of :class:`TableStatsInfo` instances for this object.") #--- Public def isactive(self): "Returns True if transaction is active." return self.state == STATE_ACTIVE def isidle(self): "Returns True if transaction is idle." return self.state == STATE_IDLE def isreadonly(self): "Returns True if transaction is Read Only." return self._attributes['MON$READ_ONLY'] == FLAG_SET #return bool(self._attributes['MON$READ_ONLY']) def isautocommit(self): "Returns True for autocommited transaction." return self._attributes['MON$AUTO_COMMIT'] == FLAG_SET #return bool(self._attributes['MON$AUTO_COMMIT']) def isautoundo(self): "Returns True for transaction with automatic undo." return self._attributes['MON$AUTO_UNDO'] == FLAG_SET #return bool(self._attributes['MON$AUTO_UNDO']) class StatementInfo(BaseInfoItem): "Information about executed SQL statement." 
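# Sketch: listing SQL statements currently executing on the server
# through a bound Monitor 'mon':
#
#     for stmt in mon.statements:
#         if stmt.isactive():
#             print(stmt.id, stmt.attachment.user, stmt.sql_text)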
def __init__(self, monitor, attributes): super(StatementInfo, self).__init__(monitor, attributes) self._strip_attribute('MON$SQL_TEXT') self._strip_attribute('MON$EXPLAINED_PLAN') #--- Protected def __get_id(self): return self._attributes['MON$STATEMENT_ID'] def __get_attachment(self): return self.monitor.get_attachment(self._attributes['MON$ATTACHMENT_ID']) def __get_transaction(self): tr = self._attributes['MON$TRANSACTION_ID'] return None if tr is None else self.monitor.get_transaction(tr) def __get_state(self): return self._attributes['MON$STATE'] def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def __get_sql_text(self): return self._attributes['MON$SQL_TEXT'] def __get_callstack(self): callstack = self.monitor.callstack.filter(lambda x: ((x._attributes['MON$STATEMENT_ID'] == self.id) and (x._attributes['MON$CALLER_ID'] is None))) if len(callstack) > 0: item = callstack[0] while item is not None: caller_id = item.id item = None for x in self.monitor.callstack: if x._attributes['MON$CALLER_ID'] == caller_id: callstack.append(x) item = x break return callstack def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_STATEMENT): return io return None def __get_plan(self): return self._attributes.get('MON$EXPLAINED_PLAN') def __get_tablestats(self): return dict(((io.table_name, io) for io in self.monitor.tablestats if (io.stat_id == self.stat_id) and (io.group == STAT_STATEMENT))) #--- properties id = property(__get_id, doc="Statement ID.") attachment = property(__get_attachment, doc=":class:`AttachmentInfo` instance to which this statement belongs.") transaction = property(__get_transaction, doc=":class:`TransactionInfo` instance to which this statement belongs or None.") state = property(__get_state, doc="Statement state (idle/active).") timestamp = property(__get_timestamp, doc="Statement start date/time.") sql_text = property(__get_sql_text, doc="Statement text, if appropriate.") callstack = property(__get_callstack, doc=":class:`~fdb.utils.ObjectList` with call stack for statement.\nItems are :class:`CallStackInfo` objects.") iostats = property(__get_iostats, doc=":class:`IOStatsInfo` for this object.") # FB 3.0 plan = property(__get_plan, doc="Explained execution plan.") tablestats = property(__get_tablestats, doc="Dictionary of :class:`TableStatsInfo` instances for this object.") #--- Public def isactive(self): "Returns True if statement is active." return self.state == STATE_ACTIVE def isidle(self): "Returns True if statement is idle." return self.state == STATE_IDLE def terminate(self): """Terminates execution of statement. :raises fdb.ProgrammingError: If this attachement is current session. """ if self.attachment == self.monitor.this_attachment: raise fdb.ProgrammingError("Can't terminate statement from current session.") else: self.monitor._ic.execute('delete from mon$statements where mon$statement_id = ?', (self.id,)) class CallStackInfo(BaseInfoItem): "Information about PSQL call (stack frame)." 
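# Sketch: walking the PSQL call stack of a statement; 'stmt' stands for
# a StatementInfo instance, and dbobject resolves to an fdb.schema
# Procedure or Trigger:
#
#     for frame in stmt.callstack:
#         print(frame.dbobject.name, 'line', frame.line, 'column', frame.column)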
def __init__(self, monitor, attributes): super(CallStackInfo, self).__init__(monitor, attributes) self._strip_attribute('MON$OBJECT_NAME') self._strip_attribute('MON$PACKAGE_NAME') #--- Protected def __get_id(self): return self._attributes['MON$CALL_ID'] def __get_statement(self): return self.monitor.get_statement(self._attributes['MON$STATEMENT_ID']) def __get_caller(self): return self.monitor.get_call(self._attributes['MON$CALLER_ID']) def __get_dbobject(self): obj_name = self._attributes['MON$OBJECT_NAME'] obj_type = self._attributes['MON$OBJECT_TYPE'] if obj_type == 5: # procedure return self.monitor._con.schema.get_procedure(obj_name) elif obj_type == 2: # trigger return self.monitor._con.schema.get_trigger(obj_name) else: raise fdb.ProgrammingError("Unrecognized object type '%d'" % obj_type) def __get_timestamp(self): return self._attributes['MON$TIMESTAMP'] def __get_line(self): return self._attributes['MON$SOURCE_LINE'] def __get_column(self): return self._attributes['MON$SOURCE_COLUMN'] def __get_iostats(self): for io in self.monitor.iostats: if (io.stat_id == self.stat_id) and (io.group == STAT_CALL): return io return None def __get_package_name(self): return self._attributes.get('MON$PACKAGE_NAME') #--- properties id = property(__get_id, doc="Call ID.") statement = property(__get_statement, doc="Top-level :class:`StatementInfo` instance to which this call stack entry belongs.") caller = property(__get_caller, doc="Call stack entry (:class:`CallStackInfo`) of the caller.") dbobject = property(__get_dbobject, doc="PSQL object. :class:`~fdb.schema.Procedure` or :class:`~fdb.schema.Trigger` instance.") timestamp = property(__get_timestamp, doc="Request start date/time.") line = property(__get_line, doc="SQL source line number.") column = property(__get_column, doc="SQL source column number.") iostats = property(__get_iostats, doc=":class:`IOStatsInfo` for this object.") # FB 3.0 package_name = property(__get_package_name, doc="Package name.") #--- Public class IOStatsInfo(BaseInfoItem): "Information about page and row level I/O operations, and about memory consumption." 
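# Sketch: reading database-wide page I/O counters through a bound
# Monitor 'mon':
#
#     io = mon.db.iostats
#     print('reads:', io.reads, 'writes:', io.writes)
#     print('fetches:', io.fetches, 'marks:', io.marks)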
def __init__(self, monitor, attributes): super(IOStatsInfo, self).__init__(monitor, attributes) #--- Protected def __get_owner(self): def find(seq): for x in seq: if x.stat_id == self.stat_id: return x return None obj_type = self.group if obj_type == STAT_DATABASE: return self.monitor.db elif obj_type == STAT_ATTACHMENT: return find(self.monitor.attachments) elif obj_type == STAT_TRANSACTION: return find(self.monitor.transactions) elif obj_type == STAT_STATEMENT: return find(self.monitor.statements) elif obj_type == STAT_CALL: return find(self.monitor.callstack) else: raise fdb.ProgrammingError("Unrecognized stat group '%d'" % obj_type) def __get_group(self): return self._attributes['MON$STAT_GROUP'] def __get_reads(self): return self._attributes['MON$PAGE_READS'] def __get_writes(self): return self._attributes['MON$PAGE_WRITES'] def __get_fetches(self): return self._attributes['MON$PAGE_FETCHES'] def __get_marks(self): return self._attributes['MON$PAGE_MARKS'] def __get_seq_reads(self): return self._attributes['MON$RECORD_SEQ_READS'] def __get_idx_reads(self): return self._attributes['MON$RECORD_IDX_READS'] def __get_inserts(self): return self._attributes['MON$RECORD_INSERTS'] def __get_updates(self): return self._attributes['MON$RECORD_UPDATES'] def __get_deletes(self): return self._attributes['MON$RECORD_DELETES'] def __get_backouts(self): return self._attributes['MON$RECORD_BACKOUTS'] def __get_purges(self): return self._attributes['MON$RECORD_PURGES'] def __get_expunges(self): return self._attributes['MON$RECORD_EXPUNGES'] def __get_memory_used(self): return self._attributes.get('MON$MEMORY_USED') def __get_memory_allocated(self): return self._attributes.get('MON$MEMORY_ALLOCATED') def __get_max_memory_used(self): return self._attributes.get('MON$MAX_MEMORY_USED') def __get_max_memory_allocated(self): return self._attributes.get('MON$MAX_MEMORY_ALLOCATED') def __get_locks(self): return self._attributes.get('MON$RECORD_LOCKS') def __get_waits(self): return self._attributes.get('MON$RECORD_WAITS') def __get_conflits(self): return self._attributes.get('MON$RECORD_CONFLICTS') def __get_backversion_reads(self): return self._attributes.get('MON$BACKVERSION_READS') def __get_fragment_reads(self): return self._attributes.get('MON$FRAGMENT_READS') def __get_repeated_reads(self): return self._attributes.get('MON$RECORD_RPT_READS') #--- properties owner = property(__get_owner, doc="""Object that owns this IOStats instance. 
Could be either :class:`DatabaseInfo`, :class:`AttachmentInfo`, :class:`TransactionInfo`, :class:`StatementInfo` or :class:`CallStackInfo` instance.""") group = property(__get_group, doc="Object group code.") reads = property(__get_reads, doc="Number of page reads.") writes = property(__get_writes, doc="Number of page writes.") fetches = property(__get_fetches, doc="Number of page fetches.") marks = property(__get_marks, doc="Number of pages with changes pending.") seq_reads = property(__get_seq_reads, doc="Number of records read sequentially.") idx_reads = property(__get_idx_reads, doc="Number of records read via an index.") inserts = property(__get_inserts, doc="Number of inserted records.") updates = property(__get_updates, doc="Number of updated records.") deletes = property(__get_deletes, doc="Number of deleted records.") backouts = property(__get_backouts, doc="Number of records where a new primary record version or a change to " \ "an existing primary record version is backed out due to rollback or " \ "savepoint undo.") purges = property(__get_purges, doc="Number of records where record version chain is being purged of " \ "versions no longer needed by OAT or younger transactions.") expunges = property(__get_expunges, doc="Number of records where record version chain is being deleted due to " \ "deletions by transactions older than OAT.") memory_used = property(__get_memory_used, doc="Number of bytes currently in use.") memory_allocated = property(__get_memory_allocated, doc="Number of bytes currently allocated at the OS level.") max_memory_used = property(__get_max_memory_used, doc="Maximum number of bytes used by this object.") max_memory_allocated = property(__get_max_memory_allocated, doc="Maximum number of bytes allocated from the operating system by this object.") # FB 3.0 locks = property(__get_locks, doc="Number of record locks.") waits = property(__get_waits, doc="Number of record waits.") conflits = property(__get_conflits, doc="Number of record conflits.") backversion_reads = property(__get_backversion_reads, doc="Number of record backversion reads.") fragment_reads = property(__get_fragment_reads, doc="Number of record fragment reads.") repeated_reads = property(__get_repeated_reads, doc="Number of repeated record reads.") #--- Public class TableStatsInfo(BaseInfoItem): "Information about row level I/O operations on single table." 
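# Sketch: per-table record counters, populated only on Firebird 3.0+
# (the dictionary is empty on older servers); 'mon' is a bound Monitor:
#
#     for name, ts in mon.db.tablestats.items():
#         print(name, ts.seq_reads, ts.idx_reads, ts.inserts, ts.deletes)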
def __init__(self, monitor, attributes): super(TableStatsInfo, self).__init__(monitor, attributes) self._strip_attribute('MON$TABLE_NAME') #--- Protected def __get_owner(self): def find(seq): for x in seq: if x.stat_id == self.stat_id: return x return None obj_type = self.group if obj_type == STAT_DATABASE: return self.monitor.db elif obj_type == STAT_ATTACHMENT: return find(self.monitor.attachments) elif obj_type == STAT_TRANSACTION: return find(self.monitor.transactions) elif obj_type == STAT_STATEMENT: return find(self.monitor.statements) elif obj_type == STAT_CALL: return find(self.monitor.callstack) else: raise fdb.ProgrammingError("Unrecognized table stat group '%d'" % obj_type) def __get_row_stat_id(self): return self._attributes['MON$RECORD_STAT_ID'] def __get_table_name(self): return self._attributes['MON$TABLE_NAME'] def __get_group(self): return self._attributes['MON$STAT_GROUP'] def __get_seq_reads(self): return self._attributes['MON$RECORD_SEQ_READS'] def __get_idx_reads(self): return self._attributes['MON$RECORD_IDX_READS'] def __get_inserts(self): return self._attributes['MON$RECORD_INSERTS'] def __get_updates(self): return self._attributes['MON$RECORD_UPDATES'] def __get_deletes(self): return self._attributes['MON$RECORD_DELETES'] def __get_backouts(self): return self._attributes['MON$RECORD_BACKOUTS'] def __get_purges(self): return self._attributes['MON$RECORD_PURGES'] def __get_expunges(self): return self._attributes['MON$RECORD_EXPUNGES'] def __get_locks(self): return self._attributes['MON$RECORD_LOCKS'] def __get_waits(self): return self._attributes['MON$RECORD_WAITS'] def __get_conflits(self): return self._attributes['MON$RECORD_CONFLICTS'] def __get_backversion_reads(self): return self._attributes['MON$BACKVERSION_READS'] def __get_fragment_reads(self): return self._attributes['MON$FRAGMENT_READS'] def __get_repeated_reads(self): return self._attributes['MON$RECORD_RPT_READS'] #--- properties owner = property(__get_owner, doc="""Object that owns this TableStats instance. 
Could be either :class:`DatabaseInfo`, :class:`AttachmentInfo`, :class:`TransactionInfo`, :class:`StatementInfo` or :class:`CallStackInfo` instance.""") row_stat_id = property(__get_row_stat_id, doc="Internal ID.") table_name = property(__get_table_name, doc="Table name.") group = property(__get_group, doc="Object group code.") seq_reads = property(__get_seq_reads, doc="Number of records read sequentially.") idx_reads = property(__get_idx_reads, doc="Number of records read via an index.") inserts = property(__get_inserts, doc="Number of inserted records.") updates = property(__get_updates, doc="Number of updated records.") deletes = property(__get_deletes, doc="Number of deleted records.") backouts = property(__get_backouts, doc="Number of records where a new primary record version or a change to " \ "an existing primary record version is backed out due to rollback or " \ "savepoint undo.") purges = property(__get_purges, doc="Number of records where record version chain is being purged of " \ "versions no longer needed by OAT or younger transactions.") expunges = property(__get_expunges, doc="Number of records where record version chain is being deleted due to deletions by transactions older than OAT.") locks = property(__get_locks, doc="Number of record locks.") waits = property(__get_waits, doc="Number of record waits.") conflits = property(__get_conflits, doc="Number of record conflits.") backversion_reads = property(__get_backversion_reads, doc="Number of record backversion reads.") fragment_reads = property(__get_fragment_reads, doc="Number of record fragment reads.") repeated_reads = property(__get_repeated_reads, doc="Number of repeated record reads.") #--- Public class ContextVariableInfo(BaseInfoItem): "Information about context variable." def __init__(self, monitor, attributes): super(ContextVariableInfo, self).__init__(monitor, attributes) self._strip_attribute('MON$VARIABLE_NAME') self._strip_attribute('MON$VARIABLE_VALUE') #--- Protected def __get_attachment(self): return self.monitor.get_attachment(self._attributes['MON$ATTACHMENT_ID']) def __get_transaction(self): tr = self._attributes['MON$TRANSACTION_ID'] return None if tr is None else self.monitor.get_transaction(tr) def __get_name(self): return self._attributes['MON$VARIABLE_NAME'] def __get_value(self): return self._attributes['MON$VARIABLE_VALUE'] #--- properties attachment = property(__get_attachment, doc=":class:`AttachmentInfo` instance to which this context variable belongs or None.") transaction = property(__get_transaction, doc=":class:`TransactionInfo` instance to which this context variable belongs or None.") name = property(__get_name, doc="Context variable name.") value = property(__get_value, doc="Value of context variable.") #--- Public def isattachmentvar(self): "Returns True if variable is associated to attachment context." return self._attributes['MON$ATTACHMENT_ID'] is not None def istransactionvar(self): "Returns True if variable is associated to transaction context." 
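# Sketch: variables set via RDB$SET_CONTEXT('USER_SESSION', ...) show
# up here; 'mon' is a bound Monitor:
#
#     for var in mon.this_attachment.variables:
#         print(var.name, '=', var.value)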
return self._attributes['MON$TRANSACTION_ID'] is not None
fdb-2.0.0/PKG-INFO0000644000175000001440000001036413270606076014014 0ustar pcisarusers00000000000000[content identical to fdb.egg-info/PKG-INFO above]
fdb-2.0.0/setup.cfg0000644000175000001440000000026413270606076014536 0ustar pcisarusers00000000000000[egg_info]
tag_build =
tag_date = 0
tag_svn_revision = 0

[build_sphinx]
source-dir = sphinx
all-files = True

[metadata]
license_file = LICENSE.TXT

[bdist_wheel]
universal = 1
fdb-2.0.0/setup.py0000755000175000001440000000423413270573665014432 0ustar pcisarusers00000000000000#coding:utf-8
"""A setuptools based setup module for FDB package.

See:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""

# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path

from fdb import __version__

here = path.abspath(path.dirname(__file__))

# Get the long description from the README file
with open(path.join(here, 'README.rst'), encoding='utf-8') as f:
    long_description = f.read()

# Arguments marked as "Required" below must be included for upload to PyPI.
# Fields marked as "Optional" may be commented out.
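# Illustrative invocations (standard setuptools and pip commands):
#
#     python setup.py sdist bdist_wheel   # build source and wheel archives
#     pip install fdb                     # install the published package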
setup( name='fdb', version=__version__, description='The Python driver for Firebird', long_description=long_description, long_description_content_type='text/x-rst', url='http://www.firebirdsql.org/en/devel-python-driver/', author='Pavel Císař', author_email='pcisar@users.sourceforge.net', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Operating System :: POSIX :: Linux', 'Operating System :: Microsoft :: Windows', 'Operating System :: MacOS', 'Topic :: Database', ], keywords='Firebird', # Optional packages=find_packages(), # Required install_requires=['future>=0.16.0'], # Optional python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4', test_suite='nose.collector', project_urls={ 'Documentation': 'http://fdb2.readthedocs.io/en/latest/', 'Bug Reports': 'http://tracker.firebirdsql.org/browse/PYFB', 'Funding': 'https://www.firebirdsql.org/en/donate/', 'Say Thanks!': 'https://saythanks.io/to/pcisar', 'Source': 'https://github.com/FirebirdSQL/fdb', }, ) fdb-2.0.0/requirements.txt0000644000175000001440000000005513270374110016166 0ustar pcisarusers00000000000000future>=0.16.0 sphinx-bootstrap-theme==0.6.5 fdb-2.0.0/LICENSE.TXT0000644000175000001440000000452513167442200014374 0ustar pcisarusers00000000000000======= LICENSE ======= The following contributors hold Copyright (c) over their respective portions of code and documentation: The main portion of initial code (~95%); Current maintainer: Pavel Cisar Contributors: [Initial trace API & nbackup support; Python 3 support] Philippe Makowski Some code in inital version is Python code from KInterbasDB 3.3.0. As it's very hard to find out who exactly was the original author of used code, here is the full list of KInterbasDB contributors: [Author of original version; maintained through version 2.0:] 1998-2001 [alex] Alexander Kuznetsov [Author of ~90% of current code, most of current documentation; maintained through version 3.3:] 2002-2007 [dsr] David S. Rushby [Finishing touch to v3.3; New Documentation; Current maintainer:] 2008-2011 [paci] Pavel Cisar [Significant Contributors:] 2001-2002 [maz] Marek Isalski Marek made important first steps in removing the limitations of version 2.0 in preparation for version 3.0. 2001 [eac] Evgeny A. Cherkashin Evgeny wrote the first version of the distutils build script, which was included in a 2.x point release. 2001-2002 [janez] Janez Jere Janez contributed several bugfixes, including fixes for the date and time parameter conversion code in preparation for version 3.0. Permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee or royalty is hereby granted, provided that the above copyright notice appears in all copies and that both the copyright notice and this permission notice appear in supporting documentation or portions thereof, including modifications, that you make. The authors disclaim all warranties with regard to this software, including all implied warranties of merchantability and fitness. 
In no event shall any author be liable for any special, indirect or consequential damages or any damages whatsoever resulting from loss of use, data or profits, whether in an action of contract, negligence or other tortious action, arising out of or in connection with the use or performance of this software.
fdb-2.0.0/.gitignore0000644000175000001440000000020213270362207014670 0ustar pcisarusers00000000000000# general things to ignore
build/
dist/
docs/
sphinx/.build/
*.egg-info/
*.egg
*.py[cod]
__pycache__/
*.so
*~
*.wpr
*.wpu
*.log
fdb-2.0.0/sphinx/0000755000175000001440000000000013270606076014224 5ustar pcisarusers00000000000000fdb-2.0.0/sphinx/.static/0000755000175000001440000000000013270606076015571 5ustar pcisarusers00000000000000fdb-2.0.0/sphinx/.static/dialog-warning.png0000644000175000001440000000255713270365332021207 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/.static/dialog-note.png0000644000175000001440000000305613270365332020502 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/.static/dialog-seealso.png0000644000175000001440000000273613270365332021174 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/.static/dialog-topic.png [tar header garbled; binary PNG image data omitted]
fdb-2.0.0/sphinx/.static/basic.css [tar header garbled; beginning of stylesheet lost in binary residue]
tbody > tr > td > ul {
    padding-left: 0em;
}

table.indextable tr.pcap {
    height: 10px;
}

table.indextable tr.cap {
    margin-top: 10px;
    background-color: #f2f2f2;
}

img.toggler {
    margin-right: 3px;
    margin-top: 3px;
    cursor: pointer;
}

div.modindex-jumpbox {
    border-top: 1px solid #ddd;
    border-bottom: 1px solid #ddd;
    margin: 1em 0 1em 0;
    padding: 0.4em;
}

div.genindex-jumpbox {
    border-top: 1px solid #ddd;
    border-bottom: 1px solid #ddd;
    margin: 1em 0 1em 0;
    padding: 0.4em;
}

/* -- domain module index --------------------------------------------------- */

table.modindextable td {
    padding: 2px;
    border-collapse: collapse;
}

/* -- general body styles --------------------------------------------------- */

div.body {
    min-width: 450px;
    max-width: 800px;
}

div.body p, div.body dd, div.body li, div.body blockquote {
    -moz-hyphens: auto;
    -ms-hyphens: auto;
    -webkit-hyphens: auto;
    hyphens: auto;
}

a.headerlink {
    visibility: hidden;
}

h1:hover > a.headerlink,
h2:hover > a.headerlink,
h3:hover > a.headerlink,
h4:hover > a.headerlink,
h5:hover > a.headerlink,
h6:hover > a.headerlink,
dt:hover > a.headerlink,
caption:hover > a.headerlink,
p.caption:hover > a.headerlink,
div.code-block-caption:hover > a.headerlink {
    visibility: visible;
}

div.body p.caption {
    text-align: inherit;
}

div.body td {
    text-align: left;
}

.first {
    margin-top: 0 !important;
}

p.rubric {
    margin-top: 30px;
    font-weight: bold;
}

img.align-left, .figure.align-left, object.align-left {
    clear: left;
    float: left;
    margin-right: 1em;
}

img.align-right, .figure.align-right, object.align-right {
    clear: right;
    float: right;
    margin-left: 1em;
}

img.align-center, .figure.align-center, object.align-center {
    display: block;
    margin-left: auto;
    margin-right: auto;
}

.align-left {
    text-align: left;
}

.align-center {
    text-align: center;
}

.align-right {
    text-align: right;
}

/* -- sidebars -------------------------------------------------------------- */

div.sidebar {
    margin: 0 0 0.5em 1em;
    border: 1px solid #ddb;
    padding: 7px 7px 0 7px;
    background-color: #ffe;
    width: 40%;
    float: right;
}

p.sidebar-title {
    font-weight: bold;
}

/* -- topics ---------------------------------------------------------------- */

div.topic {
    border: 1px solid #ccc;
    padding: 7px 7px 0 7px;
    margin: 10px 0 10px 0;
}

p.topic-title {
    font-size: 1.1em;
    font-weight: bold;
    margin-top: 10px;
}

/* -- admonitions ----------------------------------------------------------- */

div.admonition {
    margin-top: 10px;
    margin-bottom: 10px;
    padding: 7px;
}

div.admonition dt {
    font-weight: bold;
}

div.admonition dl {
    margin-bottom: 0;
}

p.admonition-title {
    margin: 0px 10px 5px 0px;
    font-weight: bold;
}

div.body p.centered {
    text-align: center;
    margin-top: 25px;
}

/* -- tables ---------------------------------------------------------------- */

table.docutils {
    border: 0;
    border-collapse: collapse;
}

table.align-center {
    margin-left: auto;
    margin-right: auto;
}

table caption span.caption-number {
    font-style: italic;
}

table caption span.caption-text {
}

table.docutils td, table.docutils th {
    padding: 1px 8px 1px 5px;
    border-top: 0;
    border-left: 0;
border-right: 0; border-bottom: 1px solid #aaa; } table.footnote td, table.footnote th { border: 0 !important; } th { text-align: left; padding-right: 5px; } table.citation { border-left: solid 1px gray; margin-left: 1px; } table.citation td { border-bottom: none; } /* -- figures --------------------------------------------------------------- */ div.figure { margin: 0.5em; padding: 0.5em; } div.figure p.caption { padding: 0.3em; } div.figure p.caption span.caption-number { font-style: italic; } div.figure p.caption span.caption-text { } /* -- field list styles ----------------------------------------------------- */ table.field-list td, table.field-list th { border: 0 !important; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } .field-name { -moz-hyphens: manual; -ms-hyphens: manual; -webkit-hyphens: manual; hyphens: manual; } /* -- other body styles ----------------------------------------------------- */ ol.arabic { list-style: decimal; } ol.loweralpha { list-style: lower-alpha; } ol.upperalpha { list-style: upper-alpha; } ol.lowerroman { list-style: lower-roman; } ol.upperroman { list-style: upper-roman; } dl { margin-bottom: 15px; } dd p { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, span.highlighted { background-color: #fbe54e; } rect.highlighted { fill: #fbe54e; } dl.glossary dt { font-weight: bold; font-size: 1.1em; } .optional { font-size: 1.3em; } .sig-paren { font-size: larger; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa; } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } .guilabel, .menuselection { font-family: sans-serif; } .accelerator { text-decoration: underline; } .classifier { font-style: oblique; } abbr, acronym { border-bottom: dotted 1px; cursor: help; } /* -- code displays --------------------------------------------------------- */ pre { overflow: auto; overflow-y: hidden; /* fixes display issues on Chrome browsers */ } span.pre { -moz-hyphens: none; -ms-hyphens: none; -webkit-hyphens: none; hyphens: none; } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } div.code-block-caption { padding: 2px 5px; font-size: small; } div.code-block-caption code { background-color: transparent; } div.code-block-caption + div > div.highlight > pre { margin-top: 0; } div.code-block-caption span.caption-number { padding: 0.1em 0.3em; font-style: italic; } div.code-block-caption span.caption-text { } div.literal-block-wrapper { padding: 1em 1em 0; } div.literal-block-wrapper div.highlight { margin: 0; } code.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; border-style: none; padding: 0; } code.descclassname { background-color: transparent; border-style: none; padding: 0; } code.xref, a code { background-color: transparent; font-weight: bold; border-style: none; padding: 0; } h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { background-color: transparent; } .viewcode-link { float: right; } .viewcode-back { float: right; font-family: sans-serif; } div.viewcode-block:target { margin: -1px -10px; padding: 0 10px; } /* -- math display 
---------------------------------------------------------- */ img.math { vertical-align: middle; } div.body div.math p { text-align: center; } span.eqno { float: right; } span.eqno a.headerlink { position: relative; left: 0px; z-index: 1; } div.math:hover a.headerlink { visibility: visible; } /* -- printout stylesheet --------------------------------------------------- */ @media print { div.document, div.documentwrapper, div.bodywrapper { margin: 0 !important; width: 100%; } div.sphinxsidebar, div.related, div.footer, #top-link { display: none; } } /* -- My additions ---------------------------------------------------------- */ div.note { color: black; border: 2px solid #7a9eec; border-right-style: none; border-left-style: none; padding: 10px 20px 10px 60px; background: #e1ecfe url(dialog-note.png) no-repeat 10px 8px; } div.danger { color: black; border: 2px solid #fbc2c4; border-right-style: none; border-left-style: none; padding: 10px 20px 10px 60px; background: #fbe3e4 url(dialog-note.png) no-repeat 10px 8px; } div.attention { color: black; border: 2px solid #ffd324; border-right-style: none; border-left-style: none; padding: 10px 20px 10px 60px; background: #fff6bf url(dialog-note.png) no-repeat 10px 8px; } div.caution { color: black; border: 2px solid #ffd324; border-right-style: none; border-left-style: none; padding: 10px 20px 10px 60px; background: #fff6bf url(dialog-warning.png) no-repeat 10px 8px; } div.important { color: black; background: #fbe3e4 url(dialog-seealso.png) no-repeat 10px 8px; border: 2px solid #fbc2c4; border-left-style: none; border-right-style: none; padding: 10px 20px 10px 60px; } div.seealso { color: black; background: #fff6bf url(dialog-seealso.png) no-repeat 10px 8px; border: 2px solid #ffd324; border-left-style: none; border-right-style: none; padding: 10px 20px 10px 60px; } div.hint, div.tip { color: black; background: #eeffcc url(dialog-topic.png) no-repeat 10px 8px; border: 2px solid #aacc99; border-left-style: none; border-right-style: none; padding: 10px 20px 10px 60px; } div.warning, div.error { color: black; background: #fbe3e4 url(dialog-warning.png) no-repeat 10px 8px; border: 2px solid #fbc2c4; border-right-style: none; border-left-style: none; padding: 10px 20px 10px 60px; } p { text-align: justify; padding-bottom: 5px; } h1 { background: #fff6bf; border: 2px solid #ffd324; border-left-style: none; border-right-style: none; padding: 10px 10px 10px 10px; text-align: center; } h2 { /* background: #eeffcc; */ border: 2px solid #aacc99; border-left-style: none; border-right-style: none; border-top-style: none; padding: 10px 0px 0px 0px; /* text-align: center; */ } h3 { /* background: #eeffcc; */ border: 1px solid #7a9eec; border-left-style: none; border-right-style: none; border-top-style: none; padding: 0; /* text-align: center; */ } h4 { background: #eeffcc; /* border: 1px solid #aacc99; */ border-left-style: none; border-right-style: none; border-top-style: none; padding: 5px 5px 5px 5px; /* text-align: center; */ } fdb-2.0.0/sphinx/index.txt0000644000175000001440000000340613270365516016100 0ustar pcisarusers00000000000000 #################################### FDB - The Python driver for Firebird #################################### FDB is a `Python`_ library package that implements `Python Database API 2.0`_-compliant support for the open source relational database `Firebird`_ ®. 
In addition to the minimal feature set of the standard Python DB API, FDB also exposes the entire native (old-style) client API of the database engine and a number of additional extensions and enhancements for convenient use of Firebird.

FDB is developed `under`_ the `Firebird Project`_, and is used internally as a key component of `Firebird QA `__.

FDB is free -- covered by a permissive BSD-style `license `__ that both commercial and noncommercial users should find agreeable.

This documentation set is not a tutorial on Python, SQL, or Firebird; rather, it is a topical presentation of FDB's feature set, with example code to demonstrate basic usage patterns. For detailed information about Firebird features, see the `Firebird documentation `__, and especially the excellent `The Firebird Book `__ written by Helen Borrie and published by IBPhoenix_.

Content
*******

.. toctree::
   :maxdepth: 2

   getting-started
   usage-guide
   python-db-api-compliance
   reference
   changelog
   license

Indices and tables
******************

* :ref:`genindex`
* :ref:`modindex`

.. _IBPhoenix: http://www.ibphoenix.com
.. _Python: http://python.org
.. _Python Database API 2.0: http://www.python.org/dev/peps/pep-0249/
.. _Firebird: http://www.firebirdsql.org
.. _under: http://www.firebirdsql.org/en/devel-python-driver/
.. _Firebird Project: http://www.firebirdsql.org
fdb-2.0.0/sphinx/getting-started.txt0000644000175000001440000002140113270362207020063 0ustar pcisarusers00000000000000
###############
Getting Started
###############

Installation
************

FDB is written as a pure-Python module (requires Python 2.7 or 3.4+) on top of the Firebird client library (fbclient.so/dll) using ctypes_, so **make sure you have the Firebird client properly installed before you try to install FDB**, otherwise the installation will fail. FDB supports Firebird version 2.0 and higher.

FDB is distributed as a `setuptools`_ package and the preferred installation method is via the pip_ tool.

Installation from PYPI_
=======================

Run pip::

   $ pip install fdb

Installation from source
========================

Download the source tarball, uncompress it, then run the install command::

   $ tar -xzvf fdb-2.0.0.tar.gz
   $ cd fdb-2.0.0
   $ python setup.py install

.. _setuptools: https://pypi.org/project/setuptools/
.. _PYPI: https://pypi.org/
.. _ctypes: http://docs.python.org/library/ctypes.html
.. _pip: https://pypi.org/project/pip/

Quick-start Guide
*****************

This brief tutorial aims to get the reader started by demonstrating elementary usage of FDB. It is not a comprehensive Python Database API tutorial, nor is it comprehensive in its coverage of anything else.

The numerous advanced features of FDB are covered in another section of this documentation, which is not in a tutorial format, though it is replete with examples.

Connecting to a Database
========================

**Example 1**

A database connection is typically established with code such as this:

.. sourcecode:: python

    import fdb

    # The server is named 'bison'; the database file is at '/temp/test.db'.
    con = fdb.connect(dsn='bison:/temp/test.db', user='sysdba', password='pass')

    # Or, equivalently:
    con = fdb.connect(
        host='bison', database='/temp/test.db',
        user='sysdba', password='pass'
      )

**Example 2**

Suppose we want to connect to the database in SQL Dialect 1 and specify UTF-8 as the character set of the connection:

..
sourcecode:: python import fdb con = fdb.connect( dsn='bison:/temp/test.db', user='sysdba', password='pass', dialect=1, # necessary for all dialect 1 databases charset='UTF8' # specify a character set for the connection ) Executing SQL Statements ======================== For this section, suppose we have a table defined and populated by the following SQL code: .. sourcecode:: sql create table languages ( name varchar(20), year_released integer ); insert into languages (name, year_released) values ('C', 1972); insert into languages (name, year_released) values ('Python', 1991); **Example 1** This example shows the *simplest* way to print the entire contents of the `languages` table: .. sourcecode:: python import fdb con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') # Create a Cursor object that operates in the context of Connection con: cur = con.cursor() # Execute the SELECT statement: cur.execute("select * from languages order by year_released") # Retrieve all rows as a sequence and print that sequence: print cur.fetchall() Sample output: .. sourcecode:: python [('C', 1972), ('Python', 1991)] **Example 2** Here's another trivial example that demonstrates various ways of fetching a single row at a time from a `SELECT`-cursor: .. sourcecode:: python import fdb con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') cur = con.cursor() SELECT = "select name, year_released from languages order by year_released" # 1. Iterate over the rows available from the cursor, unpacking the # resulting sequences to yield their elements (name, year_released): cur.execute(SELECT) for (name, year_released) in cur: print '%s has been publicly available since %d.' % (name, year_released) # 2. Equivalently: cur.execute(SELECT) for row in cur: print '%s has been publicly available since %d.' % (row[0], row[1]) # 3. Using mapping-iteration rather than sequence-iteration: cur.execute(SELECT) for row in cur.itermap(): print '%(name)s has been publicly available since %(year_released)d.' % row Sample output: .. sourcecode:: python C has been publicly available since 1972. Python has been publicly available since 1991. C has been publicly available since 1972. Python has been publicly available since 1991. C has been publicly available since 1972. Python has been publicly available since 1991. **Example 3** The following program is a simplistic table printer (applied in this example to `languages`): .. sourcecode:: python import fdb TABLE_NAME = 'languages' SELECT = 'select * from %s order by year_released' % TABLE_NAME con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey') cur = con.cursor() cur.execute(SELECT) # Print a header. for fieldDesc in cur.description: print fieldDesc[fdb.DESCRIPTION_NAME].ljust(fieldDesc[fdb.DESCRIPTION_DISPLAY_SIZE]) , print # Finish the header with a newline. print '-' * 78 # For each row, print the value of each field left-justified within # the maximum possible width of that field. fieldIndices = range(len(cur.description)) for row in cur: for fieldIndex in fieldIndices: fieldValue = str(row[fieldIndex]) fieldMaxWidth = cur.description[fieldIndex][fdb.DESCRIPTION_DISPLAY_SIZE] print fieldValue.ljust(fieldMaxWidth) , print # Finish the row with a newline. Sample output: .. sourcecode:: python NAME YEAR_RELEASED ------------------------------------------------------------------------------ C 1972 Python 1991 **Example 4** Let's insert more languages: .. 
sourcecode:: python

    import fdb

    con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey')

    cur = con.cursor()

    newLanguages = [
        ('Lisp', 1958),
        ('Dylan', 1995),
      ]

    cur.executemany("insert into languages (name, year_released) values (?, ?)",
        newLanguages
      )

    # The changes will not be saved unless the transaction is committed explicitly:
    con.commit()

Note the use of a *parameterized* SQL statement above. When dealing with repetitive statements, this is much faster and less error-prone than assembling each SQL statement manually. (You can read more about parameterized SQL statements in the section on :ref:`Prepared Statements <prepared-statements>`.)

After running Example 4, the table printer from Example 3 would print:

.. sourcecode:: python

    NAME                 YEAR_RELEASED
    ------------------------------------------------------------------------------
    Lisp                 1958
    C                    1972
    Python               1991
    Dylan                1995

Calling Stored Procedures
=========================

Firebird supports stored procedures written in a proprietary procedural SQL language. Firebird stored procedures can have *input* parameters and/or *output* parameters. Some databases support *input/output* parameters, where the same parameter is used for both input and output; Firebird does not support this.

It is important to distinguish between procedures that *return a result set* and procedures that *populate and return their output parameters exactly once*. Conceptually, the latter "return their output parameters" like a Python function, whereas the former "yield result rows" like a Python generator.

Firebird's *server-side* procedural SQL syntax makes no such distinction, but *client-side* SQL code (and C API code) must. A result set is retrieved from a stored procedure by `SELECT`-ing from the procedure, whereas output parameters are retrieved with an `EXECUTE PROCEDURE` statement.

To *retrieve a result set* from a stored procedure with FDB, use code such as this:

.. sourcecode:: python

    cur.execute("select output1, output2 from the_proc(?, ?)", (input1, input2))

    # Ordinary fetch code here, such as:
    for row in cur:
        ... # process row

    con.commit() # If the procedure had any side effects, commit them.

To *execute* a stored procedure and *access its output parameters*, use code such as this:

.. sourcecode:: python

    cur.callproc("the_proc", (input1, input2))

    # If there are output parameters, retrieve them as though they were the
    # first row of a result set. For example:
    outputParams = cur.fetchone()

    con.commit() # If the procedure had any side effects, commit them.

This latter is not very elegant; it would be preferable to access the procedure's output parameters as the return value of `Cursor.callproc()`. The Python DB API specification requires the current behavior, however.
fdb-2.0.0/sphinx/usage-guide.txt0000644000175000001440000046427213270362207017176 0ustar pcisarusers00000000000000
===========
Usage Guide
===========

.. currentModule:: fdb

Driver structure
================

Source code is currently divided into the following submodules:

* :mod:`~fdb.ibase` - Python :ref:`ctypes ` interface to the Firebird client library.
* :mod:`~fdb.fbcore` - Main driver source code.
* :mod:`~fdb.services` - Driver code to work with Firebird Services.
* :mod:`~fdb.schema` - Driver code to work with Firebird database schema (metadata).
* :mod:`~fdb.monitor` - Driver code to work with Firebird monitoring tables.
* :mod:`~fdb.trace` - Code for Firebird Trace & Audit processing.
* :mod:`~fdb.gstat` - Code for Firebird gstat output processing.
* :mod:`~fdb.log` - Code for Firebird server log processing.
* :mod:`~fdb.utils` - Various classes and functions used by the driver that are generally useful.
* :mod:`~fdb.blr` - Firebird BLR-related definitions.

All important data, functions, classes and constants are available directly in the fdb namespace, so there is no need to import or use the :mod:`~fdb.fbcore` and :mod:`~fdb.ibase` submodules directly. Other submodules (like the :mod:`fdb.services` submodule that contains functions and classes for working with Firebird Services) contain optional driver functionality that is not exposed directly through the main module namespace. Because the :mod:`~fdb.services` submodule contains names also used by the main driver (:func:`~fdb.services.connect`, :class:`~fdb.services.Connection`), it's advised to use fully qualified names when referring to them instead of importing them via `from fdb.services import ...`.

.. index:: Database

Databases
=========

Access to the database is made available through :class:`Connection` objects. FDB provides two constructors for these:

* :func:`connect` - Returns a `Connection` to a database that already exists.
* :func:`create_database` - Returns a `Connection` to a newly created database.

.. index::
   pair: Database; connect

Using `connect`
---------------

This constructor has a number of keyword parameters that can be divided into several groups:

* Database specification (parameters `dsn`, `host`, `database` and `port`)
* User specification (parameters `user`, `password` and `role`)
* Connection options (parameters `sql_dialect`, `charset`, `isolation_level`, `buffers`, `force_writes`, `no_reserve` and `db_key_scope`)

To establish a connection to a database, you must always specify the database, either as the `connection string` parameter `dsn`, or as the required combination of parameters `host`, `database` and `port`.

.. important::

   The current driver version ignores the value of the `port` parameter. If you need to specify the port number, you have to use the `dsn` parameter instead.

Although specification of the `user` and `password` parameters is optional (if environment variables `ISC_USER` and `ISC_PASSWORD` are set, their values are used if these parameters are omitted), it's recommended practice to use them. The parameter `role` is needed only when you use Firebird roles.

Connection options are optional (see `Firebird Documentation`_ for details). However you may often want to specify `charset`, as it directs automatic conversions of string data between client and server, and automatic conversions from/to unicode performed by the FDB driver (see `Data handling and conversions`_ for details).

**Examples:**

..
code-block:: python

    # Connecting via 'dsn'
    #
    # Local database (local protocol, if supported)
    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')
    # Local database (TCP/IP)
    con = fdb.connect(dsn='localhost:/path/database.fdb', user='sysdba', password='pass')
    # Local database (TCP/IP with port specification)
    con = fdb.connect(dsn='localhost/3050:/path/database.fdb', user='sysdba', password='pass')
    # Remote database
    con = fdb.connect(dsn='host:/path/database.db', user='sysdba', password='pass')
    # Remote database with port specification
    con = fdb.connect(dsn='host/3050:/path/database.db', user='sysdba', password='pass')
    #
    # Connecting via 'database', 'host' and 'port'
    #
    # Local database (local protocol, if supported)
    con = fdb.connect(database='/path/database.db', user='sysdba', password='pass')
    # Local database (TCP/IP)
    con = fdb.connect(host='localhost', database='/path/database.db', user='sysdba', password='pass')
    # Local database (TCP/IP with port specification)
    con = fdb.connect(host='localhost', port=3050, database='/path/database.db', user='sysdba', password='pass')
    # Remote database
    con = fdb.connect(host='myhost', database='/path/database.db', user='sysdba', password='pass')

Since version 1.2 FDB supports additional `Connection` classes that extend :class:`Connection` functionality in an optional (opt-in) way. For example, :class:`ConnectionWithSchema` extends the Connection interface with methods and attributes provided by :class:`~fdb.schema.Schema`. A new `connection_class` parameter was introduced to `connect` and `create_database` to connect to/create a database using a different class that descends from `Connection`.

**Example:**

.. code-block:: python

    # Connecting through ConnectionWithSchema
    #
    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass',
                      connection_class=fdb.ConnectionWithSchema)

.. index::
   pair: Database; create

Using `create_database`
-----------------------

The Firebird engine supports dynamic database creation via the SQL statement `CREATE DATABASE`. FDB wraps it into :func:`create_database`, which returns a `Connection` instance attached to the newly created database.

**Example:**

.. code-block:: python

    con = fdb.create_database("create database 'host:/temp/db.db' user 'sysdba' password 'pass'")

.. note::

   Since version 1.2 FDB supports an additional method for database creation. Instead of the `CREATE DATABASE` SQL statement you can use a number of optional keyword parameters introduced to :func:`create_database`.

**Example:**

.. code-block:: python

    con = fdb.create_database(dsn='/temp/db.fdb',user='sysdba',password='pass',page_size=8192)

.. index::
   pair: Database; delete

Deleting databases
------------------

The Firebird engine also supports dropping (deleting) databases dynamically, but dropping is a more complicated operation than creating, for several reasons: an existing database may be in use by users other than the one who requests the deletion, it may have supporting objects such as temporary sort files, and it may even have dependent shadow databases. Although the database engine recognizes a `DROP DATABASE` SQL statement, support for that statement is limited to the `isql` command-line administration utility. However, the engine supports the deletion of databases via an API call, which FDB exposes as the :meth:`~Connection.drop_database` method of the :class:`Connection` class. So, to drop a database you need to connect to it first.

**Examples:**

..
code-block:: python

    import fdb

    con = fdb.create_database("create database '/temp/db.db' user 'sysdba' password 'pass'")
    con.drop_database()

    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')
    con.drop_database()

.. index::
   pair: Connection; usage

Connection object
-----------------

A :class:`Connection` object represents a direct link to a database, and works as a gateway for the following operations on it:

* `Executing SQL Statements`_: methods :meth:`~Connection.execute_immediate` and :meth:`~Connection.cursor`.
* Dropping a database: method :meth:`~Connection.drop_database`.
* `Transaction management`_: methods :meth:`~Connection.begin`, :meth:`~Connection.commit`, :meth:`~Connection.rollback`, :meth:`~Connection.savepoint`, :meth:`~Connection.trans`, :meth:`~Connection.trans_info` and :meth:`~Connection.transaction_info`, and attributes :attr:`~Connection.main_transaction`, :attr:`~Connection.transactions`, :attr:`~Connection.default_tpb` and :attr:`~Connection.group`.
* Work with `Database Events`_: method :meth:`~Connection.event_conduit`.
* `Getting information about Firebird version`_: attributes :attr:`~Connection.server_version`, :attr:`~Connection.firebird_version`, :attr:`~Connection.version` and :attr:`~Connection.engine_version`.
* `Getting information about database`_: methods :meth:`~Connection.db_info` and :meth:`~Connection.database_info`.
* :ref:`Getting information about database metadata `: attributes :attr:`~Connection.schema` and :attr:`~Connection.ods`.

.. index::
   pair: Firebird; information about

Getting information about Firebird version
------------------------------------------

Because functionality and some features depend on the actual Firebird version, it could be important for FDB users to check it. This (otherwise) simple task could be confusing for new Firebird users, because Firebird uses two different version lineages. This abomination was introduced to Firebird thanks to its InterBase legacy (Firebird 1.0 is a fork of InterBase 6.0), as applications designed to work with InterBase can often work with Firebird without problems (and vice versa). However, legacy applications designed to work with InterBase may stop working properly if they detect an unexpectedly low server version, so the default version number returned by Firebird (and FDB) is based on the InterBase version number. For example, this version for Firebird 2.5.2 is 6.3.2, so the condition for legacy applications that require at least IB 6.0 is met.

FDB provides these version strings as two `Connection` properties:

- :attr:`~Connection.server_version` - Legacy InterBase-friendly version string.
- :attr:`~Connection.firebird_version` - Firebird's own version string.

However, this version string contains more information than the version number. For example, for Linux Firebird 2.5.2 it's 'LI-V2.5.2.26540 Firebird 2.5'. So FDB provides two more properties for your convenience:

- :attr:`~Connection.version` - Only the Firebird version number. It's a string with format: major.minor.subrelease.build
- :attr:`~Connection.engine_version` - Engine (major.minor) version as (float) number.

FDB also provides convenient constants for supported engine versions: `ODS_FB_20`, `ODS_FB_21` and `ODS_FB_25`.
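Because :attr:`~Connection.engine_version` is a plain float, it is handy for simple version gating in client code. A minimal sketch under these assumptions (the connection parameters are placeholders):

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')

    # engine_version is a float like 2.1 or 2.5, so numeric comparison works:
    if con.engine_version >= 2.5:
        print('Server supports Firebird 2.5 features.')
    else:
        print('Running against an older engine: %s' % con.version)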
.. index::
   pair: Database; On-disk Structure

Database On-Disk Structure
--------------------------

Particular Firebird features may also depend on specific support in the database (for example monitoring tables introduced in Firebird 2.1). These required structures are present automatically when the database is created by a particular engine version that needs them, but the Firebird engine may typically work with databases created by older versions and thus with an older structure, so it could be necessary to also consult the On-Disk Structure (ODS for short) version. FDB provides this number as the :attr:`Connection.ods` (float) property.

**Example**:

.. code-block:: python

    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')
    print 'Firebird version:',con.version
    print 'ODS version:',con.ods

::

   Firebird version: 2.5.2.26540
   ODS version: 11.1

In the above example, although the connected Firebird engine is version 2.5, the connected database has ODS 11.1 which came with Firebird 2.1, and some Firebird 2.5 features will not be available on this database.

.. index::
   pair: Database; information about

Getting information about database
----------------------------------

Firebird provides various information about the server and connected database via the `database_info` API call. FDB surfaces this API through the methods :meth:`~Connection.db_info` and :meth:`~Connection.database_info` on the Connection object.

:meth:`Connection.database_info` is a `very thin` wrapper around the function `isc_database_info()`. This method does not attempt to interpret its results except with regard to whether they are a string or an integer. For example, requesting `isc_info_user_names` with the call::

   con.database_info(fdb.isc_info_user_names, 's')

will return a binary string containing a raw succession of length-name pairs.

**Example program:**

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass')

    # Retrieving an integer info item is quite simple.
    bytesInUse = con.database_info(fdb.isc_info_current_memory, 'i')

    print 'The server is currently using %d bytes of memory.' % bytesInUse

    # Retrieving a string info item is somewhat more involved, because the
    # information is returned in a raw binary buffer that must be parsed
    # according to the rules defined in the Interbase® 6 API Guide section
    # entitled "Requesting buffer items and result buffer values" (page 51).
    #
    # Often, the buffer contains a succession of length-string pairs
    # (one byte telling the length of s, followed by s itself).
    # Function fdb.ibase.ord2 is provided to convert a raw
    # byte to a Python integer (see examples below).
    buf = con.database_info(fdb.isc_info_db_id, 's')

    # Parse the filename from the buffer.
    beginningOfFilename = 2
    # The second byte in the buffer contains the size of the database filename
    # in bytes.
    lengthOfFilename = fdb.ibase.ord2(buf[1])
    filename = buf[beginningOfFilename:beginningOfFilename + lengthOfFilename]

    # Parse the host name from the buffer.
    beginningOfHostName = (beginningOfFilename + lengthOfFilename) + 1
    # The first byte after the end of the database filename contains the size
    # of the host name in bytes.
    lengthOfHostName = fdb.ibase.ord2(buf[beginningOfHostName - 1])
    host = buf[beginningOfHostName:beginningOfHostName + lengthOfHostName]

    print 'We are connected to the database at %s on host %s.' % (filename, host)

Sample output::

   The server is currently using 8931328 bytes of memory.
   We are connected to the database at C:\TEMP\TEST.DB on host WEASEL.
A more convenient way to access the same functionality is via the :meth:`~Connection.db_info` method, which is a high-level convenience wrapper around the `database_info()` method that parses the output of database_info into Python-friendly objects instead of returning raw binary buffers in the case of complex result types. For example, requesting `isc_info_user_names` with the call::

   con.db_info(fdb.isc_info_user_names)

returns a dictionary that maps (username -> number of open connections). If SYSDBA has one open connection to the database to which `con` is connected, and TEST_USER_1 has three open connections to that same database, the return value would be::

   {'SYSDBA': 1, 'TEST_USER_1': 3}

**Example program:**

.. code-block:: python

    import fdb
    import os.path

    # Filesystem path of the test database.
    DB_FILENAME = '/temp/test.db'

    ###############################################################################
    # Querying an isc_info_* item that has a complex result:
    ###############################################################################
    # Establish three connections to the test database as TEST_USER_1, and one
    # connection as SYSDBA. Then use the Connection.db_info method to query the
    # number of attachments by each user to the test database.
    testUserCons = []
    for i in range(3):
        tcon = fdb.connect(dsn='localhost:/temp/test.db', user='TEST_USER_1', password='pass')
        testUserCons.append(tcon)

    con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass')

    print 'Open connections to this database:'
    print con.db_info(fdb.isc_info_user_names)

    ###############################################################################
    # Querying multiple isc_info_* items at once:
    ###############################################################################
    # Request multiple db_info items at once, specifically the page size of the
    # database and the number of pages currently allocated. Compare the size
    # computed by that method with the size reported by the file system.
    # The advantages of using db_info instead of the file system to compute
    # database size are:
    #   - db_info works seamlessly on connections to remote databases that reside
    #     in file systems to which the client program lacks access.
    #   - If the database is split across multiple files, db_info includes all of
    #     them.
    res = con.db_info([fdb.isc_info_page_size, fdb.isc_info_allocation])
    pagesAllocated = res[fdb.isc_info_allocation]
    pageSize = res[fdb.isc_info_page_size]
    print '\ndb_info indicates database size is', pageSize * pagesAllocated, 'bytes'
    print 'os.path.getsize indicates size is  ', os.path.getsize(DB_FILENAME), 'bytes'

Sample output::

   Open connections to this database:
   {'SYSDBA': 1, 'TEST_USER_1': 3}

   db_info indicates database size is 20684800 bytes
   os.path.getsize indicates size is  20684800 bytes

.. index:: SQL Statement

Executing SQL Statements
========================

FDB implements two ways to execute SQL commands against a connected database:

* :meth:`~Connection.execute_immediate` - for execution of SQL commands that don't return any result.
* :class:`Cursor` objects that offer a rich interface for execution of SQL commands and fetching their results.

.. index::
   pair: Cursor; usage

Cursor object
-------------

Because `Cursor` objects always operate in the context of a single :class:`Connection` (and :class:`Transaction`), `Cursor` instances are not created directly, but by a constructor method. Python DB API 2.0 assumes that if a database engine supports transactions, it supports only one transaction per connection; hence it defines the constructor method :meth:`~Connection.cursor` (and other transaction-related methods) as part of the :class:`Connection` interface. However, Firebird supports multiple independent transactions per connection. To conform to the Python DB API, FDB uses the concept of an internal :attr:`~Connection.main_transaction` and secondary :attr:`~Connection.transactions`. The Cursor constructor is primarily defined by :class:`Transaction`, and the Cursor constructor on `Connection` is therefore a shortcut for `main_transaction.cursor()`.
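For illustration, a minimal sketch of obtaining cursors from both the main transaction and an independent secondary transaction (the connection parameters are placeholders):

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')

    cur_main = con.cursor()   # shortcut for con.main_transaction.cursor()

    tr = con.trans()          # start an independent secondary transaction
    cur2 = tr.cursor()        # cursor bound to that secondary transaction

    # The two transactions can be committed or rolled back independently:
    tr.commit()
    con.commit()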
`Cursor` objects are used for the following operations:

* Execution of SQL Statements: methods :meth:`~Cursor.execute`, :meth:`~Cursor.executemany` and :meth:`~Cursor.callproc`.
* Creating :class:`PreparedStatement` objects for efficient repeated execution of SQL statements, and to obtain additional information about SQL statements (like execution :attr:`~PreparedStatement.plan`): method :meth:`~Cursor.prep`.
* `Fetching results `_: methods :meth:`~Cursor.fetchone`, :meth:`~Cursor.fetchmany`, :meth:`~Cursor.fetchall`, :meth:`~Cursor.fetchonemap`, :meth:`~Cursor.fetchmanymap`, :meth:`~Cursor.fetchallmap`, :meth:`~Cursor.iter`, :meth:`~Cursor.itermap` and :meth:`~Cursor.next`.

.. index::
   pair: SQL Statement; execution

SQL Execution Basics
--------------------

There are three ways to execute SQL commands:

* :meth:`Connection.execute_immediate` or :meth:`Transaction.execute_immediate` for SQL commands that don't return any result, and are not executed frequently. This method also **doesn't** support either `parametrized statements`_ or `prepared statements`_.

  .. tip::

     This method is efficient for `administrative` and `DDL`_ SQL commands, like `DROP`, `CREATE` or `ALTER` commands, `SET STATISTICS` etc.

* :meth:`Cursor.execute` or :meth:`Cursor.executemany` for commands that return result sets, i.e. a sequence of `rows` of the same structure, where the sequence has an unknown number of `rows` (including zero).

  .. tip::

     This method is preferred for all `SELECT` and other `DML`_ statements, or any statement that is executed frequently, either `as is` or in `parametrized` form.

* :meth:`Cursor.callproc` for execution of `Stored procedures` that always return exactly one set of values.

  .. note::

     This method of SP invocation is equivalent to the `"EXECUTE PROCEDURE ..."` SQL statement.
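For instance, a minimal sketch of :meth:`Connection.execute_immediate` for a one-off DDL command (the table definition is a placeholder):

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='/path/database.fdb', user='sysdba', password='pass')

    # execute_immediate returns no result, which makes it suitable for one-off DDL:
    con.execute_immediate("recreate table cities (name varchar(40), population integer)")
    con.commit()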
.. index::
   pair: SQL Statement; parametrized

.. _parametrized-statements:

Parametrized statements
-----------------------

When the SQL command you want to execute contains data `values`, you can either:

* Embed them `directly` or via `string formatting` into the command `string`, e.g.:

  .. code-block:: python

      cur.execute("insert into the_table (a,b,c) values ('aardvark', 1, 0.1)")
      # or
      cur.execute("select * from the_table where col = 'aardvark'")
      # or
      cur.execute("insert into the_table (a,b,c) values ('%s', %i, %f)" % ('aardvark',1,0.1))
      # or
      cur.execute("select * from the_table where col = '%s'" % 'aardvark')

* Use the parameter marker (`?`) in the command `string` in the slots where values are expected, then supply those values as a Python list or tuple:

  .. code-block:: python

      cur.execute("insert into the_table (a,b,c) values (?,?,?)", ('aardvark', 1, 0.1))
      # or
      cur.execute("select * from the_table where col = ?",('aardvark',))

While both methods have the same results, the second one (called `parametrized`) has several important advantages:

* You don't need to handle conversions from Python data types to strings.
* FDB will handle all data type conversions (if necessary) from Python data types to Firebird ones, including `None/NULL` conversion and conversion from `unicode` to `byte strings` in the encoding expected by the server.
* You may pass BLOB values as open `file-like` objects, and FDB will handle the transfer of the BLOB value.

Parametrized statements also have some limitations (see the sketch below). Currently:

* `DATE`, `TIME` and `DATETIME` values must be relevant :mod:`datetime` objects.
* `NUMERIC` and `DECIMAL` values must be :mod:`decimal` objects.
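A minimal sketch of passing such values, assuming an open connection `con` and cursor `cur` (the table and column names are hypothetical):

.. code-block:: python

    from datetime import datetime
    from decimal import Decimal

    # DATE/TIME/TIMESTAMP parameters as datetime objects,
    # NUMERIC/DECIMAL parameters as Decimal objects:
    cur.execute("insert into orders (placed_on, total) values (?, ?)",
                (datetime(2018, 4, 30, 12, 0), Decimal('19.99')))
    con.commit()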
.. index::
   pair: Cursor; fetching data

Fetching data from server
-------------------------

The result of SQL statement execution consists of a sequence of zero to an unknown number of `rows`, where each `row` is a set of exactly the same number of values. The :class:`Cursor` object offers a number of different methods for fetching these `rows`, which should satisfy all your specific needs:

* :meth:`~Cursor.fetchone` - Returns the next row of a query result set, or `None` when no more data is available.

  .. tip::

     Cursor supports the :ref:`iterator protocol `, yielding tuples of values like :meth:`~Cursor.fetchone`.

* :meth:`~Cursor.fetchmany` - Returns the next set of rows of a query result, returning a sequence of sequences (e.g. a list of tuples). An empty sequence is returned when no more rows are available. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's :attr:`~Cursor.arraysize` determines the number of rows to be fetched. The method does try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned.

  .. note::

     The default value of :attr:`~Cursor.arraysize` is `1`, so without the parameter it's equivalent to :meth:`~Cursor.fetchone`, but returns a list of `rows` instead of the actual `row` directly.

* :meth:`~Cursor.fetchall` - Returns all (remaining) rows of a query result as a list of tuples, where each tuple is one row of returned values.

  .. tip::

     This method can potentially return a huge amount of data that may exhaust available memory. If you just need `iteration` over a potentially big result set, use loops with :meth:`~Cursor.fetchone`, Cursor's built-in support for the :ref:`iterator protocol ` or a call to :meth:`~Cursor.iter` instead of this method.

* :meth:`~Cursor.fetchonemap` - Returns the next row like :meth:`~Cursor.fetchone`, but returns a mapping of `field name` to `field value`, rather than a tuple.
* :meth:`~Cursor.fetchmanymap` - Returns the next set of rows of a query result like :meth:`~Cursor.fetchmany`, but returns a list of mappings of `field name` to `field value`, rather than tuples.
* :meth:`~Cursor.fetchallmap` - Returns all (remaining) rows of a query result like :meth:`~Cursor.fetchall`, but returns a list of mappings of `field name` to `field value`, rather than tuples.

  .. tip::

     This method can potentially return a huge amount of data that may exhaust available memory. If you just need `iteration` over a potentially big result set with mapping support, use :meth:`~Cursor.itermap` instead of this method.

* :meth:`~Cursor.iter` - Equivalent to :meth:`~Cursor.fetchall`, except that it returns an :ref:`iterator ` rather than a materialized list.
* :meth:`~Cursor.itermap` - Equivalent to :meth:`~Cursor.fetchallmap`, except that it returns an :ref:`iterator ` rather than a materialized list.
* A call to :meth:`~Cursor.execute` returns self (the Cursor instance), which itself supports the :ref:`iterator protocol `, yielding tuples of values like :meth:`~Cursor.fetchone`.

.. important::

   FDB makes absolutely no guarantees about the return value of the `fetchone` / `fetchmany` / `fetchall` methods except that it is a sequence indexed by field position. FDB makes absolutely no guarantees about the return value of the `fetchonemap` / `fetchmanymap` / `fetchallmap` methods except that it is a mapping of field name to field value. Therefore, client programmers should not rely on the return value being an instance of a particular class or type.

**Examples:**

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='/temp/test.db', user='sysdba', password='masterkey')

    cur = con.cursor()

    SELECT = "select name, year_released from languages order by year_released"

    # 1. Using built-in support for the iteration protocol to iterate over the rows
    # available from the cursor, unpacking the resulting sequences to yield their
    # elements (name, year_released):
    cur.execute(SELECT)
    for (name, year_released) in cur:
        print '%s has been publicly available since %d.' % (name, year_released)

    # or alternatively you can take advantage of cur.execute returning self.
    for (name, year_released) in cur.execute(SELECT):
        print '%s has been publicly available since %d.' % (name, year_released)

    # 2. Equivalently using fetchall():
    # This is potentially dangerous if the result set is huge, as the whole result
    # set is first materialized as a list and then used for iteration.
    cur.execute(SELECT)
    for row in cur.fetchall():
        print '%s has been publicly available since %d.' % (row[0], row[1])

    # 3. Using mapping-iteration rather than sequence-iteration:
    cur.execute(SELECT)
    for row in cur.itermap():
        print '%(name)s has been publicly available since %(year_released)d.' % row

.. tip::

   :meth:`Cursor.execute` and :meth:`Cursor.executemany` return self, so you can use calls to them as iterators (see example above).

.. index::
   pair: SQL Statement; prepared

.. _prepared-statements:

Prepared Statements
-------------------

Execution of any SQL statement has three phases:

* *Preparation*: the command is analyzed, validated, the execution plan is determined by the optimizer, and all necessary data structures (for example for input and output parameters) are initialized.
* *Execution*: input parameters (if any) are passed to the server and the previously prepared statement is actually executed by the database engine.
* *Fetching*: the result of execution and data (if any) are transferred from the server to the client, and allocated resources are then released.

The preparation phase consumes some amount of server resources (memory and CPU). Although preparation and release of resources typically takes only a small amount of CPU time, it builds up as the number of executed statements grows. Firebird (like most database engines) allows this time to be spared on subsequent executions if a particular statement is to be executed repeatedly - by reusing the once-prepared statement for repeated execution. This may save a significant amount of server processing time, and result in better overall performance.

FDB builds on this by encapsulating all statement-related code into a separate :class:`PreparedStatement` class, and implementing the :class:`Cursor` class as a wrapper around it.
.. warning::

   FDB's implementation of Cursor somewhat violates the Python DB API 2.0, which requires that a cursor be unusable after a call to `close`, and that an Error (or subclass) exception be raised if any operation is attempted with the cursor. If you take advantage of this `anomaly`, your code will be less portable to other Python DB API 2.0 compliant drivers.

Besides SQL command strings, `Cursor` also allows you to acquire and use `PreparedStatement` instances explicitly. A `PreparedStatement` is acquired by calling the :meth:`~Cursor.prep` method, and can then be passed to :meth:`~Cursor.execute` or :meth:`~Cursor.executemany` instead of a `command string`.

**Example:**

.. code-block:: python

    insertStatement = cur.prep("insert into the_table (a,b,c) values (?,?,?)")
    inputRows = [
        ('aardvark', 1, 0.1),
        ('zymurgy', 2147483647, 99999.999),
        ('foobar', 2000, 9.9)
      ]
    for row in inputRows:
        cur.execute(insertStatement,row)
    #
    # or you can use executemany
    #
    cur.executemany(insertStatement, inputRows)

Prepared statements are bound to the `Cursor` instance that created them, and can't be used with any other `Cursor` instance. Besides repeated execution they are also useful for getting information about a statement (like its output :attr:`~PreparedStatement.description`, execution :attr:`~PreparedStatement.plan` or :attr:`~PreparedStatement.statement_type`) before its execution.

**Example Program:**

The following program demonstrates the explicit use of `PreparedStatements`. It also benchmarks explicit `PreparedStatement` reuse against normal execution that prepares statements on each execution.

.. code-block:: python

    import time
    import fdb

    con = fdb.connect(dsn='localhost:employee',
        user='sysdba', password='masterkey'
      )

    cur = con.cursor()

    # Create supporting database entities:
    cur.execute("recreate table t (a int, b varchar(50))")
    con.commit()
    cur.execute("create unique index unique_t_a on t(a)")
    con.commit()

    # Explicitly prepare the insert statement:
    psIns = cur.prep("insert into t (a,b) values (?,?)")
    print 'psIns.sql: "%s"' % psIns.sql
    print 'psIns.statement_type == fdb.isc_info_sql_stmt_insert:', (
        psIns.statement_type == fdb.isc_info_sql_stmt_insert
      )
    print 'psIns.n_input_params: %d' % psIns.n_input_params
    print 'psIns.n_output_params: %d' % psIns.n_output_params
    print 'psIns.plan: %s' % psIns.plan

    print

    N = 50000
    iStart = 0

    # The client programmer uses a PreparedStatement explicitly:
    startTime = time.time()
    for i in xrange(iStart, iStart + N):
        cur.execute(psIns, (i, str(i)))
    print (
        'With explicit prepared statement, performed'
        '\n  %0.2f insertions per second.' % (N / (time.time() - startTime))
      )
    con.commit()

    iStart += N

    # A new SQL string containing the inputs is submitted every time. Also, in a
    # more complicated scenario where the end user supplied the string input
    # values, the program would risk SQL injection attacks:
    startTime = time.time()
    for i in xrange(iStart, iStart + N):
        cur.execute("insert into t (a,b) values (%d,'%s')" % (i, str(i)))
    print (
        'When unable to reuse prepared statement, performed'
        '\n  %0.2f insertions per second.' % (N / (time.time() - startTime))
      )
    con.commit()

    # Prepare a SELECT statement and examine its properties. The optimizer's plan
    # should use the unique index that we created at the beginning of this program.
    print
    psSel = cur.prep("select * from t where a = ?")
    print 'psSel.sql: "%s"' % psSel.sql
    print 'psSel.statement_type == fdb.isc_info_sql_stmt_select:', (
        psSel.statement_type == fdb.isc_info_sql_stmt_select
      )
    print 'psSel.n_input_params: %d' % psSel.n_input_params
    print 'psSel.n_output_params: %d' % psSel.n_output_params
    print 'psSel.plan: %s' % psSel.plan

    # The current implementation does not allow PreparedStatements to be prepared
    # on one Cursor and executed on another:
    print
    print 'Note that PreparedStatements are not transferrable from one cursor to another:'
    cur2 = con.cursor()
    cur2.execute(psSel)

Sample output::

   psIns.sql: "insert into t (a,b) values (?,?)"
   psIns.statement_type == fdb.isc_info_sql_stmt_insert: True
   psIns.n_input_params: 2
   psIns.n_output_params: 0
   psIns.plan: None

   With explicit prepared statement, performed
     4276.00 insertions per second.
   When unable to reuse prepared statement, performed
     2037.70 insertions per second.

   psSel.sql: "select * from t where a = ?"
   psSel.statement_type == fdb.isc_info_sql_stmt_select: True
   psSel.n_input_params: 1
   psSel.n_output_params: 2
   psSel.plan: PLAN (T INDEX (UNIQUE_T_A))

   Note that PreparedStatements are not transferrable from one cursor to another:
   Traceback (most recent call last):
     File "pstest.py", line 85, in <module>
       cur2.execute(psSel)
     File "/home/job/python/envs/pyfirebird/fdb/fdb/fbcore.py", line 2623, in execute
       raise ValueError("PreparedStatement was created by different Cursor.")
   ValueError: PreparedStatement was created by different Cursor.

As you can see, the version that prevents the reuse of prepared statements is about two times slower – *for a trivial statement*. In a real application, SQL statements are likely to be far more complicated, so the speed advantage of using prepared statements would only increase.

.. index::
   pair: Cursor; named

Named Cursors
-------------

To allow the Python programmer to perform scrolling `UPDATE` or `DELETE` via the `“SELECT ... FOR UPDATE”` syntax, FDB provides the read/write property :attr:`Cursor.name`.

**Example Program:**

.. code-block:: python

    import fdb

    con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass')
    curScroll = con.cursor()
    curUpdate = con.cursor()

    curScroll.execute("select city from addresses for update")
    curScroll.name = 'city_scroller'
    update = "update addresses set city=? where current of " + curScroll.name

    for (city,) in curScroll:
        city = ... # make some changes to city
        curUpdate.execute( update, (city,) )

    con.commit()

.. index::
   pair: Stored procedure; execution

Working with stored procedures
------------------------------

Firebird stored procedures can have `input` parameters and/or `output` parameters. Some databases support `input/output` parameters, where the same parameter is used for both input and output; Firebird does not support this.

It is important to distinguish between procedures that `return a result set` and procedures that `populate and return their output parameters` exactly once. Conceptually, the latter “return their output parameters” like a Python function, whereas the former “yield result rows” like a Python generator.

Firebird’s `server-side` procedural SQL syntax makes no such distinction, but client-side SQL code (and C API code) must. A result set is retrieved from a stored procedure by `SELECT`-ing from the procedure, whereas output parameters are retrieved with an `EXECUTE PROCEDURE` statement.

To **retrieve a result set** from a stored procedure with FDB, use code such as this:

..
code-block:: python cur.execute("select output1, output2 from the_proc(?, ?)", (input1, input2)) # Ordinary fetch code here, such as: for row in cur: ... # process row con.commit() # If the procedure had any side effects, commit them. To **execute** a stored procedure and **access its output parameters**, use code such as this: .. code-block:: python cur.callproc("the_proc", (input1, input2)) # If there are output parameters, retrieve them as though they were the # first row of a result set. For example: outputParams = cur.fetchone() con.commit() # If the procedure had any side effects, commit them. This latter is not very elegant; it would be preferable to access the procedure’s output parameters as the return value of :meth:`Cursor.callproc`. The Python DB API specification requires the current behavior, however. .. index:: pair: Data; conversion Data handling and conversions ============================= .. index:: pair: parameter; conversion Implicit Conversion of Input Parameters from Strings ---------------------------------------------------- The database engine treats most SQL data types in a weakly typed fashion: the engine may attempt to convert the raw value to a different type, as appropriate for the current context. For instance, the SQL expressions `123` (integer) and `‘123’` (string) are treated equivalently when the value is to be inserted into an `integer` field; the same applies when `‘123’` and `123` are to be inserted into a `varchar` field. This weak typing model is quite unlike Python’s dynamic yet strong typing. Although weak typing is regarded with suspicion by most experienced Python programmers, the database engine is in certain situations so aggressive about its typing model that FDB must compromise in order to remain an elegant means of programming the database engine. An example is the handling of “magic values” for date and time fields. The database engine interprets certain string values such as `‘yesterday’` and `‘now’` as having special meaning in a date/time context. If FDB did not accept strings as the values of parameters destined for storage in date/time fields, the resulting code would be awkward. Consider the difference between the two Python snippets below, which insert a row containing an integer and a timestamp into a table defined with the following DDL statement: .. code-block:: sql create table test_table (i integer, t timestamp) .. code-block:: python i = 1 t = 'now' sqlWithMagicValues = "insert into test_table (i, t) values (?, '%s')" % t cur.execute( sqlWithMagicValues, (i,) ) .. code-block:: python i = 1 t = 'now' cur.execute( "insert into test_table (i, t) values (?, ?)", (i, t) ) If FDB did not support weak parameter typing, string parameters that the database engine is to interpret as “magic values” would have to be rolled into the SQL statement in a separate operation from the binding of the rest of the parameters, as in the first Python snippet above. Implicit conversion of parameter values from strings allows the consistency evident in the second snippet, which is both more readable and more general. It should be noted that FDB does not perform the conversion from string itself. Instead, it passes that responsibility to the database engine by changing the parameter metadata structure dynamically at the last moment, then restoring the original state of the metadata structure after the database engine has performed the conversion. 
A secondary benefit is that when one uses FDB to import large amounts of data
from flat files into the database, the incoming values need not necessarily be
converted to their proper Python types before being passed to the database
engine. Eliminating this intermediate step may accelerate the import process
considerably, although other factors such as the chosen connection protocol and
the deactivation of indexes during the import are more consequential. For bulk
import tasks, the database engine’s external tables also deserve consideration.
External tables can be used to pull semi-structured data from flat files directly
into the relational database without the intervention of an ad hoc conversion
program.

.. index::
   pair: unicode; conversion

Automatic conversion from/to unicode
------------------------------------

In Firebird, every `CHAR`, `VARCHAR` or textual `BLOB` field can (or, better:
must) have a `character set` assigned. While it's possible to define a single
character set for the whole database, it's also possible to define a different
character set for each textual field. This information is used to correctly store
the bytes that make up the character string, and together with collation
information (which defines the sort ordering and uppercase conversions for a
string) is vital for correct data manipulation, including automatic
transliteration between character sets when necessary.

.. important::

   Because data also flow between server and client application, it's vital that
   the client sends data encoded only in the character set(s) that the server
   expects. While it's possible to leave this responsibility completely to the
   client application, it's better when client and server settle on a single
   character set to use for communication, especially when the database operates
   with multiple character sets, or uses a character set that is not `native` for
   the client application.

   The character set for communication is specified using the `charset` parameter
   in the :func:`connect` call.

When a `connection charset` is defined, all textual data returned from the server
are encoded in this charset, and the client application must ensure that all
textual data sent to the server are encoded only in this charset as well.

FDB helps with the client side of this character set bargain by automatically
converting `unicode` strings into `bytes/strings` encoded in the connection
character set, and vice versa. However, developers are still responsible for
ensuring that `non-unicode` strings passed to the server are in the correct
encoding (FDB makes no assumption about the encoding of non-unicode strings, so
it can't recode them to the connection charset).

.. important::

   If the `connection charset` is NOT defined at all, or the `NONE` charset is
   specified, FDB uses :func:`locale.getpreferredencoding` to determine the
   encoding for conversions from/to `unicode`.

.. important::

   There is one exception to automatic conversion: when the character set OCTETS
   is defined for a data column. Values assigned to OCTETS columns are always
   passed `as is`, because they're basically binary streams. This has specific
   implications regarding the Python version you use. Python 2.x `native strings`
   are `bytes`, suitable for such binary streams, but Python 3 native strings are
   `unicode`, and you would probably want to use the `bytes` type instead.
   However, FDB doesn't check the value type in this case at all, so you'll not
   be warned if you make a mistake and pass `unicode` to an OCTETS column (unless
   you pass more bytes than the column may hold, or you intend to store unicode
   that way).

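A minimal sketch of the conversion behaviour described above (the `unicode_test`
table is created just for this illustration; any writable database would do):

.. code-block:: python

   import fdb

   # Settle on UTF8 as the single character set for communication:
   con = fdb.connect(dsn='localhost:employee', user='sysdba',
                     password='masterkey', charset='UTF8')
   cur = con.cursor()
   cur.execute("recreate table unicode_test (c varchar(50) character set UTF8)")
   con.commit()

   # The unicode value is encoded to the connection charset (UTF8) by FDB
   # before it's sent to the server:
   cur.execute("insert into unicode_test values (?)", (u'Gr\xfc\xdfe',))

   # Textual data coming back are decoded from the connection charset
   # to unicode automatically:
   cur.execute("select c from unicode_test")
   print repr(cur.fetchone()[0])   # u'Gr\xfc\xdfe' on Python 2

   con.rollback()
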
Rules for automatic conversion depend on the Python version you use:

* Native Python 2.x `strings` are passed to the server as is, and developers must
  explicitly use `unicode` strings to take advantage of automatic conversion.
  String values coming from the server are converted to `unicode` **only**:

  * for data stored in the database (i.e. not for string values returned by
    Firebird Service and `info` calls etc.).
  * when a `connection charset` is specified.

* Native Python 3 strings are `unicode`, so conversion is fully automatic in both
  directions for all textual data, i.e. including string values returned by
  Firebird Service and `info` calls etc. When a `connection charset` is not
  specified, FDB uses :func:`locale.getpreferredencoding` to determine the
  encoding for conversions from/to `unicode`.

.. tip::

   Except for legacy databases that don't have a `character set` defined,
   **always** define the character set for your databases and specify a
   `connection charset`. It will make your life much easier.

.. index:: BLOB
   pair: Data; BLOB
   pair: BLOB; materialized
   pair: BLOB; stream

.. _working_with_blobs:

Working with BLOBs
------------------

FDB uses two types of BLOB values:

* **Materialized** BLOB values are Python strings. This is the **default** type.
* **Streamed** BLOB values are `file-like` objects.

Materialized BLOBs are easy to work with, but are not suitable for:

* **deferred loading** of BLOBs. They're called `materialized` because they're
  always fetched from the server as part of the row fetch. Fetching a BLOB value
  means separate API calls (and network roundtrips), which may slow down your
  application considerably.
* **large values**, as they are always stored in memory in full size.

These drawbacks are addressed by `stream` BLOBs. Using BLOBs in `stream` mode is
easy:

* For **input** values, simply use a `parametrized statement` and pass any
  `file-like` object in place of the BLOB parameter. The `file-like` object must
  implement only the :meth:`~file.read` method, as no other method is used.
* For **output** values, you have to call the :meth:`Cursor.set_stream_blob` (or
  :meth:`PreparedStatement.set_stream_blob`) method with a specification of the
  column name(s) that should be returned as `file-like` objects. FDB then returns
  a :class:`BlobReader` instance instead of a string in place of the returned
  BLOB value for these column(s).

.. important::

   **Before FDB version 1.8**, loading a materialized blob with multiple segments
   (i.e. larger than 64K) failed with an error (SQLCODE: 101 - segment buffer
   length shorter than expected). This was an artefact of backward compatibility
   with KInterbasDB that served as a safeguard against exhausting your memory
   with very large materialized blobs.

   **Since FDB version 1.8**, this memory exhaustion safeguard was enhanced in a
   more convenient (but backward incompatible) way. New methods
   :meth:`PreparedStatement.set_stream_blob_treshold()` and
   :meth:`Cursor.set_stream_blob_treshold()` were introduced to control the
   maximum size of materialized blobs. When a particular blob value exceeds this
   threshold, an instance of :class:`BlobReader` is returned instead of a string
   value, so your application has to be prepared to handle BLOBs in both
   incarnations. A zero value effectively forces all blobs to be returned as
   stream blobs. A negative value means no size limit for materialized blobs (use
   at your own risk). **The default threshold value is 64K.**

   The blob size threshold has effect only on materialized blob columns, i.e.
   columns not explicitly requested to be returned as stream blobs via
   :meth:`PreparedStatement.set_stream_blob` (such columns are always returned as
   stream blobs).

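A minimal sketch of the threshold behaviour described above (the table is created
only for this illustration; note that `set_stream_blob_treshold` follows the
actual FDB method spelling):

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')
   cur = con.cursor()
   cur.execute("recreate table blob_treshold_test (b blob)")
   con.commit()
   cur.execute("insert into blob_treshold_test values (?)", ('A small blob value',))
   con.commit()

   # A zero threshold forces even small materialized blobs to come back
   # as BlobReader instances:
   cur.set_stream_blob_treshold(0)
   cur.execute("select b from blob_treshold_test")
   value = cur.fetchone()[0]
   print isinstance(value, fdb.BlobReader)   # True
   value.close()
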
The :class:`BlobReader` instance is bound to the particular BLOB value returned
by the server, so its lifetime is limited. The actual BLOB value is not opened
initially, so no additional API calls to the server are made if you decide to
ignore the value completely. You also don't need to open the BLOB value
explicitly, as the BLOB is opened automatically on the first call to
:meth:`~BlobReader.next`, :meth:`~BlobReader.read`, :meth:`~BlobReader.readline`,
:meth:`~BlobReader.readlines` or :meth:`~BlobReader.seek`. However, it's good
practice to :meth:`~BlobReader.close` the reader once you're finished reading,
as it's likely that Python's garbage collector would call the `__del__` method
too late, when the fetch context is already gone, and closing the reader would
then cause an error.

.. warning::

   If the BLOB was NOT CREATED as a `stream` BLOB, calling the
   :meth:`BlobReader.seek` method will raise a :exc:`DatabaseError` exception.
   **This constraint is set by Firebird.**

.. important::

   When working with BLOB values, always have memory efficiency in mind,
   especially when you're processing huge quantities of rows with BLOB values at
   once. Materialized BLOB values may exhaust your memory quickly, but using
   stream BLOBs may have an impact on performance too, as a new `BlobReader`
   instance is created for each value fetched.

**Example program:**

.. code-block:: python

   import os.path
   from cStringIO import StringIO

   import fdb

   con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')

   cur = con.cursor()
   cur.execute("recreate table blob_test (a blob)")
   con.commit()

   # --- Materialized mode (str objects for both input and output) ---
   # Insertion:
   cur.execute("insert into blob_test values (?)", ('abcdef',))
   cur.execute("insert into blob_test values (?)", ('ghijklmnop',))
   # Retrieval:
   cur.execute("select * from blob_test")
   print 'Materialized retrieval (as str):'
   print cur.fetchall()

   cur.execute("delete from blob_test")

   # --- Streaming mode (file-like objects for input; fdb.BlobReader objects for output) ---
   # Insertion:
   cur.execute("insert into blob_test values (?)", (StringIO('abcdef'),))
   cur.execute("insert into blob_test values (?)", (StringIO('ghijklmnop'),))

   f = file(os.path.abspath(__file__), 'rb')
   cur.execute("insert into blob_test values (?)", (f,))
   f.close()

   # Retrieval using the "file-like" methods of BlobReader:
   cur.execute("select * from blob_test")
   cur.set_stream_blob('A') # Note the capital letter

   readerA = cur.fetchone()[0]

   print '\nStreaming retrieval (via fdb.BlobReader):'

   # Python "file-like" interface:
   print 'readerA.mode: "%s"' % readerA.mode
   print 'readerA.closed: %s' % readerA.closed
   print 'readerA.tell(): %d' % readerA.tell()
   print 'readerA.read(2): "%s"' % readerA.read(2)
   print 'readerA.tell(): %d' % readerA.tell()
   print 'readerA.read(): "%s"' % readerA.read()
   print 'readerA.tell(): %d' % readerA.tell()
   print 'readerA.read(): "%s"' % readerA.read()
   readerA.close()
   print 'readerA.closed: %s' % readerA.closed

Output::

   Materialized retrieval (as str):
   [('abcdef',), ('ghijklmnop',)]

   Streaming retrieval (via fdb.BlobReader):
   readerA.mode: "rb"
   readerA.closed: False
   readerA.tell(): 0
   readerA.read(2): "ab"
   readerA.tell(): 2
   readerA.read(): "cdef"
   readerA.tell(): 6
   readerA.read(): ""
   readerA.closed: True

.. index:: ARRAY
   pair: Data; ARRAY

Firebird ARRAY type
-------------------

FDB supports the Firebird ARRAY data type.
ARRAY values are represented as Python lists. On input, the Python sequence (list
or tuple) must be nested appropriately if the array field is multi-dimensional,
and the incoming sequence must not fall short of its maximum possible length (it
will not be “padded” implicitly – see below). On output, the lists will be nested
if the database array has multiple dimensions.

.. note::

   Database arrays have no place in a purely relational data model, which
   requires that data values be atomized (that is, every value stored in the
   database must be reduced to elementary, non-decomposable parts). The Firebird
   implementation of database arrays, like that of most relational database
   engines that support this data type, is fraught with limitations.

   Database arrays are of fixed size, with a predeclared number of dimensions
   (max. 16) and number of elements per dimension. Individual array elements
   cannot be set to NULL / None, so the mapping between Python lists (which have
   dynamic length and are therefore not normally “padded” with dummy values) and
   non-trivial database arrays is clumsy.

   Stored procedures cannot have array parameters.

   Finally, many interface libraries, GUIs, and even the isql command line
   utility do not support database arrays.

   In general, it is preferable to avoid using database arrays unless you have a
   compelling reason.

**Example:**

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')
   >>> cur = con.cursor()
   >>> cur.execute("select LANGUAGE_REQ from job where job_code='Eng' and job_grade=3 and job_country='Japan'")
   >>> cur.fetchone()
   (['Japanese\n', 'Mandarin\n', 'English\n', '\n', '\n'],)

**Example program:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:/temp/test.db', user='sysdba', password='pass')

   con.execute_immediate("recreate table array_table (a int[3,4])")
   con.commit()

   cur = con.cursor()

   arrayIn = [
       [1, 2, 3, 4],
       [5, 6, 7, 8],
       [9,10,11,12]
     ]

   print 'arrayIn:  %s' % arrayIn
   cur.execute("insert into array_table values (?)", (arrayIn,))

   cur.execute("select a from array_table")
   arrayOut = cur.fetchone()[0]
   print 'arrayOut: %s' % arrayOut

   con.commit()

Output::

   arrayIn:  [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
   arrayOut: [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]

.. index::
   pair: Transaction; management

Transaction management
======================

For the sake of simplicity, FDB lets the Python programmer ignore transaction
management to the greatest extent allowed by the Python Database API
Specification 2.0. The specification says, “if the database supports an
auto-commit feature, this must be initially off”. At a minimum, therefore, it is
necessary to call the commit method of the connection in order to persist any
changes made to the database.

Remember that because of ACID, every data manipulation operation in the Firebird
database engine takes place in the context of a transaction, including operations
that are conceptually “read-only”, such as a typical SELECT. The client
programmer of FDB establishes a transaction implicitly by using any SQL execution
method, such as :meth:`~Connection.execute_immediate`, :meth:`Cursor.execute()`,
or :meth:`Cursor.callproc`.

Although FDB allows the programmer to pay little attention to transactions, it
also exposes the full complement of the database engine’s advanced transaction
control features: `transaction parameters`_, `retaining transactions`_,
`savepoints`_, and `distributed transactions`_.

Basics
------

When it comes to transactions, the Python Database API 2.0 specifies that a
:class:`Connection` object has to respond to the following methods:

:meth:`Connection.commit()`
   Commit any pending transaction to the database. Note that if the database
   supports an auto-commit feature, this must be initially off. An interface
   method may be provided to turn it back on.

:meth:`Connection.rollback()` (optional)
   In case a database does provide transactions, this method causes the database
   to roll back to the start of any pending transaction. **Closing a connection
   without committing the changes first will cause an implicit rollback to be
   performed.**

In addition to the implicit transaction initiation required by the Python
Database API, FDB allows the programmer to start transactions explicitly via the
:meth:`Connection.begin` method. The :meth:`Connection.savepoint` method was also
added to provide support for `Firebird SAVEPOINTs`_.

Python Database API 2.0 was created with the assumption that a connection can
support only one transaction at a time. However, Firebird can support multiple
independent transactions that run simultaneously within a single connection /
attachment to the database. This feature is very important, as applications may
require multiple transactions opened simultaneously to perform various tasks,
which would otherwise require opening multiple connections and thus consume more
resources than necessary.

FDB surfaces this Firebird feature by separating transaction management out from
:class:`Connection` into separate :class:`Transaction` objects. To comply with
Python DB API 2.0 requirements, the `Connection` object uses one `Transaction`
instance as the :attr:`main transaction <Connection.main_transaction>`, and
delegates :meth:`~Connection.begin`, :meth:`~Connection.savepoint`,
:meth:`~Connection.commit`, :meth:`~Connection.rollback`,
:meth:`~Connection.trans_info` and :meth:`~Connection.transaction_info` calls
to it.

.. seealso::

   More about using multiple transactions with the same connection in a separate
   :ref:`section <multiple_transactions>`.

**Example:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')
   cur = con.cursor()

   # Most minimalistic transaction management -> implicit start, only commit() and rollback()
   # ========================================================================================
   #
   # Transaction is started implicitly
   cur.execute("insert into country values ('Oz','Crowns')")
   con.commit() # commits active transaction
   # Again, transaction is started implicitly
   cur.execute("insert into country values ('Barsoom','XXX')")
   con.rollback() # rolls back active transaction
   cur.execute("insert into country values ('Pellucidar','Shells')")
   # This will roll back the transaction
   # because Python DB API 2.0 requires that closing connection
   # with pending transaction must cause an implicit rollback
   con.close()

.. index::
   pair: Transaction; auto-commit

Auto-commit
-----------

FDB doesn't support the `auto-commit` feature directly, but developers may
achieve a similar result using `explicit` transaction starts, taking advantage of
:attr:`~Transaction.default_action` and its default value (`commit`).

**Example:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')
   cur = con.cursor()

   con.begin()
   cur.execute("insert into country values ('Oz','Crowns')")
   con.begin() # commits active transaction and starts new one
   cur.execute("insert into country values ('Barsoom','XXX')")
   con.begin() # commits active transaction and starts new one
   cur.execute("insert into country values ('Pellucidar','Shells')")
   # However, commit is required before connection is closed,
   # because Python DB API 2.0 requires that closing connection
   # with pending transaction must cause an implicit rollback
   con.commit()
   con.close()

.. index::
   pair: Transaction; parameters

Transaction parameters
----------------------

The database engine offers the client programmer an optional facility called
`transaction parameter buffers` (TPBs) for tweaking the operating characteristics
of the transactions they initiate. These include characteristics such as whether
the transaction has read and write access to tables, or read-only access, and
whether or not other simultaneously active transactions can share table access
with the transaction.

Connections have a :attr:`~Connection.default_tpb` attribute that can be changed
to set the default TPB for all transactions subsequently started on the
connection. Alternatively, if the programmer only wants to set the TPB for a
single transaction, they can start a transaction explicitly via the
:meth:`~Connection.begin()` method and pass a TPB for that single transaction.

For details about TPB construction, see the `Firebird API documentation`_. In
particular, the `ibase.h` supplied with Firebird contains all possible TPB
elements – single bytes that the C API defines as constants whose names begin
with `isc_tpb_`. FDB makes all of those TPB constants available (under the same
names) as module-level constants.

A transaction parameter buffer is handled in C as a character array; FDB requires
that TPBs be constructed as Python `strings` (or `bytes` for Python 3). Since the
constants in the `fdb.isc_tpb_*` family are numbers, they can't be simply
concatenated to create a TPB, but you may use the utility function
**fdb.bs(byte_array)** that accepts a sequence of numbers and returns a `string`
(P2) or `bytes` (P3). For example, the next call returns a TPB for a typical
`READ COMMITTED` transaction:

.. code-block:: python

   from fdb import *

   TPB = bs([isc_tpb_version3,
             isc_tpb_write,
             isc_tpb_wait,
             isc_tpb_read_committed,
             isc_tpb_rec_version])

.. warning::

   This method requires good knowledge of the `tpb` block structure and the
   proper order of its various parameters, as the Firebird engine will raise an
   error when a badly structured block is used. The definition of `table
   reservation` parameters is also uncomfortable, as you’ll need to mix binary
   codes with table names passed as Pascal strings (characters preceded by the
   string length).

FDB provides several predefined TPBs for convenience:

* **ISOLATION_LEVEL_READ_COMMITED**

  Read/Write READ COMMITTED with `record version` and WAIT option. The isolation
  level with the greatest concurrent throughput. This is the **default** TPB.

  .. tip::

     This isolation level is optimal for transactions that write data and don't
     require a stable snapshot of the database for their operations (i.e. most
     operations are limited to individual rows).

* **ISOLATION_LEVEL_READ_COMMITED_LEGACY**

  Read/Write READ COMMITTED with `NO record version` and WAIT option.

  .. warning::

     This isolation level emulates RDBMS that use locks instead of multiversion
     concurrency control (MVCC).
     It's not recommended to use it at all, except for legacy applications lazily
     ported from such RDBMS to Firebird.

* **ISOLATION_LEVEL_READ_COMMITED_RO**

  Like `ISOLATION_LEVEL_READ_COMMITED`, but **Read Only**.

  .. tip::

     Firebird treats these transactions as `pre-committed`, so they are the best
     option **for long running transactions that only read data**. Internally,
     FDB uses such a transaction to read metadata from the connected database.
     This internal transaction is also available to developers for convenience as
     :attr:`Connection.query_transaction`.

* **ISOLATION_LEVEL_REPEATABLE_READ** or **ISOLATION_LEVEL_SNAPSHOT**

  Read/Write SNAPSHOT (concurrency) with WAIT option.

  .. tip::

     This isolation level is necessary for transactions that process data in
     bulk, like reporting, recalculations etc.

* **ISOLATION_LEVEL_SERIALIZABLE** or **ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY**

  Read/Write SNAPSHOT TABLE STABILITY (consistency) with WAIT option. Like
  REPEATABLE_READ/SNAPSHOT, but locks whole tables for writes from other
  transactions. The isolation level with the lowest concurrent throughput.

  .. warning::

     Because tables are locked for `protected write` (i.e. no other transaction
     can write until the lock is released) **at the time of first access**, there
     is a great risk of `deadlock` between transactions.

  .. tip::

     To prevent deadlocks and increase concurrent throughput it's recommended to
     use custom TPBs with `fine-grained table access reservation`.

**Example:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')
   cur = con.cursor()

   # Start transaction with default_tpb (ISOLATION_LEVEL_READ_COMMITED)
   con.begin()
   cur.execute('select * from JOB')
   con.commit()

   # Start using transactions in REPEATABLE READ (SNAPSHOT) isolation
   con.default_tpb = fdb.ISOLATION_LEVEL_REPEATABLE_READ
   con.begin()
   cur.execute('select * from JOB')
   con.commit()

   # Start THIS transaction as R/O READ COMMITTED
   con.begin(fdb.ISOLATION_LEVEL_READ_COMMITED_RO)
   cur.execute('select * from JOB')
   con.commit()

For cases when the predefined transaction parameter blocks are not suitable for
your needs, FDB offers the utility class :class:`TPB` for convenient and safe
construction of custom `tpb` blocks. Simply create an instance of this class, set
its member attributes to the required values, and use either the binary tpb block
:meth:`rendered <TPB.render>` from it, or the `TPB` instance itself, to set
`default_tpb` or as a parameter to `begin()`.

**Example:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')

   # Use TPB to construct valid transaction parameter block
   # from the fdb.isc_tpb_* family.
   customTPB = fdb.TPB()
   customTPB.isolation_level = fdb.isc_tpb_consistency # SERIALIZABLE
   customTPB.table_reservation["MY_TABLE"] = (fdb.isc_tpb_protected, fdb.isc_tpb_lock_write)

   # Explicitly start a transaction with the custom TPB:
   con.begin(tpb=customTPB)

   # For frequent use, it's better to use an already assembled version of the TPB
   customTPB = fdb.TPB()
   customTPB.access_mode = fdb.isc_tpb_read # read only
   customTPB.isolation_level = fdb.isc_tpb_concurrency # SNAPSHOT
   customTPB = customTPB.render() # Create valid block according to current values of member attributes.

   for x in range(1000):
       con.begin(tpb=customTPB)

If you want to build only the `table reservation` part of a `tpb` (for example to
add to various custom built parameter blocks), you can use the class
:class:`TableReservation` instead of `TPB`.

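As a hedged illustration only (it assumes :class:`TableReservation` supports the
same dict-style assignment demonstrated for `TPB.table_reservation` above, and a
`render()` method analogous to `TPB.render()`):

.. code-block:: python

   import fdb

   # Speculative sketch: TableReservation used standalone to produce just the
   # table reservation fragment of a TPB. The dict-style assignment mirrors
   # TPB.table_reservation above; render() is assumed to behave like TPB.render().
   reservation = fdb.TableReservation()
   reservation["COUNTRY"] = (fdb.isc_tpb_protected, fdb.isc_tpb_lock_write)
   reservation["JOB"] = (fdb.isc_tpb_shared, fdb.isc_tpb_lock_read)

   fragment = reservation.render()   # binary fragment for a custom built TPB
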
.. index::
   pair: Transaction; information about

Getting information about transaction
-------------------------------------

The :class:`Transaction` object exposes two methods that return information about
the currently managed active transaction (the same methods are also exposed by
the :class:`Connection` object for the :attr:`~Connection.main_transaction`):

:meth:`~Transaction.transaction_info` is a very thin wrapper around the function
`isc_transaction_info()`. This method does not attempt to interpret its results
except with regard to whether they are a string or an integer.

A more convenient way to access the same functionality is via the
:meth:`~Transaction.trans_info` method, which is a high-level convenience wrapper
around the `transaction_info` method that parses the output of `transaction_info`
into Python-friendly objects instead of returning raw binary buffers in the case
of complex result types.

**Example program:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee',user='sysdba', password='masterkey')

   # Start transaction, so we can get information about it
   con.begin()

   info = con.trans_info([fdb.isc_info_tra_id, fdb.isc_info_tra_oldest_interesting,
                          fdb.isc_info_tra_oldest_snapshot, fdb.isc_info_tra_oldest_active,
                          fdb.isc_info_tra_isolation, fdb.isc_info_tra_access,
                          fdb.isc_info_tra_lock_timeout])
   print info
   print "TransactionID:", info[fdb.isc_info_tra_id]
   print "Oldest Interesting (OIT):", info[fdb.isc_info_tra_oldest_interesting]
   print "Oldest Snapshot:", info[fdb.isc_info_tra_oldest_snapshot]
   print "Oldest Active (OAT):", info[fdb.isc_info_tra_oldest_active]
   print "Isolation Level:", info[fdb.isc_info_tra_isolation]
   print "Access Mode:", info[fdb.isc_info_tra_access]
   print "Lock Timeout:", info[fdb.isc_info_tra_lock_timeout]

Output::

   {4: 459, 5: 430, 6: 459, 7: 459, 8: (3, 1), 9: 1, 10: -1}
   TransactionID: 459
   Oldest Interesting (OIT): 430
   Oldest Snapshot: 459
   Oldest Active (OAT): 459
   Isolation Level: (3, 1)
   Access Mode: 1
   Lock Timeout: -1

.. note::

   Isolation level info values are available as the FDB constants
   `isc_info_tra_consistency`, `isc_info_tra_concurrency` and
   `isc_info_tra_read_committed`. For `read committed`, a tuple of two values is
   returned instead of a single value, where the second value is the record
   version flag `isc_info_tra_no_rec_version` or `isc_info_tra_rec_version`.
   Access mode values are available as the FDB constants `isc_info_tra_readonly`
   and `isc_info_tra_readwrite`.

.. index::
   pair: Transaction; retaining

Retaining transactions
----------------------

The :meth:`~Transaction.commit` and :meth:`~Transaction.rollback` methods accept
an optional boolean parameter `retaining` (**default False**) to indicate whether
to recycle the transactional context of the transaction being resolved by the
method call. If retaining is `True`, the infrastructural support for the
transaction active at the time of the method call will be “retained” (efficiently
and transparently recycled) after the database server has committed or rolled
back the conceptual transaction.

.. important::

   In code that commits or rolls back frequently, “retaining” the transaction
   yields considerably better performance. However, retaining transactions must
   be used cautiously because they can interfere with the server’s ability to
   garbage collect old record versions. For details about this issue, read the
   “Garbage” section of `this document `_ by Ann Harrison.

For more information about retaining transactions, see the Firebird
documentation.

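A minimal sketch of the retaining pattern described above (the table is created
only for this illustration):

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')
   cur = con.cursor()
   cur.execute("recreate table log_table (entry varchar(20))")
   con.commit()

   for i in range(100):
       cur.execute("insert into log_table values (?)", ('entry %d' % i,))
       # Commit the row, but keep (recycle) the transactional context instead
       # of closing it and starting a new transaction for the next row:
       con.commit(retaining=True)

   con.commit()
   con.close()
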
.. index:: SAVEPOINT
   pair: Transaction; SAVEPOINT

Savepoints
----------

Savepoints are named, intermediate control points within an open transaction that
can later be rolled back to, without affecting the preceding work. Multiple
savepoints can exist within a single unresolved transaction, providing
“multi-level undo” functionality.

Although Firebird savepoints are fully supported from SQL alone via the
`SAVEPOINT ‘name’` and `ROLLBACK TO ‘name’` statements, FDB also exposes
savepoints at the Python API level for the sake of convenience. A call to the
:meth:`~Transaction.savepoint` method establishes a savepoint with the specified
`name`. To roll back to a specific savepoint, call the
:meth:`~Transaction.rollback` method and provide a value (the name of the
savepoint) for the optional `savepoint` parameter. If the savepoint parameter of
`rollback()` is not specified, the active transaction is cancelled in its
entirety, as required by the Python Database API Specification.

The following program demonstrates savepoint manipulation via the FDB API, rather
than raw SQL.

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='pass')
   cur = con.cursor()
   cur.execute("recreate table test_savepoints (a integer)")
   con.commit()

   print 'Before the first savepoint, the contents of the table are:'
   cur.execute("select * from test_savepoints")
   print ' ', cur.fetchall()

   cur.execute("insert into test_savepoints values (?)", [1])
   con.savepoint('A')
   print 'After savepoint A, the contents of the table are:'
   cur.execute("select * from test_savepoints")
   print ' ', cur.fetchall()

   cur.execute("insert into test_savepoints values (?)", [2])
   con.savepoint('B')
   print 'After savepoint B, the contents of the table are:'
   cur.execute("select * from test_savepoints")
   print ' ', cur.fetchall()

   cur.execute("insert into test_savepoints values (?)", [3])
   con.savepoint('C')
   print 'After savepoint C, the contents of the table are:'
   cur.execute("select * from test_savepoints")
   print ' ', cur.fetchall()

   con.rollback(savepoint='A')
   print 'After rolling back to savepoint A, the contents of the table are:'
   cur.execute("select * from test_savepoints")
   print ' ', cur.fetchall()

   con.rollback()
   print 'After rolling back entirely, the contents of the table are:'
   cur.execute("select * from test_savepoints")
   print ' ', cur.fetchall()

The output of the example program is shown below::

   Before the first savepoint, the contents of the table are:
     []
   After savepoint A, the contents of the table are:
     [(1,)]
   After savepoint B, the contents of the table are:
     [(1,), (2,)]
   After savepoint C, the contents of the table are:
     [(1,), (2,), (3,)]
   After rolling back to savepoint A, the contents of the table are:
     [(1,)]
   After rolling back entirely, the contents of the table are:
     []

.. index::
   pair: Transaction; multiple

.. _multiple_transactions:

Using multiple transactions with the same connection
----------------------------------------------------

To use additional transactions that can run simultaneously with the
:attr:`main transaction <Connection.main_transaction>` managed by `Connection`,
create a new :class:`Transaction` object by calling the :meth:`Connection.trans`
method. If you don't specify the optional `default_tpb` parameter, this new
`Transaction` inherits the :attr:`~Connection.default_tpb` from the `Connection`.
The physical transaction is not started when the `Transaction` instance is
created, but implicitly when the first SQL statement is executed, or explicitly
via a :meth:`Transaction.begin` call.
To execute statements in the context of this additional transaction, you have to
use `cursors` obtained directly from this `Transaction` instance by calling its
:meth:`~Transaction.cursor` method, or call the
:meth:`Transaction.execute_immediate` method.

**Example:**

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='pass')

   # Cursor for main_transaction context
   cur = con.cursor()
   # Create new READ ONLY READ COMMITTED transaction
   ro_transaction = con.trans(fdb.ISOLATION_LEVEL_READ_COMMITED_RO)
   # and cursor
   ro_cur = ro_transaction.cursor()

   cur.execute("insert into country values ('Oz','Crowns')")
   con.commit() # commits main transaction

   # Read data created by main transaction from second one
   ro_cur.execute("select * from COUNTRY where COUNTRY = 'Oz'")
   print ro_cur.fetchall()

   # Insert more data, but don't commit
   cur.execute("insert into country values ('Barsoom','XXX')")

   # Read data created by main transaction from second one
   ro_cur.execute("select * from COUNTRY where COUNTRY = 'Barsoom'")
   print ro_cur.fetchall()

.. index::
   pair: Transaction; distributed

.. _distributed_transactions:

Distributed Transactions
------------------------

Distributed transactions are transactions that span multiple databases. FDB
provides this Firebird feature through the :class:`ConnectionGroup` class.
Instances of this class act as managers for a :class:`Transaction` object that is
bound to multiple connections, and for the :class:`Cursor` instances bound to it
and to the connections participating in the group. That is, the distributed
transaction is fully independent from all other transactions, main or secondary,
of the member connections.

To assemble a group of connections, you can either pass a sequence of
:class:`Connection` instances to the :class:`ConnectionGroup` constructor, or add
connections later by calling the :meth:`ConnectionGroup.add` method.

Any :class:`Connection` can be a member of only one group; an attempt to add it
to another one raises an exception. Also, a `Connection` participating in a group
cannot be :meth:`closed <Connection.close>` before it's
:meth:`removed <ConnectionGroup.remove>` or the whole group is
:meth:`disbanded <ConnectionGroup.disband>`.

.. warning::

   Never add more than one connection to the same database to the same
   `ConnectionGroup`!

Similarly to :class:`Transaction`, distributed transactions are managed through
the :meth:`ConnectionGroup.begin`, :meth:`ConnectionGroup.savepoint`,
:meth:`ConnectionGroup.commit` and :meth:`ConnectionGroup.rollback` methods.
Additionally, `ConnectionGroup` exposes the method
:meth:`~ConnectionGroup.prepare` that explicitly initiates the first phase of the
`Two-Phase Commit Protocol`. Transaction parameters are defined similarly to
:class:`Transaction`, using :attr:`ConnectionGroup.default_tpb` or an optional
parameter to the :meth:`~ConnectionGroup.begin` call.

SQL statements that should belong to the context of the distributed transaction
are executed via :class:`Cursor` instances acquired through the
:meth:`ConnectionGroup.cursor` method, or by calling the
:meth:`ConnectionGroup.execute_immediate` method.

.. note::

   Because :class:`Cursor` instances can belong to only one :class:`Connection`,
   the :meth:`~ConnectionGroup.cursor` method has a mandatory parameter
   `connection`, to specify the member connection the cursor should belong to.

**Example program:**

.. code-block:: python

   import fdb

   # First database
   con1 = fdb.create_database("CREATE DATABASE 'testdb-1.fdb' USER 'SYSDBA' PASSWORD 'masterkey'")
   con1.execute_immediate("recreate table T (PK integer, C1 integer)")
   con1.commit()

   # Second database
   con2 = fdb.create_database("CREATE DATABASE 'testdb-2.fdb' USER 'SYSDBA' PASSWORD 'masterkey'")
   con2.execute_immediate("recreate table T (PK integer, C1 integer)")
   con2.commit()

   # Create connection group
   cg = fdb.ConnectionGroup((con1,con2))

   # Prepare Group cursors for each connection
   gc1 = cg.cursor(con1)
   gc2 = cg.cursor(con2)

   # Connection cursors to check content of databases
   q = 'select * from T order by pk'
   cc1 = con1.cursor()
   p1 = cc1.prep(q)
   cc2 = con2.cursor()
   p2 = cc2.prep(q)

   print "Distributed transaction: COMMIT"
   # ===============================
   gc1.execute('insert into t (pk) values (1)')
   gc2.execute('insert into t (pk) values (1)')
   cg.commit()
   # check it
   con1.commit()
   cc1.execute(p1)
   print 'db1:', cc1.fetchall()
   con2.commit()
   cc2.execute(p2)
   print 'db2:', cc2.fetchall()

   print "Distributed transaction: PREPARE + COMMIT"
   # =========================================
   gc1.execute('insert into t (pk) values (2)')
   gc2.execute('insert into t (pk) values (2)')
   cg.prepare()
   cg.commit()
   # check it
   con1.commit()
   cc1.execute(p1)
   print 'db1:', cc1.fetchall()
   con2.commit()
   cc2.execute(p2)
   print 'db2:', cc2.fetchall()

   print "Distributed transaction: SAVEPOINT + ROLLBACK to it"
   # ===================================================
   gc1.execute('insert into t (pk) values (3)')
   cg.savepoint('CG_SAVEPOINT')
   gc2.execute('insert into t (pk) values (3)')
   cg.rollback(savepoint='CG_SAVEPOINT')
   # check it - via group cursors, as transaction is still active
   gc1.execute(q)
   print 'db1:', gc1.fetchall()
   gc2.execute(q)
   print 'db2:', gc2.fetchall()

   print "Distributed transaction: ROLLBACK"
   # =================================
   cg.rollback()
   # check it
   con1.commit()
   cc1.execute(p1)
   print 'db1:', cc1.fetchall()
   con2.commit()
   cc2.execute(p2)
   print 'db2:', cc2.fetchall()

   print "Distributed transaction: EXECUTE_IMMEDIATE"
   # ==========================================
   cg.execute_immediate('insert into t (pk) values (3)')
   cg.commit()
   # check it
   con1.commit()
   cc1.execute(p1)
   print 'db1:', cc1.fetchall()
   con2.commit()
   cc2.execute(p2)
   print 'db2:', cc2.fetchall()

   # Finalize
   con1.drop_database()
   con1.close()
   con2.drop_database()
   con2.close()

Output::

   Distributed transaction: COMMIT
   db1: [(1, None)]
   db2: [(1, None)]
   Distributed transaction: PREPARE + COMMIT
   db1: [(1, None), (2, None)]
   db2: [(1, None), (2, None)]
   Distributed transaction: SAVEPOINT + ROLLBACK to it
   db1: [(1, None), (2, None), (3, None)]
   db2: [(1, None), (2, None)]
   Distributed transaction: ROLLBACK
   db1: [(1, None), (2, None)]
   db2: [(1, None), (2, None)]
   Distributed transaction: EXECUTE_IMMEDIATE
   db1: [(1, None), (2, None), (3, None)]
   db2: [(1, None), (2, None), (3, None)]

.. index::
   pair: Transaction; context manager

.. _transaction-context-manager:

Transaction Context Manager
---------------------------

FDB provides the context manager :class:`TransactionContext` that allows
automatic transaction management using the `with` statement. It can work with any
object that supports the `begin()`, `commit()` and `rollback()` methods, i.e.
:class:`Connection`, :class:`ConnectionGroup` or :class:`Transaction`. It starts
a transaction when the WITH block is entered, commits it if no exception occurs
within the block, and calls `rollback()` otherwise. Exceptions raised in the WITH
block are never suppressed.

Examples:

.. code-block:: python

   con = fdb.connect(dsn='employee',user='sysdba',password='masterkey')

   # Uses default main transaction
   with TransactionContext(con):
       cur = con.cursor()
       cur.execute("insert into T (PK,C1) values (1,'TXT')")

   # Uses separate transaction
   with TransactionContext(con.trans()) as tr:
       cur = tr.cursor()
       cur.execute("insert into T (PK,C1) values (2,'AAA')")

   # Uses connection group (distributed transaction)
   con2 = fdb.connect(dsn='remote:employee',user='sysdba',password='masterkey')
   cg = fdb.ConnectionGroup((con,con2))
   with TransactionContext(cg):
       cur1 = cg.cursor(con)
       cur2 = cg.cursor(con2)
       cur1.execute("insert into T (PK,C1) values (3,'Local')")
       cur2.execute("insert into T (PK,C1) values (3,'Remote')")

.. index::
   pair: Database; events

Database Events
===============

What they are
-------------

The Firebird engine features a distributed, interprocess communication mechanism
based on messages called `database events`. A database event is a message passed
from a trigger or stored procedure to an application to announce the occurrence
of a specified condition or action, usually a database change such as an
insertion, modification, or deletion of a record.

The Firebird event mechanism enables applications to respond to actions and
database changes made by other, concurrently running applications without the
need for those applications to communicate directly with one another, and without
incurring the expense of CPU time required for periodic polling to determine if
an event has occurred.

Why use them
------------

Anything that can be accomplished with database events can also be implemented
using other techniques, so why bother with events? Since you’ve chosen to write
database-centric programs in Python rather than assembly language, you probably
already know the answer to this question, but let’s illustrate.

A typical application for database events is the handling of administrative
messages. Suppose you have an administrative message database with a `messages`
table, into which various applications insert timestamped status reports. It may
be desirable to react to these messages in diverse ways, depending on the status
they indicate: to ignore them, to initiate the update of dependent databases upon
their arrival, to forward them by e-mail to a remote administrator, or even to
set off an alarm so that on-site administrators will know a problem has occurred.

It is undesirable to tightly couple the program whose status is being reported
(the `message producer`) to the program that handles the status reports (the
`message handler`). There are obvious losses of flexibility in doing so. For
example, the message producer may run on a separate machine from the
administrative message database and may lack access rights to the downstream
reporting facilities (e.g., network access to the SMTP server, in the case of
forwarded e-mail notifications). Additionally, the actions required to handle
status reports may themselves be time-consuming and error-prone, as in accessing
a remote network to transmit e-mail.

In the absence of database event support, the message handler would probably be
implemented via `polling`. Polling is simply the repetition of a check for a
condition at a specified interval. In this case, the message handler would check
in an infinite loop to see whether the most recent record in the `messages` table
was more recent than the last message it had handled. If so, it would handle the
fresh message(s); if not, it would go to sleep for a specified interval, then
loop.
The `polling-based` implementation of the message handler is fundamentally
flawed. Polling is a form of busy-wait_; the check for new messages is performed
at the specified interval, regardless of the actual activity level of the message
producers. If the polling interval is lengthy, messages might not be handled
within a reasonable time period after their arrival; if the polling interval is
brief, the message handler program (and there may be many such programs) will
waste a large amount of CPU time on unnecessary checks.

The database server is necessarily aware of the exact moment when a new message
arrives. Why not let the message handler program request that the database server
send it a notification when a new message arrives? The message handler can then
efficiently sleep until the moment its services are needed. Under this
`event-based` scheme, the message handler becomes aware of new messages at the
instant they arrive, yet it does not waste CPU time checking in vain for new
messages when there are none available.

How events are exposed
----------------------

#. Server Process ("An event just occurred!")

   To notify any interested listeners that a specific event has occurred, issue
   the `POST_EVENT` statement from a Stored Procedure or Trigger. The
   `POST_EVENT` statement has one parameter: the name of the event to post. In
   the preceding example of the administrative message database, `POST_EVENT`
   might be used from an `after insert` trigger on the `messages` table, like
   this:

   .. code-block:: sql

      create trigger trig_messages_handle_insert
        for messages
          after insert
      as
      begin
        POST_EVENT 'new_message';
      end

   .. note::

      The physical notification of the client process does not occur until the
      transaction in which the `POST_EVENT` took place is actually committed.
      Therefore, multiple events may *conceptually* occur before the client
      process is *physically* informed of even one occurrence. Furthermore, the
      database engine makes no guarantee that clients will be informed of events
      in the same groupings in which they conceptually occurred. If, within a
      single transaction, an event named `event_a` is posted once and an event
      named `event_b` is posted once, the client may receive those posts in
      separate "batches", despite the fact that they occurred in the same
      conceptual unit (a single transaction). This also applies to multiple
      occurrences of *the same* event within a single conceptual unit: the
      physical notifications may arrive at the client separately.

#. Client Process ("Send me a message when an event occurs.")

   .. note::

      If you don't care about the gory details of event notification, skip to
      the section that describes FDB's Python-level event handling API.

   The Firebird C client library offers two forms of event notification. The
   first form is *synchronous* notification, by way of the function
   :c:func:`isc_wait_for_event()`. This form is admirably simple for a C
   programmer to use, but is inappropriate as a basis for FDB's event support,
   chiefly because it's not sophisticated enough to serve as the basis for a
   comfortable Python-level API. The other form of event notification offered by
   the database client library is *asynchronous*, by way of the functions
   :c:func:`isc_que_events()` (note that the name of that function is
   misspelled), :c:func:`isc_cancel_events()`, and others. The details are as
   nasty as they are numerous, but the essence of using asynchronous notification
   from C is as follows:

   #. Call :c:func:`isc_event_block()` to create a formatted binary buffer that
      will tell the server which events the client wants to listen for.
   #. Call :c:func:`isc_que_events()` (passing the buffer created in the
      previous step) to inform the server that the client is ready to receive
      event notifications, and provide a callback that will be asynchronously
      invoked when one or more of the registered events occurs.
   #. [The thread that called :c:func:`isc_que_events()` to initiate event
      listening must now do something else.]
   #. When the callback is invoked (the database client library starts a thread
      dedicated to this purpose), it can use the :c:func:`isc_event_counts()`
      function to determine how many times each of the registered events has
      occurred since the last call to :c:func:`isc_event_counts()` (if any).
   #. [The callback thread should now "do its thing", which may include
      communicating with the thread that called :c:func:`isc_que_events()`.]
   #. When the callback thread is finished handling an event notification, it
      must call :c:func:`isc_que_events()` again in order to receive future
      notifications. Future notifications will invoke the callback again,
      effectively "looping" the callback thread back to Step 4.

API for Python developers
-------------------------

The FDB database event API is comprised of the following: the method
:meth:`Connection.event_conduit` and the class :class:`EventConduit`.

The :class:`EventConduit` class serves as a "conduit" through which database
event notifications flow into the Python program. It's not designed to be
instantiated directly by the Python programmer. Instead, use the
:meth:`Connection.event_conduit` method to create `EventConduit` instances.
`event_conduit` is a method of `Connection` rather than a module-level function
or a class constructor because the database engine deals with events in the
context of a particular database (after all, `POST_EVENT` must be issued by a
stored procedure or a trigger).

:meth:`Connection.event_conduit` takes a sequence of string event names as a
parameter, and returns an :class:`EventConduit` instance.

.. important::

   To start listening for events it's necessary (starting from FDB version
   1.4.2) to call the :meth:`EventConduit.begin` method or use EventConduit's
   context manager interface.

Immediately when the :meth:`~EventConduit.begin` method is called, the
EventConduit starts to accumulate, in its internal queue, notifications of any
events that occur, until the conduit is closed either explicitly (via the
:meth:`~EventConduit.close` method) or implicitly (via garbage collection).

Notifications about events are acquired through a call to the
:meth:`~EventConduit.wait` method, which blocks the calling thread until at least
one of the events occurs, or the specified `timeout` (if any) expires, and
returns `None` if the wait timed out, or a dictionary that maps
`event_name -> event_occurrence_count`.

.. important::

   `EventConduit` can act as a context manager that ensures execution of the
   :meth:`~EventConduit.begin` and :meth:`~EventConduit.close` methods. It's
   strongly advised to use the `EventConduit` with the `with` statement.

**Example:**

.. code-block:: python

   with connection.event_conduit( ('event_a', 'event_b') ) as conduit:
       events = conduit.wait()
       process_events(events)

If you want to drop the notifications accumulated so far by the conduit, call
the :meth:`EventConduit.flush` method.

**Example program:**

.. code-block:: python

   import fdb
   import threading
   import time

   # Prepare database
   con = fdb.create_database("CREATE DATABASE 'event_test.fdb' USER 'SYSDBA' PASSWORD 'masterkey'")
   con.execute_immediate("CREATE TABLE T (PK Integer, C1 Integer)")
   con.execute_immediate("""CREATE TRIGGER EVENTS_AU FOR T ACTIVE
   BEFORE UPDATE POSITION 0
   AS
   BEGIN
     if (old.C1 <> new.C1) then
       post_event 'c1_updated' ;
   END""")
   con.execute_immediate("""CREATE TRIGGER EVENTS_AI FOR T ACTIVE
   AFTER INSERT POSITION 0
   AS
   BEGIN
     if (new.c1 = 1) then
       post_event 'insert_1' ;
     else if (new.c1 = 2) then
       post_event 'insert_2' ;
     else if (new.c1 = 3) then
       post_event 'insert_3' ;
     else
       post_event 'insert_other' ;
   END""")
   con.commit()
   cur = con.cursor()

   # Utility function
   def send_events(command_list):
       for cmd in command_list:
           cur.execute(cmd)
       con.commit()

   print "One event"
   # =========
   timed_event = threading.Timer(3.0, send_events, args=[["insert into T (PK,C1) values (1,1)",]])
   events = con.event_conduit(['insert_1'])
   events.begin()
   timed_event.start()
   e = events.wait()
   events.close()
   print e

   print "Multiple events"
   # ===============
   cmds = ["insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",
           "insert into T (PK,C1) values (1,3)",
           "insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",]
   timed_event = threading.Timer(3.0, send_events, args=[cmds])
   events = con.event_conduit(['insert_1','insert_3'])
   events.begin()
   timed_event.start()
   e = events.wait()
   events.close()
   print e

   print "20 events"
   # =========
   cmds = ["insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",
           "insert into T (PK,C1) values (1,3)",
           "insert into T (PK,C1) values (1,1)",
           "insert into T (PK,C1) values (1,2)",]
   timed_event = threading.Timer(1.0, send_events, args=[cmds])
   events = con.event_conduit(['insert_1','A','B','C','D',
                               'E','F','G','H','I','J','K','L','M',
                               'N','O','P','Q','R','insert_3'])
   events.begin()
   timed_event.start()
   time.sleep(3)
   e = events.wait()
   events.close()
   print e

   print "Flush events"
   # ============
   timed_event = threading.Timer(3.0, send_events, args=[["insert into T (PK,C1) values (1,1)",]])
   events = con.event_conduit(['insert_1'])
   events.begin()
   send_events(["insert into T (PK,C1) values (1,1)",
                "insert into T (PK,C1) values (1,1)"])
   time.sleep(2)
   events.flush()
   timed_event.start()
   e = events.wait()
   events.close()
   print e

   # Finalize
   con.drop_database()
   con.close()

Output::

   One event
   {'insert_1': 1}
   Multiple events
   {'insert_3': 1, 'insert_1': 2}
   20 events
   {'A': 0, 'C': 0, 'B': 0, 'E': 0, 'D': 0, 'G': 0, 'insert_1': 2, 'I': 0,
   'H': 0, 'K': 0, 'J': 0, 'M': 0, 'L': 0, 'O': 0, 'N': 0, 'Q': 0, 'P': 0,
   'R': 0, 'insert_3': 1, 'F': 0}
   Flush events
   {'insert_1': 1}

.. currentmodule:: fdb.services

.. index:: Services
   pair: Services; working with

.. _working_with_services:

Working with Services
=====================

Database server maintenance tasks such as user management, load monitoring, and
database backup have traditionally been automated by scripting the command-line
tools :program:`gbak`, :program:`gfix`, :program:`gsec`, and :program:`gstat`.

The API presented to the client programmer by these utilities is inelegant
because they are, after all, command-line tools rather than native components of
the client language. To address this problem, Firebird has a facility called the
`Services API`, which exposes a uniform interface to the administrative
functionality of the traditional command-line tools.

The native Services API, though consistent, is much lower-level than a Pythonic
API.
If the native version were exposed directly, accomplishing a given task would
probably require more Python code than scripting the traditional command-line
tools. For this reason, FDB presents its own abstraction over the native API via
the :mod:`fdb.services` module.

.. index::
   pair: Services; connection

Services API Connections
------------------------

All Services API operations are performed in the context of a `connection` to a
specific database server, represented by the :class:`fdb.services.Connection`
class. Similarly to database connections, FDB provides the
:func:`~fdb.services.connect()` constructor function to create such connections.

This constructor has three keyword parameters:

:host: The network name of the computer on which the database server is running.
:user: The name of the database user under whose authority the maintenance tasks
   are to be performed.
:password: User’s password.

Since maintenance operations are most often initiated by an administrative user
on the same computer as the database server, `host` defaults to the local
computer, and `user` defaults to `SYSDBA`.

The three calls to :func:`fdb.services.connect()` in the following program are
equivalent:

.. code-block:: python

   from fdb import services

   con = services.connect(password='masterkey')
   con = services.connect(user='sysdba', password='masterkey')
   con = services.connect(host='localhost', user='sysdba', password='masterkey')

.. note::

   As with database connections, it's good practice to :meth:`~Connection.close`
   them when you don't need them anymore.

The :class:`~fdb.services.Connection` object provides a number of methods that
can be divided into several groups:

* `Server Configuration and State`_: To get information about server
  configuration, active attachments or users, or to get the content of the
  server log.
* `Database options`_: To set various database parameters like the size of the
  page cache, the access mode or the SQL dialect.
* `Database maintenance`_: To perform backup, restore, validation or other
  database maintenance tasks.
* `User maintenance`_: To get or change information about users defined in the
  security database, and to create or remove users.
* `Trace service`_: To start, stop, pause/resume or list Firebird `trace
  sessions`.
* `Text ouput from Services`_: Some services like `backup` or `trace` may return
  a significant amount of text. This output is not returned directly by the
  method that starts the service, but through separate methods that emulate
  reading from a text file, or through iterator protocol support on
  `Connection`.

.. index::
   pair: Services; server information

Server Configuration and State
------------------------------

:meth:`~Connection.get_service_manager_version()`
   To help client programs adapt to version changes, the service manager exposes
   its version number as an integer.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_service_manager_version()
   2

`fdb.services` is a thick wrapper of the Services API that can shield its users
from changes in the underlying C API, so this method is unlikely to be useful to
the typical Python client programmer.

:meth:`~Connection.get_server_version()`
   Returns the server’s version string.
.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_server_version()
   LI-V2.5.2.26536 Firebird 2.5

At first glance, this method appears to duplicate the functionality of the
:attr:`fdb.Connection.server_version` property, but when working with Firebird,
there is a difference. :attr:`fdb.Connection.server_version` is based on a C API
call (`isc_database_info()`) that existed long before the introduction of the
Services API. Some programs written before the advent of Firebird test the
version number in the return value of `isc_database_info()`, and refuse to work
if it indicates that the server is too old. Since the first stable version of
Firebird was labeled 1.0, this pre-Firebird version testing scheme incorrectly
concludes that (e.g.) Firebird 1.0 is older than Interbase 5.0.

Firebird addresses this problem by making `isc_database_info()` return a
“pseudo-InterBase” version number, whereas the Services API returns the true
Firebird version, as shown:

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   print 'Interbase-compatible version string:', con.server_version

   svcCon = fdb.services.connect(password='masterkey')
   print 'Actual Firebird version string:     ', svcCon.get_server_version()

Output (on Firebird 2.5.1/Linux64)::

   Interbase-compatible version string: LI-V6.3.1.26351 Firebird 2.5
   Actual Firebird version string:      LI-V2.5.1.26351 Firebird 2.5

:meth:`~Connection.get_architecture()`
   Returns platform information for the server, including hardware architecture
   and operating system family.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_architecture()
   Firebird/linux AMD64

:meth:`~Connection.get_home_directory()`
   Returns the equivalent of the RootDirectory setting from
   :file:`firebird.conf`.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_home_directory()
   /opt/firebird/

:meth:`~Connection.get_security_database_path()`
   Returns the location of the server’s core security database, which contains
   user definitions and such. The name of this database is :file:`security2.fdb`
   (Firebird 2.0 and later) or :file:`security.fdb` (Firebird 1.5).

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_security_database_path()
   /opt/firebird/security2.fdb

:meth:`~Connection.get_lock_file_directory()`
   Returns the directory location for Firebird lock files.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_lock_file_directory()
   /tmp/firebird/

:meth:`~Connection.get_server_capabilities()`
   Returns a tuple of capability info codes for each capability reported by the
   Firebird server.
The following constants are defined in :mod:`fdb.services` for convenience:

* :data:`CAPABILITY_MULTI_CLIENT`
* :data:`CAPABILITY_REMOTE_HOP`
* :data:`CAPABILITY_SERVER_CONFIG`
* :data:`CAPABILITY_QUOTED_FILENAME`
* :data:`CAPABILITY_NO_SERVER_SHUTDOWN`

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> import fdb
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_server_capabilities()
   (2L, 4L, 512L, 256L)
   >>> fdb.services.CAPABILITY_MULTI_CLIENT in con.get_server_capabilities()
   True
   >>> fdb.services.CAPABILITY_QUOTED_FILENAME in con.get_server_capabilities()
   False

:meth:`~Connection.get_message_file_directory()`
To support internationalized error messages/prompts, the database engine stores its messages in a file named :file:`firebird.msg`. The directory in which this file resides can be determined with this method.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> print con.get_message_file_directory()
   /opt/firebird/

:meth:`~Connection.get_connection_count()`
Returns the number of active connections to databases managed by the server. This count only includes database connections (such as open instances of :class:`fdb.Connection`), not services manager connections (such as open instances of :class:`fdb.services.Connection`).

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> import fdb
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> db1 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
   >>> db2 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
   >>> print con.get_connection_count()
   2

:meth:`~Connection.get_attached_database_names()`
Returns a list of the names of all databases to which the server is maintaining at least one connection. The database names are not guaranteed to be in any particular order.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> import fdb
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> db1 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
   >>> db2 = fdb.connect(dsn='employee',user='sysdba',password='masterkey')
   >>> print con.get_attached_database_names()
   ['/opt/firebird/examples/empbuild/employee.fdb']

:meth:`~Connection.get_log()`
Requests the contents of the server’s log file (:file:`firebird.log`). This method is a so-called `Async method` that only initiates the log transfer. The actual log content can be read using any of the methods for `Text output from Services`_ that `Connection` provides.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.get_log()
   >>> log = con.readlines()

.. tip:: You can use the :mod:`fdb.log` module for parsing and further data processing.
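Because :meth:`~Connection.get_log` is asynchronous, the log can also be consumed line by line through the optional `callback` parameter that all asynchronous `Connection` methods accept (see `Text output from Services`_ below); a small sketch:

.. code-block:: python

   from fdb import services

   log_lines = []

   def fetchline(line):
       # called once for every line of firebird.log content
       log_lines.append(line)

   con = services.connect(host='localhost', user='sysdba', password='masterkey')
   con.get_log(callback=fetchline)
   print '%i lines fetched' % len(log_lines)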
.. index:: pair: Services; database options

Database options
----------------

:meth:`~Connection.set_default_page_buffers`
Sets the individual page cache size for a database.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.set_default_page_buffers('employee',100)

:meth:`~Connection.set_sweep_interval`
Sets the threshold for automatic sweep.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.set_sweep_interval('employee',100000)

:meth:`~Connection.set_reserve_page_space`
Sets the data page space reservation policy.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Use all space
   >>> con.set_reserve_page_space('employee',False)

:meth:`~Connection.set_write_mode`
Sets the disk write mode: Sync (forced writes) or Async (buffered). The following constants are defined in :mod:`fdb.services` for convenience:

* :data:`~fdb.services.WRITE_FORCED`
* :data:`~fdb.services.WRITE_BUFFERED`

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Disable Forced Writes
   >>> con.set_write_mode('employee',services.WRITE_BUFFERED)

:meth:`~Connection.set_access_mode`
Sets the database access mode: Read Only or Read/Write. The following constants are defined in :mod:`fdb.services` for convenience:

* :data:`~fdb.services.ACCESS_READ_WRITE`
* :data:`~fdb.services.ACCESS_READ_ONLY`

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Set database to R/O mode
   >>> con.set_access_mode('employee',services.ACCESS_READ_ONLY)

:meth:`~Connection.set_sql_dialect`
Sets the SQL dialect for a database.

.. warning:: Changing the SQL dialect of an existing database is not recommended. Only newly created database objects would respect the new dialect setting, while objects created under the previous dialect remain unchanged. That may have dire consequences.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Use SQL dialect 1
   >>> con.set_sql_dialect('employee',1)

.. index:: triple: Services; Database; maintenance

Database maintenance
--------------------

:meth:`~Connection.get_limbo_transaction_ids`
Returns a list of transactions in limbo.

:meth:`~Connection.commit_limbo_transaction`
Resolves a limbo transaction with commit.

:meth:`~Connection.rollback_limbo_transaction`
Resolves a limbo transaction with rollback.
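These three methods are typically used together to resolve stuck distributed transactions; a sketch assuming, as with the other maintenance methods, that each call takes the database specification first (and the resolution methods a transaction ID second):

.. code-block:: python

   from fdb import services

   con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Resolve all in-limbo transactions of 'employee' with commit
   for tid in con.get_limbo_transaction_ids('employee'):
       con.commit_limbo_transaction('employee', tid)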
:meth:`~Connection.get_statistics`
Requests database statistics. The report is in the same format as the output of the gstat command-line utility. This method has one required parameter, the location of the database on which to compute statistics, and six optional boolean parameters for controlling the domain of the statistics. This method is a so-called `Async method` that only initiates report processing. The actual report can be read using any of the methods for `Text output from Services`_ that `Connection` provides.

.. note:: Until the statistical report is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service will fail with an exception.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.get_statistics('employee')
   >>> stat_report = con.readlines()

.. tip:: You can use the :mod:`fdb.gstat` module for parsing and further data processing.

:meth:`~Connection.backup`
Requests a logical (GBAK) database backup. Produces a report about the backup process. This method is a so-called `Async method` that only initiates the backup process. The actual report can be read using any of the methods for `Text output from Services`_ that `Connection` provides.

.. note:: Until the backup report is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service will fail with an exception.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.backup('employee', '/home/data/employee.fbk', metadata_only=True, collect_garbage=False)
   >>> backup_report = con.readlines()

.. note:: :meth:`~Connection.backup` creates a backup file on the server host. Alternatively you can use :meth:`~Connection.local_backup` to create a backup file on the local machine.

:meth:`~Connection.restore`
Requests a database restore from a logical (GBAK) backup. Produces a report about the restore process. This method is a so-called `Async method` that only initiates the restore process. The actual report can be read using any of the methods for `Text output from Services`_ that `Connection` provides.

.. note:: Until the restore report is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service will fail with an exception.

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.restore('/home/data/employee.fbk', '/home/data/empcopy.fdb')
   >>> restore_report = con.readlines()

.. note:: :meth:`~Connection.restore` uses a backup file on the server host. Alternatively you can use :meth:`~Connection.local_restore` to use a backup file on the local machine.

:meth:`~Connection.nbackup`
Performs a physical (NBACKUP) database backup.

.. note:: The method call will not return until the backup is finished.

:meth:`~Connection.nrestore`
Performs a restore from physical (NBACKUP) database backups.

.. note:: The method call will not return until the restore is finished.
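A minimal sketch of a physical backup call; note that the `nbackup_level` keyword name and its meaning (0 = full backup, higher levels = increments over the previous level) are assumptions about the signature, so check the FDB reference before relying on them:

.. code-block:: python

   from fdb import services

   con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Level 0 = full physical backup; level 1 would back up pages changed
   # since the last level-0 backup (keyword name assumed, see reference)
   con.nbackup('employee', '/home/data/employee.nbk0', nbackup_level=0)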
:meth:`~Connection.shutdown`
Database shutdown. The following constants are defined in :mod:`fdb.services` for convenience:

For shutdown mode:

* :data:`~fdb.services.SHUT_SINGLE`
* :data:`~fdb.services.SHUT_MULTI`
* :data:`~fdb.services.SHUT_FULL`

For shutdown method:

* :data:`~fdb.services.SHUT_FORCE`
* :data:`~fdb.services.SHUT_DENY_NEW_TRANSACTIONS`
* :data:`~fdb.services.SHUT_DENY_NEW_ATTACHMENTS`

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Shutdown database to single-user maintenance mode
   >>> con.shutdown('employee', services.SHUT_SINGLE, services.SHUT_FORCE, 0)
   # Go to full shutdown mode, disabling new attachments during 5 seconds
   >>> con.shutdown('employee', services.SHUT_FULL, services.SHUT_DENY_NEW_ATTACHMENTS, 5)

:meth:`~Connection.bring_online`
Brings a previously shut down database back online. The following constants are defined in :mod:`fdb.services` for convenience:

For on-line mode:

* :data:`~fdb.services.SHUT_NORMAL`
* :data:`~fdb.services.SHUT_SINGLE`
* :data:`~fdb.services.SHUT_MULTI`

.. code-block:: python

   # 64-bit Linux Firebird 2.5.1 SuperServer
   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Enable multi-user maintenance
   >>> con.bring_online('employee', services.SHUT_MULTI)
   # Enable single-user maintenance
   >>> con.bring_online('employee', services.SHUT_SINGLE)
   # Return to normal state
   >>> con.bring_online('employee')

:meth:`~Connection.activate_shadow`
Activates database shadow(s).

:meth:`~Connection.sweep`
Performs a database sweep.

.. note:: The method call will not return until the sweep is finished.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.sweep('employee')

:meth:`~Connection.repair`
Database validation and repair.

.. note:: The method call will not return until the action is finished.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Just validate
   >>> con.repair('employee', ignore_checksums=True, read_only_validation=True)
   # Mend the database
   >>> con.repair('employee', ignore_checksums=True, mend_database=True)

.. index:: triple: Services; Database; users

User maintenance
----------------

:meth:`~Connection.get_users`
Returns information about a specified user, or about all users, as a list of :class:`User` instances.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> users = con.get_users()
   >>> for user in users:
   ...    print user.name
   ...    print user.first_name, user.middle_name, user.last_name
   ...    print user.user_id, user.group_id
   SYSDBA
   Sql Server Administrator
   0 0

:meth:`~Connection.add_user`
Adds a new user. Requires an instance of :class:`User` with **at least** its :attr:`~User.name` and :attr:`~User.password` attributes specified as non-empty values. All other attributes are optional.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> user = services.User('NewUser')
   >>> user.password = 'secret'
   >>> user.first_name = 'John'
   >>> user.last_name = 'Doe'
   >>> con.add_user(user)

:meth:`~Connection.modify_user`
Modifies user information. Requires an instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as a non-empty value.

.. note:: Sets :attr:`~User.first_name`, :attr:`~User.middle_name` and :attr:`~User.last_name` to their actual values, and ignores the :attr:`~User.user_id` and :attr:`~User.group_id` attributes regardless of their values. :attr:`~User.password` is set **only** when it has a value.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> user = services.User('SYSDBA')
   # Change password
   >>> user.password = 'Pa$$w0rd'
   >>> con.modify_user(user)

:meth:`~Connection.remove_user`
Removes a user. Requires a user name or an instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as a non-empty value.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.remove_user('NewUser')

:meth:`~Connection.user_exists`
Checks for a user's existence. Requires a user name or an instance of :class:`User` with **at least** its :attr:`~User.name` attribute specified as a non-empty value.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.user_exists('NewUser')
   False
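Combined, these methods make idempotent user management straightforward; a small sketch using only the calls shown above:

.. code-block:: python

   from fdb import services

   con = services.connect(host='localhost', user='sysdba', password='masterkey')
   # Create the account only if it doesn't exist yet
   if not con.user_exists('NewUser'):
       user = services.User('NewUser')
       user.password = 'secret'
       con.add_user(user)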
.. index:: pair: Services; trace

Trace service
-------------

.. tip:: You can use the :mod:`fdb.trace` module for parsing and further data processing.

:meth:`~Connection.trace_start`
Starts a new trace session. Requires a trace `configuration` and returns a `Session ID`. Trace session output can be retrieved through :meth:`~Connection.readline`, :meth:`~Connection.readlines`, iteration over `Connection`, or ignored via a call to :meth:`~Connection.wait`.

.. note:: Until the session output is fully fetched from the service (or ignored via :meth:`~Connection.wait`), any attempt to start another asynchronous service, including a call to any `trace_` method, will fail with an exception.

.. code-block:: python

   import fdb

   svc = fdb.services.connect(password='masterkey')

   # Because the trace session blocks the connection, we need another one to stop it!
   svc_aux = fdb.services.connect(password='masterkey')

   trace_config = """
   enabled true
   log_statement_finish true
   print_plan true
   include_filter %%SELECT%%
   exclude_filter %%RDB$%%
   time_threshold 0
   max_sql_length 2048
   """

   trace_id = svc.trace_start(trace_config,'test_trace_1')
   trace_log = []
   # Get first 10 lines of trace output
   for i in range(10):
       trace_log.append(svc.readline())
   # Stop trace session
   svc_aux.trace_stop(trace_id)

:meth:`~Connection.trace_stop`
Stops the trace session specified by ID.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.trace_stop(15)
   Trace session ID 15 stopped

:meth:`~Connection.trace_suspend`
Suspends the trace session specified by ID.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.trace_suspend(15)
   Trace session ID 15 paused

:meth:`~Connection.trace_resume`
Resumes the trace session specified by ID.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.trace_resume(15)
   Trace session ID 15 resumed

:meth:`~Connection.trace_list`
Returns information about existing trace sessions as a dictionary mapping `SESSION_ID -> SESSION_PARAMS`. Session parameters are stored in another dictionary with the following keys:

:name: (string) (optional) Session name if specified.
:date: (datetime.datetime) Session start date and time.
:user: (string) Trace user name.
:flags: (list of strings) Session flags.

.. code-block:: python

   >>> from fdb import services
   >>> con = services.connect(host='localhost', user='sysdba', password='masterkey')
   >>> con.trace_list()
   {53: {'date': datetime.datetime(2012, 10, 5, 10, 45, 4),
     'flags': ['active', ' admin', ' trace'],
     'user': 'SYSDBA'}}

.. index:: pair: Services; output

Text output from Services
-------------------------

Some services like `backup` or `trace` may return a significant amount of text. Rather than returning the whole text as a single string value from the methods that provide access to these services, FDB isolates the transfer process into separate methods:

* :meth:`~Connection.readline` - Similar to :meth:`file.readline`, returns the next line of output from the Service.
* :meth:`~Connection.readlines` - Like :meth:`file.readlines`, returns a list of output lines.
* Iteration over the `Connection` object, because :class:`Connection` has built-in support for the :ref:`iterator protocol `.
* Using a `callback` function provided by the developer. Each `Connection` method that returns its result asynchronously accepts an optional parameter `callback`, which must be a function that accepts one string parameter. This function is then called with each output line coming from the service.
* :meth:`~Connection.wait` - Waits for the Service to finish, ignoring the rest of the output it may produce.
.. warning:: Until the output is fully fetched from the service, any attempt to start another asynchronous service will fail with an exception! This constraint is imposed by the Firebird Services API. You may check the status of asynchronous Services using the :attr:`Connection.fetching` attribute or the :meth:`Connection.isrunning` method. In cases when you're not interested in the output produced by a Service, call :meth:`~Connection.wait` to wait for the service to complete.

**Examples:**

.. code-block:: python

   import fdb

   svc = fdb.services.connect(password='masterkey')

   print "Fetch materialized"
   print "=================="
   print "Start backup"
   svc.backup('employee', 'employee.fbk')
   print "svc.fetching is", svc.fetching
   print "svc.running is", svc.isrunning()
   report = svc.readlines()
   print "%i lines returned" % len(report)
   print "First 5 lines from output:"
   for i in xrange(5):
       print i, report[i]
   print "svc.fetching is", svc.fetching
   print "svc.running is", svc.isrunning()
   print
   print "Iterate over result"
   print "==================="
   svc.backup('employee', 'employee.fbk')
   output = []
   for line in svc:
       output.append(line)
   print "%i lines returned" % len(output)
   print "Last 5 lines from output:"
   for line in output[-5:]:
       print line
   print
   print "Callback"
   print "========"
   output = []

   # Callback function
   def fetchline(line):
       output.append(line)

   svc.backup('employee', 'employee.fbk', callback=fetchline)
   print "%i lines returned" % len(output)
   print "Last 5 lines from output:"
   for line in output[-5:]:
       print line

Output::

   Fetch materialized
   ==================
   Start backup
   svc.fetching is True
   svc.running is True
   558 lines returned
   First 5 lines from output:
   0 gbak:readied database employee for backup
   1 gbak:creating file employee.fbk
   2 gbak:starting transaction
   3 gbak:database employee has a page size of 4096 bytes.
   4 gbak:writing domains
   svc.fetching is False
   svc.running is False

   Iterate over result
   ===================
   558 lines returned
   Last 5 lines from output:
   gbak:writing referential constraints
   gbak:writing check constraints
   gbak:writing SQL roles
   gbak:writing names mapping
   gbak:closing file, committing, and finishing. 74752 bytes written

   Callback
   ========
   558 lines returned
   Last 5 lines from output:
   gbak:writing referential constraints
   gbak:writing check constraints
   gbak:writing SQL roles
   gbak:writing names mapping
   gbak:closing file, committing, and finishing. 74752 bytes written

.. currentModule:: fdb.schema

.. index:: pair: Database; schema
   pair: Database schema; working with

.. _working-with-schema:

Working with database schema
============================

Descriptions of database objects like tables, views, stored procedures, triggers or UDF functions that represent the database schema are stored in a set of system tables present in every database. Firebird users can query these tables to get information about these objects and their relations. But querying system tables is inconvenient, as it requires good knowledge of how this information is structured, and a significant amount of Python code. Changes in system tables between Firebird versions further add to this complexity. Hence FDB provides a set of classes (isolated in the separate module :mod:`fdb.schema`) that transform information stored in system tables into a set of Python objects that surface the vital information in a meaningful way, and additionally provide a set of methods for the most commonly used operations or checks.
The database schema can be accessed in three different ways, each suitable for a different use case:

- By direct creation of :class:`fdb.schema.Schema` instances that are then :meth:`bound ` to a particular :class:`Connection` instance. This method is best if you want to work with schema only occasionally, or you want to keep connections as lightweight as possible.
- Accessing the :attr:`fdb.Connection.schema` property. This method is more convenient than the previous one, and represents a compromise between convenience and resource consumption, because the :class:`~fdb.schema.Schema` instance is not created until first reference and is managed by the connection itself. Individual metadata objects are not loaded from system tables until first reference.
- Using :class:`~fdb.ConnectionWithSchema` instead of :class:`Connection` by specifying the `connection_class=ConnectionWithSchema` parameter to :func:`~fdb.connect` or :func:`~fdb.create_database`. This `Connection` descendant loads all database metadata immediately and directly provides all attributes and methods of the :class:`~fdb.schema.Schema` class. This method is most suitable when you want to work with database metadata extensively.

**Examples:**

1. Using Schema instance:

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   >>> schema = fdb.schema.Schema()
   >>> schema.bind(con)
   >>> [t.name for t in schema.tables]
   ['COUNTRY', 'JOB', 'DEPARTMENT', 'EMPLOYEE', 'SALES', 'PROJECT', 'EMPLOYEE_PROJECT', 'PROJ_DEPT_BUDGET', 'SALARY_HISTORY', 'CUSTOMER']

2. Using Connection.schema:

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   >>> [t.name for t in con.schema.tables]
   ['COUNTRY', 'JOB', 'DEPARTMENT', 'EMPLOYEE', 'SALES', 'PROJECT', 'EMPLOYEE_PROJECT', 'PROJ_DEPT_BUDGET', 'SALARY_HISTORY', 'CUSTOMER']

3. Using ConnectionWithSchema:

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey',
   ...                   connection_class=fdb.ConnectionWithSchema)
   >>> [t.name for t in con.tables]
   ['COUNTRY', 'JOB', 'DEPARTMENT', 'EMPLOYEE', 'SALES', 'PROJECT', 'EMPLOYEE_PROJECT', 'PROJ_DEPT_BUDGET', 'SALARY_HISTORY', 'CUSTOMER']

.. note:: Individual metadata information (i.e. information about :attr:`domains `, :attr:`tables ` etc.) is loaded on first access and cached for further reference until it's :meth:`cleared ` or a :meth:`reload ` is requested. Because loaded information is cached, it's good to :meth:`clear ` it when it's no longer needed, to conserve memory.
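Because cached metadata can hold many objects, clearing the cache when you are done is cheap insurance; a small sketch using only the documented :attr:`fdb.Connection.schema` property and its `clear` method:

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   tables = con.schema.tables   # loaded from system tables and cached
   # ... work with the metadata ...
   con.schema.clear()           # release the cached metadata objects
   tables = con.schema.tables   # re-read from system tables on next access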
.. index:: pair: Database schema; categories

Available information
---------------------

The :class:`~fdb.schema.Schema` provides information about:

- **Database:** :attr:`Owner name `, :attr:`default character set `, :attr:`description `, :attr:`security class `, :attr:`nbackup backup history ` and whether the database consists of :meth:`single or multiple files `.
- **Facilities:** Available :attr:`character sets `, :attr:`collations `, BLOB :attr:`filters `, database :attr:`files ` and :attr:`shadows `.
- **User database objects:** :attr:`exceptions `, :attr:`generators `, :attr:`domains `, :attr:`tables ` and their :attr:`constraints `, :attr:`indices `, :attr:`views `, :attr:`triggers `, :attr:`procedures `, user :attr:`roles `, :attr:`user defined functions ` and :attr:`packages `.
- **System database objects:** :attr:`generators `, :attr:`domains `, :attr:`tables ` and their constraints, :attr:`indices `, :attr:`views `, :attr:`triggers `, :attr:`procedures `, :attr:`functions ` and :attr:`backup history `.
- **Relations between objects:** Through direct links between metadata objects and :attr:`dependencies `.
- **Privileges:** :attr:`All ` privileges, or privileges granted for a specific :attr:`table `, :attr:`table column `, :attr:`view `, :attr:`view column `, :attr:`procedure ` or :attr:`role `. It's also possible to get all privileges :meth:`granted to ` a specific user, role, procedure, trigger or view.

.. index:: pair: Database schema; metadata objects

Metadata objects
----------------

Schema information is presented as Python objects of various classes with the common parent class :class:`~fdb.schema.BaseSchemaItem` (except :class:`~fdb.schema.Schema` itself), which defines several common attributes and methods:

Attributes:

- :attr:`~fdb.schema.BaseSchemaItem.name`: Name of the database object, or None if the object doesn't have a name.
- :attr:`~fdb.schema.BaseSchemaItem.description`: Description (documentation text) for the object, or None if the object doesn't have a description.
- :attr:`~fdb.schema.BaseSchemaItem.actions`: List of supported SQL operations on the schema object instance.

Methods:

- :meth:`~fdb.schema.BaseSchemaItem.accept`: :ref:`Visitor Pattern support `.
- :meth:`~fdb.schema.BaseSchemaItem.issystemobject`: Returns True if this database object is a system object.
- :meth:`~fdb.schema.BaseSchemaItem.get_quoted_name`: Returns the (quoted when necessary) name of the database object.
- :meth:`~fdb.schema.BaseSchemaItem.get_dependents`: Returns a list of all database objects that :ref:`depend ` on this one.
- :meth:`~fdb.schema.BaseSchemaItem.get_dependencies`: Returns a list of database objects that this object :ref:`depends ` on.
- :meth:`~fdb.schema.BaseSchemaItem.get_sql_for`: Returns an :ref:`SQL command string ` for the specified action on the database object.

The following schema object classes are defined: :class:`~fdb.schema.Collation`, :class:`~fdb.schema.CharacterSet`, :class:`~fdb.schema.DatabaseException`, :class:`~fdb.schema.Sequence` (Generator), :class:`~fdb.schema.Domain`, :class:`~fdb.schema.Index`, :class:`~fdb.schema.Table`, :class:`~fdb.schema.TableColumn`, :class:`~fdb.schema.Constraint`, :class:`~fdb.schema.View`, :class:`~fdb.schema.ViewColumn`, :class:`~fdb.schema.Trigger`, :class:`~fdb.schema.Procedure`, :class:`~fdb.schema.ProcedureParameter`, :class:`~fdb.schema.Function`, :class:`~fdb.schema.FunctionArgument`, :class:`~fdb.schema.Role`, :class:`~fdb.schema.Dependency`, :class:`~fdb.schema.DatabaseFile`, :class:`~fdb.schema.Shadow`, :class:`~fdb.schema.Package`, :class:`~fdb.schema.Filter`, :class:`~fdb.schema.BackupHistory` and :class:`~fdb.schema.Privilege`.
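The common members make it easy to inspect any object uniformly; a short sketch (the exact content of `actions` varies by object type and Firebird version):

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   table = con.schema.get_table('EMPLOYEE')
   print table.name               # EMPLOYEE
   print table.issystemobject()   # False for user tables
   print table.get_quoted_name()  # quoted only when quoting is necessary
   print table.actions            # SQL actions supported by this object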
.. index:: pair: visitor pattern; usage
   pair: Database schema; visitor pattern

.. _visitor-pattern-support:

Visitor Pattern support
-----------------------

.. versionchanged:: 2.0
   Class *fdb.schema.SchemaVisitor* was replaced with the :class:`fdb.utils.Visitor` class.

The `Visitor Pattern`_ is particularly useful when you need to process various objects that need special handling in a common algorithm (for example, to display information about them, or to generate SQL commands to create them in a new database). Each metadata object (including :class:`~fdb.schema.Schema`) descends from the :class:`~fdb.utils.Visitable` class and thus supports the :meth:`~fdb.schema.BaseSchemaItem.accept` method, which calls the visitor's :meth:`~fdb.utils.Visitor.visit` method. This method dispatches calls to a specific class-handling method, or to :meth:`fdb.utils.Visitor.default_action` if no such class-handling method is defined in your visitor class. Special class-handling methods must have a name that follows the *visit_* pattern; for example, the method that should handle :class:`~fdb.schema.Table` (or its descendants) objects must be named *visit_Table*.

.. index:: pair: visitor pattern; example

The following code uses the visitor pattern to print all DROP SQL statements necessary to drop a database object, taking its dependencies into account, i.e. it may be necessary to first drop other, dependent objects before the object itself can be dropped.

.. code-block:: python

   import fdb

   # Object dropper
   class ObjectDropper(fdb.utils.Visitor):
       def __init__(self):
           self.seen = []
       def drop(self, obj):
           self.seen = []
           obj.accept(self)  # You can call self.visit(obj) directly here as well
       def default_action(self, obj):
           if not obj.issystemobject() and 'drop' in obj.actions:
               for dependency in obj.get_dependents():
                   d = dependency.dependent
                   if d and d not in self.seen:
                       d.accept(self)
               if obj not in self.seen:
                   print obj.get_sql_for('drop')
                   self.seen.append(obj)
       def visit_TableColumn(self, column):
           column.table.accept(self)
       def visit_ViewColumn(self, column):
           column.view.accept(self)
       def visit_ProcedureParameter(self, param):
           param.procedure.accept(self)
       def visit_FunctionArgument(self, arg):
           arg.function.accept(self)

   # Sample use:
   con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   table = con.schema.get_table('JOB')
   dropper = ObjectDropper()
   dropper.drop(table)

This will produce the following result::

   DROP PROCEDURE ALL_LANGS
   DROP PROCEDURE SHOW_LANGS
   DROP TABLE JOB

.. _object-dependencies:

.. index:: pair: dependencies; working with
   pair: Database schema; dependencies

Object dependencies
-------------------

Close relations between metadata objects, like `ownership` (Table vs. TableColumn, Index or Trigger) or `cooperation` (like FK Index vs. partner UQ/PK Index), are defined directly using properties of the particular schema objects. Besides close relations, Firebird also uses `dependencies`, which describe functional dependencies between otherwise independent metadata objects. For example, a stored procedure can call other stored procedures, define its parameters using domains, or work with tables or views. Removing or changing these objects may (or will) cause the procedure to stop working correctly, so Firebird tracks these dependencies. FDB surfaces these dependencies as :class:`~fdb.schema.Dependency` schema objects, and all schema objects have :meth:`~fdb.schema.BaseSchemaItem.get_dependents` and :meth:`~fdb.schema.BaseSchemaItem.get_dependencies` methods to get the list of :class:`~fdb.schema.Dependency` instances that describe these dependencies. The :class:`~fdb.schema.Dependency` object provides the names and types of the dependent/depended-on database objects, as well as access to their respective schema Python objects.
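For example, a quick way to see what would break before dropping a table is to walk its dependents; a sketch built from the same calls used by the `ObjectDropper` above:

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   table = con.schema.get_table('JOB')
   # Objects that functionally depend on JOB and would break if it were dropped
   for dep in table.get_dependents():
       d = dep.dependent          # schema object on the dependent side
       if d is not None:
           print d.name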
.. _enhanced-object-list:

.. index:: pair: list; enhanced

Enhanced list of objects
------------------------

.. versionadded:: 2.0

Whenever possible, the schema module uses the enhanced :class:`~fdb.utils.ObjectList` list descendant for collections of metadata objects. This enhanced list provides several convenient methods for advanced list processing:

- filtering - :meth:`~fdb.utils.ObjectList.filter`, :meth:`~fdb.utils.ObjectList.ifilter` and :meth:`~fdb.utils.ObjectList.ifilterfalse`
- sorting - :meth:`~fdb.utils.ObjectList.sort`
- extracting/splitting - :meth:`~fdb.utils.ObjectList.extract` and :meth:`~fdb.utils.ObjectList.split`
- testing - :meth:`~fdb.utils.ObjectList.contains`, :meth:`~fdb.utils.ObjectList.all` and :meth:`~fdb.utils.ObjectList.any`
- reporting - :meth:`~fdb.utils.ObjectList.ecount`, :meth:`~fdb.utils.ObjectList.report` and :meth:`~fdb.utils.ObjectList.ireport`
- fast key access - :attr:`~fdb.utils.ObjectList.key`, :attr:`~fdb.utils.ObjectList.frozen`, :meth:`~fdb.utils.ObjectList.freeze` and :meth:`~fdb.utils.ObjectList.get`
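For instance, filtering separates user objects from system ones nicely; a sketch assuming :meth:`~fdb.utils.ObjectList.ifilter` lazily yields the items for which the expression is true:

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   # Iterate only over user (non-system) tables
   for table in con.schema.tables.ifilter(lambda t: not t.issystemobject()):
       print table.name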
.. _sql-operations:

SQL operations
--------------

FDB doesn't allow you to change database metadata directly using schema objects. Instead, it supports generation of DDL SQL commands from schema objects using the :meth:`~fdb.schema.BaseSchemaItem.get_sql_for` method, which is present on all schema objects except Schema itself. The DDL commands that can be generated depend on object type and context (for example, it's not possible to generate all DDL commands for system database objects), and the list of DDL commands that can be generated for a particular schema object can be obtained from its :attr:`~fdb.schema.BaseSchemaItem.actions` attribute. Possible `actions` are: create, recreate, create_or_alter, alter, drop, activate, deactivate, recompute and declare. Some actions require or allow additional parameters.

.. list-table:: SQL actions
   :widths: 20 15 15 5 45
   :header-rows: 1

   * - Schema class - Action - Parameter - Required - Description
   * - :class:`~fdb.schema.Collation` - create - - -
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.CharacterSet` - alter - collation - Yes - :class:`~fdb.schema.Collation` instance or collation name
   * - - comment - - -
   * - :class:`~fdb.schema.DatabaseException` - create - - -
   * - - recreate - - -
   * - - alter - message - Yes - string
   * - - create_or_alter - - -
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Sequence` - create - - -
   * - - alter - value - Yes - integer
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Domain` - create - - -
   * - - alter - - - At least one of the following parameters is required
   * - - - name - No - string
   * - - - default - No - string definition or None to drop default
   * - - - check - No - string definition or None to drop check
   * - - - datatype - No - string SQL datatype definition
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Constraint` - create - - -
   * - - drop - - -
   * - :class:`~fdb.schema.Index` - create - - -
   * - - activate - - -
   * - - deactivate - - -
   * - - recompute - - -
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Table` - create - - -
   * - - - no_pk - No - Do not generate PK constraint
   * - - - no_unique - No - Do not generate unique constraints
   * - - recreate - - -
   * - - - no_pk - No - Do not generate PK constraint
   * - - - no_unique - No - Do not generate unique constraints
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.TableColumn` - alter - - - At least one of the following parameters is required
   * - - - name - No - string
   * - - - datatype - No - string SQL type definition
   * - - - position - No - integer
   * - - - expression - No - string with COMPUTED BY expression
   * - - - restart - No - None or initial value
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.View` - create - - -
   * - - recreate - - -
   * - - alter - columns - No - string or list of strings
   * - - - query - Yes - string
   * - - - check - No - True for WITH CHECK OPTION clause
   * - - create_or_alter - - -
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Trigger` - create - inactive - No - Create inactive trigger
   * - - recreate - - -
   * - - create_or_alter - - -
   * - - alter - - - Requires parameters for either header or body definition.
   * - - - fire_on - No - string
   * - - - active - No - bool
   * - - - sequence - No - integer
   * - - - declare - No - string or list of strings
   * - - - code - No - string or list of strings
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Procedure` - create - no_code - No - True to suppress procedure body from output
   * - - recreate - no_code - No - True to suppress procedure body from output
   * - - create_or_alter - no_code - No - True to suppress procedure body from output
   * - - alter - input - No - Input parameters
   * - - - output - No - Output parameters
   * - - - declare - No - Variable declarations
   * - - - code - Yes - Procedure code / body
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Role` - create - - -
   * - - drop - - -
   * - - comment - - -
   * - :class:`~fdb.schema.Function` - declare - - -
   * - - drop - - -
   * - - create - no_code - No - Generate PSQL function code or not
   * - - create_or_alter - no_code - No - Generate PSQL function code or not
   * - - recreate - no_code - No - Generate PSQL function code or not
   * - - alter - arguments - No - Function arguments
   * - - - returns - Yes - Function return value
   * - - - declare - No - Variable declarations
   * - - - code - Yes - PSQL function body / code
   * - - comment - - -
   * - :class:`~fdb.schema.DatabaseFile` - create - - -
   * - :class:`~fdb.schema.Shadow` - create - - -
   * - - drop - preserve - No - Preserve file or not
   * - :class:`~fdb.schema.Privilege` - grant - grantors - No - List of grantor names. Generates GRANTED BY clause if grantor is not in list.
   * - - revoke - grantors - No - List of grantor names. Generates GRANTED BY clause if grantor is not in list.
   * - - - grant_option - No - True to get REVOKE of GRANT/ADMIN OPTION only. Raises ProgrammingError if privilege doesn't have such option.
   * - :class:`~fdb.schema.Package` - create - body - No - (bool) Generate package body
   * - - recreate - body - No - (bool) Generate package body
   * - - create_or_alter - body - No - (bool) Generate package body
   * - - alter - header - No - (string_or_list) Package header
   * - - drop - body - No - (bool) Drop only package body

**Examples:**

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   >>> t = con.schema.get_table('EMPLOYEE')
   >>> print t.get_sql_for('create')
   CREATE TABLE EMPLOYEE
   (
     EMP_NO EMPNO NOT NULL,
     FIRST_NAME "FIRSTNAME" NOT NULL,
     LAST_NAME "LASTNAME" NOT NULL,
     PHONE_EXT VARCHAR(4),
     HIRE_DATE TIMESTAMP DEFAULT 'NOW' NOT NULL,
     DEPT_NO DEPTNO NOT NULL,
     JOB_CODE JOBCODE NOT NULL,
     JOB_GRADE JOBGRADE NOT NULL,
     JOB_COUNTRY COUNTRYNAME NOT NULL,
     SALARY SALARY NOT NULL,
     FULL_NAME COMPUTED BY (last_name || ', ' || first_name),
     PRIMARY KEY (EMP_NO)
   )
   >>> for i in t.indices:
   ...     if 'create' in i.actions:
   ...         print i.get_sql_for('create')
   ...
   CREATE ASCENDING INDEX NAMEX ON EMPLOYEE (LAST_NAME,FIRST_NAME)
   >>> for c in [x for x in t.constraints if x.ischeck() or x.isfkey()]:
   ...     print c.get_sql_for('create')
   ...
   ALTER TABLE EMPLOYEE ADD FOREIGN KEY (DEPT_NO) REFERENCES DEPARTMENT (DEPT_NO)
   ALTER TABLE EMPLOYEE ADD FOREIGN KEY (JOB_CODE,JOB_GRADE,JOB_COUNTRY) REFERENCES JOB (JOB_CODE,JOB_GRADE,JOB_COUNTRY)
   ALTER TABLE EMPLOYEE ADD CHECK ( salary >= (SELECT min_salary FROM job WHERE job.job_code = employee.job_code AND job.job_grade = employee.job_grade AND job.job_country = employee.job_country) AND salary <= (SELECT max_salary FROM job WHERE job.job_code = employee.job_code AND job.job_grade = employee.job_grade AND job.job_country = employee.job_country))
   >>> p = con.schema.get_procedure('GET_EMP_PROJ')
   >>> print p.get_sql_for('recreate',no_code=True)
   RECREATE PROCEDURE GET_EMP_PROJ (EMP_NO SMALLINT)
   RETURNS (PROJ_ID CHAR(5))
   AS
   BEGIN
   END
   >>> print p.get_sql_for('create_or_alter')
   CREATE OR ALTER PROCEDURE GET_EMP_PROJ (EMP_NO SMALLINT)
   RETURNS (PROJ_ID CHAR(5))
   AS
   BEGIN
       FOR SELECT proj_id FROM employee_project WHERE emp_no = :emp_no INTO :proj_id
       DO
           SUSPEND;
   END
   >>> print p.get_sql_for('alter',input=['In1 INTEGER','In2 VARCHAR(5)'],
   ...     output='Out1 INTEGER,\nOut2 VARCHAR(10)',declare=['declare variable i integer = 1;'],
   ...     code=['/* body */','Out1 = i',"Out2 = 'Value'"])
   ALTER PROCEDURE GET_EMP_PROJ (
     In1 INTEGER,
     In2 VARCHAR(5)
   )
   RETURNS (Out1 INTEGER,
   Out2 VARCHAR(10))
   AS
   declare variable i integer = 1;
   BEGIN
   /* body */
   Out1 = i
   Out2 = 'Value'
   END

.. index:: pair: privileges; working with

Working with user privileges
============================

User or database object privileges are part of the database metadata accessible through FDB's :class:`~fdb.schema.Schema` support. Each discrete privilege is represented by a :class:`~fdb.schema.Privilege` instance. You can access either :attr:`all ` privileges, or privileges granted for a specific :attr:`table `, :attr:`table column `, :attr:`view `, :attr:`view column `, :attr:`procedure ` or :attr:`role `. It's also possible to get all privileges :meth:`granted to ` a specific user, role, procedure, trigger or view.

The :class:`~fdb.schema.Privilege` class supports the :meth:`~fdb.schema.Privilege.get_sql_for` method to generate GRANT and REVOKE SQL statements for the given privilege. If you want to generate grant/revoke statements for a set of privileges (for example all privileges granted on a specific object, or granted to a specific user), it's more convenient to use the function :func:`~fdb.schema.get_grants`, which returns the minimal list of SQL commands required for the task.

**Examples:**

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   >>> t = con.schema.get_table('EMPLOYEE')
   >>> for p in t.privileges:
   ...     print p.get_sql_for('grant')
   ...
   GRANT SELECT ON EMPLOYEE TO SYSDBA WITH GRANT OPTION
   GRANT INSERT ON EMPLOYEE TO SYSDBA WITH GRANT OPTION
   GRANT UPDATE ON EMPLOYEE TO SYSDBA WITH GRANT OPTION
   GRANT DELETE ON EMPLOYEE TO SYSDBA WITH GRANT OPTION
   GRANT REFERENCES ON EMPLOYEE TO SYSDBA WITH GRANT OPTION
   GRANT SELECT ON EMPLOYEE TO PUBLIC WITH GRANT OPTION
   GRANT INSERT ON EMPLOYEE TO PUBLIC WITH GRANT OPTION
   GRANT UPDATE ON EMPLOYEE TO PUBLIC WITH GRANT OPTION
   GRANT DELETE ON EMPLOYEE TO PUBLIC WITH GRANT OPTION
   GRANT REFERENCES ON EMPLOYEE TO PUBLIC WITH GRANT OPTION
   >>> for p in fdb.schema.get_grants(t.privileges):
   ...     print p
   ...
   GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE ON EMPLOYEE TO PUBLIC WITH GRANT OPTION
   GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE ON EMPLOYEE TO SYSDBA WITH GRANT OPTION

Normally, generated GRANT/REVOKE statements don't contain the grantor's name, because the GRANTED BY clause is supported only since Firebird 2.5. If you want to get GRANT/REVOKE statements including this clause, use the `grantors` parameter of `get_sql_for` and `get_grants`. This parameter is a list of grantor names, and the GRANTED BY clause is generated only for privileges not granted by a user from this list. It's useful to suppress the GRANTED BY clause for SYSDBA or the database owner.
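For example, to emit explicit GRANTED BY clauses for everything not granted by SYSDBA, a sketch using the `grantors` parameter described above:

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   t = con.schema.get_table('EMPLOYEE')
   # GRANTED BY is suppressed for privileges granted by SYSDBA,
   # and generated for all other grantors
   for stmt in fdb.schema.get_grants(t.privileges, grantors=['SYSDBA']):
       print stmt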
.. currentModule:: fdb.monitor

.. index:: pair: monitoring tables; working with

Working with monitoring tables
==============================

Starting from Firebird 2.1 (ODS 11.1) it's possible to monitor server-side activity happening inside a particular database. The engine offers a set of “virtual” tables (so-called "monitoring tables") that provide the user with a snapshot of the current activity within the given database.

FDB provides access to this information through a set of classes (isolated in the separate module :mod:`fdb.monitor`) that transform information stored in the monitoring tables into a set of Python objects that surface the information in a meaningful way, and additionally provide a set of methods for available operations or checks.

Like the database schema, monitoring tables can be accessed in two different ways, each suitable for a different use case:

- By direct creation of :class:`fdb.monitor.Monitor` instances that are then bound to a particular :class:`~fdb.Connection` instance. This method is best if you want to work with monitoring data only occasionally, or you want to keep connections as lightweight as possible.
- Accessing the :attr:`fdb.Connection.monitor` property. This method is more convenient than the previous one, and represents a compromise between convenience and resource consumption, because the :class:`~fdb.monitor.Monitor` instance is not created until first reference and is managed by the connection itself.

**Examples:**

1. Using Monitor instance:

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   >>> monitor = fdb.monitor.Monitor()
   >>> monitor.bind(con)
   >>> monitor.db.name
   '/opt/firebird/examples/empbuild/employee.fdb'

2. Using Connection.monitor:

.. code-block:: python

   >>> import fdb
   >>> con = fdb.connect(dsn='employee',user='sysdba', password='masterkey')
   >>> con.monitor.db.name
   '/opt/firebird/examples/empbuild/employee.fdb'

Available information
---------------------

The :class:`~fdb.monitor.Monitor` provides information about:

- :attr:`Database `.
- :attr:`Connections ` to the database and the :attr:`current ` connection.
- :attr:`Transactions `.
- Executed :attr:`SQL statements `.
- PSQL :attr:`callstack `.
- Page and row :attr:`I/O statistics `, including memory usage.
- :attr:`Context variables `.

Activity snapshot
-----------------

The key term of the monitoring feature is an `activity snapshot`. It represents the current state of the database, comprising a variety of information about the database itself, active attachments and users, transactions, prepared and running statements, and more. A snapshot is created the first time any of the monitoring information is accessed in the given `Monitor` instance, and it is preserved until it's :attr:`~fdb.monitor.Monitor.closed`, :meth:`cleared ` or :meth:`refreshed `, so that the accessed information is always consistent.

There are two ways to refresh the snapshot:

1. Call the :meth:`Monitor.clear() ` method. A new snapshot will be taken on the next access to monitoring information.
2. Call the :meth:`Monitor.refresh() ` method to take the new snapshot immediately.

.. important:: In both cases, any instances of information objects your application may hold would become obsolete. Using them may result in errors, or (more likely) provide outdated information.

.. note:: Individual monitoring information (i.e. information about :attr:`connections `, :attr:`transactions ` etc.) is loaded from the activity snapshot on first access and cached for further reference until it's :meth:`cleared ` or the snapshot is :meth:`refreshed `. Because loaded information is cached, it's good to :meth:`clear ` it when it's no longer needed, to conserve memory.
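A short sketch of the snapshot life cycle, using only the members shown above (the `db` attribute, `clear` and `refresh` methods):

.. code-block:: python

   import fdb

   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   print con.monitor.db.name   # first access takes the activity snapshot
   con.monitor.refresh()       # take a fresh snapshot immediately
   con.monitor.clear()         # or: drop it; the next access takes a new one
   print con.monitor.db.name   # works on the new snapshot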
.. currentModule:: fdb

.. _driver-hooks:

.. index:: pair: driver; hooks
   pair: hooks; usage

Driver hooks
============

.. versionadded:: 2.0

FDB provides an internal notification mechanism that allows installation of custom *hooks* into certain driver tasks. This mechanism consists of the following functions (a usage sketch follows the list):

- :meth:`fdb.add_hook` that installs a hook function for the specified *hook_type*.
- :meth:`fdb.remove_hook` that uninstalls a previously installed hook function for the specified *hook_type*.
- :meth:`fdb.get_hooks` that returns the list of installed hook routines for the specified *hook_type*.
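A minimal sketch of installing and removing a hook; the argument order of :meth:`fdb.add_hook` (*hook_type* first, then the function) is an assumption here, and the hook function follows the `HOOK_DATABASE_ATTACHED` signature described below:

.. code-block:: python

   import fdb

   def on_attached(connection):
       # Signature for HOOK_DATABASE_ATTACHED is hook_func(connection);
       # any return value is ignored (see below)
       print 'New connection attached:', connection

   # Argument order (hook_type, function) is assumed; check the reference
   fdb.add_hook(fdb.HOOK_DATABASE_ATTACHED, on_attached)
   con = fdb.connect(dsn='employee', user='sysdba', password='masterkey')
   fdb.remove_hook(fdb.HOOK_DATABASE_ATTACHED, on_attached)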
.. index:: pair: hooks; types

.. index:: HOOK_API_LOADED, HOOK_DATABASE_ATTACHED, HOOK_DATABASE_ATTACH_REQUEST

.. index:: HOOK_DATABASE_DETACH_REQUEST, HOOK_DATABASE_CLOSED, HOOK_SERVICE_ATTACHED

FDB provides the following *hook types* (exposed as constants in the *fdb* namespace):

.. data:: HOOK_API_LOADED

   This hook is invoked once, when an instance of :class:`~fdb.ibase.fbclient_API` is created. It can be used for additional initialization tasks that require the Firebird API. The hook routine must have the signature *hook_func(api)*. Any value returned by the hook is ignored.

.. data:: HOOK_DATABASE_ATTACHED

   This hook is invoked just before a :class:`Connection` (or subclass) instance is returned to the client application. The hook routine must have the signature *hook_func(connection)*. Any value returned by the hook is ignored.

.. data:: HOOK_DATABASE_ATTACH_REQUEST

   This hook is invoked after all parameters are preprocessed and before a :class:`Connection` is created. The hook routine must have the signature *hook_func(dsn, dpb)*, where `dpb` is a :class:`~fdb.ParameterBuffer` instance. It may return a :class:`Connection` (or subclass) instance or None. The first instance returned by any hook of this type becomes the return value of the caller function, and other hooks of the same type are not invoked.

.. data:: HOOK_DATABASE_DETACH_REQUEST

   This hook is invoked before a connection is closed. The hook must have the signature *hook_func(connection)*. If any hook function returns True, the connection is not closed.

.. data:: HOOK_DATABASE_CLOSED

   This hook is invoked after a connection is closed. The hook must have the signature *hook_func(connection)*. Any value returned by the hook is ignored.

.. data:: HOOK_SERVICE_ATTACHED

   This hook is invoked before a :class:`fdb.services.Connection` instance is returned. The hook must have the signature *hook_func(connection)*. Any value returned by the hook is ignored.

.. index:: pair: hooks; invocation

Installed hook functions are invoked by the following fdb code:

:func:`fdb.load_api` hooks:

- Event :data:`HOOK_API_LOADED`

:func:`fdb.connect` hooks:

- Event :data:`HOOK_DATABASE_ATTACH_REQUEST`
- Event :data:`HOOK_DATABASE_ATTACHED`

:func:`fdb.create_database` hooks:

- Event :data:`HOOK_DATABASE_ATTACHED`

:meth:`fdb.Connection.close` hooks:

- Event :data:`HOOK_DATABASE_DETACH_REQUEST`
- Event :data:`HOOK_DATABASE_CLOSED`

:func:`fdb.services.connect` hooks:

- Event :data:`HOOK_SERVICE_ATTACHED`

.. _Firebird Documentation: http://www.firebirdsql.org/en/reference-manuals/
.. _DDL: http://en.wikipedia.org/wiki/Data_Definition_Language
.. _DML: http://en.wikipedia.org/wiki/Data_Manipulation_Language
.. _Firebird SAVEPOINTs: http://www.firebirdsql.org/refdocs/langrefupd15-savepoint.html
.. _Firebird API documentation: http://www.ibphoenix.com/files/60ApiGuide.zip
.. _busy-wait: http://www.catb.org/jargon/html/B/busy-wait.html
.. _Visitor Pattern: http://en.wikipedia.org/wiki/Visitor_pattern

#############
FDB Reference
#############

=====================
Main driver namespace
=====================

.. module:: fdb
   :synopsis: Python Database API 2.0 Compliant driver for Firebird

Constants
=========

:__version__: Current driver version, string.

PyDB API 2.0 globals
--------------------

:apilevel: String constant stating the supported DB API level (2.0).
:threadsafety: Integer constant stating the level of thread safety the interface supports. Currently `1` = Threads may share the module, but not connections.
:paramstyle: String constant stating the type of parameter marker formatting expected by the interface. `'qmark'` = Question mark style, e.g. '...WHERE name=?'

Constants for work with driver :ref:`hooks `
----------------------------------------------------------

- HOOK_API_LOADED
- HOOK_DATABASE_ATTACHED
- HOOK_DATABASE_ATTACH_REQUEST
- HOOK_DATABASE_DETACH_REQUEST
- HOOK_DATABASE_CLOSED
- HOOK_SERVICE_ATTACHED

Helper constants for work with :attr:`Cursor.description` content
-----------------------------------------------------------------

- DESCRIPTION_NAME
- DESCRIPTION_TYPE_CODE
- DESCRIPTION_DISPLAY_SIZE
- DESCRIPTION_INTERNAL_SIZE
- DESCRIPTION_PRECISION
- DESCRIPTION_SCALE
- DESCRIPTION_NULL_OK

Helper Transaction Parameter Block (TPB) constants
--------------------------------------------------

:ISOLATION_LEVEL_READ_COMMITED_LEGACY: R/W WAIT READ COMMITTED NO RECORD VERSION transaction.
:ISOLATION_LEVEL_READ_COMMITED: R/W WAIT READ COMMITTED RECORD VERSION transaction.
:ISOLATION_LEVEL_REPEATABLE_READ: Same as ISOLATION_LEVEL_SNAPSHOT. :ISOLATION_LEVEL_SNAPSHOT: R/W WAIT SNAPSHOT transaction. :ISOLATION_LEVEL_SERIALIZABLE: R/W WAIT SERIALIZABLE transaction. :ISOLATION_LEVEL_SNAPSHOT_TABLE_STABILITY: Same as ISOLATION_LEVEL_SERIALIZABLE. :ISOLATION_LEVEL_READ_COMMITED_RO: R/O WAIT READ COMMITTED RECORD VERSION transaction. :MAX_BLOB_SEGMENT_SIZE: :charset_map: Python dictionary that maps Firebird character set names (key) to Python character sets (value). ODS version numbers introduced by Firebird engine version --------------------------------------------------------- - ODS_FB_20 - ODS_FB_21 - ODS_FB_25 - ODS_FB_30 Translation dictionaries ------------------------ :IMPLEMENTATION_NAMES: Dictionary to map Implementation codes to names :PROVIDER_NAMES: Dictionary to map provider codes to names :DB_CLASS_NAMES: Dictionary to map database class codes to names Firebird API constants and globals ---------------------------------- .. hlist:: :columns: 5 * frb_info_att_charset * isc_dpb_activate_shadow * isc_dpb_address_path * isc_dpb_allocation * isc_dpb_begin_log * isc_dpb_buffer_length * isc_dpb_cache_manager * isc_dpb_cdd_pathname * isc_dpb_connect_timeout * isc_dpb_damaged * isc_dpb_dbkey_scope * isc_dpb_debug * isc_dpb_delete_shadow * isc_dpb_dummy_packet_interval * isc_dpb_encrypt_key * isc_dpb_force_write * isc_dpb_garbage_collect * isc_dpb_gbak_attach * isc_dpb_gfix_attach * isc_dpb_gsec_attach * isc_dpb_gstat_attach * isc_dpb_interp * isc_dpb_lc_ctype * isc_dpb_lc_messages * isc_dpb_no_garbage_collect * isc_dpb_no_reserve * isc_dpb_num_buffers * isc_dpb_number_of_users * isc_dpb_old_dump_id * isc_dpb_old_file * isc_dpb_old_file_size * isc_dpb_old_num_files * isc_dpb_old_start_file * isc_dpb_old_start_page * isc_dpb_old_start_seqno * isc_dpb_online * isc_dpb_online_dump * isc_dpb_overwrite * isc_dpb_page_size * isc_dpb_password * isc_dpb_password_enc * isc_dpb_reserved * isc_dpb_sec_attach * isc_dpb_set_db_charset * isc_dpb_set_db_readonly * isc_dpb_set_db_sql_dialect * isc_dpb_set_page_buffers * isc_dpb_shutdown * isc_dpb_shutdown_delay * isc_dpb_sql_dialect * isc_dpb_sql_role_name * isc_dpb_sweep * isc_dpb_sweep_interval * isc_dpb_sys_user_name * isc_dpb_sys_user_name_enc * isc_dpb_trace * isc_dpb_user_name * isc_dpb_verify * isc_dpb_version1 * isc_dpb_working_directory * isc_dpb_no_db_triggers * isc_dpb_nolinger, * isc_info_active_tran_count * isc_info_end * isc_info_truncated * isc_info_sql_stmt_type * isc_info_sql_get_plan * isc_info_sql_records * isc_info_req_select_count * isc_info_req_insert_count * isc_info_req_update_count * isc_info_req_delete_count * isc_info_blob_total_length * isc_info_blob_max_segment * isc_info_blob_type * isc_info_blob_num_segments * fb_info_page_contents * isc_info_active_transactions * isc_info_allocation * isc_info_attachment_id * isc_info_backout_count * isc_info_base_level * isc_info_bpage_errors * isc_info_creation_date * isc_info_current_memory * isc_info_db_class * isc_info_db_id * isc_info_db_provider * isc_info_db_read_only * isc_info_db_size_in_pages * isc_info_db_sql_dialect * isc_info_delete_count * isc_info_dpage_errors * isc_info_expunge_count * isc_info_fetches * isc_info_firebird_version * isc_info_forced_writes * isc_info_implementation * isc_info_insert_count * isc_info_ipage_errors * isc_info_isc_version * isc_info_limbo * isc_info_marks * isc_info_max_memory * isc_info_next_transaction * isc_info_no_reserve * isc_info_num_buffers * isc_info_ods_minor_version * isc_info_ods_version * 
isc_info_oldest_active * isc_info_oldest_snapshot * isc_info_oldest_transaction * isc_info_page_errors * isc_info_page_size * isc_info_ppage_errors * isc_info_purge_count * isc_info_read_idx_count * isc_info_read_seq_count * isc_info_reads * isc_info_record_errors * isc_info_set_page_buffers * isc_info_sql_stmt_commit * isc_info_sql_stmt_ddl * isc_info_sql_stmt_delete * isc_info_sql_stmt_exec_procedure * isc_info_sql_stmt_get_segment * isc_info_sql_stmt_insert * isc_info_sql_stmt_put_segment * isc_info_sql_stmt_rollback * isc_info_sql_stmt_savepoint * isc_info_sql_stmt_select * isc_info_sql_stmt_select_for_upd * isc_info_sql_stmt_set_generator * isc_info_sql_stmt_start_trans * isc_info_sql_stmt_update * isc_info_sweep_interval * isc_info_tpage_errors * isc_info_tra_access * isc_info_tra_concurrency * isc_info_tra_consistency * isc_info_tra_id * isc_info_tra_isolation * isc_info_tra_lock_timeout * isc_info_tra_no_rec_version * isc_info_tra_oldest_active * isc_info_tra_oldest_interesting * isc_info_tra_oldest_snapshot * isc_info_tra_read_committed * isc_info_tra_readonly * isc_info_tra_readwrite * isc_info_tra_rec_version * fb_info_tra_dbpath * isc_info_update_count * isc_info_user_names * isc_info_version * isc_info_writes * isc_tpb_autocommit * isc_dpb_version2 * fb_info_implementation * fb_info_page_warns * fb_info_record_warns * fb_info_bpage_warns * fb_info_dpage_warns * fb_info_ipage_warns * fb_info_ppage_warns * fb_info_tpage_warns * fb_info_pip_errors * fb_info_pip_warns * isc_tpb_commit_time * isc_tpb_concurrency * isc_tpb_consistency * isc_tpb_exclusive * isc_tpb_ignore_limbo * isc_tpb_lock_read * isc_tpb_lock_timeout * isc_tpb_lock_write * isc_tpb_no_auto_undo * isc_tpb_no_rec_version * isc_tpb_nowait * isc_tpb_protected * isc_tpb_read * isc_tpb_read_committed * isc_tpb_rec_version * isc_tpb_restart_requests * isc_tpb_shared * isc_tpb_verb_time * isc_tpb_version3 * isc_tpb_wait * isc_tpb_write * charset_map * XSQLDA_PTR * ISC_SHORT * ISC_LONG * ISC_SCHAR * ISC_UCHAR * ISC_QUAD * ISC_DATE * ISC_TIME * SHRT_MIN * SHRT_MAX * USHRT_MAX * INT_MIN * INT_MAX * LONG_MIN * LONG_MAX * SQL_TEXT * SQL_VARYING * SQL_SHORT * SQL_LONG * SQL_FLOAT * SQL_DOUBLE * SQL_D_FLOAT * SQL_TIMESTAMP * SQL_BLOB * SQL_ARRAY * SQL_QUAD * SQL_TYPE_TIME * SQL_TYPE_DATE * SQL_INT64 * SQL_BOOLEAN * SUBTYPE_NUMERIC * SUBTYPE_DECIMAL * MAX_BLOB_SEGMENT_SIZE * ISC_INT64 * XSQLVAR * ISC_TEB * RESULT_VECTOR * ISC_STATUS * ISC_STATUS_ARRAY * ISC_STATUS_PTR * ISC_EVENT_CALLBACK * ISC_ARRAY_DESC * blr_varying * blr_varying2 * blr_text * blr_text2 * blr_short * blr_long * blr_int64 * blr_float * blr_d_float * blr_double * blr_timestamp * blr_sql_date * blr_sql_time * blr_cstring * blr_quad * blr_blob * blr_bool * SQLDA_version1 * isc_segment * isc_db_handle * isc_tr_handle * isc_stmt_handle * isc_blob_handle * sys_encoding Exceptions ========== .. autoexception:: Error :show-inheritance: :no-inherited-members: .. autoexception:: InterfaceError :show-inheritance: :no-inherited-members: .. autoexception:: DatabaseError :show-inheritance: :no-inherited-members: .. autoexception:: DataError :show-inheritance: :no-inherited-members: .. autoexception:: OperationalError :show-inheritance: :no-inherited-members: .. autoexception:: IntegrityError :show-inheritance: :no-inherited-members: .. autoexception:: InternalError :show-inheritance: :no-inherited-members: .. autoexception:: ProgrammingError :show-inheritance: :no-inherited-members: .. autoexception:: NotSupportedError :show-inheritance: :no-inherited-members: .. 
autoexception:: TransactionConflict :show-inheritance: :no-inherited-members: .. autoexception:: ParseError :show-inheritance: :no-inherited-members: This is the exception inheritance layout::

    StandardError
    |__Warning
    |__Error
       |__InterfaceError
       |__ParseError
       |__DatabaseError
          |__DataError
          |__OperationalError
          |__IntegrityError
          |__InternalError
          |__ProgrammingError
          |__NotSupportedError

Functions ========= connect ------- .. autofunction:: connect create_database --------------- .. autofunction:: create_database load_api -------- .. autofunction:: load_api Hook-related functions ---------------------- .. autofunction:: add_hook .. autofunction:: remove_hook .. autofunction:: get_hooks .. autofunction:: is_dead_proxy Classes ======= Connection ---------- .. autoclass:: Connection ConnectionWithSchema -------------------- .. autoclass:: ConnectionWithSchema :no-members: :no-inherited-members: :undoc-members: .. note:: For a list of methods see :class:`fdb.schema.Schema`. Cursor ------ .. autoclass:: Cursor :undoc-members: Transaction ----------- .. autoclass:: Transaction PreparedStatement ----------------- .. autoclass:: PreparedStatement ConnectionGroup --------------- .. autoclass:: ConnectionGroup TransactionContext ------------------ .. autoclass:: TransactionContext EventConduit ------------ .. autoclass:: EventConduit BlobReader ---------- .. autoclass:: BlobReader TPB --- .. autoclass:: TPB TableReservation ---------------- .. autoclass:: TableReservation ParameterBuffer --------------- .. autoclass:: ParameterBuffer Internally used classes exposed to driver users =============================================== .. module:: fdb.fbcore :synopsis: Implementation of Firebird driver _RowMapping ----------- .. autoclass:: _RowMapping EventBlock ---------- .. autoclass:: EventBlock TableAccessStats ---------------- .. autoclass:: _TableAccessStats .. _services_api: ======== Services ======== .. module:: fdb.services :synopsis: Submodule for working with Firebird Services Constants ========= `shutdown_mode` codes for :meth:`Connection.shutdown` and :meth:`Connection.bring_online` ----------------------------------------------------------------------------------------- - SHUT_LEGACY - SHUT_NORMAL - SHUT_MULTI - SHUT_SINGLE - SHUT_FULL `shutdown_method` codes for :meth:`Connection.shutdown` ------------------------------------------------------- - SHUT_FORCE - SHUT_DENY_NEW_TRANSACTIONS - SHUT_DENY_NEW_ATTACHMENTS `mode` codes for :meth:`Connection.setWriteMode` ------------------------------------------------ - WRITE_FORCED - WRITE_BUFFERED `mode` codes for :meth:`Connection.setAccessMode` ------------------------------------------------- - ACCESS_READ_WRITE - ACCESS_READ_ONLY :meth:`Connection.get_server_capabilities` return codes ------------------------------------------------------- - CAPABILITY_MULTI_CLIENT - CAPABILITY_REMOTE_HOP - CAPABILITY_SERVER_CONFIG - CAPABILITY_QUOTED_FILENAME - CAPABILITY_NO_SERVER_SHUTDOWN `stats` codes for :meth:`Connection.backup`/:meth:`Connection.restore` ---------------------------------------------------------------------- - STATS_TOTAL_TIME - STATS_TIME_DELTA - STATS_PAGE_READS - STATS_PAGE_WRITES Functions ========= connect ------- .. autofunction:: connect Classes ======= Connection ---------- .. autoclass:: Connection User ---- .. autoclass:: User
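A minimal, illustrative sketch of a services session follows; the host name and credentials are placeholders and the printed values differ per server:

.. sourcecode:: python

   from fdb import services

   # Attach to the service manager (host/user/password are examples only).
   svc = services.connect(host='localhost', user='sysdba', password='masterkey')
   print(svc.get_server_version())       # e.g. 'LI-V3.0.2.32703 Firebird 3.0'
   print(svc.get_server_capabilities())  # collection of CAPABILITY_* codes (see above)
   svc.close()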
=============== Database schema =============== .. module:: fdb.schema :synopsis: Submodule for working with database metadata (schema) Module globals ============== Firebird field type codes ------------------------- .. hlist:: :columns: 6 - FBT_SMALLINT - FBT_INTEGER - FBT_QUAD - FBT_FLOAT - FBT_CHAR - FBT_DOUBLE_PRECISION - FBT_DATE - FBT_VARCHAR - FBT_CSTRING - FBT_BLOB_ID - FBT_BLOB - FBT_SQL_DATE - FBT_SQL_TIME - FBT_SQL_TIMESTAMP - FBT_BIGINT - FBT_BOOLEAN Trigger masks ------------- .. hlist:: :columns: 6 - TRIGGER_TYPE_SHIFT - TRIGGER_TYPE_MASK - TRIGGER_TYPE_DML - TRIGGER_TYPE_DB - TRIGGER_TYPE_DDL Trigger type codes ------------------ .. hlist:: :columns: 4 - DDL_TRIGGER_ANY - DDL_TRIGGER_CREATE_TABLE - DDL_TRIGGER_ALTER_TABLE - DDL_TRIGGER_DROP_TABLE - DDL_TRIGGER_CREATE_PROCEDURE - DDL_TRIGGER_ALTER_PROCEDURE - DDL_TRIGGER_DROP_PROCEDURE - DDL_TRIGGER_CREATE_FUNCTION - DDL_TRIGGER_ALTER_FUNCTION - DDL_TRIGGER_DROP_FUNCTION - DDL_TRIGGER_CREATE_TRIGGER - DDL_TRIGGER_ALTER_TRIGGER - DDL_TRIGGER_DROP_TRIGGER - DDL_TRIGGER_CREATE_EXCEPTION - DDL_TRIGGER_ALTER_EXCEPTION - DDL_TRIGGER_DROP_EXCEPTION - DDL_TRIGGER_CREATE_VIEW - DDL_TRIGGER_ALTER_VIEW - DDL_TRIGGER_DROP_VIEW - DDL_TRIGGER_CREATE_DOMAIN - DDL_TRIGGER_ALTER_DOMAIN - DDL_TRIGGER_DROP_DOMAIN - DDL_TRIGGER_CREATE_ROLE - DDL_TRIGGER_ALTER_ROLE - DDL_TRIGGER_DROP_ROLE - DDL_TRIGGER_CREATE_INDEX - DDL_TRIGGER_ALTER_INDEX - DDL_TRIGGER_DROP_INDEX - DDL_TRIGGER_CREATE_SEQUENCE - DDL_TRIGGER_ALTER_SEQUENCE - DDL_TRIGGER_DROP_SEQUENCE - DDL_TRIGGER_CREATE_USER - DDL_TRIGGER_ALTER_USER - DDL_TRIGGER_DROP_USER - DDL_TRIGGER_CREATE_COLLATION - DDL_TRIGGER_DROP_COLLATION - DDL_TRIGGER_ALTER_CHARACTER_SET - DDL_TRIGGER_CREATE_PACKAGE - DDL_TRIGGER_ALTER_PACKAGE - DDL_TRIGGER_DROP_PACKAGE - DDL_TRIGGER_CREATE_PACKAGE_BODY - DDL_TRIGGER_DROP_PACKAGE_BODY - DDL_TRIGGER_CREATE_MAPPING - DDL_TRIGGER_ALTER_MAPPING - DDL_TRIGGER_DROP_MAPPING Collation parameter codes ------------------------- - COLLATION_PAD_SPACE - COLLATION_CASE_INSENSITIVE - COLLATION_ACCENT_INSENSITIVE Index type names ---------------- - INDEX_TYPE_ASCENDING - INDEX_TYPE_DESCENDING Relation type codes ------------------- - RELATION_TYPE_TABLE - RELATION_TYPE_VIEW - RELATION_TYPE_GTT - RELATION_TYPE_GTT_PRESERVE - RELATION_TYPE_GTT_DELETE Procedure parameter type codes ------------------------------ - PROCPAR_DATATYPE - PROCPAR_DOMAIN - PROCPAR_TYPE_OF_DOMAIN - PROCPAR_TYPE_OF_COLUMN Section codes for :meth:`Schema.get_metadata_ddl` ------------------------------------------------- .. 
hlist:: :columns: 4 - SCRIPT_COLLATIONS - SCRIPT_CHARACTER_SETS - SCRIPT_UDFS - SCRIPT_GENERATORS - SCRIPT_EXCEPTIONS - SCRIPT_DOMAINS - SCRIPT_PACKAGE_DEFS - SCRIPT_FUNCTION_DEFS - SCRIPT_PROCEDURE_DEFS - SCRIPT_TABLES - SCRIPT_PRIMARY_KEYS - SCRIPT_UNIQUE_CONSTRAINTS - SCRIPT_CHECK_CONSTRAINTS - SCRIPT_FOREIGN_CONSTRAINTS - SCRIPT_INDICES - SCRIPT_VIEWS - SCRIPT_PACKAGE_BODIES - SCRIPT_PROCEDURE_BODIES - SCRIPT_FUNCTION_BODIES - SCRIPT_TRIGGERS - SCRIPT_ROLES - SCRIPT_GRANTS - SCRIPT_COMMENTS - SCRIPT_SHADOWS - SCRIPT_SET_GENERATORS - SCRIPT_INDEX_DEACTIVATIONS - SCRIPT_INDEX_ACTIVATIONS - SCRIPT_TRIGGER_DEACTIVATIONS - SCRIPT_TRIGGER_ACTIVATIONS Lists and dictionary maps ------------------------- :COLUMN_TYPES: Dictionary map from field type codes to type names :INTEGRAL_SUBTYPES: List of integral type names, works as map :INDEX_TYPES: List of index types :BLOB_SUBTYPES: List of blob type names, works as map :TRIGGER_PREFIX_TYPES: List of trigger prefix type names, works as map :TRIGGER_SUFFIX_TYPES: List of trigger suffix type names, works as map :TRIGGER_DB_TYPES: List of db trigger type names, works as map :TRIGGER_DDL_TYPES: List of DDL trigger type names, works as map :RESERVED: List of reserved Firebird words :NON_RESERVED: List of non-reserved Firebird words :SCRIPT_DEFAULT_ORDER: List of default sections (in order) for :meth:`Schema.get_metadata_ddl` Functions ========= get_grants ---------- .. autofunction:: get_grants iskeyword --------- .. autofunction:: iskeyword escape_single_quotes -------------------- .. autofunction:: escape_single_quotes Classes ======= Schema ------ .. autoclass:: Schema :show-inheritance: BaseSchemaItem -------------- .. autoclass:: BaseSchemaItem :show-inheritance: Collation --------- .. autoclass:: Collation :show-inheritance: CharacterSet ------------ .. autoclass:: CharacterSet :show-inheritance: DatabaseException ----------------- .. autoclass:: DatabaseException :show-inheritance: Sequence -------- .. autoclass:: Sequence :show-inheritance: Index ----- .. autoclass:: Index :show-inheritance: TableColumn ----------- .. autoclass:: TableColumn :show-inheritance: ViewColumn ---------- .. autoclass:: ViewColumn :show-inheritance: Domain ------ .. autoclass:: Domain :show-inheritance: Dependency ---------- .. autoclass:: Dependency :show-inheritance: Constraint ---------- .. autoclass:: Constraint :show-inheritance: Table ----- .. autoclass:: Table :show-inheritance: View ---- .. autoclass:: View :show-inheritance: Trigger ------- .. autoclass:: Trigger :show-inheritance: ProcedureParameter ------------------ .. autoclass:: ProcedureParameter :show-inheritance: Procedure --------- .. autoclass:: Procedure :show-inheritance: Role ---- .. autoclass:: Role :show-inheritance: FunctionArgument ---------------- .. autoclass:: FunctionArgument :show-inheritance: Function -------- .. autoclass:: Function :show-inheritance: DatabaseFile ------------ .. autoclass:: DatabaseFile :show-inheritance: Shadow ------ .. autoclass:: Shadow :show-inheritance: Privilege --------- .. autoclass:: Privilege :show-inheritance: Package ------- .. autoclass:: Package :show-inheritance: BackupHistory ------------- .. autoclass:: BackupHistory :show-inheritance: Filter ------ .. autoclass:: Filter :show-inheritance:
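A minimal usage sketch for this submodule, assuming the lazily created :attr:`Connection.schema` attribute; the DSN and credentials are placeholders:

.. sourcecode:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')

   # Connection.schema provides an fdb.schema.Schema instance bound to this database.
   for table in con.schema.tables:
       print(table.name)

   # DDL extraction; sections default to SCRIPT_DEFAULT_ORDER.
   ddl = con.schema.get_metadata_ddl(sections=[fdb.schema.SCRIPT_TABLES])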
====================== Monitoring information ====================== .. module:: fdb.monitor :synopsis: Submodule for working with database monitoring tables Constants ========= Shutdown modes for :attr:`DatabaseInfo.shutdown_mode` ----------------------------------------------------- - SHUTDOWN_MODE_ONLINE - SHUTDOWN_MODE_MULTI - SHUTDOWN_MODE_SINGLE - SHUTDOWN_MODE_FULL Backup states for :attr:`DatabaseInfo.backup_state` ---------------------------------------------------- - BACKUP_STATE_NORMAL - BACKUP_STATE_STALLED - BACKUP_STATE_MERGE States for :attr:`AttachmentInfo.state`, :attr:`TransactionInfo.state` and :attr:`StatementInfo.state` ------------------------------------------------------------------------------------------------------ - STATE_IDLE - STATE_ACTIVE Isolation modes for :attr:`TransactionInfo.isolation_mode` ---------------------------------------------------------- - ISOLATION_CONSISTENCY - ISOLATION_CONCURRENCY - ISOLATION_READ_COMMITTED_RV - ISOLATION_READ_COMMITTED_NO_RV Special timeout values for :attr:`TransactionInfo.lock_timeout` --------------------------------------------------------------- - INFINITE_WAIT - NO_WAIT Group codes for :attr:`IOStatsInfo.group` ----------------------------------------- - STAT_DATABASE - STAT_ATTACHMENT - STAT_TRANSACTION - STAT_STATEMENT - STAT_CALL Security database ----------------- - SEC_DEFAULT - SEC_SELF - SEC_OTHER Classes ======= Monitor ------- .. autoclass:: Monitor BaseInfoItem ------------ .. autoclass:: BaseInfoItem :show-inheritance: DatabaseInfo ------------ .. autoclass:: DatabaseInfo :show-inheritance: AttachmentInfo -------------- .. autoclass:: AttachmentInfo :show-inheritance: TransactionInfo --------------- .. autoclass:: TransactionInfo :show-inheritance: StatementInfo ------------- .. autoclass:: StatementInfo :show-inheritance: CallStackInfo ------------- .. autoclass:: CallStackInfo :show-inheritance: IOStatsInfo ----------- .. autoclass:: IOStatsInfo :show-inheritance: TableStatsInfo -------------- .. autoclass:: TableStatsInfo :show-inheritance: ContextVariableInfo ------------------- .. autoclass:: ContextVariableInfo :show-inheritance: ====================== Firebird trace & audit ====================== .. module:: fdb.trace :synopsis: Firebird Trace & Audit log processing Constants ========= Trace event status codes ------------------------ - STATUS_OK - STATUS_FAILED - STATUS_UNAUTHORIZED - STATUS_UNKNOWN Trace event codes ----------------- .. note:: Also works as index to :data:`EVENTS` list. .. hlist:: :columns: 5 - EVENT_TRACE_INIT - EVENT_TRACE_SUSPEND - EVENT_TRACE_END - EVENT_CREATE_DATABASE - EVENT_DROP_DATABASE - EVENT_ATTACH - EVENT_DETACH - EVENT_TRANSACTION_START - EVENT_COMMIT - EVENT_ROLLBACK - EVENT_COMMIT_R - EVENT_ROLLBACK_R - EVENT_STMT_PREPARE - EVENT_STMT_START - EVENT_STMT_END - EVENT_STMT_FREE - EVENT_STMT_CLOSE - EVENT_TRG_START - EVENT_TRG_END - EVENT_PROC_START - EVENT_PROC_END - EVENT_SVC_START - EVENT_SVC_ATTACH - EVENT_SVC_DETACH - EVENT_SVC_QUERY - EVENT_SET_CONTEXT - EVENT_ERROR - EVENT_WARNING - EVENT_SWEEP_START - EVENT_SWEEP_PROGRESS - EVENT_SWEEP_FINISH - EVENT_SWEEP_FAILED - EVENT_BLR_COMPILE - EVENT_BLR_EXECUTE - EVENT_DYN_EXECUTE - EVENT_UNKNOWN :EVENTS: List of trace event names in order matching their numeric codes Classes ======= Named tuples for information packages ------------------------------------- .. autofunction:: AttachmentInfo .. autofunction:: TransactionInfo .. autofunction:: ServiceInfo .. autofunction:: SQLInfo .. autofunction:: ParamInfo .. 
autofunction:: AccessTuple Named tuples for individual trace events ---------------------------------------- .. autofunction:: EventTraceInit .. autofunction:: EventTraceSuspend .. autofunction:: EventTraceFinish .. autofunction:: EventCreate .. autofunction:: EventDrop .. autofunction:: EventAttach .. autofunction:: EventDetach .. autofunction:: EventTransactionStart .. autofunction:: EventCommit .. autofunction:: EventRollback .. autofunction:: EventCommitRetaining .. autofunction:: EventRollbackRetaining .. autofunction:: EventPrepareStatement .. autofunction:: EventStatementStart .. autofunction:: EventStatementFinish .. autofunction:: EventFreeStatement .. autofunction:: EventCloseCursor .. autofunction:: EventTriggerStart .. autofunction:: EventTriggerFinish .. autofunction:: EventProcedureStart .. autofunction:: EventProcedureFinish .. autofunction:: EventServiceAttach .. autofunction:: EventServiceDetach .. autofunction:: EventServiceStart .. autofunction:: EventServiceQuery .. autofunction:: EventSetContext .. autofunction:: EventError .. autofunction:: EventServiceError .. autofunction:: EventWarning .. autofunction:: EventServiceWarning .. autofunction:: EventSweepStart .. autofunction:: EventSweepProgress .. autofunction:: EventSweepFinish .. autofunction:: EventSweepFailed .. autofunction:: EventBLRCompile .. autofunction:: EventBLRExecute .. autofunction:: EventDYNExecute .. autofunction:: EventUnknown TraceParser ----------- .. autoclass:: TraceParser =============== GSTAT protocols =============== .. module:: fdb.gstat :synopsis: Firebird gstat output processing Module globals ============== GSTAT version ------------- - GSTAT_25 - GSTAT_30 Database attribute codes ------------------------ - ATTR_FORCE_WRITE - ATTR_NO_RESERVE - ATTR_NO_SHARED_CACHE - ATTR_ACTIVE_SHADOW - ATTR_SHUTDOWN_MULTI - ATTR_SHUTDOWN_SINGLE - ATTR_SHUTDOWN_FULL - ATTR_READ_ONLY - ATTR_BACKUP_LOCK - ATTR_BACKUP_MERGE - ATTR_BACKUP_WRONG .. note:: Also works as index to ATTRIBUTES. :ATTRIBUTES: List with database attribute names Functions ========= empty_str --------- .. autofunction:: empty_str parse ----- .. autofunction:: parse Classes ======= Named tuples ------------ .. autofunction:: FillDistribution .. autofunction:: Encryption StatDatabase ------------ .. autoclass:: StatDatabase :show-inheritance: StatTable --------- .. autoclass:: StatTable :show-inheritance: StatTable3 ---------- .. autoclass:: StatTable3 :show-inheritance: StatIndex --------- .. autoclass:: StatIndex :show-inheritance: StatIndex3 ---------- .. autoclass:: StatIndex3 :show-inheritance: =================== Firebird server log =================== .. module:: fdb.log :synopsis: Firebird server log processing Functions ========= parse ----- .. autofunction:: parse Classes ======= Named tuples ------------ .. autofunction:: LogEntry ========= Utilities ========= .. module:: fdb.utils :synopsis: Submodule with various classes and functions Functions ========= .. autofunction:: update_meta .. autofunction:: iter_class_properties .. autofunction:: iter_class_variables .. autofunction:: embed_attributes Classes ======= LateBindingProperty ------------------- .. autoclass:: LateBindingProperty :show-inheritance: :no-inherited-members: Iterator -------- .. autoclass:: Iterator :members: __iter__, next, __next__ EmbeddedProperty ---------------- .. autoclass:: EmbeddedProperty :show-inheritance: :no-inherited-members: EmbeddedAttribute ----------------- .. 
autoclass:: EmbeddedAttribute :show-inheritance: :no-inherited-members: ObjectList ---------- .. autoclass:: ObjectList :show-inheritance: :no-inherited-members: Visitable --------- .. autoclass:: Visitable Visitor ------- .. autoclass:: Visitor .. module:: fdb.ibase :synopsis: Python ctypes interface to Firebird client library =========================================== ctypes interface to Firebird client library =========================================== The :mod:`ctypes` interface to `fbclient.so/dll` is defined in submodules :mod:`fdb.ibase` and :mod:`fdb.blr`. :mod:`fdb.ibase` is the main module for the Firebird API. The :mod:`fdb.blr` module contains only constants related to Firebird's low-level Binary Language Representation (BLR). Constants ========= C integer limit constants ------------------------- .. hlist:: :columns: 6 - SHRT_MIN - SHRT_MAX - USHRT_MAX - INT_MIN - INT_MAX - UINT_MAX - LONG_MIN - LONG_MAX - SSIZE_T_MIN - SSIZE_T_MAX Type codes ---------- .. hlist:: :columns: 6 - SQL_TEXT - SQL_VARYING - SQL_SHORT - SQL_LONG - SQL_FLOAT - SQL_DOUBLE - SQL_D_FLOAT - SQL_TIMESTAMP - SQL_BLOB - SQL_ARRAY - SQL_QUAD - SQL_TYPE_TIME - SQL_TYPE_DATE - SQL_INT64 - SQL_BOOLEAN - SQL_NULL - SUBTYPE_NUMERIC - SUBTYPE_DECIMAL Internal type codes (for example used by ARRAY descriptor) ---------------------------------------------------------- .. hlist:: :columns: 6 - blr_text - blr_text2 - blr_short - blr_long - blr_quad - blr_float - blr_double - blr_d_float - blr_timestamp - blr_varying - blr_varying2 - blr_blob - blr_cstring - blr_cstring2 - blr_blob_id - blr_sql_date - blr_sql_time - blr_int64 - blr_blob2 - blr_domain_name - blr_domain_name2 - blr_not_nullable - blr_column_name - blr_column_name2 - blr_bool Database parameter block stuff ------------------------------ .. 
hlist:: :columns: 5 - isc_dpb_version1 - isc_dpb_version2 - isc_dpb_cdd_pathname - isc_dpb_allocation - isc_dpb_page_size - isc_dpb_num_buffers - isc_dpb_buffer_length - isc_dpb_debug - isc_dpb_garbage_collect - isc_dpb_verify - isc_dpb_sweep - isc_dpb_dbkey_scope - isc_dpb_number_of_users - isc_dpb_trace - isc_dpb_no_garbage_collect - isc_dpb_damaged - isc_dpb_sys_user_name - isc_dpb_encrypt_key - isc_dpb_activate_shadow - isc_dpb_sweep_interval - isc_dpb_delete_shadow - isc_dpb_force_write - isc_dpb_begin_log - isc_dpb_quit_log - isc_dpb_no_reserve - isc_dpb_user_name - isc_dpb_password - isc_dpb_password_enc - isc_dpb_sys_user_name_enc - isc_dpb_interp - isc_dpb_online_dump - isc_dpb_old_file_size - isc_dpb_old_num_files - isc_dpb_old_file - isc_dpb_old_start_page - isc_dpb_old_start_seqno - isc_dpb_old_start_file - isc_dpb_old_dump_id - isc_dpb_lc_messages - isc_dpb_lc_ctype - isc_dpb_cache_manager - isc_dpb_shutdown - isc_dpb_online - isc_dpb_shutdown_delay - isc_dpb_reserved - isc_dpb_overwrite - isc_dpb_sec_attach - isc_dpb_connect_timeout - isc_dpb_dummy_packet_interval - isc_dpb_gbak_attach - isc_dpb_sql_role_name - isc_dpb_set_page_buffers - isc_dpb_working_directory - isc_dpb_sql_dialect - isc_dpb_set_db_readonly - isc_dpb_set_db_sql_dialect - isc_dpb_gfix_attach - isc_dpb_gstat_attach - isc_dpb_set_db_charset - isc_dpb_gsec_attach - isc_dpb_address_path - isc_dpb_process_id - isc_dpb_no_db_triggers - isc_dpb_trusted_auth - isc_dpb_process_name - isc_dpb_trusted_role - isc_dpb_org_filename - isc_dpb_utf8_filename - isc_dpb_ext_call_depth - isc_dpb_auth_block - isc_dpb_remote_protocol - isc_dpb_client_version - isc_dpb_host_name - isc_dpb_os_user - isc_dpb_specific_auth_data - isc_dpb_auth_plugin_list - isc_dpb_auth_plugin_name - isc_dpb_config - isc_dpb_nolinger - isc_dpb_reset_icu - isc_dpb_map_attach Info structural codes --------------------- .. hlist:: :columns: 6 - isc_info_end - isc_info_truncated - isc_info_error - isc_info_data_not_ready - isc_info_length - isc_info_flag_end DB Info item codes ------------------ :isc_info_db_id: [db_filename,site_name[,site_name...]] :isc_info_reads: number of page reads :isc_info_writes: number of page writes :isc_info_fetches: number of reads from the memory buffer cache :isc_info_marks: number of writes to the memory buffer cache :isc_info_implementation: (implementation code, implementation class) :isc_info_isc_version: interbase server version identification string :isc_info_base_level: capability version of the server :isc_info_page_size: database page size :isc_info_num_buffers: number of memory buffers currently allocated :isc_info_limbo: limbo transactions :isc_info_current_memory: amount of server memory (in bytes) currently in use :isc_info_max_memory: maximum amount of memory (in bytes) used at one time since the first process attached to the database :isc_info_allocation: number of last database page allocated :isc_info_attachment_id: attachment id number all `*_count` codes below return {[table_id]=operation_count,...}; table IDs are in the system table RDB$RELATIONS. 
:isc_info_read_seq_count: number of sequential table scans (row reads) done on each table since the database was last attached :isc_info_read_idx_count: number of reads done via an index since the database was last attached :isc_info_insert_count: number of inserts into the database since the database was last attached :isc_info_update_count: number of database updates since the database was last attached :isc_info_delete_count: number of database deletes since the database was last attached :isc_info_backout_count: number of removals of a version of a record :isc_info_purge_count: number of removals of old versions of fully mature records (records that are committed, so that older ancestor versions are no longer needed) :isc_info_expunge_count: number of removals of a record and all of its ancestors, for records whose deletions have been committed :isc_info_sweep_interval: number of transactions that are committed between sweeps to remove database record versions that are no longer needed :isc_info_ods_version: On-disk structure (ODS) major version number :isc_info_ods_minor_version: On-disk structure (ODS) minor version number :isc_info_no_reserve: 20% page space reservation for holding backup versions of modified records: 0=yes, 1=no :isc_info_forced_writes: mode in which database writes are performed: 0=sync, 1=async :isc_info_user_names: array of names of all the users currently attached to the database :isc_info_page_errors: number of page level errors found by validation :isc_info_record_errors: number of record level errors found by validation :isc_info_bpage_errors: number of blob page errors found by validation :isc_info_dpage_errors: number of data page errors found by validation :isc_info_ipage_errors: number of index page errors found by validation :isc_info_ppage_errors: number of pointer page errors found by validation :isc_info_tpage_errors: number of transaction page errors found by validation :isc_info_set_page_buffers: number of memory buffers that should be allocated :isc_info_db_sql_dialect: dialect of currently attached database :isc_info_db_read_only: whether the database is read-only (1) or not (0) :isc_info_db_size_in_pages: number of allocated pages :frb_info_att_charset: charset of current attachment :isc_info_db_class: server architecture :isc_info_firebird_version: Firebird server version identification string :isc_info_oldest_transaction: ID of oldest transaction :isc_info_oldest_active: ID of oldest active transaction :isc_info_oldest_snapshot: ID of oldest snapshot transaction :isc_info_next_transaction: ID of next transaction :isc_info_db_provider: for Firebird this is 'isc_info_db_code_firebird' :isc_info_active_transactions: array of active transaction IDs :isc_info_active_tran_count: number of active transactions :isc_info_creation_date: time_t struct representing database creation date & time :isc_info_db_file_size: added in FB 2.1, nbackup-related - size (in pages) of locked db :fb_info_page_contents: added in FB 2.5, get raw page contents; takes page_number as parameter :fb_info_implementation: (cpu code, OS code, compiler code, flags, implementation class) :fb_info_page_warns: number of page level warnings found by validation :fb_info_record_warns: number of record level warnings found by validation :fb_info_bpage_warns: number of blob page level warnings found by validation :fb_info_dpage_warns: number of data page level warnings found by validation :fb_info_ipage_warns: number of index page level warnings found by validation :fb_info_ppage_warns: number of pointer page level warnings found by validation :fb_info_tpage_warns: number of transaction page level warnings found by validation :fb_info_pip_errors: number of pip page level errors found by validation :fb_info_pip_warns: number of pip page level warnings found by validation :isc_info_version: = isc_info_isc_version
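For illustration, a minimal sketch that reads a few of these items through :meth:`Connection.database_info`; the DSN and credentials are placeholders:

.. sourcecode:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')

   # result_type 'i' requests an integer result, 'b' a binary string.
   print(con.database_info(fdb.isc_info_page_size, 'i'))     # e.g. 8192
   print(con.database_info(fdb.isc_info_db_read_only, 'i'))  # 0 = read/write, 1 = read-only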
Blob information items ---------------------- .. hlist:: :columns: 4 - isc_info_blob_num_segments - isc_info_blob_max_segment - isc_info_blob_total_length - isc_info_blob_type Transaction information items ----------------------------- .. hlist:: :columns: 4 - isc_info_tra_id - isc_info_tra_oldest_interesting - isc_info_tra_oldest_snapshot - isc_info_tra_oldest_active - isc_info_tra_isolation - isc_info_tra_access - isc_info_tra_lock_timeout - fb_info_tra_dbpath **isc_info_tra_isolation responses:** - isc_info_tra_consistency - isc_info_tra_concurrency - isc_info_tra_read_committed **isc_info_tra_read_committed options:** - isc_info_tra_no_rec_version - isc_info_tra_rec_version **isc_info_tra_access responses:** - isc_info_tra_readonly - isc_info_tra_readwrite SQL information items --------------------- .. hlist:: :columns: 4 - isc_info_sql_select - isc_info_sql_bind - isc_info_sql_num_variables - isc_info_sql_describe_vars - isc_info_sql_describe_end - isc_info_sql_sqlda_seq - isc_info_sql_message_seq - isc_info_sql_type - isc_info_sql_sub_type - isc_info_sql_scale - isc_info_sql_length - isc_info_sql_null_ind - isc_info_sql_field - isc_info_sql_relation - isc_info_sql_owner - isc_info_sql_alias - isc_info_sql_sqlda_start - isc_info_sql_stmt_type - isc_info_sql_get_plan - isc_info_sql_records - isc_info_sql_batch_fetch - isc_info_sql_relation_alias - isc_info_sql_explain_plan - isc_info_sql_stmt_flags SQL information return values ----------------------------- .. hlist:: :columns: 4 - isc_info_sql_stmt_select - isc_info_sql_stmt_insert - isc_info_sql_stmt_update - isc_info_sql_stmt_delete - isc_info_sql_stmt_ddl - isc_info_sql_stmt_get_segment - isc_info_sql_stmt_put_segment - isc_info_sql_stmt_exec_procedure - isc_info_sql_stmt_start_trans - isc_info_sql_stmt_commit - isc_info_sql_stmt_rollback - isc_info_sql_stmt_select_for_upd - isc_info_sql_stmt_set_generator - isc_info_sql_stmt_savepoint Transaction parameter block stuff --------------------------------- .. hlist:: :columns: 6 - isc_tpb_version1 - isc_tpb_version3 - isc_tpb_consistency - isc_tpb_concurrency - isc_tpb_shared - isc_tpb_protected - isc_tpb_exclusive - isc_tpb_wait - isc_tpb_nowait - isc_tpb_read - isc_tpb_write - isc_tpb_lock_read - isc_tpb_lock_write - isc_tpb_verb_time - isc_tpb_commit_time - isc_tpb_ignore_limbo - isc_tpb_read_committed - isc_tpb_autocommit - isc_tpb_rec_version - isc_tpb_no_rec_version - isc_tpb_restart_requests - isc_tpb_no_auto_undo - isc_tpb_lock_timeout BLOB parameter buffer --------------------- .. hlist:: :columns: 6 - isc_bpb_version1 - isc_bpb_source_type - isc_bpb_target_type - isc_bpb_type - isc_bpb_source_interp - isc_bpb_target_interp - isc_bpb_filter_parameter - isc_bpb_storage - isc_bpb_type_segmented - isc_bpb_type_stream - isc_bpb_storage_main - isc_bpb_storage_temp
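A minimal sketch of how the isc_tpb_* constants are typically combined through the :class:`fdb.TPB` helper rather than used directly; the connection parameters are placeholders:

.. sourcecode:: python

   import fdb

   con = fdb.connect(dsn='localhost:employee', user='sysdba', password='masterkey')

   # Assemble a custom transaction parameter buffer.
   customTPB = fdb.TPB()
   customTPB.access_mode = fdb.isc_tpb_write
   customTPB.isolation_level = (fdb.isc_tpb_read_committed, fdb.isc_tpb_rec_version)
   customTPB.lock_resolution = fdb.isc_tpb_nowait

   con.begin(tpb=customTPB)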
Service parameter block stuff ----------------------------- .. hlist:: :columns: 6 - isc_spb_current_version - isc_spb_version - isc_spb_version3 - isc_spb_user_name - isc_spb_sys_user_name - isc_spb_sys_user_name_enc - isc_spb_password - isc_spb_password_enc - isc_spb_command_line - isc_spb_dbname - isc_spb_verbose - isc_spb_options - isc_spb_address_path - isc_spb_process_id - isc_spb_trusted_auth - isc_spb_process_name - isc_spb_trusted_role - isc_spb_verbint - isc_spb_auth_block - isc_spb_auth_plugin_name - isc_spb_auth_plugin_list - isc_spb_utf8_filename - isc_spb_client_version - isc_spb_remote_protocol - isc_spb_host_name - isc_spb_os_user - isc_spb_config - isc_spb_expected_db **Parameters for isc_action_{add|del|mod|disp}_user:** .. hlist:: :columns: 6 - isc_spb_sec_userid - isc_spb_sec_groupid - isc_spb_sec_username - isc_spb_sec_password - isc_spb_sec_groupname - isc_spb_sec_firstname - isc_spb_sec_middlename - isc_spb_sec_lastname - isc_spb_sec_admin **Parameters for isc_action_svc_backup:** .. hlist:: :columns: 4 - isc_spb_bkp_file - isc_spb_bkp_factor - isc_spb_bkp_length - isc_spb_bkp_skip_data - isc_spb_bkp_stat - isc_spb_bkp_ignore_checksums - isc_spb_bkp_ignore_limbo - isc_spb_bkp_metadata_only - isc_spb_bkp_no_garbage_collect - isc_spb_bkp_old_descriptions - isc_spb_bkp_non_transportable - isc_spb_bkp_convert - isc_spb_bkp_expand - isc_spb_bkp_no_triggers **Parameters for isc_action_svc_properties:** .. hlist:: :columns: 4 - isc_spb_prp_page_buffers - isc_spb_prp_sweep_interval - isc_spb_prp_shutdown_db - isc_spb_prp_deny_new_attachments - isc_spb_prp_deny_new_transactions - isc_spb_prp_reserve_space - isc_spb_prp_write_mode - isc_spb_prp_access_mode - isc_spb_prp_set_sql_dialect - isc_spb_prp_activate - isc_spb_prp_db_online - isc_spb_prp_nolinger - isc_spb_prp_force_shutdown - isc_spb_prp_attachments_shutdown - isc_spb_prp_transactions_shutdown - isc_spb_prp_shutdown_mode - isc_spb_prp_online_mode **Parameters for isc_spb_prp_shutdown_mode and isc_spb_prp_online_mode:** .. hlist:: :columns: 4 - isc_spb_prp_sm_normal - isc_spb_prp_sm_multi - isc_spb_prp_sm_single - isc_spb_prp_sm_full **Parameters for isc_spb_prp_reserve_space:** .. hlist:: - isc_spb_prp_res_use_full - isc_spb_prp_res **Parameters for isc_spb_prp_write_mode:** .. hlist:: - isc_spb_prp_wm_async - isc_spb_prp_wm_sync **Parameters for isc_action_svc_validate:** .. hlist:: :columns: 5 - isc_spb_val_tab_incl - isc_spb_val_tab_excl - isc_spb_val_idx_incl - isc_spb_val_idx_excl - isc_spb_val_lock_timeout **Parameters for isc_action_svc_repair:** .. hlist:: :columns: 5 - isc_spb_rpr_commit_trans - isc_spb_rpr_rollback_trans - isc_spb_rpr_recover_two_phase - isc_spb_tra_id - isc_spb_single_tra_id - isc_spb_multi_tra_id - isc_spb_tra_state - isc_spb_tra_state_limbo - isc_spb_tra_state_commit - isc_spb_tra_state_rollback - isc_spb_tra_state_unknown - isc_spb_tra_host_site - isc_spb_tra_remote_site - isc_spb_tra_db_path - isc_spb_tra_advise - isc_spb_tra_advise_commit - isc_spb_tra_advise_rollback - isc_spb_tra_advise_unknown - isc_spb_tra_id_64 - isc_spb_single_tra_id_64 - isc_spb_multi_tra_id_64 - isc_spb_rpr_commit_trans_64 - isc_spb_rpr_rollback_trans_64 - isc_spb_rpr_recover_two_phase_64 - isc_spb_rpr_validate_db - isc_spb_rpr_sweep_db - isc_spb_rpr_mend_db - isc_spb_rpr_list_limbo_trans - isc_spb_rpr_check_db - isc_spb_rpr_ignore_checksum - isc_spb_rpr_kill_shadows - isc_spb_rpr_full - isc_spb_rpr_icu **Parameters for isc_action_svc_restore:** .. 
hlist:: :columns: 4 - isc_spb_res_skip_data - isc_spb_res_buffers - isc_spb_res_page_size - isc_spb_res_length - isc_spb_res_access_mode - isc_spb_res_fix_fss_data - isc_spb_res_fix_fss_metadata - isc_spb_res_stat - isc_spb_res_metadata_only - isc_spb_res_deactivate_idx - isc_spb_res_no_shadow - isc_spb_res_no_validity - isc_spb_res_one_at_a_time - isc_spb_res_replace - isc_spb_res_create - isc_spb_res_use_all_space **Parameters for isc_spb_res_access_mode:** .. hlist:: - isc_spb_res_am_readonly - isc_spb_res_am_readwrite **Parameters for isc_info_svc_svr_db_info:** .. hlist:: - isc_spb_num_att - isc_spb_num_db **Parameters for isc_info_svc_db_stats:** .. hlist:: :columns: 4 - isc_spb_sts_data_pages - isc_spb_sts_db_log - isc_spb_sts_hdr_pages - isc_spb_sts_idx_pages - isc_spb_sts_sys_relations - isc_spb_sts_record_versions - isc_spb_sts_table - isc_spb_sts_nocreation - isc_spb_sts_encryption **Parameters for isc_action_svc_nbak:** .. hlist:: :columns: 4 - isc_spb_nbk_level - isc_spb_nbk_file - isc_spb_nbk_direct - isc_spb_nbk_no_triggers **Parameters for trace:** .. hlist:: - isc_spb_trc_id - isc_spb_trc_name - isc_spb_trc_cfg Service action items -------------------- .. hlist:: :columns: 5 - isc_action_svc_backup - isc_action_svc_restore - isc_action_svc_repair - isc_action_svc_add_user - isc_action_svc_delete_user - isc_action_svc_modify_user - isc_action_svc_display_user - isc_action_svc_properties - isc_action_svc_add_license - isc_action_svc_remove_license - isc_action_svc_db_stats - isc_action_svc_get_ib_log - isc_action_svc_get_fb_log - isc_action_svc_nbak - isc_action_svc_nrest - isc_action_svc_trace_start - isc_action_svc_trace_stop - isc_action_svc_trace_suspend - isc_action_svc_trace_resume - isc_action_svc_trace_list - isc_action_svc_set_mapping - isc_action_svc_drop_mapping - isc_action_svc_display_user_adm - isc_action_svc_validate Service information items ------------------------- .. hlist:: :columns: 4 - isc_info_svc_svr_db_info - isc_info_svc_get_config - isc_info_svc_version - isc_info_svc_server_version - isc_info_svc_implementation - isc_info_svc_capabilities - isc_info_svc_user_dbpath - isc_info_svc_get_env - isc_info_svc_get_env_lock - isc_info_svc_get_env_msg - isc_info_svc_line - isc_info_svc_to_eof - isc_info_svc_timeout - isc_info_svc_limbo_trans - isc_info_svc_running - isc_info_svc_get_users - isc_info_svc_auth_block - isc_info_svc_stdin BLOB action codes ----------------- .. hlist:: :columns: 5 - blb_got_eof - blb_got_fragment - blb_got_full_segment - blb_seek_relative - blb_seek_from_tail Implementation codes -------------------- .. 
hlist:: :columns: 4 - isc_info_db_impl_rdb_vms - isc_info_db_impl_rdb_eln - isc_info_db_impl_rdb_eln_dev - isc_info_db_impl_rdb_vms_y - isc_info_db_impl_rdb_eln_y - isc_info_db_impl_jri - isc_info_db_impl_jsv - isc_info_db_impl_isc_apl_68K - isc_info_db_impl_isc_vax_ultr - isc_info_db_impl_isc_vms - isc_info_db_impl_isc_sun_68k - isc_info_db_impl_isc_os2 - isc_info_db_impl_isc_sun4 - isc_info_db_impl_isc_hp_ux - isc_info_db_impl_isc_sun_386i - isc_info_db_impl_isc_vms_orcl - isc_info_db_impl_isc_mac_aux - isc_info_db_impl_isc_rt_aix - isc_info_db_impl_isc_mips_ult - isc_info_db_impl_isc_xenix - isc_info_db_impl_isc_dg - isc_info_db_impl_isc_hp_mpexl - isc_info_db_impl_isc_hp_ux68K - isc_info_db_impl_isc_sgi - isc_info_db_impl_isc_sco_unix - isc_info_db_impl_isc_cray - isc_info_db_impl_isc_imp - isc_info_db_impl_isc_delta - isc_info_db_impl_isc_next - isc_info_db_impl_isc_dos - isc_info_db_impl_m88K - isc_info_db_impl_unixware - isc_info_db_impl_isc_winnt_x86 - isc_info_db_impl_isc_epson - isc_info_db_impl_alpha_osf - isc_info_db_impl_alpha_vms - isc_info_db_impl_netware_386 - isc_info_db_impl_win_only - isc_info_db_impl_ncr_3000 - isc_info_db_impl_winnt_ppc - isc_info_db_impl_dg_x86 - isc_info_db_impl_sco_ev - isc_info_db_impl_i386 - isc_info_db_impl_freebsd - isc_info_db_impl_netbsd - isc_info_db_impl_darwin_ppc - isc_info_db_impl_sinixz - isc_info_db_impl_linux_sparc - isc_info_db_impl_linux_amd64 - isc_info_db_impl_freebsd_amd64 - isc_info_db_impl_winnt_amd64 - isc_info_db_impl_linux_ppc - isc_info_db_impl_darwin_x86 - isc_info_db_impl_linux_mipsel - isc_info_db_impl_linux_mips - isc_info_db_impl_darwin_x64 - isc_info_db_impl_sun_amd64 - isc_info_db_impl_linux_arm - isc_info_db_impl_linux_ia64 - isc_info_db_impl_darwin_ppc64 - isc_info_db_impl_linux_s390x - isc_info_db_impl_linux_s390 - isc_info_db_impl_linux_sh - isc_info_db_impl_linux_sheb - isc_info_db_impl_linux_hppa - isc_info_db_impl_linux_alpha - isc_info_db_impl_linux_arm64 - isc_info_db_impl_linux_ppc64el - isc_info_db_impl_linux_ppc64 Info DB provider codes ---------------------- .. hlist:: :columns: 4 - isc_info_db_code_rdb_eln - isc_info_db_code_rdb_vms - isc_info_db_code_interbase - isc_info_db_code_firebird Info DB class codes ------------------- .. hlist:: :columns: 4 - isc_info_db_class_access - isc_info_db_class_y_valve - isc_info_db_class_rem_int - isc_info_db_class_rem_srvr - isc_info_db_class_pipe_int - isc_info_db_class_pipe_srvr - isc_info_db_class_sam_int - isc_info_db_class_sam_srvr - isc_info_db_class_gateway - isc_info_db_class_cache - isc_info_db_class_classic_access - isc_info_db_class_server_access Request information items ------------------------- .. hlist:: :columns: 4 - isc_info_number_messages - isc_info_max_message - isc_info_max_send - isc_info_max_receive - isc_info_state - isc_info_message_number - isc_info_message_size - isc_info_request_cost - isc_info_access_path - isc_info_req_select_count - isc_info_req_insert_count - isc_info_req_update_count - isc_info_req_delete_count Access path items ----------------- .. hlist:: :columns: 5 - isc_info_rsb_end - isc_info_rsb_begin - isc_info_rsb_type - isc_info_rsb_relation - isc_info_rsb_plan Record Source Block (RSB) types ------------------------------- .. 
hlist:: :columns: 5 - isc_info_rsb_unknown - isc_info_rsb_indexed - isc_info_rsb_navigate - isc_info_rsb_sequential - isc_info_rsb_cross - isc_info_rsb_sort - isc_info_rsb_first - isc_info_rsb_boolean - isc_info_rsb_union - isc_info_rsb_aggregate - isc_info_rsb_merge - isc_info_rsb_ext_sequential - isc_info_rsb_ext_indexed - isc_info_rsb_ext_dbkey - isc_info_rsb_left_cross - isc_info_rsb_select - isc_info_rsb_sql_join - isc_info_rsb_simulate - isc_info_rsb_sim_cross - isc_info_rsb_once - isc_info_rsb_procedure - isc_info_rsb_skip - isc_info_rsb_virt_sequential - isc_info_rsb_recursive - isc_info_rsb_window - isc_info_rsb_singular - isc_info_rsb_writelock - isc_info_rsb_buffer - isc_info_rsb_hash RSB Bitmap expressions ---------------------- .. hlist:: :columns: 4 - isc_info_rsb_and - isc_info_rsb_or - isc_info_rsb_dbkey - isc_info_rsb_index Info request response codes --------------------------- .. hlist:: :columns: 4 - isc_info_req_active - isc_info_req_inactive - isc_info_req_send - isc_info_req_receive - isc_info_req_select - isc_info_req_sql_stall Blob Subtypes ------------- .. hlist:: :columns: 6 - isc_blob_untyped - isc_blob_text - isc_blob_blr - isc_blob_acl - isc_blob_ranges - isc_blob_summary - isc_blob_format - isc_blob_tra - isc_blob_extfile - isc_blob_debug_info - isc_blob_max_predefined_subtype Cancel types for fb_cancel_operation ------------------------------------ .. hlist:: :columns: 4 - fb_cancel_disable - fb_cancel_enable - fb_cancel_raise - fb_cancel_abort Other constants --------------- - DSQL_close - DSQL_drop - DSQL_unprepare - SQLDA_version1 - isc_info_req_select_count - isc_info_req_insert_count - isc_info_req_update_count - isc_info_req_delete_count flags set in fb_info_crypt_state: - fb_info_crypt_encrypted - fb_info_crypt_process :FB_API_VER: (int) Firebird API version number :MAX_BLOB_SEGMENT_SIZE: (int) Max size for BLOB segment Types ===== Basic types ----------- .. autoclass:: STRING :no-show-inheritance: :no-inherited-members: .. autoclass:: WSTRING :no-show-inheritance: :no-inherited-members: .. autoclass:: FB_API_HANDLE :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_STATUS :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_STATUS_PTR :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_STATUS_ARRAY :no-show-inheritance: :no-inherited-members: .. autoclass:: FB_SQLSTATE_STRING :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_LONG :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_ULONG :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_SHORT :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_USHORT :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_UCHAR :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_SCHAR :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_INT64 :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_UINT64 :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_DATE :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_TIME :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_TIMESTAMP :no-show-inheritance: :no-inherited-members: .. autoclass:: GDS_QUAD_t :no-show-inheritance: :no-inherited-members: .. autoclass:: GDS_QUAD :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_QUAD :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_ARRAY_BOUND :no-show-inheritance: :no-inherited-members: .. 
autoclass:: ISC_ARRAY_DESC :no-show-inheritance: :no-inherited-members: .. autoclass:: ISC_BLOB_DESC :no-show-inheritance: :no-inherited-members: .. autoclass:: isc_blob_ctl :no-show-inheritance: :no-inherited-members: .. autoclass:: bstream :no-show-inheritance: :no-inherited-members: .. autoclass:: BSTREAM :no-show-inheritance: :no-inherited-members: .. autoclass:: FB_BLOB_STREAM :no-show-inheritance: :no-inherited-members: .. autoclass:: paramdsc :no-show-inheritance: :no-inherited-members: .. autoclass:: paramvary :no-show-inheritance: :no-inherited-members: ISC_TEB ------- .. autoclass:: ISC_TEB :no-show-inheritance: :no-inherited-members: XSQLVAR ------- .. autoclass:: XSQLVAR :no-show-inheritance: :no-inherited-members: XSQLDA ------ .. autoclass:: XSQLDA :no-show-inheritance: :no-inherited-members: .. autoclass:: XSQLDA_PTR :no-show-inheritance: :no-inherited-members: USER_SEC_DATA ------------- .. autoclass:: USER_SEC_DATA :no-show-inheritance: :no-inherited-members: RESULT_VECTOR ------------- .. autoclass:: RESULT_VECTOR :no-show-inheritance: :no-inherited-members: Callbacks --------- .. class:: FB_SHUTDOWN_CALLBACK ctypes.CFUNCTYPE(UNCHECKED(c_int), c_int, c_int, POINTER(None)) .. class:: ISC_CALLBACK ctypes.CFUNCTYPE(None) .. class:: ISC_PRINT_CALLBACK ctypes.CFUNCTYPE(None, c_void_p, c_short, STRING) .. class:: ISC_VERSION_CALLBACK ctypes.CFUNCTYPE(None, c_void_p, STRING) .. class:: ISC_EVENT_CALLBACK ctypes.CFUNCTYPE(None, POINTER(ISC_UCHAR), c_ushort, POINTER(ISC_UCHAR)) .. autoclass:: blobcallback :no-show-inheritance: :no-inherited-members: Other globals ============= :charset_map: Dictionary that maps DB CHAR SET NAME to PYTHON CODEC NAME (CANONICAL) .. data:: ISC_TRUE .. data:: ISC_FALSE Functions ========= Classes ======= fbclient_API ------------ .. autoclass:: fbclient_API :members: .. module:: fdb.blr :synopsis: Python ctypes interface to Firebird client library (BLR) BLR Constants ============= .. note:: BLR data types are defined in :mod:`fdb.ibase` Main BLR codes -------------- .. 
hlist:: :columns: 7 - blr_inner - blr_left - blr_right - blr_full - blr_gds_code - blr_sql_code - blr_exception - blr_trigger_code - blr_default_code - blr_raise - blr_exception_msg - blr_exception_params - blr_sql_state - blr_version4 - blr_version5 - blr_eoc - blr_end - blr_assignment - blr_begin - blr_dcl_variable - blr_message - blr_erase - blr_fetch - blr_for - blr_if - blr_loop - blr_modify - blr_handler - blr_receive - blr_select - blr_send - blr_store - blr_label - blr_leave - blr_store2 - blr_post - blr_literal - blr_dbkey - blr_field - blr_fid - blr_parameter - blr_variable - blr_average - blr_count - blr_maximum - blr_minimum - blr_total - blr_add - blr_subtract - blr_multiply - blr_divide - blr_negate - blr_concatenate - blr_substring - blr_parameter2 - blr_from - blr_via - blr_parameter2_old - blr_user_name - blr_null - blr_equiv - blr_eql - blr_neq - blr_gtr - blr_geq - blr_lss - blr_leq - blr_containing - blr_matching - blr_starting - blr_between - blr_or - blr_and - blr_not - blr_any - blr_missing - blr_unique - blr_like - blr_rse - blr_first - blr_project - blr_sort - blr_boolean - blr_ascending - blr_descending - blr_relation - blr_rid - blr_union - blr_map - blr_group_by - blr_aggregate - blr_join_type - blr_agg_count - blr_agg_max - blr_agg_min - blr_agg_total - blr_agg_average - blr_parameter3 - blr_agg_count2 - blr_agg_count_distinct - blr_agg_total_distinct - blr_agg_average_distinct - blr_function - blr_gen_id - blr_prot_mask - blr_upcase - blr_lock_state - blr_value_if - blr_matching2 - blr_index - blr_ansi_like - blr_scrollable - blr_run_count - blr_rs_stream - blr_exec_proc - blr_procedure - blr_pid - blr_exec_pid - blr_singular - blr_abort - blr_block - blr_error_handler - blr_cast - blr_pid2 - blr_procedure2 - blr_start_savepoint - blr_end_savepoint Other BLR codes --------------- .. 
hlist:: :columns: 6 - blr_domain_type_of - blr_domain_full - blr_date - blr_plan - blr_merge - blr_join - blr_sequential - blr_navigational - blr_indices - blr_retrieve - blr_relation2 - blr_rid2 - blr_set_generator - blr_ansi_any - blr_exists - blr_record_version - blr_stall - blr_ansi_all - blr_extract - blr_continue - blr_forward - blr_backward - blr_bof_forward - blr_eof_backward - blr_extract_year - blr_extract_month - blr_extract_day - blr_extract_hour - blr_extract_minute - blr_extract_second - blr_extract_weekday - blr_extract_yearday - blr_extract_millisecond - blr_extract_week - blr_current_date - blr_current_timestamp - blr_current_time - blr_post_arg - blr_exec_into - blr_user_savepoint - blr_dcl_cursor - blr_cursor_stmt - blr_current_timestamp2 - blr_current_time2 - blr_agg_list - blr_agg_list_distinct - blr_modify2 - blr_current_role - blr_skip - blr_exec_sql - blr_internal_info - blr_nullsfirst - blr_writelock - blr_nullslast - blr_lowcase - blr_strlen - blr_strlen_bit - blr_strlen_char - blr_strlen_octet - blr_trim - blr_trim_both - blr_trim_leading - blr_trim_trailing - blr_trim_spaces - blr_trim_characters - blr_savepoint_set - blr_savepoint_release - blr_savepoint_undo - blr_savepoint_release_single - blr_cursor_open - blr_cursor_close - blr_cursor_fetch - blr_cursor_fetch_scroll - blr_croll_forward - blr_croll_backward - blr_croll_bof - blr_croll_eof - blr_croll_absolute - blr_croll_relative - blr_init_variable - blr_recurse - blr_sys_function - blr_auto_trans - blr_similar - blr_exec_stmt - blr_exec_stmt_inputs - blr_exec_stmt_outputs - blr_exec_stmt_sql - blr_exec_stmt_proc_block - blr_exec_stmt_data_src - blr_exec_stmt_user - blr_exec_stmt_pwd - blr_exec_stmt_tran - blr_exec_stmt_tran_clone - blr_exec_stmt_privs - blr_exec_stmt_in_params - blr_exec_stmt_in_params2 - blr_exec_stmt_out_params - blr_exec_stmt_role - blr_stmt_expr - blr_derived_expr - blr_procedure3 - blr_exec_proc2 - blr_function2 - blr_window - blr_partition_by - blr_continue_loop - blr_procedure4 - blr_agg_function - blr_substring_similar - blr_bool_as_value - blr_coalesce - blr_decode - blr_exec_subproc - blr_subproc_decl - blr_subproc - blr_subfunc_decl - blr_subfunc - blr_record_version2 - blr_gen_id2 fdb-2.0.0/sphinx/python-db-api-compliance.txt0000644000175000001440000002076613270362207021556 0ustar pcisarusers00000000000000########################## Compliance to PyDB API 2.0 ########################## .. currentmodule:: fdb Full text of Python Database API 2.0 (PEP 249) is available at `http://www.python.org/dev/peps/pep-0249/ <http://www.python.org/dev/peps/pep-0249/>`__ Unsupported Optional Features ============================= :meth:`fdb.Cursor.nextset` This method is not implemented because the database engine does not support opening multiple result sets simultaneously with a single cursor. Supported Optional Features =========================== - `Connection.Error`, `Connection.ProgrammingError`, etc. All exception classes defined by the DB API standard are exposed on the Connection objects as attributes (in addition to being available at module scope). - `Cursor.connection` This read-only attribute returns a reference to the Connection object on which the cursor was created. Nominally Supported Optional Features ===================================== :class:`fdb.Cursor` :attr:`~fdb.Cursor.arraysize` As required by the spec, the value of this attribute is observed with respect to the `fetchmany` method. 
However, changing the value of this attribute does not make any difference in fetch efficiency because the database engine only supports fetching a single row at a time. :meth:`~fdb.Cursor.setinputsizes` Although this method is present, it does nothing, as allowed by the spec. :meth:`~fdb.Cursor.setoutputsize` Although this method is present, it does nothing, as allowed by the spec. Extensions and Caveats ====================== FDB offers a large feature set beyond the minimal requirements of the Python DB API. This section attempts to document only those features that overlap with the DB API, or are too insignificant to warrant their own subsection elsewhere. :func:`fdb.connect` This function supports the following optional keyword arguments in addition to those required by the spec: :role: For connecting to a database with a specific SQL role. *Example:*

.. sourcecode:: python

   fdb.connect(dsn='host:/path/database.db', user='limited_user',
               password='pass', role='MORE_POWERFUL_ROLE')

:charset: For explicitly specifying the character set of the connection. See Firebird Documentation for a list of available character sets, and `Unicode Fields and FDB` section for information on handling extended character sets with FDB. *Example:*

.. sourcecode:: python

   fdb.connect(dsn='host:/path/database.db', user='sysdba',
               password='pass', charset='UTF8')

:sql_dialect: The SQL dialect is a feature for backward compatibility with InterBase® 5.5 or earlier. The default dialect is `3` (the most featureful dialect, default for Firebird). If you want to connect to `legacy` databases, you must explicitly set this argument's value to `1`. Dialect `2` is a transitional dialect that is normally used only during ports from IB < 6 to IB >= 6 or Firebird. See Firebird documentation for more information about SQL Dialects. *Example:*

.. sourcecode:: python

   fdb.connect(dsn='host:/path/database.db', user='sysdba',
               password='pass', sql_dialect=1)

:class:`fdb.Connection` :attr:`~fdb.Connection.charset` *(read-only)* The character set of the connection (set via the `charset` parameter of :func:`fdb.connect()`). See Firebird Documentation for a list of available character sets, and `Unicode Fields and FDB` section for information on handling extended character sets with FDB. :attr:`~fdb.Connection.sql_dialect` This integer attribute indicates which SQL dialect the connection is using. You should not change a connection's dialect; instead, discard the connection and establish a new one with the desired dialect. For more information, see the documentation of the `sql_dialect` argument of the `connect` function. :attr:`~fdb.Connection.server_version` *(read-only)* The version string of the database server to which this connection is connected. :meth:`~fdb.Connection.execute_immediate` Executes a statement without caching its prepared form. The statement must *not* be of a type that returns a result set. In most cases (especially cases in which the same statement -- perhaps a parameterized statement -- is executed repeatedly), it is better to create a cursor using the connection's `cursor` method, then execute the statement using one of the cursor's execute methods. Arguments: :sql: String containing the SQL statement to execute. 
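*Example* (a sketch; the DSN and credentials are placeholders):

.. sourcecode:: python

   import fdb

   con = fdb.connect(dsn='host:/path/database.db', user='sysdba', password='pass')

   # Executed directly, without a cached prepared form; must not return a result set.
   con.execute_immediate("recreate table test (id integer, s varchar(20))")
   con.commit()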
:meth:`~fdb.Connection.commit(retaining=False)` :meth:`~fdb.Connection.rollback(retaining=False, savepoint=None)` The `commit` and `rollback` methods accept an optional boolean parameter `retaining` (default `False`) that indicates whether the transactional context of the transaction being resolved should be recycled. For details, see the Advanced Transaction Control: Retaining Operations section of this document. The `rollback` method accepts an optional string parameter `savepoint` that causes the transaction to roll back only as far as the designated savepoint, rather than rolling back entirely. For details, see the Advanced Transaction Control: Savepoints section of this document. :class:`fdb.Cursor` :attr:`~fdb.Cursor.description` FDB makes absolutely no guarantees about `description` except those required by the Python Database API Specification 2.0 (that is, `description` is either `None` or a sequence of 7-element sequences). Therefore, client programmers should *not* rely on `description` being an instance of a particular class or type. FDB provides several named positional constants to be used as indices into a given element of `description`. The contents of all `description` elements are defined by the DB API spec; these constants are provided merely for convenience.

.. sourcecode:: python

   DESCRIPTION_NAME
   DESCRIPTION_TYPE_CODE
   DESCRIPTION_DISPLAY_SIZE
   DESCRIPTION_INTERNAL_SIZE
   DESCRIPTION_PRECISION
   DESCRIPTION_SCALE
   DESCRIPTION_NULL_OK

Here is an example of accessing the *name* of the first field in the `description` of cursor `cur`:

.. sourcecode:: python

   nameOfFirstField = cur.description[0][fdb.DESCRIPTION_NAME]

For more information, see the documentation of Cursor.description in the `DB API Specification <http://www.python.org/dev/peps/pep-0249/>`__. :attr:`~fdb.Cursor.rowcount` Although FDB's `Cursor` objects implement this attribute, the database engine's own support for the determination of "rows affected"/"rows selected" is quirky. The database engine only supports the determination of rowcount for `INSERT`, `UPDATE`, `DELETE`, and `SELECT` statements. When stored procedures become involved, row count figures are usually not available to the client. Determining rowcount for `SELECT` statements is problematic: the rowcount is reported as zero until at least one row has been fetched from the result set, and the rowcount is misreported if the result set is larger than 1302 rows. The server apparently marshals result sets internally in batches of 1302, and will misreport the rowcount for result sets larger than 1302 rows until the 1303rd row is fetched, result sets larger than 2604 rows until the 2605th row is fetched, and so on, in increments of 1302. As required by the Python DB API Spec, the rowcount attribute "is -1 in case no executeXX() has been performed on the cursor or the rowcount of the last operation is not determinable by the interface". :meth:`~fdb.Cursor.fetchone` :meth:`~fdb.Cursor.fetchmany` :meth:`~fdb.Cursor.fetchall` FDB makes absolutely no guarantees about the return value of the `fetchone` / `fetchmany` / `fetchall` methods except that it is a sequence indexed by field position. FDB makes absolutely no guarantees about the return value of the `fetchonemap` / `fetchmanymap` / `fetchallmap` methods (documented below) except that it is a mapping of field name to field value. Therefore, client programmers should *not* rely on the return value being an instance of a particular class or type.
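A short illustrative sketch of the mapping-based fetch variants (connection parameters are placeholders):

.. sourcecode:: python

   import fdb

   con = fdb.connect(dsn='host:/path/database.db', user='sysdba', password='pass')
   cur = con.cursor()
   cur.execute("select RDB$RELATION_ID, RDB$RELATION_NAME from RDB$RELATIONS")

   row = cur.fetchonemap()           # a mapping of field name to field value
   print(row['RDB$RELATION_NAME'])   # rows are keyed by (upper-case) field name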
fdb-2.0.0/sphinx/license.txt0000644000175000001440000000003513167442200016375 0ustar pcisarusers00000000000000.. include:: ../LICENSE.TXT fdb-2.0.0/sphinx/Makefile0000644000175000001440000000446713167442200015667 0ustar pcisarusers00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d .build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html web pickle htmlhelp latex changes linkcheck help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview over all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" clean: -rm -rf .build/* html: mkdir -p .build/html .build/doctrees $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) ../docs @echo @echo "Build finished. The HTML pages are in ../docs." pickle: mkdir -p .build/pickle .build/doctrees $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) .build/pickle @echo @echo "Build finished; now you can process the pickle files." web: pickle json: mkdir -p .build/json .build/doctrees $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) .build/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: mkdir -p .build/htmlhelp .build/doctrees $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) .build/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in .build/htmlhelp." latex: mkdir -p .build/latex .build/doctrees $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) .build/latex @echo @echo "Build finished; the LaTeX files are in .build/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: mkdir -p .build/changes .build/doctrees $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) .build/changes @echo @echo "The overview file is in .build/changes." linkcheck: mkdir -p .build/linkcheck .build/doctrees $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) .build/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in .build/linkcheck/output.txt." fdb-2.0.0/sphinx/changelog.txt0000644000175000001440000004076613270362207016720 0ustar pcisarusers00000000000000######### Changelog ######### Version 2.0 =========== .. important:: This is the initial release of the new *"SweetBitter"* driver generation. During this (v2) generation, the FDB driver will undergo a transition from development centered around Python 2.7 / Firebird 2.x to development centered around Python 3 / Firebird 3. There are some backward incompatible changes between the v2 and v1 generations, and you may expect some between individual releases of the second generation as well. To *soften* this *bitter* pill, the second generation will have new functionality, enhancements and optimizations gradually added into each public release. **The second generation is also the last one that will directly support Python 2.7 and will be tested with Firebird 2.** The plan is to move forward with the v3 generation (Python 3/Firebird 3+) as soon as the v2 code base becomes mature. Improvements ------------ - Hooks. 
- New modules for parsing Firebird trace & audit logs (:mod:`fdb.trace`), gstat
  output (:mod:`fdb.gstat`) and server log (:mod:`fdb.log`).
- Added :class:`fdb.utils.ObjectList` class for improved manipulation of object
  collections.
- Modules :mod:`~fdb.monitor` and :mod:`~fdb.schema` now use the new
  :class:`fdb.utils.ObjectList` for collections of information objects.
- Methods :meth:`fdb.Connection.database_info()` and
  :meth:`fdb.Transaction.transaction_info()` now distinguish between text and
  binary strings via the `result_type` code.
- Significant changes to documentation.

Other changes
-------------

- Exception `fdb.Warning` removed as a duplicate of the standard `Warning`
  exception.
- Changes to make pylint happier about the fdb code.
- Parameter `result_type` in methods :meth:`fdb.Transaction.transaction_info()`
  and :meth:`fdb.Connection.database_info()` no longer supports the value 's'
  for string results, because it converted strings to unicode in Python 3,
  which does not make sense (the data is always binary, at least partially).
  Instead, a new value 'b' is introduced for binary string results.
- Reworked Visitor Pattern support in the :mod:`~fdb.schema` module; added
  classes :class:`fdb.utils.Visitable` and :class:`fdb.utils.Visitor`.
- Method :meth:`fdb.schema.Schema.reload` now takes numeric metadata category
  code(s) as parameter instead of a string name.
- Cleanup of unused constants.

Bugs Fixed
----------

- (PYFB-72) - exception_from_status function gives a UnicodeDecodeError
- (PYFB-73) - Buffer filled with zero characters is returned instead of the
  actual content of a page when the page number is greater than 64K
- (Unregistered) - BOOLEAN arrays were not supported
- (Unregistered) - Issues with Python 3 and Windows compatibility from latest
  1.x versions

Version 1.8
===========

- In relation to (PYFB-71_), a better memory exhaustion safeguard was
  implemented for materialized blobs. See :ref:`Working with BLOBs ` for
  details.
- Added service support for backup and restore from/to a local byte stream.
  See :meth:`~fdb.services.Connection.local_backup()` and
  :meth:`~fdb.services.Connection.local_restore()` for details.
- Added attribute :attr:`fdb.schema.TableColumn.id` (RDB$FIELD_ID).
- Added method :meth:`fdb.BlobReader.get_info()`.

Version 1.7
===========

- (PYFB-66_) - Port parameter for connect and create_database is not used
- (PYFB-69_) - Cannot connect to FB services if ISC_USER & ISC_PASSWORD are
  set via os.environ[...]
- (PYFB-70_) - executemany(operation, seq_of_parameters) appears to run slower
  than it should
- A number of fixes to the DDL generators in the schema module.
- Added support for :class:`~fdb.schema.Filter` and
  :class:`~fdb.schema.BackupHistory` in the schema module.
- Added DDL script generator :meth:`~fdb.schema.Schema.get_metadata_ddl`.

Version 1.6.1
=============

- (PYFB-68_) - Add support for isc_spb_sts_table option
- (PYFB-67_) - Cursor fails after use with executemany(). ReferenceError:
  weakly-referenced object no longer exists

Version 1.6
===========

- New: Extended support for database and transaction info (new attributes and
  functions on :class:`~fdb.Connection` and :class:`~fdb.Transaction`, fixes
  and improvements to :meth:`~fdb.Connection.db_info` and
  :meth:`~fdb.Connection.database_info`).
- Fix: Missing character sets for automatic translations.
- (PYFB-64_) - cursor.description throws ReferenceError after executemany INSERT

Version 1.5.1
=============

- New :func:`~fdb.connect()` parameters: no_gc, no_db_triggers and no_linger.
- Direct support for the *with* statement (PEP 343) in the
  :class:`~fdb.Connection` class (see the sketch below).
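For illustration, a minimal sketch of this *with* support (the connection
parameters are assumptions, and the exact transaction-resolution semantics on
block exit are covered in the usage guide):

.. sourcecode:: python

    import fdb

    # Connection now acts as a context manager: on leaving the block, the
    # driver resolves the pending transaction and releases the connection.
    with fdb.connect(dsn='localhost:/srv/test.fdb',
                     user='sysdba', password='masterkey') as con:
        cur = con.cursor()
        cur.execute("select rdb$relation_name from rdb$relations")
        print(cur.fetchone()[0])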
Version 1.5
===========

- Initial support for Firebird 3.0:

  - BOOLEAN datatype
  - IDENTITY datatype (in schema)
  - Database linger
  - Preserve SHADOW on DROP
  - DDL triggers
  - New and extended system tables
  - New and extended monitoring tables
  - GBAK statistics (service)
  - On-line validation (service)

- (PYFB-60_) Cursor: executemany(operation, seq_of_parameters) does PREPARE of
  <operation> for each parameter from <seq_of_parameters>

Version 1.4.11
==============

- (PYFB-58_) Severe performance loss and minor memory leak

Version 1.4.10
==============

- (PYFB-54_) Windows 7 x64 and FB 2.5.2 x32, Python 2.7: Error in Registry
  Path. FDB driver does not find the library fbclient.dll
- (PYFB-55_) get_sql_for incorrectly generates the SQL query for Views
- (PYFB-56_) schema.reload typing mistake for views
- (PYFB-57_) role.privileges does not return the correct privilege list

Version 1.4.9
=============

- (PYFB-51_) .get_sql_for('create') returns invalid output parameters
- (PYFB-52_) isc_info* types which are _DATABASE_INFO_CODES_WITH_COUNT_RESULTS
  raise TypeError: 'float' object cannot be interpreted as an integer

Version 1.4.8
=============

- Enhancement to automatic client library location detection on POSIX. Now it
  also looks at the LD_LIBRARY_PATH directories if specified.

Version 1.4.7
=============

- Forgotten debug printout removed. Annoying for Python 2.x users, fatal for
  3.x users.

Version 1.4.6
=============

Bugs Fixed
----------

- (PYFB-50_) Exception ReferenceError: 'weakly-referenced object no longer
  exists' in PreparedStatement and Cursor

Version 1.4.5
=============

Bugs Fixed
----------

- (PYFB-49_) Memory and DB resource leak due to circular references.

Version 1.4.4
=============

Improvements
------------

- (PYFB-47_) Firebird client library path added as an optional parameter to
  :func:`fdb.connect` and :func:`fdb.create_database`.

Bugs Fixed
----------

- Additional fix related to PYFB-43_
- Additional correction for an unregistered problem with a circular reference
  between PreparedStatement and Cursor when an explicit PreparedStatement is
  executed.

Version 1.4.3
=============

Bugs Fixed
----------

- The previous fix for PYFB-43_ was incomplete; corrected.

Version 1.4.2
=============

Improvements
------------

- In relation to PYFB-43_ I had to make a **backward incompatible change** to
  the event processing API. Starting from this version, `EventConduit` does
  not automatically start collection of events upon creation; it is now
  necessary to call the :meth:`~fdb.EventConduit.begin` method. To mitigate
  the inconvenience, EventConduit now supports the context manager protocol
  that ensures calls to begin() and close() via the `with` statement.
- In relation to PYFB-39_ I have decided to drop support for implicitly cached
  and reused prepared statements. I never liked this feature, as I think it's
  a sneaky way to add some performance to badly written applications that, in
  the worst case, may lead to significant resource consumption on the server
  side when developers are not only lazy but also stupid. It was implemented
  for the sake of compatibility with KInterbasDB. **This change has no impact
  on the API, but may affect the performance of your applications.**

Bugs Fixed
----------

- PYFB-44_ - Inserting a datetime.date into a TIMESTAMP column does not work
- PYFB-42_ - Python 3.4 and FDB - backup throws an exception
- Unregistered - Fixes in monitor.TransactionInfo

Version 1.4.1
=============

Improvements
------------

- PYFB-40_ - fbclient.dll is not found if not in path.
  Aside from the registry lookup, the client library isn't loaded until the
  first call to :func:`fdb.connect`, :func:`fdb.create_database` or
  :func:`fdb.load_api` (which now supports optional specification of the
  Firebird client library to load).
- Adjustments for Firebird 3.0 (Alpha 1)
- Properties :attr:`~fdb.services.Connection.version` and
  :attr:`~fdb.services.Connection.engine_version` added to
  :class:`fdb.services.Connection`

Bugs Fixed
----------

- Unregistered - isolation_level parameter for fdb.connect has no effect.
- Unregistered - Information gathered from monitoring tables is not properly
  dropped upon refresh request.

Version 1.4
===========

New Features
------------

- :mod:`fdb.schema` submodule extended with support for user privileges.

Improvements
------------

- :meth:`fdb.services.User.load_information` method to load information about
  a user from the server.
- :mod:`fdb.ibase` content cleanup and additions.
- :mod:`fdb.blr` submodule with BLR definitions.

Bugs Fixed
----------

- PYFB-37_ - Unicode strings incorrectly not allowed for insertion into BLOB
  SubType 1.

Version 1.3
===========

New Features
------------

- :mod:`fdb.monitor` submodule for access to / work with monitoring tables.
- New :attr:`fdb.Connection.monitor` property for access to monitoring tables.

Improvements
------------

- :attr:`~fdb.schema.Schema.closed` property and
  :meth:`~fdb.schema.Schema.clear` method for Schema.
- Unit tests reworked.

Bugs Fixed
----------

- Unregistered: Bug in :meth:`fdb.schema.Schema.close` and
  :meth:`fdb.schema.Schema.bind`.

Version 1.2
===========

New Features
------------

- :mod:`fdb.schema` submodule for access to / work with database metadata.
- :mod:`fdb.utils` submodule with various helper classes and functions.
- New :attr:`fdb.Connection.schema` property for access to the database schema.
- New :class:`~fdb.ConnectionWithSchema` connection class that provides more
  direct access to the database schema than `Connection`.
- New :attr:`fdb.Connection.firebird_version`, :attr:`fdb.Connection.version`
  and :attr:`fdb.Connection.engine_version` properties.
- New `Connection.ods` read-only property that returns the ODS version number
  of the connected database. There are also new module-level constants
  `ODS_FB_20`, `ODS_FB_21` and `ODS_FB_25`.
- New :attr:`fdb.Connection.query_transaction` property. This is a ReadOnly
  ReadCommitted transaction that can be active indefinitely without blocking
  garbage collection. It's used internally to query metadata, but it's
  generally useful.

Improvements
------------

- Optional PEP 249 (Python DB API 2.0) Extensions

  - `Connection.Error`, `Connection.ProgrammingError`, etc.

    All exception classes defined by the DB API standard are exposed on the
    Connection objects as attributes (in addition to being available at
    module scope).
  - `Cursor.connection`

    This read-only attribute returns a reference to the Connection object on
    which the cursor was created.
  - `Cursor.transaction`

    This read-only attribute returns a reference to the Transaction object on
    which the cursor was created.
- Optimized weakref management, especially for `PreparedStatement`.
- `create_database` now supports two methods for database creation. You can
  specify a CREATE DATABASE statement (as before) or provide a set of named
  database parameters (the SQL statement is created automatically from them).
- Functions `connect` and `create_database` now take an optional keyword
  parameter `connection_class` to obtain instances of a different class
  instead of `Connection`.
- Support for legacy (pre-2.5) shutdown mode with mode
  `fdb.services.SHUT_LEGACY`.
- :meth:`fdb.Cursor.executemany` returns `self`, so it can be used directly as
  an iterator.
- Documentation improvements.

Bugs Fixed
----------

- Unregistered: `buffers` parameter of `fdb.connect` doesn't support values
  greater than 255.
- Unregistered: A lowercase character set name passed to `fdb.connect` may
  result in malfunction of automatic data conversions and other failures
  (exceptions raised).

Version 1.1.1
=============

Bugs Fixed
----------

- PYFB-35_ - Call to fetch after an SQL statement without a result should
  raise an exception
- PYFB-34_ - Server resources not released on PreparedStatement destruction

Version 1.1
===========

New Features
------------

- :ref:`Context Manager ` for transactions.

Bugs Fixed
----------

- PYFB-30_ - BLOBs are truncated at first zero byte

Version 1.0
===========

Improvements
------------

- Removed dependency on the fbclient library being present at import time
  (PYFB-24_)

Bugs Fixed
----------

- PYFB-25_ - Truncate long text from VARCHAR(5000)

Version 0.9.9
=============

New Features
------------

- Firebird ARRAY support.

Other changes
-------------

- Cursor.execute() returns `self`, so it can be used as an iterator.
- Reading output from Services now uses a more efficient method to get data
  from the server.

Bugs Fixed
----------

- Fix: precision_cache in Connection works as intended.

Version 0.9.1
=============

Just bugfixes to make FDB work on Py3k again.

Version 0.9
===========

New Features
------------

- Documentation; both in-source (in Sphinx autodoc format) and Sphinx (html)
- Services API completely reworked

Other changes
-------------

- Unregistered bugs fixed.
- Various optimizations and cleanup
- Object reference graph optimizations
- Many new tests in the test suite

Version 0.8.5
=============

New Features
------------

- Support for Firebird stream BLOBs (see ReleaseNotes for details)
- Documentation (stub, from KInterbasDB 3.3.0)

Bugs Fixed
----------

- Fix for PYFB-17_ and PYFB-18_ (see our JIRA tracker for details)
- Fixes for automatic unicode conversions + refactoring
- Some optimizations

Version 0.8
===========

New Features
------------

- (PYFB-8_) - Support for Firebird Event Notifications

Bugs Fixed
----------

- (PYFB-16_) - database_info (isc_info_firebird_version) fails on amd64 linux
- (PYFB-15_) - more than 2 consecutive cursor open, execute and iter fail

Version 0.7.2
=============

New Features
------------

- Python 3 Support (thanks to Philippe Makowski)
- Support for Distributed Transactions

And as always, some (unregistered) bugs fixed.

Version 0.7.1
=============

Bug fixes.

Version 0.7
===========

Initial release. Almost feature-complete (ready for 95% of users), but it
could still be buggy (it's beta!), and the code wasn't optimized for size and
speed. In all other ways it's ready for wide testing.

What's missing
--------------

- Distributed transactions
- ARRAY support
- EVENTs support
- Stream BLOBs
- TRACE service
- Documentation (but you can use the KInterbasDB one, as FDB is as close to it
  as possible).
- Python 3.x support (haven't had time to test it, but it shouldn't be hard to
  make it work there)

.. _PYFB-8: http://tracker.firebirdsql.org/browse/PYFB-8
.. _PYFB-15: http://tracker.firebirdsql.org/browse/PYFB-15
.. _PYFB-16: http://tracker.firebirdsql.org/browse/PYFB-16
.. _PYFB-17: http://tracker.firebirdsql.org/browse/PYFB-17
.. _PYFB-18: http://tracker.firebirdsql.org/browse/PYFB-18
.. _PYFB-24: http://tracker.firebirdsql.org/browse/PYFB-24
.. _PYFB-25: http://tracker.firebirdsql.org/browse/PYFB-25
.. _PYFB-30: http://tracker.firebirdsql.org/browse/PYFB-30
.. _PYFB-34: http://tracker.firebirdsql.org/browse/PYFB-34
.. _PYFB-35: http://tracker.firebirdsql.org/browse/PYFB-35
.. _PYFB-37: http://tracker.firebirdsql.org/browse/PYFB-37
.. _PYFB-39: http://tracker.firebirdsql.org/browse/PYFB-39
.. _PYFB-40: http://tracker.firebirdsql.org/browse/PYFB-40
.. _PYFB-42: http://tracker.firebirdsql.org/browse/PYFB-42
.. _PYFB-43: http://tracker.firebirdsql.org/browse/PYFB-43
.. _PYFB-44: http://tracker.firebirdsql.org/browse/PYFB-44
.. _PYFB-47: http://tracker.firebirdsql.org/browse/PYFB-47
.. _PYFB-49: http://tracker.firebirdsql.org/browse/PYFB-49
.. _PYFB-50: http://tracker.firebirdsql.org/browse/PYFB-50
.. _PYFB-51: http://tracker.firebirdsql.org/browse/PYFB-51
.. _PYFB-52: http://tracker.firebirdsql.org/browse/PYFB-52
.. _PYFB-54: http://tracker.firebirdsql.org/browse/PYFB-54
.. _PYFB-55: http://tracker.firebirdsql.org/browse/PYFB-55
.. _PYFB-56: http://tracker.firebirdsql.org/browse/PYFB-56
.. _PYFB-57: http://tracker.firebirdsql.org/browse/PYFB-57
.. _PYFB-58: http://tracker.firebirdsql.org/browse/PYFB-58
.. _PYFB-60: http://tracker.firebirdsql.org/browse/PYFB-60
.. _PYFB-64: http://tracker.firebirdsql.org/browse/PYFB-64
.. _PYFB-66: http://tracker.firebirdsql.org/browse/PYFB-66
.. _PYFB-67: http://tracker.firebirdsql.org/browse/PYFB-67
.. _PYFB-68: http://tracker.firebirdsql.org/browse/PYFB-68
.. _PYFB-69: http://tracker.firebirdsql.org/browse/PYFB-69
.. _PYFB-70: http://tracker.firebirdsql.org/browse/PYFB-70
.. _PYFB-71: http://tracker.firebirdsql.org/browse/PYFB-71
fdb-2.0.0/sphinx/conf.py0000644000175000001440000002214613270362207015523 0ustar pcisarusers00000000000000# -*- coding: utf-8 -*-
#
# FDB documentation build configuration file, created by
# sphinx-quickstart on Wed Jan 7 12:29:48 2009.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# The contents of this file are pickled, so don't put values in the namespace
# that aren't pickleable (module imports are okay, they're removed automatically).
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import sys, os
import sphinx_bootstrap_theme

# If your extensions are in another directory, add it here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.append(os.path.abspath('..'))

# General configuration
# ---------------------

# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['.templates']

# The suffix of source filenames.
source_suffix = '.txt'

# The encoding of source files.
#source_encoding = 'utf-8'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'FDB'
copyright = u'2009-2018 Pavel Cisar'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '2.0'
# The full version, including alpha/beta/rc tags.
release = '2.0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'

# List of documents that shouldn't be included in the build.
unused_docs = []

# List of directories, relative to source directory, that shouldn't be searched
# for source files.
exclude_trees = ['.build']

# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None

# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True

# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True

# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# Autodoc options
# ---------------
autoclass_content = 'both'
autodoc_default_flags = ['members','inherited-members','undoc-members']
autodoc_member_order = 'groupwise'

# Intersphinx options
intersphinx_mapping = {'python': ('http://docs.python.org/2/', None)}

# Options for HTML output
# -----------------------

#html_theme = "fdbtheme"
#html_theme_path = ["."]
html_theme = "bootstrap"
html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()

# bootstrap theme config

# (Optional) Logo. Should be small enough to fit the navbar (ideally 24x24).
# Path should be relative to the ``_static`` files directory.
#html_logo = "my_logo.png"

# Theme options are theme-specific and customize the look and feel of a
# theme further.
html_theme_options = {
    # Navigation bar title. (Default: ``project`` value)
    #'navbar_title': "FDB",

    # Tab name for entire site. (Default: "Site")
    'navbar_site_name': "Content",

    # A list of tuples containing pages or urls to link to.
    # Valid tuples should be in the following forms:
    #    (name, page)                 # a link to a page
    #    (name, "/aa/bb", 1)          # a link to an arbitrary relative url
    #    (name, "http://example.com", True) # arbitrary absolute url
    # Note the "1" or "True" value above as the third argument to indicate
    # an arbitrary url.
    'navbar_links': [
        ("Usage Guide", "usage-guide"),
        ("Reference", "reference"),
        ("Index", "genindex"),
    ],

    # Render the next and previous page links in navbar. (Default: true)
    #'navbar_sidebarrel': True,

    # Render the current pages TOC in the navbar. (Default: true)
    #'navbar_pagenav': True,

    # Tab name for the current pages TOC. (Default: "Page")
    #'navbar_pagenav_name': "Page",

    # Global TOC depth for "site" navbar tab. (Default: 1)
    # Switching to -1 shows all levels.
    'globaltoc_depth': 3,

    # Include hidden TOCs in Site navbar?
    #
    # Note: If this is "false", you cannot have mixed ``:hidden:`` and
    # non-hidden ``toctree`` directives in the same page, or else the build
    # will break.
    #
    # Values: "true" (default) or "false"
    'globaltoc_includehidden': "true",

    # HTML navbar class (Default: "navbar") to attach to <div> element.
    # For black navbar, do "navbar navbar-inverse"
    'navbar_class': "navbar navbar-inverse",

    # Fix navigation bar to top of page?
    # Values: "true" (default) or "false"
    'navbar_fixed_top': "true",

    # Location of link to source.
    # Options are "nav" (default), "footer" or anything else to exclude.
    'source_link_position': "none",

    # Bootswatch (http://bootswatch.com/) theme.
    #
    # Options are nothing (default) or the name of a valid theme
    # such as "cosmo" or "sandstone".
    #
    # The set of valid themes depend on the version of Bootstrap
    # that's used (the next config option).
    #
    # Currently, the supported themes are:
    # - Bootstrap 2: https://bootswatch.com/2
    # - Bootstrap 3: https://bootswatch.com/3
    #'bootswatch_theme': "united",
    # cerulean, flatly, lumen, materia, united, yeti
    'bootswatch_theme': "cerulean",

    # Choose Bootstrap version.
    # Values: "3" (default) or "2" (in quotes)
    'bootstrap_version': "2",
}

# The style sheet to use for HTML and HTML Help pages. A file of that name
# must exist either in Sphinx' static/ path, or in one of the custom paths
# given in html_static_path.
#html_style = 'pyramid.css'

# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None

# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['.static']

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}

# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}

# If false, no module index is generated.
#html_use_modindex = True

# If false, no index is generated.
#html_use_index = True

# If true, the index is split into individual pages for each letter.
#html_split_index = False

# If true, the reST sources are included in the HTML build as _sources/.
#html_copy_source = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = 'FDBdoc'

# Options for LaTeX output
# ------------------------

# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, document class [howto/manual]).
latex_documents = [
    ('index', 'FDB.tex', u'FDB Documentation', u'Pavel Cisar', 'manual'),
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False

# Additional stuff for the LaTeX preamble.
#latex_preamble = ''

# Documents to append as an appendix to all manuals.
#latex_appendices = []

# If false, no module index is generated.
#latex_use_modindex = True
fdb-2.0.0/sphinx/fdbtheme/0000755000175000001440000000000013270606076016002 5ustar pcisarusers00000000000000fdb-2.0.0/sphinx/fdbtheme/layout.html0000644000175000001440000000155013167442200020176 0ustar pcisarusers00000000000000{% extends "basic/layout.html" %}
{%- block extrahead %}
{% endblock %}
{% block header %}
{%- if logo %}
{%- endif %}
{% endblock %}
{%- block sidebarlogo %}{%- endblock %}
{%- block sidebarsourcelink %}{%- endblock %}
fdb-2.0.0/sphinx/fdbtheme/static/0000755000175000001440000000000013270606076017271 5ustar pcisarusers00000000000000fdb-2.0.0/sphinx/fdbtheme/static/ie6.css0000644000175000001440000000132613167442200020460 0ustar pcisarusers00000000000000* html img, * html .png{position:relative;behavior:expression((this.runtimeStyle.behavior="none")&&(this.pngSet?this.pngSet=true:(this.nodeName == "IMG" && this.src.toLowerCase().indexOf('.png')>-1?(this.runtimeStyle.backgroundImage = "none", this.runtimeStyle.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" + this.src + "',sizingMethod='image')", this.src = "_static/transparent.gif"):(this.origBg = this.origBg? this.origBg :this.currentStyle.backgroundImage.toString().replace('url("','').replace('")',''), this.runtimeStyle.filter = "progid:DXImageTransform.Microsoft.AlphaImageLoader(src='" + this.origBg + "',sizingMethod='crop')", this.runtimeStyle.backgroundImage = "none")),this.pngSet=true) );}
fdb-2.0.0/sphinx/fdbtheme/static/footerbg.png0000644000175000001440000000051513167442200021577 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/fdbtheme/static/dialog-warning.png0000644000175000001440000000255713167442200022702 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/fdbtheme/static/headerbg.png0000644000175000001440000000031313167442200021525 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/fdbtheme/static/dialog-note.png0000644000175000001440000000305613167442200022175 0ustar pcisarusers00000000000000[binary PNG image data omitted]
fdb-2.0.0/sphinx/fdbtheme/static/dialog-seealso.png0000644000175000001440000000273613167442200022667 0ustar pcisarusers00000000000000[binary PNG image data omitted]