aiopg-1.0.0/
aiopg-1.0.0/CHANGES.txt

1.0.0 (2019-09-20)
^^^^^^^^^^^^^^^^^^

* Removal of an asynchronous call in favor of a synchronous one, see `#550 <https://github.com/aio-libs/aiopg/issues/550>`_
* Extensive documentation edits and minor bug fixes `#534 <https://github.com/aio-libs/aiopg/issues/534>`_

0.16.0 (2019-01-25)
^^^^^^^^^^^^^^^^^^^

* Fix select priority name `#525 <https://github.com/aio-libs/aiopg/issues/525>`_
* Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 <https://github.com/aio-libs/aiopg/issues/507>`_
* Fix `#189 <https://github.com/aio-libs/aiopg/issues/189>`_: hstore when using RealDictCursor `#512 <https://github.com/aio-libs/aiopg/issues/512>`_
* `close` cannot be used while an asynchronous query is underway `#452 <https://github.com/aio-libs/aiopg/issues/452>`_
* sqlalchemy adapter: allow `transaction_mode` in trx begin `#498 <https://github.com/aio-libs/aiopg/issues/498>`_

0.15.0 (2018-08-14)
^^^^^^^^^^^^^^^^^^^

* Support Python 3.7 `#437 <https://github.com/aio-libs/aiopg/issues/437>`_

0.14.0 (2018-05-10)
^^^^^^^^^^^^^^^^^^^

* Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 <https://github.com/aio-libs/aiopg/issues/451>`_

0.13.2 (2018-01-03)
^^^^^^^^^^^^^^^^^^^

* Fixed compatibility with SQLAlchemy 1.2.0 `#412 <https://github.com/aio-libs/aiopg/issues/412>`_
* Added support for transaction isolation levels `#219 <https://github.com/aio-libs/aiopg/issues/219>`_

0.13.1 (2017-09-10)
^^^^^^^^^^^^^^^^^^^

* Added connection pool recycling logic `#373 <https://github.com/aio-libs/aiopg/issues/373>`_

0.13.0 (2016-12-02)
^^^^^^^^^^^^^^^^^^^

* Add `async with` support to `.begin_nested()` `#208 <https://github.com/aio-libs/aiopg/issues/208>`_
* Fix connection.cancel() `#212 <https://github.com/aio-libs/aiopg/issues/212>`_ `#223 <https://github.com/aio-libs/aiopg/issues/223>`_
* Raise informative error on unexpected connection closing `#191 <https://github.com/aio-libs/aiopg/issues/191>`_
* Added support for Python type columns, issue `#217 <https://github.com/aio-libs/aiopg/issues/217>`_
* Added support for default values in SA tables, issue `#206 <https://github.com/aio-libs/aiopg/issues/206>`_

0.12.0 (2016-10-09)
^^^^^^^^^^^^^^^^^^^

* Add an on_connect callback parameter to pool `#141 <https://github.com/aio-libs/aiopg/issues/141>`_
* Fixed connection to work under both Windows and POSIX-based systems `#142 <https://github.com/aio-libs/aiopg/issues/142>`_

0.11.0 (2016-09-12)
^^^^^^^^^^^^^^^^^^^

* Immediately remove callbacks from a closed file descriptor `#139 <https://github.com/aio-libs/aiopg/issues/139>`_
* Drop Python 3.3 support

0.10.0 (2016-07-16)
^^^^^^^^^^^^^^^^^^^

* Refactor tests to use dockerized Postgres server `#107 <https://github.com/aio-libs/aiopg/issues/107>`_
* Reduce default pool minsize to 1 `#106 <https://github.com/aio-libs/aiopg/issues/106>`_
* Explicitly enumerate packages in setup.py `#85 <https://github.com/aio-libs/aiopg/issues/85>`_
* Remove expired connections from pool on acquire `#116 <https://github.com/aio-libs/aiopg/issues/116>`_
* Don't crash when Connection is GC'ed `#124 <https://github.com/aio-libs/aiopg/issues/124>`_
* Use loop.create_future() if available

0.9.2 (2016-01-31)
^^^^^^^^^^^^^^^^^^

* Make pool.release return asyncio.Future, so we can wait on it in `__aexit__` `#102 <https://github.com/aio-libs/aiopg/issues/102>`_
* Add support for uuid type `#103 <https://github.com/aio-libs/aiopg/issues/103>`_

0.9.1 (2016-01-17)
^^^^^^^^^^^^^^^^^^

* Documentation update `#101 <https://github.com/aio-libs/aiopg/issues/101>`_

0.9.0 (2016-01-14)
^^^^^^^^^^^^^^^^^^

* Add async context managers for transactions `#91 <https://github.com/aio-libs/aiopg/issues/91>`_
* Support async iterator in ResultProxy `#92 <https://github.com/aio-libs/aiopg/issues/92>`_
* Add async with for engine `#90 <https://github.com/aio-libs/aiopg/issues/90>`_

0.8.0 (2015-12-31)
^^^^^^^^^^^^^^^^^^

* Add PostgreSQL notification support `#58 <https://github.com/aio-libs/aiopg/issues/58>`_
* Support pools with unlimited size `#59 <https://github.com/aio-libs/aiopg/issues/59>`_
* Cancel current DB operation on asyncio timeout `#66 <https://github.com/aio-libs/aiopg/issues/66>`_
* Add async with support for Pool, Connection, Cursor `#88 <https://github.com/aio-libs/aiopg/issues/88>`_

0.7.0 (2015-04-22)
^^^^^^^^^^^^^^^^^^

* Get rid of resource leak on connection failure.
* Report ResourceWarning on non-closed connections.
* Deprecate iteration protocol support in cursor and ResultProxy.
* Release sa connection to pool on `connection.close()`.

0.6.0 (2015-02-03)
^^^^^^^^^^^^^^^^^^

* Accept dict, list, tuple, named and positional parameters in
  `SAConnection.execute()`

0.5.2 (2014-12-08)
^^^^^^^^^^^^^^^^^^

* Minor release, fixes a bug that leaves connection in broken state
  after `cursor.execute()` failure.

0.5.1 (2014-10-31)
^^^^^^^^^^^^^^^^^^

* Fix a bug for processing transactions in line.

0.5.0 (2014-10-31)
^^^^^^^^^^^^^^^^^^

* Add .terminate() to Pool and Engine
* Reimplement connection pool (now pool size cannot be greater than
  pool.maxsize)
* Add .close() and .wait_closed() to Pool and Engine
* Add minsize, maxsize, size and freesize properties to sa.Engine
* Support *echo* parameter for logging executed SQL commands
* Connection.close() is not a coroutine (but we keep backward compatibility).

0.4.1 (2014-10-02)
^^^^^^^^^^^^^^^^^^

* make cursor iterable
* update docs

0.4.0 (2014-10-02)
^^^^^^^^^^^^^^^^^^

* add timeouts for database operations.
* Autoregister psycopg2 support for json data type.
* Support JSON in aiopg.sa
* Support ARRAY in aiopg.sa
* Autoregister hstore support if present in connected DB
* Support HSTORE in aiopg.sa

0.3.2 (2014-07-07)
^^^^^^^^^^^^^^^^^^

* change signature to cursor.execute(operation, parameters=None) to
  follow psycopg2 convention.

0.3.1 (2014-07-04)
^^^^^^^^^^^^^^^^^^

* Forward arguments to cursor constructor for pooled connections.

0.3.0 (2014-06-22)
^^^^^^^^^^^^^^^^^^

* Allow executing SQLAlchemy DDL statements.
* Fix bug with race conditions on acquiring/releasing connections
  from pool.

0.2.3 (2014-06-12)
^^^^^^^^^^^^^^^^^^

* Fix bug in connection pool.

0.2.2 (2014-06-07)
^^^^^^^^^^^^^^^^^^

* Fix bug with passing parameters into SAConnection.execute when
  executing raw SQL expression.

0.2.1 (2014-05-08)
^^^^^^^^^^^^^^^^^^

* Close connection with invalid transaction status on returning to pool.

0.2.0 (2014-05-04)
^^^^^^^^^^^^^^^^^^

* Implemented optional support for sqlalchemy functional sql layer.

0.1.0 (2014-04-06)
^^^^^^^^^^^^^^^^^^

* Implemented plain connections: connect, Connection, Cursor.
* Implemented database pools: create_pool and Pool.

aiopg-1.0.0/LICENSE.txt

Copyright (c) 2014, 2015, Andrew Svetlov
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
   this list of conditions and the following disclaimer in the documentation
   and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
aiopg-1.0.0/MAINTAINERS.txt

* Andrew Svetlov
* Alexey Firsov
* Alexey Popravka

aiopg-1.0.0/MANIFEST.in

include LICENSE.txt
include CHANGES.txt
include README.rst
include MAINTAINERS.txt
graft aiopg
global-exclude *.pyc
exclude tests/**

aiopg-1.0.0/PKG-INFO

Metadata-Version: 2.1
Name: aiopg
Version: 1.0.0
Summary: Postgres integration with asyncio.
Home-page: https://aiopg.readthedocs.io
Author: Andrew Svetlov
Author-email: andrew.svetlov@gmail.com
Maintainer: Andrew Svetlov, Alexey Firsov, Alexey Popravka
Maintainer-email: virmir49@gmail.com
License: BSD
Download-URL: https://pypi.python.org/pypi/aiopg
Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby
Project-URL: CI: Travis, https://travis-ci.com/aio-libs/aiopg
Project-URL: Coverage: codecov, https://codecov.io/gh/aio-libs/aiopg
Project-URL: Docs: RTD, https://aiopg.readthedocs.io
Project-URL: GitHub: issues, https://github.com/aio-libs/aiopg/issues
Project-URL: GitHub: repo, https://github.com/aio-libs/aiopg
Description: **aiopg** is a library for accessing a PostgreSQL database
        from the asyncio (PEP-3156/tulip) framework. It wraps asynchronous
        features of the Psycopg database driver.
Platform: macOS
Platform: POSIX
Platform: Windows
Classifier: License :: OSI Approved :: BSD License
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Operating System :: POSIX
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Environment :: Web Environment
Classifier: Development Status :: 5 - Production/Stable
Classifier: Topic :: Database
Classifier: Topic :: Database :: Front-Ends
Classifier: Framework :: AsyncIO
Requires-Python: >=3.5.3
Provides-Extra: sa

aiopg-1.0.0/README.rst

aiopg
=====

.. image:: https://travis-ci.com/aio-libs/aiopg.svg?branch=master
   :target: https://travis-ci.com/aio-libs/aiopg

.. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/aio-libs/aiopg

.. image:: https://badges.gitter.im/Join%20Chat.svg
   :target: https://gitter.im/aio-libs/Lobby
   :alt: Chat on Gitter

**aiopg** is a library for accessing a PostgreSQL_ database from the asyncio_
(PEP-3156/tulip) framework. It wraps asynchronous features of the Psycopg
database driver.

Example
-------

.. code:: python

   import asyncio
   import aiopg

   dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'

   async def go():
       pool = await aiopg.create_pool(dsn)
       async with pool.acquire() as conn:
           async with conn.cursor() as cur:
               await cur.execute("SELECT 1")
               ret = []
               async for row in cur:
                   ret.append(row)
               assert ret == [(1,)]

   loop = asyncio.get_event_loop()
   loop.run_until_complete(go())

Example of SQLAlchemy optional integration
------------------------------------------

.. code:: python

   import asyncio
   from aiopg.sa import create_engine
   import sqlalchemy as sa

   metadata = sa.MetaData()

   tbl = sa.Table('tbl', metadata,
                  sa.Column('id', sa.Integer, primary_key=True),
                  sa.Column('val', sa.String(255)))

   async def create_table(engine):
       async with engine.acquire() as conn:
           await conn.execute('DROP TABLE IF EXISTS tbl')
           await conn.execute('''CREATE TABLE tbl (
                                     id serial PRIMARY KEY,
                                     val varchar(255))''')

   async def go():
       async with create_engine(user='aiopg',
                                database='aiopg',
                                host='127.0.0.1',
                                password='passwd') as engine:
           async with engine.acquire() as conn:
               await conn.execute(tbl.insert().values(val='abc'))

               async for row in conn.execute(tbl.select()):
                   print(row.id, row.val)

   loop = asyncio.get_event_loop()
   loop.run_until_complete(go())

.. _PostgreSQL: http://www.postgresql.org/
.. _asyncio: http://docs.python.org/3.4/library/asyncio.html

Please use::

   $ make test

for executing the project's unittests.
See https://aiopg.readthedocs.io/en/stable/contributing.html for details
on how to set up your environment to run the tests.

aiopg-1.0.0/aiopg/
aiopg-1.0.0/aiopg/__init__.py

import re
import sys
import warnings
from collections import namedtuple

from .connection import connect, Connection, TIMEOUT as DEFAULT_TIMEOUT
from .cursor import Cursor
from .pool import create_pool, Pool
from .transaction import IsolationLevel, Transaction
from .utils import get_running_loop

warnings.filterwarnings(
    'always', '.*',
    category=ResourceWarning,
    module=r'aiopg(\.\w+)+',
    append=False
)

__all__ = ('connect', 'create_pool', 'get_running_loop',
           'Connection', 'Cursor', 'Pool', 'version', 'version_info',
           'DEFAULT_TIMEOUT', 'IsolationLevel', 'Transaction')

__version__ = '1.0.0'
version = __version__ + ' , Python ' + sys.version

VersionInfo = namedtuple('VersionInfo',
                         'major minor micro releaselevel serial')


def _parse_version(ver):
    RE = (
        r'^'
        r'(?P<major>\d+)\.(?P<minor>\d+)\.(?P<micro>\d+)'
        r'((?P<releaselevel>[a-z]+)(?P<serial>\d+)?)?'
        r'$'
    )
    match = re.match(RE, ver)
    try:
        major = int(match.group('major'))
        minor = int(match.group('minor'))
        micro = int(match.group('micro'))
        levels = {'rc': 'candidate', 'a': 'alpha', 'b': 'beta', None: 'final'}
        releaselevel = levels[match.group('releaselevel')]
        serial = int(match.group('serial')) if match.group('serial') else 0
        return VersionInfo(major, minor, micro, releaselevel, serial)
    except Exception as e:
        raise ImportError("Invalid package version {}".format(ver)) from e


version_info = _parse_version(__version__)

# make pyflakes happy
(connect, create_pool, Connection, Cursor, Pool, DEFAULT_TIMEOUT,
 IsolationLevel, Transaction, get_running_loop)
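A quick sanity check of the version parser above — a minimal sketch (the
version strings are illustrative values, not shipped releases):

.. code:: python

   from aiopg import _parse_version

   # 'rc' maps to 'candidate' via the levels table above
   assert _parse_version('1.0.0rc1') == (1, 0, 0, 'candidate', 1)
   # no release-level suffix means 'final' with serial 0
   assert _parse_version('1.0.0') == (1, 0, 0, 'final', 0)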
aiopg-1.0.0/aiopg/connection.py

import asyncio
import contextlib
import errno
import platform
import select
import sys
import traceback
import warnings
import weakref
from collections.abc import Mapping

import psycopg2
from psycopg2 import extras
from psycopg2.extensions import POLL_ERROR, POLL_OK, POLL_READ, POLL_WRITE

from .cursor import Cursor
from .utils import _ContextManager, create_future, get_running_loop

__all__ = ('connect',)

TIMEOUT = 60.0

# Windows specific error code, not in errno for some reason, and doesn't map
# to OSError.errno EBADF
WSAENOTSOCK = 10038


def connect(dsn=None, *, timeout=TIMEOUT, enable_json=True,
            enable_hstore=True, enable_uuid=True, echo=False, **kwargs):
    """A factory for connecting to PostgreSQL.

    The coroutine accepts all parameters that psycopg2.connect() does
    plus optional keyword-only `timeout` parameters.

    Returns instantiated Connection object.
    """
    coro = Connection(
        dsn, timeout, bool(echo),
        enable_hstore=enable_hstore,
        enable_uuid=enable_uuid,
        enable_json=enable_json,
        **kwargs
    )
    return _ContextManager(coro)


def _is_bad_descriptor_error(os_error):
    if platform.system() == 'Windows':  # pragma: no cover
        return os_error.winerror == WSAENOTSOCK
    else:
        return os_error.errno == errno.EBADF


class Connection:
    """Low-level asynchronous interface for wrapped psycopg2 connection.

    The Connection instance encapsulates a database session.
    Provides support for creating asynchronous cursors.
    """

    _source_traceback = None

    def __init__(self, dsn, timeout, echo, *, enable_json=True,
                 enable_hstore=True, enable_uuid=True, **kwargs):
        self._enable_json = enable_json
        self._enable_hstore = enable_hstore
        self._enable_uuid = enable_uuid
        self._loop = get_running_loop(kwargs.pop('loop', None) is not None)
        self._waiter = create_future(self._loop)

        kwargs['async_'] = kwargs.pop('async', True)
        self._conn = psycopg2.connect(dsn, **kwargs)
        self._dsn = self._conn.dsn
        assert self._conn.isexecuting(), "Is conn an async at all???"
        self._fileno = self._conn.fileno()
        self._timeout = timeout
        self._last_usage = self._loop.time()
        self._writing = False
        self._cancelling = False
        self._cancellation_waiter = None
        self._echo = echo
        self._cursor_instance = None
        self._notifies = asyncio.Queue(loop=self._loop)
        self._weakref = weakref.ref(self)
        self._loop.add_reader(self._fileno, self._ready, self._weakref)

        if self._loop.get_debug():
            self._source_traceback = traceback.extract_stack(sys._getframe(1))

    @staticmethod
    def _ready(weak_self):
        self = weak_self()
        if self is None:
            return

        waiter = self._waiter

        try:
            state = self._conn.poll()
            while self._conn.notifies:
                notify = self._conn.notifies.pop(0)
                self._notifies.put_nowait(notify)
        except (psycopg2.Warning, psycopg2.Error) as exc:
            if self._fileno is not None:
                try:
                    select.select([self._fileno], [], [], 0)
                except OSError as os_exc:
                    if _is_bad_descriptor_error(os_exc):
                        with contextlib.suppress(OSError):
                            self._loop.remove_reader(self._fileno)
                            # forget a bad file descriptor, don't try to
                            # touch it
                            self._fileno = None

            try:
                if self._writing:
                    self._writing = False
                    if self._fileno is not None:
                        self._loop.remove_writer(self._fileno)
            except OSError as exc2:
                if exc2.errno != errno.EBADF:
                    # EBADF is ok for closed file descriptor
                    # chain exception otherwise
                    exc2.__cause__ = exc
                    exc = exc2
            if waiter is not None and not waiter.done():
                waiter.set_exception(exc)
        else:
            if self._fileno is None:
                # connection closed
                if waiter is not None and not waiter.done():
                    waiter.set_exception(
                        psycopg2.OperationalError("Connection closed"))
            if state == POLL_OK:
                if self._writing:
                    self._loop.remove_writer(self._fileno)
                    self._writing = False
                if waiter is not None and not waiter.done():
                    waiter.set_result(None)
            elif state == POLL_READ:
                if self._writing:
                    self._loop.remove_writer(self._fileno)
                    self._writing = False
            elif state == POLL_WRITE:
                if not self._writing:
                    self._loop.add_writer(self._fileno,
                                          self._ready, weak_self)
                    self._writing = True
            elif state == POLL_ERROR:
                self._fatal_error("Fatal error on aiopg connection: "
                                  "POLL_ERROR from underlying .poll() call")
            else:
                self._fatal_error("Fatal error on aiopg connection: "
                                  "unknown answer {} from underlying "
                                  ".poll() call"
                                  .format(state))

    def _fatal_error(self, message):
        # Should be called from exception handler only.
        self._loop.call_exception_handler({
            'message': message,
            'connection': self,
        })
        self.close()
        if self._waiter and not self._waiter.done():
            self._waiter.set_exception(psycopg2.OperationalError(message))

    def _create_waiter(self, func_name):
        if self._waiter is not None:
            if self._cancelling:
                if not self._waiter.done():
                    raise RuntimeError('%s() called while connection is '
                                       'being cancelled' % func_name)
            else:
                raise RuntimeError('%s() called while another coroutine is '
                                   'already waiting for incoming '
                                   'data' % func_name)
        self._waiter = create_future(self._loop)
        return self._waiter

    async def _poll(self, waiter, timeout):
        assert waiter is self._waiter, (waiter, self._waiter)
        self._ready(self._weakref)

        async def cancel():
            self._waiter = create_future(self._loop)
            self._cancelling = True
            self._cancellation_waiter = self._waiter
            self._conn.cancel()

            if not self._conn.isexecuting():
                return

            try:
                await asyncio.wait_for(self._waiter, timeout,
                                       loop=self._loop)
            except psycopg2.extensions.QueryCanceledError:
                pass
            except asyncio.TimeoutError:
                self._close()

        try:
            await asyncio.wait_for(self._waiter, timeout, loop=self._loop)
        except (asyncio.CancelledError, asyncio.TimeoutError) as exc:
            await asyncio.shield(cancel(), loop=self._loop)
            raise exc
        except psycopg2.extensions.QueryCanceledError as exc:
            self._loop.call_exception_handler({
                'message': exc.pgerror,
                'exception': exc,
                'future': self._waiter,
            })
            raise asyncio.CancelledError
        finally:
            if self._cancelling:
                self._cancelling = False
                if self._waiter is self._cancellation_waiter:
                    self._waiter = None
                self._cancellation_waiter = None
            else:
                self._waiter = None

    def _isexecuting(self):
        return self._conn.isexecuting()

    def cursor(self, name=None, cursor_factory=None,
               scrollable=None, withhold=False, timeout=None):
        """A coroutine that returns a new cursor object using the connection.

        *cursor_factory* argument can be used to create non-standard
        cursors. The argument must be a subclass of
        `psycopg2.extensions.cursor`.

        *name*, *scrollable* and *withhold* parameters are not supported by
        psycopg in asynchronous mode.

        NOTE: as of [TODO] any previously created cursor from this
        connection will be closed
        """
        self._last_usage = self._loop.time()
        coro = self._cursor(name=name, cursor_factory=cursor_factory,
                            scrollable=scrollable, withhold=withhold,
                            timeout=timeout)
        return _ContextManager(coro)

    async def _cursor(self, name=None, cursor_factory=None,
                      scrollable=None, withhold=False, timeout=None):
        if not self.closed_cursor:
            warnings.warn(('You can only have one cursor per connection. '
                           'The cursor for connection will be closed forcibly'
                           ' {!r}.').format(self), ResourceWarning)

        self.free_cursor()

        if timeout is None:
            timeout = self._timeout

        impl = await self._cursor_impl(name=name,
                                       cursor_factory=cursor_factory,
                                       scrollable=scrollable,
                                       withhold=withhold)
        self._cursor_instance = Cursor(self, impl, timeout, self._echo)
        return self._cursor_instance

    async def _cursor_impl(self, name=None, cursor_factory=None,
                           scrollable=None, withhold=False):
        if cursor_factory is None:
            impl = self._conn.cursor(name=name,
                                     scrollable=scrollable,
                                     withhold=withhold)
        else:
            impl = self._conn.cursor(name=name,
                                     cursor_factory=cursor_factory,
                                     scrollable=scrollable,
                                     withhold=withhold)
        return impl

    def _close(self):
        """Remove the connection from the event_loop and close it."""
        # N.B. If connection contains uncommitted transaction the
        # transaction will be discarded
        if self._fileno is not None:
            self._loop.remove_reader(self._fileno)
            if self._writing:
                self._writing = False
                self._loop.remove_writer(self._fileno)

        self._conn.close()
        self.free_cursor()

        if self._waiter is not None and not self._waiter.done():
            self._waiter.set_exception(
                psycopg2.OperationalError("Connection closed"))

    @property
    def closed_cursor(self):
        if not self._cursor_instance:
            return True
        return bool(self._cursor_instance.closed)

    def free_cursor(self):
        if not self.closed_cursor:
            self._cursor_instance.close()
            self._cursor_instance = None

    def close(self):
        self._close()
        ret = create_future(self._loop)
        ret.set_result(None)
        return ret

    @property
    def closed(self):
        """Connection status.

        Read-only attribute reporting whether the database connection is
        open (False) or closed (True).
        """
        return self._conn.closed

    @property
    def raw(self):
        """Underlying psycopg connection object, readonly"""
        return self._conn

    async def commit(self):
        raise psycopg2.ProgrammingError(
            "commit cannot be used in asynchronous mode")

    async def rollback(self):
        raise psycopg2.ProgrammingError(
            "rollback cannot be used in asynchronous mode")

    # TPC

    async def xid(self, format_id, gtrid, bqual):
        return self._conn.xid(format_id, gtrid, bqual)

    async def tpc_begin(self, xid=None):
        raise psycopg2.ProgrammingError(
            "tpc_begin cannot be used in asynchronous mode")

    async def tpc_prepare(self):
        raise psycopg2.ProgrammingError(
            "tpc_prepare cannot be used in asynchronous mode")

    async def tpc_commit(self, xid=None):
        raise psycopg2.ProgrammingError(
            "tpc_commit cannot be used in asynchronous mode")

    async def tpc_rollback(self, xid=None):
        raise psycopg2.ProgrammingError(
            "tpc_rollback cannot be used in asynchronous mode")

    async def tpc_recover(self):
        raise psycopg2.ProgrammingError(
            "tpc_recover cannot be used in asynchronous mode")

    async def cancel(self):
        """Cancel the current database operation."""
        if self._waiter is None:
            return

        async def cancel():
            self._conn.cancel()
            try:
                await self._waiter
            except psycopg2.extensions.QueryCanceledError:
                pass

        await asyncio.shield(cancel(), loop=self._loop)

    async def reset(self):
        raise psycopg2.ProgrammingError(
            "reset cannot be used in asynchronous mode")

    @property
    def dsn(self):
        """DSN connection string.

        Read-only attribute representing dsn connection string used
        for connecting to PostgreSQL server.
        """
        return self._dsn

    async def set_session(self, *, isolation_level=None, readonly=None,
                          deferrable=None, autocommit=None):
        raise psycopg2.ProgrammingError(
            "set_session cannot be used in asynchronous mode")

    @property
    def autocommit(self):
        """Autocommit status"""
        return self._conn.autocommit

    @autocommit.setter
    def autocommit(self, val):
        """Autocommit status"""
        self._conn.autocommit = val

    @property
    def isolation_level(self):
        """Transaction isolation level.

        The only allowed value is ISOLATION_LEVEL_READ_COMMITTED.
        """
        return self._conn.isolation_level

    async def set_isolation_level(self, val):
        """Transaction isolation level.

        The only allowed value is ISOLATION_LEVEL_READ_COMMITTED.
        """
        self._conn.set_isolation_level(val)

    @property
    def encoding(self):
        """Client encoding for SQL operations."""
        return self._conn.encoding

    async def set_client_encoding(self, val):
        self._conn.set_client_encoding(val)

    @property
    def notices(self):
        """A list of all db messages sent to the client during the session."""
        return self._conn.notices

    @property
    def cursor_factory(self):
        """The default cursor factory used by .cursor()."""
        return self._conn.cursor_factory

    async def get_backend_pid(self):
        """Returns the PID of the backend server process."""
        return self._conn.get_backend_pid()

    async def get_parameter_status(self, parameter):
        """Look up a current parameter setting of the server."""
        return self._conn.get_parameter_status(parameter)

    async def get_transaction_status(self):
        """Return the current session transaction status as an integer."""
        return self._conn.get_transaction_status()

    @property
    def protocol_version(self):
        """A read-only integer representing protocol being used."""
        return self._conn.protocol_version

    @property
    def server_version(self):
        """A read-only integer representing the backend version."""
        return self._conn.server_version

    @property
    def status(self):
        """A read-only integer representing the status of the connection."""
        return self._conn.status

    async def lobject(self, *args, **kwargs):
        raise psycopg2.ProgrammingError(
            "lobject cannot be used in asynchronous mode")

    @property
    def timeout(self):
        """Return default timeout for connection operations."""
        return self._timeout

    @property
    def last_usage(self):
        """Return time() when connection was used."""
        return self._last_usage

    @property
    def echo(self):
        """Return echo mode status."""
        return self._echo

    def __repr__(self):
        msg = (
            '<'
            '{module_name}::{class_name} '
            'isexecuting={isexecuting}, '
            'closed={closed}, '
            'echo={echo}, '
            'cursor={cursor}'
            '>'
        )
        return msg.format(
            module_name=type(self).__module__,
            class_name=type(self).__name__,
            echo=self.echo,
            isexecuting=self._isexecuting(),
            closed=bool(self.closed),
            cursor=repr(self._cursor_instance)
        )

    def __del__(self):
        try:
            _conn = self._conn
        except AttributeError:
            return
        if _conn is not None and not _conn.closed:
            self.close()
            warnings.warn("Unclosed connection {!r}".format(self),
                          ResourceWarning)

            context = {'connection': self,
                       'message': 'Unclosed connection'}
            if self._source_traceback is not None:
                context['source_traceback'] = self._source_traceback
            self._loop.call_exception_handler(context)

    @property
    def notifies(self):
        """Return notification queue."""
        return self._notifies

    async def _get_oids(self):
        cur = await self.cursor()
        rv0, rv1 = [], []
        try:
            await cur.execute(
                "SELECT t.oid, typarray "
                "FROM pg_type t JOIN pg_namespace ns ON typnamespace = ns.oid "
                "WHERE typname = 'hstore';"
            )

            async for oids in cur:
                if isinstance(oids, Mapping):
                    rv0.append(oids['oid'])
                    rv1.append(oids['typarray'])
                else:
                    rv0.append(oids[0])
                    rv1.append(oids[1])
        finally:
            cur.close()

        return tuple(rv0), tuple(rv1)

    async def _connect(self):
        try:
            await self._poll(self._waiter, self._timeout)
        except Exception:
            self.close()
            raise
        if self._enable_json:
            extras.register_default_json(self._conn)
        if self._enable_uuid:
            extras.register_uuid(conn_or_curs=self._conn)
        if self._enable_hstore:
            oids = await self._get_oids()
            if oids is not None:
                oid, array_oid = oids
                extras.register_hstore(
                    self._conn,
                    oid=oid,
                    array_oid=array_oid
                )
        return self

    def __await__(self):
        return self._connect().__await__()

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        self.close()
aiopg-1.0.0/aiopg/cursor.py

import asyncio

import psycopg2

from .log import logger
from .transaction import IsolationLevel, Transaction
from .utils import _TransactionBeginContextManager


class Cursor:
    def __init__(self, conn, impl, timeout, echo):
        self._conn = conn
        self._impl = impl
        self._timeout = timeout
        self._echo = echo
        self._transaction = Transaction(self, IsolationLevel.repeatable_read)

    @property
    def echo(self):
        """Return echo mode status."""
        return self._echo

    @property
    def description(self):
        """This read-only attribute is a sequence of 7-item sequences.

        Each of these sequences is a collections.namedtuple containing
        information describing one result column:

        0. name: the name of the column returned.
        1. type_code: the PostgreSQL OID of the column.
        2. display_size: the actual length of the column in bytes.
        3. internal_size: the size in bytes of the column associated to
           this column on the server.
        4. precision: total number of significant digits in columns of
           type NUMERIC. None for other types.
        5. scale: count of decimal digits in the fractional part in
           columns of type NUMERIC. None for other types.
        6. null_ok: always None as not easy to retrieve from the libpq.

        This attribute will be None for operations that do not
        return rows or if the cursor has not had an operation invoked
        via the execute() method yet.
        """
        return self._impl.description

    def close(self):
        """Close the cursor now."""
        if not self.closed:
            self._impl.close()

    @property
    def closed(self):
        """Read-only boolean attribute: specifies if the cursor is closed."""
        return self._impl.closed

    @property
    def connection(self):
        """Read-only attribute returning a reference to the `Connection`."""
        return self._conn

    @property
    def raw(self):
        """Underlying psycopg cursor object, readonly"""
        return self._impl

    @property
    def name(self):
        # Not supported
        return self._impl.name

    @property
    def scrollable(self):
        # Not supported
        return self._impl.scrollable

    @scrollable.setter
    def scrollable(self, val):
        # Not supported
        self._impl.scrollable = val

    @property
    def withhold(self):
        # Not supported
        return self._impl.withhold

    @withhold.setter
    def withhold(self, val):
        # Not supported
        self._impl.withhold = val

    async def execute(self, operation, parameters=None, *, timeout=None):
        """Prepare and execute a database operation (query or command).

        Parameters may be provided as sequence or mapping and will be
        bound to variables in the operation. Variables are specified
        either with positional %s or named %({name})s placeholders.
        """
        if timeout is None:
            timeout = self._timeout
        waiter = self._conn._create_waiter('cursor.execute')
        if self._echo:
            logger.info(operation)
            logger.info("%r", parameters)
        try:
            self._impl.execute(operation, parameters)
        except BaseException:
            self._conn._waiter = None
            raise
        try:
            await self._conn._poll(waiter, timeout)
        except asyncio.TimeoutError:
            self._impl.close()
            raise

    async def executemany(self, operation, seq_of_parameters):
        # Not supported
        raise psycopg2.ProgrammingError(
            "executemany cannot be used in asynchronous mode")

    async def callproc(self, procname, parameters=None, *, timeout=None):
        """Call a stored database procedure with the given name.

        The sequence of parameters must contain one entry for each
        argument that the procedure expects. The result of the call is
        returned as modified copy of the input sequence. Input
        parameters are left untouched, output and input/output
        parameters replaced with possibly new values.
        """
        if timeout is None:
            timeout = self._timeout
        waiter = self._conn._create_waiter('cursor.callproc')
        if self._echo:
            logger.info("CALL %s", procname)
            logger.info("%r", parameters)
        try:
            self._impl.callproc(procname, parameters)
        except BaseException:
            self._conn._waiter = None
            raise
        else:
            await self._conn._poll(waiter, timeout)

    def begin(self):
        return _TransactionBeginContextManager(self._transaction.begin())

    def begin_nested(self):
        if not self._transaction.is_begin:
            return _TransactionBeginContextManager(
                self._transaction.begin())
        else:
            return self._transaction.point()

    def mogrify(self, operation, parameters=None):
        """Return a query string after arguments binding.

        The string returned is exactly the one that would be sent to
        the database running the .execute() method or similar.
        """
        ret = self._impl.mogrify(operation, parameters)
        assert not self._conn._isexecuting(), ("Don't support server side "
                                               "mogrify")
        return ret

    async def setinputsizes(self, sizes):
        """This method is exposed in compliance with the DBAPI.

        It currently does nothing but it is safe to call it.
        """
        self._impl.setinputsizes(sizes)

    async def fetchone(self):
        """Fetch the next row of a query result set.

        Returns a single tuple, or None when no more data is available.
        """
        ret = self._impl.fetchone()
        assert not self._conn._isexecuting(), ("Don't support server side "
                                               "cursors yet")
        return ret

    async def fetchmany(self, size=None):
        """Fetch the next set of rows of a query result.

        Returns a list of tuples. An empty list is returned when no
        more rows are available.

        The number of rows to fetch per call is specified by the
        parameter. If it is not given, the cursor's .arraysize
        determines the number of rows to be fetched. The method should
        try to fetch as many rows as indicated by the size parameter.
        If this is not possible due to the specified number of rows not
        being available, fewer rows may be returned.
        """
        if size is None:
            size = self._impl.arraysize
        ret = self._impl.fetchmany(size)
        assert not self._conn._isexecuting(), ("Don't support server side "
                                               "cursors yet")
        return ret

    async def fetchall(self):
        """Fetch all (remaining) rows of a query result.

        Returns them as a list of tuples. An empty list is returned if
        there is no more record to fetch.
        """
        ret = self._impl.fetchall()
        assert not self._conn._isexecuting(), ("Don't support server side "
                                               "cursors yet")
        return ret

    async def scroll(self, value, mode="relative"):
        """Scroll to a new position according to mode.

        If mode is relative (default), value is taken as offset to the
        current position in the result set, if set to absolute, value
        states an absolute target position.
        """
        ret = self._impl.scroll(value, mode)
        assert not self._conn._isexecuting(), ("Don't support server side "
                                               "cursors yet")
        return ret

    @property
    def arraysize(self):
        """How many rows will be returned by fetchmany() call.

        This read/write attribute specifies the number of rows to
        fetch at a time with fetchmany(). It defaults to
        1 meaning to fetch a single row at a time.
        """
        return self._impl.arraysize

    @arraysize.setter
    def arraysize(self, val):
        """How many rows will be returned by fetchmany() call.

        This read/write attribute specifies the number of rows to
        fetch at a time with fetchmany(). It defaults to
        1 meaning to fetch a single row at a time.
        """
        self._impl.arraysize = val

    @property
    def itersize(self):
        # Not supported
        return self._impl.itersize

    @itersize.setter
    def itersize(self, val):
        # Not supported
        self._impl.itersize = val

    @property
    def rowcount(self):
        """Returns the number of rows that have been produced or affected.

        This read-only attribute specifies the number of rows that the
        last :meth:`execute` produced (for Data Query Language
        statements like SELECT) or affected (for Data Manipulation
        Language statements like UPDATE or INSERT).

        The attribute is -1 in case no .execute() has been performed on
        the cursor or the row count of the last operation if it can't
        be determined by the interface.
        """
        return self._impl.rowcount

    @property
    def rownumber(self):
        """Row index.

        This read-only attribute provides the current 0-based index of
        the cursor in the result set or ``None`` if the index cannot be
        determined."""
        return self._impl.rownumber

    @property
    def lastrowid(self):
        """OID of the last inserted row.

        This read-only attribute provides the OID of the last row
        inserted by the cursor. If the table wasn't created with OID
        support or the last operation is not a single record insert,
        the attribute is set to None.
        """
        return self._impl.lastrowid

    @property
    def query(self):
        """The last executed query string.

        Read-only attribute containing the body of the last query sent
        to the backend (including bound arguments) as bytes string.
        None if no query has been executed yet.
        """
        return self._impl.query

    @property
    def statusmessage(self):
        """The message returned by the last command."""
        return self._impl.statusmessage

    # async def cast(self, old, s):
    #     ...

    @property
    def tzinfo_factory(self):
        """The time zone factory used to handle data types such as
        `TIMESTAMP WITH TIME ZONE`.
        """
        return self._impl.tzinfo_factory

    @tzinfo_factory.setter
    def tzinfo_factory(self, val):
        """The time zone factory used to handle data types such as
        `TIMESTAMP WITH TIME ZONE`.
        """
        self._impl.tzinfo_factory = val

    async def nextset(self):
        # Not supported
        self._impl.nextset()  # raises psycopg2.NotSupportedError

    async def setoutputsize(self, size, column=None):
        # Does nothing
        self._impl.setoutputsize(size, column)

    async def copy_from(self, file, table, sep='\t', null='\\N', size=8192,
                        columns=None):
        raise psycopg2.ProgrammingError(
            "copy_from cannot be used in asynchronous mode")

    async def copy_to(self, file, table, sep='\t', null='\\N', columns=None):
        raise psycopg2.ProgrammingError(
            "copy_to cannot be used in asynchronous mode")

    async def copy_expert(self, sql, file, size=8192):
        raise psycopg2.ProgrammingError(
            "copy_expert cannot be used in asynchronous mode")

    @property
    def timeout(self):
        """Return default timeout for cursor operations."""
        return self._timeout

    def __aiter__(self):
        return self

    async def __anext__(self):
        ret = await self.fetchone()
        if ret is not None:
            return ret
        else:
            raise StopAsyncIteration

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        self.close()
        return

    def __repr__(self):
        msg = (
            '<'
            '{module_name}::{class_name} '
            'name={name}, '
            'closed={closed}'
            '>'
        )
        return msg.format(
            module_name=type(self).__module__,
            class_name=type(self).__name__,
            name=self.name,
            closed=self.closed
        )
aiopg-1.0.0/aiopg/log.py

"""Logging configuration."""

import logging

# Name the logger after the package.
logger = logging.getLogger(__package__)

aiopg-1.0.0/aiopg/pool.py

import asyncio
import collections
import warnings

from psycopg2.extensions import TRANSACTION_STATUS_IDLE

from .connection import TIMEOUT, connect
from .utils import (
    _PoolAcquireContextManager,
    _PoolConnectionContextManager,
    _PoolContextManager,
    _PoolCursorContextManager,
    create_future,
    ensure_future,
    get_running_loop,
)


def create_pool(dsn=None, *, minsize=1, maxsize=10,
                timeout=TIMEOUT, pool_recycle=-1,
                enable_json=True, enable_hstore=True, enable_uuid=True,
                echo=False, on_connect=None,
                **kwargs):
    coro = Pool.from_pool_fill(
        dsn, minsize, maxsize, timeout,
        enable_json=enable_json, enable_hstore=enable_hstore,
        enable_uuid=enable_uuid, echo=echo, on_connect=on_connect,
        pool_recycle=pool_recycle, **kwargs
    )
    return _PoolContextManager(coro)


class Pool(asyncio.AbstractServer):
    """Connection pool"""

    def __init__(self, dsn, minsize, maxsize, timeout, *,
                 enable_json, enable_hstore, enable_uuid, echo,
                 on_connect, pool_recycle, **kwargs):
        if minsize < 0:
            raise ValueError("minsize should be zero or greater")
        if maxsize < minsize and maxsize != 0:
            raise ValueError("maxsize should be not less than minsize")
        self._dsn = dsn
        self._minsize = minsize
        self._loop = get_running_loop(kwargs.pop('loop', None) is not None)
        self._timeout = timeout
        self._recycle = pool_recycle
        self._enable_json = enable_json
        self._enable_hstore = enable_hstore
        self._enable_uuid = enable_uuid
        self._echo = echo
        self._on_connect = on_connect
        self._conn_kwargs = kwargs
        self._acquiring = 0
        self._free = collections.deque(maxlen=maxsize or None)
        self._cond = asyncio.Condition(loop=self._loop)
        self._used = set()
        self._terminated = set()
        self._closing = False
        self._closed = False

    @property
    def echo(self):
        return self._echo

    @property
    def minsize(self):
        return self._minsize

    @property
    def maxsize(self):
        return self._free.maxlen

    @property
    def size(self):
        return self.freesize + len(self._used) + self._acquiring

    @property
    def freesize(self):
        return len(self._free)

    @property
    def timeout(self):
        return self._timeout

    async def clear(self):
        """Close all free connections in pool."""
        async with self._cond:
            while self._free:
                conn = self._free.popleft()
                await conn.close()
            self._cond.notify()

    @property
    def closed(self):
        return self._closed

    def close(self):
        """Close pool.

        Mark all pool connections to be closed on getting back to pool.
        A closed pool doesn't allow acquiring new connections.
        """
        if self._closed:
            return
        self._closing = True

    def terminate(self):
        """Terminate pool.

        Close the pool, immediately closing all acquired connections as well.
        """
        self.close()

        for conn in list(self._used):
            conn.close()
            self._terminated.add(conn)

        self._used.clear()

    async def wait_closed(self):
        """Wait for closing all pool's connections."""
        if self._closed:
            return
        if not self._closing:
            raise RuntimeError(".wait_closed() should be called "
                               "after .close()")

        while self._free:
            conn = self._free.popleft()
            conn.close()

        async with self._cond:
            while self.size > self.freesize:
                await self._cond.wait()

        self._closed = True

    def acquire(self):
        """Acquire free connection from the pool."""
        coro = self._acquire()
        return _PoolAcquireContextManager(coro, self)

    @classmethod
    async def from_pool_fill(cls, *args, **kwargs):
        """constructor for filling the free pool with connections,
        the number is controlled by the minsize parameter
        """
        self = cls(*args, **kwargs)
        if self._minsize > 0:
            async with self._cond:
                await self._fill_free_pool(False)

        return self

    async def _acquire(self):
        if self._closing:
            raise RuntimeError("Cannot acquire connection after closing pool")
        async with self._cond:
            while True:
                await self._fill_free_pool(True)
                if self._free:
                    conn = self._free.popleft()
                    assert not conn.closed, conn
                    assert conn not in self._used, (conn, self._used)
                    self._used.add(conn)
                    if self._on_connect is not None:
                        await self._on_connect(conn)
                    return conn
                else:
                    await self._cond.wait()

    async def _fill_free_pool(self, override_min):
        # iterate over free connections and remove timed-out ones
        n, free = 0, len(self._free)
        while n < free:
            conn = self._free[-1]
            if conn.closed:
                self._free.pop()
            elif -1 < self._recycle < self._loop.time() - conn.last_usage:
                conn.close()
                self._free.pop()
            else:
                self._free.rotate()
                n += 1

        while self.size < self.minsize:
            self._acquiring += 1
            try:
                conn = await connect(
                    self._dsn, timeout=self._timeout,
                    enable_json=self._enable_json,
                    enable_hstore=self._enable_hstore,
                    enable_uuid=self._enable_uuid,
                    echo=self._echo,
                    **self._conn_kwargs)
                # raise exception if pool is closing
                self._free.append(conn)
                self._cond.notify()
            finally:
                self._acquiring -= 1
        if self._free:
            return

        if override_min and self.size < self.maxsize:
            self._acquiring += 1
            try:
                conn = await connect(
                    self._dsn, timeout=self._timeout,
                    enable_json=self._enable_json,
                    enable_hstore=self._enable_hstore,
                    enable_uuid=self._enable_uuid,
                    echo=self._echo,
                    **self._conn_kwargs)
                # raise exception if pool is closing
                self._free.append(conn)
                self._cond.notify()
            finally:
                self._acquiring -= 1

    async def _wakeup(self):
        async with self._cond:
            self._cond.notify()

    def release(self, conn):
        """Release free connection back to the connection pool."""
        fut = create_future(self._loop)
        fut.set_result(None)

        if conn in self._terminated:
            assert conn.closed, conn
            self._terminated.remove(conn)
            return fut
        assert conn in self._used, (conn, self._used)
        self._used.remove(conn)
        if not conn.closed:
            tran_status = conn._conn.get_transaction_status()
            if tran_status != TRANSACTION_STATUS_IDLE:
                warnings.warn(
                    ("Invalid transaction status on "
                     "released connection: {}").format(tran_status),
                    ResourceWarning
                )
                conn.close()
                return fut
            if self._closing:
                conn.close()
            else:
                conn.free_cursor()
                self._free.append(conn)
            fut = ensure_future(self._wakeup(), loop=self._loop)
        return fut

    async def cursor(self, name=None, cursor_factory=None,
                     scrollable=None, withhold=False, *, timeout=None):
        conn = await self.acquire()
        cur = await conn.cursor(name=name, cursor_factory=cursor_factory,
                                scrollable=scrollable, withhold=withhold,
                                timeout=timeout)
        return _PoolCursorContextManager(self, conn, cur)

    def __await__(self):
        # This is not a coroutine.  It is meant to enable the idiom:
        #
        #     with (await pool) as conn:
        #         <block>
        #
        # as an alternative to:
        #
        #     conn = await pool.acquire()
        #     try:
        #         <block>
        #     finally:
        #         conn.release()
        conn = yield from self._acquire().__await__()
        return _PoolConnectionContextManager(self, conn)

    def __enter__(self):
        raise RuntimeError(
            '"await" should be used as context manager expression')

    def __exit__(self, *args):
        # This must exist because __enter__ exists, even though that
        # always raises; that's how the with-statement works.
        pass  # pragma: nocover

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        self.close()
        await self.wait_closed()

    def __del__(self):
        try:
            self._free
        except AttributeError:
            return  # frame has been cleared, __dict__ is empty
        if self._free:
            left = 0
            while self._free:
                conn = self._free.popleft()
                conn.close()
                left += 1
            warnings.warn(
                "Unclosed {} connections in {!r}".format(left, self),
                ResourceWarning)
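A minimal sketch of the ``on_connect`` hook wired through ``_acquire()``
above (the ``SET`` statement is an illustrative setup step):

.. code:: python

   import aiopg

   async def _setup(conn):
       # per the _acquire() code above, this runs each time a
       # connection is handed out from the pool
       async with conn.cursor() as cur:
           await cur.execute("SET TIME ZONE 'UTC'")

   async def main(dsn):
       async with aiopg.create_pool(dsn, minsize=1, maxsize=5,
                                    on_connect=_setup) as pool:
           async with pool.acquire() as conn:
               async with conn.cursor() as cur:
                   await cur.execute('SELECT 1')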
""" coro = self._execute(query, *multiparams, **params) return _SAConnectionContextManager(coro) async def _get_cursor(self): if self._cursor and not self._cursor.closed: return self._cursor self._cursor = await self._connection.cursor() return self._cursor async def _execute(self, query, *multiparams, **params): cursor = await self._get_cursor() dp = _distill_params(multiparams, params) if len(dp) > 1: raise exc.ArgumentError("aiopg doesn't support executemany") elif dp: dp = dp[0] result_map = None if isinstance(query, str): await cursor.execute(query, dp) elif isinstance(query, ClauseElement): compiled = query.compile(dialect=self._dialect) # parameters = compiled.params if not isinstance(query, DDLElement): if dp and isinstance(dp, (list, tuple)): if isinstance(query, UpdateBase): dp = {c.key: pval for c, pval in zip(query.table.c, dp)} else: raise exc.ArgumentError("Don't mix sqlalchemy SELECT " "clause with positional " "parameters") compiled_parameters = [compiled.construct_params(dp)] processed_parameters = [] processors = compiled._bind_processors for compiled_params in compiled_parameters: params = {key: (processors[key](compiled_params[key]) if key in processors else compiled_params[key]) for key in compiled_params} processed_parameters.append(params) post_processed_params = self._dialect.execute_sequence_format( processed_parameters) # _result_columns is a private API of Compiled, # but I couldn't find any public API exposing this data. result_map = compiled._result_columns else: if dp: raise exc.ArgumentError("Don't mix sqlalchemy DDL clause " "and execution with parameters") post_processed_params = [compiled.construct_params()] result_map = None await cursor.execute(str(compiled), post_processed_params[0]) else: raise exc.ArgumentError("sql statement should be str or " "SQLAlchemy data " "selection/modification clause") return ResultProxy(self, cursor, self._dialect, result_map) async def scalar(self, query, *multiparams, **params): """Executes a SQL query and returns a scalar value.""" res = await self.execute(query, *multiparams, **params) return await res.scalar() @property def closed(self): """The readonly property that returns True if connections is closed.""" return self.connection is None or self.connection.closed @property def connection(self): return self._connection def begin(self, isolation_level=None, readonly=False, deferrable=False): """Begin a transaction and return a transaction handle. isolation_level - The isolation level of the transaction, should be one of 'SERIALIZABLE', 'REPEATABLE READ', 'READ COMMITTED', 'READ UNCOMMITTED', default (None) is 'READ COMMITTED' readonly - The transaction is read only deferrable - The transaction may block when acquiring data before running without overhead of SERLIALIZABLE, has no effect unless transaction is both SERIALIZABLE and readonly The returned object is an instance of Transaction. This object represents the "scope" of the transaction, which completes when either the .rollback or .commit method is called. 
Nested calls to .begin on the same SAConnection instance will return new Transaction objects that represent an emulated transaction within the scope of the enclosing transaction, that is:: trans = await conn.begin() # outermost transaction trans2 = await conn.begin() # "nested" await trans2.commit() # does nothing await trans.commit() # actually commits Calls to .commit only have an effect when invoked via the outermost Transaction object, though the .rollback method of any of the Transaction objects will roll back the transaction. See also: .begin_nested - use a SAVEPOINT .begin_twophase - use a two phase/XA transaction """ coro = self._begin(isolation_level, readonly, deferrable) return _TransactionContextManager(coro) async def _begin(self, isolation_level, readonly, deferrable): if self._transaction is None: self._transaction = RootTransaction(self) await self._begin_impl(isolation_level, readonly, deferrable) return self._transaction else: return Transaction(self, self._transaction) async def _begin_impl(self, isolation_level, readonly, deferrable): stmt = 'BEGIN' if isolation_level is not None: stmt += ' ISOLATION LEVEL ' + isolation_level if readonly: stmt += ' READ ONLY' if deferrable: stmt += ' DEFERRABLE' cur = await self._get_cursor() try: await cur.execute(stmt) finally: cur.close() async def _commit_impl(self): cur = await self._get_cursor() try: await cur.execute('COMMIT') finally: cur.close() self._transaction = None async def _rollback_impl(self): cur = await self._get_cursor() try: await cur.execute('ROLLBACK') finally: cur.close() self._transaction = None def begin_nested(self): """Begin a nested transaction and return a transaction handle. The returned object is an instance of :class:`.NestedTransaction`. Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may .commit() and .rollback(), however the outermost transaction still controls the overall .commit() or .rollback() of the transaction of a whole. """ coro = self._begin_nested() return _TransactionContextManager(coro) async def _begin_nested(self): if self._transaction is None: self._transaction = RootTransaction(self) await self._begin_impl(None, False, False) else: self._transaction = NestedTransaction(self, self._transaction) self._transaction._savepoint = await self._savepoint_impl() return self._transaction async def _savepoint_impl(self, name=None): self._savepoint_seq += 1 name = 'aiopg_sa_savepoint_%s' % self._savepoint_seq cur = await self._get_cursor() try: await cur.execute('SAVEPOINT ' + name) return name finally: cur.close() async def _rollback_to_savepoint_impl(self, name, parent): cur = await self._get_cursor() try: await cur.execute('ROLLBACK TO SAVEPOINT ' + name) finally: cur.close() self._transaction = parent async def _release_savepoint_impl(self, name, parent): cur = await self._get_cursor() try: await cur.execute('RELEASE SAVEPOINT ' + name) finally: cur.close() self._transaction = parent async def begin_twophase(self, xid=None): """Begin a two-phase or XA transaction and return a transaction handle. The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a TwoPhaseTransaction.prepare() method. xid - the two phase transaction id. If not supplied, a random id will be generated. 
""" if self._transaction is not None: raise exc.InvalidRequestError( "Cannot start a two phase transaction when a transaction " "is already in progress.") if xid is None: xid = self._dialect.create_xid() self._transaction = TwoPhaseTransaction(self, xid) await self._begin_impl() return self._transaction async def _prepare_twophase_impl(self, xid): await self.execute("PREPARE TRANSACTION '%s'" % xid) async def recover_twophase(self): """Return a list of prepared twophase transaction ids.""" result = await self.execute("SELECT gid FROM pg_prepared_xacts") return [row[0] for row in result] async def rollback_prepared(self, xid, *, is_prepared=True): """Rollback prepared twophase transaction.""" if is_prepared: await self.execute("ROLLBACK PREPARED '%s'" % xid) else: await self._rollback_impl() async def commit_prepared(self, xid, *, is_prepared=True): """Commit prepared twophase transaction.""" if is_prepared: await self.execute("COMMIT PREPARED '%s'" % xid) else: await self._commit_impl() @property def in_transaction(self): """Return True if a transaction is in progress.""" return self._transaction is not None and self._transaction.is_active async def close(self): """Close this SAConnection. This results in a release of the underlying database resources, that is, the underlying connection referenced internally. The underlying connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this SAConnection. Any transactional state present on the underlying connection is also unconditionally released via calling Transaction.rollback() method. After .close() is called, the SAConnection is permanently in a closed state, and will allow no further operations. """ if self.connection is None: return if self._transaction is not None: await self._transaction.rollback() self._transaction = None # don't close underlying connection, it can be reused by pool # conn.close() self._engine.release(self) self._connection = None self._engine = None def _distill_params(multiparams, params): """Given arguments from the calling form *multiparams, **params, return a list of bind parameter structures, usually a list of dictionaries. In the case of 'raw' execution which accepts positional parameters, it may be a list of tuples or lists. 
""" if not multiparams: if params: return [params] else: return [] elif len(multiparams) == 1: zero = multiparams[0] if isinstance(zero, (list, tuple)): if not zero or hasattr(zero[0], '__iter__') and \ not hasattr(zero[0], 'strip'): # execute(stmt, [{}, {}, {}, ...]) # execute(stmt, [(), (), (), ...]) return zero else: # execute(stmt, ("value", "value")) return [zero] elif hasattr(zero, 'keys'): # execute(stmt, {"key":"value"}) return [zero] else: # execute(stmt, "value") return [[zero]] else: if (hasattr(multiparams[0], '__iter__') and not hasattr(multiparams[0], 'strip')): return multiparams else: return [multiparams] aiopg-1.0.0/aiopg/sa/engine.py0000664000372000037200000001502313542142326016775 0ustar travistravis00000000000000import json import aiopg from ..connection import TIMEOUT from ..utils import _PoolAcquireContextManager, _PoolContextManager from .connection import SAConnection from .exc import InvalidRequestError try: from sqlalchemy.dialects.postgresql.psycopg2 import PGDialect_psycopg2 from sqlalchemy.dialects.postgresql.psycopg2 import PGCompiler_psycopg2 except ImportError: # pragma: no cover raise ImportError('aiopg.sa requires sqlalchemy') class APGCompiler_psycopg2(PGCompiler_psycopg2): def construct_params(self, params=None, _group_number=None, _check=True): pd = super().construct_params(params, _group_number, _check) for column in self.prefetch: pd[column.key] = self._exec_default(column.default) return pd def _exec_default(self, default): if default.is_callable: return default.arg(self.dialect) else: return default.arg def get_dialect(json_serializer=json.dumps, json_deserializer=lambda x: x): dialect = PGDialect_psycopg2(json_serializer=json_serializer, json_deserializer=json_deserializer) dialect.statement_compiler = APGCompiler_psycopg2 dialect.implicit_returning = True dialect.supports_native_enum = True dialect.supports_smallserial = True # 9.2+ dialect._backslash_escapes = False dialect.supports_sane_multi_rowcount = True # psycopg 2.0.9+ dialect._has_native_hstore = True return dialect _dialect = get_dialect() def create_engine(dsn=None, *, minsize=1, maxsize=10, dialect=_dialect, timeout=TIMEOUT, pool_recycle=-1, **kwargs): """A coroutine for Engine creation. Returns Engine instance with embedded connection pool. The pool has *minsize* opened connections to PostgreSQL server. """ coro = _create_engine(dsn=dsn, minsize=minsize, maxsize=maxsize, dialect=dialect, timeout=timeout, pool_recycle=pool_recycle, **kwargs) return _EngineContextManager(coro) async def _create_engine(dsn=None, *, minsize=1, maxsize=10, dialect=_dialect, timeout=TIMEOUT, pool_recycle=-1, **kwargs): pool = await aiopg.create_pool( dsn, minsize=minsize, maxsize=maxsize, timeout=timeout, pool_recycle=pool_recycle, **kwargs ) conn = await pool.acquire() try: real_dsn = conn.dsn return Engine(dialect, pool, real_dsn) finally: await pool.release(conn) class Engine: """Connects a aiopg.Pool and sqlalchemy.engine.interfaces.Dialect together to provide a source of database connectivity and behavior. An Engine object is instantiated publicly using the create_engine coroutine. 
""" def __init__(self, dialect, pool, dsn): self._dialect = dialect self._pool = pool self._dsn = dsn @property def dialect(self): """An dialect for engine.""" return self._dialect @property def name(self): """A name of the dialect.""" return self._dialect.name @property def driver(self): """A driver of the dialect.""" return self._dialect.driver @property def dsn(self): """DSN connection info""" return self._dsn @property def timeout(self): return self._pool.timeout @property def minsize(self): return self._pool.minsize @property def maxsize(self): return self._pool.maxsize @property def size(self): return self._pool.size @property def freesize(self): return self._pool.freesize @property def closed(self): return self._pool.closed def close(self): """Close engine. Mark all engine connections to be closed on getting back to pool. Closed engine doesn't allow to acquire new connections. """ self._pool.close() def terminate(self): """Terminate engine. Terminate engine pool with instantly closing all acquired connections also. """ self._pool.terminate() async def wait_closed(self): """Wait for closing all engine's connections.""" await self._pool.wait_closed() def acquire(self): """Get a connection from pool.""" coro = self._acquire() return _EngineAcquireContextManager(coro, self) async def _acquire(self): raw = await self._pool.acquire() conn = SAConnection(raw, self) return conn def release(self, conn): """Revert back connection to pool.""" if conn.in_transaction: raise InvalidRequestError("Cannot release a connection with " "not finished transaction") raw = conn.connection fut = self._pool.release(raw) return fut def __enter__(self): raise RuntimeError( '"await" should be used as context manager expression') def __exit__(self, *args): # This must exist because __enter__ exists, even though that # always raises; that's how the with-statement works. pass # pragma: nocover def __await__(self): # This is not a coroutine. It is meant to enable the idiom: # # with (await engine) as conn: # # # as an alternative to: # # conn = await engine.acquire() # try: # # finally: # engine.release(conn) conn = yield from self._acquire().__await__() return _ConnectionContextManager(self, conn) async def __aenter__(self): return self async def __aexit__(self, exc_type, exc_val, exc_tb): self.close() await self.wait_closed() _EngineContextManager = _PoolContextManager _EngineAcquireContextManager = _PoolAcquireContextManager class _ConnectionContextManager: """Context manager. This enables the following idiom for acquiring and releasing a connection around a block: async with engine as conn: cur = await conn.cursor() while failing loudly when accidentally using: with engine: """ __slots__ = ('_engine', '_conn') def __init__(self, engine, conn): self._engine = engine self._conn = conn def __enter__(self): return self._conn def __exit__(self, *args): try: self._engine.release(self._conn) finally: self._engine = None self._conn = None aiopg-1.0.0/aiopg/sa/exc.py0000664000372000037200000000127113542142326016307 0ustar travistravis00000000000000class Error(Exception): """Generic error class.""" class ArgumentError(Error): """Raised when an invalid or conflicting function argument is supplied. This error generally corresponds to construction time state errors. """ class InvalidRequestError(ArgumentError): """aiopg.sa was asked to do something it can't do. This error generally corresponds to runtime state errors. 
""" class NoSuchColumnError(KeyError, InvalidRequestError): """A nonexistent column is requested from a ``RowProxy``.""" class ResourceClosedError(InvalidRequestError): """An operation was requested from a connection, cursor, or other object that's in a closed state.""" aiopg-1.0.0/aiopg/sa/result.py0000664000372000037200000003412713542142326017054 0ustar travistravis00000000000000import weakref from collections.abc import Mapping, Sequence from sqlalchemy.sql import expression, sqltypes from . import exc class RowProxy(Mapping): __slots__ = ('_result_proxy', '_row', '_processors', '_keymap') def __init__(self, result_proxy, row, processors, keymap): """RowProxy objects are constructed by ResultProxy objects.""" self._result_proxy = result_proxy self._row = row self._processors = processors self._keymap = keymap def __iter__(self): return iter(self._result_proxy.keys) def __len__(self): return len(self._row) def __getitem__(self, key): try: processor, obj, index = self._keymap[key] except KeyError: processor, obj, index = self._result_proxy._key_fallback(key) # Do we need slicing at all? RowProxy now is Mapping not Sequence # except TypeError: # if isinstance(key, slice): # l = [] # for processor, value in zip(self._processors[key], # self._row[key]): # if processor is None: # l.append(value) # else: # l.append(processor(value)) # return tuple(l) # else: # raise if index is None: raise exc.InvalidRequestError( "Ambiguous column name '%s' in result set! " "try 'use_labels' option on select statement." % key) if processor is not None: return processor(self._row[index]) else: return self._row[index] def __getattr__(self, name): try: return self[name] except KeyError as e: raise AttributeError(e.args[0]) def __contains__(self, key): return self._result_proxy._has_key(self._row, key) __hash__ = None def __eq__(self, other): if isinstance(other, RowProxy): return self.as_tuple() == other.as_tuple() elif isinstance(other, Sequence): return self.as_tuple() == other else: return NotImplemented def __ne__(self, other): return not self == other def as_tuple(self): return tuple(self[k] for k in self) def __repr__(self): return repr(self.as_tuple()) class ResultMetaData(object): """Handle cursor.description, applying additional info from an execution context.""" def __init__(self, result_proxy, cursor_description): self._processors = processors = [] map_type, map_column_name = self.result_map(result_proxy._result_map) # We do not strictly need to store the processor in the key mapping, # though it is faster in the Python version (probably because of the # saved attribute lookup self._processors) self._keymap = keymap = {} self.keys = [] dialect = result_proxy.dialect # `dbapi_type_map` property removed in SQLAlchemy 1.2+. # Usage of `getattr` only needed for backward compatibility with # older versions of SQLAlchemy. typemap = getattr(dialect, 'dbapi_type_map', {}) assert dialect.case_sensitive, \ "Doesn't support case insensitive database connection" # high precedence key values. primary_keymap = {} assert not dialect.description_encoding, \ "psycopg in py3k should not use this" for i, rec in enumerate(cursor_description): colname = rec[0] coltype = rec[1] # PostgreSQL doesn't require this. 
# if dialect.requires_name_normalize: # colname = dialect.normalize_name(colname) name, obj, type_ = ( map_column_name.get(colname, colname), None, map_type.get(colname, typemap.get(coltype, sqltypes.NULLTYPE)) ) processor = type_._cached_result_processor(dialect, coltype) processors.append(processor) rec = (processor, obj, i) # indexes as keys. This is only needed for the Python version of # RowProxy (the C version uses a faster path for integer indexes). primary_keymap[i] = rec # populate primary keymap, looking for conflicts. if primary_keymap.setdefault(name, rec) is not rec: # place a record that doesn't have the "index" - this # is interpreted later as an AmbiguousColumnError, # but only when actually accessed. Columns # colliding by name is not a problem if those names # aren't used; integer access is always # unambiguous. primary_keymap[name] = rec = (None, obj, None) self.keys.append(name) if obj: for o in obj: keymap[o] = rec # technically we should be doing this but we # are saving on callcounts by not doing so. # if keymap.setdefault(o, rec) is not rec: # keymap[o] = (None, obj, None) # overwrite keymap values with those of the # high precedence keymap. keymap.update(primary_keymap) def result_map(self, data_map): data_map = data_map or {} map_type = {} map_column_name = {} for elem in data_map: name = elem[0] priority_name = getattr(elem[2][0], 'key', name) map_type[name] = elem[3] # type column map_column_name[name] = priority_name return map_type, map_column_name def _key_fallback(self, key, raiseerr=True): map = self._keymap result = None if isinstance(key, str): result = map.get(key) # fallback for targeting a ColumnElement to a textual expression # this is a rare use case which only occurs when matching text() # or column('name') constructs to ColumnElements, or after a # pickle/unpickle roundtrip elif isinstance(key, expression.ColumnElement): if (key._label and key._label in map): result = map[key._label] elif (hasattr(key, 'key') and key.key in map): # match is only on name. result = map[key.key] # search extra hard to make sure this # isn't a column/label name overlap. # this check isn't currently available if the row # was unpickled. if result is not None and result[1] is not None: for obj in result[1]: if key._compare_name_for_result(obj): break else: result = None if result is None: if raiseerr: raise exc.NoSuchColumnError( "Could not locate column in row for column '%s'" % expression._string_or_unprintable(key)) else: return None else: map[key] = result return result def _has_key(self, row, key): if key in self._keymap: return True else: return self._key_fallback(key, False) is not None class ResultProxy: """Wraps a DB-API cursor object to provide easier access to row columns. Individual columns may be accessed by their integer position, case-insensitive column name, or by sqlalchemy schema.Column object. e.g.: row = fetchone() col1 = row[0] # access via integer position col2 = row['col2'] # access via name col3 = row[mytable.c.mycol] # access via Column object. ResultProxy also handles post-processing of result column data using sqlalchemy TypeEngine objects, which are referenced from the originating SQL statement that produced this result set.
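Rows can also be consumed with async iteration (a sketch; ``conn`` and ``tbl`` are illustrative)::

    res = await conn.execute(tbl.select())
    async for row in res:
        print(row.id, row.val)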
""" def __init__(self, connection, cursor, dialect, result_map=None): self._dialect = dialect self._result_map = result_map self._cursor = cursor self._connection = connection self._rowcount = cursor.rowcount self._metadata = None self._weak = None self._init_metadata() @property def dialect(self): """SQLAlchemy dialect.""" return self._dialect @property def cursor(self): return self._cursor def keys(self): """Return the current set of string keys for rows.""" if self._metadata: return tuple(self._metadata.keys) else: return () @property def rowcount(self): """Return the 'rowcount' for this result. The 'rowcount' reports the number of rows *matched* by the WHERE criterion of an UPDATE or DELETE statement. .. note:: Notes regarding .rowcount: * This attribute returns the number of rows *matched*, which is not necessarily the same as the number of rows that were actually *modified* - an UPDATE statement, for example, may have no net change on a given row if the SET values given are the same as those present in the row already. Such a row would be matched but not modified. * .rowcount is *only* useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does *not* return the number of rows available from the results of a SELECT statement as DBAPIs cannot support this functionality when rows are unbuffered. * Statements that use RETURNING may not return a correct rowcount. """ return self._rowcount def _init_metadata(self): cursor_description = self.cursor.description if cursor_description is not None: self._metadata = ResultMetaData(self, cursor_description) self._weak = weakref.ref(self, lambda wr: self.cursor.close()) else: self.close() self._weak = None @property def returns_rows(self): """True if this ResultProxy returns rows. I.e. if it is legal to call the methods .fetchone(), .fetchmany() and .fetchall()`. """ return self._metadata is not None @property def closed(self): if self._cursor is None: return True return bool(self._cursor.closed) def close(self): """Close this ResultProxy. Closes the underlying DBAPI cursor corresponding to the execution. Note that any data cached within this ResultProxy is still available. For some types of results, this may include buffered rows. If this ResultProxy was generated from an implicit execution, the underlying Connection will also be closed (returns the underlying DBAPI connection to the connection pool.) This method is called automatically when: * all result rows are exhausted using the fetchXXX() methods. * cursor.description is None. """ if not self.closed: self.cursor.close() # allow consistent errors self._cursor = None self._weak = None def __aiter__(self): return self async def __anext__(self): ret = await self.fetchone() if ret is not None: return ret else: raise StopAsyncIteration def _non_result(self): if self._metadata is None: raise exc.ResourceClosedError( "This result object does not return rows. 
" "It has been closed automatically.") else: raise exc.ResourceClosedError("This result object is closed.") def _process_rows(self, rows): process_row = RowProxy metadata = self._metadata keymap = metadata._keymap processors = metadata._processors return [process_row(metadata, row, processors, keymap) for row in rows] async def fetchall(self): """Fetch all rows, just like DB-API cursor.fetchall().""" try: rows = await self.cursor.fetchall() except AttributeError: self._non_result() else: res = self._process_rows(rows) self.close() return res async def fetchone(self): """Fetch one row, just like DB-API cursor.fetchone(). If a row is present, the cursor remains open after this is called. Else the cursor is automatically closed and None is returned. """ try: row = await self.cursor.fetchone() except AttributeError: self._non_result() else: if row is not None: return self._process_rows([row])[0] else: self.close() return None async def fetchmany(self, size=None): """Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize). If rows are present, the cursor remains open after this is called. Else the cursor is automatically closed and an empty list is returned. """ try: if size is None: rows = await self.cursor.fetchmany() else: rows = await self.cursor.fetchmany(size) except AttributeError: self._non_result() else: res = self._process_rows(rows) if len(res) == 0: self.close() return res async def first(self): """Fetch the first row and then close the result set unconditionally. Returns None if no row is present. """ if self._metadata is None: self._non_result() try: return await self.fetchone() finally: self.close() async def scalar(self): """Fetch the first column of the first row, and close the result set. Returns None if no row is present. """ row = await self.first() if row is not None: return row[0] else: return None aiopg-1.0.0/aiopg/sa/transaction.py0000664000372000037200000001135113542142326020055 0ustar travistravis00000000000000from . import exc class Transaction(object): """Represent a database transaction in progress. The Transaction object is procured by calling the SAConnection.begin() method of SAConnection: async with engine as conn: trans = await conn.begin() try: await conn.execute("insert into x (a, b) values (1, 2)") except Exception: await trans.rollback() else: await trans.commit() The object provides .rollback() and .commit() methods in order to control transaction boundaries. See also: SAConnection.begin(), SAConnection.begin_twophase(), SAConnection.begin_nested(). """ def __init__(self, connection, parent): self._connection = connection self._parent = parent or self self._is_active = True @property def is_active(self): """Return ``True`` if a transaction is active.""" return self._is_active @property def connection(self): """Return transaction's connection (SAConnection instance).""" return self._connection async def close(self): """Close this transaction. If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns. This is used to cancel a Transaction without affecting the scope of an enclosing transaction. 
""" if not self._parent._is_active: return if self._parent is self: await self.rollback() else: self._is_active = False async def rollback(self): """Roll back this transaction.""" if not self._parent._is_active: return await self._do_rollback() self._is_active = False async def _do_rollback(self): await self._parent.rollback() async def commit(self): """Commit this transaction.""" if not self._parent._is_active: raise exc.InvalidRequestError("This transaction is inactive") await self._do_commit() self._is_active = False async def _do_commit(self): pass async def __aenter__(self): return self async def __aexit__(self, exc_type, exc_val, exc_tb): if exc_type: await self.rollback() else: if self._is_active: await self.commit() class RootTransaction(Transaction): def __init__(self, connection): super().__init__(connection, None) async def _do_rollback(self): await self._connection._rollback_impl() async def _do_commit(self): await self._connection._commit_impl() class NestedTransaction(Transaction): """Represent a 'nested', or SAVEPOINT transaction. A new NestedTransaction object may be procured using the SAConnection.begin_nested() method. The interface is the same as that of Transaction class. """ _savepoint = None def __init__(self, connection, parent): super(NestedTransaction, self).__init__(connection, parent) async def _do_rollback(self): assert self._savepoint is not None, "Broken transaction logic" if self._is_active: await self._connection._rollback_to_savepoint_impl( self._savepoint, self._parent) async def _do_commit(self): assert self._savepoint is not None, "Broken transaction logic" if self._is_active: await self._connection._release_savepoint_impl( self._savepoint, self._parent) class TwoPhaseTransaction(Transaction): """Represent a two-phase transaction. A new TwoPhaseTransaction object may be procured using the SAConnection.begin_twophase() method. The interface is the same as that of Transaction class with the addition of the .prepare() method. """ def __init__(self, connection, xid): super().__init__(connection, None) self._is_prepared = False self._xid = xid @property def xid(self): """Returns twophase transaction id.""" return self._xid async def prepare(self): """Prepare this TwoPhaseTransaction. After a PREPARE, the transaction can be committed. 
""" if not self._parent.is_active: raise exc.InvalidRequestError("This transaction is inactive") await self._connection._prepare_twophase_impl(self._xid) self._is_prepared = True async def _do_rollback(self): await self._connection._rollback_twophase_impl( self._xid, is_prepared=self._is_prepared) async def _do_commit(self): await self._connection._commit_twophase_impl( self._xid, is_prepared=self._is_prepared) aiopg-1.0.0/aiopg/transaction.py0000664000372000037200000001176613542142326017464 0ustar travistravis00000000000000import enum import uuid import warnings from abc import ABC, abstractmethod import psycopg2 from aiopg.utils import _TransactionPointContextManager __all__ = ('IsolationLevel', 'Transaction') class IsolationCompiler(ABC): name = '' __slots__ = ('_readonly', '_deferrable') def __init__(self, readonly, deferrable): self._readonly = readonly self._deferrable = deferrable self._check_readonly_deferrable() def _check_readonly_deferrable(self): available = self._readonly or self._deferrable if not isinstance(self, SerializableCompiler) and available: raise ValueError('Is only available for serializable transactions') def savepoint(self, unique_id): return 'SAVEPOINT {}'.format(unique_id) def release_savepoint(self, unique_id): return 'RELEASE SAVEPOINT {}'.format(unique_id) def rollback_savepoint(self, unique_id): return 'ROLLBACK TO SAVEPOINT {}'.format(unique_id) def commit(self): return 'COMMIT' def rollback(self): return 'ROLLBACK' @abstractmethod def begin(self): raise NotImplementedError("Please Implement this method") def __repr__(self): return self.name class ReadCommittedCompiler(IsolationCompiler): name = 'Read committed' def begin(self): return 'BEGIN' class RepeatableReadCompiler(IsolationCompiler): name = 'Repeatable read' def begin(self): return 'BEGIN ISOLATION LEVEL REPEATABLE READ' class SerializableCompiler(IsolationCompiler): name = 'Serializable' def begin(self): query = 'BEGIN ISOLATION LEVEL SERIALIZABLE' if self._readonly: query += ' READ ONLY' if self._deferrable: query += ' DEFERRABLE' return query class IsolationLevel(enum.Enum): serializable = SerializableCompiler repeatable_read = RepeatableReadCompiler read_committed = ReadCommittedCompiler def __call__(self, readonly, deferrable): return self.value(readonly, deferrable) class Transaction: __slots__ = ('_cur', '_is_begin', '_isolation', '_unique_id') def __init__(self, cur, isolation_level, readonly=False, deferrable=False): self._cur = cur self._is_begin = False self._unique_id = None self._isolation = isolation_level(readonly, deferrable) @property def is_begin(self): return self._is_begin async def begin(self): if self._is_begin: raise psycopg2.ProgrammingError( 'You are trying to open a new transaction, use the save point') self._is_begin = True await self._cur.execute(self._isolation.begin()) return self async def commit(self): self._check_commit_rollback() await self._cur.execute(self._isolation.commit()) self._is_begin = False async def rollback(self): self._check_commit_rollback() await self._cur.execute(self._isolation.rollback()) self._is_begin = False async def rollback_savepoint(self): self._check_release_rollback() await self._cur.execute( self._isolation.rollback_savepoint(self._unique_id)) self._unique_id = None async def release_savepoint(self): self._check_release_rollback() await self._cur.execute( self._isolation.release_savepoint(self._unique_id)) self._unique_id = None async def savepoint(self): self._check_commit_rollback() if self._unique_id is not None: raise 
self._unique_id = 's{}'.format(uuid.uuid1().hex) await self._cur.execute( self._isolation.savepoint(self._unique_id)) return self def point(self): return _TransactionPointContextManager(self.savepoint()) def _check_commit_rollback(self): if not self._is_begin: raise psycopg2.ProgrammingError('You are trying to commit ' 'a transaction that is not open') def _check_release_rollback(self): self._check_commit_rollback() if self._unique_id is None: raise psycopg2.ProgrammingError('You have not started a savepoint') def __repr__(self): return "<{} transaction={} id={:#x}>".format( self.__class__.__name__, self._isolation, id(self) ) def __del__(self): if self._is_begin: warnings.warn( "You have not closed transaction {!r}".format(self), ResourceWarning) if self._unique_id is not None: warnings.warn( "You have not closed savepoint {!r}".format(self), ResourceWarning) async def __aenter__(self): return (await self.begin()) async def __aexit__(self, exc_type, exc, tb): if exc_type is not None: await self.rollback() else: await self.commit() aiopg-1.0.0/aiopg/utils.py0000664000372000037200000001407313542142326016271 0ustar travistravis00000000000000import asyncio import sys import warnings from collections.abc import Coroutine import psycopg2 from .log import logger try: ensure_future = asyncio.ensure_future except AttributeError: ensure_future = getattr(asyncio, 'async') if sys.version_info >= (3, 7, 0): __get_running_loop = asyncio.get_running_loop else: def __get_running_loop() -> asyncio.AbstractEventLoop: loop = asyncio.get_event_loop() if not loop.is_running(): raise RuntimeError('no running event loop') return loop def get_running_loop(is_warn: bool = False) -> asyncio.AbstractEventLoop: loop = __get_running_loop() if is_warn: warnings.warn( 'aiopg always uses "aiopg.get_running_loop"; ' 'see the documentation.', DeprecationWarning, stacklevel=3 ) if loop.get_debug(): logger.warning( 'aiopg always uses "aiopg.get_running_loop"; ' 'see the documentation.', exc_info=True ) return loop def create_future(loop): try: return loop.create_future() except AttributeError: return asyncio.Future(loop=loop) class _ContextManager(Coroutine): __slots__ = ('_coro', '_obj') def __init__(self, coro): self._coro = coro self._obj = None def send(self, value): return self._coro.send(value) def throw(self, typ, val=None, tb=None): if val is None: return self._coro.throw(typ) elif tb is None: return self._coro.throw(typ, val) else: return self._coro.throw(typ, val, tb) def close(self): return self._coro.close() @property def gi_frame(self): return self._coro.gi_frame @property def gi_running(self): return self._coro.gi_running @property def gi_code(self): return self._coro.gi_code def __next__(self): return self.send(None) def __await__(self): resp = self._coro.__await__() return resp async def __aenter__(self): self._obj = await self._coro return self._obj async def __aexit__(self, exc_type, exc, tb): self._obj.close() self._obj = None class _SAConnectionContextManager(_ContextManager): def __aiter__(self): return self async def __anext__(self): if self._obj is None: self._obj = await self._coro try: return (await self._obj.__anext__()) except StopAsyncIteration: self._obj.close() self._obj = None raise class _PoolContextManager(_ContextManager): async def __aexit__(self, exc_type, exc, tb): self._obj.close() await self._obj.wait_closed() self._obj = None class _TransactionPointContextManager(_ContextManager): async def __aexit__(self, exc_type, exc_val,
exc_tb): if exc_type is not None: await self._obj.rollback_savepoint() else: await self._obj.release_savepoint() self._obj = None class _TransactionBeginContextManager(_ContextManager): async def __aexit__(self, exc_type, exc_val, exc_tb): if exc_type is not None: await self._obj.rollback() else: await self._obj.commit() self._obj = None class _TransactionContextManager(_ContextManager): async def __aexit__(self, exc_type, exc, tb): if exc_type: await self._obj.rollback() else: if self._obj.is_active: await self._obj.commit() self._obj = None class _PoolAcquireContextManager(_ContextManager): __slots__ = ('_coro', '_obj', '_pool') def __init__(self, coro, pool): super().__init__(coro) self._pool = pool async def __aexit__(self, exc_type, exc, tb): await self._pool.release(self._obj) self._pool = None self._obj = None class _PoolConnectionContextManager: """Context manager. This enables the following idiom for acquiring and releasing a connection around a block: async with pool as conn: cur = await conn.cursor() while failing loudly when accidentally using: with pool: """ __slots__ = ('_pool', '_conn') def __init__(self, pool, conn): self._pool = pool self._conn = conn def __enter__(self): assert self._conn return self._conn def __exit__(self, exc_type, exc_val, exc_tb): try: self._pool.release(self._conn) finally: self._pool = None self._conn = None async def __aenter__(self): assert not self._conn self._conn = await self._pool.acquire() return self._conn async def __aexit__(self, exc_type, exc_val, exc_tb): try: await self._pool.release(self._conn) finally: self._pool = None self._conn = None class _PoolCursorContextManager: """Context manager. This enables the following idiom for acquiring and releasing a cursor around a block: async with pool.cursor() as cur: await cur.execute("SELECT 1") while failing loudly when accidentally using: with pool: """ __slots__ = ('_pool', '_conn', '_cur') def __init__(self, pool, conn, cur): self._pool = pool self._conn = conn self._cur = cur def __enter__(self): return self._cur def __exit__(self, *args): try: self._cur.close() except psycopg2.ProgrammingError: # seen instances where the cursor fails to close: # https://github.com/aio-libs/aiopg/issues/364 # We close it here so we don't return a bad connection to the pool self._conn.close() raise finally: try: self._pool.release(self._conn) finally: self._pool = None self._conn = None self._cur = None aiopg-1.0.0/aiopg.egg-info/0000775000372000037200000000000013542143051016240 5ustar travistravis00000000000000aiopg-1.0.0/aiopg.egg-info/PKG-INFO0000664000372000037200000003201713542143051017340 0ustar travistravis00000000000000Metadata-Version: 2.1 Name: aiopg Version: 1.0.0 Summary: Postgres integration with asyncio. Home-page: https://aiopg.readthedocs.io Author: Andrew Svetlov Author-email: andrew.svetlov@gmail.com Maintainer: Andrew Svetlov , Alexey Firsov , Alexey Popravka Maintainer-email: virmir49@gmail.com License: BSD Download-URL: https://pypi.python.org/pypi/aiopg Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby Project-URL: CI: Travis, https://travis-ci.com/aio-libs/aiopg Project-URL: Coverage: codecov, https://codecov.io/gh/aio-libs/aiopg Project-URL: Docs: RTD, https://aiopg.readthedocs.io Project-URL: GitHub: issues, https://github.com/aio-libs/aiopg/issues Project-URL: GitHub: repo, https://github.com/aio-libs/aiopg Description: aiopg ===== .. image:: https://travis-ci.com/aio-libs/aiopg.svg?branch=master :target: https://travis-ci.com/aio-libs/aiopg .. 
image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg :target: https://codecov.io/gh/aio-libs/aiopg .. image:: https://badges.gitter.im/Join%20Chat.svg :target: https://gitter.im/aio-libs/Lobby :alt: Chat on Gitter **aiopg** is a library for accessing a PostgreSQL_ database from the asyncio_ (PEP-3156/tulip) framework. It wraps asynchronous features of the Psycopg database driver. Example ------- .. code:: python import asyncio import aiopg dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1' async def go(): pool = await aiopg.create_pool(dsn) async with pool.acquire() as conn: async with conn.cursor() as cur: await cur.execute("SELECT 1") ret = [] async for row in cur: ret.append(row) assert ret == [(1,)] loop = asyncio.get_event_loop() loop.run_until_complete(go()) Example of SQLAlchemy optional integration ------------------------------------------ .. code:: python import asyncio from aiopg.sa import create_engine import sqlalchemy as sa metadata = sa.MetaData() tbl = sa.Table('tbl', metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('val', sa.String(255))) async def create_table(engine): async with engine.acquire() as conn: await conn.execute('DROP TABLE IF EXISTS tbl') await conn.execute('''CREATE TABLE tbl ( id serial PRIMARY KEY, val varchar(255))''') async def go(): async with create_engine(user='aiopg', database='aiopg', host='127.0.0.1', password='passwd') as engine: async with engine.acquire() as conn: await conn.execute(tbl.insert().values(val='abc')) async for row in conn.execute(tbl.select()): print(row.id, row.val) loop = asyncio.get_event_loop() loop.run_until_complete(go()) .. _PostgreSQL: http://www.postgresql.org/ .. _asyncio: http://docs.python.org/3.4/library/asyncio.html Please use:: $ make test for executing the project's unittests. See https://aiopg.readthedocs.io/en/stable/contributing.html for details on how to set up your environment to run the tests. 
Changelog --------- 1.0.0 (2019-09-20) * Removal of an asynchronous call in favor of issues # 550 * Big editing of documentation and minor bugs #534 0.16.0 (2019-01-25) ^^^^^^^^^^^^^^^^^^^ * Fix select priority name `#525 `_ * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 `_ * Fix `#189 `_ hstore when using ReadDictCursor `#512 `_ * close cannot be used while an asynchronous query is underway `#452 `_ * sqlalchemy adapter trx begin allow transaction_mode `#498 `_ 0.15.0 (2018-08-14) ^^^^^^^^^^^^^^^^^^^ * Support Python 3.7 `#437 `_ 0.14.0 (2018-05-10) ^^^^^^^^^^^^^^^^^^^ * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 `_ 0.13.2 (2018-01-03) ^^^^^^^^^^^^^^^^^^^ * Fixed compatibility with SQLAlchemy 1.2.0 `#412 `_ * Added support for transaction isolation levels `#219 `_ 0.13.1 (2017-09-10) ^^^^^^^^^^^^^^^^^^^ * Added connection poll recycling logic `#373 `_ 0.13.0 (2016-12-02) ^^^^^^^^^^^^^^^^^^^ * Add `async with` support to `.begin_nested()` `#208 `_ * Fix connection.cancel() `#212 `_ `#223 `_ * Raise informative error on unexpected connection closing `#191 `_ * Added support for python types columns issues `#217 `_ * Added support for default values in SA table issues `#206 `_ 0.12.0 (2016-10-09) ^^^^^^^^^^^^^^^^^^^ * Add an on_connect callback parameter to pool `#141 `_ * Fixed connection to work under both windows and posix based systems `#142 `_ 0.11.0 (2016-09-12) ^^^^^^^^^^^^^^^^^^^ * Immediately remove callbacks from a closed file descriptor `#139 `_ * Drop Python 3.3 support 0.10.0 (2016-07-16) ^^^^^^^^^^^^^^^^^^^ * Refactor tests to use dockerized Postgres server `#107 `_ * Reduce default pool minsize to 1 `#106 `_ * Explicitly enumerate packages in setup.py `#85 `_ * Remove expired connections from pool on acquire `#116 `_ * Don't crash when Connection is GC'ed `#124 `_ * Use loop.create_future() if available 0.9.2 (2016-01-31) ^^^^^^^^^^^^^^^^^^ * Make pool.release return asyncio.Future, so we can wait on it in `__aexit__` `#102 `_ * Add support for uuid type `#103 `_ 0.9.1 (2016-01-17) ^^^^^^^^^^^^^^^^^^ * Documentation update `#101 `_ 0.9.0 (2016-01-14) ^^^^^^^^^^^^^^^^^^ * Add async context managers for transactions `#91 `_ * Support async iterator in ResultProxy `#92 `_ * Add async with for engine `#90 `_ 0.8.0 (2015-12-31) ^^^^^^^^^^^^^^^^^^ * Add PostgreSQL notification support `#58 `_ * Support pools with unlimited size `#59 `_ * Cancel current DB operation on asyncio timeout `#66 `_ * Add async with support for Pool, Connection, Cursor `#88 `_ 0.7.0 (2015-04-22) ^^^^^^^^^^^^^^^^^^ * Get rid of resource leak on connection failure. * Report ResourceWarning on non-closed connections. * Deprecate iteration protocol support in cursor and ResultProxy. * Release sa connection to pool on `connection.close()`. 0.6.0 (2015-02-03) ^^^^^^^^^^^^^^^^^^ * Accept dict, list, tuple, named and positional parameters in `SAConnection.execute()` 0.5.2 (2014-12-08) ^^^^^^^^^^^^^^^^^^ * Minor release, fixes a bug that leaves connection in broken state after `cursor.execute()` failure. 0.5.1 (2014-10-31) ^^^^^^^^^^^^^^^^^^ * Fix a bug for processing transactions in line. 
0.5.0 (2014-10-31) ^^^^^^^^^^^^^^^^^^ * Add .terminate() to Pool and Engine * Reimplement connection pool (now pool size cannot be greater than pool.maxsize) * Add .close() and .wait_closed() to Pool and Engine * Add minsize, maxsize, size and freesize properties to sa.Engine * Support *echo* parameter for logging executed SQL commands * Connection.close() is not a coroutine (but we keep backward compatibility). 0.4.1 (2014-10-02) ^^^^^^^^^^^^^^^^^^ * make cursor iterable * update docs 0.4.0 (2014-10-02) ^^^^^^^^^^^^^^^^^^ * add timeouts for database operations. * Autoregister psycopg2 support for json data type. * Support JSON in aiopg.sa * Support ARRAY in aiopg.sa * Autoregister hstore support if present in connected DB * Support HSTORE in aiopg.sa 0.3.2 (2014-07-07) ^^^^^^^^^^^^^^^^^^ * change signature to cursor.execute(operation, parameters=None) to follow psycopg2 convention. 0.3.1 (2014-07-04) ^^^^^^^^^^^^^^^^^^ * Forward arguments to cursor constructor for pooled connections. 0.3.0 (2014-06-22) ^^^^^^^^^^^^^^^^^^ * Allow executing SQLAlchemy DDL statements. * Fix bug with race conditions on acquiring/releasing connections from pool. 0.2.3 (2014-06-12) ^^^^^^^^^^^^^^^^^^ * Fix bug in connection pool. 0.2.2 (2014-06-07) ^^^^^^^^^^^^^^^^^^ * Fix bug with passing parameters into SAConnection.execute when executing raw SQL expression. 0.2.1 (2014-05-08) ^^^^^^^^^^^^^^^^^^ * Close connection with invalid transaction status on returning to pool. 0.2.0 (2014-05-04) ^^^^^^^^^^^^^^^^^^ * Implemented optional support for sqlalchemy functional sql layer. 0.1.0 (2014-04-06) ^^^^^^^^^^^^^^^^^^ * Implemented plain connections: connect, Connection, Cursor. * Implemented database pools: create_pool and Pool. Platform: macOS Platform: POSIX Platform: Windows Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Operating System :: POSIX Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Microsoft :: Windows Classifier: Environment :: Web Environment Classifier: Development Status :: 5 - Production/Stable Classifier: Topic :: Database Classifier: Topic :: Database :: Front-Ends Classifier: Framework :: AsyncIO Requires-Python: >=3.5.3 Provides-Extra: sa aiopg-1.0.0/aiopg.egg-info/SOURCES.txt0000664000372000037200000000072013542143051020123 0ustar travistravis00000000000000CHANGES.txt LICENSE.txt MAINTAINERS.txt MANIFEST.in README.rst setup.cfg setup.py aiopg/__init__.py aiopg/connection.py aiopg/cursor.py aiopg/log.py aiopg/pool.py aiopg/transaction.py aiopg/utils.py aiopg.egg-info/PKG-INFO aiopg.egg-info/SOURCES.txt aiopg.egg-info/dependency_links.txt aiopg.egg-info/requires.txt aiopg.egg-info/top_level.txt aiopg/sa/__init__.py aiopg/sa/connection.py aiopg/sa/engine.py aiopg/sa/exc.py aiopg/sa/result.py aiopg/sa/transaction.pyaiopg-1.0.0/aiopg.egg-info/dependency_links.txt0000664000372000037200000000000113542143051022306 0ustar travistravis00000000000000 aiopg-1.0.0/aiopg.egg-info/requires.txt0000664000372000037200000000011013542143051020630 0ustar travistravis00000000000000psycopg2-binary>=2.7.0 [sa] sqlalchemy[postgresql_psycopg2binary]>=1.1 aiopg-1.0.0/aiopg.egg-info/top_level.txt0000664000372000037200000000000613542143051020766 0ustar 
travistravis00000000000000aiopg aiopg-1.0.0/setup.cfg0000664000372000037200000000016613542143051015273 0ustar travistravis00000000000000[tool:pytest] timeout = 300 [coverage:run] branch = true source = aiopg,tests [egg_info] tag_build = tag_date = 0 aiopg-1.0.0/setup.py0000664000372000037200000000522613542142326015172 0ustar travistravis00000000000000import os import re from setuptools import setup, find_packages install_requires = ['psycopg2-binary>=2.7.0'] extras_require = {'sa': ['sqlalchemy[postgresql_psycopg2binary]>=1.1']} def read(f): return open(os.path.join(os.path.dirname(__file__), f)).read().strip() def get_maintainers(path='MAINTAINERS.txt'): with open(os.path.join(os.path.dirname(__file__), path)) as f: return ', '.join(x.strip().strip('*').strip() for x in f.readlines()) def read_version(): regexp = re.compile(r"^__version__\W*=\W*'([\d.abrc]+)'") init_py = os.path.join(os.path.dirname(__file__), 'aiopg', '__init__.py') with open(init_py) as f: for line in f: match = regexp.match(line) if match is not None: return match.group(1) else: raise RuntimeError('Cannot find version in aiopg/__init__.py') def read_changelog(path='CHANGES.txt'): return 'Changelog\n---------\n\n{}'.format(read(path)) classifiers = [ 'License :: OSI Approved :: BSD License', 'Intended Audience :: Developers', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3 :: Only', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Operating System :: POSIX', 'Operating System :: MacOS :: MacOS X', 'Operating System :: Microsoft :: Windows', 'Environment :: Web Environment', 'Development Status :: 5 - Production/Stable', 'Topic :: Database', 'Topic :: Database :: Front-Ends', 'Framework :: AsyncIO', ] setup( name='aiopg', version=read_version(), description='Postgres integration with asyncio.', long_description='\n\n'.join((read('README.rst'), read_changelog())), classifiers=classifiers, platforms=['macOS', 'POSIX', 'Windows'], author='Andrew Svetlov', python_requires='>=3.5.3', project_urls={ 'Chat: Gitter': 'https://gitter.im/aio-libs/Lobby', 'CI: Travis': 'https://travis-ci.com/aio-libs/aiopg', 'Coverage: codecov': 'https://codecov.io/gh/aio-libs/aiopg', 'Docs: RTD': 'https://aiopg.readthedocs.io', 'GitHub: issues': 'https://github.com/aio-libs/aiopg/issues', 'GitHub: repo': 'https://github.com/aio-libs/aiopg', }, author_email='andrew.svetlov@gmail.com', maintainer=get_maintainers(), maintainer_email='virmir49@gmail.com', url='https://aiopg.readthedocs.io', download_url='https://pypi.python.org/pypi/aiopg', license='BSD', packages=find_packages(), install_requires=install_requires, extras_require=extras_require, include_package_data=True )