././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1635784409.7227902 aiopg-1.3.3/0000755000175100001710000000000000000000000012155 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/CHANGES.txt0000644000175100001710000002316100000000000013771 0ustar00runnerdocker1.3.3 (2021-11-01) ^^^^^^^^^^^^^^^^^^ * Support async-timeout 4.0+ 1.3.2 (2021-10-07) ^^^^^^^^^^^^^^^^^^ 1.3.2b2 (2021-10-07) ^^^^^^^^^^^^^^^^^^^^ * Respect use_labels for select statement `#882 `_ 1.3.2b1 (2021-07-11) ^^^^^^^^^^^^^^^^^^^^ * Fix compatibility with SQLAlchemy >= 1.4 `#870 `_ 1.3.1 (2021-07-08) ^^^^^^^^^^^^^^^^^^ 1.3.1b2 (2021-07-06) ^^^^^^^^^^^^^^^^^^^^ * Suppress "Future exception was never retrieved" `#862 `_ 1.3.1b1 (2021-07-05) ^^^^^^^^^^^^^^^^^^^^ * Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 `_ 1.3.0 (2021-06-30) ^^^^^^^^^^^^^^^^^^ 1.3.0b4 (2021-06-28) ^^^^^^^^^^^^^^^^^^^^ * Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 `_ 1.3.0b3 (2021-04-03) ^^^^^^^^^^^^^^^^^^^^ * Reformat using black `#814 `_ 1.3.0b2 (2021-04-02) ^^^^^^^^^^^^^^^^^^^^ * Type annotations `#813 `_ 1.3.0b1 (2021-03-30) ^^^^^^^^^^^^^^^^^^^^ * Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 `_ 1.3.0b0 (2021-03-25) ^^^^^^^^^^^^^^^^^^^^ * Fix compatibility with SA 1.4 for IN statement `#806 `_ 1.2.1 (2021-03-23) ^^^^^^^^^^^^^^^^^^ * Pop loop in connection init due to backward compatibility `#808 `_ 1.2.0b4 (2021-03-23) ^^^^^^^^^^^^^^^^^^^^ * Set max supported sqlalchemy version `#805 `_ 1.2.0b3 (2021-03-22) ^^^^^^^^^^^^^^^^^^^^ * Don't run ROLLBACK when the connection is closed `#778 `_ * Multiple cursors support `#801 `_ 1.2.0b2 (2020-12-21) ^^^^^^^^^^^^^^^^^^^^ * Fix IsolationLevel.read_committed and introduce IsolationLevel.default `#770 `_ * Fix python 3.8 warnings in tests `#771 `_ 1.2.0b1 (2020-12-16) ^^^^^^^^^^^^^^^^^^^^ * Deprecate blocking connection.cancel() method `#570 `_ 1.2.0b0 (2020-12-15) ^^^^^^^^^^^^^^^^^^^^ * Implement timeout on acquiring connection from pool `#766 `_ 1.1.0 (2020-12-10) ^^^^^^^^^^^^^^^^^^ 1.1.0b2 (2020-12-09) ^^^^^^^^^^^^^^^^^^^^ * Added missing slots to context managers `#763 `_ 1.1.0b1 (2020-12-07) ^^^^^^^^^^^^^^^^^^^^ * Fix on_connect multiple call on acquire `#552 `_ * Fix python 3.8 warnings `#622 `_ * Bump minimum psycopg version to 2.8.4 `#754 `_ * Fix Engine.release method to release connection in any way `#756 `_ 1.0.0 (2019-09-20) ^^^^^^^^^^^^^^^^^^ * Removal of an asynchronous call in favor of issues # 550 * Big editing of documentation and minor bugs #534 0.16.0 (2019-01-25) ^^^^^^^^^^^^^^^^^^^ * Fix select priority name `#525 `_ * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 `_ * Fix `#189 `_ hstore when using ReadDictCursor `#512 `_ * close cannot be used while an asynchronous query is underway `#452 `_ * sqlalchemy adapter trx begin allow transaction_mode `#498 `_ 0.15.0 (2018-08-14) ^^^^^^^^^^^^^^^^^^^ * Support Python 3.7 `#437 `_ 0.14.0 (2018-05-10) ^^^^^^^^^^^^^^^^^^^ * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 `_ 0.13.2 (2018-01-03) ^^^^^^^^^^^^^^^^^^^ * Fixed compatibility with SQLAlchemy 1.2.0 `#412 `_ * Added support for transaction isolation levels `#219 `_ 0.13.1 (2017-09-10) ^^^^^^^^^^^^^^^^^^^ * Added connection poll recycling logic `#373 `_ 0.13.0 (2016-12-02) ^^^^^^^^^^^^^^^^^^^ * Add `async with` support to `.begin_nested()` `#208 `_ * 
Fix connection.cancel() `#212 `_ `#223 `_ * Raise informative error on unexpected connection closing `#191 `_ * Added support for python types columns issues `#217 `_ * Added support for default values in SA table issues `#206 `_ 0.12.0 (2016-10-09) ^^^^^^^^^^^^^^^^^^^ * Add an on_connect callback parameter to pool `#141 `_ * Fixed connection to work under both windows and posix based systems `#142 `_ 0.11.0 (2016-09-12) ^^^^^^^^^^^^^^^^^^^ * Immediately remove callbacks from a closed file descriptor `#139 `_ * Drop Python 3.3 support 0.10.0 (2016-07-16) ^^^^^^^^^^^^^^^^^^^ * Refactor tests to use dockerized Postgres server `#107 `_ * Reduce default pool minsize to 1 `#106 `_ * Explicitly enumerate packages in setup.py `#85 `_ * Remove expired connections from pool on acquire `#116 `_ * Don't crash when Connection is GC'ed `#124 `_ * Use loop.create_future() if available 0.9.2 (2016-01-31) ^^^^^^^^^^^^^^^^^^ * Make pool.release return asyncio.Future, so we can wait on it in `__aexit__` `#102 `_ * Add support for uuid type `#103 `_ 0.9.1 (2016-01-17) ^^^^^^^^^^^^^^^^^^ * Documentation update `#101 `_ 0.9.0 (2016-01-14) ^^^^^^^^^^^^^^^^^^ * Add async context managers for transactions `#91 `_ * Support async iterator in ResultProxy `#92 `_ * Add async with for engine `#90 `_ 0.8.0 (2015-12-31) ^^^^^^^^^^^^^^^^^^ * Add PostgreSQL notification support `#58 `_ * Support pools with unlimited size `#59 `_ * Cancel current DB operation on asyncio timeout `#66 `_ * Add async with support for Pool, Connection, Cursor `#88 `_ 0.7.0 (2015-04-22) ^^^^^^^^^^^^^^^^^^ * Get rid of resource leak on connection failure. * Report ResourceWarning on non-closed connections. * Deprecate iteration protocol support in cursor and ResultProxy. * Release sa connection to pool on `connection.close()`. 0.6.0 (2015-02-03) ^^^^^^^^^^^^^^^^^^ * Accept dict, list, tuple, named and positional parameters in `SAConnection.execute()` 0.5.2 (2014-12-08) ^^^^^^^^^^^^^^^^^^ * Minor release, fixes a bug that leaves connection in broken state after `cursor.execute()` failure. 0.5.1 (2014-10-31) ^^^^^^^^^^^^^^^^^^ * Fix a bug for processing transactions in line. 0.5.0 (2014-10-31) ^^^^^^^^^^^^^^^^^^ * Add .terminate() to Pool and Engine * Reimplement connection pool (now pool size cannot be greater than pool.maxsize) * Add .close() and .wait_closed() to Pool and Engine * Add minsize, maxsize, size and freesize properties to sa.Engine * Support *echo* parameter for logging executed SQL commands * Connection.close() is not a coroutine (but we keep backward compatibility). 0.4.1 (2014-10-02) ^^^^^^^^^^^^^^^^^^ * make cursor iterable * update docs 0.4.0 (2014-10-02) ^^^^^^^^^^^^^^^^^^ * add timeouts for database operations. * Autoregister psycopg2 support for json data type. * Support JSON in aiopg.sa * Support ARRAY in aiopg.sa * Autoregister hstore support if present in connected DB * Support HSTORE in aiopg.sa 0.3.2 (2014-07-07) ^^^^^^^^^^^^^^^^^^ * change signature to cursor.execute(operation, parameters=None) to follow psycopg2 convention. 0.3.1 (2014-07-04) ^^^^^^^^^^^^^^^^^^ * Forward arguments to cursor constructor for pooled connections. 0.3.0 (2014-06-22) ^^^^^^^^^^^^^^^^^^ * Allow executing SQLAlchemy DDL statements. * Fix bug with race conditions on acquiring/releasing connections from pool. 0.2.3 (2014-06-12) ^^^^^^^^^^^^^^^^^^ * Fix bug in connection pool. 0.2.2 (2014-06-07) ^^^^^^^^^^^^^^^^^^ * Fix bug with passing parameters into SAConnection.execute when executing raw SQL expression. 
0.2.1 (2014-05-08) ^^^^^^^^^^^^^^^^^^ * Close connection with invalid transaction status on returning to pool. 0.2.0 (2014-05-04) ^^^^^^^^^^^^^^^^^^ * Implemented optional support for sqlalchemy functional sql layer. 0.1.0 (2014-04-06) ^^^^^^^^^^^^^^^^^^ * Implemented plain connections: connect, Connection, Cursor. * Implemented database pools: create_pool and Pool. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/LICENSE0000644000175100001710000000242300000000000013163 0ustar00runnerdockerCopyright (c) 2014, 2015, Andrew Svetlov All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/MAINTAINERS.txt0000644000175100001710000000025200000000000014467 0ustar00runnerdocker* Andrew Svetlov * Alexey Firsov * Alexey Popravka * Yury Pliner ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/MANIFEST.in0000644000175100001710000000020100000000000013704 0ustar00runnerdockerinclude LICENSE include CHANGES.txt include README.rst include MAINTAINERS.txt graft aiopg global-exclude *.pyc exclude tests/** ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1635784409.7227902 aiopg-1.3.3/PKG-INFO0000644000175100001710000003370000000000000013255 0ustar00runnerdockerMetadata-Version: 2.1 Name: aiopg Version: 1.3.3 Summary: Postgres integration with asyncio. 
Home-page: https://aiopg.readthedocs.io Author: Andrew Svetlov Author-email: andrew.svetlov@gmail.com Maintainer: Andrew Svetlov , Alexey Firsov , Alexey Popravka , Yury Pliner Maintainer-email: virmir49@gmail.com License: BSD Download-URL: https://pypi.python.org/pypi/aiopg Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby Project-URL: CI: GA, https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI Project-URL: Coverage: codecov, https://codecov.io/gh/aio-libs/aiopg Project-URL: Docs: RTD, https://aiopg.readthedocs.io Project-URL: GitHub: issues, https://github.com/aio-libs/aiopg/issues Project-URL: GitHub: repo, https://github.com/aio-libs/aiopg Platform: macOS Platform: POSIX Platform: Windows Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Operating System :: POSIX Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Microsoft :: Windows Classifier: Environment :: Web Environment Classifier: Development Status :: 5 - Production/Stable Classifier: Topic :: Database Classifier: Topic :: Database :: Front-Ends Classifier: Framework :: AsyncIO Requires-Python: >=3.6 Description-Content-Type: text/x-rst Provides-Extra: sa License-File: LICENSE aiopg ===== .. image:: https://github.com/aio-libs/aiopg/workflows/CI/badge.svg :target: https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI .. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg :target: https://codecov.io/gh/aio-libs/aiopg .. image:: https://badges.gitter.im/Join%20Chat.svg :target: https://gitter.im/aio-libs/Lobby :alt: Chat on Gitter **aiopg** is a library for accessing a PostgreSQL_ database from the asyncio_ (PEP-3156/tulip) framework. It wraps asynchronous features of the Psycopg database driver. Example ------- .. code:: python import asyncio import aiopg dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1' async def go(): pool = await aiopg.create_pool(dsn) async with pool.acquire() as conn: async with conn.cursor() as cur: await cur.execute("SELECT 1") ret = [] async for row in cur: ret.append(row) assert ret == [(1,)] loop = asyncio.get_event_loop() loop.run_until_complete(go()) Example of SQLAlchemy optional integration ------------------------------------------ .. code:: python import asyncio from aiopg.sa import create_engine import sqlalchemy as sa metadata = sa.MetaData() tbl = sa.Table('tbl', metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('val', sa.String(255))) async def create_table(engine): async with engine.acquire() as conn: await conn.execute('DROP TABLE IF EXISTS tbl') await conn.execute('''CREATE TABLE tbl ( id serial PRIMARY KEY, val varchar(255))''') async def go(): async with create_engine(user='aiopg', database='aiopg', host='127.0.0.1', password='passwd') as engine: async with engine.acquire() as conn: await conn.execute(tbl.insert().values(val='abc')) async for row in conn.execute(tbl.select()): print(row.id, row.val) loop = asyncio.get_event_loop() loop.run_until_complete(go()) .. _PostgreSQL: http://www.postgresql.org/ .. 
_asyncio: https://docs.python.org/3/library/asyncio.html

Please use::

    $ make test

for executing the project's unittests.
See https://aiopg.readthedocs.io/en/stable/contributing.html for details
on how to set up your environment to run the tests.
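
Example of NOTIFY/LISTEN usage
------------------------------

``Connection.notifies`` (defined in ``aiopg/connection.py`` below) is an
asyncio.Queue-like object carrying server notifications. A minimal sketch,
reusing the DSN from the examples above and a hypothetical channel name
``chan``:

.. code:: python

    import asyncio
    import aiopg

    dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1'

    async def go():
        async with aiopg.connect(dsn) as conn:
            async with conn.cursor() as cur:
                await cur.execute("LISTEN chan")
            # Block until the server delivers a notification on the channel,
            # e.g. after `NOTIFY chan, 'payload'` in another session.
            msg = await conn.notifies.get()
            print(msg.channel, msg.payload)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(go())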
././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1635784409.71879 aiopg-1.3.3/aiopg/0000755000175100001710000000000000000000000013254 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/__init__.py0000644000175100001710000000404300000000000015366 0ustar00runnerdockerimport re import sys import warnings from collections import namedtuple from .connection import ( TIMEOUT as DEFAULT_TIMEOUT, Connection, Cursor, DefaultCompiler, IsolationCompiler, IsolationLevel, ReadCommittedCompiler, RepeatableReadCompiler, SerializableCompiler, Transaction, connect, ) from .pool import Pool, create_pool from .utils import get_running_loop warnings.filterwarnings( "always", ".*", category=ResourceWarning, module=r"aiopg(\.\w+)+", append=False, ) __all__ = ( "connect", "create_pool", "get_running_loop", "Connection", "Cursor", "Pool", "version", "version_info", "DEFAULT_TIMEOUT", "IsolationLevel", "Transaction", ) __version__ = "1.3.3" version = f"{__version__}, Python {sys.version}" VersionInfo = namedtuple( "VersionInfo", "major minor micro releaselevel serial" ) def _parse_version(ver: str) -> VersionInfo: RE = ( r"^" r"(?P\d+)\.(?P\d+)\.(?P\d+)" r"((?P[a-z]+)(?P\d+)?)?" r"$" ) match = re.match(RE, ver) if not match: raise ImportError(f"Invalid package version {ver}") try: major = int(match.group("major")) minor = int(match.group("minor")) micro = int(match.group("micro")) levels = {"rc": "candidate", "a": "alpha", "b": "beta", None: "final"} releaselevel = levels[match.group("releaselevel")] serial = int(match.group("serial")) if match.group("serial") else 0 return VersionInfo(major, minor, micro, releaselevel, serial) except Exception as e: raise ImportError(f"Invalid package version {ver}") from e version_info = _parse_version(__version__) # make pyflakes happy ( connect, create_pool, Connection, Cursor, Pool, DEFAULT_TIMEOUT, IsolationLevel, Transaction, get_running_loop, IsolationCompiler, DefaultCompiler, ReadCommittedCompiler, RepeatableReadCompiler, SerializableCompiler, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/connection.py0000755000175100001710000011446600000000000016004 0ustar00runnerdockerimport abc import asyncio import contextlib import datetime import enum import errno import platform import select import sys import traceback import uuid import warnings import weakref from collections.abc import Mapping from types import TracebackType from typing import ( Any, Callable, Generator, List, Optional, Sequence, Tuple, Type, cast, ) import psycopg2 import psycopg2.extensions import psycopg2.extras from .log import logger from .utils import ( ClosableQueue, _ContextManager, create_completed_future, get_running_loop, ) TIMEOUT = 60.0 # Windows specific error code, not in errno for some reason, and doesnt map # to OSError.errno EBADF WSAENOTSOCK = 10038 def connect( dsn: Optional[str] = None, *, timeout: float = TIMEOUT, enable_json: bool = True, enable_hstore: bool = True, enable_uuid: bool = True, echo: bool = False, **kwargs: Any, ) -> _ContextManager["Connection"]: """A factory for connecting to PostgreSQL. The coroutine accepts all parameters that psycopg2.connect() does plus optional keyword-only `timeout` parameters. Returns instantiated Connection object. 
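    A minimal usage sketch (the DSN values are placeholders)::

        async with aiopg.connect('dbname=aiopg user=aiopg') as conn:
            async with conn.cursor() as cur:
                await cur.execute("SELECT 1")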
""" connection = Connection( dsn, timeout, bool(echo), enable_hstore=enable_hstore, enable_uuid=enable_uuid, enable_json=enable_json, **kwargs, ) return _ContextManager[Connection](connection, disconnect) # type: ignore async def disconnect(c: "Connection") -> None: await c.close() def _is_bad_descriptor_error(os_error: OSError) -> bool: if platform.system() == "Windows": # pragma: no cover winerror = int(getattr(os_error, "winerror", 0)) return winerror == WSAENOTSOCK return os_error.errno == errno.EBADF class IsolationCompiler(abc.ABC): __slots__ = ("_isolation_level", "_readonly", "_deferrable") def __init__( self, isolation_level: Optional[str], readonly: bool, deferrable: bool ): self._isolation_level = isolation_level self._readonly = readonly self._deferrable = deferrable @property def name(self) -> str: return self._isolation_level or "Unknown" def savepoint(self, unique_id: str) -> str: return f"SAVEPOINT {unique_id}" def release_savepoint(self, unique_id: str) -> str: return f"RELEASE SAVEPOINT {unique_id}" def rollback_savepoint(self, unique_id: str) -> str: return f"ROLLBACK TO SAVEPOINT {unique_id}" def commit(self) -> str: return "COMMIT" def rollback(self) -> str: return "ROLLBACK" def begin(self) -> str: query = "BEGIN" if self._isolation_level is not None: query += f" ISOLATION LEVEL {self._isolation_level.upper()}" if self._readonly: query += " READ ONLY" if self._deferrable: query += " DEFERRABLE" return query def __repr__(self) -> str: return self.name class ReadCommittedCompiler(IsolationCompiler): __slots__ = () def __init__(self, readonly: bool, deferrable: bool): super().__init__("Read committed", readonly, deferrable) class RepeatableReadCompiler(IsolationCompiler): __slots__ = () def __init__(self, readonly: bool, deferrable: bool): super().__init__("Repeatable read", readonly, deferrable) class SerializableCompiler(IsolationCompiler): __slots__ = () def __init__(self, readonly: bool, deferrable: bool): super().__init__("Serializable", readonly, deferrable) class DefaultCompiler(IsolationCompiler): __slots__ = () def __init__(self, readonly: bool, deferrable: bool): super().__init__(None, readonly, deferrable) @property def name(self) -> str: return "Default" class IsolationLevel(enum.Enum): serializable = SerializableCompiler repeatable_read = RepeatableReadCompiler read_committed = ReadCommittedCompiler default = DefaultCompiler def __call__(self, readonly: bool, deferrable: bool) -> IsolationCompiler: return self.value(readonly, deferrable) # type: ignore async def _release_savepoint(t: "Transaction") -> None: await t.release_savepoint() async def _rollback_savepoint(t: "Transaction") -> None: await t.rollback_savepoint() class Transaction: __slots__ = ("_cursor", "_is_begin", "_isolation", "_unique_id") def __init__( self, cursor: "Cursor", isolation_level: Callable[[bool, bool], IsolationCompiler], readonly: bool = False, deferrable: bool = False, ): self._cursor = cursor self._is_begin = False self._unique_id: Optional[str] = None self._isolation = isolation_level(readonly, deferrable) @property def is_begin(self) -> bool: return self._is_begin async def begin(self) -> "Transaction": if self._is_begin: raise psycopg2.ProgrammingError( "You are trying to open a new transaction, use the save point" ) self._is_begin = True await self._cursor.execute(self._isolation.begin()) return self async def commit(self) -> None: self._check_commit_rollback() await self._cursor.execute(self._isolation.commit()) self._is_begin = False async def rollback(self) -> None: 
self._check_commit_rollback() if not self._cursor.closed: await self._cursor.execute(self._isolation.rollback()) self._is_begin = False async def rollback_savepoint(self) -> None: self._check_release_rollback() if not self._cursor.closed: await self._cursor.execute( self._isolation.rollback_savepoint( self._unique_id # type: ignore ) ) self._unique_id = None async def release_savepoint(self) -> None: self._check_release_rollback() await self._cursor.execute( self._isolation.release_savepoint(self._unique_id) # type: ignore ) self._unique_id = None async def savepoint(self) -> "Transaction": self._check_commit_rollback() if self._unique_id is not None: raise psycopg2.ProgrammingError("You do not shut down savepoint") self._unique_id = f"s{uuid.uuid1().hex}" await self._cursor.execute(self._isolation.savepoint(self._unique_id)) return self def point(self) -> _ContextManager["Transaction"]: return _ContextManager[Transaction]( self.savepoint(), _release_savepoint, _rollback_savepoint, ) def _check_commit_rollback(self) -> None: if not self._is_begin: raise psycopg2.ProgrammingError( "You are trying to commit " "the transaction does not open" ) def _check_release_rollback(self) -> None: self._check_commit_rollback() if self._unique_id is None: raise psycopg2.ProgrammingError("You do not start savepoint") def __repr__(self) -> str: return ( f"<{self.__class__.__name__} " f"transaction={self._isolation} id={id(self):#x}>" ) def __del__(self) -> None: if self._is_begin: warnings.warn( f"You have not closed transaction {self!r}", ResourceWarning ) if self._unique_id is not None: warnings.warn( f"You have not closed savepoint {self!r}", ResourceWarning ) async def __aenter__(self) -> "Transaction": return await self.begin() async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: if exc_type is not None: await self.rollback() else: await self.commit() async def _commit_transaction(t: Transaction) -> None: await t.commit() async def _rollback_transaction(t: Transaction) -> None: await t.rollback() class Cursor: def __init__( self, conn: "Connection", impl: Any, timeout: float, echo: bool, isolation_level: Optional[IsolationLevel] = None, ): self._conn = conn self._impl = impl self._timeout = timeout self._echo = echo self._transaction = Transaction( self, isolation_level or IsolationLevel.default ) @property def echo(self) -> bool: """Return echo mode status.""" return self._echo @property def description(self) -> Optional[Sequence[Any]]: """This read-only attribute is a sequence of 7-item sequences. Each of these sequences is a collections.namedtuple containing information describing one result column: 0. name: the name of the column returned. 1. type_code: the PostgreSQL OID of the column. 2. display_size: the actual length of the column in bytes. 3. internal_size: the size in bytes of the column associated to this column on the server. 4. precision: total number of significant digits in columns of type NUMERIC. None for other types. 5. scale: count of decimal digits in the fractional part in columns of type NUMERIC. None for other types. 6. null_ok: always None as not easy to retrieve from the libpq. This attribute will be None for operations that do not return rows or if the cursor has not had an operation invoked via the execute() method yet. 
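        An illustrative sketch (values depend on the executed query)::

            await cur.execute("SELECT 1 AS x")
            cur.description[0].name       # 'x'
            cur.description[0].type_code  # PostgreSQL OID of the column type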
""" return self._impl.description # type: ignore def close(self) -> None: """Close the cursor now.""" if not self.closed: self._impl.close() @property def closed(self) -> bool: """Read-only boolean attribute: specifies if the cursor is closed.""" return self._impl.closed # type: ignore @property def connection(self) -> "Connection": """Read-only attribute returning a reference to the `Connection`.""" return self._conn @property def raw(self) -> Any: """Underlying psycopg cursor object, readonly""" return self._impl @property def name(self) -> str: # Not supported return self._impl.name # type: ignore @property def scrollable(self) -> Optional[bool]: # Not supported return self._impl.scrollable # type: ignore @scrollable.setter def scrollable(self, val: bool) -> None: # Not supported self._impl.scrollable = val @property def withhold(self) -> bool: # Not supported return self._impl.withhold # type: ignore @withhold.setter def withhold(self, val: bool) -> None: # Not supported self._impl.withhold = val async def execute( self, operation: str, parameters: Any = None, *, timeout: Optional[float] = None, ) -> None: """Prepare and execute a database operation (query or command). Parameters may be provided as sequence or mapping and will be bound to variables in the operation. Variables are specified either with positional %s or named %({name})s placeholders. """ if timeout is None: timeout = self._timeout waiter = self._conn._create_waiter("cursor.execute") if self._echo: logger.info(operation) logger.info("%r", parameters) try: self._impl.execute(operation, parameters) except BaseException: self._conn._waiter = None raise try: await self._conn._poll(waiter, timeout) except asyncio.TimeoutError: self._impl.close() raise async def executemany(self, *args: Any, **kwargs: Any) -> None: # Not supported raise psycopg2.ProgrammingError( "executemany cannot be used in asynchronous mode" ) async def callproc( self, procname: str, parameters: Any = None, *, timeout: Optional[float] = None, ) -> None: """Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. The result of the call is returned as modified copy of the input sequence. Input parameters are left untouched, output and input/output parameters replaced with possibly new values. """ if timeout is None: timeout = self._timeout waiter = self._conn._create_waiter("cursor.callproc") if self._echo: logger.info("CALL %s", procname) logger.info("%r", parameters) try: self._impl.callproc(procname, parameters) except BaseException: self._conn._waiter = None raise else: await self._conn._poll(waiter, timeout) def begin(self) -> _ContextManager[Transaction]: return _ContextManager[Transaction]( self._transaction.begin(), _commit_transaction, _rollback_transaction, ) def begin_nested(self) -> _ContextManager[Transaction]: if self._transaction.is_begin: return self._transaction.point() return _ContextManager[Transaction]( self._transaction.begin(), _commit_transaction, _rollback_transaction, ) def mogrify(self, operation: str, parameters: Any = None) -> bytes: """Return a query string after arguments binding. The byte string returned is exactly the one that would be sent to the database running the .execute() method or similar. 
""" ret = self._impl.mogrify(operation, parameters) assert ( not self._conn.isexecuting() ), "Don't support server side mogrify" return ret # type: ignore async def setinputsizes(self, sizes: int) -> None: """This method is exposed in compliance with the DBAPI. It currently does nothing but it is safe to call it. """ self._impl.setinputsizes(sizes) async def fetchone(self) -> Any: """Fetch the next row of a query result set. Returns a single tuple, or None when no more data is available. """ ret = self._impl.fetchone() assert ( not self._conn.isexecuting() ), "Don't support server side cursors yet" return ret async def fetchmany(self, size: Optional[int] = None) -> List[Any]: """Fetch the next set of rows of a query result. Returns a list of tuples. An empty list is returned when no more rows are available. The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's .arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned. """ if size is None: size = self._impl.arraysize ret = self._impl.fetchmany(size) assert ( not self._conn.isexecuting() ), "Don't support server side cursors yet" return ret # type: ignore async def fetchall(self) -> List[Any]: """Fetch all (remaining) rows of a query result. Returns them as a list of tuples. An empty list is returned if there is no more record to fetch. """ ret = self._impl.fetchall() assert ( not self._conn.isexecuting() ), "Don't support server side cursors yet" return ret # type: ignore async def scroll(self, value: int, mode: str = "relative") -> None: """Scroll to a new position according to mode. If mode is relative (default), value is taken as offset to the current position in the result set, if set to absolute, value states an absolute target position. """ self._impl.scroll(value, mode) assert ( not self._conn.isexecuting() ), "Don't support server side cursors yet" @property def arraysize(self) -> int: """How many rows will be returned by fetchmany() call. This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1 meaning to fetch a single row at a time. """ return self._impl.arraysize # type: ignore @arraysize.setter def arraysize(self, val: int) -> None: """How many rows will be returned by fetchmany() call. This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1 meaning to fetch a single row at a time. """ self._impl.arraysize = val @property def itersize(self) -> int: # Not supported return self._impl.itersize # type: ignore @itersize.setter def itersize(self, val: int) -> None: # Not supported self._impl.itersize = val @property def rowcount(self) -> int: """Returns the number of rows that has been produced of affected. This read-only attribute specifies the number of rows that the last :meth:`execute` produced (for Data Query Language statements like SELECT) or affected (for Data Manipulation Language statements like UPDATE or INSERT). The attribute is -1 in case no .execute() has been performed on the cursor or the row count of the last operation if it can't be determined by the interface. """ return self._impl.rowcount # type: ignore @property def rownumber(self) -> int: """Row index. 
This read-only attribute provides the current 0-based index of the cursor in the result set or ``None`` if the index cannot be determined.""" return self._impl.rownumber # type: ignore @property def lastrowid(self) -> int: """OID of the last inserted row. This read-only attribute provides the OID of the last row inserted by the cursor. If the table wasn't created with OID support or the last operation is not a single record insert, the attribute is set to None. """ return self._impl.lastrowid # type: ignore @property def query(self) -> Optional[str]: """The last executed query string. Read-only attribute containing the body of the last query sent to the backend (including bound arguments) as bytes string. None if no query has been executed yet. """ return self._impl.query # type: ignore @property def statusmessage(self) -> str: """the message returned by the last command.""" return self._impl.statusmessage # type: ignore @property def tzinfo_factory(self) -> datetime.tzinfo: """The time zone factory used to handle data types such as `TIMESTAMP WITH TIME ZONE`. """ return self._impl.tzinfo_factory # type: ignore @tzinfo_factory.setter def tzinfo_factory(self, val: datetime.tzinfo) -> None: """The time zone factory used to handle data types such as `TIMESTAMP WITH TIME ZONE`. """ self._impl.tzinfo_factory = val async def nextset(self) -> None: # Not supported self._impl.nextset() # raises psycopg2.NotSupportedError async def setoutputsize( self, size: int, column: Optional[int] = None ) -> None: # Does nothing self._impl.setoutputsize(size, column) async def copy_from(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "copy_from cannot be used in asynchronous mode" ) async def copy_to(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "copy_to cannot be used in asynchronous mode" ) async def copy_expert(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "copy_expert cannot be used in asynchronous mode" ) @property def timeout(self) -> float: """Return default timeout for cursor operations.""" return self._timeout def __aiter__(self) -> "Cursor": return self async def __anext__(self) -> Any: ret = await self.fetchone() if ret is not None: return ret raise StopAsyncIteration async def __aenter__(self) -> "Cursor": return self async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: self.close() def __repr__(self) -> str: return ( f"<" f"{type(self).__module__}::{type(self).__name__} " f"name={self.name}, " f"closed={self.closed}" f">" ) async def _close_cursor(c: Cursor) -> None: c.close() class Connection: """Low-level asynchronous interface for wrapped psycopg2 connection. The Connection instance encapsulates a database session. Provides support for creating asynchronous cursors. """ _source_traceback = None def __init__( self, dsn: Optional[str], timeout: float, echo: bool = False, enable_json: bool = True, enable_hstore: bool = True, enable_uuid: bool = True, **kwargs: Any, ): self._enable_json = enable_json self._enable_hstore = enable_hstore self._enable_uuid = enable_uuid self._loop = get_running_loop() self._waiter: Optional[ "asyncio.Future[None]" ] = self._loop.create_future() kwargs["async_"] = kwargs.pop("async", True) kwargs.pop("loop", None) # backward compatibility self._conn = psycopg2.connect(dsn, **kwargs) self._dsn = self._conn.dsn assert self._conn.isexecuting(), "Is conn an async at all???" 
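        # psycopg2.connect() was called with async_=True above, so it returns
        # immediately without waiting for the connection to be established;
        # the _ready() callback registered below drives polling to completion.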
self._fileno: Optional[int] = self._conn.fileno() self._timeout = timeout self._last_usage = self._loop.time() self._writing = False self._echo = echo self._notifies = asyncio.Queue() # type: ignore self._notifies_proxy = ClosableQueue(self._notifies, self._loop) self._weakref = weakref.ref(self) self._loop.add_reader( self._fileno, self._ready, self._weakref # type: ignore ) if self._loop.get_debug(): self._source_traceback = traceback.extract_stack(sys._getframe(1)) @staticmethod def _ready(weak_self: "weakref.ref[Any]") -> None: self = cast(Connection, weak_self()) if self is None: return waiter = self._waiter try: state = self._conn.poll() while self._conn.notifies: notify = self._conn.notifies.pop(0) self._notifies.put_nowait(notify) except (psycopg2.Warning, psycopg2.Error) as exc: if self._fileno is not None: try: select.select([self._fileno], [], [], 0) except OSError as os_exc: if _is_bad_descriptor_error(os_exc): with contextlib.suppress(OSError): self._loop.remove_reader(self._fileno) # forget a bad file descriptor, don't try to # touch it self._fileno = None try: if self._writing: self._writing = False if self._fileno is not None: self._loop.remove_writer(self._fileno) except OSError as exc2: if exc2.errno != errno.EBADF: # EBADF is ok for closed file descriptor # chain exception otherwise exc2.__cause__ = exc exc = exc2 self._notifies_proxy.close(exc) if waiter is not None and not waiter.done(): waiter.set_exception(exc) else: if self._fileno is None: # connection closed if waiter is not None and not waiter.done(): waiter.set_exception( psycopg2.OperationalError("Connection closed") ) if state == psycopg2.extensions.POLL_OK: if self._writing: self._loop.remove_writer(self._fileno) # type: ignore self._writing = False if waiter is not None and not waiter.done(): waiter.set_result(None) elif state == psycopg2.extensions.POLL_READ: if self._writing: self._loop.remove_writer(self._fileno) # type: ignore self._writing = False elif state == psycopg2.extensions.POLL_WRITE: if not self._writing: self._loop.add_writer( self._fileno, self._ready, weak_self # type: ignore ) self._writing = True elif state == psycopg2.extensions.POLL_ERROR: self._fatal_error( "Fatal error on aiopg connection: " "POLL_ERROR from underlying .poll() call" ) else: self._fatal_error( f"Fatal error on aiopg connection: " f"unknown answer {state} from underlying " f".poll() call" ) def _fatal_error(self, message: str) -> None: # Should be called from exception handler only. 
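        # Hand the error to the loop's exception handler, close the
        # connection, and fail the pending waiter (if any) with an
        # OperationalError so a coroutine blocked in _poll() is woken up.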
self._loop.call_exception_handler( { "message": message, "connection": self, } ) self.close() if self._waiter and not self._waiter.done(): self._waiter.set_exception(psycopg2.OperationalError(message)) def _create_waiter(self, func_name: str) -> "asyncio.Future[None]": if self._waiter is not None: raise RuntimeError( f"{func_name}() called while another coroutine " f"is already waiting for incoming data" ) self._waiter = self._loop.create_future() return self._waiter async def _poll( self, waiter: "asyncio.Future[None]", timeout: float ) -> None: assert waiter is self._waiter, (waiter, self._waiter) self._ready(self._weakref) try: await asyncio.wait_for(self._waiter, timeout) except (asyncio.CancelledError, asyncio.TimeoutError) as exc: await asyncio.shield(self.close()) raise exc except psycopg2.extensions.QueryCanceledError as exc: self._loop.call_exception_handler( { "message": exc.pgerror, "exception": exc, "future": self._waiter, } ) raise asyncio.CancelledError finally: self._waiter = None def isexecuting(self) -> bool: return self._conn.isexecuting() # type: ignore def cursor( self, name: Optional[str] = None, cursor_factory: Any = None, scrollable: Optional[bool] = None, withhold: bool = False, timeout: Optional[float] = None, isolation_level: Optional[IsolationLevel] = None, ) -> _ContextManager[Cursor]: """A coroutine that returns a new cursor object using the connection. *cursor_factory* argument can be used to create non-standard cursors. The argument must be subclass of `psycopg2.extensions.cursor`. *name*, *scrollable* and *withhold* parameters are not supported by psycopg in asynchronous mode. """ self._last_usage = self._loop.time() coro = self._cursor( name=name, cursor_factory=cursor_factory, scrollable=scrollable, withhold=withhold, timeout=timeout, isolation_level=isolation_level, ) return _ContextManager[Cursor](coro, _close_cursor) async def _cursor( self, name: Optional[str] = None, cursor_factory: Any = None, scrollable: Optional[bool] = None, withhold: bool = False, timeout: Optional[float] = None, isolation_level: Optional[IsolationLevel] = None, ) -> Cursor: if timeout is None: timeout = self._timeout impl = await self._cursor_impl( name=name, cursor_factory=cursor_factory, scrollable=scrollable, withhold=withhold, ) cursor = Cursor(self, impl, timeout, self._echo, isolation_level) return cursor async def _cursor_impl( self, name: Optional[str] = None, cursor_factory: Any = None, scrollable: Optional[bool] = None, withhold: bool = False, ) -> Any: if cursor_factory is None: impl = self._conn.cursor( name=name, scrollable=scrollable, withhold=withhold ) else: impl = self._conn.cursor( name=name, cursor_factory=cursor_factory, scrollable=scrollable, withhold=withhold, ) return impl def _close(self) -> None: """Remove the connection from the event_loop and close it.""" # N.B. If connection contains uncommitted transaction the # transaction will be discarded if self._fileno is not None: self._loop.remove_reader(self._fileno) if self._writing: self._writing = False self._loop.remove_writer(self._fileno) self._conn.close() if not self._loop.is_closed(): if self._waiter is not None and not self._waiter.done(): self._waiter.set_exception( psycopg2.OperationalError("Connection closed") ) self._notifies_proxy.close( psycopg2.OperationalError("Connection closed") ) def close(self) -> "asyncio.Future[None]": self._close() return create_completed_future(self._loop) @property def closed(self) -> bool: """Connection status. 
Read-only attribute reporting whether the database connection is open (False) or closed (True). """ return self._conn.closed # type: ignore @property def raw(self) -> Any: """Underlying psycopg connection object, readonly""" return self._conn async def commit(self) -> None: raise psycopg2.ProgrammingError( "commit cannot be used in asynchronous mode" ) async def rollback(self) -> None: raise psycopg2.ProgrammingError( "rollback cannot be used in asynchronous mode" ) # TPC async def xid( self, format_id: int, gtrid: str, bqual: str ) -> Tuple[int, str, str]: return self._conn.xid(format_id, gtrid, bqual) # type: ignore async def tpc_begin(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "tpc_begin cannot be used in asynchronous mode" ) async def tpc_prepare(self) -> None: raise psycopg2.ProgrammingError( "tpc_prepare cannot be used in asynchronous mode" ) async def tpc_commit(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "tpc_commit cannot be used in asynchronous mode" ) async def tpc_rollback(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "tpc_rollback cannot be used in asynchronous mode" ) async def tpc_recover(self) -> None: raise psycopg2.ProgrammingError( "tpc_recover cannot be used in asynchronous mode" ) async def cancel(self) -> None: raise psycopg2.ProgrammingError( "cancel cannot be used in asynchronous mode" ) async def reset(self) -> None: raise psycopg2.ProgrammingError( "reset cannot be used in asynchronous mode" ) @property def dsn(self) -> Optional[str]: """DSN connection string. Read-only attribute representing dsn connection string used for connectint to PostgreSQL server. """ return self._dsn # type: ignore async def set_session(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "set_session cannot be used in asynchronous mode" ) @property def autocommit(self) -> bool: """Autocommit status""" return self._conn.autocommit # type: ignore @autocommit.setter def autocommit(self, val: bool) -> None: """Autocommit status""" self._conn.autocommit = val @property def isolation_level(self) -> int: """Transaction isolation level. The only allowed value is ISOLATION_LEVEL_READ_COMMITTED. """ return self._conn.isolation_level # type: ignore async def set_isolation_level(self, val: int) -> None: """Transaction isolation level. The only allowed value is ISOLATION_LEVEL_READ_COMMITTED. 
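        For example::

            import psycopg2.extensions

            await conn.set_isolation_level(
                psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED
            )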
""" self._conn.set_isolation_level(val) @property def encoding(self) -> str: """Client encoding for SQL operations.""" return self._conn.encoding # type: ignore async def set_client_encoding(self, val: str) -> None: self._conn.set_client_encoding(val) @property def notices(self) -> List[str]: """A list of all db messages sent to the client during the session.""" return self._conn.notices # type: ignore @property def cursor_factory(self) -> Any: """The default cursor factory used by .cursor().""" return self._conn.cursor_factory async def get_backend_pid(self) -> int: """Returns the PID of the backend server process.""" return self._conn.get_backend_pid() # type: ignore async def get_parameter_status(self, parameter: str) -> Optional[str]: """Look up a current parameter setting of the server.""" return self._conn.get_parameter_status(parameter) # type: ignore async def get_transaction_status(self) -> int: """Return the current session transaction status as an integer.""" return self._conn.get_transaction_status() # type: ignore @property def protocol_version(self) -> int: """A read-only integer representing protocol being used.""" return self._conn.protocol_version # type: ignore @property def server_version(self) -> int: """A read-only integer representing the backend version.""" return self._conn.server_version # type: ignore @property def status(self) -> int: """A read-only integer representing the status of the connection.""" return self._conn.status # type: ignore async def lobject(self, *args: Any, **kwargs: Any) -> None: raise psycopg2.ProgrammingError( "lobject cannot be used in asynchronous mode" ) @property def timeout(self) -> float: """Return default timeout for connection operations.""" return self._timeout @property def last_usage(self) -> float: """Return time() when connection was used.""" return self._last_usage @property def echo(self) -> bool: """Return echo mode status.""" return self._echo def __repr__(self) -> str: return ( f"<" f"{type(self).__module__}::{type(self).__name__} " f"isexecuting={self.isexecuting()}, " f"closed={self.closed}, " f"echo={self.echo}, " f">" ) def __del__(self) -> None: try: _conn = self._conn except AttributeError: return if _conn is not None and not _conn.closed: self.close() warnings.warn(f"Unclosed connection {self!r}", ResourceWarning) context = {"connection": self, "message": "Unclosed connection"} if self._source_traceback is not None: context["source_traceback"] = self._source_traceback self._loop.call_exception_handler(context) @property def notifies(self) -> ClosableQueue: """Return notification queue (an asyncio.Queue -like object).""" return self._notifies_proxy async def _get_oids(self) -> Tuple[Any, Any]: cursor = await self.cursor() rv0, rv1 = [], [] try: await cursor.execute( "SELECT t.oid, typarray " "FROM pg_type t JOIN pg_namespace ns ON typnamespace = ns.oid " "WHERE typname = 'hstore';" ) async for oids in cursor: if isinstance(oids, Mapping): rv0.append(oids["oid"]) rv1.append(oids["typarray"]) else: rv0.append(oids[0]) rv1.append(oids[1]) finally: cursor.close() return tuple(rv0), tuple(rv1) async def _connect(self) -> "Connection": try: await self._poll(self._waiter, self._timeout) # type: ignore except BaseException: await asyncio.shield(self.close()) raise if self._enable_json: psycopg2.extras.register_default_json(self._conn) if self._enable_uuid: psycopg2.extras.register_uuid(conn_or_curs=self._conn) if self._enable_hstore: oid, array_oid = await self._get_oids() psycopg2.extras.register_hstore( self._conn, 
oid=oid, array_oid=array_oid ) return self def __await__(self) -> Generator[Any, None, "Connection"]: return self._connect().__await__() async def __aenter__(self) -> "Connection": return self async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: await self.close() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/log.py0000644000175100001710000000017300000000000014410 0ustar00runnerdocker"""Logging configuration.""" import logging # Name the logger after the package. logger = logging.getLogger(__package__) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/pool.py0000644000175100001710000003337000000000000014605 0ustar00runnerdockerimport asyncio import collections import warnings from types import TracebackType from typing import ( Any, Awaitable, Callable, Deque, Generator, Optional, Set, Type, ) import async_timeout import psycopg2.extensions from .connection import TIMEOUT, Connection, Cursor, connect from .utils import _ContextManager, create_completed_future, get_running_loop def create_pool( dsn: Optional[str] = None, *, minsize: int = 1, maxsize: int = 10, timeout: float = TIMEOUT, pool_recycle: float = -1.0, enable_json: bool = True, enable_hstore: bool = True, enable_uuid: bool = True, echo: bool = False, on_connect: Optional[Callable[[Connection], Awaitable[None]]] = None, **kwargs: Any, ) -> _ContextManager["Pool"]: coro = Pool.from_pool_fill( dsn, minsize, maxsize, timeout, enable_json=enable_json, enable_hstore=enable_hstore, enable_uuid=enable_uuid, echo=echo, on_connect=on_connect, pool_recycle=pool_recycle, **kwargs, ) return _ContextManager[Pool](coro, _destroy_pool) async def _destroy_pool(pool: "Pool") -> None: pool.close() await pool.wait_closed() class _PoolConnectionContextManager: """Context manager. This enables the following idiom for acquiring and releasing a connection around a block: async with pool as conn: cur = await conn.cursor() while failing loudly when accidentally using: with pool: """ __slots__ = ("_pool", "_conn") def __init__(self, pool: "Pool", conn: Connection): self._pool: Optional[Pool] = pool self._conn: Optional[Connection] = conn def __enter__(self) -> Connection: assert self._conn return self._conn def __exit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: if self._pool is None or self._conn is None: return try: self._pool.release(self._conn) finally: self._pool = None self._conn = None async def __aenter__(self) -> Connection: assert self._conn return self._conn async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: if self._pool is None or self._conn is None: return try: await self._pool.release(self._conn) finally: self._pool = None self._conn = None class _PoolCursorContextManager: """Context manager. 
This enables the following idiom for acquiring and releasing a cursor around a block: async with pool.cursor() as cur: await cur.execute("SELECT 1") while failing loudly when accidentally using: with pool.cursor(): """ __slots__ = ("_pool", "_conn", "_cursor") def __init__(self, pool: "Pool", conn: Connection, cursor: Cursor): self._pool = pool self._conn = conn self._cursor = cursor def __enter__(self) -> Cursor: return self._cursor def __exit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: try: self._cursor.close() except psycopg2.ProgrammingError: # seen instances where the cursor fails to close: # https://github.com/aio-libs/aiopg/issues/364 # We close it here so we don't return a bad connection to the pool self._conn.close() raise finally: try: self._pool.release(self._conn) finally: self._pool = None # type: ignore self._conn = None # type: ignore self._cursor = None # type: ignore class Pool: """Connection pool""" def __init__( self, dsn: str, minsize: int, maxsize: int, timeout: float, *, enable_json: bool, enable_hstore: bool, enable_uuid: bool, echo: bool, on_connect: Optional[Callable[[Connection], Awaitable[None]]], pool_recycle: float, **kwargs: Any, ): if minsize < 0: raise ValueError("minsize should be zero or greater") if maxsize < minsize and maxsize != 0: raise ValueError("maxsize should not be less than minsize") self._dsn = dsn self._minsize = minsize self._loop = get_running_loop() self._timeout = timeout self._recycle = pool_recycle self._enable_json = enable_json self._enable_hstore = enable_hstore self._enable_uuid = enable_uuid self._echo = echo self._on_connect = on_connect self._conn_kwargs = kwargs self._acquiring = 0 self._free: Deque[Connection] = collections.deque( maxlen=maxsize or None ) self._cond = asyncio.Condition() self._used: Set[Connection] = set() self._terminated: Set[Connection] = set() self._closing = False self._closed = False @property def echo(self) -> bool: return self._echo @property def minsize(self) -> int: return self._minsize @property def maxsize(self) -> Optional[int]: return self._free.maxlen @property def size(self) -> int: return self.freesize + len(self._used) + self._acquiring @property def freesize(self) -> int: return len(self._free) @property def timeout(self) -> float: return self._timeout async def clear(self) -> None: """Close all free connections in pool.""" async with self._cond: while self._free: conn = self._free.popleft() await conn.close() self._cond.notify() @property def closed(self) -> bool: return self._closed def close(self) -> None: """Close pool. Mark all pool connections to be closed on getting back to the pool. A closed pool doesn't allow acquiring new connections. """ if self._closed: return self._closing = True def terminate(self) -> None: """Terminate pool. Close the pool, instantly closing all acquired connections as well.
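Unlike plain .close() followed by .wait_closed(), connections currently checked out by callers are closed immediately rather than on release. A minimal sketch (assuming ``pool`` was created via aiopg.create_pool())::

    pool.terminate()
    await pool.wait_closed()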
""" self.close() for conn in list(self._used): conn.close() self._terminated.add(conn) self._used.clear() async def wait_closed(self) -> None: """Wait for closing all pool's connections.""" if self._closed: return if not self._closing: raise RuntimeError( ".wait_closed() should be called " "after .close()" ) while self._free: conn = self._free.popleft() await conn.close() async with self._cond: while self.size > self.freesize: await self._cond.wait() self._closed = True def acquire(self) -> _ContextManager[Connection]: """Acquire free connection from the pool.""" coro = self._acquire() return _ContextManager[Connection](coro, self.release) @classmethod async def from_pool_fill(cls, *args: Any, **kwargs: Any) -> "Pool": """constructor for filling the free pool with connections, the number is controlled by the minsize parameter """ self = cls(*args, **kwargs) if self._minsize > 0: async with self._cond: await self._fill_free_pool(False) return self async def _acquire(self) -> Connection: if self._closing: raise RuntimeError("Cannot acquire connection after closing pool") async with async_timeout.timeout(self._timeout), self._cond: while True: await self._fill_free_pool(True) if self._free: conn = self._free.popleft() assert not conn.closed, conn assert conn not in self._used, (conn, self._used) self._used.add(conn) return conn else: await self._cond.wait() async def _fill_free_pool(self, override_min: bool) -> None: # iterate over free connections and remove timeouted ones n, free = 0, len(self._free) while n < free: conn = self._free[-1] if conn.closed: self._free.pop() elif -1 < self._recycle < self._loop.time() - conn.last_usage: await conn.close() self._free.pop() else: self._free.rotate() n += 1 while self.size < self.minsize: self._acquiring += 1 try: conn = await connect( self._dsn, timeout=self._timeout, enable_json=self._enable_json, enable_hstore=self._enable_hstore, enable_uuid=self._enable_uuid, echo=self._echo, **self._conn_kwargs, ) if self._on_connect is not None: await self._on_connect(conn) # raise exception if pool is closing self._free.append(conn) self._cond.notify() finally: self._acquiring -= 1 if self._free: return if override_min and self.size < (self.maxsize or 0): self._acquiring += 1 try: conn = await connect( self._dsn, timeout=self._timeout, enable_json=self._enable_json, enable_hstore=self._enable_hstore, enable_uuid=self._enable_uuid, echo=self._echo, **self._conn_kwargs, ) if self._on_connect is not None: await self._on_connect(conn) # raise exception if pool is closing self._free.append(conn) self._cond.notify() finally: self._acquiring -= 1 async def _wakeup(self) -> None: async with self._cond: self._cond.notify() def release(self, conn: Connection) -> "asyncio.Future[None]": """Release free connection back to the connection pool.""" future = create_completed_future(self._loop) if conn in self._terminated: assert conn.closed, conn self._terminated.remove(conn) return future assert conn in self._used, (conn, self._used) self._used.remove(conn) if conn.closed: return future transaction_status = conn.raw.get_transaction_status() if transaction_status != psycopg2.extensions.TRANSACTION_STATUS_IDLE: warnings.warn( f"Invalid transaction status on " f"released connection: {transaction_status}", ResourceWarning, ) conn.close() return future if self._closing: conn.close() else: self._free.append(conn) return asyncio.ensure_future(self._wakeup(), loop=self._loop) async def cursor( self, name: Optional[str] = None, cursor_factory: Any = None, scrollable: 
Optional[bool] = None, withhold: bool = False, *, timeout: Optional[float] = None, ) -> _PoolCursorContextManager: conn = await self.acquire() cursor = await conn.cursor( name=name, cursor_factory=cursor_factory, scrollable=scrollable, withhold=withhold, timeout=timeout, ) return _PoolCursorContextManager(self, conn, cursor) def __await__(self) -> Generator[Any, Any, _PoolConnectionContextManager]: # This is not a coroutine. It is meant to enable the idiom: # # with (await pool) as conn: # # # as an alternative to: # # conn = await pool.acquire() # try: # # finally: # conn.release() conn = yield from self._acquire().__await__() return _PoolConnectionContextManager(self, conn) def __enter__(self) -> "Pool": raise RuntimeError( '"await" should be used as context manager expression' ) def __exit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: # This must exist because __enter__ exists, even though that # always raises; that's how the with-statement works. pass # pragma: nocover async def __aenter__(self) -> "Pool": return self async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: self.close() await self.wait_closed() def __del__(self) -> None: try: self._free except AttributeError: return # frame has been cleared, __dict__ is empty if self._free: left = 0 while self._free: conn = self._free.popleft() conn.close() left += 1 warnings.warn( f"Unclosed {left} connections in {self!r}", ResourceWarning ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1635784409.7227902 aiopg-1.3.3/aiopg/sa/0000755000175100001710000000000000000000000013657 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/__init__.py0000644000175100001710000000114400000000000015770 0ustar00runnerdocker"""Optional support for sqlalchemy.sql dynamic query generation.""" from .connection import SAConnection from .engine import Engine, create_engine from .exc import ( ArgumentError, Error, InvalidRequestError, NoSuchColumnError, ResourceClosedError, ) __all__ = ( "create_engine", "SAConnection", "Error", "ArgumentError", "InvalidRequestError", "NoSuchColumnError", "ResourceClosedError", "Engine", ) ( SAConnection, Error, ArgumentError, InvalidRequestError, NoSuchColumnError, ResourceClosedError, create_engine, Engine, ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/connection.py0000644000175100001710000003677600000000000016413 0ustar00runnerdockerimport asyncio import contextlib import weakref from sqlalchemy.sql import ClauseElement from sqlalchemy.sql.ddl import DDLElement from sqlalchemy.sql.dml import UpdateBase from ..utils import _ContextManager, _IterableContextManager from . 
import exc from .result import ResultProxy from .transaction import ( NestedTransaction, RootTransaction, Transaction, TwoPhaseTransaction, ) async def _commit_transaction_if_active(t: Transaction) -> None: if t.is_active: await t.commit() async def _rollback_transaction(t: Transaction) -> None: await t.rollback() async def _close_result_proxy(c: "ResultProxy") -> None: c.close() class SAConnection: _QUERY_COMPILE_KWARGS = (("render_postcompile", True),) __slots__ = ( "_connection", "_transaction", "_savepoint_seq", "_engine", "_dialect", "_cursors", "_query_compile_kwargs", ) def __init__(self, connection, engine): self._connection = connection self._transaction = None self._savepoint_seq = 0 self._engine = engine self._dialect = engine.dialect self._cursors = weakref.WeakSet() self._query_compile_kwargs = dict(self._QUERY_COMPILE_KWARGS) def execute(self, query, *multiparams, **params): """Executes a SQL query with optional parameters. query - a SQL query string or any sqlalchemy expression. *multiparams/**params - represent bound parameter values to be used in the execution. Typically, the format is a dictionary passed to *multiparams: await conn.execute( table.insert(), {"id":1, "value":"v1"}, ) ...or individual key/values interpreted by **params:: await conn.execute( table.insert(), id=1, value="v1" ) In the case that a plain SQL string is passed, a tuple or individual values in \\*multiparams may be passed:: await conn.execute( "INSERT INTO table (id, value) VALUES (%d, %s)", (1, "v1") ) await conn.execute( "INSERT INTO table (id, value) VALUES (%s, %s)", 1, "v1" ) Returns ResultProxy instance with results of SQL query execution. """ coro = self._execute(query, *multiparams, **params) return _IterableContextManager[ResultProxy](coro, _close_result_proxy) async def _open_cursor(self): if self._connection is None: raise exc.ResourceClosedError("This connection is closed.") cursor = await self._connection.cursor() self._cursors.add(cursor) return cursor def _close_cursor(self, cursor): self._cursors.remove(cursor) cursor.close() async def _execute(self, query, *multiparams, **params): cursor = await self._open_cursor() dp = _distill_params(multiparams, params) if len(dp) > 1: raise exc.ArgumentError("aiopg doesn't support executemany") elif dp: dp = dp[0] result_map = None if isinstance(query, str): await cursor.execute(query, dp) elif isinstance(query, ClauseElement): # parameters = compiled.params if not isinstance(query, DDLElement): compiled = query.compile( dialect=self._dialect, compile_kwargs=self._query_compile_kwargs, ) if dp and isinstance(dp, (list, tuple)): if isinstance(query, UpdateBase): dp = { c.key: pval for c, pval in zip(query.table.c, dp) } else: raise exc.ArgumentError( "Don't mix sqlalchemy SELECT " "clause with positional " "parameters" ) compiled_parameters = [compiled.construct_params(dp)] processed_parameters = [] processors = compiled._bind_processors for compiled_params in compiled_parameters: params = { key: ( processors[key](compiled_params[key]) if key in processors else compiled_params[key] ) for key in compiled_params } processed_parameters.append(params) post_processed_params = self._dialect.execute_sequence_format( processed_parameters ) # _result_columns is a private API of Compiled, # but I couldn't find any public API exposing this data. 
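# In SQLAlchemy 1.3/1.4 each _result_columns entry is a
# (keyname, name, objects, type) tuple in statement-column order;
# ResultMetaData.result_map() in result.py reads fields 0, 2 and 3
# of each entry.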
result_map = compiled._result_columns else: compiled = query.compile(dialect=self._dialect) if dp: raise exc.ArgumentError( "Don't mix sqlalchemy DDL clause " "and execution with parameters" ) post_processed_params = [compiled.construct_params()] result_map = None await cursor.execute(str(compiled), post_processed_params[0]) else: raise exc.ArgumentError( "sql statement should be str or " "SQLAlchemy data " "selection/modification clause" ) return ResultProxy(self, cursor, self._dialect, result_map) async def scalar(self, query, *multiparams, **params): """Executes a SQL query and returns a scalar value.""" res = await self.execute(query, *multiparams, **params) return await res.scalar() @property def closed(self): """The read-only property that returns True if the connection is closed.""" return self.connection is None or self.connection.closed @property def connection(self): return self._connection def begin(self, isolation_level=None, readonly=False, deferrable=False): """Begin a transaction and return a transaction handle. isolation_level - The isolation level of the transaction, should be one of 'SERIALIZABLE', 'REPEATABLE READ', 'READ COMMITTED', 'READ UNCOMMITTED', default (None) is 'READ COMMITTED' readonly - The transaction is read only deferrable - The transaction may block when acquiring data before running without the overhead of SERIALIZABLE, has no effect unless the transaction is both SERIALIZABLE and readonly The returned object is an instance of Transaction. This object represents the "scope" of the transaction, which completes when either the .rollback or .commit method is called. Nested calls to .begin on the same SAConnection instance will return new Transaction objects that represent an emulated transaction within the scope of the enclosing transaction, that is:: trans = await conn.begin() # outermost transaction trans2 = await conn.begin() # "nested" await trans2.commit() # does nothing await trans.commit() # actually commits Calls to .commit only have an effect when invoked via the outermost Transaction object, though the .rollback method of any of the Transaction objects will roll back the transaction. See also: .begin_nested - use a SAVEPOINT .begin_twophase - use a two phase/XA transaction """ coro = self._begin(isolation_level, readonly, deferrable) return _ContextManager[Transaction]( coro, _commit_transaction_if_active, _rollback_transaction ) async def _begin(self, isolation_level, readonly, deferrable): if self._transaction is None: self._transaction = RootTransaction(self) await self._begin_impl(isolation_level, readonly, deferrable) return self._transaction return Transaction(self, self._transaction) async def _begin_impl(self, isolation_level, readonly, deferrable): stmt = "BEGIN" if isolation_level is not None: stmt += f" ISOLATION LEVEL {isolation_level}" if readonly: stmt += " READ ONLY" if deferrable: stmt += " DEFERRABLE" cursor = await self._open_cursor() try: await cursor.execute(stmt) finally: self._close_cursor(cursor) async def _commit_impl(self): cursor = await self._open_cursor() try: await cursor.execute("COMMIT") finally: self._close_cursor(cursor) self._transaction = None async def _rollback_impl(self): try: if self._connection.closed: return cursor = await self._open_cursor() try: await cursor.execute("ROLLBACK") finally: self._close_cursor(cursor) finally: self._transaction = None def begin_nested(self): """Begin a nested transaction and return a transaction handle. The returned object is an instance of :class:`.NestedTransaction`.
Nested transactions require SAVEPOINT support in the underlying database. Any transaction in the hierarchy may .commit() and .rollback(), however the outermost transaction still controls the overall .commit() or .rollback() of the transaction as a whole. """ coro = self._begin_nested() return _ContextManager( coro, _commit_transaction_if_active, _rollback_transaction ) async def _begin_nested(self): if self._transaction is None: self._transaction = RootTransaction(self) await self._begin_impl(None, False, False) else: self._transaction = NestedTransaction(self, self._transaction) self._transaction._savepoint = await self._savepoint_impl() return self._transaction async def _savepoint_impl(self): self._savepoint_seq += 1 name = f"aiopg_sa_savepoint_{self._savepoint_seq}" cursor = await self._open_cursor() try: await cursor.execute(f"SAVEPOINT {name}") return name finally: self._close_cursor(cursor) async def _rollback_to_savepoint_impl(self, name, parent): try: if self._connection.closed: return cursor = await self._open_cursor() try: await cursor.execute(f"ROLLBACK TO SAVEPOINT {name}") finally: self._close_cursor(cursor) finally: self._transaction = parent async def _release_savepoint_impl(self, name, parent): cursor = await self._open_cursor() try: await cursor.execute(f"RELEASE SAVEPOINT {name}") finally: self._close_cursor(cursor) self._transaction = parent async def begin_twophase(self, xid=None): """Begin a two-phase or XA transaction and return a transaction handle. The returned object is an instance of TwoPhaseTransaction, which in addition to the methods provided by Transaction, also provides a TwoPhaseTransaction.prepare() method. xid - the two phase transaction id. If not supplied, a random id will be generated. """ if self._transaction is not None: raise exc.InvalidRequestError( "Cannot start a two phase transaction when a transaction " "is already in progress." ) if xid is None: xid = self._dialect.create_xid() self._transaction = TwoPhaseTransaction(self, xid) await self._begin_impl(None, False, False) return self._transaction async def _prepare_twophase_impl(self, xid): await self.execute(f"PREPARE TRANSACTION {xid!r}") async def recover_twophase(self): """Return a list of prepared twophase transaction ids.""" result = await self.execute("SELECT gid FROM pg_prepared_xacts") return [row[0] for row in result] async def rollback_prepared(self, xid, *, is_prepared=True): """Rollback prepared twophase transaction.""" if is_prepared: await self.execute(f"ROLLBACK PREPARED {xid!r}") else: await self._rollback_impl() async def commit_prepared(self, xid, *, is_prepared=True): """Commit prepared twophase transaction.""" if is_prepared: await self.execute(f"COMMIT PREPARED {xid!r}") else: await self._commit_impl() @property def in_transaction(self): """Return True if a transaction is in progress.""" return self._transaction is not None and self._transaction.is_active async def close(self): """Close this SAConnection. This results in a release of the underlying database resources, that is, the underlying connection referenced internally. The underlying connection is typically restored back to the connection-holding Pool referenced by the Engine that produced this SAConnection. Any transactional state present on the underlying connection is also unconditionally released by calling the Transaction.rollback() method. After .close() is called, the SAConnection is permanently in a closed state, and will allow no further operations.
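For illustration, a minimal sketch (``engine`` here is an assumed aiopg.sa Engine instance)::

    conn = await engine.acquire()
    try:
        await conn.execute("SELECT 1")
    finally:
        await conn.close()  # rolls back pending state, returns the connection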
""" if self.connection is None: return await asyncio.shield(self._close()) async def _close(self): if self._transaction is not None: with contextlib.suppress(Exception): await self._transaction.rollback() self._transaction = None for cursor in self._cursors: cursor.close() self._cursors.clear() if self._engine is not None: with contextlib.suppress(Exception): await self._engine.release(self) self._connection = None self._engine = None def _distill_params(multiparams, params): """Given arguments from the calling form *multiparams, **params, return a list of bind parameter structures, usually a list of dictionaries. In the case of 'raw' execution which accepts positional parameters, it may be a list of tuples or lists. """ if not multiparams: if params: return [params] else: return [] elif len(multiparams) == 1: zero = multiparams[0] if isinstance(zero, (list, tuple)): if ( not zero or hasattr(zero[0], "__iter__") and not hasattr(zero[0], "strip") ): # execute(stmt, [{}, {}, {}, ...]) # execute(stmt, [(), (), (), ...]) return zero else: # execute(stmt, ("value", "value")) return [zero] elif hasattr(zero, "keys"): # execute(stmt, {"key":"value"}) return [zero] else: # execute(stmt, "value") return [[zero]] else: if hasattr(multiparams[0], "__iter__") and not hasattr( multiparams[0], "strip" ): return multiparams else: return [multiparams] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/engine.py0000644000175100001710000001466500000000000015512 0ustar00runnerdockerimport asyncio import json import aiopg from ..connection import TIMEOUT from ..utils import _ContextManager, get_running_loop from .connection import SAConnection try: from sqlalchemy.dialects.postgresql.psycopg2 import ( PGCompiler_psycopg2, PGDialect_psycopg2, ) except ImportError: # pragma: no cover raise ImportError("aiopg.sa requires sqlalchemy") class APGCompiler_psycopg2(PGCompiler_psycopg2): def construct_params(self, params=None, _group_number=None, _check=True): pd = super().construct_params(params, _group_number, _check) for column in self.prefetch: pd[column.key] = self._exec_default(column.default) return pd def _exec_default(self, default): if default.is_callable: return default.arg(self.dialect) else: return default.arg def get_dialect(json_serializer=json.dumps, json_deserializer=lambda x: x): dialect = PGDialect_psycopg2( json_serializer=json_serializer, json_deserializer=json_deserializer ) dialect.statement_compiler = APGCompiler_psycopg2 dialect.implicit_returning = True dialect.supports_native_enum = True dialect.supports_smallserial = True # 9.2+ dialect._backslash_escapes = False dialect.supports_sane_multi_rowcount = True # psycopg 2.0.9+ dialect._has_native_hstore = True return dialect _dialect = get_dialect() def create_engine( dsn=None, *, minsize=1, maxsize=10, dialect=_dialect, timeout=TIMEOUT, pool_recycle=-1, **kwargs ): """A coroutine for Engine creation. Returns Engine instance with embedded connection pool. The pool has *minsize* opened connections to PostgreSQL server. 
""" coro = _create_engine( dsn=dsn, minsize=minsize, maxsize=maxsize, dialect=dialect, timeout=timeout, pool_recycle=pool_recycle, **kwargs ) return _ContextManager(coro, _close_engine) async def _create_engine( dsn=None, *, minsize=1, maxsize=10, dialect=_dialect, timeout=TIMEOUT, pool_recycle=-1, **kwargs ): pool = await aiopg.create_pool( dsn, minsize=minsize, maxsize=maxsize, timeout=timeout, pool_recycle=pool_recycle, **kwargs ) conn = await pool.acquire() try: real_dsn = conn.dsn return Engine(dialect, pool, real_dsn) finally: await pool.release(conn) async def _close_engine(engine: "Engine") -> None: engine.close() await engine.wait_closed() async def _close_connection(c: SAConnection) -> None: await c.close() class Engine: """Connects a aiopg.Pool and sqlalchemy.engine.interfaces.Dialect together to provide a source of database connectivity and behavior. An Engine object is instantiated publicly using the create_engine coroutine. """ __slots__ = ("_dialect", "_pool", "_dsn", "_loop") def __init__(self, dialect, pool, dsn): self._dialect = dialect self._pool = pool self._dsn = dsn self._loop = get_running_loop() @property def dialect(self): """An dialect for engine.""" return self._dialect @property def name(self): """A name of the dialect.""" return self._dialect.name @property def driver(self): """A driver of the dialect.""" return self._dialect.driver @property def dsn(self): """DSN connection info""" return self._dsn @property def timeout(self): return self._pool.timeout @property def minsize(self): return self._pool.minsize @property def maxsize(self): return self._pool.maxsize @property def size(self): return self._pool.size @property def freesize(self): return self._pool.freesize @property def closed(self): return self._pool.closed def close(self): """Close engine. Mark all engine connections to be closed on getting back to pool. Closed engine doesn't allow to acquire new connections. """ self._pool.close() def terminate(self): """Terminate engine. Terminate engine pool with instantly closing all acquired connections also. """ self._pool.terminate() async def wait_closed(self): """Wait for closing all engine's connections.""" await self._pool.wait_closed() def acquire(self): """Get a connection from pool.""" coro = self._acquire() return _ContextManager[SAConnection](coro, _close_connection) async def _acquire(self): raw = await self._pool.acquire() return SAConnection(raw, self) def release(self, conn): return self._pool.release(conn.connection) def __enter__(self): raise RuntimeError( '"await" should be used as context manager expression' ) def __exit__(self, *args): # This must exist because __enter__ exists, even though that # always raises; that's how the with-statement works. pass # pragma: nocover def __await__(self): # This is not a coroutine. It is meant to enable the idiom: # # with (await engine) as conn: # # # as an alternative to: # # conn = await engine.acquire() # try: # # finally: # engine.release(conn) conn = yield from self._acquire().__await__() return _ConnectionContextManager(conn, self._loop) async def __aenter__(self): return self async def __aexit__(self, exc_type, exc_val, exc_tb): self.close() await self.wait_closed() class _ConnectionContextManager: """Context manager. 
This enables the following idiom for acquiring and releasing a connection around a block: async with engine as conn: cur = await conn.cursor() while failing loudly when accidentally using: with engine: """ __slots__ = ("_conn", "_loop") def __init__(self, conn: SAConnection, loop: asyncio.AbstractEventLoop): self._conn = conn self._loop = loop def __enter__(self): return self._conn def __exit__(self, *args): asyncio.ensure_future(self._conn.close(), loop=self._loop) self._conn = None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/exc.py0000644000175100001710000000127100000000000015011 0ustar00runnerdockerclass Error(Exception): """Generic error class.""" class ArgumentError(Error): """Raised when an invalid or conflicting function argument is supplied. This error generally corresponds to construction time state errors. """ class InvalidRequestError(ArgumentError): """aiopg.sa was asked to do something it can't do. This error generally corresponds to runtime state errors. """ class NoSuchColumnError(KeyError, InvalidRequestError): """A nonexistent column is requested from a ``RowProxy``.""" class ResourceClosedError(InvalidRequestError): """An operation was requested from a connection, cursor, or other object that's in a closed state.""" ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/result.py0000644000175100001710000003450000000000000015551 0ustar00runnerdockerimport weakref from collections.abc import Mapping, Sequence from sqlalchemy.sql import expression, sqltypes from . import exc from .utils import SQLALCHEMY_VERSION if SQLALCHEMY_VERSION >= ["1", "4"]: from sqlalchemy.util import string_or_unprintable else: from sqlalchemy.sql.expression import ( _string_or_unprintable as string_or_unprintable, ) class RowProxy(Mapping): __slots__ = ("_result_proxy", "_row", "_processors", "_keymap") def __init__(self, result_proxy, row, processors, keymap): """RowProxy objects are constructed by ResultProxy objects.""" self._result_proxy = result_proxy self._row = row self._processors = processors self._keymap = keymap def __iter__(self): return iter(self._result_proxy.keys) def __len__(self): return len(self._row) def __getitem__(self, key): try: processor, obj, index = self._keymap[key] except KeyError: processor, obj, index = self._result_proxy._key_fallback(key) # Do we need slicing at all? RowProxy now is Mapping not Sequence # except TypeError: # if isinstance(key, slice): # l = [] # for processor, value in zip(self._processors[key], # self._row[key]): # if processor is None: # l.append(value) # else: # l.append(processor(value)) # return tuple(l) # else: # raise if index is None: raise exc.InvalidRequestError( f"Ambiguous column name {key!r} in result set! " f"try 'use_labels' option on select statement." 
) if processor is not None: return processor(self._row[index]) else: return self._row[index] def __getattr__(self, name): try: return self[name] except KeyError as e: raise AttributeError(e.args[0]) def __contains__(self, key): return self._result_proxy._has_key(self._row, key) __hash__ = None def __eq__(self, other): if isinstance(other, RowProxy): return self.as_tuple() == other.as_tuple() elif isinstance(other, Sequence): return self.as_tuple() == other else: return NotImplemented def __ne__(self, other): return not self == other def as_tuple(self): return tuple(self[k] for k in self) def __repr__(self): return repr(self.as_tuple()) class ResultMetaData: """Handle cursor.description, applying additional info from an execution context.""" def __init__(self, result_proxy, cursor_description): self._processors = processors = [] map_type, map_column_name = self.result_map(result_proxy._result_map) # We do not strictly need to store the processor in the key mapping, # though it is faster in the Python version (probably because of the # saved attribute lookup self._processors) self._keymap = keymap = {} self.keys = [] dialect = result_proxy.dialect # `dbapi_type_map` property removed in SQLAlchemy 1.2+. # Usage of `getattr` only needed for backward compatibility with # older versions of SQLAlchemy. typemap = getattr(dialect, "dbapi_type_map", {}) assert ( dialect.case_sensitive ), "Doesn't support case insensitive database connection" # high precedence key values. primary_keymap = {} assert ( not dialect.description_encoding ), "psycopg in py3k should not use this" for i, rec in enumerate(cursor_description): colname = rec[0] coltype = rec[1] # PostgreSQL doesn't require this. # if dialect.requires_name_normalize: # colname = dialect.normalize_name(colname) name, obj, type_ = ( map_column_name.get(colname, colname), None, map_type.get(colname, typemap.get(coltype, sqltypes.NULLTYPE)), ) processor = type_._cached_result_processor(dialect, coltype) processors.append(processor) rec = (processor, obj, i) # indexes as keys. This is only needed for the Python version of # RowProxy (the C version uses a faster path for integer indexes). primary_keymap[i] = rec # populate primary keymap, looking for conflicts. if primary_keymap.setdefault(name, rec) != rec: # place a record that doesn't have the "index" - this # is interpreted later as an AmbiguousColumnError, # but only when actually accessed. Columns # colliding by name is not a problem if those names # aren't used; integer access is always # unambiguous. primary_keymap[name] = rec = (None, obj, None) self.keys.append(name) if obj: for o in obj: keymap[o] = rec # technically we should be doing this but we # are saving on callcounts by not doing so. # if keymap.setdefault(o, rec) is not rec: # keymap[o] = (None, obj, None) # overwrite keymap values with those of the # high precedence keymap. 
keymap.update(primary_keymap) def result_map(self, data_map): data_map = data_map or {} map_type = {} map_column_name = {} for elem in data_map: name = elem[0] priority_name = getattr(elem[2][0], "key", None) or name map_type[name] = elem[3] # type column map_column_name[name] = priority_name return map_type, map_column_name def _key_fallback(self, key, raiseerr=True): map = self._keymap result = None if isinstance(key, str): result = map.get(key) # fallback for targeting a ColumnElement to a textual expression # this is a rare use case which only occurs when matching text() # or colummn('name') constructs to ColumnElements, or after a # pickle/unpickle roundtrip elif isinstance(key, expression.ColumnElement): if key._label and key._label in map: result = map[key._label] elif hasattr(key, "key") and key.key in map: # match is only on name. result = map[key.key] # search extra hard to make sure this # isn't a column/label name overlap. # this check isn't currently available if the row # was unpickled. if result is not None and result[1] is not None: for obj in result[1]: if key._compare_name_for_result(obj): break else: result = None if result is None: if raiseerr: raise exc.NoSuchColumnError( f"Could not locate column in row for column " f"{string_or_unprintable(key)!r}" ) else: return None else: map[key] = result return result def _has_key(self, row, key): if key in self._keymap: return True else: return self._key_fallback(key, False) is not None class ResultProxy: """Wraps a DB-API cursor object to provide easier access to row columns. Individual columns may be accessed by their integer position, case-insensitive column name, or by sqlalchemy schema.Column object. e.g.: row = fetchone() col1 = row[0] # access via integer position col2 = row['col2'] # access via name col3 = row[mytable.c.mycol] # access via Column object. ResultProxy also handles post-processing of result column data using sqlalchemy TypeEngine objects, which are referenced from the originating SQL statement that produced this result set. """ def __init__(self, connection, cursor, dialect, result_map=None): self._dialect = dialect self._result_map = result_map self._cursor = cursor self._connection = connection self._rowcount = cursor.rowcount self._metadata = None self._weak = None self._init_metadata() @property def dialect(self): """SQLAlchemy dialect.""" return self._dialect @property def cursor(self): return self._cursor def keys(self): """Return the current set of string keys for rows.""" if self._metadata: return tuple(self._metadata.keys) else: return () @property def rowcount(self): """Return the 'rowcount' for this result. The 'rowcount' reports the number of rows *matched* by the WHERE criterion of an UPDATE or DELETE statement. .. note:: Notes regarding .rowcount: * This attribute returns the number of rows *matched*, which is not necessarily the same as the number of rows that were actually *modified* - an UPDATE statement, for example, may have no net change on a given row if the SET values given are the same as those present in the row already. Such a row would be matched but not modified. * .rowcount is *only* useful in conjunction with an UPDATE or DELETE statement. Contrary to what the Python DBAPI says, it does *not* return the number of rows available from the results of a SELECT statement as DBAPIs cannot support this functionality when rows are unbuffered. * Statements that use RETURNING may not return a correct rowcount. 
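For example (a sketch; ``tbl`` is an assumed SQLAlchemy Table and ``conn`` an open SAConnection)::

    result = await conn.execute(
        tbl.update().where(tbl.c.id == 1).values(val='new')
    )
    matched = result.rowcount  # rows matched by the WHERE clause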
""" return self._rowcount def _init_metadata(self): cursor_description = self.cursor.description if cursor_description is not None: self._metadata = ResultMetaData(self, cursor_description) self._weak = weakref.ref(self, lambda _: self.close()) else: self.close() @property def returns_rows(self): """True if this ResultProxy returns rows. I.e. if it is legal to call the methods .fetchone(), .fetchmany() and .fetchall()`. """ return self._metadata is not None @property def closed(self): if self._cursor is None: return True return bool(self._cursor.closed) def close(self): """Close this ResultProxy. Closes the underlying DBAPI cursor corresponding to the execution. Note that any data cached within this ResultProxy is still available. For some types of results, this may include buffered rows. If this ResultProxy was generated from an implicit execution, the underlying Connection will also be closed (returns the underlying DBAPI connection to the connection pool.) This method is called automatically when: * all result rows are exhausted using the fetchXXX() methods. * cursor.description is None. """ if self._cursor is None: return if not self._cursor.closed: self._cursor.close() self._cursor = None self._weak = None def __aiter__(self): return self async def __anext__(self): ret = await self.fetchone() if ret is not None: return ret raise StopAsyncIteration def _non_result(self): if self._metadata is None: raise exc.ResourceClosedError( "This result object does not return rows. " "It has been closed automatically." ) else: raise exc.ResourceClosedError("This result object is closed.") def _process_rows(self, rows): process_row = RowProxy metadata = self._metadata keymap = metadata._keymap processors = metadata._processors return [process_row(metadata, row, processors, keymap) for row in rows] async def fetchall(self): """Fetch all rows, just like DB-API cursor.fetchall().""" try: rows = await self.cursor.fetchall() except AttributeError: self._non_result() else: res = self._process_rows(rows) self.close() return res async def fetchone(self): """Fetch one row, just like DB-API cursor.fetchone(). If a row is present, the cursor remains open after this is called. Else the cursor is automatically closed and None is returned. """ try: row = await self.cursor.fetchone() except AttributeError: self._non_result() else: if row is not None: return self._process_rows([row])[0] else: self.close() return None async def fetchmany(self, size=None): """Fetch many rows, just like DB-API cursor.fetchmany(size=cursor.arraysize). If rows are present, the cursor remains open after this is called. Else the cursor is automatically closed and an empty list is returned. """ try: if size is None: rows = await self.cursor.fetchmany() else: rows = await self.cursor.fetchmany(size) except AttributeError: self._non_result() else: res = self._process_rows(rows) if len(res) == 0: self.close() return res async def first(self): """Fetch the first row and then close the result set unconditionally. Returns None if no row is present. """ if self._metadata is None: self._non_result() try: return await self.fetchone() finally: self.close() async def scalar(self): """Fetch the first column of the first row, and close the result set. Returns None if no row is present. 
""" row = await self.first() if row is not None: return row[0] else: return None ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/transaction.py0000644000175100001710000001160200000000000016556 0ustar00runnerdockerfrom . import exc class Transaction: """Represent a database transaction in progress. The Transaction object is procured by calling the SAConnection.begin() method of SAConnection: async with engine as conn: trans = await conn.begin() try: await conn.execute("insert into x (a, b) values (1, 2)") except Exception: await trans.rollback() else: await trans.commit() The object provides .rollback() and .commit() methods in order to control transaction boundaries. See also: SAConnection.begin(), SAConnection.begin_twophase(), SAConnection.begin_nested(). """ __slots__ = ("_connection", "_parent", "_is_active") def __init__(self, connection, parent): self._connection = connection self._parent = parent or self self._is_active = True @property def is_active(self): """Return ``True`` if a transaction is active.""" return self._is_active @property def connection(self): """Return transaction's connection (SAConnection instance).""" return self._connection async def close(self): """Close this transaction. If this transaction is the base transaction in a begin/commit nesting, the transaction will rollback(). Otherwise, the method returns. This is used to cancel a Transaction without affecting the scope of an enclosing transaction. """ if not self._parent._is_active: return if self._parent is self: await self.rollback() else: self._is_active = False async def rollback(self): """Roll back this transaction.""" if not self._parent._is_active: return await self._do_rollback() self._is_active = False async def _do_rollback(self): await self._parent.rollback() async def commit(self): """Commit this transaction.""" if not self._parent._is_active: raise exc.InvalidRequestError("This transaction is inactive") await self._do_commit() self._is_active = False async def _do_commit(self): pass async def __aenter__(self): return self async def __aexit__(self, exc_type, exc_val, exc_tb): if exc_type: await self.rollback() elif self._is_active: await self.commit() class RootTransaction(Transaction): __slots__ = () def __init__(self, connection): super().__init__(connection, None) async def _do_rollback(self): await self._connection._rollback_impl() async def _do_commit(self): await self._connection._commit_impl() class NestedTransaction(Transaction): """Represent a 'nested', or SAVEPOINT transaction. A new NestedTransaction object may be procured using the SAConnection.begin_nested() method. The interface is the same as that of Transaction class. """ __slots__ = ("_savepoint",) def __init__(self, connection, parent): super().__init__(connection, parent) self._savepoint = None async def _do_rollback(self): assert self._savepoint is not None, "Broken transaction logic" if self._is_active: await self._connection._rollback_to_savepoint_impl( self._savepoint, self._parent ) async def _do_commit(self): assert self._savepoint is not None, "Broken transaction logic" if self._is_active: await self._connection._release_savepoint_impl( self._savepoint, self._parent ) class TwoPhaseTransaction(Transaction): """Represent a two-phase transaction. A new TwoPhaseTransaction object may be procured using the SAConnection.begin_twophase() method. The interface is the same as that of Transaction class with the addition of the .prepare() method. 
""" __slots__ = ("_is_prepared", "_xid") def __init__(self, connection, xid): super().__init__(connection, None) self._is_prepared = False self._xid = xid @property def xid(self): """Returns twophase transaction id.""" return self._xid async def prepare(self): """Prepare this TwoPhaseTransaction. After a PREPARE, the transaction can be committed. """ if not self._parent.is_active: raise exc.InvalidRequestError("This transaction is inactive") await self._connection._prepare_twophase_impl(self._xid) self._is_prepared = True async def _do_rollback(self): await self._connection._rollback_twophase_impl( self._xid, is_prepared=self._is_prepared ) async def _do_commit(self): await self._connection._commit_twophase_impl( self._xid, is_prepared=self._is_prepared ) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/sa/utils.py0000644000175100001710000000011200000000000015363 0ustar00runnerdockerimport sqlalchemy SQLALCHEMY_VERSION = sqlalchemy.__version__.split(".") ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784389.0 aiopg-1.3.3/aiopg/utils.py0000644000175100001710000001230400000000000014766 0ustar00runnerdockerimport asyncio import sys from types import TracebackType from typing import ( Any, Awaitable, Callable, Coroutine, Generator, Generic, Optional, Type, TypeVar, Union, ) if sys.version_info >= (3, 7, 0): __get_running_loop = asyncio.get_running_loop else: def __get_running_loop() -> asyncio.AbstractEventLoop: loop = asyncio.get_event_loop() if not loop.is_running(): raise RuntimeError("no running event loop") return loop def get_running_loop() -> asyncio.AbstractEventLoop: return __get_running_loop() def create_completed_future( loop: asyncio.AbstractEventLoop, ) -> "asyncio.Future[Any]": future = loop.create_future() future.set_result(None) return future _TObj = TypeVar("_TObj") _Release = Callable[[_TObj], Awaitable[None]] class _ContextManager(Coroutine[Any, None, _TObj], Generic[_TObj]): __slots__ = ("_coro", "_obj", "_release", "_release_on_exception") def __init__( self, coro: Coroutine[Any, None, _TObj], release: _Release[_TObj], release_on_exception: Optional[_Release[_TObj]] = None, ): self._coro = coro self._obj: Optional[_TObj] = None self._release = release self._release_on_exception = ( release if release_on_exception is None else release_on_exception ) def send(self, value: Any) -> "Any": return self._coro.send(value) def throw( # type: ignore self, typ: Type[BaseException], val: Optional[Union[BaseException, object]] = None, tb: Optional[TracebackType] = None, ) -> Any: if val is None: return self._coro.throw(typ) if tb is None: return self._coro.throw(typ, val) return self._coro.throw(typ, val, tb) def close(self) -> None: self._coro.close() def __await__(self) -> Generator[Any, None, _TObj]: return self._coro.__await__() async def __aenter__(self) -> _TObj: self._obj = await self._coro assert self._obj return self._obj async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType], ) -> None: if self._obj is None: return try: if exc_type is not None: await self._release_on_exception(self._obj) else: await self._release(self._obj) finally: self._obj = None class _IterableContextManager(_ContextManager[_TObj]): __slots__ = () def __init__(self, *args: Any, **kwargs: Any): super().__init__(*args, **kwargs) def __aiter__(self) -> "_IterableContextManager[_TObj]": return self async def __anext__(self) -> _TObj: if 
self._obj is None: self._obj = await self._coro try: return await self._obj.__anext__() # type: ignore except StopAsyncIteration: try: await self._release(self._obj) finally: self._obj = None raise class ClosableQueue: """ Proxy object for an asyncio.Queue that is "closable". When the ClosableQueue is closed with an exception object as the parameter, subsequent or ongoing attempts to read from the queue will result in that exception being raised. Note: closing a queue with an exception still allows reading any items pending in the queue. The close exception is raised only once all items are consumed. """ __slots__ = ("_loop", "_queue", "_close_event") def __init__( self, queue: asyncio.Queue, # type: ignore loop: asyncio.AbstractEventLoop, ): self._loop = loop self._queue = queue self._close_event = loop.create_future() # suppress Future exception was never retrieved self._close_event.add_done_callback(lambda f: f.exception()) def close(self, exception: Exception) -> None: if self._close_event.done(): return self._close_event.set_exception(exception) async def get(self) -> Any: if self._close_event.done(): try: return self._queue.get_nowait() except asyncio.QueueEmpty: return self._close_event.result() get = asyncio.ensure_future(self._queue.get(), loop=self._loop) try: await asyncio.wait( [get, self._close_event], return_when=asyncio.FIRST_COMPLETED ) except asyncio.CancelledError: get.cancel() raise if get.done(): return get.result() try: return self._close_event.result() finally: get.cancel() def empty(self) -> bool: return self._queue.empty() def qsize(self) -> int: return self._queue.qsize() def get_nowait(self) -> Any: if self._close_event.done(): try: return self._queue.get_nowait() except asyncio.QueueEmpty: return self._close_event.result() return self._queue.get_nowait() ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1635784409.71879 aiopg-1.3.3/aiopg.egg-info/0000755000175100001710000000000000000000000014746 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1635784408.0 aiopg-1.3.3/aiopg.egg-info/PKG-INFO0000644000175100001710000003370000000000000016046 0ustar00runnerdockerMetadata-Version: 2.1 Name: aiopg Version: 1.3.3 Summary: Postgres integration with asyncio.
Home-page: https://aiopg.readthedocs.io Author: Andrew Svetlov Author-email: andrew.svetlov@gmail.com Maintainer: Andrew Svetlov , Alexey Firsov , Alexey Popravka , Yury Pliner Maintainer-email: virmir49@gmail.com License: BSD Download-URL: https://pypi.python.org/pypi/aiopg Project-URL: Chat: Gitter, https://gitter.im/aio-libs/Lobby Project-URL: CI: GA, https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI Project-URL: Coverage: codecov, https://codecov.io/gh/aio-libs/aiopg Project-URL: Docs: RTD, https://aiopg.readthedocs.io Project-URL: GitHub: issues, https://github.com/aio-libs/aiopg/issues Project-URL: GitHub: repo, https://github.com/aio-libs/aiopg Platform: macOS Platform: POSIX Platform: Windows Classifier: License :: OSI Approved :: BSD License Classifier: Intended Audience :: Developers Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3 :: Only Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Operating System :: POSIX Classifier: Operating System :: MacOS :: MacOS X Classifier: Operating System :: Microsoft :: Windows Classifier: Environment :: Web Environment Classifier: Development Status :: 5 - Production/Stable Classifier: Topic :: Database Classifier: Topic :: Database :: Front-Ends Classifier: Framework :: AsyncIO Requires-Python: >=3.6 Description-Content-Type: text/x-rst Provides-Extra: sa License-File: LICENSE aiopg ===== .. image:: https://github.com/aio-libs/aiopg/workflows/CI/badge.svg :target: https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI .. image:: https://codecov.io/gh/aio-libs/aiopg/branch/master/graph/badge.svg :target: https://codecov.io/gh/aio-libs/aiopg .. image:: https://badges.gitter.im/Join%20Chat.svg :target: https://gitter.im/aio-libs/Lobby :alt: Chat on Gitter **aiopg** is a library for accessing a PostgreSQL_ database from the asyncio_ (PEP-3156/tulip) framework. It wraps asynchronous features of the Psycopg database driver. Example ------- .. code:: python import asyncio import aiopg dsn = 'dbname=aiopg user=aiopg password=passwd host=127.0.0.1' async def go(): pool = await aiopg.create_pool(dsn) async with pool.acquire() as conn: async with conn.cursor() as cur: await cur.execute("SELECT 1") ret = [] async for row in cur: ret.append(row) assert ret == [(1,)] loop = asyncio.get_event_loop() loop.run_until_complete(go()) Example of SQLAlchemy optional integration ------------------------------------------ .. code:: python import asyncio from aiopg.sa import create_engine import sqlalchemy as sa metadata = sa.MetaData() tbl = sa.Table('tbl', metadata, sa.Column('id', sa.Integer, primary_key=True), sa.Column('val', sa.String(255))) async def create_table(engine): async with engine.acquire() as conn: await conn.execute('DROP TABLE IF EXISTS tbl') await conn.execute('''CREATE TABLE tbl ( id serial PRIMARY KEY, val varchar(255))''') async def go(): async with create_engine(user='aiopg', database='aiopg', host='127.0.0.1', password='passwd') as engine: async with engine.acquire() as conn: await conn.execute(tbl.insert().values(val='abc')) async for row in conn.execute(tbl.select()): print(row.id, row.val) loop = asyncio.get_event_loop() loop.run_until_complete(go()) .. _PostgreSQL: http://www.postgresql.org/ .. 
_asyncio: https://docs.python.org/3/library/asyncio.html Please use:: $ make test for executing the project's unittests. See https://aiopg.readthedocs.io/en/stable/contributing.html for details on how to set up your environment to run the tests. Changelog --------- 1.3.3 (2021-11-01) ^^^^^^^^^^^^^^^^^^ * Support async-timeout 4.0+ 1.3.2 (2021-10-07) ^^^^^^^^^^^^^^^^^^ 1.3.2b2 (2021-10-07) ^^^^^^^^^^^^^^^^^^^^ * Respect use_labels for select statement `#882 `_ 1.3.2b1 (2021-07-11) ^^^^^^^^^^^^^^^^^^^^ * Fix compatibility with SQLAlchemy >= 1.4 `#870 `_ 1.3.1 (2021-07-08) ^^^^^^^^^^^^^^^^^^ 1.3.1b2 (2021-07-06) ^^^^^^^^^^^^^^^^^^^^ * Suppress "Future exception was never retrieved" `#862 `_ 1.3.1b1 (2021-07-05) ^^^^^^^^^^^^^^^^^^^^ * Fix ClosableQueue.get on cancellation, close it on Connection.close `#859 `_ 1.3.0 (2021-06-30) ^^^^^^^^^^^^^^^^^^ 1.3.0b4 (2021-06-28) ^^^^^^^^^^^^^^^^^^^^ * Fix "Unable to detect disconnect when using NOTIFY/LISTEN" `#559 `_ 1.3.0b3 (2021-04-03) ^^^^^^^^^^^^^^^^^^^^ * Reformat using black `#814 `_ 1.3.0b2 (2021-04-02) ^^^^^^^^^^^^^^^^^^^^ * Type annotations `#813 `_ 1.3.0b1 (2021-03-30) ^^^^^^^^^^^^^^^^^^^^ * Raise ResourceClosedError if we try to open a cursor on a closed SAConnection `#811 `_ 1.3.0b0 (2021-03-25) ^^^^^^^^^^^^^^^^^^^^ * Fix compatibility with SA 1.4 for IN statement `#806 `_ 1.2.1 (2021-03-23) ^^^^^^^^^^^^^^^^^^ * Pop loop in connection init due to backward compatibility `#808 `_ 1.2.0b4 (2021-03-23) ^^^^^^^^^^^^^^^^^^^^ * Set max supported sqlalchemy version `#805 `_ 1.2.0b3 (2021-03-22) ^^^^^^^^^^^^^^^^^^^^ * Don't run ROLLBACK when the connection is closed `#778 `_ * Multiple cursors support `#801 `_ 1.2.0b2 (2020-12-21) ^^^^^^^^^^^^^^^^^^^^ * Fix IsolationLevel.read_committed and introduce IsolationLevel.default `#770 `_ * Fix python 3.8 warnings in tests `#771 `_ 1.2.0b1 (2020-12-16) ^^^^^^^^^^^^^^^^^^^^ * Deprecate blocking connection.cancel() method `#570 `_ 1.2.0b0 (2020-12-15) ^^^^^^^^^^^^^^^^^^^^ * Implement timeout on acquiring connection from pool `#766 `_ 1.1.0 (2020-12-10) ^^^^^^^^^^^^^^^^^^ 1.1.0b2 (2020-12-09) ^^^^^^^^^^^^^^^^^^^^ * Added missing slots to context managers `#763 `_ 1.1.0b1 (2020-12-07) ^^^^^^^^^^^^^^^^^^^^ * Fix on_connect multiple call on acquire `#552 `_ * Fix python 3.8 warnings `#622 `_ * Bump minimum psycopg version to 2.8.4 `#754 `_ * Fix Engine.release method to release connection in any way `#756 `_ 1.0.0 (2019-09-20) ^^^^^^^^^^^^^^^^^^ * Removal of an asynchronous call in favor of issues # 550 * Big editing of documentation and minor bugs #534 0.16.0 (2019-01-25) ^^^^^^^^^^^^^^^^^^^ * Fix select priority name `#525 `_ * Rename `psycopg2` to `psycopg2-binary` to fix deprecation warning `#507 `_ * Fix `#189 `_ hstore when using ReadDictCursor `#512 `_ * close cannot be used while an asynchronous query is underway `#452 `_ * sqlalchemy adapter trx begin allow transaction_mode `#498 `_ 0.15.0 (2018-08-14) ^^^^^^^^^^^^^^^^^^^ * Support Python 3.7 `#437 `_ 0.14.0 (2018-05-10) ^^^^^^^^^^^^^^^^^^^ * Add ``get_dialect`` func to have ability to pass ``json_serializer`` `#451 `_ 0.13.2 (2018-01-03) ^^^^^^^^^^^^^^^^^^^ * Fixed compatibility with SQLAlchemy 1.2.0 `#412 `_ * Added support for transaction isolation levels `#219 `_ 0.13.1 (2017-09-10) ^^^^^^^^^^^^^^^^^^^ * Added connection poll recycling logic `#373 `_ 0.13.0 (2016-12-02) ^^^^^^^^^^^^^^^^^^^ * Add `async with` support to `.begin_nested()` `#208 `_ * Fix connection.cancel() `#212 `_ `#223 `_ * Raise informative error on unexpected connection closing 
`#191 `_ * Added support for python types columns issues `#217 `_ * Added support for default values in SA table issues `#206 `_ 0.12.0 (2016-10-09) ^^^^^^^^^^^^^^^^^^^ * Add an on_connect callback parameter to pool `#141 `_ * Fixed connection to work under both windows and posix based systems `#142 `_ 0.11.0 (2016-09-12) ^^^^^^^^^^^^^^^^^^^ * Immediately remove callbacks from a closed file descriptor `#139 `_ * Drop Python 3.3 support 0.10.0 (2016-07-16) ^^^^^^^^^^^^^^^^^^^ * Refactor tests to use dockerized Postgres server `#107 `_ * Reduce default pool minsize to 1 `#106 `_ * Explicitly enumerate packages in setup.py `#85 `_ * Remove expired connections from pool on acquire `#116 `_ * Don't crash when Connection is GC'ed `#124 `_ * Use loop.create_future() if available 0.9.2 (2016-01-31) ^^^^^^^^^^^^^^^^^^ * Make pool.release return asyncio.Future, so we can wait on it in `__aexit__` `#102 `_ * Add support for uuid type `#103 `_ 0.9.1 (2016-01-17) ^^^^^^^^^^^^^^^^^^ * Documentation update `#101 `_ 0.9.0 (2016-01-14) ^^^^^^^^^^^^^^^^^^ * Add async context managers for transactions `#91 `_ * Support async iterator in ResultProxy `#92 `_ * Add async with for engine `#90 `_ 0.8.0 (2015-12-31) ^^^^^^^^^^^^^^^^^^ * Add PostgreSQL notification support `#58 `_ * Support pools with unlimited size `#59 `_ * Cancel current DB operation on asyncio timeout `#66 `_ * Add async with support for Pool, Connection, Cursor `#88 `_ 0.7.0 (2015-04-22) ^^^^^^^^^^^^^^^^^^ * Get rid of resource leak on connection failure. * Report ResourceWarning on non-closed connections. * Deprecate iteration protocol support in cursor and ResultProxy. * Release sa connection to pool on `connection.close()`. 0.6.0 (2015-02-03) ^^^^^^^^^^^^^^^^^^ * Accept dict, list, tuple, named and positional parameters in `SAConnection.execute()` 0.5.2 (2014-12-08) ^^^^^^^^^^^^^^^^^^ * Minor release, fixes a bug that leaves connection in broken state after `cursor.execute()` failure. 0.5.1 (2014-10-31) ^^^^^^^^^^^^^^^^^^ * Fix a bug for processing transactions in line. 0.5.0 (2014-10-31) ^^^^^^^^^^^^^^^^^^ * Add .terminate() to Pool and Engine * Reimplement connection pool (now pool size cannot be greater than pool.maxsize) * Add .close() and .wait_closed() to Pool and Engine * Add minsize, maxsize, size and freesize properties to sa.Engine * Support *echo* parameter for logging executed SQL commands * Connection.close() is not a coroutine (but we keep backward compatibility). 0.4.1 (2014-10-02) ^^^^^^^^^^^^^^^^^^ * make cursor iterable * update docs 0.4.0 (2014-10-02) ^^^^^^^^^^^^^^^^^^ * add timeouts for database operations. * Autoregister psycopg2 support for json data type. * Support JSON in aiopg.sa * Support ARRAY in aiopg.sa * Autoregister hstore support if present in connected DB * Support HSTORE in aiopg.sa 0.3.2 (2014-07-07) ^^^^^^^^^^^^^^^^^^ * change signature to cursor.execute(operation, parameters=None) to follow psycopg2 convention. 0.3.1 (2014-07-04) ^^^^^^^^^^^^^^^^^^ * Forward arguments to cursor constructor for pooled connections. 0.3.0 (2014-06-22) ^^^^^^^^^^^^^^^^^^ * Allow executing SQLAlchemy DDL statements. * Fix bug with race conditions on acquiring/releasing connections from pool. 0.2.3 (2014-06-12) ^^^^^^^^^^^^^^^^^^ * Fix bug in connection pool. 0.2.2 (2014-06-07) ^^^^^^^^^^^^^^^^^^ * Fix bug with passing parameters into SAConnection.execute when executing raw SQL expression. 0.2.1 (2014-05-08) ^^^^^^^^^^^^^^^^^^ * Close connection with invalid transaction status on returning to pool. 
aiopg-1.3.3/aiopg.egg-info/SOURCES.txt:

CHANGES.txt
LICENSE
MAINTAINERS.txt
MANIFEST.in
README.rst
setup.cfg
setup.py
aiopg/__init__.py
aiopg/connection.py
aiopg/log.py
aiopg/pool.py
aiopg/utils.py
aiopg.egg-info/PKG-INFO
aiopg.egg-info/SOURCES.txt
aiopg.egg-info/dependency_links.txt
aiopg.egg-info/requires.txt
aiopg.egg-info/top_level.txt
aiopg/sa/__init__.py
aiopg/sa/connection.py
aiopg/sa/engine.py
aiopg/sa/exc.py
aiopg/sa/result.py
aiopg/sa/transaction.py
aiopg/sa/utils.py

aiopg-1.3.3/aiopg.egg-info/dependency_links.txt:

(empty)

aiopg-1.3.3/aiopg.egg-info/requires.txt:

psycopg2-binary>=2.8.4
async_timeout<5.0,>=3.0

[sa]
sqlalchemy[postgresql_psycopg2binary]<1.5,>=1.3

aiopg-1.3.3/aiopg.egg-info/top_level.txt:

aiopg

aiopg-1.3.3/setup.cfg:

[tool:pytest]
timeout = 300

[coverage:run]
branch = true
source = aiopg,tests

[egg_info]
tag_build =
tag_date = 0

aiopg-1.3.3/setup.py:

import re
from pathlib import Path

from setuptools import setup, find_packages

install_requires = ["psycopg2-binary>=2.8.4", "async_timeout>=3.0,<5.0"]
extras_require = {"sa": ["sqlalchemy[postgresql_psycopg2binary]>=1.3,<1.5"]}


def read(*parts):
    # Read a file relative to this setup.py, stripping surrounding whitespace.
    return Path(__file__).resolve().parent.joinpath(*parts).read_text().strip()


def get_maintainers(path="MAINTAINERS.txt"):
    # Collapse the bulleted MAINTAINERS.txt entries into one comma-separated string.
    return ", ".join(x.strip().strip("*").strip() for x in read(path).splitlines())


def read_version():
    # Scrape __version__ from aiopg/__init__.py without importing the package
    # (a standalone sketch of this pattern follows setup.py below).
    regexp = re.compile(r"^__version__\W*=\W*\"([\d.abrc]+)\"")
    for line in read("aiopg", "__init__.py").splitlines():
        match = regexp.match(line)
        if match is not None:
            return match.group(1)
    raise RuntimeError("Cannot find version in aiopg/__init__.py")


def read_changelog(path="CHANGES.txt"):
    # Prepend an RST "Changelog" heading to the contents of CHANGES.txt.
    return f"Changelog\n---------\n\n{read(path)}"


classifiers = [
    "License :: OSI Approved :: BSD License",
    "Intended Audience :: Developers",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3 :: Only",
    "Programming Language :: Python :: 3.6",
    "Programming Language :: Python :: 3.7",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Operating System :: POSIX",
    "Operating System :: MacOS :: MacOS X",
    "Operating System :: Microsoft :: Windows",
    "Environment :: Web Environment",
    "Development Status :: 5 - Production/Stable",
    "Topic :: Database",
    "Topic :: Database :: Front-Ends",
"Framework :: AsyncIO", ] setup( name="aiopg", version=read_version(), description="Postgres integration with asyncio.", long_description="\n\n".join((read("README.rst"), read_changelog())), long_description_content_type="text/x-rst", classifiers=classifiers, platforms=["macOS", "POSIX", "Windows"], author="Andrew Svetlov", python_requires=">=3.6", project_urls={ "Chat: Gitter": "https://gitter.im/aio-libs/Lobby", "CI: GA": "https://github.com/aio-libs/aiopg/actions?query=workflow%3ACI", "Coverage: codecov": "https://codecov.io/gh/aio-libs/aiopg", "Docs: RTD": "https://aiopg.readthedocs.io", "GitHub: issues": "https://github.com/aio-libs/aiopg/issues", "GitHub: repo": "https://github.com/aio-libs/aiopg", }, author_email="andrew.svetlov@gmail.com", maintainer=get_maintainers(), maintainer_email="virmir49@gmail.com", url="https://aiopg.readthedocs.io", download_url="https://pypi.python.org/pypi/aiopg", license="BSD", packages=find_packages(), install_requires=install_requires, extras_require=extras_require, include_package_data=True, )