`_ bug.
- Previously, sending an empty string as the query raised an exception; that no longer
  happens.
Version 1.28.3, 2022-05-18
``````````````````````````
- Put back ``__version__`` attributes that were inadvertently removed.
Version 1.28.2, 2022-05-17
``````````````````````````
- Use a build system that's compliant with PEP 517.
Version 1.28.1, 2022-05-17
``````````````````````````
- When doing a ``COPY FROM``, if the ``stream`` parameter was an iterator of ``str``,
  pg8000 used to silently append a newline to the end. That no longer happens.
Version 1.28.0, 2022-05-17
``````````````````````````
- When using the ``COPY FROM`` SQL statement, allow the ``stream`` parameter to be an
iterable.
Version 1.27.1, 2022-05-16
``````````````````````````
- The ``seconds`` attribute of ``PGInterval`` is now always a ``float``, to cope with
fractional seconds.
- Updated the ``interval`` parsers for ``iso_8601`` and ``sql_standard`` to take
account of fractional seconds.
Version 1.27.0, 2022-05-16
``````````````````````````
- Previously, by default, if pg8000 received an ``interval`` from the server that was
  too big to fit into a ``datetime.timedelta``, an exception was raised. Now, if an
  interval is too big for ``datetime.timedelta``, a ``PGInterval`` is returned instead.
- pg8000 now supports all the output formats for an ``interval`` (``postgres``,
  ``postgres_verbose``, ``iso_8601`` and ``sql_standard``).
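As an illustration of the ``iso_8601`` output format mentioned above, here is a minimal
sketch of parsing the time part of such an interval, keeping fractional seconds as a
``float``. This is a simplified illustration only, not pg8000's actual parser (which
lives in ``PGInterval``), and it handles just the ``PT…H…M…S`` subset::

```python
import re
from datetime import timedelta

# Illustrative parser for a small subset of the iso_8601 interval output
# format, e.g. 'PT1H30M2.5S'. Not pg8000's implementation.
def parse_iso8601_time_part(text):
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?", text)
    if m is None:
        raise ValueError(f"can't parse {text!r}")
    hours = int(m.group(1) or 0)
    minutes = int(m.group(2) or 0)
    seconds = float(m.group(3) or 0)  # fractional seconds kept as a float
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

print(parse_iso8601_time_part("PT1H30M2.5S"))  # 1:30:02.500000
```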
Version 1.26.1, 2022-04-23
``````````````````````````
- Make sure all tests are run by the GitHub Actions tests on commit.
- Remove support for Python 3.6.
- Remove support for PostgreSQL 9.6.
Version 1.26.0, 2022-04-18
``````````````````````````
- When connecting, raise an ``InterfaceError('network error')`` rather than letting the
  underlying ``struct.error`` propagate.
- Make the licence text the same as that used by the OSI. Previously the licence wording
  differed slightly from the BSD 3-Clause licence at
  https://opensource.org/licenses/BSD-3-Clause, which meant that automated tools didn't
  pick it up as being Open Source. The changes are believed not to alter the meaning of
  the licence at all.
Version 1.25.0, 2022-04-17
``````````````````````````
- Fix more cases where a ``ResourceWarning`` would be raised because of a socket that
  had been left open.
- We now have a single ``InterfaceError`` with the message 'network error' for all
network errors, with the underlying exception held in the ``cause`` of the exception.
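The pattern described above can be sketched in isolation: all network failures surface
as one exception type, with the original error available via exception chaining. The
names here are illustrative, not pg8000's actual ones:

```python
# Sketch of wrapping low-level network errors in a single exception type,
# preserving the original error as __cause__ via 'raise ... from'.
class NetworkError(Exception):
    pass

def send(write, data):
    try:
        write(data)
    except OSError as e:
        raise NetworkError("network error") from e

def failing_write(data):
    # Stand-in for a socket write that fails.
    raise OSError("connection reset")

try:
    send(failing_write, b"hello")
except NetworkError as e:
    print(type(e.__cause__).__name__)  # OSError
```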
Version 1.24.2, 2022-04-15
``````````````````````````
- To prevent a ``ResourceWarning``, close the socket if a connection can't be created.
Version 1.24.1, 2022-03-02
``````````````````````````
- Return PostgreSQL +/-infinity dates as ``str``. Previously such values caused an
  error when returned; now they are returned as strings.
Version 1.24.0, 2022-02-06
``````````````````````````
- Add SQL escape functions ``identifier()`` and ``literal()`` to the native API, for
  use when a query can't be parameterised and the SQL string has to be created using
  untrusted values.
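A minimal sketch of the kind of escaping these functions perform (an illustration, not
pg8000's implementation; the hypothetical names ``quote_identifier`` and
``quote_literal`` are used to avoid clashing with the real API):

```python
# Sketch of SQL escaping in the style of identifier() and literal().
def quote_identifier(name):
    # Double any embedded double-quotes, then wrap in double-quotes.
    return '"' + name.replace('"', '""') + '"'

def quote_literal(value):
    if value is None:
        return "NULL"
    # Double any embedded single-quotes, then wrap in single-quotes.
    return "'" + str(value).replace("'", "''") + "'"

print(quote_identifier('my "table"'))  # "my ""table"""
print(quote_literal("O'Reilly"))       # 'O''Reilly'
```

Note that the real ``identifier()`` also rejects the zero code point and only quotes
when necessary; this sketch always quotes.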
Version 1.23.0, 2021-11-13
``````````````````````````
- If a query has no parameters, then the query will no longer be parsed. Although there
  are performance benefits to doing this, the main reason is to avoid query rewriting,
  which can introduce errors.
Version 1.22.1, 2021-11-10
``````````````````````````
- Fix bug in PGInterval type where ``str()`` failed for a millennia value.
Version 1.22.0, 2021-10-13
``````````````````````````
- Rather than specifying the oids in the ``Parse`` step of the Postgres protocol, pg8000
  now omits them, so Postgres will use the oids it determines from the query. This
  makes the pg8000 code simpler and should also make the nuances of type matching more
  straightforward.
pg8000-1.30.3/pg8000/__init__.py

from pg8000.legacy import (
BIGINTEGER,
BINARY,
BOOLEAN,
BOOLEAN_ARRAY,
BYTES,
Binary,
CHAR,
CHAR_ARRAY,
Connection,
Cursor,
DATE,
DATETIME,
DECIMAL,
DECIMAL_ARRAY,
DataError,
DatabaseError,
Date,
DateFromTicks,
Error,
FLOAT,
FLOAT_ARRAY,
INET,
INT2VECTOR,
INTEGER,
INTEGER_ARRAY,
INTERVAL,
IntegrityError,
InterfaceError,
InternalError,
JSON,
JSONB,
MACADDR,
NAME,
NAME_ARRAY,
NULLTYPE,
NUMBER,
NotSupportedError,
OID,
OperationalError,
PGInterval,
ProgrammingError,
ROWID,
Range,
STRING,
TEXT,
TEXT_ARRAY,
TIME,
TIMEDELTA,
TIMESTAMP,
TIMESTAMPTZ,
Time,
TimeFromTicks,
Timestamp,
TimestampFromTicks,
UNKNOWN,
UUID_TYPE,
VARCHAR,
VARCHAR_ARRAY,
Warning,
XID,
__version__,
pginterval_in,
pginterval_out,
timedelta_in,
)
# Copyright (c) 2007-2009, Mathieu Fenniak
# Copyright (c) The Contributors
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
def connect(
user,
host="localhost",
database=None,
port=5432,
password=None,
source_address=None,
unix_sock=None,
ssl_context=None,
timeout=None,
tcp_keepalive=True,
application_name=None,
replication=None,
):
return Connection(
user,
host=host,
database=database,
port=port,
password=password,
source_address=source_address,
unix_sock=unix_sock,
ssl_context=ssl_context,
timeout=timeout,
tcp_keepalive=tcp_keepalive,
application_name=application_name,
replication=replication,
)
apilevel = "2.0"
"""The DBAPI level supported, currently "2.0".
This property is part of the `DBAPI 2.0 specification
`_.
"""
threadsafety = 1
"""Integer constant stating the level of thread safety the DBAPI interface
supports. This DBAPI module supports sharing of the module only. Connections
and cursors may not be shared between threads. This gives pg8000 a threadsafety
value of 1.
This property is part of the `DBAPI 2.0 specification
`_.
"""
paramstyle = "format"
__all__ = [
"BIGINTEGER",
"BINARY",
"BOOLEAN",
"BOOLEAN_ARRAY",
"BYTES",
"Binary",
"CHAR",
"CHAR_ARRAY",
"Connection",
"Cursor",
"DATE",
"DATETIME",
"DECIMAL",
"DECIMAL_ARRAY",
"DataError",
"DatabaseError",
"Date",
"DateFromTicks",
"Error",
"FLOAT",
"FLOAT_ARRAY",
"INET",
"INT2VECTOR",
"INTEGER",
"INTEGER_ARRAY",
"INTERVAL",
"IntegrityError",
"InterfaceError",
"InternalError",
"JSON",
"JSONB",
"MACADDR",
"NAME",
"NAME_ARRAY",
"NULLTYPE",
"NUMBER",
"NotSupportedError",
"OID",
"OperationalError",
"PGInterval",
"ProgrammingError",
"ROWID",
"Range",
"STRING",
"TEXT",
"TEXT_ARRAY",
"TIME",
"TIMEDELTA",
"TIMESTAMP",
"TIMESTAMPTZ",
"Time",
"TimeFromTicks",
"Timestamp",
"TimestampFromTicks",
"UNKNOWN",
"UUID_TYPE",
"VARCHAR",
"VARCHAR_ARRAY",
"Warning",
"XID",
"__version__",
"connect",
"pginterval_in",
"pginterval_out",
"timedelta_in",
]
pg8000-1.30.3/pg8000/converters.py

from datetime import (
date as Date,
datetime as Datetime,
time as Time,
timedelta as Timedelta,
timezone as Timezone,
)
from decimal import Decimal
from enum import Enum
from ipaddress import (
IPv4Address,
IPv4Network,
IPv6Address,
IPv6Network,
ip_address,
ip_network,
)
from json import dumps, loads
from uuid import UUID
from dateutil.parser import ParserError, parse
from pg8000.exceptions import InterfaceError
from pg8000.types import PGInterval, Range
ANY_ARRAY = 2277
BIGINT = 20
BIGINT_ARRAY = 1016
BOOLEAN = 16
BOOLEAN_ARRAY = 1000
BYTES = 17
BYTES_ARRAY = 1001
CHAR = 1042
CHAR_ARRAY = 1014
CIDR = 650
CIDR_ARRAY = 651
CSTRING = 2275
CSTRING_ARRAY = 1263
DATE = 1082
DATE_ARRAY = 1182
DATEMULTIRANGE = 4535
DATERANGE = 3912
FLOAT = 701
FLOAT_ARRAY = 1022
INET = 869
INET_ARRAY = 1041
INT2VECTOR = 22
INT4MULTIRANGE = 4451
INT4RANGE = 3904
INT8MULTIRANGE = 4536
INT8RANGE = 3926
INTEGER = 23
INTEGER_ARRAY = 1007
INTERVAL = 1186
INTERVAL_ARRAY = 1187
JSON = 114
JSON_ARRAY = 199
JSONB = 3802
JSONB_ARRAY = 3807
MACADDR = 829
MONEY = 790
MONEY_ARRAY = 791
NAME = 19
NAME_ARRAY = 1003
NUMERIC = 1700
NUMERIC_ARRAY = 1231
NUMRANGE = 3906
NUMMULTIRANGE = 4532
NULLTYPE = -1
OID = 26
POINT = 600
REAL = 700
REAL_ARRAY = 1021
RECORD = 2249
SMALLINT = 21
SMALLINT_ARRAY = 1005
SMALLINT_VECTOR = 22
STRING = 1043
TEXT = 25
TEXT_ARRAY = 1009
TIME = 1083
TIME_ARRAY = 1183
TIMESTAMP = 1114
TIMESTAMP_ARRAY = 1115
TIMESTAMPTZ = 1184
TIMESTAMPTZ_ARRAY = 1185
TSMULTIRANGE = 4533
TSRANGE = 3908
TSTZMULTIRANGE = 4534
TSTZRANGE = 3910
UNKNOWN = 705
UUID_TYPE = 2950
UUID_ARRAY = 2951
VARCHAR = 1043
VARCHAR_ARRAY = 1015
XID = 28
MIN_INT2, MAX_INT2 = -(2**15), 2**15
MIN_INT4, MAX_INT4 = -(2**31), 2**31
MIN_INT8, MAX_INT8 = -(2**63), 2**63
def bool_in(data):
return data == "t"
def bool_out(v):
return "true" if v else "false"
def bytes_in(data):
return bytes.fromhex(data[2:])
def bytes_out(v):
return "\\x" + v.hex()
def cidr_out(v):
return str(v)
def cidr_in(data):
return ip_network(data, False) if "/" in data else ip_address(data)
def date_in(data):
if data in ("infinity", "-infinity"):
return data
else:
try:
return Datetime.strptime(data, "%Y-%m-%d").date()
except ValueError:
# pg date can overflow Python Datetime
return data
def date_out(v):
return v.isoformat()
def datetime_out(v):
if v.tzinfo is None:
return v.isoformat()
else:
return v.astimezone(Timezone.utc).isoformat()
def enum_out(v):
return str(v.value)
def float_out(v):
return str(v)
def inet_in(data):
return ip_network(data, False) if "/" in data else ip_address(data)
def inet_out(v):
return str(v)
def int_in(data):
return int(data)
def int_out(v):
return str(v)
def interval_in(data):
pg_interval = PGInterval.from_str(data)
try:
return pg_interval.to_timedelta()
except ValueError:
return pg_interval
def interval_out(v):
return f"{v.days} days {v.seconds} seconds {v.microseconds} microseconds"
def json_in(data):
return loads(data)
def json_out(v):
return dumps(v)
def null_out(v):
return None
def numeric_in(data):
return Decimal(data)
def numeric_out(d):
return str(d)
def point_in(data):
return tuple(map(float, data[1:-1].split(",")))
def pg_interval_in(data):
return PGInterval.from_str(data)
def pg_interval_out(v):
return str(v)
def range_out(v):
if v.is_empty:
return "empty"
else:
le = v.lower
val_lower = "" if le is None else make_param(PY_TYPES, le)
ue = v.upper
val_upper = "" if ue is None else make_param(PY_TYPES, ue)
return f"{v.bounds[0]}{val_lower},{val_upper}{v.bounds[1]}"
def string_in(data):
return data
def string_out(v):
return v
def time_in(data):
pattern = "%H:%M:%S.%f" if "." in data else "%H:%M:%S"
return Datetime.strptime(data, pattern).time()
def time_out(v):
return v.isoformat()
def timestamp_in(data):
if data in ("infinity", "-infinity"):
return data
try:
pattern = "%Y-%m-%d %H:%M:%S.%f" if "." in data else "%Y-%m-%d %H:%M:%S"
return Datetime.strptime(data, pattern)
except ValueError:
try:
return parse(data)
except ParserError:
# pg timestamp can overflow Python Datetime
return data
def timestamptz_in(data):
if data in ("infinity", "-infinity"):
return data
try:
patt = "%Y-%m-%d %H:%M:%S.%f%z" if "." in data else "%Y-%m-%d %H:%M:%S%z"
return Datetime.strptime(f"{data}00", patt)
except ValueError:
try:
return parse(data)
except ParserError:
# pg timestamptz can overflow Python Datetime
return data
def unknown_out(v):
return str(v)
def vector_in(data):
return [int(v) for v in data.split()]
def uuid_out(v):
return str(v)
def uuid_in(data):
return UUID(data)
class ParserState(Enum):
InString = 1
InEscape = 2
InValue = 3
Out = 4
def _parse_array(data, adapter):
state = ParserState.Out
stack = [[]]
val = []
for c in data:
if state == ParserState.InValue:
if c in ("}", ","):
value = "".join(val)
stack[-1].append(None if value == "NULL" else adapter(value))
state = ParserState.Out
else:
val.append(c)
if state == ParserState.Out:
if c == "{":
a = []
stack[-1].append(a)
stack.append(a)
elif c == "}":
stack.pop()
elif c == ",":
pass
elif c == '"':
val = []
state = ParserState.InString
else:
val = [c]
state = ParserState.InValue
elif state == ParserState.InString:
if c == '"':
stack[-1].append(adapter("".join(val)))
state = ParserState.Out
elif c == "\\":
state = ParserState.InEscape
else:
val.append(c)
elif state == ParserState.InEscape:
val.append(c)
state = ParserState.InString
return stack[0][0]
def _array_in(adapter):
def f(data):
return _parse_array(data, adapter)
return f
bool_array_in = _array_in(bool_in)
bytes_array_in = _array_in(bytes_in)
cidr_array_in = _array_in(cidr_in)
date_array_in = _array_in(date_in)
inet_array_in = _array_in(inet_in)
int_array_in = _array_in(int)
interval_array_in = _array_in(interval_in)
json_array_in = _array_in(json_in)
float_array_in = _array_in(float)
numeric_array_in = _array_in(numeric_in)
string_array_in = _array_in(string_in)
time_array_in = _array_in(time_in)
timestamp_array_in = _array_in(timestamp_in)
timestamptz_array_in = _array_in(timestamptz_in)
uuid_array_in = _array_in(uuid_in)
def array_string_escape(v):
cs = []
for c in v:
if c == "\\":
cs.append("\\")
elif c == '"':
cs.append("\\")
cs.append(c)
val = "".join(cs)
if (
len(val) == 0
or val == "NULL"
or any(c.isspace() for c in val)
or any(c in val for c in ("{", "}", ",", "\\"))
):
val = f'"{val}"'
return val
def array_out(ar):
result = []
for v in ar:
if isinstance(v, list):
val = array_out(v)
elif isinstance(v, tuple):
val = f'"{composite_out(v)}"'
elif v is None:
val = "NULL"
elif isinstance(v, dict):
val = array_string_escape(json_out(v))
elif isinstance(v, (bytes, bytearray)):
val = f'"\\{bytes_out(v)}"'
elif isinstance(v, str):
val = array_string_escape(v)
else:
val = make_param(PY_TYPES, v)
result.append(val)
return f'{{{",".join(result)}}}'
def composite_out(ar):
result = []
for v in ar:
if isinstance(v, list):
val = array_out(v)
elif isinstance(v, tuple):
val = composite_out(v)
elif v is None:
val = ""
elif isinstance(v, dict):
val = array_string_escape(json_out(v))
elif isinstance(v, (bytes, bytearray)):
val = f'"\\{bytes_out(v)}"'
elif isinstance(v, str):
val = array_string_escape(v)
else:
val = make_param(PY_TYPES, v)
result.append(val)
return f'({",".join(result)})'
def _range_in(elem_func):
def range_in(data):
if data == "empty":
return Range(is_empty=True)
else:
le, ue = [None if v == "" else elem_func(v) for v in data[1:-1].split(",")]
return Range(le, ue, bounds=f"{data[0]}{data[-1]}")
return range_in
daterange_in = _range_in(date_in)
int4range_in = _range_in(int)
int8range_in = _range_in(int)
numrange_in = _range_in(Decimal)
def ts_in(data):
return timestamp_in(data[1:-1])
def tstz_in(data):
return timestamptz_in(data[1:-1])
tsrange_in = _range_in(ts_in)
tstzrange_in = _range_in(tstz_in)
def _multirange_in(adapter):
def f(data):
in_range = False
result = []
val = []
for c in data:
if in_range:
val.append(c)
if c in "])":
value = "".join(val)
val.clear()
result.append(adapter(value))
in_range = False
elif c in "[(":
val.append(c)
in_range = True
return result
return f
datemultirange_in = _multirange_in(daterange_in)
int4multirange_in = _multirange_in(int4range_in)
int8multirange_in = _multirange_in(int8range_in)
nummultirange_in = _multirange_in(numrange_in)
tsmultirange_in = _multirange_in(tsrange_in)
tstzmultirange_in = _multirange_in(tstzrange_in)
def record_in(data):
state = ParserState.Out
results = []
val = []
for c in data:
if state == ParserState.InValue:
if c in (")", ","):
value = "".join(val)
val.clear()
results.append(None if value == "" else value)
state = ParserState.Out
else:
val.append(c)
if state == ParserState.Out:
if c in "(),":
pass
elif c == '"':
state = ParserState.InString
else:
val.append(c)
state = ParserState.InValue
elif state == ParserState.InString:
if c == '"':
results.append("".join(val))
val.clear()
state = ParserState.Out
elif c == "\\":
state = ParserState.InEscape
else:
val.append(c)
elif state == ParserState.InEscape:
val.append(c)
state = ParserState.InString
return tuple(results)
PY_PG = {
Date: DATE,
Decimal: NUMERIC,
IPv4Address: INET,
IPv6Address: INET,
IPv4Network: INET,
IPv6Network: INET,
PGInterval: INTERVAL,
Time: TIME,
Timedelta: INTERVAL,
UUID: UUID_TYPE,
bool: BOOLEAN,
bytearray: BYTES,
dict: JSONB,
float: FLOAT,
type(None): NULLTYPE,
bytes: BYTES,
str: TEXT,
}
PY_TYPES = {
Date: date_out, # date
Datetime: datetime_out,
Decimal: numeric_out, # numeric
Enum: enum_out, # enum
IPv4Address: inet_out, # inet
IPv6Address: inet_out, # inet
IPv4Network: inet_out, # inet
IPv6Network: inet_out, # inet
PGInterval: interval_out, # interval
Range: range_out, # range types
Time: time_out, # time
Timedelta: interval_out, # interval
UUID: uuid_out, # uuid
bool: bool_out, # bool
bytearray: bytes_out, # bytea
dict: json_out, # jsonb
float: float_out, # float8
type(None): null_out, # null
bytes: bytes_out, # bytea
str: string_out, # unknown
int: int_out,
list: array_out,
tuple: composite_out,
}
PG_TYPES = {
BIGINT: int, # int8
BIGINT_ARRAY: int_array_in, # int8[]
BOOLEAN: bool_in, # bool
BOOLEAN_ARRAY: bool_array_in, # bool[]
BYTES: bytes_in, # bytea
BYTES_ARRAY: bytes_array_in, # bytea[]
CHAR: string_in, # char
CHAR_ARRAY: string_array_in, # char[]
CIDR_ARRAY: cidr_array_in, # cidr[]
CSTRING: string_in, # cstring
CSTRING_ARRAY: string_array_in, # cstring[]
DATE: date_in, # date
DATE_ARRAY: date_array_in, # date[]
DATEMULTIRANGE: datemultirange_in, # datemultirange
DATERANGE: daterange_in, # daterange
FLOAT: float, # float8
FLOAT_ARRAY: float_array_in, # float8[]
INET: inet_in, # inet
INET_ARRAY: inet_array_in, # inet[]
INT4MULTIRANGE: int4multirange_in, # int4multirange
INT4RANGE: int4range_in, # int4range
INT8MULTIRANGE: int8multirange_in, # int8multirange
INT8RANGE: int8range_in, # int8range
INTEGER: int, # int4
INTEGER_ARRAY: int_array_in, # int4[]
JSON: json_in, # json
JSON_ARRAY: json_array_in, # json[]
JSONB: json_in, # jsonb
JSONB_ARRAY: json_array_in, # jsonb[]
MACADDR: string_in, # MACADDR type
MONEY: string_in, # money
MONEY_ARRAY: string_array_in, # money[]
NAME: string_in, # name
NAME_ARRAY: string_array_in, # name[]
NUMERIC: numeric_in, # numeric
NUMERIC_ARRAY: numeric_array_in, # numeric[]
NUMRANGE: numrange_in, # numrange
NUMMULTIRANGE: nummultirange_in, # nummultirange
OID: int, # oid
POINT: point_in, # point
INTERVAL: interval_in, # interval
INTERVAL_ARRAY: interval_array_in, # interval[]
REAL: float, # float4
REAL_ARRAY: float_array_in, # float4[]
RECORD: record_in, # record
SMALLINT: int, # int2
SMALLINT_ARRAY: int_array_in, # int2[]
SMALLINT_VECTOR: vector_in, # int2vector
TEXT: string_in, # text
TEXT_ARRAY: string_array_in, # text[]
TIME: time_in, # time
TIME_ARRAY: time_array_in, # time[]
TIMESTAMP: timestamp_in, # timestamp
TIMESTAMP_ARRAY: timestamp_array_in, # timestamp
TIMESTAMPTZ: timestamptz_in, # timestamptz
TIMESTAMPTZ_ARRAY: timestamptz_array_in, # timestamptz
TSMULTIRANGE: tsmultirange_in, # tsmultirange
TSRANGE: tsrange_in, # tsrange
TSTZMULTIRANGE: tstzmultirange_in, # tstzmultirange
TSTZRANGE: tstzrange_in, # tstzrange
UNKNOWN: string_in, # unknown
UUID_ARRAY: uuid_array_in, # uuid[]
UUID_TYPE: uuid_in, # uuid
VARCHAR: string_in, # varchar
VARCHAR_ARRAY: string_array_in, # varchar[]
XID: int, # xid
}
# PostgreSQL encodings:
# https://www.postgresql.org/docs/current/multibyte.html
#
# Python encodings:
# https://docs.python.org/3/library/codecs.html
#
# Commented out encodings don't require a name change between PostgreSQL and
# Python. If the py side is None, then the encoding isn't supported.
PG_PY_ENCODINGS = {
# Not supported:
"mule_internal": None,
"euc_tw": None,
# Name fine as-is:
# "euc_jp",
# "euc_jis_2004",
# "euc_kr",
# "gb18030",
# "gbk",
# "johab",
# "sjis",
# "shift_jis_2004",
# "uhc",
# "utf8",
# Different name:
"euc_cn": "gb2312",
"iso_8859_5": "iso8859_5",
"iso_8859_6": "iso8859_6",
"iso_8859_7": "iso8859_7",
"iso_8859_8": "iso8859_8",
"koi8": "koi8_r",
"latin1": "iso8859-1",
"latin2": "iso8859_2",
"latin3": "iso8859_3",
"latin4": "iso8859_4",
"latin5": "iso8859_9",
"latin6": "iso8859_10",
"latin7": "iso8859_13",
"latin8": "iso8859_14",
"latin9": "iso8859_15",
"sql_ascii": "ascii",
"win866": "cp866",
"win874": "cp874",
"win1250": "cp1250",
"win1251": "cp1251",
"win1252": "cp1252",
"win1253": "cp1253",
"win1254": "cp1254",
"win1255": "cp1255",
"win1256": "cp1256",
"win1257": "cp1257",
"win1258": "cp1258",
"unicode": "utf-8", # Needed for Amazon Redshift
}
def make_param(py_types, value):
try:
func = py_types[type(value)]
except KeyError:
func = str
for k, v in py_types.items():
try:
if isinstance(value, k):
func = v
break
except TypeError:
pass
return func(value)
def make_params(py_types, values):
return tuple([make_param(py_types, v) for v in values])
def identifier(sql):
if not isinstance(sql, str):
raise InterfaceError("identifier must be a str")
if len(sql) == 0:
raise InterfaceError("identifier must be > 0 characters in length")
quote = not sql[0].isalpha()
for c in sql[1:]:
if not (c.isalpha() or c.isdecimal() or c in "_$"):
if c == "\u0000":
raise InterfaceError(
"identifier cannot contain the code zero character"
)
quote = True
break
if quote:
sql = sql.replace('"', '""')
return f'"{sql}"'
else:
return sql
def literal(value):
if value is None:
return "NULL"
elif isinstance(value, bool):
return "TRUE" if value else "FALSE"
elif isinstance(value, (int, float, Decimal)):
return str(value)
elif isinstance(value, (bytes, bytearray)):
return f"X'{value.hex()}'"
elif isinstance(value, Datetime):
return f"'{datetime_out(value)}'"
elif isinstance(value, Date):
return f"'{date_out(value)}'"
elif isinstance(value, Time):
return f"'{time_out(value)}'"
elif isinstance(value, Timedelta):
return f"'{interval_out(value)}'"
elif isinstance(value, list):
return f"'{array_out(value)}'"
else:
val = str(value).replace("'", "''")
return f"'{val}'"
pg8000-1.30.3/pg8000/core.py

import codecs
import socket
from collections import defaultdict, deque
from hashlib import md5
from importlib.metadata import version
from io import IOBase, TextIOBase
from itertools import count
from struct import Struct
import scramp
from pg8000.converters import (
PG_PY_ENCODINGS,
PG_TYPES,
PY_TYPES,
make_params,
string_in,
)
from pg8000.exceptions import DatabaseError, Error, InterfaceError
ver = version("pg8000")
def pack_funcs(fmt):
struc = Struct(f"!{fmt}")
return struc.pack, struc.unpack_from
i_pack, i_unpack = pack_funcs("i")
h_pack, h_unpack = pack_funcs("h")
ii_pack, ii_unpack = pack_funcs("ii")
ihihih_pack, ihihih_unpack = pack_funcs("ihihih")
ci_pack, ci_unpack = pack_funcs("ci")
bh_pack, bh_unpack = pack_funcs("bh")
cccc_pack, cccc_unpack = pack_funcs("cccc")
# Copyright (c) 2007-2009, Mathieu Fenniak
# Copyright (c) The Contributors
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
__author__ = "Mathieu Fenniak"
NULL_BYTE = b"\x00"
# Message codes
NOTICE_RESPONSE = b"N"
AUTHENTICATION_REQUEST = b"R"
PARAMETER_STATUS = b"S"
BACKEND_KEY_DATA = b"K"
READY_FOR_QUERY = b"Z"
ROW_DESCRIPTION = b"T"
ERROR_RESPONSE = b"E"
DATA_ROW = b"D"
COMMAND_COMPLETE = b"C"
PARSE_COMPLETE = b"1"
BIND_COMPLETE = b"2"
CLOSE_COMPLETE = b"3"
PORTAL_SUSPENDED = b"s"
NO_DATA = b"n"
PARAMETER_DESCRIPTION = b"t"
NOTIFICATION_RESPONSE = b"A"
COPY_DONE = b"c"
COPY_DATA = b"d"
COPY_IN_RESPONSE = b"G"
COPY_OUT_RESPONSE = b"H"
EMPTY_QUERY_RESPONSE = b"I"
BIND = b"B"
PARSE = b"P"
QUERY = b"Q"
EXECUTE = b"E"
FLUSH = b"H"
SYNC = b"S"
PASSWORD = b"p"
DESCRIBE = b"D"
TERMINATE = b"X"
CLOSE = b"C"
def _create_message(code, data=b""):
return code + i_pack(len(data) + 4) + data
FLUSH_MSG = _create_message(FLUSH)
SYNC_MSG = _create_message(SYNC)
TERMINATE_MSG = _create_message(TERMINATE)
COPY_DONE_MSG = _create_message(COPY_DONE)
EXECUTE_MSG = _create_message(EXECUTE, NULL_BYTE + i_pack(0))
# DESCRIBE constants
STATEMENT = b"S"
PORTAL = b"P"
# ErrorResponse codes
RESPONSE_SEVERITY_S = "S"  # localized severity, always present
RESPONSE_SEVERITY = "V"  # non-localized severity, always present
RESPONSE_CODE = "C" # always present
RESPONSE_MSG = "M" # always present
RESPONSE_DETAIL = "D"
RESPONSE_HINT = "H"
RESPONSE_POSITION = "P"
RESPONSE__POSITION = "p"
RESPONSE__QUERY = "q"
RESPONSE_WHERE = "W"
RESPONSE_FILE = "F"
RESPONSE_LINE = "L"
RESPONSE_ROUTINE = "R"
IDLE = b"I"
IN_TRANSACTION = b"T"
IN_FAILED_TRANSACTION = b"E"
def _flush(sock):
try:
sock.flush()
except OSError as e:
raise InterfaceError("network error") from e
def _read(sock, size):
got = 0
buff = []
try:
while got < size:
block = sock.read(size - got)
if block == b"":
raise InterfaceError("network error")
buff.append(block)
got += len(block)
except OSError as e:
raise InterfaceError("network error") from e
return b"".join(buff)
def _write(sock, d):
try:
sock.write(d)
except OSError as e:
raise InterfaceError("network error") from e
def _make_socket(
unix_sock, sock, host, port, timeout, source_address, tcp_keepalive, ssl_context
):
if unix_sock is not None:
if sock is not None:
raise InterfaceError("If unix_sock is provided, sock must be None")
try:
if not hasattr(socket, "AF_UNIX"):
raise InterfaceError(
"attempt to connect to unix socket on unsupported platform"
)
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.settimeout(timeout)
sock.connect(unix_sock)
if tcp_keepalive:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
except socket.error as e:
if sock is not None:
sock.close()
raise InterfaceError("communication error") from e
elif sock is not None:
pass
elif host is not None:
try:
sock = socket.create_connection((host, port), timeout, source_address)
except socket.error as e:
raise InterfaceError(
f"Can't create a connection to host {host} and port {port} "
f"(timeout is {timeout} and source_address is {source_address})."
) from e
if tcp_keepalive:
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
else:
raise InterfaceError("one of host, sock or unix_sock must be provided")
channel_binding = None
if ssl_context is not None:
try:
import ssl
if ssl_context is True:
ssl_context = ssl.create_default_context()
request_ssl = getattr(ssl_context, "request_ssl", True)
if request_ssl:
# Int32(8) - Message length, including self.
# Int32(80877103) - The SSL request code.
sock.sendall(ii_pack(8, 80877103))
resp = sock.recv(1)
if resp != b"S":
raise InterfaceError("Server refuses SSL")
sock = ssl_context.wrap_socket(sock, server_hostname=host)
if request_ssl:
channel_binding = scramp.make_channel_binding(
"tls-server-end-point", sock
)
except ImportError:
raise InterfaceError(
"SSL required but ssl module not available in this python "
"installation."
)
return channel_binding, sock
class CoreConnection:
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
def __init__(
self,
user,
host="localhost",
database=None,
port=5432,
password=None,
source_address=None,
unix_sock=None,
ssl_context=None,
timeout=None,
tcp_keepalive=True,
application_name=None,
replication=None,
sock=None,
):
self._client_encoding = "utf8"
self._commands_with_count = (
b"INSERT",
b"DELETE",
b"UPDATE",
b"MOVE",
b"FETCH",
b"COPY",
b"SELECT",
)
self.notifications = deque(maxlen=100)
self.notices = deque(maxlen=100)
self.parameter_statuses = deque(maxlen=100)
if user is None:
raise InterfaceError("The 'user' connection parameter cannot be None")
init_params = {
"user": user,
"database": database,
"application_name": application_name,
"replication": replication,
}
for k, v in tuple(init_params.items()):
if isinstance(v, str):
init_params[k] = v.encode("utf8")
elif v is None:
del init_params[k]
elif not isinstance(v, (bytes, bytearray)):
raise InterfaceError(f"The parameter {k} can't be of type {type(v)}.")
self.user = init_params["user"]
if isinstance(password, str):
self.password = password.encode("utf8")
else:
self.password = password
self._xid = None
self._statement_nums = set()
self._caches = {}
self.channel_binding, self._usock = _make_socket(
unix_sock,
sock,
host,
port,
timeout,
source_address,
tcp_keepalive,
ssl_context,
)
self._sock = self._usock.makefile(mode="rwb")
self._backend_key_data = None
self.pg_types = defaultdict(lambda: string_in, PG_TYPES)
self.py_types = dict(PY_TYPES)
self.message_types = {
NOTICE_RESPONSE: self.handle_NOTICE_RESPONSE,
AUTHENTICATION_REQUEST: self.handle_AUTHENTICATION_REQUEST,
PARAMETER_STATUS: self.handle_PARAMETER_STATUS,
BACKEND_KEY_DATA: self.handle_BACKEND_KEY_DATA,
READY_FOR_QUERY: self.handle_READY_FOR_QUERY,
ROW_DESCRIPTION: self.handle_ROW_DESCRIPTION,
ERROR_RESPONSE: self.handle_ERROR_RESPONSE,
EMPTY_QUERY_RESPONSE: self.handle_EMPTY_QUERY_RESPONSE,
DATA_ROW: self.handle_DATA_ROW,
COMMAND_COMPLETE: self.handle_COMMAND_COMPLETE,
PARSE_COMPLETE: self.handle_PARSE_COMPLETE,
BIND_COMPLETE: self.handle_BIND_COMPLETE,
CLOSE_COMPLETE: self.handle_CLOSE_COMPLETE,
PORTAL_SUSPENDED: self.handle_PORTAL_SUSPENDED,
NO_DATA: self.handle_NO_DATA,
PARAMETER_DESCRIPTION: self.handle_PARAMETER_DESCRIPTION,
NOTIFICATION_RESPONSE: self.handle_NOTIFICATION_RESPONSE,
COPY_DONE: self.handle_COPY_DONE,
COPY_DATA: self.handle_COPY_DATA,
COPY_IN_RESPONSE: self.handle_COPY_IN_RESPONSE,
COPY_OUT_RESPONSE: self.handle_COPY_OUT_RESPONSE,
}
# Int32 - Message length, including self.
# Int32(196608) - Protocol version number. Version 3.0.
# Any number of key/value pairs, terminated by a zero byte:
# String - A parameter name (user, database, or options)
# String - Parameter value
protocol = 196608
val = bytearray(i_pack(protocol))
for k, v in init_params.items():
val.extend(k.encode("ascii") + NULL_BYTE + v + NULL_BYTE)
val.append(0)
_write(self._sock, i_pack(len(val) + 4))
_write(self._sock, val)
_flush(self._sock)
try:
code = None
context = Context(None)
while code not in (READY_FOR_QUERY, ERROR_RESPONSE):
code, data_len = ci_unpack(_read(self._sock, 5))
self.message_types[code](_read(self._sock, data_len - 4), context)
if context.error is not None:
raise context.error
except Error as e:
self.close()
raise e
self._transaction_status = None
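The startup exchange framed above (Int32 length including itself, Int32 protocol version 196608, NUL-terminated key/value pairs, trailing zero byte) can be sketched standalone. A minimal illustration, assuming `struct.pack("!i", ...)` matches pg8000's `i_pack`; the helper name is hypothetical:

```python
import struct

def build_startup_message(params):
    """Frame a PostgreSQL StartupMessage: Int32 length (including itself),
    Int32 protocol version 3.0 (196608), then NUL-terminated key/value
    pairs, ended by a single zero byte."""
    body = bytearray(struct.pack("!i", 196608))
    for k, v in params.items():
        body += k.encode("ascii") + b"\x00" + v.encode("utf8") + b"\x00"
    body.append(0)
    return struct.pack("!i", len(body) + 4) + bytes(body)

msg = build_startup_message({"user": "alice", "database": "db1"})
# The length prefix counts itself, so it equals the full message size.
assert struct.unpack("!i", msg[:4])[0] == len(msg)
```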
def register_out_adapter(self, typ, out_func):
self.py_types[typ] = out_func
def register_in_adapter(self, oid, in_func):
self.pg_types[oid] = in_func
def handle_ERROR_RESPONSE(self, data, context):
msg = {
s[:1].decode("ascii"): s[1:].decode(self._client_encoding, errors="replace")
for s in data.split(NULL_BYTE)
if s != b""
}
context.error = DatabaseError(msg)
def handle_EMPTY_QUERY_RESPONSE(self, data, context):
pass
def handle_CLOSE_COMPLETE(self, data, context):
pass
def handle_PARSE_COMPLETE(self, data, context):
# Byte1('1') - Identifier.
# Int32(4) - Message length, including self.
pass
def handle_BIND_COMPLETE(self, data, context):
pass
def handle_PORTAL_SUSPENDED(self, data, context):
pass
def handle_PARAMETER_DESCRIPTION(self, data, context):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
# count = h_unpack(data)[0]
# context.parameter_oids = unpack_from("!" + "i" * count, data, 2)
def handle_COPY_DONE(self, data, context):
pass
def handle_COPY_OUT_RESPONSE(self, data, context):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
is_binary, num_cols = bh_unpack(data)
# column_formats = unpack_from('!' + 'h' * num_cols, data, 3)
if context.stream is None:
raise InterfaceError(
"An output stream is required for the COPY OUT response."
)
elif isinstance(context.stream, TextIOBase):
if is_binary:
raise InterfaceError(
"The COPY OUT stream is binary, but the stream parameter is text."
)
else:
decode = codecs.getdecoder(self._client_encoding)
def w(data):
context.stream.write(decode(data)[0])
context.stream_write = w
else:
context.stream_write = context.stream.write
def handle_COPY_DATA(self, data, context):
context.stream_write(data)
def handle_COPY_IN_RESPONSE(self, data, context):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
is_binary, num_cols = bh_unpack(data)
# column_formats = unpack_from('!' + 'h' * num_cols, data, 3)
if context.stream is None:
raise InterfaceError(
"The 'stream' parameter is required for the COPY IN response. The "
"'stream' parameter can be an I/O stream or an iterable."
)
if isinstance(context.stream, IOBase):
if isinstance(context.stream, TextIOBase):
if is_binary:
raise InterfaceError(
"The COPY IN stream is binary, but the stream parameter is a "
"text stream."
)
else:
def ri(bffr):
bffr.clear()
bffr.extend(
context.stream.read(4096).encode(self._client_encoding)
)
return len(bffr)
readinto = ri
else:
readinto = context.stream.readinto
bffr = bytearray(8192)
while True:
bytes_read = readinto(bffr)
if bytes_read == 0:
break
_write(self._sock, COPY_DATA)
_write(self._sock, i_pack(bytes_read + 4))
_write(self._sock, bffr[:bytes_read])
_flush(self._sock)
else:
for k in context.stream:
if isinstance(k, str):
if is_binary:
raise InterfaceError(
"The COPY IN stream is binary, but the stream parameter "
"is an iterable with str type items."
)
b = k.encode(self._client_encoding)
else:
b = k
self._send_message(COPY_DATA, b)
_flush(self._sock)
# Send CopyDone
_write(self._sock, COPY_DONE_MSG)
_write(self._sock, SYNC_MSG)
_flush(self._sock)
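Each chunk sent by `handle_COPY_IN_RESPONSE` is framed as a CopyData message: the type byte (`'d'` in the protocol), an Int32 length that includes itself, then the payload. A standalone sketch of that framing; the function name is hypothetical:

```python
import struct

def frame_copy_data(chunk: bytes) -> bytes:
    """Wrap a payload chunk in a CopyData ('d') message: type byte,
    Int32 length covering the length field plus the payload, payload."""
    return b"d" + struct.pack("!i", len(chunk) + 4) + chunk

frame = frame_copy_data(b"1\tone\n")
assert frame[0:1] == b"d"
# Length field counts itself plus the payload, not the type byte.
assert struct.unpack("!i", frame[1:5])[0] == len(frame) - 1
```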
def handle_NOTIFICATION_RESPONSE(self, data, context):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
backend_pid = i_unpack(data)[0]
idx = 4
null_idx = data.find(NULL_BYTE, idx)
channel = data[idx:null_idx].decode("ascii")
payload = data[null_idx + 1 : -1].decode("ascii")
self.notifications.append((backend_pid, channel, payload))
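The NotificationResponse parsing above can be exercised against a hand-built body: Int32 backend PID, then NUL-terminated channel and payload strings. A self-contained sketch of the same walk; the function name is hypothetical:

```python
import struct

def parse_notification(data: bytes):
    """Split a NotificationResponse body into (backend_pid, channel, payload)."""
    backend_pid = struct.unpack("!i", data[:4])[0]
    null_idx = data.find(b"\x00", 4)
    channel = data[4:null_idx].decode("ascii")
    # Strip the payload's trailing NUL terminator with the -1 slice.
    payload = data[null_idx + 1 : -1].decode("ascii")
    return backend_pid, channel, payload

data = struct.pack("!i", 4321) + b"jobs\x00payload-1\x00"
assert parse_notification(data) == (4321, "jobs", "payload-1")
```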
def close(self):
"""Closes the database connection.
This function is part of the `DBAPI 2.0 specification
`_.
"""
if self._usock is None:
raise InterfaceError("connection is closed")
try:
_write(self._sock, TERMINATE_MSG)
_flush(self._sock)
finally:
try:
self._usock.close()
except socket.error as e:
raise InterfaceError("network error") from e
finally:
self._sock = None
self._usock = None
def handle_AUTHENTICATION_REQUEST(self, data, context):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
auth_code = i_unpack(data)[0]
if auth_code == 0:
pass
elif auth_code == 3:
if self.password is None:
raise InterfaceError(
"server requesting password authentication, but no password was "
"provided"
)
self._send_message(PASSWORD, self.password + NULL_BYTE)
_flush(self._sock)
elif auth_code == 5:
salt = b"".join(cccc_unpack(data, 4))
if self.password is None:
raise InterfaceError(
"server requesting MD5 password authentication, but no password "
"was provided"
)
pwd = b"md5" + md5(
md5(self.password + self.user).hexdigest().encode("ascii") + salt
).hexdigest().encode("ascii")
self._send_message(PASSWORD, pwd + NULL_BYTE)
_flush(self._sock)
elif auth_code == 10:
# AuthenticationSASL
mechanisms = [m.decode("ascii") for m in data[4:-2].split(NULL_BYTE)]
self.auth = scramp.ScramClient(
mechanisms,
self.user.decode("utf8"),
self.password.decode("utf8"),
channel_binding=self.channel_binding,
)
init = self.auth.get_client_first().encode("utf8")
mech = self.auth.mechanism_name.encode("ascii") + NULL_BYTE
# SASLInitialResponse
self._send_message(PASSWORD, mech + i_pack(len(init)) + init)
_flush(self._sock)
elif auth_code == 11:
# AuthenticationSASLContinue
self.auth.set_server_first(data[4:].decode("utf8"))
# SASLResponse
msg = self.auth.get_client_final().encode("utf8")
self._send_message(PASSWORD, msg)
_flush(self._sock)
elif auth_code == 12:
# AuthenticationSASLFinal
self.auth.set_server_final(data[4:].decode("utf8"))
elif auth_code in (2, 4, 6, 7, 8, 9):
raise InterfaceError(
f"Authentication method {auth_code} not supported by pg8000."
)
else:
raise InterfaceError(
f"Authentication method {auth_code} not recognized by pg8000."
)
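The `auth_code == 5` branch above implements PostgreSQL's MD5 password scheme: `md5(md5(password + username) + salt)` hex-encoded, with an `md5` prefix. A standalone sketch; the function name is hypothetical:

```python
from hashlib import md5

def md5_password(user: bytes, password: bytes, salt: bytes) -> bytes:
    """Compute the PostgreSQL MD5 auth response:
    b'md5' + hex(md5(hex(md5(password + user)) + salt))."""
    inner = md5(password + user).hexdigest().encode("ascii")
    return b"md5" + md5(inner + salt).hexdigest().encode("ascii")

digest = md5_password(b"alice", b"secret", b"\x01\x02\x03\x04")
assert digest.startswith(b"md5") and len(digest) == 35  # 'md5' + 32 hex chars
```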
def handle_READY_FOR_QUERY(self, data, context):
self._transaction_status = data
def handle_BACKEND_KEY_DATA(self, data, context):
self._backend_key_data = data
def handle_ROW_DESCRIPTION(self, data, context):
count = h_unpack(data)[0]
idx = 2
columns = []
input_funcs = []
for i in range(count):
name = data[idx : data.find(NULL_BYTE, idx)]
idx += len(name) + 1
field = dict(
zip(
(
"table_oid",
"column_attrnum",
"type_oid",
"type_size",
"type_modifier",
"format",
),
ihihih_unpack(data, idx),
)
)
field["name"] = name.decode(self._client_encoding)
idx += 18
columns.append(field)
input_funcs.append(self.pg_types[field["type_oid"]])
context.columns = columns
context.input_funcs = input_funcs
if context.rows is None:
context.rows = []
def send_PARSE(self, statement_name_bin, statement, oids=()):
val = bytearray(statement_name_bin)
val.extend(statement.encode(self._client_encoding) + NULL_BYTE)
val.extend(h_pack(len(oids)))
for oid in oids:
val.extend(i_pack(0 if oid == -1 else oid))
self._send_message(PARSE, val)
_write(self._sock, FLUSH_MSG)
def send_DESCRIBE_STATEMENT(self, statement_name_bin):
self._send_message(DESCRIBE, STATEMENT + statement_name_bin)
_write(self._sock, FLUSH_MSG)
def send_QUERY(self, sql):
self._send_message(QUERY, sql.encode(self._client_encoding) + NULL_BYTE)
def execute_simple(self, statement):
context = Context(statement)
self.send_QUERY(statement)
_flush(self._sock)
self.handle_messages(context)
return context
def execute_unnamed(self, statement, vals=(), oids=(), stream=None):
context = Context(statement, stream=stream)
self.send_PARSE(NULL_BYTE, statement, oids)
_write(self._sock, SYNC_MSG)
_flush(self._sock)
self.handle_messages(context)
self.send_DESCRIBE_STATEMENT(NULL_BYTE)
_write(self._sock, SYNC_MSG)
try:
_flush(self._sock)
except AttributeError as e:
if self._sock is None:
raise InterfaceError("connection is closed")
else:
raise e
params = make_params(self.py_types, vals)
self.send_BIND(NULL_BYTE, params)
self.handle_messages(context)
self.send_EXECUTE()
_write(self._sock, SYNC_MSG)
_flush(self._sock)
self.handle_messages(context)
return context
def prepare_statement(self, statement, oids=None):
for i in count():
statement_name = f"pg8000_statement_{i}"
statement_name_bin = statement_name.encode("ascii") + NULL_BYTE
if statement_name_bin not in self._statement_nums:
self._statement_nums.add(statement_name_bin)
break
self.send_PARSE(statement_name_bin, statement, oids)
self.send_DESCRIBE_STATEMENT(statement_name_bin)
_write(self._sock, SYNC_MSG)
try:
_flush(self._sock)
except AttributeError as e:
if self._sock is None:
raise InterfaceError("connection is closed")
else:
raise e
context = Context(statement)
self.handle_messages(context)
return statement_name_bin, context.columns, context.input_funcs
def execute_named(
self, statement_name_bin, params, columns, input_funcs, statement
):
context = Context(columns=columns, input_funcs=input_funcs, statement=statement)
self.send_BIND(statement_name_bin, params)
self.send_EXECUTE()
_write(self._sock, SYNC_MSG)
_flush(self._sock)
self.handle_messages(context)
return context
def _send_message(self, code, data):
try:
_write(self._sock, code)
_write(self._sock, i_pack(len(data) + 4))
_write(self._sock, data)
except ValueError as e:
if str(e) == "write to closed file":
raise InterfaceError("connection is closed")
else:
raise e
except AttributeError:
raise InterfaceError("connection is closed")
def send_BIND(self, statement_name_bin, params):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
retval = bytearray(
NULL_BYTE + statement_name_bin + h_pack(0) + h_pack(len(params))
)
for value in params:
if value is None:
retval.extend(i_pack(-1))
else:
val = value.encode(self._client_encoding)
retval.extend(i_pack(len(val)))
retval.extend(val)
retval.extend(h_pack(0))
self._send_message(BIND, retval)
_write(self._sock, FLUSH_MSG)
def send_EXECUTE(self):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
_write(self._sock, EXECUTE_MSG)
_write(self._sock, FLUSH_MSG)
def handle_NO_DATA(self, msg, context):
pass
def handle_COMMAND_COMPLETE(self, data, context):
if self._transaction_status == IN_FAILED_TRANSACTION and context.error is None:
sql = context.statement.split()[0].rstrip(";").upper()
if sql != "ROLLBACK":
context.error = InterfaceError("in failed transaction block")
values = data[:-1].split(b" ")
try:
row_count = int(values[-1])
if context.row_count == -1:
context.row_count = row_count
else:
context.row_count += row_count
except ValueError:
pass
def handle_DATA_ROW(self, data, context):
idx = 2
row = []
for func in context.input_funcs:
vlen = i_unpack(data, idx)[0]
idx += 4
if vlen == -1:
v = None
else:
v = func(str(data[idx : idx + vlen], encoding=self._client_encoding))
idx += vlen
row.append(v)
context.rows.append(row)
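`handle_DATA_ROW` walks the message as an Int16 column count (skipped via `idx = 2`), then per column an Int32 value length (-1 meaning NULL) followed by that many bytes. A sketch that parses text-format values the same way; the function name is hypothetical:

```python
import struct

def parse_data_row(data: bytes):
    """Decode a DataRow body: Int16 field count, then per field an Int32
    length (-1 => NULL) and the value bytes."""
    count = struct.unpack_from("!h", data, 0)[0]
    idx, row = 2, []
    for _ in range(count):
        vlen = struct.unpack_from("!i", data, idx)[0]
        idx += 4
        if vlen == -1:
            row.append(None)  # NULL column: no value bytes follow
        else:
            row.append(data[idx : idx + vlen].decode("utf8"))
            idx += vlen
    return row

data = struct.pack("!h", 2) + struct.pack("!i", 2) + b"42" + struct.pack("!i", -1)
assert parse_data_row(data) == ["42", None]
```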
def handle_messages(self, context):
code = None
while code != READY_FOR_QUERY:
code, data_len = ci_unpack(_read(self._sock, 5))
self.message_types[code](_read(self._sock, data_len - 4), context)
if context.error is not None:
raise context.error
def close_prepared_statement(self, statement_name_bin):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
self._send_message(CLOSE, STATEMENT + statement_name_bin)
_write(self._sock, FLUSH_MSG)
_write(self._sock, SYNC_MSG)
_flush(self._sock)
context = Context(None)
self.handle_messages(context)
self._statement_nums.remove(statement_name_bin)
def handle_NOTICE_RESPONSE(self, data, context):
"""https://www.postgresql.org/docs/current/protocol-message-formats.html"""
self.notices.append({s[0:1]: s[1:] for s in data.split(NULL_BYTE)})
def handle_PARAMETER_STATUS(self, data, context):
pos = data.find(NULL_BYTE)
key, value = data[:pos], data[pos + 1 : -1]
self.parameter_statuses.append((key, value))
if key == b"client_encoding":
encoding = value.decode("ascii").lower()
self._client_encoding = PG_PY_ENCODINGS.get(encoding, encoding)
elif key == b"integer_datetimes":
if value == b"on":
pass
else:
pass
elif key == b"server_version":
pass
class Context:
def __init__(self, statement, stream=None, columns=None, input_funcs=None):
self.statement = statement
self.rows = None if columns is None else []
self.row_count = -1
self.columns = columns
self.stream = stream
self.input_funcs = [] if input_funcs is None else input_funcs
self.error = None
from datetime import (
date as Date,
datetime as Datetime,
time as Time,
)
from itertools import count, islice
from time import localtime
from warnings import warn
from pg8000.converters import (
BIGINT,
BOOLEAN,
BOOLEAN_ARRAY,
BYTES,
CHAR,
CHAR_ARRAY,
DATE,
FLOAT,
FLOAT_ARRAY,
INET,
INT2VECTOR,
INTEGER,
INTEGER_ARRAY,
INTERVAL,
JSON,
JSONB,
MACADDR,
NAME,
NAME_ARRAY,
NULLTYPE,
NUMERIC,
NUMERIC_ARRAY,
OID,
PGInterval,
PY_PG,
STRING,
TEXT,
TEXT_ARRAY,
TIME,
TIMESTAMP,
TIMESTAMPTZ,
UNKNOWN,
UUID_TYPE,
VARCHAR,
VARCHAR_ARRAY,
XID,
)
from pg8000.core import (
Context,
CoreConnection,
IN_FAILED_TRANSACTION,
IN_TRANSACTION,
ver,
)
from pg8000.exceptions import DatabaseError, Error, InterfaceError
from pg8000.types import Range
__version__ = ver
# Copyright (c) 2007-2009, Mathieu Fenniak
# Copyright (c) The Contributors
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
__author__ = "Mathieu Fenniak"
ROWID = OID
apilevel = "2.0"
"""The DBAPI level supported, currently "2.0".
This property is part of the `DBAPI 2.0 specification
`_.
"""
threadsafety = 1
"""Integer constant stating the level of thread safety the DBAPI interface
supports. This DBAPI module supports sharing of the module only. Connections
and cursors may not be shared between threads. This gives pg8000 a threadsafety
value of 1.
This property is part of the `DBAPI 2.0 specification
`_.
"""
paramstyle = "format"
BINARY = bytes
def PgDate(year, month, day):
"""Construct an object holding a date value.
This function is part of the `DBAPI 2.0 specification
`_.
:rtype: :class:`datetime.date`
"""
return Date(year, month, day)
def PgTime(hour, minute, second):
"""Construct an object holding a time value.
This function is part of the `DBAPI 2.0 specification
`_.
:rtype: :class:`datetime.time`
"""
return Time(hour, minute, second)
def Timestamp(year, month, day, hour, minute, second):
"""Construct an object holding a timestamp value.
This function is part of the `DBAPI 2.0 specification
`_.
:rtype: :class:`datetime.datetime`
"""
return Datetime(year, month, day, hour, minute, second)
def DateFromTicks(ticks):
"""Construct an object holding a date value from the given ticks value
(number of seconds since the epoch).
This function is part of the `DBAPI 2.0 specification
`_.
:rtype: :class:`datetime.date`
"""
return Date(*localtime(ticks)[:3])
def TimeFromTicks(ticks):
"""Construct an object holding a time value from the given ticks value
(number of seconds since the epoch).
This function is part of the `DBAPI 2.0 specification
`_.
:rtype: :class:`datetime.time`
"""
return Time(*localtime(ticks)[3:6])
def TimestampFromTicks(ticks):
"""Construct an object holding a timestamp value from the given ticks value
(number of seconds since the epoch).
This function is part of the `DBAPI 2.0 specification
`_.
:rtype: :class:`datetime.datetime`
"""
return Timestamp(*localtime(ticks)[:6])
def Binary(value):
"""Construct an object holding binary data.
This function is part of the `DBAPI 2.0 specification
`_.
"""
return value
def connect(
user,
host="localhost",
database=None,
port=5432,
password=None,
source_address=None,
unix_sock=None,
ssl_context=None,
timeout=None,
tcp_keepalive=True,
application_name=None,
replication=None,
sock=None,
):
return Connection(
user,
host=host,
database=database,
port=port,
password=password,
source_address=source_address,
unix_sock=unix_sock,
ssl_context=ssl_context,
timeout=timeout,
tcp_keepalive=tcp_keepalive,
application_name=application_name,
replication=replication,
sock=sock,
)
apilevel = "2.0"
"""The DBAPI level supported, currently "2.0".
This property is part of the `DBAPI 2.0 specification
`_.
"""
threadsafety = 1
"""Integer constant stating the level of thread safety the DBAPI interface
supports. This DBAPI module supports sharing of the module only. Connections
and cursors my not be shared between threads. This gives pg8000 a threadsafety
value of 1.
This property is part of the `DBAPI 2.0 specification
`_.
"""
paramstyle = "format"
def convert_paramstyle(style, query, args):
# I don't see any way to avoid scanning the query string char by char,
# so we might as well take that careful approach and create a
# state-based scanner. We'll use int variables for the state.
OUTSIDE = 0 # outside quoted string
INSIDE_SQ = 1 # inside single-quote string '...'
INSIDE_QI = 2 # inside quoted identifier "..."
INSIDE_ES = 3 # inside escaped single-quote string, E'...'
INSIDE_PN = 4 # inside parameter name eg. :name
INSIDE_CO = 5 # inside inline comment eg. --
INSIDE_DQ = 6 # inside escaped dollar-quote string, $$...$$
in_quote_escape = False
in_param_escape = False
placeholders = []
output_query = []
param_idx = map(lambda x: "$" + str(x), count(1))
state = OUTSIDE
prev_c = None
for i, c in enumerate(query):
next_c = query[i + 1] if i + 1 < len(query) else None
if state == OUTSIDE:
if c == "'":
output_query.append(c)
if prev_c == "E":
state = INSIDE_ES
else:
state = INSIDE_SQ
elif c == '"':
output_query.append(c)
state = INSIDE_QI
elif c == "-":
output_query.append(c)
if prev_c == "-":
state = INSIDE_CO
elif c == "$":
output_query.append(c)
if prev_c == "$":
state = INSIDE_DQ
elif style == "qmark" and c == "?":
output_query.append(next(param_idx))
elif (
style == "numeric" and c == ":" and next_c not in ":=" and prev_c != ":"
):
# Treat : as beginning of parameter name if and only
# if it's the only : around
# Needed to properly process type conversions
# i.e. sum(x)::float
output_query.append("$")
elif style == "named" and c == ":" and next_c not in ":=" and prev_c != ":":
# Same logic for : as in numeric parameters
state = INSIDE_PN
placeholders.append("")
elif style == "pyformat" and c == "%" and next_c == "(":
state = INSIDE_PN
placeholders.append("")
elif style in ("format", "pyformat") and c == "%":
style = "format"
if in_param_escape:
in_param_escape = False
output_query.append(c)
else:
if next_c == "%":
in_param_escape = True
elif next_c == "s":
state = INSIDE_PN
output_query.append(next(param_idx))
else:
raise InterfaceError(
"Only %s and %% are supported in the query."
)
else:
output_query.append(c)
elif state == INSIDE_SQ:
if c == "'":
if in_quote_escape:
in_quote_escape = False
else:
if next_c == "'":
in_quote_escape = True
else:
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_QI:
if c == '"':
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_ES:
if c == "'" and prev_c != "\\":
# check for escaped single-quote
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_DQ:
if c == "$" and prev_c == "$":
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_PN:
if style == "named":
placeholders[-1] += c
if next_c is None or (not next_c.isalnum() and next_c != "_"):
state = OUTSIDE
try:
pidx = placeholders.index(placeholders[-1], 0, -1)
output_query.append("$" + str(pidx + 1))
del placeholders[-1]
except ValueError:
output_query.append("$" + str(len(placeholders)))
elif style == "pyformat":
if prev_c == ")" and c == "s":
state = OUTSIDE
try:
pidx = placeholders.index(placeholders[-1], 0, -1)
output_query.append("$" + str(pidx + 1))
del placeholders[-1]
except ValueError:
output_query.append("$" + str(len(placeholders)))
elif c in "()":
pass
else:
placeholders[-1] += c
elif style == "format":
state = OUTSIDE
elif state == INSIDE_CO:
output_query.append(c)
if c == "\n":
state = OUTSIDE
prev_c = c
if style in ("numeric", "qmark", "format"):
vals = args
else:
vals = tuple(args[p] for p in placeholders)
return "".join(output_query), vals
class Cursor:
def __init__(self, connection):
self._c = connection
self.arraysize = 1
self._context = None
self._row_iter = None
self._input_oids = ()
@property
def connection(self):
warn("DB-API extension cursor.connection used", stacklevel=3)
return self._c
@property
def rowcount(self):
context = self._context
if context is None:
return -1
return context.row_count
@property
def description(self):
context = self._context
if context is None:
return None
row_desc = context.columns
if row_desc is None:
return None
if len(row_desc) == 0:
return None
columns = []
for col in row_desc:
columns.append((col["name"], col["type_oid"], None, None, None, None, None))
return columns
##
# Executes a database operation. Parameters may be provided as a sequence
# or mapping and will be bound to variables in the operation.
#
# Stability: Part of the DBAPI 2.0 specification.
def execute(self, operation, args=(), stream=None):
"""Executes a database operation. Parameters may be provided as a
sequence, or as a mapping, depending upon the value of
:data:`pg8000.paramstyle`.
This method is part of the `DBAPI 2.0 specification
`_.
:param operation:
The SQL statement to execute.
:param args:
If :data:`paramstyle` is ``qmark``, ``numeric``, or ``format``,
this argument should be an array of parameters to bind into the
statement. If :data:`paramstyle` is ``named``, the argument should
be a dict mapping of parameters. If the :data:`paramstyle` is
``pyformat``, the argument value may be either an array or a
mapping.
:param stream: This is a pg8000 extension for use with the PostgreSQL
`COPY
`_
command. For a COPY FROM the parameter must be a readable file-like
object, and for COPY TO it must be writable.
.. versionadded:: 1.9.11
"""
try:
if not self._c._in_transaction and not self._c.autocommit:
self._c.execute_simple("begin transaction")
if len(args) == 0 and stream is None:
self._context = self._c.execute_simple(operation)
else:
statement, vals = convert_paramstyle(paramstyle, operation, args)
self._context = self._c.execute_unnamed(
statement, vals=vals, oids=self._input_oids, stream=stream
)
if self._context.rows is None:
self._row_iter = None
else:
self._row_iter = iter(self._context.rows)
self._input_oids = ()
except AttributeError as e:
if self._c is None:
raise InterfaceError("Cursor closed")
elif self._c._sock is None:
raise InterfaceError("connection is closed")
else:
raise e
self.input_types = []
def executemany(self, operation, param_sets):
"""Prepare a database operation, and then execute it against all
parameter sequences or mappings provided.
This method is part of the `DBAPI 2.0 specification
`_.
:param operation:
The SQL statement to execute
:param parameter_sets:
A sequence of parameters to execute the statement with. The values
in the sequence should be sequences or mappings of parameters, the
same as the args argument of the :meth:`execute` method.
"""
rowcounts = []
input_oids = self._input_oids
for parameters in param_sets:
self._input_oids = input_oids
self.execute(operation, parameters)
rowcounts.append(self._context.row_count)
if len(rowcounts) == 0:
self._context = Context(None)
elif -1 in rowcounts:
self._context.row_count = -1
else:
self._context.row_count = sum(rowcounts)
def callproc(self, procname, parameters=None):
args = [] if parameters is None else parameters
operation = f"CALL {procname}(" + ", ".join(["%s" for _ in args]) + ")"
try:
statement, vals = convert_paramstyle("format", operation, args)
self._context = self._c.execute_unnamed(statement, vals=vals)
if self._context.rows is None:
self._row_iter = None
else:
self._row_iter = iter(self._context.rows)
except AttributeError as e:
if self._c is None:
raise InterfaceError("Cursor closed")
elif self._c._sock is None:
raise InterfaceError("connection is closed")
else:
raise e
def fetchone(self):
"""Fetch the next row of a query result set.
This method is part of the `DBAPI 2.0 specification
`_.
:returns:
A row as a sequence of field values, or ``None`` if no more rows
are available.
"""
try:
return next(self)
except StopIteration:
return None
except TypeError:
raise ProgrammingError("attempting to use unexecuted cursor")
def __iter__(self):
"""A cursor object is iterable to retrieve the rows from a query.
This is a DBAPI 2.0 extension.
"""
return self
def __next__(self):
try:
return next(self._row_iter)
except AttributeError:
if self._context is None:
raise ProgrammingError("A query hasn't been issued.")
else:
raise
except StopIteration as e:
if self._context is None:
raise ProgrammingError("A query hasn't been issued.")
elif len(self._context.columns) == 0:
raise ProgrammingError("no result set")
else:
raise e
def fetchmany(self, num=None):
"""Fetches the next set of rows of a query result.
This method is part of the `DBAPI 2.0 specification
`_.
    :param num:
The number of rows to fetch when called. If not provided, the
:attr:`arraysize` attribute value is used instead.
:returns:
A sequence, each entry of which is a sequence of field values
making up a row. If no more rows are available, an empty sequence
will be returned.
"""
try:
return tuple(islice(self, self.arraysize if num is None else num))
except TypeError:
raise ProgrammingError("attempting to use unexecuted cursor")
def fetchall(self):
"""Fetches all remaining rows of a query result.
This method is part of the `DBAPI 2.0 specification
`_.
:returns:
A sequence, each entry of which is a sequence of field values
making up a row.
"""
try:
return tuple(self)
except TypeError:
raise ProgrammingError("attempting to use unexecuted cursor")
def close(self):
"""Closes the cursor.
This method is part of the `DBAPI 2.0 specification
`_.
"""
self._c = None
def setinputsizes(self, *sizes):
"""This method is part of the `DBAPI 2.0 specification"""
oids = []
for size in sizes:
if isinstance(size, int):
oid = size
else:
try:
oid = PY_PG[size]
except KeyError:
oid = UNKNOWN
oids.append(oid)
self._input_oids = oids
def setoutputsize(self, size, column=None):
"""This method is part of the `DBAPI 2.0 specification
`_, however, it is not
implemented by pg8000.
"""
pass
class Connection(CoreConnection):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.autocommit = False
# DBAPI Extension: supply exceptions as attributes on the connection
Warning = property(lambda self: self._getError(Warning))
Error = property(lambda self: self._getError(Error))
InterfaceError = property(lambda self: self._getError(InterfaceError))
DatabaseError = property(lambda self: self._getError(DatabaseError))
OperationalError = property(lambda self: self._getError(OperationalError))
IntegrityError = property(lambda self: self._getError(IntegrityError))
InternalError = property(lambda self: self._getError(InternalError))
ProgrammingError = property(lambda self: self._getError(ProgrammingError))
NotSupportedError = property(lambda self: self._getError(NotSupportedError))
def _getError(self, error):
warn(f"DB-API extension connection.{error.__name__} used", stacklevel=3)
return error
@property
def _in_transaction(self):
return self._transaction_status in (IN_TRANSACTION, IN_FAILED_TRANSACTION)
def cursor(self):
"""Creates a :class:`Cursor` object bound to this
connection.
This function is part of the `DBAPI 2.0 specification
`_.
"""
return Cursor(self)
def commit(self):
"""Commits the current database transaction.
This function is part of the `DBAPI 2.0 specification
`_.
"""
self.execute_unnamed("commit")
def rollback(self):
"""Rolls back the current database transaction.
This function is part of the `DBAPI 2.0 specification
`_.
"""
if not self._in_transaction:
return
self.execute_unnamed("rollback")
def xid(self, format_id, global_transaction_id, branch_qualifier):
"""Create a Transaction IDs (only global_transaction_id is used in pg)
format_id and branch_qualifier are not used in postgres
global_transaction_id may be any string identifier supported by
postgres returns a tuple
(format_id, global_transaction_id, branch_qualifier)"""
return (format_id, global_transaction_id, branch_qualifier)
def tpc_begin(self, xid):
"""Begins a TPC transaction with the given transaction ID xid.
This method should be called outside of a transaction (i.e. nothing may
have executed since the last .commit() or .rollback()).
Furthermore, it is an error to call .commit() or .rollback() within the
        TPC transaction. A ProgrammingError is raised if the application calls
.commit() or .rollback() during an active TPC transaction.
This function is part of the `DBAPI 2.0 specification
`_.
"""
self._xid = xid
if self.autocommit:
self.execute_unnamed("begin transaction")
def tpc_prepare(self):
"""Performs the first phase of a transaction started with .tpc_begin().
        A ProgrammingError is raised if this method is called outside of a
TPC transaction.
After calling .tpc_prepare(), no statements can be executed until
.tpc_commit() or .tpc_rollback() have been called.
This function is part of the `DBAPI 2.0 specification
`_.
"""
self.execute_unnamed("PREPARE TRANSACTION '%s';" % (self._xid[1],))
def tpc_commit(self, xid=None):
"""When called with no arguments, .tpc_commit() commits a TPC
transaction previously prepared with .tpc_prepare().
If .tpc_commit() is called prior to .tpc_prepare(), a single phase
commit is performed. A transaction manager may choose to do this if
only a single resource is participating in the global transaction.
When called with a transaction ID xid, the database commits the given
transaction. If an invalid transaction ID is provided, a
ProgrammingError will be raised. This form should be called outside of
a transaction, and is intended for use in recovery.
On return, the TPC transaction is ended.
This function is part of the `DBAPI 2.0 specification
`_.
"""
if xid is None:
xid = self._xid
if xid is None:
raise ProgrammingError("Cannot tpc_commit() without a TPC transaction!")
try:
previous_autocommit_mode = self.autocommit
self.autocommit = True
if xid in self.tpc_recover():
self.execute_unnamed("COMMIT PREPARED '%s';" % (xid[1],))
else:
# a single-phase commit
self.commit()
finally:
self.autocommit = previous_autocommit_mode
self._xid = None
def tpc_rollback(self, xid=None):
"""When called with no arguments, .tpc_rollback() rolls back a TPC
transaction. It may be called before or after .tpc_prepare().
When called with a transaction ID xid, it rolls back the given
transaction. If an invalid transaction ID is provided, a
ProgrammingError is raised. This form should be called outside of a
transaction, and is intended for use in recovery.
On return, the TPC transaction is ended.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
if xid is None:
xid = self._xid
if xid is None:
raise ProgrammingError(
"Cannot tpc_rollback() without a TPC prepared transaction!"
)
try:
previous_autocommit_mode = self.autocommit
self.autocommit = True
if xid in self.tpc_recover():
# a two-phase rollback
self.execute_unnamed("ROLLBACK PREPARED '%s';" % (xid[1],))
else:
# a single-phase rollback
self.rollback()
finally:
self.autocommit = previous_autocommit_mode
self._xid = None
def tpc_recover(self):
"""Returns a list of pending transaction IDs suitable for use with
.tpc_commit(xid) or .tpc_rollback(xid).
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
try:
previous_autocommit_mode = self.autocommit
self.autocommit = True
curs = self.cursor()
curs.execute("SELECT gid FROM pg_prepared_xacts")
return [self.xid(0, row[0], "") for row in curs.fetchall()]
finally:
self.autocommit = previous_autocommit_mode
class Warning(Exception):
"""Generic exception raised for important database warnings like data
truncations. This exception is not currently used by pg8000.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class DataError(DatabaseError):
"""Generic exception raised for errors that are due to problems with the
processed data. This exception is not currently raised by pg8000.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class OperationalError(DatabaseError):
"""
Generic exception raised for errors that are related to the database's
operation and not necessarily under the control of the programmer. This
exception is currently never raised by pg8000.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class IntegrityError(DatabaseError):
"""
Generic exception raised when the relational integrity of the database is
affected. This exception is not currently raised by pg8000.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class InternalError(DatabaseError):
"""Generic exception raised when the database encounters an internal error.
This is currently only raised when unexpected state occurs in the pg8000
interface itself, and is typically the result of an interface bug.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class ProgrammingError(DatabaseError):
"""Generic exception raised for programming errors. For example, this
exception is raised if more parameter fields are in a query string than
there are available parameters.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class NotSupportedError(DatabaseError):
"""Generic exception raised in case a method or database API was used which
is not supported by the database.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class ArrayContentNotSupportedError(NotSupportedError):
"""
Raised when attempting to transmit an array where the base type is not
supported for binary data transfer by the interface.
"""
pass
__all__ = [
"BIGINT",
"BINARY",
"BOOLEAN",
"BOOLEAN_ARRAY",
"BYTES",
"Binary",
"CHAR",
"CHAR_ARRAY",
"Connection",
"Cursor",
"DATE",
"DataError",
"DatabaseError",
"Date",
"DateFromTicks",
"Error",
"FLOAT",
"FLOAT_ARRAY",
"INET",
"INT2VECTOR",
"INTEGER",
"INTEGER_ARRAY",
"INTERVAL",
"IntegrityError",
"InterfaceError",
"InternalError",
"JSON",
"JSONB",
"MACADDR",
"NAME",
"NAME_ARRAY",
"NULLTYPE",
"NUMERIC",
"NUMERIC_ARRAY",
"NotSupportedError",
"OID",
"OperationalError",
"PGInterval",
"ProgrammingError",
"ROWID",
"Range",
"STRING",
"TEXT",
"TEXT_ARRAY",
"TIME",
"TIMESTAMP",
"TIMESTAMPTZ",
"Time",
"TimeFromTicks",
"Timestamp",
"TimestampFromTicks",
"UNKNOWN",
"UUID_TYPE",
"VARCHAR",
"VARCHAR_ARRAY",
"Warning",
"XID",
"connect",
]
# pg8000-1.30.3/pg8000/exceptions.py
class Error(Exception):
"""Generic exception that is the base exception of all other error
exceptions.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class InterfaceError(Error):
"""Generic exception raised for errors that are related to the database
interface rather than the database itself. For example, if the interface
attempts to use an SSL connection but the server refuses, an InterfaceError
will be raised.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
class DatabaseError(Error):
"""Generic exception raised for errors that are related to the database.
This exception is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
pass
# pg8000-1.30.3/pg8000/legacy.py
from datetime import date as Date, time as Time
from itertools import islice
from warnings import warn
import pg8000
from pg8000.converters import (
BIGINT,
BOOLEAN,
BOOLEAN_ARRAY,
BYTES,
CHAR,
CHAR_ARRAY,
DATE,
FLOAT,
FLOAT_ARRAY,
INET,
INT2VECTOR,
INTEGER,
INTEGER_ARRAY,
INTERVAL,
JSON,
JSONB,
MACADDR,
NAME,
NAME_ARRAY,
NULLTYPE,
NUMERIC,
NUMERIC_ARRAY,
OID,
PGInterval,
PY_PG,
Range,
STRING,
TEXT,
TEXT_ARRAY,
TIME,
TIMESTAMP,
TIMESTAMPTZ,
UNKNOWN,
UUID_TYPE,
VARCHAR,
VARCHAR_ARRAY,
XID,
interval_in as timedelta_in,
make_params,
pg_interval_in as pginterval_in,
pg_interval_out as pginterval_out,
)
from pg8000.core import (
Context,
CoreConnection,
IN_FAILED_TRANSACTION,
IN_TRANSACTION,
ver,
)
from pg8000.dbapi import (
BINARY,
Binary,
DataError,
DateFromTicks,
IntegrityError,
InternalError,
NotSupportedError,
OperationalError,
ProgrammingError,
TimeFromTicks,
Timestamp,
TimestampFromTicks,
Warning,
convert_paramstyle,
)
from pg8000.exceptions import DatabaseError, Error, InterfaceError
__version__ = ver
# Copyright (c) 2007-2009, Mathieu Fenniak
# Copyright (c) The Contributors
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
__author__ = "Mathieu Fenniak"
BIGINTEGER = BIGINT
DATETIME = TIMESTAMP
NUMBER = DECIMAL = NUMERIC
DECIMAL_ARRAY = NUMERIC_ARRAY
ROWID = OID
TIMEDELTA = INTERVAL
def connect(
user,
host="localhost",
database=None,
port=5432,
password=None,
source_address=None,
unix_sock=None,
ssl_context=None,
timeout=None,
tcp_keepalive=True,
application_name=None,
replication=None,
):
return Connection(
user,
host=host,
database=database,
port=port,
password=password,
source_address=source_address,
unix_sock=unix_sock,
ssl_context=ssl_context,
timeout=timeout,
tcp_keepalive=tcp_keepalive,
application_name=application_name,
replication=replication,
)
apilevel = "2.0"
"""The DBAPI level supported, currently "2.0".
This property is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
threadsafety = 1
"""Integer constant stating the level of thread safety the DBAPI interface
supports. This DBAPI module supports sharing of the module only. Connections
and cursors may not be shared between threads. This gives pg8000 a threadsafety
value of 1.
This property is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
paramstyle = "format"
class Cursor:
def __init__(self, connection, paramstyle=None):
self._c = connection
self.arraysize = 1
if paramstyle is None:
self.paramstyle = pg8000.paramstyle
else:
self.paramstyle = paramstyle
self._context = None
self._row_iter = None
self._input_oids = ()
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, traceback):
self.close()
@property
def connection(self):
warn("DB-API extension cursor.connection used", stacklevel=3)
return self._c
@property
def rowcount(self):
context = self._context
if context is None:
return -1
return context.row_count
description = property(lambda self: self._getDescription())
def _getDescription(self):
context = self._context
if context is None:
return None
row_desc = context.columns
if row_desc is None:
return None
if len(row_desc) == 0:
return None
columns = []
for col in row_desc:
columns.append((col["name"], col["type_oid"], None, None, None, None, None))
return columns
##
# Executes a database operation. Parameters may be provided as a sequence
# or mapping and will be bound to variables in the operation.
#
# Stability: Part of the DBAPI 2.0 specification.
def execute(self, operation, args=(), stream=None):
"""Executes a database operation. Parameters may be provided as a
sequence, or as a mapping, depending upon the value of
:data:`pg8000.paramstyle`.
This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
:param operation:
The SQL statement to execute.
:param args:
If :data:`paramstyle` is ``qmark``, ``numeric``, or ``format``,
this argument should be an array of parameters to bind into the
statement. If :data:`paramstyle` is ``named``, the argument should
be a dict mapping of parameters. If the :data:`paramstyle` is
``pyformat``, the argument value may be either an array or a
mapping.
:param stream: This is a pg8000 extension for use with the PostgreSQL
`COPY
<https://www.postgresql.org/docs/current/sql-copy.html>`_
command. For a COPY FROM the parameter must be a readable file-like
object, and for COPY TO it must be writable.
.. versionadded:: 1.9.11
"""
try:
if not self._c._in_transaction and not self._c.autocommit:
self._c.execute_simple("begin transaction")
if len(args) == 0 and stream is None:
self._context = self._c.execute_simple(operation)
else:
statement, vals = convert_paramstyle(self.paramstyle, operation, args)
self._context = self._c.execute_unnamed(
statement, vals=vals, oids=self._input_oids, stream=stream
)
rows = [] if self._context.rows is None else self._context.rows
self._row_iter = iter(rows)
self._input_oids = ()
except AttributeError as e:
if self._c is None:
raise InterfaceError("Cursor closed")
elif self._c._sock is None:
raise InterfaceError("connection is closed")
else:
raise e
except DatabaseError as e:
msg = e.args[0]
if isinstance(msg, dict):
response_code = msg["C"]
if response_code == "28000":
cls = InterfaceError
elif response_code == "23505":
cls = IntegrityError
else:
cls = ProgrammingError
raise cls(msg)
else:
raise ProgrammingError(msg)
self.input_types = []
return self
def executemany(self, operation, param_sets):
"""Prepare a database operation, and then execute it against all
parameter sequences or mappings provided.
This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
:param operation:
The SQL statement to execute
:param parameter_sets:
A sequence of parameters to execute the statement with. The values
in the sequence should be sequences or mappings of parameters, the
same as the args argument of the :meth:`execute` method.
"""
rowcounts = []
input_oids = self._input_oids
for parameters in param_sets:
self._input_oids = input_oids
self.execute(operation, parameters)
rowcounts.append(self._context.row_count)
if len(rowcounts) == 0:
self._context = Context(None)
elif -1 in rowcounts:
self._context.row_count = -1
else:
self._context.row_count = sum(rowcounts)
return self
def fetchone(self):
"""Fetch the next row of a query result set.
This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
:returns:
A row as a sequence of field values, or ``None`` if no more rows
are available.
"""
try:
return next(self)
except StopIteration:
return None
except TypeError:
raise ProgrammingError("attempting to use unexecuted cursor")
except AttributeError:
raise ProgrammingError("attempting to use unexecuted cursor")
def fetchmany(self, num=None):
"""Fetches the next set of rows of a query result.
This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
:param size:
The number of rows to fetch when called. If not provided, the
:attr:`arraysize` attribute value is used instead.
:returns:
A sequence, each entry of which is a sequence of field values
making up a row. If no more rows are available, an empty sequence
will be returned.
"""
try:
return tuple(islice(self, self.arraysize if num is None else num))
except TypeError:
raise ProgrammingError("attempting to use unexecuted cursor")
def fetchall(self):
"""Fetches all remaining rows of a query result.
This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
:returns:
A sequence, each entry of which is a sequence of field values
making up a row.
"""
try:
return tuple(self)
except TypeError:
raise ProgrammingError("attempting to use unexecuted cursor")
def close(self):
"""Closes the cursor.
This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
self._c = None
def __iter__(self):
"""A cursor object is iterable to retrieve the rows from a query.
This is a DBAPI 2.0 extension.
"""
return self
def setinputsizes(self, *sizes):
"""This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
oids = []
for size in sizes:
if isinstance(size, int):
oid = size
else:
try:
oid = PY_PG[size]
except KeyError:
oid = UNKNOWN
oids.append(oid)
self._input_oids = oids
def setoutputsize(self, size, column=None):
"""This method is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_, however, it is not
implemented by pg8000.
"""
pass
def __next__(self):
try:
return next(self._row_iter)
except AttributeError:
if self._context is None:
raise ProgrammingError("A query hasn't been issued.")
else:
raise
except StopIteration as e:
if self._context is None:
raise ProgrammingError("A query hasn't been issued.")
elif len(self._context.columns) == 0:
raise ProgrammingError("no result set")
else:
raise e
class Connection(CoreConnection):
# DBAPI Extension: supply exceptions as attributes on the connection
Warning = property(lambda self: self._getError(Warning))
Error = property(lambda self: self._getError(Error))
InterfaceError = property(lambda self: self._getError(InterfaceError))
DatabaseError = property(lambda self: self._getError(DatabaseError))
OperationalError = property(lambda self: self._getError(OperationalError))
IntegrityError = property(lambda self: self._getError(IntegrityError))
InternalError = property(lambda self: self._getError(InternalError))
ProgrammingError = property(lambda self: self._getError(ProgrammingError))
NotSupportedError = property(lambda self: self._getError(NotSupportedError))
def __init__(self, *args, **kwargs):
try:
super().__init__(*args, **kwargs)
except DatabaseError as e:
msg = e.args[0]
if isinstance(msg, dict):
response_code = msg["C"]
if response_code == "28000":
cls = InterfaceError
elif response_code == "23505":
cls = IntegrityError
else:
cls = ProgrammingError
raise cls(msg)
else:
raise ProgrammingError(msg)
self._run_cursor = Cursor(self, paramstyle="named")
self.autocommit = False
def _getError(self, error):
warn("DB-API extension connection.%s used" % error.__name__, stacklevel=3)
return error
def cursor(self):
"""Creates a :class:`Cursor` object bound to this
connection.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
return Cursor(self)
@property
def description(self):
return self._run_cursor._getDescription()
@property
def _in_transaction(self):
return self._transaction_status in (IN_TRANSACTION, IN_FAILED_TRANSACTION)
def commit(self):
"""Commits the current database transaction.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
self.execute_unnamed("commit")
def rollback(self):
"""Rolls back the current database transaction.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
if not self._in_transaction:
return
self.execute_unnamed("rollback")
def run(self, sql, stream=None, **params):
self._run_cursor.execute(sql, params, stream=stream)
if self._run_cursor._context.rows is None:
return tuple()
else:
return tuple(self._run_cursor._context.rows)
def prepare(self, operation):
return PreparedStatement(self, operation)
def xid(self, format_id, global_transaction_id, branch_qualifier):
"""Create a transaction ID. Only global_transaction_id is used in
PostgreSQL; format_id and branch_qualifier are ignored.
global_transaction_id may be any string identifier supported by
PostgreSQL. Returns the tuple
(format_id, global_transaction_id, branch_qualifier)."""
return (format_id, global_transaction_id, branch_qualifier)
def tpc_begin(self, xid):
"""Begins a TPC transaction with the given transaction ID xid.
This method should be called outside of a transaction (i.e. nothing may
have executed since the last .commit() or .rollback()).
Furthermore, it is an error to call .commit() or .rollback() within the
TPC transaction. A ProgrammingError is raised if the application calls
.commit() or .rollback() during an active TPC transaction.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
self._xid = xid
if self.autocommit:
self.execute_unnamed("begin transaction")
def tpc_prepare(self):
"""Performs the first phase of a transaction started with .tpc_begin().
A ProgrammingError is raised if this method is called outside of a
TPC transaction.
After calling .tpc_prepare(), no statements can be executed until
.tpc_commit() or .tpc_rollback() have been called.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
q = "PREPARE TRANSACTION '%s';" % (self._xid[1],)
self.execute_unnamed(q)
def tpc_commit(self, xid=None):
"""When called with no arguments, .tpc_commit() commits a TPC
transaction previously prepared with .tpc_prepare().
If .tpc_commit() is called prior to .tpc_prepare(), a single phase
commit is performed. A transaction manager may choose to do this if
only a single resource is participating in the global transaction.
When called with a transaction ID xid, the database commits the given
transaction. If an invalid transaction ID is provided, a
ProgrammingError will be raised. This form should be called outside of
a transaction, and is intended for use in recovery.
On return, the TPC transaction is ended.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
if xid is None:
xid = self._xid
if xid is None:
raise ProgrammingError("Cannot tpc_commit() without a TPC transaction!")
try:
previous_autocommit_mode = self.autocommit
self.autocommit = True
if xid in self.tpc_recover():
self.execute_unnamed("COMMIT PREPARED '%s';" % (xid[1],))
else:
# a single-phase commit
self.commit()
finally:
self.autocommit = previous_autocommit_mode
self._xid = None
def tpc_rollback(self, xid=None):
"""When called with no arguments, .tpc_rollback() rolls back a TPC
transaction. It may be called before or after .tpc_prepare().
When called with a transaction ID xid, it rolls back the given
transaction. If an invalid transaction ID is provided, a
ProgrammingError is raised. This form should be called outside of a
transaction, and is intended for use in recovery.
On return, the TPC transaction is ended.
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
if xid is None:
xid = self._xid
if xid is None:
raise ProgrammingError(
"Cannot tpc_rollback() without a TPC prepared transaction!"
)
try:
previous_autocommit_mode = self.autocommit
self.autocommit = True
if xid in self.tpc_recover():
# a two-phase rollback
self.execute_unnamed("ROLLBACK PREPARED '%s';" % (xid[1],))
else:
# a single-phase rollback
self.rollback()
finally:
self.autocommit = previous_autocommit_mode
self._xid = None
def tpc_recover(self):
"""Returns a list of pending transaction IDs suitable for use with
.tpc_commit(xid) or .tpc_rollback(xid).
This function is part of the `DBAPI 2.0 specification
<https://www.python.org/dev/peps/pep-0249/>`_.
"""
try:
previous_autocommit_mode = self.autocommit
self.autocommit = True
curs = self.cursor()
curs.execute("SELECT gid FROM pg_prepared_xacts")
return [self.xid(0, row[0], "") for row in curs]
finally:
self.autocommit = previous_autocommit_mode
def to_statement(query):
OUTSIDE = 0 # outside quoted string
INSIDE_SQ = 1 # inside single-quote string '...'
INSIDE_QI = 2 # inside quoted identifier "..."
INSIDE_ES = 3 # inside escaped single-quote string, E'...'
INSIDE_PN = 4 # inside parameter name eg. :name
INSIDE_CO = 5 # inside inline comment eg. --
in_quote_escape = False
placeholders = []
output_query = []
state = OUTSIDE
prev_c = None
for i, c in enumerate(query):
if i + 1 < len(query):
next_c = query[i + 1]
else:
next_c = None
if state == OUTSIDE:
if c == "'":
output_query.append(c)
if prev_c == "E":
state = INSIDE_ES
else:
state = INSIDE_SQ
elif c == '"':
output_query.append(c)
state = INSIDE_QI
elif c == "-":
output_query.append(c)
if prev_c == "-":
state = INSIDE_CO
elif c == ":" and next_c not in ":=" and prev_c != ":":
state = INSIDE_PN
placeholders.append("")
else:
output_query.append(c)
elif state == INSIDE_SQ:
if c == "'":
if in_quote_escape:
in_quote_escape = False
else:
if next_c == "'":
in_quote_escape = True
else:
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_QI:
if c == '"':
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_ES:
if c == "'" and prev_c != "\\":
# check for escaped single-quote
state = OUTSIDE
output_query.append(c)
elif state == INSIDE_PN:
placeholders[-1] += c
if next_c is None or (not next_c.isalnum() and next_c != "_"):
state = OUTSIDE
try:
pidx = placeholders.index(placeholders[-1], 0, -1)
output_query.append("$" + str(pidx + 1))
del placeholders[-1]
except ValueError:
output_query.append("$" + str(len(placeholders)))
elif state == INSIDE_CO:
output_query.append(c)
if c == "\n":
state = OUTSIDE
prev_c = c
def make_vals(args):
return tuple(args[p] for p in placeholders)
return "".join(output_query), make_vals
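As an aside, the placeholder rewriting that ``to_statement`` performs can be sketched with a much simpler, regex-based converter. The ``to_numbered`` helper below is hypothetical and deliberately ignores quoted strings, comments and ``::`` casts (which is exactly what the state machine above exists to handle); it only illustrates the numbering and deduplication of named parameters.

```python
import re

def to_numbered(query):
    # Hypothetical, simplified sketch of to_statement(): rewrite each
    # ':name' placeholder as a numbered '$n' parameter, reusing the same
    # number when a name repeats.  No handling of quotes or comments.
    order = []

    def repl(match):
        name = match.group(1)
        if name not in order:
            order.append(name)
        return f"${order.index(name) + 1}"

    # (?<![:\w]) skips '::' casts and colons embedded in words
    text = re.sub(r"(?<![:\w]):(\w+)", repl, query)
    return text, lambda args: tuple(args[p] for p in order)

stmt, make_vals = to_numbered("SELECT :a, :b WHERE x = :a")
print(stmt)                         # SELECT $1, $2 WHERE x = $1
print(make_vals({"a": 1, "b": 2}))  # (1, 2)
```

Like the real function, repeated names map to the same ``$n`` and the returned callable orders the values to match the numbering.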
class PreparedStatement:
def __init__(self, con, operation):
self.con = con
self.operation = operation
statement, self.make_args = to_statement(operation)
self.name_bin, self.row_desc, self.input_funcs = con.prepare_statement(
statement, ()
)
def run(self, **vals):
params = make_params(self.con.py_types, self.make_args(vals))
try:
if not self.con._in_transaction and not self.con.autocommit:
self.con.execute_unnamed("begin transaction")
self._context = self.con.execute_named(
self.name_bin, params, self.row_desc, self.input_funcs, self.operation
)
except AttributeError as e:
if self.con is None:
raise InterfaceError("Cursor closed")
elif self.con._sock is None:
raise InterfaceError("connection is closed")
else:
raise e
return tuple() if self._context.rows is None else tuple(self._context.rows)
def close(self):
self.con.close_prepared_statement(self.name_bin)
self.con = None
__all__ = [
"BIGINTEGER",
"BINARY",
"BOOLEAN",
"BOOLEAN_ARRAY",
"BYTES",
"Binary",
"CHAR",
"CHAR_ARRAY",
"Connection",
"Cursor",
"DATE",
"DATETIME",
"DECIMAL",
"DECIMAL_ARRAY",
"DataError",
"DatabaseError",
"Date",
"DateFromTicks",
"Error",
"FLOAT",
"FLOAT_ARRAY",
"INET",
"INT2VECTOR",
"INTEGER",
"INTEGER_ARRAY",
"INTERVAL",
"IntegrityError",
"InterfaceError",
"InternalError",
"JSON",
"JSONB",
"MACADDR",
"NAME",
"NAME_ARRAY",
"NULLTYPE",
"NUMBER",
"NotSupportedError",
"OID",
"OperationalError",
"PGInterval",
"ProgrammingError",
"ROWID",
"Range",
"STRING",
"TEXT",
"TEXT_ARRAY",
"TIME",
"TIMEDELTA",
"TIMESTAMP",
"TIMESTAMPTZ",
"Time",
"TimeFromTicks",
"Timestamp",
"TimestampFromTicks",
"UNKNOWN",
"UUID_TYPE",
"VARCHAR",
"VARCHAR_ARRAY",
"Warning",
"XID",
"connect",
"pginterval_in",
"pginterval_out",
"timedelta_in",
]
# pg8000-1.30.3/pg8000/native.py
from collections import defaultdict
from enum import Enum, auto
from pg8000.converters import (
BIGINT,
BOOLEAN,
BOOLEAN_ARRAY,
BYTES,
CHAR,
CHAR_ARRAY,
DATE,
FLOAT,
FLOAT_ARRAY,
INET,
INT2VECTOR,
INTEGER,
INTEGER_ARRAY,
INTERVAL,
JSON,
JSONB,
JSONB_ARRAY,
JSON_ARRAY,
MACADDR,
NAME,
NAME_ARRAY,
NULLTYPE,
NUMERIC,
NUMERIC_ARRAY,
OID,
PGInterval,
STRING,
TEXT,
TEXT_ARRAY,
TIME,
TIMESTAMP,
TIMESTAMPTZ,
UNKNOWN,
UUID_TYPE,
VARCHAR,
VARCHAR_ARRAY,
XID,
identifier,
literal,
make_params,
)
from pg8000.core import CoreConnection, ver
from pg8000.exceptions import DatabaseError, Error, InterfaceError
from pg8000.types import Range
__version__ = ver
# Copyright (c) 2007-2009, Mathieu Fenniak
# Copyright (c) The Contributors
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * The name of the author may not be used to endorse or promote products
# derived from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
class State(Enum):
OUT = auto() # outside quoted string
IN_SQ = auto() # inside single-quote string '...'
IN_QI = auto() # inside quoted identifier "..."
IN_ES = auto() # inside escaped single-quote string, E'...'
IN_PN = auto() # inside parameter name eg. :name
IN_CO = auto() # inside inline comment eg. --
IN_DQ = auto() # inside dollar-quoted string eg. $$...$$
def to_statement(query):
in_quote_escape = False
placeholders = []
output_query = []
state = State.OUT
prev_c = None
for i, c in enumerate(query):
if i + 1 < len(query):
next_c = query[i + 1]
else:
next_c = None
if state == State.OUT:
if c == "'":
output_query.append(c)
if prev_c == "E":
state = State.IN_ES
else:
state = State.IN_SQ
elif c == '"':
output_query.append(c)
state = State.IN_QI
elif c == "-":
output_query.append(c)
if prev_c == "-":
state = State.IN_CO
elif c == "$":
output_query.append(c)
if prev_c == "$":
state = State.IN_DQ
elif c == ":" and next_c not in ":=" and prev_c != ":":
state = State.IN_PN
placeholders.append("")
else:
output_query.append(c)
elif state == State.IN_SQ:
if c == "'":
if in_quote_escape:
in_quote_escape = False
elif next_c == "'":
in_quote_escape = True
else:
state = State.OUT
output_query.append(c)
elif state == State.IN_QI:
if c == '"':
state = State.OUT
output_query.append(c)
elif state == State.IN_ES:
if c == "'" and prev_c != "\\":
# check for escaped single-quote
state = State.OUT
output_query.append(c)
elif state == State.IN_PN:
placeholders[-1] += c
if next_c is None or (not next_c.isalnum() and next_c != "_"):
state = State.OUT
try:
pidx = placeholders.index(placeholders[-1], 0, -1)
output_query.append(f"${pidx + 1}")
del placeholders[-1]
except ValueError:
output_query.append(f"${len(placeholders)}")
elif state == State.IN_CO:
output_query.append(c)
if c == "\n":
state = State.OUT
elif state == State.IN_DQ:
output_query.append(c)
if c == "$" and prev_c == "$":
state = State.OUT
prev_c = c
for reserved in ("types", "stream"):
if reserved in placeholders:
raise InterfaceError(
f"The name '{reserved}' can't be used as a placeholder because it's "
f"used for another purpose."
)
def make_vals(args):
vals = []
for p in placeholders:
try:
vals.append(args[p])
except KeyError:
raise InterfaceError(
f"There's a placeholder '{p}' in the query, but no matching "
f"keyword argument."
)
return tuple(vals)
return "".join(output_query), make_vals
class Connection(CoreConnection):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._context = None
@property
def columns(self):
context = self._context
if context is None:
return None
return context.columns
@property
def row_count(self):
context = self._context
if context is None:
return None
return context.row_count
def run(self, sql, stream=None, types=None, **params):
if len(params) == 0 and stream is None:
self._context = self.execute_simple(sql)
else:
statement, make_vals = to_statement(sql)
oids = () if types is None else make_vals(defaultdict(lambda: None, types))
self._context = self.execute_unnamed(
statement, make_vals(params), oids=oids, stream=stream
)
return self._context.rows
def prepare(self, sql):
return PreparedStatement(self, sql)
class PreparedStatement:
def __init__(self, con, sql, types=None):
self.con = con
self.statement, self.make_vals = to_statement(sql)
oids = () if types is None else self.make_vals(defaultdict(lambda: None, types))
self.name_bin, self.cols, self.input_funcs = con.prepare_statement(
self.statement, oids
)
@property
def columns(self):
return self._context.columns
def run(self, stream=None, **params):
params = make_params(self.con.py_types, self.make_vals(params))
self._context = self.con.execute_named(
self.name_bin, params, self.cols, self.input_funcs, self.statement
)
return self._context.rows
def close(self):
self.con.close_prepared_statement(self.name_bin)
__all__ = [
"BIGINT",
"BOOLEAN",
"BOOLEAN_ARRAY",
"BYTES",
"CHAR",
"CHAR_ARRAY",
"DATE",
"DatabaseError",
"Error",
"FLOAT",
"FLOAT_ARRAY",
"INET",
"INT2VECTOR",
"INTEGER",
"INTEGER_ARRAY",
"INTERVAL",
"InterfaceError",
"JSON",
"JSONB",
"JSONB_ARRAY",
"JSON_ARRAY",
"MACADDR",
"NAME",
"NAME_ARRAY",
"NULLTYPE",
"NUMERIC",
"NUMERIC_ARRAY",
"OID",
"PGInterval",
"Range",
"STRING",
"TEXT",
"TEXT_ARRAY",
"TIME",
"TIMESTAMP",
"TIMESTAMPTZ",
"UNKNOWN",
"UUID_TYPE",
"VARCHAR",
"VARCHAR_ARRAY",
"XID",
"identifier",
"literal",
]
# pg8000-1.30.3/pg8000/types.py
from datetime import timedelta as Timedelta
class PGInterval:
UNIT_MAP = {
"millennia": "millennia",
"millennium": "millennia",
"centuries": "centuries",
"century": "centuries",
"decades": "decades",
"decade": "decades",
"years": "years",
"year": "years",
"months": "months",
"month": "months",
"mon": "months",
"mons": "months",
"weeks": "weeks",
"week": "weeks",
"days": "days",
"day": "days",
"hours": "hours",
"hour": "hours",
"minutes": "minutes",
"minute": "minutes",
"mins": "minutes",
"secs": "seconds",
"seconds": "seconds",
"second": "seconds",
"microseconds": "microseconds",
"microsecond": "microseconds",
}
ISO_LOOKUP = {
True: {
"Y": "years",
"M": "months",
"D": "days",
},
False: {
"H": "hours",
"M": "minutes",
"S": "seconds",
},
}
@classmethod
def from_str_iso_8601(cls, interval_str):
# P[n]Y[n]M[n]DT[n]H[n]M[n]S
kwargs = {}
lookup = cls.ISO_LOOKUP[True]
val = []
for c in interval_str[1:]:
if c == "T":
lookup = cls.ISO_LOOKUP[False]
elif c.isdigit() or c in ("-", "."):
val.append(c)
else:
val_str = "".join(val)
name = lookup[c]
v = float(val_str) if name == "seconds" else int(val_str)
kwargs[name] = v
val.clear()
return cls(**kwargs)
@classmethod
def from_str_postgres(cls, interval_str):
"""Parses both the postgres and postgres_verbose formats"""
t = {}
curr_val = None
for k in interval_str.split():
if ":" in k:
hours_str, minutes_str, seconds_str = k.split(":")
hours = int(hours_str)
if hours != 0:
t["hours"] = hours
minutes = int(minutes_str)
if minutes != 0:
t["minutes"] = minutes
seconds = float(seconds_str)
if seconds != 0:
t["seconds"] = seconds
elif k == "@":
continue
elif k == "ago":
for k, v in tuple(t.items()):
t[k] = -1 * v
else:
try:
curr_val = int(k)
except ValueError:
t[cls.UNIT_MAP[k]] = curr_val
return cls(**t)
@classmethod
def from_str_sql_standard(cls, interval_str):
"""YYYY-MM
or
DD HH:MM:SS.F
or
YYYY-MM DD HH:MM:SS.F
"""
month_part = None
day_parts = None
parts = interval_str.split()
if len(parts) == 1:
month_part = parts[0]
elif len(parts) == 2:
day_parts = parts
else:
month_part = parts[0]
day_parts = parts[1:]
kwargs = {}
if month_part is not None:
if month_part.startswith("-"):
sign = -1
p = month_part[1:]
else:
sign = 1
p = month_part
kwargs["years"], kwargs["months"] = [int(v) * sign for v in p.split("-")]
if day_parts is not None:
kwargs["days"] = int(day_parts[0])
time_part = day_parts[1]
if time_part.startswith("-"):
sign = -1
p = time_part[1:]
else:
sign = 1
p = time_part
hours, minutes, seconds = p.split(":")
kwargs["hours"] = int(hours) * sign
kwargs["minutes"] = int(minutes) * sign
kwargs["seconds"] = float(seconds) * sign
return cls(**kwargs)
@classmethod
def from_str(cls, interval_str):
if interval_str.startswith("P"):
return cls.from_str_iso_8601(interval_str)
elif interval_str.startswith("@"):
return cls.from_str_postgres(interval_str)
else:
parts = interval_str.split()
if (len(parts) > 1 and parts[1][0].isalpha()) or (
len(parts) == 1 and ":" in parts[0]
):
return cls.from_str_postgres(interval_str)
else:
return cls.from_str_sql_standard(interval_str)
def __init__(
self,
millennia=None,
centuries=None,
decades=None,
years=None,
months=None,
weeks=None,
days=None,
hours=None,
minutes=None,
seconds=None,
microseconds=None,
):
self.millennia = millennia
self.centuries = centuries
self.decades = decades
self.years = years
self.months = months
self.weeks = weeks
self.days = days
self.hours = hours
self.minutes = minutes
self.seconds = seconds
self.microseconds = microseconds
def __repr__(self):
return f""
def _value_dict(self):
return {
k: v
for k, v in (
("millennia", self.millennia),
("centuries", self.centuries),
("decades", self.decades),
("years", self.years),
("months", self.months),
("weeks", self.weeks),
("days", self.days),
("hours", self.hours),
("minutes", self.minutes),
("seconds", self.seconds),
("microseconds", self.microseconds),
)
if v is not None
}
def __str__(self):
return " ".join(f"{v} {n}" for n, v in self._value_dict().items())
def normalize(self):
months = 0
if self.months is not None:
months += self.months
if self.years is not None:
months += self.years * 12
days = 0
if self.days is not None:
days += self.days
if self.weeks is not None:
days += self.weeks * 7
seconds = 0
if self.hours is not None:
seconds += self.hours * 60 * 60
if self.minutes is not None:
seconds += self.minutes * 60
if self.seconds is not None:
seconds += self.seconds
if self.microseconds is not None:
seconds += self.microseconds / 1000000
return PGInterval(months=months, days=days, seconds=seconds)
def __eq__(self, other):
if isinstance(other, PGInterval):
s = self.normalize()
o = other.normalize()
return s.months == o.months and s.days == o.days and s.seconds == o.seconds
else:
return False
def to_timedelta(self):
pairs = self._value_dict()
overlap = pairs.keys() & {
"weeks",
"months",
"years",
"decades",
"centuries",
"millennia",
}
if len(overlap) > 0:
raise ValueError(
"Can't fit the interval fields {overlap} into a datetime.timedelta."
)
return Timedelta(**pairs)
class Range:
def __init__(
self,
lower=None,
upper=None,
bounds="[)",
is_empty=False,
):
self.lower = lower
self.upper = upper
self.bounds = bounds
self.is_empty = is_empty
def __eq__(self, other):
if isinstance(other, Range):
if self.is_empty or other.is_empty:
return self.is_empty == other.is_empty
else:
return (
self.lower == other.lower
and self.upper == other.upper
and self.bounds == other.bounds
)
return False
def __str__(self):
if self.is_empty:
return "empty"
else:
le, ue = ["" if v is None else v for v in (self.lower, self.upper)]
return f"{self.bounds[0]}{le},{ue}{self.bounds[1]}"
def __repr__(self):
return f""
Metadata-Version: 2.1
Name: pg8000
Version: 1.30.3
Summary: PostgreSQL interface library
License: BSD 3-Clause License
Project-URL: Homepage, https://github.com/tlocke/pg8000
Keywords: postgresql,dbapi
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: Implementation
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Classifier: Operating System :: OS Independent
Classifier: Topic :: Database :: Front-Ends
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/x-rst
License-File: LICENSE
Requires-Dist: scramp>=1.4.4
Requires-Dist: python-dateutil>=2.8.2
======
pg8000
======
.. |ssl.SSLContext| replace:: ``ssl.SSLContext``
.. _ssl.SSLContext: https://docs.python.org/3/library/ssl.html#ssl.SSLContext
.. |ssl.create_default_context()| replace:: ``ssl.create_default_context()``
.. _ssl.create_default_context(): https://docs.python.org/3/library/ssl.html#ssl.create_default_context
pg8000 is a pure-`Python <https://www.python.org/>`_
`PostgreSQL <https://www.postgresql.org/>`_ driver that complies with
`DB-API 2.0 <https://peps.python.org/pep-0249/>`_. It is tested on Python
versions 3.8+, on CPython and PyPy, and PostgreSQL versions 11+. pg8000's name comes
from the belief that it is probably about the 8000th PostgreSQL interface for Python.
pg8000 is distributed under the BSD 3-clause license.
All bug reports, feature requests and contributions are welcome at
`http://github.com/tlocke/pg8000/ <http://github.com/tlocke/pg8000/>`_.
.. image:: https://github.com/tlocke/pg8000/workflows/pg8000/badge.svg
:alt: Build Status
.. contents:: Table of Contents
:depth: 2
:local:
Installation
------------
To install pg8000 using ``pip`` type::

    pip install pg8000
Native API Interactive Examples
-------------------------------
pg8000 comes with two APIs, the native pg8000 API and the DB-API 2.0 standard
API. These are the examples for the native API, and the DB-API 2.0 examples
follow in the next section.
Basic Example
`````````````
Import pg8000, connect to the database, create a table, add some rows and then
query the table:
>>> import pg8000.native
>>>
>>> # Connect to the database with user name postgres
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> # Create a temporary table
>>>
>>> con.run("CREATE TEMPORARY TABLE book (id SERIAL, title TEXT)")
>>>
>>> # Populate the table
>>>
>>> for title in ("Ender's Game", "The Magus"):
... con.run("INSERT INTO book (title) VALUES (:title)", title=title)
>>>
>>> # Print all the rows in the table
>>>
>>> for row in con.run("SELECT * FROM book"):
... print(row)
[1, "Ender's Game"]
[2, 'The Magus']
>>>
>>> con.close()
Transactions
````````````
Here's how to run groups of SQL statements in a
`transaction <https://www.postgresql.org/docs/current/tutorial-transactions.html>`_:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("START TRANSACTION")
>>>
>>> # Create a temporary table
>>> con.run("CREATE TEMPORARY TABLE book (id SERIAL, title TEXT)")
>>>
>>> for title in ("Ender's Game", "The Magus", "Phineas Finn"):
... con.run("INSERT INTO book (title) VALUES (:title)", title=title)
>>> con.run("COMMIT")
>>> for row in con.run("SELECT * FROM book"):
... print(row)
[1, "Ender's Game"]
[2, 'The Magus']
[3, 'Phineas Finn']
>>>
>>> con.close()
rolling back a transaction:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> # Create a temporary table
>>> con.run("CREATE TEMPORARY TABLE book (id SERIAL, title TEXT)")
>>>
>>> for title in ("Ender's Game", "The Magus", "Phineas Finn"):
... con.run("INSERT INTO book (title) VALUES (:title)", title=title)
>>>
>>> con.run("START TRANSACTION")
>>> con.run("DELETE FROM book WHERE title = :title", title="Phineas Finn")
>>> con.run("ROLLBACK")
>>> for row in con.run("SELECT * FROM book"):
... print(row)
[1, "Ender's Game"]
[2, 'The Magus']
[3, 'Phineas Finn']
>>>
>>> con.close()
NB. There is a longstanding bug in the PostgreSQL server whereby if a ``COMMIT`` is
issued against a failed transaction, the transaction is silently rolled back, rather
than an error being returned. pg8000 attempts to detect when this has happened and
raise an ``InterfaceError``.
Query Using Functions
`````````````````````
Another query, using some PostgreSQL functions:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT TO_CHAR(TIMESTAMP '2021-10-10', 'YYYY BC')")
[['2021 AD']]
>>>
>>> con.close()
Interval Type
`````````````
A query that returns the PostgreSQL interval type:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> import datetime
>>>
>>> ts = datetime.date(1980, 4, 27)
>>> con.run("SELECT timestamp '2013-12-01 16:06' - :ts", ts=ts)
[[datetime.timedelta(days=12271, seconds=57960)]]
>>>
>>> con.close()
Point Type
``````````
A round-trip with a
`PostgreSQL point <https://www.postgresql.org/docs/current/datatype-geometric.html>`_
type:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT CAST(:pt as point)", pt=(2.3,1))
[[(2.3, 1.0)]]
>>>
>>> con.close()
Client Encoding
```````````````
When communicating with the server, pg8000 uses the character set that the server asks
it to use (the client encoding). By default the client encoding is the database's
character set (chosen when the database is created), but the client encoding can be
changed in a number of ways (eg. setting ``CLIENT_ENCODING`` in ``postgresql.conf``).
Another way of changing the client encoding is by using an SQL command. For example:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SET CLIENT_ENCODING TO 'UTF8'")
>>> con.run("SHOW CLIENT_ENCODING")
[['UTF8']]
>>>
>>> con.close()
JSON
````
`JSON <https://www.postgresql.org/docs/current/datatype-json.html>`_ always comes back
from the server de-serialized. If the JSON you want to send is a ``dict`` then you can
just do:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> val = {'name': 'Apollo 11 Cave', 'zebra': True, 'age': 26.003}
>>> con.run("SELECT CAST(:apollo as jsonb)", apollo=val)
[[{'age': 26.003, 'name': 'Apollo 11 Cave', 'zebra': True}]]
>>>
>>> con.close()
JSON can always be sent in serialized form to the server:
>>> import json
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>>
>>> val = ['Apollo 11 Cave', True, 26.003]
>>> con.run("SELECT CAST(:apollo as jsonb)", apollo=json.dumps(val))
[[['Apollo 11 Cave', True, 26.003]]]
>>>
>>> con.close()
JSON queries can have parameters:
>>> import pg8000.native
>>>
>>> with pg8000.native.Connection("postgres", password="cpsnow") as con:
... con.run(""" SELECT CAST('{"a":1, "b":2}' AS jsonb) @> :v """, v={"b": 2})
[[True]]
Retrieve Column Metadata From Results
`````````````````````````````````````
Find the column metadata returned from a query:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("create temporary table quark (id serial, name text)")
>>> for name in ('Up', 'Down'):
... con.run("INSERT INTO quark (name) VALUES (:name)", name=name)
>>> # Now execute the query
>>>
>>> con.run("SELECT * FROM quark")
[[1, 'Up'], [2, 'Down']]
>>>
>>> # and retrieve the metadata
>>>
>>> con.columns
[{'table_oid': ..., 'column_attrnum': 1, 'type_oid': 23, 'type_size': 4, 'type_modifier': -1, 'format': 0, 'name': 'id'}, {'table_oid': ..., 'column_attrnum': 2, 'type_oid': 25, 'type_size': -1, 'type_modifier': -1, 'format': 0, 'name': 'name'}]
>>>
>>> # Show just the column names
>>>
>>> [c['name'] for c in con.columns]
['id', 'name']
>>>
>>> con.close()
Notices And Notifications
`````````````````````````
PostgreSQL notices are stored in a deque called ``Connection.notices`` and added using
the ``append()`` method. Similarly there are ``Connection.notifications`` for
`notifications <https://www.postgresql.org/docs/current/sql-notify.html>`_ and
``Connection.parameter_statuses`` for changes to the server configuration. Here's an
example:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("LISTEN aliens_landed")
>>> con.run("NOTIFY aliens_landed")
>>> # A notification is a tuple containing (backend_pid, channel, payload)
>>>
>>> con.notifications[0]
(..., 'aliens_landed', '')
>>>
>>> con.close()
LIMIT ALL
`````````
You might think that the following would work, but in fact it fails:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT 'silo 1' LIMIT :lim", lim='ALL')
Traceback (most recent call last):
pg8000.exceptions.DatabaseError: ...
>>>
>>> con.close()
Instead the `docs say <https://www.postgresql.org/docs/current/sql-select.html>`_ that
you can send ``null`` as an alternative to ``ALL``, which does work:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT 'silo 1' LIMIT :lim", lim=None)
[['silo 1']]
>>>
>>> con.close()
IN and NOT IN
`````````````
You might think that the following would work, but in fact the server doesn't like it:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT 'silo 1' WHERE 'a' IN :v", v=['a', 'b'])
Traceback (most recent call last):
pg8000.exceptions.DatabaseError: ...
>>>
>>> con.close()
instead you can write it using the
`unnest <https://www.postgresql.org/docs/current/functions-array.html>`_ function:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run(
... "SELECT 'silo 1' WHERE 'a' IN (SELECT unnest(CAST(:v as varchar[])))",
... v=['a', 'b'])
[['silo 1']]
>>> con.close()
and you can do the same for ``NOT IN``.
Many SQL Statements Can't Be Parameterized
``````````````````````````````````````````
In PostgreSQL parameters can only be used for data values, not identifiers.
Sometimes this might not work as expected, for example the following fails:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> channel = 'top_secret'
>>>
>>> con.run("LISTEN :channel", channel=channel)
Traceback (most recent call last):
pg8000.exceptions.DatabaseError: ...
>>>
>>> con.close()
It fails because the PostgreSQL server doesn't allow this statement to have any
parameters. There are many SQL statements that one might think would have parameters,
but don't. For these cases the SQL has to be created manually, being careful to use the
``identifier()`` and ``literal()`` functions to escape the values to avoid
`SQL injection attacks <https://en.wikipedia.org/wiki/SQL_injection>`_:
>>> from pg8000.native import Connection, identifier, literal
>>>
>>> con = Connection("postgres", password="cpsnow")
>>>
>>> channel = 'top_secret'
>>> payload = 'Aliens Landed!'
>>> con.run(f"LISTEN {identifier(channel)}")
>>> con.run(f"NOTIFY {identifier(channel)}, {literal(payload)}")
>>>
>>> con.notifications[0]
(..., 'top_secret', 'Aliens Landed!')
>>>
>>> con.close()
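For illustration, here's a minimal sketch of the kind of quoting that ``identifier()``
and ``literal()`` perform. The ``quote_identifier`` and ``quote_literal`` helpers below
are hypothetical stand-ins, not pg8000's actual implementation (which also rejects
values such as embedded NUL characters); they just show the core SQL rule of doubling
the embedded quote character and wrapping the result:

```python
# Hypothetical stand-ins for pg8000's identifier() and literal(), showing
# the core SQL quoting rules only.

def quote_identifier(name):
    # Identifiers use double-quotes; embedded double-quotes are doubled.
    return '"' + name.replace('"', '""') + '"'

def quote_literal(value):
    # String literals use single-quotes; embedded single-quotes are doubled.
    return "'" + value.replace("'", "''") + "'"

print(quote_identifier("My Column"))   # "My Column"
print(quote_literal("it's"))           # 'it''s'
```

Doubling the quote character is what makes the escaping safe: a value can never
terminate the quoted region early, which is exactly the hole SQL injection exploits.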
COPY FROM And TO A Stream
`````````````````````````
The SQL `COPY <https://www.postgresql.org/docs/current/sql-copy.html>`_ statement can
be used to copy from and to a file or file-like object. Here's an example using the
CSV format:
>>> import pg8000.native
>>> from io import StringIO
>>> import csv
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> # Create a CSV file in memory
>>>
>>> stream_in = StringIO()
>>> csv_writer = csv.writer(stream_in)
>>> csv_writer.writerow([1, "electron"])
12
>>> csv_writer.writerow([2, "muon"])
8
>>> csv_writer.writerow([3, "tau"])
7
>>> stream_in.seek(0)
0
>>>
>>> # Create a table and then copy the CSV into it
>>>
>>> con.run("CREATE TEMPORARY TABLE lepton (id SERIAL, name TEXT)")
>>> con.run("COPY lepton FROM STDIN WITH (FORMAT CSV)", stream=stream_in)
>>>
>>> # COPY from a table to a stream
>>>
>>> stream_out = StringIO()
>>> con.run("COPY lepton TO STDOUT WITH (FORMAT CSV)", stream=stream_out)
>>> stream_out.seek(0)
0
>>> for row in csv.reader(stream_out):
... print(row)
['1', 'electron']
['2', 'muon']
['3', 'tau']
>>>
>>> con.close()
It's also possible to COPY FROM an iterable, which is useful if you're creating rows
programmatically:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> # Generator function for creating rows
>>> def row_gen():
... for i, name in ((1, "electron"), (2, "muon"), (3, "tau")):
... yield f"{i},{name}\n"
>>>
>>> # Create a table and then copy the CSV into it
>>>
>>> con.run("CREATE TEMPORARY TABLE lepton (id SERIAL, name TEXT)")
>>> con.run("COPY lepton FROM STDIN WITH (FORMAT CSV)", stream=row_gen())
>>>
>>> # COPY from a table to a stream
>>>
>>> stream_out = StringIO()
>>> con.run("COPY lepton TO STDOUT WITH (FORMAT CSV)", stream=stream_out)
>>> stream_out.seek(0)
0
>>> for row in csv.reader(stream_out):
... print(row)
['1', 'electron']
['2', 'muon']
['3', 'tau']
>>>
>>> con.close()
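If the generated values might themselves contain commas or quotes, it's safer to let
the ``csv`` module build each line than to format it with an f-string. A sketch, where
the ``csv_rows`` generator is an assumption about one convenient shape rather than part
of pg8000's API:

```python
import csv
import io

def csv_rows(records):
    # Yield one correctly-escaped CSV line per record, suitable for use as
    # the stream parameter of a COPY ... FROM STDIN statement.
    for record in records:
        buf = io.StringIO()
        csv.writer(buf).writerow(record)
        yield buf.getvalue()

rows = list(csv_rows([(1, "electron"), (2, "tau, heavy")]))
print(rows[1])  # 2,"tau, heavy" (the csv module adds the quoting and \r\n)
```

The second row shows why this matters: the embedded comma is quoted automatically,
where a plain ``f"{i},{name}\n"`` would have produced a malformed line.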
Execute Multiple SQL Statements
```````````````````````````````
If you want to execute a series of SQL statements (eg. an ``.sql`` file), you can run
them as expected:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> statements = "SELECT 5; SELECT 'Erich Fromm';"
>>>
>>> con.run(statements)
[[5], ['Erich Fromm']]
>>>
>>> con.close()
The only caveat is that when executing multiple statements you can't have any
parameters.
Quoted Identifiers in SQL
`````````````````````````
Say you had a column called ``My Column``. Since it's case sensitive and contains a
space, you'd have to `surround it by double quotes
<https://www.postgresql.org/docs/current/sql-syntax-lexical.html>`_.
But you can't do:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("select 'hello' as "My Column"")
Traceback (most recent call last):
SyntaxError: invalid syntax...
>>>
>>> con.close()
since Python uses double quotes to delimit string literals, so one solution is
to use Python's `triple quotes
<https://docs.python.org/3/tutorial/introduction.html#strings>`_ to delimit the string
instead:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run('''SELECT 'hello' AS "My Column"''')
[['hello']]
>>>
>>> con.close()
another solution, that's especially useful if the identifier comes from an untrusted
source, is to use the ``identifier()`` function, which correctly quotes and escapes the
identifier as needed:
>>> from pg8000.native import Connection, identifier
>>>
>>> con = Connection("postgres", password="cpsnow")
>>>
>>> sql = f"SELECT 'hello' as {identifier('My Column')}"
>>> print(sql)
SELECT 'hello' as "My Column"
>>>
>>> con.run(sql)
[['hello']]
>>>
>>> con.close()
this approach guards against `SQL injection attacks
<https://en.wikipedia.org/wiki/SQL_injection>`_. One thing to note if you're using
explicit schemas (eg. ``pg_catalog.pg_language``) is that the schema name and table name
are both separate identifiers. So to escape them you'd do:
>>> from pg8000.native import Connection, identifier
>>>
>>> con = Connection("postgres", password="cpsnow")
>>>
>>> query = (
... f"SELECT lanname FROM {identifier('pg_catalog')}.{identifier('pg_language')} "
... f"WHERE lanname = 'sql'"
... )
>>> print(query)
SELECT lanname FROM pg_catalog.pg_language WHERE lanname = 'sql'
>>>
>>> con.run(query)
[['sql']]
>>>
>>> con.close()
Custom adapter from a Python type to a PostgreSQL type
``````````````````````````````````````````````````````
pg8000 has a mapping from Python types to PostgreSQL types for when it needs to send
SQL parameters to the server. The default mapping that comes with pg8000 is designed to
work well in most cases, but you might want to add or replace the default mapping.
A Python ``datetime.timedelta`` object is sent to the server as a PostgreSQL
``interval`` type, which has the ``oid`` 1186. But let's say we wanted to create our
own Python class to be sent as an ``interval`` type. Then we'd have to register an
adapter:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> class MyInterval(str):
... pass
>>>
>>> def my_interval_out(my_interval):
... return my_interval # Must return a str
>>>
>>> con.register_out_adapter(MyInterval, my_interval_out)
>>> con.run("SELECT CAST(:interval as interval)", interval=MyInterval("2 hours"))
[[datetime.timedelta(seconds=7200)]]
>>>
>>> con.close()
Note that it still came back as a ``datetime.timedelta`` object because we only changed
the mapping from Python to PostgreSQL. See below for an example of how to change the
mapping from PostgreSQL to Python.
Custom adapter from a PostgreSQL type to a Python type
``````````````````````````````````````````````````````
pg8000 has a mapping from PostgreSQL types to Python types for when it receives SQL
results from the server. The default mapping that comes with pg8000 is designed to work
well in most cases, but you might want to add or replace the default mapping.
If pg8000 receives PostgreSQL ``interval`` type, which has the ``oid`` 1186, it converts
it into a Python ``datetime.timedelta`` object. But let's say we wanted to create our
own Python class to be used instead of ``datetime.timedelta``. Then we'd have to
register an adapter:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> class MyInterval(str):
... pass
>>>
>>> def my_interval_in(my_interval_str): # The parameter is of type str
... return MyInterval(my_interval_str)
>>>
>>> con.register_in_adapter(1186, my_interval_in)
>>> con.run("SELECT '2 years'")
[['2 years']]
>>>
>>> con.close()
Note that registering the 'in' adapter only affects the mapping from the PostgreSQL
type to the Python type. See above for an example of how to change the mapping from
Python to PostgreSQL.
Could Not Determine Data Type Of Parameter
``````````````````````````````````````````
Sometimes you'll get the 'could not determine data type of parameter' error message from
the server:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT :v IS NULL", v=None)
Traceback (most recent call last):
pg8000.exceptions.DatabaseError: {'S': 'ERROR', 'V': 'ERROR', 'C': '42P18', 'M': 'could not determine data type of parameter $1', 'F': 'postgres.c', 'L': '...', 'R': 'exec_parse_message'}
>>>
>>> con.close()
One way of solving it is to put a ``CAST`` in the SQL:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT cast(:v as TIMESTAMP) IS NULL", v=None)
[[True]]
>>>
>>> con.close()
Another way is to override the type that pg8000 sends along with each parameter:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> con.run("SELECT :v IS NULL", v=None, types={'v': pg8000.native.TIMESTAMP})
[[True]]
>>>
>>> con.close()
Prepared Statements
```````````````````
`Prepared statements <https://www.postgresql.org/docs/current/sql-prepare.html>`_
can be useful in improving performance when you have a statement that's executed
repeatedly. Here's an example:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection("postgres", password="cpsnow")
>>>
>>> # Create the prepared statement
>>> ps = con.prepare("SELECT cast(:v as varchar)")
>>>
>>> # Execute the statement repeatedly
>>> ps.run(v="speedy")
[['speedy']]
>>> ps.run(v="rapid")
[['rapid']]
>>> ps.run(v="swift")
[['swift']]
>>>
>>> # Close the prepared statement, releasing resources on the server
>>> ps.close()
>>>
>>> con.close()
Use Environment Variables As Connection Defaults
````````````````````````````````````````````````
You might want to use the current user as the database username for example:
>>> import pg8000.native
>>> import getpass
>>>
>>> # Connect to the database with current user name
>>> username = getpass.getuser()
>>> connection = pg8000.native.Connection(username, password="cpsnow")
>>>
>>> connection.run("SELECT 'pilau'")
[['pilau']]
>>>
>>> connection.close()
or perhaps you may want to use some of the same `environment variables that libpq uses
<https://www.postgresql.org/docs/current/libpq-envars.html>`_:
>>> import pg8000.native
>>> from os import environ
>>>
>>> username = environ.get('PGUSER', 'postgres')
>>> password = environ.get('PGPASSWORD', 'cpsnow')
>>> host = environ.get('PGHOST', 'localhost')
>>> port = environ.get('PGPORT', '5432')
>>> database = environ.get('PGDATABASE')
>>>
>>> connection = pg8000.native.Connection(
... username, password=password, host=host, port=port, database=database)
>>>
>>> connection.run("SELECT 'Mr Cairo'")
[['Mr Cairo']]
>>>
>>> connection.close()
It might be asked, why doesn't pg8000 have this behaviour built in? The thinking
follows the second aphorism of `The Zen of Python
<https://peps.python.org/pep-0020/>`_:
Explicit is better than implicit.
So we've taken the approach of only being able to set connection parameters using the
``pg8000.native.Connection()`` constructor.
Connect To PostgreSQL Over SSL
``````````````````````````````
To connect to the server using SSL defaults do::
import pg8000.native
connection = pg8000.native.Connection('postgres', password="cpsnow", ssl_context=True)
connection.run("SELECT 'The game is afoot!'")
To connect over SSL with custom settings, set the ``ssl_context`` parameter to an
|ssl.SSLContext|_ object:
::
import pg8000.native
import ssl
ssl_context = ssl.create_default_context()
ssl_context.verify_mode = ssl.CERT_REQUIRED
ssl_context.load_verify_locations('root.pem')
connection = pg8000.native.Connection(
'postgres', password="cpsnow", ssl_context=ssl_context)
It may be that your PostgreSQL server is behind an SSL proxy server in which case you
can set a pg8000-specific attribute ``ssl.SSLContext.request_ssl = False`` which tells
pg8000 to connect using an SSL socket, but not to request SSL from the PostgreSQL
server:
::
import pg8000.native
import ssl
ssl_context = ssl.create_default_context()
ssl_context.request_ssl = False
connection = pg8000.native.Connection(
'postgres', password="cpsnow", ssl_context=ssl_context)
Server-Side Cursors
```````````````````
You can use the SQL commands
`DECLARE <https://www.postgresql.org/docs/current/sql-declare.html>`_,
`FETCH <https://www.postgresql.org/docs/current/sql-fetch.html>`_,
`MOVE <https://www.postgresql.org/docs/current/sql-move.html>`_ and
`CLOSE <https://www.postgresql.org/docs/current/sql-close.html>`_ to manipulate
server-side cursors. For example:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection('postgres', password="cpsnow")
>>> con.run("START TRANSACTION")
>>> con.run("DECLARE c SCROLL CURSOR FOR SELECT * FROM generate_series(1, 100)")
>>> con.run("FETCH FORWARD 5 FROM c")
[[1], [2], [3], [4], [5]]
>>> con.run("MOVE FORWARD 50 FROM c")
>>> con.run("FETCH BACKWARD 10 FROM c")
[[54], [53], [52], [51], [50], [49], [48], [47], [46], [45]]
>>> con.run("CLOSE c")
>>> con.run("ROLLBACK")
>>>
>>> con.close()
BLOBs (Binary Large Objects)
````````````````````````````
There's a set of `SQL functions
<https://www.postgresql.org/docs/current/lo-funcs.html>`_ for manipulating BLOBs.
Here's an example:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection('postgres', password="cpsnow")
>>>
>>> # Create a BLOB and get its oid
>>> data = b'hello'
>>> res = con.run("SELECT lo_from_bytea(0, :data)", data=data)
>>> oid = res[0][0]
>>>
>>> # Create a table and store the oid of the BLOB
>>> con.run("CREATE TEMPORARY TABLE image (raster oid)")
>>>
>>> con.run("INSERT INTO image (raster) VALUES (:oid)", oid=oid)
>>> # Retrieve the data using the oid
>>> con.run("SELECT lo_get(:oid)", oid=oid)
[[b'hello']]
>>>
>>> # Add some data to the end of the BLOB
>>> more_data = b' all'
>>> offset = len(data)
>>> con.run(
... "SELECT lo_put(:oid, :offset, :data)",
... oid=oid, offset=offset, data=more_data)
[['']]
>>> con.run("SELECT lo_get(:oid)", oid=oid)
[[b'hello all']]
>>>
>>> # Download a part of the data
>>> con.run("SELECT lo_get(:oid, 6, 3)", oid=oid)
[[b'all']]
>>>
>>> con.close()
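For payloads too big to send comfortably in one statement, ``lo_put`` can be called
once per chunk, since it takes a byte offset. Below is a sketch of the offset
arithmetic only; the ``con.run`` call is left commented out because this fragment
doesn't open a connection, and ``oid`` would come from ``lo_from_bytea`` as above:

```python
# Split a payload into fixed-size chunks and record the offset at which
# each chunk would be written with lo_put(oid, offset, chunk).
data = b"x" * 70000
chunk_size = 32768

chunks = []
for offset in range(0, len(data), chunk_size):
    chunk = data[offset:offset + chunk_size]
    chunks.append((offset, len(chunk)))
    # con.run(
    #     "SELECT lo_put(:oid, :offset, :data)",
    #     oid=oid, offset=offset, data=chunk)

print(chunks)  # [(0, 32768), (32768, 32768), (65536, 4464)]
```

Each offset is just the running byte count, so the final chunk is simply whatever
remains after the last full ``chunk_size`` slice.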
Replication Protocol
````````````````````
The PostgreSQL `Replication Protocol
<https://www.postgresql.org/docs/current/protocol-replication.html>`_ is supported
using the ``replication`` keyword when creating a connection:
>>> import pg8000.native
>>>
>>> con = pg8000.native.Connection(
... 'postgres', password="cpsnow", replication="database")
>>>
>>> con.run("IDENTIFY_SYSTEM")
[['...', 1, '0/...', 'postgres']]
>>>
>>> con.close()
DB-API 2 Interactive Examples
-----------------------------
These examples stick to the DB-API 2.0 standard.
Basic Example
`````````````
Import pg8000, connect to the database, create a table, add some rows and then query the
table:
>>> import pg8000.dbapi
>>>
>>> conn = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = conn.cursor()
>>> cursor.execute("CREATE TEMPORARY TABLE book (id SERIAL, title TEXT)")
>>> cursor.execute(
... "INSERT INTO book (title) VALUES (%s), (%s) RETURNING id, title",
... ("Ender's Game", "Speaker for the Dead"))
>>> results = cursor.fetchall()
>>> for row in results:
... id, title = row
... print("id = %s, title = %s" % (id, title))
id = 1, title = Ender's Game
id = 2, title = Speaker for the Dead
>>> conn.commit()
>>>
>>> conn.close()
Query Using Functions
`````````````````````
Another query, using some PostgreSQL functions:
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = con.cursor()
>>>
>>> cursor.execute("SELECT TO_CHAR(TIMESTAMP '2021-10-10', 'YYYY BC')")
>>> cursor.fetchone()
['2021 AD']
>>>
>>> con.close()
Interval Type
`````````````
A query that returns the PostgreSQL interval type:
>>> import datetime
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = con.cursor()
>>>
>>> cursor.execute("SELECT timestamp '2013-12-01 16:06' - %s",
... (datetime.date(1980, 4, 27),))
>>> cursor.fetchone()
[datetime.timedelta(days=12271, seconds=57960)]
>>>
>>> con.close()
Point Type
``````````
A round-trip with a `PostgreSQL point
<https://www.postgresql.org/docs/current/datatype-geometric.html>`_ type:
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = con.cursor()
>>>
>>> cursor.execute("SELECT cast(%s as point)", ((2.3,1),))
>>> cursor.fetchone()
[(2.3, 1.0)]
>>>
>>> con.close()
Numeric Parameter Style
```````````````````````
pg8000 supports all the DB-API parameter styles. Here's an example of using the
'numeric' parameter style:
>>> import pg8000.dbapi
>>>
>>> pg8000.dbapi.paramstyle = "numeric"
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = con.cursor()
>>>
>>> cursor.execute("SELECT array_prepend(:1, CAST(:2 AS int[]))", (500, [1, 2, 3, 4],))
>>> cursor.fetchone()
[[500, 1, 2, 3, 4]]
>>> pg8000.dbapi.paramstyle = "format"
>>>
>>> con.close()
Autocommit
``````````
Following the DB-API specification, autocommit is off by default. It can be turned on by
using the autocommit property of the connection:
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> con.autocommit = True
>>>
>>> cur = con.cursor()
>>> cur.execute("vacuum")
>>> con.autocommit = False
>>> cur.close()
>>>
>>> con.close()
Client Encoding
```````````````
When communicating with the server, pg8000 uses the character set that the server asks
it to use (the client encoding). By default the client encoding is the database's
character set (chosen when the database is created), but the client encoding can be
changed in a number of ways (eg. setting ``CLIENT_ENCODING`` in ``postgresql.conf``).
Another way of changing the client encoding is by using an SQL command. For example:
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cur = con.cursor()
>>> cur.execute("SET CLIENT_ENCODING TO 'UTF8'")
>>> cur.execute("SHOW CLIENT_ENCODING")
>>> cur.fetchone()
['UTF8']
>>> cur.close()
>>>
>>> con.close()
JSON
````
JSON is sent to the server serialized, and returned de-serialized. Here's an example:
>>> import json
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cur = con.cursor()
>>> val = ['Apollo 11 Cave', True, 26.003]
>>> cur.execute("SELECT cast(%s as json)", (json.dumps(val),))
>>> cur.fetchone()
[['Apollo 11 Cave', True, 26.003]]
>>> cur.close()
>>>
>>> con.close()
JSON queries can have parameters:
>>> import pg8000.dbapi
>>>
>>> with pg8000.dbapi.connect("postgres", password="cpsnow") as con:
... cur = con.cursor()
... cur.execute(""" SELECT CAST('{"a":1, "b":2}' AS jsonb) @> %s """, ({"b": 2},))
... for row in cur.fetchall():
... print(row)
[True]
Retrieve Column Names From Results
``````````````````````````````````
Use the column names retrieved from a query:
>>> import pg8000
>>> conn = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> c = conn.cursor()
>>> c.execute("create temporary table quark (id serial, name text)")
>>> c.executemany("INSERT INTO quark (name) VALUES (%s)", (("Up",), ("Down",)))
>>> #
>>> # Now retrieve the results
>>> #
>>> c.execute("select * from quark")
>>> rows = c.fetchall()
>>> keys = [k[0] for k in c.description]
>>> results = [dict(zip(keys, row)) for row in rows]
>>> assert results == [{'id': 1, 'name': 'Up'}, {'id': 2, 'name': 'Down'}]
>>>
>>> conn.close()
COPY from and to a file
```````````````````````
The SQL `COPY `__ statement can
be used to copy from and to a file or file-like object:
>>> from io import StringIO
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cur = con.cursor()
>>> #
>>> # COPY from a stream to a table
>>> #
>>> stream_in = StringIO('1\telectron\n2\tmuon\n3\ttau\n')
>>> cur = con.cursor()
>>> cur.execute("create temporary table lepton (id serial, name text)")
>>> cur.execute("COPY lepton FROM stdin", stream=stream_in)
>>> #
>>> # Now COPY from a table to a stream
>>> #
>>> stream_out = StringIO()
>>> cur.execute("copy lepton to stdout", stream=stream_out)
>>> stream_out.getvalue()
'1\telectron\n2\tmuon\n3\ttau\n'
>>>
>>> con.close()
Server-Side Cursors
```````````````````
You can use the SQL commands `DECLARE
`_,
`FETCH `_,
`MOVE `_ and
`CLOSE `_ to manipulate
server-side cursors. For example:
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cur = con.cursor()
>>> cur.execute("START TRANSACTION")
>>> cur.execute(
... "DECLARE c SCROLL CURSOR FOR SELECT * FROM generate_series(1, 100)")
>>> cur.execute("FETCH FORWARD 5 FROM c")
>>> cur.fetchall()
([1], [2], [3], [4], [5])
>>> cur.execute("MOVE FORWARD 50 FROM c")
>>> cur.execute("FETCH BACKWARD 10 FROM c")
>>> cur.fetchall()
([54], [53], [52], [51], [50], [49], [48], [47], [46], [45])
>>> cur.execute("CLOSE c")
>>> cur.execute("ROLLBACK")
>>>
>>> con.close()
BLOBs (Binary Large Objects)
````````````````````````````
There's a set of `SQL functions
`_ for manipulating BLOBs.
Here's an example:
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cur = con.cursor()
>>>
>>> # Create a BLOB and get its oid
>>> data = b'hello'
>>> cur = con.cursor()
>>> cur.execute("SELECT lo_from_bytea(0, %s)", [data])
>>> oid = cur.fetchone()[0]
>>>
>>> # Create a table and store the oid of the BLOB
>>> cur.execute("CREATE TEMPORARY TABLE image (raster oid)")
>>> cur.execute("INSERT INTO image (raster) VALUES (%s)", [oid])
>>>
>>> # Retrieve the data using the oid
>>> cur.execute("SELECT lo_get(%s)", [oid])
>>> cur.fetchall()
([b'hello'],)
>>>
>>> # Add some data to the end of the BLOB
>>> more_data = b' all'
>>> offset = len(data)
>>> cur.execute("SELECT lo_put(%s, %s, %s)", [oid, offset, more_data])
>>> cur.execute("SELECT lo_get(%s)", [oid])
>>> cur.fetchall()
([b'hello all'],)
>>>
>>> # Download a part of the data
>>> cur.execute("SELECT lo_get(%s, 6, 3)", [oid])
>>> cur.fetchall()
([b'all'],)
>>>
>>> con.close()
Type Mapping
------------
The following table shows the default mapping between Python types and PostgreSQL types,
and vice versa.
If pg8000 doesn't recognize a type that it receives from PostgreSQL, it will return it
as a ``str`` type. This is how pg8000 handles PostgreSQL ``enum`` and XML types. It's
possible to change the default mapping using adapters (see the examples).
.. table:: Python to PostgreSQL Type Mapping
+-----------------------+-----------------+-----------------------------------------+
| Python Type | PostgreSQL Type | Notes |
+=======================+=================+=========================================+
| bool | bool | |
+-----------------------+-----------------+-----------------------------------------+
| int | int4 | |
+-----------------------+-----------------+-----------------------------------------+
| str | text | |
+-----------------------+-----------------+-----------------------------------------+
| float | float8 | |
+-----------------------+-----------------+-----------------------------------------+
| decimal.Decimal | numeric | |
+-----------------------+-----------------+-----------------------------------------+
| bytes | bytea | |
+-----------------------+-----------------+-----------------------------------------+
| datetime.datetime | timestamp | +/-infinity PostgreSQL values are |
| (without tzinfo) | without | represented as Python ``str`` values. |
| | timezone | If a ``timestamp`` is too big for |
| | | ``datetime.datetime`` then a ``str`` is |
| | | used. |
+-----------------------+-----------------+-----------------------------------------+
| datetime.datetime | timestamp with | +/-infinity PostgreSQL values are |
| (with tzinfo) | timezone | represented as Python ``str`` values. |
| | | If a ``timestamptz`` is too big for |
| | | ``datetime.datetime`` then a ``str`` is |
| | | used. |
+-----------------------+-----------------+-----------------------------------------+
| datetime.date | date | +/-infinity PostgreSQL values are |
| | | represented as Python ``str`` values. |
| | | If a ``date`` is too big for a |
| | | ``datetime.date`` then a ``str`` is |
| | | used. |
+-----------------------+-----------------+-----------------------------------------+
| datetime.time | time without | |
| | time zone | |
+-----------------------+-----------------+-----------------------------------------+
| datetime.timedelta | interval | If an ``interval`` is too big for |
| | | ``datetime.timedelta`` then a |
| | | ``PGInterval`` is used. |
+-----------------------+-----------------+-----------------------------------------+
| None | NULL | |
+-----------------------+-----------------+-----------------------------------------+
| uuid.UUID | uuid | |
+-----------------------+-----------------+-----------------------------------------+
| ipaddress.IPv4Address | inet | |
+-----------------------+-----------------+-----------------------------------------+
| ipaddress.IPv6Address | inet | |
+-----------------------+-----------------+-----------------------------------------+
| ipaddress.IPv4Network | inet | |
+-----------------------+-----------------+-----------------------------------------+
| ipaddress.IPv6Network | inet | |
+-----------------------+-----------------+-----------------------------------------+
| int | xid | |
+-----------------------+-----------------+-----------------------------------------+
| list of int | INT4[] | |
+-----------------------+-----------------+-----------------------------------------+
| list of float | FLOAT8[] | |
+-----------------------+-----------------+-----------------------------------------+
| list of bool | BOOL[] | |
+-----------------------+-----------------+-----------------------------------------+
| list of str | TEXT[] | |
+-----------------------+-----------------+-----------------------------------------+
| int | int2vector | Only from PostgreSQL to Python |
+-----------------------+-----------------+-----------------------------------------+
| JSON                  | json, jsonb     | JSON is sent to the server as a         |
|                       |                 | serialized string, and results are      |
|                       |                 | returned de-serialized.                 |
+-----------------------+-----------------+-----------------------------------------+
| pg8000.Range | \*range | PostgreSQL multirange types are |
| | | represented in Python as a list of |
| | | range types. |
+-----------------------+-----------------+-----------------------------------------+
| tuple | composite type | Only from Python to PostgreSQL |
+-----------------------+-----------------+-----------------------------------------+
Theory Of Operation
-------------------
A concept is tolerated inside the microkernel only if moving it outside the kernel,
i.e., permitting competing implementations, would prevent the implementation of the
system's required functionality.
-- Jochen Liedtke, Liedtke's minimality principle
pg8000 is designed to be used with one thread per connection.
Pg8000 communicates with the database using the `PostgreSQL Frontend/Backend Protocol
`_ (FEBE). If a query has no
parameters, pg8000 uses the 'simple query protocol'. If a query does have parameters,
pg8000 uses the 'extended query protocol' with unnamed prepared statements. The steps
for a query with parameters are:
1. Query comes in.
#. Send a PARSE message to the server to create an unnamed prepared statement.
#. Send a BIND message to run against the unnamed prepared statement, resulting in an
unnamed portal on the server.
#. Send an EXECUTE message to read all the results from the portal.
It's also possible to use named prepared statements, in which case the prepared
statement persists on the server and is represented in pg8000 by a
``PreparedStatement`` object. This means that the PARSE step is executed once up
front, and then only the BIND and EXECUTE steps are repeated subsequently.
There are a lot of PostgreSQL data types, but few primitive data types in Python. By
default, pg8000 doesn't send PostgreSQL data type information in the PARSE step, in
which case PostgreSQL assumes the types implied by the SQL statement. In some cases
PostgreSQL can't work out a parameter type and so an `explicit cast
`_
can be used in the SQL.
In the FEBE protocol, each query parameter can be sent to the server either as binary
or text according to the format code. In pg8000 the parameters are always sent as text.
Occasionally, the network connection between pg8000 and the server may go down. If
pg8000 encounters a network problem it'll raise an ``InterfaceError`` with the message
``network error`` and with the original exception set as the `cause
`_.
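The cause-chaining can be seen in plain Python. Here's a stand-in sketch (not
pg8000's actual code, and the exception classes are simplified) of how the
original error becomes the ``__cause__`` of the ``InterfaceError``:

```python
# A stand-in sketch of how pg8000 chains the original exception as the
# __cause__ of the InterfaceError it raises on network problems.
class InterfaceError(Exception):
    pass

def connect_unreachable():
    try:
        raise OSError("connection refused")  # stand-in for the socket failure
    except OSError as exc:
        raise InterfaceError("network error") from exc

try:
    connect_unreachable()
except InterfaceError as exc:
    caught = exc

print(caught)            # network error
print(caught.__cause__)  # connection refused
```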
Native API Docs
---------------
pg8000.native.Error
```````````````````
Generic exception that is the base exception of the other error exceptions.
pg8000.native.InterfaceError
````````````````````````````
For errors that originate within pg8000.
pg8000.native.DatabaseError
```````````````````````````
For errors that originate from the server.
pg8000.native.Connection(user, host='localhost', database=None, port=5432, password=None, source_address=None, unix_sock=None, ssl_context=None, timeout=None, tcp_keepalive=True, application_name=None, replication=None, sock=None)
``````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````
Creates a connection to a PostgreSQL database.
user
The username to connect to the PostgreSQL server with. If your server character
encoding is not ``ascii`` or ``utf8``, then you need to provide ``user`` as bytes,
eg. ``'my_name'.encode('EUC-JP')``.
host
The hostname of the PostgreSQL server to connect with. Providing this parameter is
necessary for TCP/IP connections. One of either ``host`` or ``unix_sock`` must be
provided. The default is ``localhost``.
database
The name of the database instance to connect with. If ``None`` then the PostgreSQL
server will assume the database name is the same as the username. If your server
character encoding is not ``ascii`` or ``utf8``, then you need to provide ``database``
as bytes, eg. ``'my_db'.encode('EUC-JP')``.
port
The TCP/IP port of the PostgreSQL server instance. This parameter defaults to
``5432``, the registered common port of PostgreSQL TCP/IP servers.
password
The user password to connect to the server with. This parameter is optional; if
omitted and the database server requests password-based authentication, the connection
will fail to open. If this parameter is provided but not
requested by the server, no error will occur.
If your server character encoding is not ``ascii`` or ``utf8``, then you need to
provide ``password`` as bytes, eg. ``'my_password'.encode('EUC-JP')``.
source_address
The source IP address which initiates the connection to the PostgreSQL server. The
default is ``None`` which means that the operating system will choose the source
address.
unix_sock
The path to the UNIX socket to access the database through, for example,
``'/tmp/.s.PGSQL.5432'``. One of either ``host`` or ``unix_sock`` must be provided.
ssl_context
This governs SSL encryption for TCP/IP sockets. It can have three values:
- ``None``, meaning no SSL (the default)
- ``True``, means use SSL with an |ssl.SSLContext|_ created using
|ssl.create_default_context()|_
- An instance of |ssl.SSLContext|_ which will be used to create the SSL connection.
If your PostgreSQL server is behind an SSL proxy, you can set the pg8000-specific
attribute ``ssl.SSLContext.request_ssl = False``, which tells pg8000 to use an SSL
socket, but not to request SSL from the PostgreSQL server. Note that this means you
can't use SCRAM authentication with channel binding.
timeout
This is the time in seconds before the connection to the server will time out. The
default is ``None`` which means no timeout.
tcp_keepalive
If ``True`` then use `TCP keepalive
`_. The default is ``True``.
application_name
Sets the `application_name
`_.
If your server character encoding is not ``ascii`` or ``utf8``, then you need to
provide values as bytes, eg. ``'my_application_name'.encode('EUC-JP')``. The default
is ``None`` which means that the server will set the application name.
replication
Used to run in `streaming replication mode
`_. If your server
character encoding is not ``ascii`` or ``utf8``, then you need to provide values as
bytes, eg. ``'database'.encode('EUC-JP')``.
sock
A socket-like object to use for the connection. For example, ``sock`` could be a plain
``socket.socket``, or it could represent an SSH tunnel or perhaps an
``ssl.SSLSocket`` to an SSL proxy. If an |ssl.SSLContext| is provided, then it will be
used to attempt to create an SSL socket from the provided socket.
pg8000.native.Connection.notifications
``````````````````````````````````````
A deque of server-side `notifications
`__ received by this database
connection (via the ``LISTEN`` / ``NOTIFY`` PostgreSQL commands). Each list item is a
three-element tuple containing the PostgreSQL backend PID that issued the notify, the
channel and the payload.
pg8000.native.Connection.notices
````````````````````````````````
A deque of server-side notices received by this database connection.
pg8000.native.Connection.parameter_statuses
```````````````````````````````````````````
A deque of server-side parameter statuses received by this database connection.
pg8000.native.Connection.run(sql, stream=None, types=None, \*\*kwargs)
``````````````````````````````````````````````````````````````````````
Executes an SQL statement, and returns the results as a ``list``. For example::
con.run("SELECT * FROM cities where population > :pop", pop=10000)
sql
The SQL statement to execute. Parameter placeholders appear as a ``:`` followed by the
parameter name.
stream
For use with the PostgreSQL `COPY
`__ command. The nature
of the parameter depends on whether the SQL command is ``COPY FROM`` or ``COPY TO``.
``COPY FROM``
The stream parameter must be a readable file-like object or an iterable. If it's an
iterable then the items can be ``str`` or binary.
``COPY TO``
The stream parameter must be a writable file-like object.
types
A dictionary of oids, where each key is a parameter name and each value is the
PostgreSQL type oid to use for that parameter.
kwargs
The parameters of the SQL statement.
pg8000.native.Connection.row_count
``````````````````````````````````
This read-only attribute contains the number of rows that the last ``run()`` method
produced (for query statements like ``SELECT``) or affected (for modification statements
like ``UPDATE``).
The value is -1 if:
- No ``run()`` method has been performed yet.
- There was no rowcount associated with the last ``run()``.
pg8000.native.Connection.columns
````````````````````````````````
A list of column metadata. Each item in the list is a dictionary with the following
keys:
- name
- table_oid
- column_attrnum
- type_oid
- type_size
- type_modifier
- format
pg8000.native.Connection.close()
````````````````````````````````
Closes the database connection.
pg8000.native.Connection.register_out_adapter(typ, out_func)
````````````````````````````````````````````````````````````
Register a type adapter for types going out from pg8000 to the server.
typ
The Python class that the adapter is for.
out_func
A function that takes the Python object and returns its string representation
in the format that the server requires.
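As a sketch, an out adapter for a hypothetical ``Point`` class might look like
this (the class and function names are illustrative, not part of pg8000):

```python
# The out_func converts the Python object to the text form the server expects.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def point_out(point):
    return f"({point.x},{point.y})"

print(point_out(Point(2.3, 1.0)))  # (2.3,1.0)

# With a live connection it would be registered like this:
# con.register_out_adapter(Point, point_out)
```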
pg8000.native.Connection.register_in_adapter(oid, in_func)
``````````````````````````````````````````````````````````
Register a type adapter for types coming in from the server to pg8000.
oid
The PostgreSQL type identifier found in the `pg_type system catalog
`_.
in_func
A function that takes the PostgreSQL string representation and returns a corresponding
Python object.
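As a sketch, an in adapter that parses the server's text representation of a
``point`` into a tuple of floats might look like this (the function name is
illustrative, not part of pg8000):

```python
# Parse the PostgreSQL text form '(x,y)' into a tuple of floats.
def point_in(data):
    x, y = data.strip("()").split(",")
    return float(x), float(y)

print(point_in("(2.3,1.0)"))  # (2.3, 1.0)

# With a live connection it would be registered against the point type's oid
# (600 in the pg_type catalog):
# con.register_in_adapter(600, point_in)
```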
pg8000.native.Connection.prepare(sql)
`````````````````````````````````````
Returns a ``PreparedStatement`` object which represents a `prepared statement
`_ on the server. It can
subsequently be repeatedly executed.
sql
The SQL statement to prepare. Parameter placeholders appear as a ``:`` followed by the
parameter name.
pg8000.native.PreparedStatement
```````````````````````````````
A prepared statement object is returned by the ``pg8000.native.Connection.prepare()``
method of a connection. It has the following methods:
pg8000.native.PreparedStatement.run(\*\*kwargs)
```````````````````````````````````````````````
Executes the prepared statement, and returns the results as a ``tuple``.
kwargs
The parameters of the prepared statement.
pg8000.native.PreparedStatement.close()
```````````````````````````````````````
Closes the prepared statement, releasing the prepared statement held on the server.
pg8000.native.identifier(ident)
```````````````````````````````
Correctly quotes and escapes a string to be used as an `SQL identifier
`_.
ident
The ``str`` to be used as an SQL identifier.
pg8000.native.literal(value)
````````````````````````````
Correctly quotes and escapes a value to be used as an `SQL literal
`_.
value
The value to be used as an SQL literal.
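A simplified sketch of the kind of quoting these helpers perform. This is not
pg8000's implementation: it ignores edge cases such as backslashes in literals
and non-string values.

```python
def quote_identifier(ident):
    # Double embedded double-quotes, then wrap in double-quotes.
    return '"' + ident.replace('"', '""') + '"'

def quote_literal(value):
    # Double embedded single-quotes, then wrap in single-quotes.
    return "'" + value.replace("'", "''") + "'"

table = quote_identifier("my table")
name = quote_literal("O'Reilly")
sql = f"SELECT * FROM {table} WHERE name = {name}"
print(sql)  # SELECT * FROM "my table" WHERE name = 'O''Reilly'
```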
DB-API 2 Docs
-------------
Properties
``````````
pg8000.dbapi.apilevel
:::::::::::::::::::::
The DBAPI level supported, currently "2.0".
pg8000.dbapi.threadsafety
:::::::::::::::::::::::::
Integer constant stating the level of thread safety the DBAPI interface supports. For
pg8000, the threadsafety value is 1, meaning that threads may share the module but not
connections.
pg8000.dbapi.paramstyle
:::::::::::::::::::::::
String property stating the type of parameter marker formatting expected by
the interface. This value defaults to "format", in which parameters are
marked in this format: "WHERE name=%s".
As an extension to the DBAPI specification, this value is not constant; it can be
changed to any of the following values:
qmark
Question mark style, eg. ``WHERE name=?``
numeric
Numeric positional style, eg. ``WHERE name=:1``
named
Named style, eg. ``WHERE name=:paramname``
format
printf format codes, eg. ``WHERE name=%s``
pyformat
Python format codes, eg. ``WHERE name=%(paramname)s``
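The styles differ only in the SQL text and the shape of the parameters. The
same logical query under each paramstyle (illustrative only):

```python
# Each entry is (SQL text, parameters) for the corresponding paramstyle.
queries = {
    "qmark": ("SELECT * FROM book WHERE title = ?", ("Ender's Game",)),
    "numeric": ("SELECT * FROM book WHERE title = :1", ("Ender's Game",)),
    "named": (
        "SELECT * FROM book WHERE title = :title",
        {"title": "Ender's Game"},
    ),
    "format": ("SELECT * FROM book WHERE title = %s", ("Ender's Game",)),
    "pyformat": (
        "SELECT * FROM book WHERE title = %(title)s",
        {"title": "Ender's Game"},
    ),
}
```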
pg8000.dbapi.STRING
:::::::::::::::::::
String type oid.
pg8000.dbapi.BINARY
:::::::::::::::::::
Binary type oid.
pg8000.dbapi.NUMBER
:::::::::::::::::::
Numeric type oid.
pg8000.dbapi.DATETIME
:::::::::::::::::::::
Timestamp type oid.
pg8000.dbapi.ROWID
::::::::::::::::::
ROWID type oid.
Functions
`````````
pg8000.dbapi.connect(user, host='localhost', database=None, port=5432, password=None, source_address=None, unix_sock=None, ssl_context=None, timeout=None, tcp_keepalive=True, application_name=None, replication=None, sock=None)
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Creates a connection to a PostgreSQL database.
user
The username to connect to the PostgreSQL server with. If your server character
encoding is not ``ascii`` or ``utf8``, then you need to provide ``user`` as bytes,
eg. ``'my_name'.encode('EUC-JP')``.
host
The hostname of the PostgreSQL server to connect with. Providing this parameter is
necessary for TCP/IP connections. One of either ``host`` or ``unix_sock`` must be
provided. The default is ``localhost``.
database
The name of the database instance to connect with. If ``None`` then the PostgreSQL
server will assume the database name is the same as the username. If your server
character encoding is not ``ascii`` or ``utf8``, then you need to provide ``database``
as bytes, eg. ``'my_db'.encode('EUC-JP')``.
port
The TCP/IP port of the PostgreSQL server instance. This parameter defaults to
``5432``, the registered common port of PostgreSQL TCP/IP servers.
password
The user password to connect to the server with. This parameter is optional; if
omitted and the database server requests password-based authentication, the
connection will fail to open. If this parameter is provided but not requested by the
server, no error will occur.
If your server character encoding is not ``ascii`` or ``utf8``, then you need to
provide ``password`` as bytes, eg. ``'my_password'.encode('EUC-JP')``.
source_address
The source IP address which initiates the connection to the PostgreSQL server. The
default is ``None`` which means that the operating system will choose the source
address.
unix_sock
The path to the UNIX socket to access the database through, for example,
``'/tmp/.s.PGSQL.5432'``. One of either ``host`` or ``unix_sock`` must be provided.
ssl_context
This governs SSL encryption for TCP/IP sockets. It can have three values:
- ``None``, meaning no SSL (the default)
- ``True``, means use SSL with an |ssl.SSLContext|_ created using
|ssl.create_default_context()|_.
- An instance of |ssl.SSLContext|_ which will be used to create the SSL connection.
If your PostgreSQL server is behind an SSL proxy, you can set the pg8000-specific
attribute ``ssl.SSLContext.request_ssl = False``, which tells pg8000 to use an SSL
socket, but not to request SSL from the PostgreSQL server. Note that this means you
can't use SCRAM authentication with channel binding.
timeout
This is the time in seconds before the connection to the server will time out. The
default is ``None`` which means no timeout.
tcp_keepalive
If ``True`` then use `TCP keepalive
`_. The default is ``True``.
application_name
Sets the `application_name
`_. If your server character encoding is not ``ascii`` or ``utf8``, then you need to
provide values as bytes, eg. ``'my_application_name'.encode('EUC-JP')``. The default
is ``None`` which means that the server will set the application name.
replication
Used to run in `streaming replication mode
`_. If your server
character encoding is not ``ascii`` or ``utf8``, then you need to provide values as
bytes, eg. ``'database'.encode('EUC-JP')``.
sock
A socket-like object to use for the connection. For example, ``sock`` could be a plain
``socket.socket``, or it could represent an SSH tunnel or perhaps an
``ssl.SSLSocket`` to an SSL proxy. If an |ssl.SSLContext| is provided, then it will be
used to attempt to create an SSL socket from the provided socket.
pg8000.dbapi.Date(year, month, day)
:::::::::::::::::::::::::::::::::::
Construct an object holding a date value.
This function is part of the `DBAPI 2.0 specification
`_.
Returns: ``datetime.date``
pg8000.dbapi.Time(hour, minute, second)
:::::::::::::::::::::::::::::::::::::::
Construct an object holding a time value.
Returns: ``datetime.time``
pg8000.dbapi.Timestamp(year, month, day, hour, minute, second)
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Construct an object holding a timestamp value.
Returns: ``datetime.datetime``
pg8000.dbapi.DateFromTicks(ticks)
:::::::::::::::::::::::::::::::::
Construct an object holding a date value from the given ticks value (number of seconds
since the epoch).
Returns: ``datetime.datetime``
pg8000.dbapi.TimeFromTicks(ticks)
:::::::::::::::::::::::::::::::::
Construct an object holding a time value from the given ticks value (number of seconds
since the epoch).
Returns: ``datetime.time``
pg8000.dbapi.TimestampFromTicks(ticks)
::::::::::::::::::::::::::::::::::::::
Construct an object holding a timestamp value from the given ticks value (number of
seconds since the epoch).
Returns: ``datetime.datetime``
pg8000.dbapi.Binary(value)
::::::::::::::::::::::::::
Construct an object holding binary data.
Returns: ``bytes``.
Generic Exceptions
``````````````````
Pg8000 uses the standard DBAPI 2.0 exception tree as "generic" exceptions. Generally,
more specific exception types are raised; these specific exception types are derived
from the generic exceptions.
pg8000.dbapi.Warning
::::::::::::::::::::
Generic exception raised for important database warnings like data truncations. This
exception is not currently used by pg8000.
pg8000.dbapi.Error
::::::::::::::::::
Generic exception that is the base exception of all other error exceptions.
pg8000.dbapi.InterfaceError
:::::::::::::::::::::::::::
Generic exception raised for errors that are related to the database interface rather
than the database itself. For example, if the interface attempts to use an SSL
connection but the server refuses, an InterfaceError will be raised.
pg8000.dbapi.DatabaseError
::::::::::::::::::::::::::
Generic exception raised for errors that are related to the database. This exception is
currently never raised by pg8000.
pg8000.dbapi.DataError
::::::::::::::::::::::
Generic exception raised for errors that are due to problems with the processed data.
This exception is not currently raised by pg8000.
pg8000.dbapi.OperationalError
:::::::::::::::::::::::::::::
Generic exception raised for errors that are related to the database's operation and not
necessarily under the control of the programmer. This exception is currently never
raised by pg8000.
pg8000.dbapi.IntegrityError
:::::::::::::::::::::::::::
Generic exception raised when the relational integrity of the database is affected. This
exception is not currently raised by pg8000.
pg8000.dbapi.InternalError
::::::::::::::::::::::::::
Generic exception raised when the database encounters an internal error. This is
currently only raised when unexpected state occurs in the pg8000 interface itself, and
is typically the result of an interface bug.
pg8000.dbapi.ProgrammingError
:::::::::::::::::::::::::::::
Generic exception raised for programming errors. For example, this exception is raised
if more parameter fields are in a query string than there are available parameters.
pg8000.dbapi.NotSupportedError
::::::::::::::::::::::::::::::
Generic exception raised in case a method or database API was used which is not
supported by the database.
Classes
```````
pg8000.dbapi.Connection
:::::::::::::::::::::::
A connection object is returned by the ``pg8000.dbapi.connect()`` function. It represents a
single physical connection to a PostgreSQL database.
pg8000.dbapi.Connection.autocommit
::::::::::::::::::::::::::::::::::
Following the DB-API specification, autocommit is off by default. It can be turned on by
setting this boolean pg8000-specific autocommit property to ``True``.
pg8000.dbapi.Connection.close()
:::::::::::::::::::::::::::::::
Closes the database connection.
pg8000.dbapi.Connection.cursor()
::::::::::::::::::::::::::::::::
Creates a ``pg8000.dbapi.Cursor`` object bound to this connection.
pg8000.dbapi.Connection.rollback()
::::::::::::::::::::::::::::::::::
Rolls back the current database transaction.
pg8000.dbapi.Connection.tpc_begin(xid)
::::::::::::::::::::::::::::::::::::::
Begins a TPC transaction with the given transaction ID xid. This method should be
called outside of a transaction (i.e. nothing may have executed since the last
``commit()`` or ``rollback()``). Furthermore, it is an error to call ``commit()`` or
``rollback()`` within the TPC transaction. A ``ProgrammingError`` is raised, if the
application calls ``commit()`` or ``rollback()`` during an active TPC transaction.
pg8000.dbapi.Connection.tpc_commit(xid=None)
::::::::::::::::::::::::::::::::::::::::::::
When called with no arguments, ``tpc_commit()`` commits a TPC transaction previously
prepared with ``tpc_prepare()``. If ``tpc_commit()`` is called prior to
``tpc_prepare()``, a single phase commit is performed. A transaction manager may choose
to do this if only a single resource is participating in the global transaction.
When called with a transaction ID ``xid``, the database commits the given transaction.
If an invalid transaction ID is provided, a ``ProgrammingError`` will be raised. This
form should be called outside of a transaction, and is intended for use in recovery.
On return, the TPC transaction is ended.
pg8000.dbapi.Connection.tpc_prepare()
:::::::::::::::::::::::::::::::::::::
Performs the first phase of a transaction started with ``.tpc_begin()``. A
``ProgrammingError`` is raised if this method is called outside of a TPC transaction.
After calling ``tpc_prepare()``, no statements can be executed until ``tpc_commit()`` or
``tpc_rollback()`` have been called.
pg8000.dbapi.Connection.tpc_recover()
:::::::::::::::::::::::::::::::::::::
Returns a list of pending transaction IDs suitable for use with ``tpc_commit(xid)`` or
``tpc_rollback(xid)``.
pg8000.dbapi.Connection.tpc_rollback(xid=None)
::::::::::::::::::::::::::::::::::::::::::::::
When called with no arguments, ``tpc_rollback()`` rolls back a TPC transaction. It may
be called before or after ``tpc_prepare()``.
When called with a transaction ID xid, it rolls back the given transaction. If an
invalid transaction ID is provided, a ``ProgrammingError`` is raised. This form should
be called outside of a transaction, and is intended for use in recovery.
On return, the TPC transaction is ended.
pg8000.dbapi.Connection.xid(format_id, global_transaction_id, branch_qualifier)
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Create a Transaction ID. Only ``global_transaction_id`` is used in PostgreSQL;
``format_id`` and ``branch_qualifier`` are ignored. ``global_transaction_id`` may be any
string identifier supported by PostgreSQL. Returns a tuple
``(format_id, global_transaction_id, branch_qualifier)``.
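As a sketch of how these methods fit together, a two-phase commit might look like the
following. This assumes a running server reachable with these connection parameters and
``max_prepared_transactions`` set to a non-zero value; the ``xid()`` arguments are just
illustrative:

>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> xid = con.xid(42, "my-global-tx", "")
>>> con.tpc_begin(xid)
>>> cursor = con.cursor()
>>> cursor.execute("SELECT 'work done inside the TPC transaction'")
>>> con.tpc_prepare()  # phase one: prepare
>>> con.tpc_commit()   # phase two: commit
>>> con.close()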
pg8000.dbapi.Cursor
:::::::::::::::::::
A cursor object is returned by the ``pg8000.dbapi.Connection.cursor()`` method of a
connection. It has the following attributes and methods:
pg8000.dbapi.Cursor.arraysize
'''''''''''''''''''''''''''''
This read/write attribute specifies the number of rows to fetch at a time with
``pg8000.dbapi.Cursor.fetchmany()``. It defaults to 1.
pg8000.dbapi.Cursor.connection
''''''''''''''''''''''''''''''
This read-only attribute contains a reference to the connection object (an instance of
``pg8000.dbapi.Connection``) on which the cursor was created.
pg8000.dbapi.Cursor.rowcount
''''''''''''''''''''''''''''
This read-only attribute contains the number of rows that the last ``execute()`` or
``executemany()`` method produced (for query statements like ``SELECT``) or affected
(for modification statements like ``UPDATE``).
The value is -1 if:
- No ``execute()`` or ``executemany()`` method has been performed yet on the cursor.
- There was no rowcount associated with the last ``execute()``.
- At least one of the statements executed as part of an ``executemany()`` had no row
count associated with it.
pg8000.dbapi.Cursor.description
'''''''''''''''''''''''''''''''
This read-only attribute is a sequence of 7-item sequences. Each value contains
information describing one result column. The 7 items returned for each column are
(name, type_code, display_size, internal_size, precision, scale, null_ok). Only the
first two values are provided by the current implementation.
pg8000.dbapi.Cursor.close()
'''''''''''''''''''''''''''
Closes the cursor.
pg8000.dbapi.Cursor.execute(operation, args=None, stream=None)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Executes a database operation. Parameters may be provided as a sequence, or as a
mapping, depending upon the value of ``pg8000.dbapi.paramstyle``. Returns the cursor,
which may be iterated over.
operation
The SQL statement to execute.
args
If ``pg8000.dbapi.paramstyle`` is ``qmark``, ``numeric``, or ``format``, this
argument should be an array of parameters to bind into the statement. If
``pg8000.dbapi.paramstyle`` is ``named``, the argument should be a ``dict`` mapping of
parameters. If ``pg8000.dbapi.paramstyle`` is ``pyformat``, the argument value may be
either an array or a mapping.
stream
This is a pg8000 extension for use with the PostgreSQL `COPY
<https://www.postgresql.org/docs/current/sql-copy.html>`__ command. For a
``COPY FROM`` the parameter must be a readable file-like object, and for ``COPY TO``
it must be writable.
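For example, with the default ``format`` paramstyle and a ``COPY FROM`` that reads from
an in-memory stream (a sketch that assumes a server reachable with these connection
parameters):

>>> import io
>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = con.cursor()
>>> cursor.execute("CREATE TEMPORARY TABLE t1 (f1 int, f2 text)")
>>> cursor.execute("INSERT INTO t1 (f1, f2) VALUES (%s, %s)", (1, "a"))
>>> stream_in = io.StringIO("2\tb\n3\tc\n")
>>> cursor.execute("COPY t1 FROM STDIN", stream=stream_in)
>>> con.commit()
>>> con.close()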
pg8000.dbapi.Cursor.executemany(operation, param_sets)
''''''''''''''''''''''''''''''''''''''''''''''''''''''
Prepare a database operation, and then execute it against all parameter sequences or
mappings provided.
operation
The SQL statement to execute.
parameter_sets
A sequence of parameters to execute the statement with. The values in the sequence
should be sequences or mappings of parameters, the same as the args argument of the
``pg8000.dbapi.Cursor.execute()`` method.
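A minimal sketch (again assuming a reachable server), inserting three rows with a single
prepared statement:

>>> import pg8000.dbapi
>>>
>>> con = pg8000.dbapi.connect(user="postgres", password="cpsnow")
>>> cursor = con.cursor()
>>> cursor.execute("CREATE TEMPORARY TABLE t1 (f1 int)")
>>> cursor.executemany("INSERT INTO t1 (f1) VALUES (%s)", ((1,), (2,), (3,)))
>>> con.commit()
>>> con.close()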
pg8000.dbapi.Cursor.callproc(procname, parameters=None)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''
Call a stored database procedure with the given name and optional parameters.
procname
The name of the procedure to call.
parameters
A list of parameters.
pg8000.dbapi.Cursor.fetchall()
''''''''''''''''''''''''''''''
Fetches all remaining rows of a query result.
Returns: A sequence, each entry of which is a sequence of field values making up a row.
pg8000.dbapi.Cursor.fetchmany(size=None)
''''''''''''''''''''''''''''''''''''''''
Fetches the next set of rows of a query result.
size
The number of rows to fetch when called. If not provided, the
``pg8000.dbapi.Cursor.arraysize`` attribute value is used instead.
Returns: A sequence, each entry of which is a sequence of field values making up a row.
If no more rows are available, an empty sequence will be returned.
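For instance, continuing with a cursor on an open connection (a sketch, with the fetched
row values left unshown):

>>> cursor.arraysize = 2
>>> cursor.execute("SELECT * FROM generate_series(1, 5)")
>>> first_two = cursor.fetchmany()        # uses arraysize, so two rows
>>> next_three = cursor.fetchmany(size=3)
>>> leftovers = cursor.fetchmany()        # no rows left: an empty sequence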
pg8000.dbapi.Cursor.fetchone()
''''''''''''''''''''''''''''''
Fetch the next row of a query result set.
Returns: A row as a sequence of field values, or ``None`` if no more rows are available.
pg8000.dbapi.Cursor.setinputsizes(\*sizes)
''''''''''''''''''''''''''''''''''''''''''
Used to set the parameter types of the next query. This is useful if it's difficult for
pg8000 to work out the types from the parameters themselves (e.g. for parameters of type
``None``).
sizes
Positional parameters that are either the Python type of the parameter to be sent, or
the PostgreSQL oid. Common oids are available as constants such as ``pg8000.STRING``,
``pg8000.INTEGER``, ``pg8000.TIME`` etc.
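For example, to declare that a ``None`` parameter should be sent as ``text`` rather than
leaving pg8000 to guess (a sketch assuming an open ``cursor``):

>>> cursor.setinputsizes(str)
>>> cursor.execute("SELECT CAST(%s AS TEXT)", (None,))
>>> result = cursor.fetchone()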
pg8000.dbapi.Cursor.setoutputsize(size, column=None)
''''''''''''''''''''''''''''''''''''''''''''''''''''
Not implemented by pg8000.
pg8000.dbapi.Interval
'''''''''''''''''''''
An Interval represents a measurement of time. In PostgreSQL, an interval is measured in
months, days, and microseconds; as such, the pg8000 interval type represents the same
information.
Note that values of the ``pg8000.dbapi.Interval.microseconds``,
``pg8000.dbapi.Interval.days``, and ``pg8000.dbapi.Interval.months`` properties are
independently measured and cannot be converted to each other. A month may be 28, 29, 30,
or 31 days, and a day may occasionally be lengthened slightly by a leap second.
Design Decisions
----------------
For the ``Range`` type, the constructor follows the `PostgreSQL range constructor
functions <https://www.postgresql.org/docs/current/rangetypes.html#RANGETYPES-CONSTRUCT>`_,
which make the ``[closed, open)`` convention the easiest to express:
>>> from pg8000.types import Range
>>>
>>> pg_range = Range(2, 6)
Tests
-----
- Install `tox <https://tox.wiki/>`_: ``pip install tox``
- Enable the PostgreSQL hstore extension by running the SQL command:
``create extension hstore;``
- Add a line to ``pg_hba.conf`` for the various authentication options:
::
host pg8000_md5 all 127.0.0.1/32 md5
host pg8000_gss all 127.0.0.1/32 gss
host pg8000_password all 127.0.0.1/32 password
host pg8000_scram_sha_256 all 127.0.0.1/32 scram-sha-256
host all all 127.0.0.1/32 trust
- Set password encryption to ``scram-sha-256`` in ``postgresql.conf``:
``password_encryption = 'scram-sha-256'``
- Set the password for the postgres user: ``ALTER USER postgres WITH PASSWORD 'pw';``
- Run ``tox`` from the ``pg8000`` directory: ``tox``
This will run the tests against the Python version of the virtual environment on the
machine, and the installed PostgreSQL version listening on port 5432 (or the ``PGPORT``
environment variable if set).
Benchmarks are run as part of the test suite at ``tests/test_benchmarks.py``.
README.rst
----------
This file is written in the `reStructuredText
<https://docutils.sourceforge.io/rst.html>`_ format. To generate an
HTML page from it, do:
- Activate the virtual environment: ``source venv/bin/activate``
- Install ``Sphinx``: ``pip install Sphinx``
- Run ``rst2html.py``: ``rst2html.py README.rst README.html``
Doing A Release Of pg8000
-------------------------
Run ``tox`` to make sure all tests pass, then update the release notes, then do:
::
git tag -a x.y.z -m "version x.y.z"
rm -r dist
python -m build
twine upload dist/*
Release Notes
-------------
Version 1.30.3, 2023-10-31
``````````````````````````
- Fix problem with PG date overflowing Python types. Now we return the ``str`` we got from the
server if we can't parse it.
Version 1.30.2, 2023-09-17
``````````````````````````
- Bug fix where dollar-quoted string constants weren't supported.
Version 1.30.1, 2023-07-29
``````````````````````````
- There was a problem uploading the previous version (1.30.0) to PyPI because the
markup of the README.rst was invalid. There's now a step in the automated tests to
check for this.
Version 1.30.0, 2023-07-27
``````````````````````````
- Remove support for Python 3.7
- Add a ``sock`` keyword parameter for creating a connection from a pre-configured
socket.
Version 1.29.8, 2023-06-16
``````````````````````````
- Fixed a bug where ranges didn't work with the legacy API.
Version 1.29.7, 2023-06-16
``````````````````````````
- Add support for PostgreSQL ``range`` and ``multirange`` types. Previously pg8000
would just return them as strings, but now they're returned as ``Range`` and lists of
``Range``.
- The PostgreSQL ``record`` type is now returned as a ``tuple`` of strings, whereas
before it was returned as one string.
Version 1.29.6, 2023-05-29
``````````````````````````
- Fixed two bugs with composite types. Nulls should be represented by an empty string,
and in an array of composite types, the elements should be surrounded by double
quotes.
Version 1.29.5, 2023-05-09
``````````````````````````
- Fixed bug where pg8000 didn't handle the case when the number of bytes received from
a socket was fewer than requested. This was being interpreted as a network error, but
in fact we just needed to wait until more bytes were available.
- When using the ``PGInterval`` type, if a response from the server contained the period
``millennium``, it wasn't recognised. This was caused by a spelling mistake where we
had ``millenium`` rather than ``millennium``.
- Added support for sending PostgreSQL composite types. If a value is sent as a
``tuple``, pg8000 will send it to the server as a parenthesized, comma-delimited
composite string.
Version 1.29.4, 2022-12-14
``````````````````````````
- Fixed bug in ``pg8000.dbapi`` in the ``setinputsizes()`` method where if a ``size``
was a recognized Python type, the method failed.
Version 1.29.3, 2022-10-26
``````````````````````````
- Upgrade the SCRAM library to version 1.4.3. This adds support for the case where the
client supports channel binding but the server doesn't.
Version 1.29.2, 2022-10-09
``````````````````````````
- Fixed a bug where in a literal array, items such as ``\n`` and ``\r`` weren't
escaped properly before being sent to the server.
- Fixed a bug where if the PostgreSQL server has a half-hour time zone set, values of
type ``timestamp with time zone`` failed. This has been fixed by using the ``parse``
function of the ``dateutil`` package if the ``datetime`` parser fails.
Version 1.29.1, 2022-05-23
``````````````````````````
- In trying to determine if there's been a failed commit, check for ``ROLLBACK TO
SAVEPOINT``.
Version 1.29.0, 2022-05-21
``````````````````````````
- Implement a workaround for the silent failed commit bug.
- Previously if an empty string was sent as the query an exception would be raised, but
that isn't done now.
Version 1.28.3, 2022-05-18
``````````````````````````
- Put back ``__version__`` attributes that were inadvertently removed.
Version 1.28.2, 2022-05-17
``````````````````````````
- Use a build system that's compliant with PEP517.
Version 1.28.1, 2022-05-17
``````````````````````````
- If when doing a ``COPY FROM`` the ``stream`` parameter is an iterator of ``str``,
pg8000 used to silently append a newline to the end. That no longer happens.
Version 1.28.0, 2022-05-17
``````````````````````````
- When using the ``COPY FROM`` SQL statement, allow the ``stream`` parameter to be an
iterable.
Version 1.27.1, 2022-05-16
``````````````````````````
- The ``seconds`` attribute of ``PGInterval`` is now always a ``float``, to cope with
fractional seconds.
- Updated the ``interval`` parsers for ``iso_8601`` and ``sql_standard`` to take
account of fractional seconds.
Version 1.27.0, 2022-05-16
``````````````````````````
- It used to be that by default, if pg8000 received an ``interval`` type from the server
and it was too big to fit into a ``datetime.timedelta`` then an exception would be
raised. Now if an interval is too big for ``datetime.timedelta`` a ``PGInterval`` is
returned.
- pg8000 now supports all the output formats for an ``interval`` (``postgres``,
``postgres_verbose``, ``iso_8601`` and ``sql_standard``).
Version 1.26.1, 2022-04-23
``````````````````````````
- Make sure all tests are run by the GitHub Actions tests on commit.
- Remove support for Python 3.6
- Remove support for PostgreSQL 9.6
Version 1.26.0, 2022-04-18
``````````````````````````
- When connecting, raise an ``InterfaceError('network error')`` rather than let the
underlying ``struct.error`` float up.
- Make licence text the same as that used by the OSI. Previously the licence wording
differed slightly from the BSD 3 Clause licence at
https://opensource.org/licenses/BSD-3-Clause. This meant that automated tools didn't
pick it up as being Open Source. The changes are not believed to alter the meaning of the licence at all.
Version 1.25.0, 2022-04-17
``````````````````````````
- Fix more cases where a ``ResourceWarning`` would be raised because of a socket that had
been left open.
- We now have a single ``InterfaceError`` with the message 'network error' for all
network errors, with the underlying exception held in the ``cause`` of the exception.
Version 1.24.2, 2022-04-15
``````````````````````````
- To prevent a ``ResourceWarning``, close the socket if a connection can't be created.
Version 1.24.1, 2022-03-02
``````````````````````````
- Return pg +/-infinity dates as ``str``. Previously +/-infinity pg values would cause
an error when returned, but now we return +/-infinity as strings.
Version 1.24.0, 2022-02-06
``````````````````````````
- Add SQL escape functions ``identifier()`` and ``literal()`` to the native API, for use
when a query can't be parameterised and the SQL string has to be created using untrusted
values.
Version 1.23.0, 2021-11-13
``````````````````````````
- If a query has no parameters, then the query will no longer be parsed. Although there
are performance benefits for doing this, the main reason is to avoid query rewriting,
which can introduce errors.
Version 1.22.1, 2021-11-10
``````````````````````````
- Fix bug in PGInterval type where ``str()`` failed for a millennia value.
Version 1.22.0, 2021-10-13
``````````````````````````
- Rather than specifying the oids in the ``Parse`` step of the Postgres protocol, pg8000
now omits them, and so Postgres will use the oids it determines from the query. This
makes the pg8000 code simpler and should also make the nuances of type matching more
straightforward.
pg8000-1.30.3/pg8000.egg-info/SOURCES.txt

LICENSE
README.rst
pyproject.toml
pg8000/__init__.py
pg8000/converters.py
pg8000/core.py
pg8000/dbapi.py
pg8000/exceptions.py
pg8000/legacy.py
pg8000/native.py
pg8000/types.py
pg8000.egg-info/PKG-INFO
pg8000.egg-info/SOURCES.txt
pg8000.egg-info/dependency_links.txt
pg8000.egg-info/requires.txt
pg8000.egg-info/top_level.txt
test/__init__.py
test/test_converters.py
test/test_readme.py
test/test_types.py
test/dbapi/__init__.py
test/dbapi/conftest.py
test/dbapi/test_benchmarks.py
test/dbapi/test_connection.py
test/dbapi/test_copy.py
test/dbapi/test_dbapi.py
test/dbapi/test_dbapi20.py
test/dbapi/test_paramstyle.py
test/dbapi/test_query.py
test/dbapi/test_typeconversion.py
test/dbapi/auth/__init__.py
test/dbapi/auth/test_gss.py
test/dbapi/auth/test_md5.py
test/dbapi/auth/test_md5_ssl.py
test/dbapi/auth/test_password.py
test/dbapi/auth/test_scram-sha-256.py
test/dbapi/auth/test_scram-sha-256_ssl.py
test/legacy/__init__.py
test/legacy/conftest.py
test/legacy/stress.py
test/legacy/test_benchmarks.py
test/legacy/test_connection.py
test/legacy/test_copy.py
test/legacy/test_dbapi.py
test/legacy/test_dbapi20.py
test/legacy/test_error_recovery.py
test/legacy/test_paramstyle.py
test/legacy/test_prepared_statement.py
test/legacy/test_query.py
test/legacy/test_typeconversion.py
test/legacy/test_typeobjects.py
test/legacy/auth/__init__.py
test/legacy/auth/test_gss.py
test/legacy/auth/test_md5.py
test/legacy/auth/test_md5_ssl.py
test/legacy/auth/test_password.py
test/legacy/auth/test_scram-sha-256.py
test/legacy/auth/test_scram-sha-256_ssl.py
test/native/__init__.py
test/native/conftest.py
test/native/test_benchmarks.py
test/native/test_connection.py
test/native/test_copy.py
test/native/test_core.py
test/native/test_prepared_statement.py
test/native/test_query.py
test/native/test_typeconversion.py
test/native/auth/__init__.py
test/native/auth/test_gss.py
test/native/auth/test_md5.py
test/native/auth/test_md5_ssl.py
test/native/auth/test_password.py
test/native/auth/test_scram-sha-256.py
test/native/auth/test_scram-sha-256_ssl.py

pg8000-1.30.3/pg8000.egg-info/dependency_links.txt

pg8000-1.30.3/pg8000.egg-info/requires.txt

scramp>=1.4.4
python-dateutil>=2.8.2

pg8000-1.30.3/pg8000.egg-info/top_level.txt

pg8000
pg8000-1.30.3/pyproject.toml

[build-system]
requires = [
    "setuptools >= 65",
    "versioningit >= 2.1.0",
]
build-backend = "setuptools.build_meta"

[project]
name = "pg8000"
description = "PostgreSQL interface library"
readme = "README.rst"
requires-python = ">=3.8"
keywords = ["postgresql", "dbapi"]
license = {text = "BSD 3-Clause License"}
classifiers = [
    "Development Status :: 5 - Production/Stable",
    "Intended Audience :: Developers",
    "License :: OSI Approved :: BSD License",
    "Programming Language :: Python",
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.8",
    "Programming Language :: Python :: 3.9",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "Programming Language :: Python :: Implementation",
    "Programming Language :: Python :: Implementation :: CPython",
    "Programming Language :: Python :: Implementation :: PyPy",
    "Operating System :: OS Independent",
    "Topic :: Database :: Front-Ends",
    "Topic :: Software Development :: Libraries :: Python Modules",
]
dependencies = [
    "scramp >= 1.4.4",
    'python-dateutil >= 2.8.2',
]
dynamic = ["version"]

[project.urls]
Homepage = "https://github.com/tlocke/pg8000"

[tool.versioningit]

[tool.versioningit.vcs]
method = "git"
default-tag = "0.0.0"

[tool.flake8]
application-names = ['pg8000']
ignore = ['E203', 'W503']
max-line-length = 88
exclude = ['.git', '__pycache__', 'build', 'dist', 'venv', '.tox']
application-import-names = ['pg8000']

[tool.tox]
legacy_tox_ini = """
[tox]
isolated_build = True
envlist = py

[testenv]
passenv = PGPORT
allowlist_externals=/usr/bin/rm
commands =
    black --check .
    flake8 .
    python -m pytest -x -v -W error test
    rm -rf dist
    python -m build
    twine check dist/*
deps =
    build
    pytest
    pytest-mock
    pytest-benchmark
    black
    flake8
    flake8-alphabetize
    Flake8-pyproject
    pytz
    twine
"""
pg8000-1.30.3/setup.cfg

[egg_info]
tag_build =
tag_date = 0
pg8000-1.30.3/test/__init__.py

pg8000-1.30.3/test/dbapi/__init__.py

pg8000-1.30.3/test/dbapi/auth/__init__.py
pg8000-1.30.3/test/dbapi/auth/test_gss.py

import pytest

from pg8000.dbapi import InterfaceError, connect


# This requires a line in pg_hba.conf that requires gss for the database
# pg8000_gss
def test_gss(db_kwargs):
    db_kwargs["database"] = "pg8000_gss"

    # Should raise an exception saying gss isn't supported
    with pytest.raises(
        InterfaceError,
        match="Authentication method 7 not supported by pg8000.",
    ):
        with connect(**db_kwargs):
            pass
pg8000-1.30.3/test/dbapi/auth/test_md5.py

import pytest

from pg8000.dbapi import DatabaseError, connect


# This requires a line in pg_hba.conf that requires md5 for the database
# pg8000_md5
def test_md5(db_kwargs):
    db_kwargs["database"] = "pg8000_md5"

    # Should only raise an exception saying db doesn't exist
    with pytest.raises(DatabaseError, match="3D000"):
        with connect(**db_kwargs):
            pass
pg8000-1.30.3/test/dbapi/auth/test_md5_ssl.py

import ssl

from pg8000.dbapi import connect


def test_md5_ssl(db_kwargs):
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    db_kwargs["ssl_context"] = context
    with connect(**db_kwargs):
        pass
pg8000-1.30.3/test/dbapi/auth/test_password.py

import pytest

from pg8000.dbapi import DatabaseError, connect


# This requires a line in pg_hba.conf that requires 'password' for the
# database pg8000_password
def test_password(db_kwargs):
    db_kwargs["database"] = "pg8000_password"

    # Should only raise an exception saying db doesn't exist
    with pytest.raises(DatabaseError, match="3D000"):
        with connect(**db_kwargs):
            pass
pg8000-1.30.3/test/dbapi/auth/test_scram-sha-256.py

import pytest

from pg8000.dbapi import DatabaseError, connect


# This requires a line in pg_hba.conf that requires scram-sha-256 for the
# database scram-sha-256
def test_scram_sha_256(db_kwargs):
    db_kwargs["database"] = "pg8000_scram_sha_256"

    # Should only raise an exception saying db doesn't exist
    with pytest.raises(DatabaseError, match="3D000"):
        with connect(**db_kwargs) as con:
            con.close()
pg8000-1.30.3/test/dbapi/auth/test_scram-sha-256_ssl.py

import ssl

import pytest

from pg8000.dbapi import DatabaseError, connect


# This requires a line in pg_hba.conf that requires scram-sha-256 for the
# database scram-sha-256
def test_scram_sha_256(db_kwargs):
    context = ssl.create_default_context()
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    db_kwargs["ssl_context"] = context
    db_kwargs["database"] = "pg8000_scram_sha_256"

    # Should only raise an exception saying db doesn't exist
    with pytest.raises(DatabaseError, match="3D000"):
        with connect(**db_kwargs) as con:
            con.close()
pg8000-1.30.3/test/dbapi/conftest.py

from os import environ

import pytest

import pg8000.dbapi


@pytest.fixture(scope="class")
def db_kwargs():
    db_connect = {"user": "postgres", "password": "pw"}

    for kw, var, f in [
        ("host", "PGHOST", str),
        ("password", "PGPASSWORD", str),
        ("port", "PGPORT", int),
    ]:
        try:
            db_connect[kw] = f(environ[var])
        except KeyError:
            pass

    return db_connect


@pytest.fixture
def con(request, db_kwargs):
    conn = pg8000.dbapi.connect(**db_kwargs)

    def fin():
        try:
            conn.rollback()
        except pg8000.dbapi.InterfaceError:
            pass

        try:
            conn.close()
        except pg8000.dbapi.InterfaceError:
            pass

    request.addfinalizer(fin)
    return conn


@pytest.fixture
def cursor(request, con):
    cursor = con.cursor()

    def fin():
        cursor.close()

    request.addfinalizer(fin)
    return cursor


@pytest.fixture
def pg_version(cursor):
    cursor.execute("select current_setting('server_version')")
    retval = cursor.fetchall()
    version = retval[0][0]
    idx = version.index(".")
    return int(version[:idx])
pg8000-1.30.3/test/dbapi/test_benchmarks.py

import pytest

from pg8000.dbapi import connect


@pytest.mark.parametrize(
    "txt",
    (
        ("int2", "cast(id / 100 as int2)"),
        "cast(id as int4)",
        "cast(id * 100 as int8)",
        "(id % 2) = 0",
        "N'Static text string'",
        "cast(id / 100 as float4)",
        "cast(id / 100 as float8)",
        "cast(id / 100 as numeric)",
        "timestamp '2001-09-28'",
    ),
)
def test_round_trips(db_kwargs, benchmark, txt):
    def torun():
        with connect(**db_kwargs) as con:
            query = f"""SELECT {txt}, {txt}, {txt}, {txt}, {txt}, {txt}, {txt}
            FROM (SELECT generate_series(1, 10000) AS id) AS tbl"""
            cursor = con.cursor()
            cursor.execute(query)
            cursor.fetchall()
            cursor.close()

    benchmark(torun)
pg8000-1.30.3/test/dbapi/test_connection.py

import datetime
import socket
import warnings

import pytest

from pg8000.dbapi import DatabaseError, InterfaceError, __version__, connect


def test_unix_socket_missing():
    conn_params = {"unix_sock": "/file-does-not-exist", "user": "doesn't-matter"}

    with pytest.raises(InterfaceError):
        with connect(**conn_params) as con:
            con.close()


def test_internet_socket_connection_refused():
    conn_params = {"port": 0, "user": "doesn't-matter"}

    with pytest.raises(
        InterfaceError,
        match="Can't create a connection to host localhost and port 0 "
        "\\(timeout is None and source_address is None\\).",
    ):
        connect(**conn_params)


def test_Connection_plain_socket(db_kwargs):
    host = db_kwargs.get("host", "localhost")
    port = db_kwargs.get("port", 5432)
    with socket.create_connection((host, port)) as sock:
        user = db_kwargs["user"]
        password = db_kwargs["password"]
        conn_params = {"sock": sock, "user": user, "password": password}

        con = connect(**conn_params)
        cur = con.cursor()
        cur.execute("SELECT 1")
        res = cur.fetchall()
        assert res[0][0] == 1


def test_database_missing(db_kwargs):
    db_kwargs["database"] = "missing-db"
    with pytest.raises(DatabaseError):
        connect(**db_kwargs)


def test_database_name_unicode(db_kwargs):
    db_kwargs["database"] = "pg8000_sn\uFF6Fw"

    # Should only raise an exception saying db doesn't exist
    with pytest.raises(DatabaseError, match="3D000"):
        connect(**db_kwargs)


def test_database_name_bytes(db_kwargs):
    """Should only raise an exception saying db doesn't exist"""
    db_kwargs["database"] = bytes("pg8000_sn\uFF6Fw", "utf8")
    with pytest.raises(DatabaseError, match="3D000"):
        connect(**db_kwargs)


def test_password_bytes(con, db_kwargs):
    # Create user
    username = "boltzmann"
    password = "cha\uFF6Fs"
    cur = con.cursor()
    cur.execute("create user " + username + " with password '" + password + "';")
    con.commit()

    db_kwargs["user"] = username
    db_kwargs["password"] = password.encode("utf8")
    db_kwargs["database"] = "pg8000_md5"
    with pytest.raises(DatabaseError, match="3D000"):
        connect(**db_kwargs)

    cur.execute("drop role " + username)
    con.commit()


def test_application_name(db_kwargs):
    app_name = "my test application name"
    db_kwargs["application_name"] = app_name
    with connect(**db_kwargs) as db:
        cur = db.cursor()
        cur.execute(
            "select application_name from pg_stat_activity "
            " where pid = pg_backend_pid()"
        )

        application_name = cur.fetchone()[0]
        assert application_name == app_name


def test_application_name_integer(db_kwargs):
    db_kwargs["application_name"] = 1
    with pytest.raises(
        InterfaceError,
        match="The parameter application_name can't be of type " ".",
    ):
        connect(**db_kwargs)


def test_application_name_bytearray(db_kwargs):
    db_kwargs["application_name"] = bytearray(b"Philby")
    with connect(**db_kwargs):
        pass


def test_notify(con):
    cursor = con.cursor()
    cursor.execute("select pg_backend_pid()")
    backend_pid = cursor.fetchall()[0][0]
    assert list(con.notifications) == []
    cursor.execute("LISTEN test")
    cursor.execute("NOTIFY test")
    con.commit()

    cursor.execute("VALUES (1, 2), (3, 4), (5, 6)")
    assert len(con.notifications) == 1
    assert con.notifications[0] == (backend_pid, "test", "")


def test_notify_with_payload(con):
    cursor = con.cursor()
    cursor.execute("select pg_backend_pid()")
    backend_pid = cursor.fetchall()[0][0]
    assert list(con.notifications) == []
    cursor.execute("LISTEN test")
    cursor.execute("NOTIFY test, 'Parnham'")
    con.commit()

    cursor.execute("VALUES (1, 2), (3, 4), (5, 6)")
    assert len(con.notifications) == 1
    assert con.notifications[0] == (backend_pid, "test", "Parnham")


def test_broken_pipe_read(con, db_kwargs):
    db1 = connect(**db_kwargs)
    cur1 = db1.cursor()
    cur2 = con.cursor()
    cur1.execute("select pg_backend_pid()")
    pid1 = cur1.fetchone()[0]

    cur2.execute("select pg_terminate_backend(%s)", (pid1,))
    with pytest.raises(InterfaceError, match="network error"):
        cur1.execute("select 1")

    try:
        db1.close()
    except InterfaceError:
        pass


def test_broken_pipe_flush(con, db_kwargs):
    db1 = connect(**db_kwargs)
    cur1 = db1.cursor()
    cur2 = con.cursor()
    cur1.execute("select pg_backend_pid()")
    pid1 = cur1.fetchone()[0]

    cur2.execute("select pg_terminate_backend(%s)", (pid1,))
    try:
        cur1.execute("select 1")
    except BaseException:
        pass

    # Sometimes raises and sometime doesn't
    try:
        db1.close()
    except InterfaceError as e:
        assert str(e) == "network error"


def test_broken_pipe_unpack(con):
    cur = con.cursor()
    cur.execute("select pg_backend_pid()")
    pid1 = cur.fetchone()[0]

    with pytest.raises(InterfaceError, match="network error"):
        cur.execute("select pg_terminate_backend(%s)", (pid1,))


def test_py_value_fail(con, mocker):
    # Ensure that if types.py_value throws an exception, the original
    # exception is raised (PG8000TestException), and the connection is
    # still usable after the error.
    class PG8000TestException(Exception):
        pass

    def raise_exception(val):
        raise PG8000TestException("oh noes!")

    mocker.patch.object(con, "py_types")
    con.py_types = {datetime.time: raise_exception}

    with pytest.raises(PG8000TestException):
        c = con.cursor()
        c.execute("SELECT CAST(%s AS TIME) AS f1", (datetime.time(10, 30),))
        c.fetchall()

    # ensure that the connection is still usable for a new query
    c.execute("VALUES ('hw3'::text)")
    assert c.fetchone()[0] == "hw3"


def test_no_data_error_recovery(con):
    for i in range(1, 4):
        with pytest.raises(DatabaseError) as e:
            c = con.cursor()
            c.execute("DROP TABLE t1")
        assert e.value.args[0]["C"] == "42P01"
        con.rollback()


def test_closed_connection(db_kwargs):
    warnings.simplefilter("ignore")
    my_db = connect(**db_kwargs)
    cursor = my_db.cursor()
    my_db.close()
    with pytest.raises(my_db.InterfaceError, match="connection is closed"):
        cursor.execute("VALUES ('hw1'::text)")

    warnings.resetwarnings()


def test_version():
    try:
        from importlib.metadata import version
    except ImportError:
        from importlib_metadata import version

    ver = version("pg8000")

    assert __version__ == ver


@pytest.mark.parametrize(
    "commit",
    [
        "commit",
        "COMMIT;",
    ],
)
def test_failed_transaction_commit_sql(cursor, commit):
    cursor.execute("create temporary table tt (f1 int primary key)")
    cursor.execute("begin")
    try:
        cursor.execute("insert into tt(f1) values(null)")
    except DatabaseError:
        pass

    with pytest.raises(InterfaceError):
        cursor.execute(commit)


def test_failed_transaction_commit_method(con, cursor):
    cursor.execute("create temporary table tt (f1 int primary key)")
    cursor.execute("begin")
    try:
        cursor.execute("insert into tt(f1) values(null)")
    except DatabaseError:
        pass

    with pytest.raises(InterfaceError):
        con.commit()


@pytest.mark.parametrize(
    "rollback",
    [
        "rollback",
        "rollback;",
        "ROLLBACK ;",
    ],
)
def test_failed_transaction_rollback_sql(cursor, rollback):
    cursor.execute("create temporary table tt (f1 int primary key)")
    cursor.execute("begin")
    try:
        cursor.execute("insert into tt(f1) values(null)")
    except DatabaseError:
        pass

    cursor.execute(rollback)


def test_failed_transaction_rollback_method(cursor, con):
    cursor.execute("create temporary table tt (f1 int primary key)")
    cursor.execute("begin")
    try:
        cursor.execute("insert into tt(f1) values(null)")
    except DatabaseError:
        pass

    con.rollback()


@pytest.mark.parametrize(
    "sql",
    [
        "BEGIN",
        "select * from tt;",
    ],
)
def test_failed_transaction_sql(cursor, sql):
    cursor.execute("create temporary table tt (f1 int primary key)")
    cursor.execute("begin")
    try:
        cursor.execute("insert into tt(f1) values(null)")
    except DatabaseError:
        pass

    with pytest.raises(DatabaseError):
        cursor.execute(sql)
# pg8000-1.30.3/test/dbapi/test_copy.py
from io import BytesIO
import pytest
@pytest.fixture
def db_table(request, con):
cursor = con.cursor()
cursor.execute(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, "
"f2 int not null, f3 varchar(50) null) on commit drop"
)
return con
def test_copy_to_with_table(db_table):
cursor = db_table.cursor()
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, 1))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 2, 2))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 3, 3))
stream = BytesIO()
cursor.execute("copy t1 to stdout", stream=stream)
assert stream.getvalue() == b"1\t1\t1\n2\t2\t2\n3\t3\t3\n"
assert cursor.rowcount == 3
def test_copy_to_with_query(db_table):
cursor = db_table.cursor()
stream = BytesIO()
cursor.execute(
"COPY (SELECT 1 as One, 2 as Two) TO STDOUT WITH DELIMITER "
"'X' CSV HEADER QUOTE AS 'Y' FORCE QUOTE Two",
stream=stream,
)
assert stream.getvalue() == b"oneXtwo\n1XY2Y\n"
assert cursor.rowcount == 1
def test_copy_from_with_table(db_table):
cursor = db_table.cursor()
stream = BytesIO(b"1\t1\t1\n2\t2\t2\n3\t3\t3\n")
cursor.execute("copy t1 from STDIN", stream=stream)
assert cursor.rowcount == 3
cursor.execute("SELECT * FROM t1 ORDER BY f1")
retval = cursor.fetchall()
assert retval == ([1, 1, "1"], [2, 2, "2"], [3, 3, "3"])
def test_copy_from_with_query(db_table):
cursor = db_table.cursor()
stream = BytesIO(b"f1Xf2\n1XY1Y\n")
cursor.execute(
"COPY t1 (f1, f2) FROM STDIN WITH DELIMITER 'X' CSV HEADER "
"QUOTE AS 'Y' FORCE NOT NULL f1",
stream=stream,
)
assert cursor.rowcount == 1
cursor.execute("SELECT * FROM t1 ORDER BY f1")
retval = cursor.fetchall()
assert retval == ([1, 1, None],)
def test_copy_from_with_error(db_table):
cursor = db_table.cursor()
stream = BytesIO(b"f1Xf2\n\n1XY1Y\n")
with pytest.raises(BaseException) as e:
cursor.execute(
"COPY t1 (f1, f2) FROM STDIN WITH DELIMITER 'X' CSV HEADER "
"QUOTE AS 'Y' FORCE NOT NULL f1",
stream=stream,
)
arg = {
"S": ("ERROR",),
"C": ("22P02",),
"M": (
'invalid input syntax for type integer: ""',
'invalid input syntax for integer: ""',
),
"W": ('COPY t1, line 2, column f1: ""',),
"F": ("numutils.c",),
"R": ("pg_atoi", "pg_strtoint32"),
}
earg = e.value.args[0]
for k, v in arg.items():
assert earg[k] in v
# pg8000-1.30.3/test/dbapi/test_dbapi.py
import datetime
import os
import time
import pytest
from pg8000.dbapi import (
BINARY,
Binary,
Date,
DateFromTicks,
Time,
TimeFromTicks,
Timestamp,
TimestampFromTicks,
)
@pytest.fixture
def has_tzset():
# Neither Windows nor Jython 2.5.3 has a time.tzset(), so skip
if hasattr(time, "tzset"):
os.environ["TZ"] = "UTC"
time.tzset()
return True
return False
# DBAPI compatible interface tests
@pytest.fixture
def db_table(con, has_tzset):
c = con.cursor()
c.execute(
"CREATE TEMPORARY TABLE t1 "
"(f1 int primary key, f2 int not null, f3 varchar(50) null) "
"ON COMMIT DROP"
)
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 10, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 100, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (4, 1000, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (5, 10000, None))
return con
def test_parallel_queries(db_table):
c1 = db_table.cursor()
c2 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1")
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
c2.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %s", (f1,))
while 1:
row = c2.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_qmark(mocker, db_table):
mocker.patch("pg8000.dbapi.paramstyle", "qmark")
c1 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > ?", (3,))
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_numeric(mocker, db_table):
mocker.patch("pg8000.dbapi.paramstyle", "numeric")
c1 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > :1", (3,))
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_named(mocker, db_table):
mocker.patch("pg8000.dbapi.paramstyle", "named")
c1 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > :f1", {"f1": 3})
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_format(mocker, db_table):
mocker.patch("pg8000.dbapi.paramstyle", "format")
c1 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %s", (3,))
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_pyformat(mocker, db_table):
mocker.patch("pg8000.dbapi.paramstyle", "pyformat")
c1 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %(f1)s", {"f1": 3})
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_arraysize(db_table):
c1 = db_table.cursor()
c1.arraysize = 3
c1.execute("SELECT * FROM t1")
retval = c1.fetchmany()
assert len(retval) == c1.arraysize
def test_date():
val = Date(2001, 2, 3)
assert val == datetime.date(2001, 2, 3)
def test_time():
val = Time(4, 5, 6)
assert val == datetime.time(4, 5, 6)
def test_timestamp():
val = Timestamp(2001, 2, 3, 4, 5, 6)
assert val == datetime.datetime(2001, 2, 3, 4, 5, 6)
def test_date_from_ticks(has_tzset):
if has_tzset:
val = DateFromTicks(1173804319)
assert val == datetime.date(2007, 3, 13)
def test_time_from_ticks(has_tzset):
if has_tzset:
val = TimeFromTicks(1173804319)
assert val == datetime.time(16, 45, 19)
def test_timestamp_from_ticks(has_tzset):
if has_tzset:
val = TimestampFromTicks(1173804319)
assert val == datetime.datetime(2007, 3, 13, 16, 45, 19)
def test_binary():
v = Binary(b"\x00\x01\x02\x03\x02\x01\x00")
assert v == b"\x00\x01\x02\x03\x02\x01\x00"
assert isinstance(v, BINARY)
def test_row_count(db_table):
c1 = db_table.cursor()
c1.execute("SELECT * FROM t1")
assert 5 == c1.rowcount
c1.execute("UPDATE t1 SET f3 = %s WHERE f2 > 101", ("Hello!",))
assert 2 == c1.rowcount
c1.execute("DELETE FROM t1")
assert 5 == c1.rowcount
def test_fetch_many(db_table):
cursor = db_table.cursor()
cursor.arraysize = 2
cursor.execute("SELECT * FROM t1")
assert 2 == len(cursor.fetchmany())
assert 2 == len(cursor.fetchmany())
assert 1 == len(cursor.fetchmany())
assert 0 == len(cursor.fetchmany())
def test_iterator(db_table):
cursor = db_table.cursor()
cursor.execute("SELECT * FROM t1 ORDER BY f1")
f1 = 0
for row in cursor.fetchall():
next_f1 = row[0]
assert next_f1 > f1
f1 = next_f1
# Vacuum can't be run inside a transaction, so we need to turn
# autocommit on.
def test_vacuum(con):
con.autocommit = True
cursor = con.cursor()
cursor.execute("vacuum")
def test_prepared_statement(con):
cursor = con.cursor()
cursor.execute("PREPARE gen_series AS SELECT generate_series(1, 10);")
cursor.execute("EXECUTE gen_series")
def test_cursor_type(cursor):
    assert str(type(cursor)) == "<class 'pg8000.dbapi.Cursor'>"
# pg8000-1.30.3/test/dbapi/test_dbapi20.py
import time
import warnings
import pytest
import pg8000
""" Python DB API 2.0 driver compliance unit test suite.
This software is Public Domain and may be used without restrictions.
"Now we have booze and barflies entering the discussion, plus rumours of
DBAs on drugs... and I won't tell you what flashes through my mind each
time I read the subject line with 'Anal Compliance' in it. All around
this is turning out to be a thoroughly unwholesome unit test."
-- Ian Bicking
"""
__rcs_id__ = "$Id: dbapi20.py,v 1.10 2003/10/09 03:14:14 zenzen Exp $"
__version__ = "$Revision: 1.10 $"[11:-2]
__author__ = "Stuart Bishop "
# $Log: dbapi20.py,v $
# Revision 1.10 2003/10/09 03:14:14 zenzen
# Add test for DB API 2.0 optional extension, where database exceptions
# are exposed as attributes on the Connection object.
#
# Revision 1.9 2003/08/13 01:16:36 zenzen
# Minor tweak from Stefan Fleiter
#
# Revision 1.8 2003/04/10 00:13:25 zenzen
# Changes, as per suggestions by M.-A. Lemburg
# - Add a table prefix, to ensure namespace collisions can always be avoided
#
# Revision 1.7 2003/02/26 23:33:37 zenzen
# Break out DDL into helper functions, as per request by David Rushby
#
# Revision 1.6 2003/02/21 03:04:33 zenzen
# Stuff from Henrik Ekelund:
# added test_None
# added test_nextset & hooks
#
# Revision 1.5 2003/02/17 22:08:43 zenzen
# Implement suggestions and code from Henrik Eklund - test that
# cursor.arraysize defaults to 1 & generic cursor.callproc test added
#
# Revision 1.4 2003/02/15 00:16:33 zenzen
# Changes, as per suggestions and bug reports by M.-A. Lemburg,
# Matthew T. Kromer, Federico Di Gregorio and Daniel Dittmar
# - Class renamed
# - Now a subclass of TestCase, to avoid requiring the driver stub
# to use multiple inheritance
# - Reversed the polarity of buggy test in test_description
# - Test exception hierarchy correctly
# - self.populate is now self._populate(), so if a driver stub
# overrides self.ddl1 this change propagates
# - VARCHAR columns now have a width, which will hopefully make the
#   DDL even more portable (this will be reversed if it causes more problems)
# - cursor.rowcount being checked after various execute and fetchXXX methods
# - Check for fetchall and fetchmany returning empty lists after results
# are exhausted (already checking for empty lists if select retrieved
#   nothing)
# - Fix bugs in test_setoutputsize_basic and test_setinputsizes
#
""" Test a database self.driver for DB API 2.0 compatibility.
This implementation tests Gadfly, but the TestCase
is structured so that other self.drivers can subclass this
test case to ensure compliance with the DB-API. It is
expected that this TestCase may be expanded in the future
if ambiguities or edge conditions are discovered.
The 'Optional Extensions' are not yet being tested.
self.drivers should subclass this test, overriding setUp, tearDown,
self.driver, connect_args and connect_kw_args. Class specification
should be as follows:
import dbapi20
class mytest(dbapi20.DatabaseAPI20Test):
[...]
Don't 'import DatabaseAPI20Test from dbapi20', or you will
confuse the unit tester - just 'import dbapi20'.
"""
# The self.driver module. This should be the module where the 'connect'
# method is to be found
driver = pg8000
table_prefix = "dbapi20test_" # If you need to specify a prefix for tables
ddl1 = "create table %sbooze (name varchar(20))" % table_prefix
ddl2 = "create table %sbarflys (name varchar(20))" % table_prefix
xddl1 = "drop table %sbooze" % table_prefix
xddl2 = "drop table %sbarflys" % table_prefix
# Name of stored procedure to convert
# string->lowercase
lowerfunc = "lower"
# Some drivers may need to override these helpers, for example adding
# a 'commit' after the execute.
def executeDDL1(cursor):
cursor.execute(ddl1)
def executeDDL2(cursor):
cursor.execute(ddl2)
@pytest.fixture
def db(request, con):
def fin():
with con.cursor() as cur:
for ddl in (xddl1, xddl2):
try:
cur.execute(ddl)
con.commit()
except driver.Error:
# Assume table didn't exist. Other tests will check if
# execute is busted.
pass
request.addfinalizer(fin)
return con
def test_apilevel():
# Must exist
apilevel = driver.apilevel
# Must equal 2.0
assert apilevel == "2.0"
def test_threadsafety():
try:
# Must exist
threadsafety = driver.threadsafety
# Must be a valid value
assert threadsafety in (0, 1, 2, 3)
except AttributeError:
assert False, "Driver doesn't define threadsafety"
def test_paramstyle():
try:
# Must exist
paramstyle = driver.paramstyle
# Must be a valid value
assert paramstyle in ("qmark", "numeric", "named", "format", "pyformat")
except AttributeError:
assert False, "Driver doesn't define paramstyle"
def test_Exceptions():
# Make sure required exceptions exist, and are in the
# defined hierarchy.
assert issubclass(driver.Warning, Exception)
assert issubclass(driver.Error, Exception)
assert issubclass(driver.InterfaceError, driver.Error)
assert issubclass(driver.DatabaseError, driver.Error)
assert issubclass(driver.OperationalError, driver.Error)
assert issubclass(driver.IntegrityError, driver.Error)
assert issubclass(driver.InternalError, driver.Error)
assert issubclass(driver.ProgrammingError, driver.Error)
assert issubclass(driver.NotSupportedError, driver.Error)
def test_ExceptionsAsConnectionAttributes(con):
# OPTIONAL EXTENSION
# Test for the optional DB API 2.0 extension, where the exceptions
# are exposed as attributes on the Connection object
# I figure this optional extension will be implemented by any
# driver author who is using this test suite, so it is enabled
# by default.
warnings.simplefilter("ignore")
drv = driver
assert con.Warning is drv.Warning
assert con.Error is drv.Error
assert con.InterfaceError is drv.InterfaceError
assert con.DatabaseError is drv.DatabaseError
assert con.OperationalError is drv.OperationalError
assert con.IntegrityError is drv.IntegrityError
assert con.InternalError is drv.InternalError
assert con.ProgrammingError is drv.ProgrammingError
assert con.NotSupportedError is drv.NotSupportedError
warnings.resetwarnings()
def test_commit(con):
# Commit must work, even if it doesn't do anything
con.commit()
def test_rollback(con):
# If rollback is defined, it should either work or throw
# the documented exception
if hasattr(con, "rollback"):
try:
con.rollback()
except driver.NotSupportedError:
pass
def test_cursor(con):
con.cursor()
def test_cursor_isolation(con):
# Make sure cursors created from the same connection have
# the documented transaction isolation level
cur1 = con.cursor()
cur2 = con.cursor()
executeDDL1(cur1)
cur1.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
cur2.execute("select name from %sbooze" % table_prefix)
booze = cur2.fetchall()
assert len(booze) == 1
assert len(booze[0]) == 1
assert booze[0][0] == "Victoria Bitter"
def test_description(con):
cur = con.cursor()
executeDDL1(cur)
assert cur.description is None, (
"cursor.description should be none after executing a "
"statement that can return no rows (such as DDL)"
)
cur.execute("select name from %sbooze" % table_prefix)
assert len(cur.description) == 1, "cursor.description describes too many columns"
assert (
len(cur.description[0]) == 7
), "cursor.description[x] tuples must have 7 elements"
assert (
cur.description[0][0].lower() == "name"
), "cursor.description[x][0] must return column name"
assert cur.description[0][1] == driver.STRING, (
"cursor.description[x][1] must return column type. Got %r"
% cur.description[0][1]
)
# Make sure self.description gets reset
executeDDL2(cur)
assert cur.description is None, (
"cursor.description not being set to None when executing "
"no-result statements (eg. DDL)"
)
def test_rowcount(cursor):
executeDDL1(cursor)
assert cursor.rowcount == -1, (
"cursor.rowcount should be -1 after executing no-result " "statements"
)
cursor.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
assert cursor.rowcount in (-1, 1), (
"cursor.rowcount should == number or rows inserted, or "
"set to -1 after executing an insert statement"
)
cursor.execute("select name from %sbooze" % table_prefix)
assert cursor.rowcount in (-1, 1), (
"cursor.rowcount should == number of rows returned, or "
"set to -1 after executing a select statement"
)
executeDDL2(cursor)
assert cursor.rowcount == -1, (
"cursor.rowcount not being reset to -1 after executing " "no-result statements"
)
def test_close(con):
cur = con.cursor()
con.close()
# cursor.execute should raise an Error if called after connection
# closed
with pytest.raises(driver.Error):
executeDDL1(cur)
# connection.commit should raise an Error if called after connection'
# closed.'
with pytest.raises(driver.Error):
con.commit()
# connection.close should raise an Error if called more than once
with pytest.raises(driver.Error):
con.close()
def test_execute(con):
cur = con.cursor()
_paraminsert(cur)
def _paraminsert(cur):
executeDDL1(cur)
cur.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
assert cur.rowcount in (-1, 1)
if driver.paramstyle == "qmark":
cur.execute("insert into %sbooze values (?)" % table_prefix, ("Cooper's",))
elif driver.paramstyle == "numeric":
cur.execute("insert into %sbooze values (:1)" % table_prefix, ("Cooper's",))
elif driver.paramstyle == "named":
cur.execute(
"insert into %sbooze values (:beer)" % table_prefix, {"beer": "Cooper's"}
)
elif driver.paramstyle == "format":
cur.execute("insert into %sbooze values (%%s)" % table_prefix, ("Cooper's",))
elif driver.paramstyle == "pyformat":
cur.execute(
"insert into %sbooze values (%%(beer)s)" % table_prefix,
{"beer": "Cooper's"},
)
else:
assert False, "Invalid paramstyle"
assert cur.rowcount in (-1, 1)
cur.execute("select name from %sbooze" % table_prefix)
res = cur.fetchall()
assert len(res) == 2, "cursor.fetchall returned too few rows"
beers = [res[0][0], res[1][0]]
beers.sort()
assert beers[0] == "Cooper's", (
"cursor.fetchall retrieved incorrect data, or data inserted " "incorrectly"
)
assert beers[1] == "Victoria Bitter", (
"cursor.fetchall retrieved incorrect data, or data inserted " "incorrectly"
)
def test_executemany(cursor):
executeDDL1(cursor)
largs = [("Cooper's",), ("Boag's",)]
margs = [{"beer": "Cooper's"}, {"beer": "Boag's"}]
if driver.paramstyle == "qmark":
cursor.executemany("insert into %sbooze values (?)" % table_prefix, largs)
elif driver.paramstyle == "numeric":
cursor.executemany("insert into %sbooze values (:1)" % table_prefix, largs)
elif driver.paramstyle == "named":
cursor.executemany("insert into %sbooze values (:beer)" % table_prefix, margs)
elif driver.paramstyle == "format":
cursor.executemany("insert into %sbooze values (%%s)" % table_prefix, largs)
elif driver.paramstyle == "pyformat":
cursor.executemany(
"insert into %sbooze values (%%(beer)s)" % (table_prefix), margs
)
else:
assert False, "Unknown paramstyle"
assert cursor.rowcount in (-1, 2), (
"insert using cursor.executemany set cursor.rowcount to "
"incorrect value %r" % cursor.rowcount
)
cursor.execute("select name from %sbooze" % table_prefix)
res = cursor.fetchall()
assert len(res) == 2, "cursor.fetchall retrieved incorrect number of rows"
beers = [res[0][0], res[1][0]]
beers.sort()
assert beers[0] == "Boag's", "incorrect data retrieved"
assert beers[1] == "Cooper's", "incorrect data retrieved"
def test_fetchone(cursor):
# cursor.fetchone should raise an Error if called before
# executing a select-type query
with pytest.raises(driver.Error):
cursor.fetchone()
# cursor.fetchone should raise an Error if called after
# executing a query that cannot return rows
executeDDL1(cursor)
with pytest.raises(driver.Error):
cursor.fetchone()
cursor.execute("select name from %sbooze" % table_prefix)
assert cursor.fetchone() is None, (
"cursor.fetchone should return None if a query retrieves " "no rows"
)
assert cursor.rowcount in (-1, 0)
# cursor.fetchone should raise an Error if called after
# executing a query that cannot return rows
cursor.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
with pytest.raises(driver.Error):
cursor.fetchone()
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchone()
assert len(r) == 1, "cursor.fetchone should have retrieved a single row"
assert r[0] == "Victoria Bitter", "cursor.fetchone retrieved incorrect data"
assert (
cursor.fetchone() is None
), "cursor.fetchone should return None if no more rows available"
assert cursor.rowcount in (-1, 1)
samples = [
"Carlton Cold",
"Carlton Draft",
"Mountain Goat",
"Redback",
"Victoria Bitter",
"XXXX",
]
def _populate():
"""Return a list of sql commands to setup the DB for the fetch
tests.
"""
populate = [
"insert into %sbooze values ('%s')" % (table_prefix, s) for s in samples
]
return populate
def test_fetchmany(cursor):
# cursor.fetchmany should raise an Error if called without
# issuing a query
with pytest.raises(driver.Error):
cursor.fetchmany(4)
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchmany()
assert len(r) == 1, (
"cursor.fetchmany retrieved incorrect number of rows, "
"default of arraysize is one."
)
cursor.arraysize = 10
r = cursor.fetchmany(3) # Should get 3 rows
assert len(r) == 3, "cursor.fetchmany retrieved incorrect number of rows"
r = cursor.fetchmany(4) # Should get 2 more
assert len(r) == 2, "cursor.fetchmany retrieved incorrect number of rows"
r = cursor.fetchmany(4) # Should be an empty sequence
assert len(r) == 0, (
"cursor.fetchmany should return an empty sequence after "
"results are exhausted"
)
assert cursor.rowcount in (-1, 6)
# Same as above, using cursor.arraysize
cursor.arraysize = 4
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchmany() # Should get 4 rows
assert len(r) == 4, "cursor.arraysize not being honoured by fetchmany"
r = cursor.fetchmany() # Should get 2 more
assert len(r) == 2
r = cursor.fetchmany() # Should be an empty sequence
assert len(r) == 0
assert cursor.rowcount in (-1, 6)
cursor.arraysize = 6
cursor.execute("select name from %sbooze" % table_prefix)
rows = cursor.fetchmany() # Should get all rows
assert cursor.rowcount in (-1, 6)
    assert len(rows) == 6
rows = [row[0] for row in rows]
rows.sort()
# Make sure we get the right data back out
for i in range(0, 6):
assert rows[i] == samples[i], "incorrect data retrieved by cursor.fetchmany"
rows = cursor.fetchmany() # Should return an empty list
assert len(rows) == 0, (
"cursor.fetchmany should return an empty sequence if "
"called after the whole result set has been fetched"
)
assert cursor.rowcount in (-1, 6)
executeDDL2(cursor)
cursor.execute("select name from %sbarflys" % table_prefix)
r = cursor.fetchmany() # Should get empty sequence
assert len(r) == 0, (
"cursor.fetchmany should return an empty sequence if " "query retrieved no rows"
)
assert cursor.rowcount in (-1, 0)
def test_fetchall(cursor):
# cursor.fetchall should raise an Error if called
# without executing a query that may return rows (such
# as a select)
with pytest.raises(driver.Error):
cursor.fetchall()
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
# cursor.fetchall should raise an Error if called
    # after executing a statement that cannot return rows
with pytest.raises(driver.Error):
cursor.fetchall()
cursor.execute("select name from %sbooze" % table_prefix)
rows = cursor.fetchall()
assert cursor.rowcount in (-1, len(samples))
assert len(rows) == len(samples), "cursor.fetchall did not retrieve all rows"
rows = [r[0] for r in rows]
rows.sort()
for i in range(0, len(samples)):
assert rows[i] == samples[i], "cursor.fetchall retrieved incorrect rows"
rows = cursor.fetchall()
assert len(rows) == 0, (
"cursor.fetchall should return an empty list if called "
"after the whole result set has been fetched"
)
assert cursor.rowcount in (-1, len(samples))
executeDDL2(cursor)
cursor.execute("select name from %sbarflys" % table_prefix)
rows = cursor.fetchall()
assert cursor.rowcount in (-1, 0)
assert len(rows) == 0, (
"cursor.fetchall should return an empty list if "
"a select query returns no rows"
)
def test_mixedfetch(cursor):
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
cursor.execute("select name from %sbooze" % table_prefix)
rows1 = cursor.fetchone()
rows23 = cursor.fetchmany(2)
rows4 = cursor.fetchone()
rows56 = cursor.fetchall()
assert cursor.rowcount in (-1, 6)
assert len(rows23) == 2, "fetchmany returned incorrect number of rows"
assert len(rows56) == 2, "fetchall returned incorrect number of rows"
rows = [rows1[0]]
rows.extend([rows23[0][0], rows23[1][0]])
rows.append(rows4[0])
rows.extend([rows56[0][0], rows56[1][0]])
rows.sort()
for i in range(0, len(samples)):
assert rows[i] == samples[i], "incorrect data retrieved or inserted"
def help_nextset_setUp(cur):
"""Should create a procedure called deleteme
that returns two result sets, first the
number of rows in booze then "name from booze"
"""
raise NotImplementedError("Helper not implemented")
def help_nextset_tearDown(cur):
"If cleaning up is needed after nextSetTest"
raise NotImplementedError("Helper not implemented")
def test_nextset(cursor):
if not hasattr(cursor, "nextset"):
return
try:
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
help_nextset_setUp(cursor)
cursor.callproc("deleteme")
numberofrows = cursor.fetchone()
assert numberofrows[0] == len(samples)
assert cursor.nextset()
names = cursor.fetchall()
assert len(names) == len(samples)
s = cursor.nextset()
assert s is None, "No more return sets, should return None"
finally:
help_nextset_tearDown(cursor)
def test_arraysize(cursor):
# Not much here - rest of the tests for this are in test_fetchmany
assert hasattr(cursor, "arraysize"), "cursor.arraysize must be defined"
def test_setinputsizes(cursor):
cursor.setinputsizes(25)
def test_setoutputsize_basic(cursor):
# Basic test is to make sure setoutputsize doesn't blow up
cursor.setoutputsize(1000)
cursor.setoutputsize(2000, 0)
_paraminsert(cursor) # Make sure the cursor still works
def test_None(cursor):
executeDDL1(cursor)
cursor.execute("insert into %sbooze values (NULL)" % table_prefix)
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchall()
assert len(r) == 1
assert len(r[0]) == 1
assert r[0][0] is None, "NULL value not returned as None"
def test_Date():
driver.Date(2002, 12, 25)
driver.DateFromTicks(time.mktime((2002, 12, 25, 0, 0, 0, 0, 0, 0)))
# Can we assume this? API doesn't specify, but it seems implied
# self.assertEqual(str(d1),str(d2))
def test_Time():
driver.Time(13, 45, 30)
driver.TimeFromTicks(time.mktime((2001, 1, 1, 13, 45, 30, 0, 0, 0)))
# Can we assume this? API doesn't specify, but it seems implied
# self.assertEqual(str(t1),str(t2))
def test_Timestamp():
driver.Timestamp(2002, 12, 25, 13, 45, 30)
driver.TimestampFromTicks(time.mktime((2002, 12, 25, 13, 45, 30, 0, 0, 0)))
# Can we assume this? API doesn't specify, but it seems implied
# self.assertEqual(str(t1),str(t2))
def test_Binary():
driver.Binary(b"Something")
driver.Binary(b"")
def test_STRING():
assert hasattr(driver, "STRING"), "module.STRING must be defined"
def test_BINARY():
assert hasattr(driver, "BINARY"), "module.BINARY must be defined."
def test_NUMBER():
assert hasattr(driver, "NUMBER"), "module.NUMBER must be defined."
def test_DATETIME():
assert hasattr(driver, "DATETIME"), "module.DATETIME must be defined."
def test_ROWID():
assert hasattr(driver, "ROWID"), "module.ROWID must be defined."
# pg8000-1.30.3/test/dbapi/test_paramstyle.py
import pytest
from pg8000.dbapi import convert_paramstyle
# "(id %% 2) = 0",
@pytest.mark.parametrize(
"query,statement",
[
[
'SELECT ?, ?, "field_?" FROM t '
"WHERE a='say ''what?''' AND b=? AND c=E'?\\'test\\'?'",
'SELECT $1, $2, "field_?" FROM t WHERE '
"a='say ''what?''' AND b=$3 AND c=E'?\\'test\\'?'",
],
[
"SELECT ?, ?, * FROM t WHERE a=? AND b='are you ''sure?'",
"SELECT $1, $2, * FROM t WHERE a=$3 AND b='are you ''sure?'",
],
],
)
def test_qmark(query, statement):
args = 1, 2, 3
new_query, vals = convert_paramstyle("qmark", query, args)
assert (new_query, vals) == (statement, args)
@pytest.mark.parametrize(
"query,expected",
[
[
"SELECT sum(x)::decimal(5, 2) :2, :1, * FROM t WHERE a=:3",
"SELECT sum(x)::decimal(5, 2) $2, $1, * FROM t WHERE a=$3",
],
],
)
def test_numeric(query, expected):
args = 1, 2, 3
new_query, vals = convert_paramstyle("numeric", query, args)
assert (new_query, vals) == (expected, args)
@pytest.mark.parametrize(
"query",
[
"make_interval(days := 10)",
],
)
def test_numeric_unchanged(query):
args = 1, 2, 3
new_query, vals = convert_paramstyle("numeric", query, args)
assert (new_query, vals) == (query, args)
@pytest.mark.parametrize(
"query,args,expected_query,expected_args",
[
[
"SELECT sum(x)::decimal(5, 2) :f_2, :f1 FROM t WHERE a=:f_2",
{"f_2": 1, "f1": 2},
"SELECT sum(x)::decimal(5, 2) $1, $2 FROM t WHERE a=$1",
(1, 2),
],
["SELECT $$'$$ = :v", {"v": "'"}, "SELECT $$'$$ = $1", ("'",)],
],
)
def test_named(query, args, expected_query, expected_args):
new_query, vals = convert_paramstyle("named", query, args)
assert (new_query, vals) == (expected_query, expected_args)
@pytest.mark.parametrize(
"query,expected",
[
[
"SELECT %s, %s, \"f1_%%\", E'txt_%%' "
"FROM t WHERE a=%s AND b='75%%' AND c = '%' -- Comment with %",
"SELECT $1, $2, \"f1_%%\", E'txt_%%' FROM t WHERE a=$3 AND "
"b='75%%' AND c = '%' -- Comment with %",
],
[
"SELECT -- Comment\n%s FROM t",
"SELECT -- Comment\n$1 FROM t",
],
],
)
def test_format_changed(query, expected):
args = 1, 2, 3
new_query, vals = convert_paramstyle("format", query, args)
assert (new_query, vals) == (expected, args)
@pytest.mark.parametrize(
"query",
[
r"""COMMENT ON TABLE test_schema.comment_test """
r"""IS 'the test % '' " \ table comment'""",
],
)
def test_format_unchanged(query):
args = 1, 2, 3
new_query, vals = convert_paramstyle("format", query, args)
assert (new_query, vals) == (query, args)
def test_py_format():
args = {"f2": 1, "f1": 2, "f3": 3}
new_query, vals = convert_paramstyle(
"pyformat",
"SELECT %(f2)s, %(f1)s, \"f1_%%\", E'txt_%%' "
"FROM t WHERE a=%(f2)s AND b='75%%'",
args,
)
expected = "SELECT $1, $2, \"f1_%%\", E'txt_%%' FROM t WHERE a=$1 AND " "b='75%%'"
assert (new_query, vals) == (expected, (1, 2))
def test_pyformat_format():
"""pyformat should support %s and an array, too:"""
args = 1, 2, 3
new_query, vals = convert_paramstyle(
"pyformat",
"SELECT %s, %s, \"f1_%%\", E'txt_%%' " "FROM t WHERE a=%s AND b='75%%'",
args,
)
expected = "SELECT $1, $2, \"f1_%%\", E'txt_%%' FROM t WHERE a=$3 AND " "b='75%%'"
assert (new_query, vals) == (expected, args)
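The tests above exercise pg8000's `convert_paramstyle`, which rewrites `format`/`pyformat` placeholders into PostgreSQL's `$1, $2, ...` numbered parameters. As a rough illustration only (not pg8000's actual implementation, which is a state machine that also respects quoted strings, dollar-quoting, and comments), a naive regex-based sketch of the `format` case looks like this; `to_numbered` is a hypothetical helper:

```python
import re


def to_numbered(query: str) -> str:
    """Naively replace each %s placeholder with $1, $2, ...

    Simplified sketch: unlike pg8000's convert_paramstyle, it does not
    skip placeholders inside quoted strings or SQL comments.
    """
    counter = [0]

    def repl(_match):
        counter[0] += 1
        return f"${counter[0]}"

    # %% is an escaped literal percent sign and must be left alone,
    # hence the negative lookbehind.
    return re.sub(r"(?<!%)%s", repl, query)
```

For example, `to_numbered("SELECT %s FROM t WHERE a=%s AND b='75%%'")` yields `"SELECT $1 FROM t WHERE a=$2 AND b='75%%'"`.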
# pg8000-1.30.3/test/dbapi/test_query.py
from datetime import datetime as Datetime, timezone as Timezone
import pytest
import pg8000.dbapi
from pg8000.converters import INET_ARRAY, INTEGER
# Tests relating to the basic operation of the database driver, driven by the
# pg8000 custom interface.
@pytest.fixture
def db_table(request, con):
con.paramstyle = "format"
cursor = con.cursor()
cursor.execute(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, "
"f2 bigint not null, f3 varchar(50) null) "
)
def fin():
try:
cursor = con.cursor()
cursor.execute("drop table t1")
except pg8000.dbapi.DatabaseError:
pass
request.addfinalizer(fin)
return con
def test_database_error(cursor):
with pytest.raises(pg8000.dbapi.DatabaseError):
cursor.execute("INSERT INTO t99 VALUES (1, 2, 3)")
def test_parallel_queries(db_table):
cursor = db_table.cursor()
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 10, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 100, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (4, 1000, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (5, 10000, None))
c1 = db_table.cursor()
c2 = db_table.cursor()
c1.execute("SELECT f1, f2, f3 FROM t1")
for row in c1.fetchall():
f1, f2, f3 = row
c2.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %s", (f1,))
for row in c2.fetchall():
f1, f2, f3 = row
def test_parallel_open_portals(con):
c1 = con.cursor()
c2 = con.cursor()
c1count, c2count = 0, 0
q = "select * from generate_series(1, %s)"
params = (100,)
c1.execute(q, params)
c2.execute(q, params)
for c2row in c2.fetchall():
c2count += 1
for c1row in c1.fetchall():
c1count += 1
assert c1count == c2count
# Run a query on a table, alter the structure of the table, then run the
# original query again.
def test_alter(db_table):
cursor = db_table.cursor()
cursor.execute("select * from t1")
cursor.execute("alter table t1 drop column f3")
cursor.execute("select * from t1")
# Run a query on a table, drop then re-create the table, then run the
# original query again.
def test_create(db_table):
cursor = db_table.cursor()
cursor.execute("select * from t1")
cursor.execute("drop table t1")
cursor.execute("create temporary table t1 (f1 int primary key)")
cursor.execute("select * from t1")
def test_insert_returning(db_table):
cursor = db_table.cursor()
cursor.execute("CREATE TEMPORARY TABLE t2 (id serial, data text)")
# Test INSERT ... RETURNING with one row...
cursor.execute("INSERT INTO t2 (data) VALUES (%s) RETURNING id", ("test1",))
row_id = cursor.fetchone()[0]
cursor.execute("SELECT data FROM t2 WHERE id = %s", (row_id,))
assert "test1" == cursor.fetchone()[0]
assert cursor.rowcount == 1
# Test with multiple rows...
cursor.execute(
"INSERT INTO t2 (data) VALUES (%s), (%s), (%s) " "RETURNING id",
("test2", "test3", "test4"),
)
assert cursor.rowcount == 3
ids = tuple([x[0] for x in cursor.fetchall()])
assert len(ids) == 3
def test_row_count(db_table):
cursor = db_table.cursor()
expected_count = 57
cursor.executemany(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)",
tuple((i, i, None) for i in range(expected_count)),
)
# Check rowcount after executemany
assert expected_count == cursor.rowcount
cursor.execute("SELECT * FROM t1")
# Check row_count without doing any reading first...
assert expected_count == cursor.rowcount
# Check rowcount after reading some rows, make sure it still
# works...
for i in range(expected_count // 2):
cursor.fetchone()
assert expected_count == cursor.rowcount
cursor = db_table.cursor()
# Restart the cursor, read a few rows, and then check rowcount
# again...
cursor.execute("SELECT * FROM t1")
for i in range(expected_count // 3):
cursor.fetchone()
assert expected_count == cursor.rowcount
# Should be -1 for a command with no results
cursor.execute("DROP TABLE t1")
assert -1 == cursor.rowcount
def test_row_count_update(db_table):
cursor = db_table.cursor()
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 10, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 100, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (4, 1000, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (5, 10000, None))
cursor.execute("UPDATE t1 SET f3 = %s WHERE f2 > 101", ("Hello!",))
assert cursor.rowcount == 2
def test_int_oid(cursor):
# https://bugs.launchpad.net/pg8000/+bug/230796
cursor.execute("SELECT typname FROM pg_type WHERE oid = %s", (100,))
def test_unicode_query(cursor):
cursor.execute(
"CREATE TEMPORARY TABLE \u043c\u0435\u0441\u0442\u043e "
"(\u0438\u043c\u044f VARCHAR(50), "
"\u0430\u0434\u0440\u0435\u0441 VARCHAR(250))"
)
def test_executemany(db_table):
cursor = db_table.cursor()
cursor.executemany(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)",
((1, 1, "Avast ye!"), (2, 1, None)),
)
cursor.executemany(
"select CAST(%s AS TIMESTAMP)",
((Datetime(2014, 5, 7, tzinfo=Timezone.utc),), (Datetime(2014, 5, 7),)),
)
def test_executemany_setinputsizes(cursor):
"""Make sure that setinputsizes works for all the parameter sets"""
cursor.execute(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, f2 inet[] not null) "
)
cursor.setinputsizes(INTEGER, INET_ARRAY)
cursor.executemany(
"INSERT INTO t1 (f1, f2) VALUES (%s, %s)", ((1, ["1.1.1.1"]), (2, ["0.0.0.0"]))
)
def test_executemany_no_param_sets(cursor):
cursor.executemany("INSERT INTO t1 (f1, f2) VALUES (%s, %s)", [])
assert cursor.rowcount == -1
# Check that autocommit stays off
# We keep track of whether we're in a transaction or not by using the
# READY_FOR_QUERY message.
def test_transactions(db_table):
cursor = db_table.cursor()
cursor.execute("commit")
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, "Zombie"))
cursor.execute("rollback")
cursor.execute("select * from t1")
assert cursor.rowcount == 0
def test_in(cursor):
cursor.execute("SELECT typname FROM pg_type WHERE oid = any(%s)", ([16, 23],))
ret = cursor.fetchall()
assert ret[0][0] == "bool"
def test_no_previous_tpc(con):
con.tpc_begin("Stacey")
cursor = con.cursor()
cursor.execute("SELECT * FROM pg_type")
con.tpc_commit()
# Check that tpc_recover() doesn't start a transaction
def test_tpc_recover(con):
con.tpc_recover()
cursor = con.cursor()
con.autocommit = True
# If tpc_recover() has started a transaction, this will fail
cursor.execute("VACUUM")
def test_tpc_prepare(con):
xid = "Stacey"
con.tpc_begin(xid)
con.tpc_prepare()
con.tpc_rollback(xid)
def test_empty_query(cursor):
"""No exception thrown"""
cursor.execute("")
# rolling back when not in a transaction doesn't generate a warning
def test_rollback_no_transaction(con):
# Remove any existing notices
con.notices.clear()
# First, verify that a raw rollback does produce a notice
con.execute_unnamed("rollback")
assert 1 == len(con.notices)
    # 25P01 is the code for no_active_sql_transaction. It has
    # a message and severity name, but those might be
    # localized/depend on the server version.
assert con.notices.pop().get(b"C") == b"25P01"
# Now going through the rollback method doesn't produce
# any notices because it knows we're not in a transaction.
con.rollback()
assert 0 == len(con.notices)
@pytest.mark.parametrize("sizes,oids", [([0], [0]), ([float], [701])])
def test_setinputsizes(con, sizes, oids):
cursor = con.cursor()
cursor.setinputsizes(*sizes)
assert cursor._input_oids == oids
cursor.execute("select %s", (None,))
retval = cursor.fetchall()
assert retval[0][0] is None
def test_unexecuted_cursor_rowcount(con):
cursor = con.cursor()
assert cursor.rowcount == -1
def test_unexecuted_cursor_description(con):
cursor = con.cursor()
assert cursor.description is None
def test_callproc(pg_version, cursor):
if pg_version > 10:
cursor.execute(
"""
CREATE PROCEDURE echo(INOUT val text)
LANGUAGE plpgsql AS
$proc$
BEGIN
END
$proc$;
"""
)
cursor.callproc("echo", ["hello"])
assert cursor.fetchall() == (["hello"],)
def test_null_result(db_table):
cur = db_table.cursor()
cur.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, "a"))
with pytest.raises(pg8000.dbapi.ProgrammingError):
cur.fetchall()
def test_not_parsed_if_no_params(mocker, cursor):
mock_convert_paramstyle = mocker.patch("pg8000.dbapi.convert_paramstyle")
cursor.execute("ROLLBACK")
mock_convert_paramstyle.assert_not_called()
# pg8000-1.30.3/test/dbapi/test_typeconversion.py
import decimal
import ipaddress
import os
import time
import uuid
from collections import OrderedDict
from datetime import (
date as Date,
datetime as Datetime,
time as Time,
timedelta as Timedelta,
timezone as Timezone,
)
from enum import Enum
from json import dumps
import pytest
import pytz
import pg8000.dbapi
from pg8000.converters import (
INTERVAL,
PGInterval,
interval_in,
pg_interval_in,
pg_interval_out,
)
# Type conversion tests
def test_time_roundtrip(cursor):
t = Time(4, 5, 6)
cursor.execute("SELECT cast(%s as time) as f1", (t,))
assert cursor.fetchall()[0][0] == t
def test_date_roundtrip(cursor):
v = Date(2001, 2, 3)
cursor.execute("SELECT cast(%s as date) as f1", (v,))
assert cursor.fetchall()[0][0] == v
def test_bool_roundtrip(cursor):
b = True
cursor.execute("SELECT cast(%s as bool) as f1", (b,))
assert cursor.fetchall()[0][0] is b
def test_null_roundtrip(cursor):
cursor.execute("SELECT %s", (None,))
assert cursor.fetchall()[0][0] is None
def test_decimal_roundtrip(cursor):
values = ("1.1", "-1.1", "10000", "20000", "-1000000000.123456789", "1.0", "12.44")
for v in values:
cursor.execute("SELECT CAST(%s AS NUMERIC)", (decimal.Decimal(v),))
retval = cursor.fetchall()
assert str(retval[0][0]) == v
def test_float_roundtrip(cursor):
val = 1.756e-12
cursor.execute("SELECT cast(%s as double precision)", (val,))
assert cursor.fetchall()[0][0] == val
def test_float_plus_infinity_roundtrip(cursor):
v = float("inf")
cursor.execute("SELECT cast(%s as double precision)", (v,))
assert cursor.fetchall()[0][0] == v
def test_str_roundtrip(cursor):
v = "hello world"
cursor.execute("create temporary table test_str (f character varying(255))")
cursor.execute("INSERT INTO test_str VALUES (%s)", (v,))
cursor.execute("SELECT * from test_str")
assert cursor.fetchall()[0][0] == v
def test_str_then_int(cursor):
v1 = "hello world"
cursor.execute("SELECT cast(%s as varchar) as f1", (v1,))
assert cursor.fetchall()[0][0] == v1
v2 = 1
cursor.execute("SELECT cast(%s as varchar) as f1", (v2,))
assert cursor.fetchall()[0][0] == str(v2)
def test_unicode_roundtrip(cursor):
v = "hello \u0173 world"
cursor.execute("SELECT cast(%s as varchar) as f1", (v,))
assert cursor.fetchall()[0][0] == v
def test_long_roundtrip(cursor):
v = 50000000000000
cursor.execute("SELECT cast(%s as bigint)", (v,))
assert cursor.fetchall()[0][0] == v
def test_int_execute_many_select(cursor):
cursor.executemany("SELECT CAST(%s AS INTEGER)", ((1,), (40000,)))
cursor.fetchall()
def test_int_execute_many_insert(cursor):
v = ([None], [4])
cursor.execute("create temporary table test_int (f integer)")
cursor.executemany("INSERT INTO test_int VALUES (%s)", v)
cursor.execute("SELECT * from test_int")
assert cursor.fetchall() == v
def test_insert_null(cursor):
v = None
cursor.execute("CREATE TEMPORARY TABLE test_int (f INTEGER)")
cursor.execute("INSERT INTO test_int VALUES (%s)", (v,))
cursor.execute("SELECT * FROM test_int")
assert cursor.fetchall()[0][0] == v
def test_int_roundtrip(con, cursor):
int2 = 21
int4 = 23
int8 = 20
MAP = {
int2: "int2",
int4: "int4",
int8: "int8",
}
test_values = [
(0, int2),
(-32767, int2),
(-32768, int4),
(+32767, int2),
(+32768, int4),
(-2147483647, int4),
(-2147483648, int8),
(+2147483647, int4),
(+2147483648, int8),
(-9223372036854775807, int8),
(+9223372036854775807, int8),
]
for value, typoid in test_values:
cursor.execute("SELECT cast(%s as " + MAP[typoid] + ")", (value,))
assert cursor.fetchall()[0][0] == value
column_name, column_typeoid = cursor.description[0][0:2]
assert column_typeoid == typoid
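The parametrized values in `test_int_roundtrip` pin down which PostgreSQL integer type pg8000 picks for a given Python `int`. A minimal sketch of that selection logic, mirroring the thresholds in the test table (note that pg8000 promotes the exact negative boundary, e.g. -32768, to the next wider type); `pg_int_type` is a hypothetical helper, not pg8000's API:

```python
def pg_int_type(value: int) -> str:
    """Pick the narrowest PostgreSQL integer type for value.

    Thresholds mirror the test table above: the symmetric range
    +/-(2**15 - 1) for int2 and +/-(2**31 - 1) for int4, so the exact
    negative boundary of each type is promoted to the wider one.
    """
    if -32767 <= value <= 32767:
        return "int2"
    if -2147483647 <= value <= 2147483647:
        return "int4"
    return "int8"
```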
def test_bytea_roundtrip(cursor):
cursor.execute(
"SELECT cast(%s as bytea)", (pg8000.Binary(b"\x00\x01\x02\x03\x02\x01\x00"),)
)
assert cursor.fetchall()[0][0] == b"\x00\x01\x02\x03\x02\x01\x00"
def test_bytearray_round_trip(cursor):
binary = b"\x00\x01\x02\x03\x02\x01\x00"
cursor.execute("SELECT cast(%s as bytea)", (bytearray(binary),))
assert cursor.fetchall()[0][0] == binary
def test_bytearray_subclass_round_trip(cursor):
class BClass(bytearray):
pass
binary = b"\x00\x01\x02\x03\x02\x01\x00"
cursor.execute("SELECT cast(%s as bytea)", (BClass(binary),))
assert cursor.fetchall()[0][0] == binary
def test_timestamp_roundtrip(cursor):
v = Datetime(2001, 2, 3, 4, 5, 6, 170000)
cursor.execute("SELECT cast(%s as timestamp)", (v,))
assert cursor.fetchall()[0][0] == v
# Test that time zone doesn't affect it
orig_tz = os.environ.get("TZ")
os.environ["TZ"] = "America/Edmonton"
time.tzset()
cursor.execute("SELECT cast(%s as timestamp)", (v,))
assert cursor.fetchall()[0][0] == v
if orig_tz is None:
del os.environ["TZ"]
else:
os.environ["TZ"] = orig_tz
time.tzset()
def test_pg_interval_repr():
v = PGInterval(microseconds=123456789, days=2, months=24)
    assert repr(v) == "<PGInterval 24 months 2 days 123456789 microseconds>"
def test_pg_interval_in_1_year():
assert pg_interval_in("1 year") == PGInterval(years=1)
def test_interval_in_2_hours():
    assert interval_in("2 hours") == Timedelta(hours=2)
def test_pg_interval_roundtrip(con, cursor):
con.register_in_adapter(INTERVAL, pg_interval_in)
con.register_out_adapter(PGInterval, pg_interval_out)
v = PGInterval(microseconds=123456789, days=2, months=24)
cursor.execute("SELECT cast(%s as interval)", (v,))
assert cursor.fetchall()[0][0] == v
def test_interval_roundtrip(cursor):
v = Timedelta(seconds=30)
cursor.execute("SELECT cast(%s as interval)", (v,))
assert cursor.fetchall()[0][0] == v
def test_enum_str_round_trip(cursor):
try:
cursor.execute("create type lepton as enum ('electron', 'muon', 'tau')")
v = "muon"
cursor.execute("SELECT cast(%s as lepton) as f1", (v,))
retval = cursor.fetchall()
assert retval[0][0] == v
cursor.execute("CREATE TEMPORARY TABLE testenum (f1 lepton)")
cursor.execute(
"INSERT INTO testenum VALUES (cast(%s as lepton))", ("electron",)
)
finally:
cursor.execute("drop table testenum")
cursor.execute("drop type lepton")
def test_enum_custom_round_trip(con, cursor):
class Lepton:
# Implements PEP 435 in the minimal fashion needed
__members__ = OrderedDict()
def __init__(self, name, value, alias=None):
self.name = name
self.value = value
self.__members__[name] = self
setattr(self.__class__, name, self)
if alias:
self.__members__[alias] = self
setattr(self.__class__, alias, self)
def lepton_out(lepton):
return lepton.value
try:
cursor.execute("create type lepton as enum ('1', '2', '3')")
con.register_out_adapter(Lepton, lepton_out)
v = Lepton("muon", "2")
cursor.execute("SELECT CAST(%s AS lepton)", (v,))
assert cursor.fetchall()[0][0] == v.value
finally:
cursor.execute("drop type lepton")
def test_enum_py_round_trip(cursor):
class Lepton(Enum):
electron = "1"
muon = "2"
tau = "3"
try:
cursor.execute("create type lepton as enum ('1', '2', '3')")
v = Lepton.muon
cursor.execute("SELECT cast(%s as lepton) as f1", (v,))
assert cursor.fetchall()[0][0] == v.value
cursor.execute("CREATE TEMPORARY TABLE testenum (f1 lepton)")
cursor.execute(
"INSERT INTO testenum VALUES (cast(%s as lepton))", (Lepton.electron,)
)
finally:
cursor.execute("drop table testenum")
cursor.execute("drop type lepton")
def test_xml_roundtrip(cursor):
v = "gatccgagtac"
cursor.execute("select xmlparse(content %s) as f1", (v,))
assert cursor.fetchall()[0][0] == v
def test_uuid_roundtrip(cursor):
v = uuid.UUID("911460f2-1f43-fea2-3e2c-e01fd5b5069d")
cursor.execute("select cast(%s as uuid)", (v,))
assert cursor.fetchall()[0][0] == v
def test_inet_roundtrip_network(cursor):
v = ipaddress.ip_network("192.168.0.0/28")
cursor.execute("select cast(%s as inet)", (v,))
assert cursor.fetchall()[0][0] == v
def test_inet_roundtrip_address(cursor):
v = ipaddress.ip_address("192.168.0.1")
cursor.execute("select cast(%s as inet)", (v,))
assert cursor.fetchall()[0][0] == v
def test_xid_roundtrip(cursor):
v = 86722
cursor.execute("select cast(cast(%s as varchar) as xid) as f1", (v,))
retval = cursor.fetchall()
assert retval[0][0] == v
# Should complete without an exception
cursor.execute("select * from pg_locks where transactionid = %s", (97712,))
retval = cursor.fetchall()
def test_int2vector_in(cursor):
cursor.execute("select cast('1 2' as int2vector) as f1")
assert cursor.fetchall()[0][0] == [1, 2]
# Should complete without an exception
cursor.execute("select indkey from pg_index")
cursor.fetchall()
def test_timestamp_tz_out(cursor):
cursor.execute(
"SELECT '2001-02-03 04:05:06.17 America/Edmonton'" "::timestamp with time zone"
)
retval = cursor.fetchall()
dt = retval[0][0]
assert dt.tzinfo is not None, "no tzinfo returned"
assert dt.astimezone(Timezone.utc) == Datetime(
2001, 2, 3, 11, 5, 6, 170000, Timezone.utc
), "retrieved value match failed"
def test_timestamp_tz_roundtrip(cursor):
mst = pytz.timezone("America/Edmonton")
v1 = mst.localize(Datetime(2001, 2, 3, 4, 5, 6, 170000))
cursor.execute("SELECT cast(%s as timestamptz)", (v1,))
v2 = cursor.fetchall()[0][0]
assert v2.tzinfo is not None
assert v1 == v2
def test_timestamp_mismatch(cursor):
mst = pytz.timezone("America/Edmonton")
cursor.execute("SET SESSION TIME ZONE 'America/Edmonton'")
try:
cursor.execute(
"CREATE TEMPORARY TABLE TestTz "
"(f1 timestamp with time zone, "
"f2 timestamp without time zone)"
)
cursor.execute(
"INSERT INTO TestTz (f1, f2) VALUES (%s, %s)",
(
# insert timestamp into timestamptz field (v1)
Datetime(2001, 2, 3, 4, 5, 6, 170000),
# insert timestamptz into timestamp field (v2)
mst.localize(Datetime(2001, 2, 3, 4, 5, 6, 170000)),
),
)
cursor.execute("SELECT f1, f2 FROM TestTz")
retval = cursor.fetchall()
# when inserting a timestamp into a timestamptz field,
# postgresql assumes that it is in local time. So the value
# that comes out will be the server's local time interpretation
# of v1. We've set the server's TZ to MST, the time should
# be...
f1 = retval[0][0]
assert f1 == Datetime(2001, 2, 3, 11, 5, 6, 170000, Timezone.utc)
# inserting the timestamptz into a timestamp field, pg8000
# converts the value into UTC, and then the PG server converts
# it into local time for insertion into the field. When we
# query for it, we get the same time back, like the tz was
# dropped.
f2 = retval[0][1]
assert f2 == Datetime(2001, 2, 3, 11, 5, 6, 170000)
finally:
cursor.execute("SET SESSION TIME ZONE DEFAULT")
def test_name_out(cursor):
# select a field that is of "name" type:
cursor.execute("SELECT usename FROM pg_user")
cursor.fetchall()
# It is sufficient that no errors were encountered.
def test_oid_out(cursor):
cursor.execute("SELECT oid FROM pg_type")
cursor.fetchall()
# It is sufficient that no errors were encountered.
def test_boolean_in(cursor):
cursor.execute("SELECT cast('t' as bool)")
assert cursor.fetchall()[0][0]
def test_numeric_out(cursor):
for num in ("5000", "50.34"):
cursor.execute("SELECT " + num + "::numeric")
assert str(cursor.fetchall()[0][0]) == num
def test_int2_out(cursor):
cursor.execute("SELECT 5000::smallint")
assert cursor.fetchall()[0][0] == 5000
def test_int4_out(cursor):
cursor.execute("SELECT 5000::integer")
assert cursor.fetchall()[0][0] == 5000
def test_int8_out(cursor):
cursor.execute("SELECT 50000000000000::bigint")
assert cursor.fetchall()[0][0] == 50000000000000
def test_float4_out(cursor):
cursor.execute("SELECT 1.1::real")
assert cursor.fetchall()[0][0] == 1.1
def test_float8_out(cursor):
cursor.execute("SELECT 1.1::double precision")
assert cursor.fetchall()[0][0] == 1.1000000000000001
def test_varchar_out(cursor):
cursor.execute("SELECT 'hello'::varchar(20)")
assert cursor.fetchall()[0][0] == "hello"
def test_char_out(cursor):
cursor.execute("SELECT 'hello'::char(20)")
assert cursor.fetchall()[0][0] == "hello "
def test_text_out(cursor):
cursor.execute("SELECT 'hello'::text")
assert cursor.fetchall()[0][0] == "hello"
def test_interval_in(con, cursor):
con.register_in_adapter(INTERVAL, pg_interval_in)
cursor.execute("SELECT '1 month 16 days 12 hours 32 minutes 64 seconds'::interval")
expected_value = PGInterval(
microseconds=(12 * 60 * 60 * 1000 * 1000)
+ (32 * 60 * 1000 * 1000)
+ (64 * 1000 * 1000),
days=16,
months=1,
)
assert cursor.fetchall()[0][0] == expected_value
def test_interval_in_30_seconds(cursor):
cursor.execute("select interval '30 seconds'")
assert cursor.fetchall()[0][0] == Timedelta(seconds=30)
def test_interval_in_12_days_30_seconds(cursor):
cursor.execute("select interval '12 days 30 seconds'")
assert cursor.fetchall()[0][0] == Timedelta(days=12, seconds=30)
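The two tests above show pg8000's `interval_in` turning interval text such as `'12 days 30 seconds'` into a `datetime.timedelta`. A simplified sketch of that parsing, assuming alternating amount/unit tokens; `parse_interval` is a hypothetical helper, and unlike the real `interval_in` it handles neither months (which cannot map onto a `timedelta`) nor the other server output formats:

```python
from datetime import timedelta


def parse_interval(text: str) -> timedelta:
    """Parse simple 'N days N hours N minutes N seconds' strings.

    Sketch only: assumes the input alternates amount and unit tokens,
    with units drawn from days/hours/minutes/seconds.
    """
    units = {"day": "days", "hour": "hours", "minute": "minutes", "second": "seconds"}
    kwargs = {}
    tokens = text.split()
    for amount, unit in zip(tokens[::2], tokens[1::2]):
        # Normalise a trailing plural "s" so "day" and "days" both work
        kwargs[units[unit.rstrip("s")]] = float(amount)
    return timedelta(**kwargs)
```

For example, `parse_interval("12 days 30 seconds")` gives `timedelta(days=12, seconds=30)`, matching the assertion above.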
def test_timestamp_out(cursor):
cursor.execute("SELECT '2001-02-03 04:05:06.17'::timestamp")
retval = cursor.fetchall()
assert retval[0][0] == Datetime(2001, 2, 3, 4, 5, 6, 170000)
def test_int4_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::INT[] AS f1, '{{1,2,3},{4,5,6}}'::INT[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int2_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::INT2[] AS f1, "
"'{{1,2,3},{4,5,6}}'::INT2[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT2[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int8_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::INT8[] AS f1, "
"'{{1,2,3},{4,5,6}}'::INT8[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT8[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_bool_array_out(cursor):
cursor.execute(
"SELECT '{TRUE,FALSE,FALSE,TRUE}'::BOOL[] AS f1, "
"'{{TRUE,FALSE,TRUE},{FALSE,TRUE,FALSE}}'::BOOL[][] AS f2, "
"'{{{TRUE,FALSE},{FALSE,TRUE}},{{NULL,TRUE},{FALSE,FALSE}}}'"
"::BOOL[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [True, False, False, True]
assert f2 == [[True, False, True], [False, True, False]]
assert f3 == [[[True, False], [False, True]], [[None, True], [False, False]]]
def test_float4_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::FLOAT4[] AS f1, "
"'{{1,2,3},{4,5,6}}'::FLOAT4[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::FLOAT4[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_float8_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::FLOAT8[] AS f1, "
"'{{1,2,3},{4,5,6}}'::FLOAT8[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::FLOAT8[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int_array_roundtrip_small(cursor):
"""send small int array, should be sent as INT2[]"""
cursor.execute("SELECT cast(%s as int2[])", ([1, 2, 3],))
    assert cursor.fetchall()[0][0] == [1, 2, 3]
column_name, column_typeoid = cursor.description[0][0:2]
assert column_typeoid == 1005, "type should be INT2[]"
def test_int_array_roundtrip_multi(cursor):
"""test multi-dimensional array, should be sent as INT2[]"""
cursor.execute("SELECT cast(%s as int2[])", ([[1, 2], [3, 4]],))
assert cursor.fetchall()[0][0] == [[1, 2], [3, 4]]
column_name, column_typeoid = cursor.description[0][0:2]
assert column_typeoid == 1005, "type should be INT2[]"
def test_int4_array_roundtrip(cursor):
"""a larger value should kick it up to INT4[]..."""
cursor.execute("SELECT cast(%s as int4[])", ([70000, 2, 3],))
assert cursor.fetchall()[0][0] == [70000, 2, 3]
column_name, column_typeoid = cursor.description[0][0:2]
assert column_typeoid == 1007, "type should be INT4[]"
def test_int8_array_roundtrip(cursor):
"""a much larger value should kick it up to INT8[]..."""
cursor.execute("SELECT cast(%s as int8[])", ([7000000000, 2, 3],))
assert cursor.fetchall()[0][0] == [7000000000, 2, 3], "retrieved value match failed"
column_name, column_typeoid = cursor.description[0][0:2]
assert column_typeoid == 1016, "type should be INT8[]"
def test_int_array_with_null_roundtrip(cursor):
cursor.execute("SELECT cast(%s as int[])", ([1, None, 3],))
assert cursor.fetchall()[0][0] == [1, None, 3]
def test_float_array_roundtrip(cursor):
cursor.execute("SELECT cast(%s as double precision[])", ([1.1, 2.2, 3.3],))
assert cursor.fetchall()[0][0] == [1.1, 2.2, 3.3]
def test_bool_array_roundtrip(cursor):
cursor.execute("SELECT cast(%s as bool[])", ([True, False, None],))
assert cursor.fetchall()[0][0] == [True, False, None]
@pytest.mark.parametrize(
"test_input,expected",
[
("SELECT '{a,b,c}'::TEXT[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::CHAR[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::VARCHAR[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::CSTRING[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::NAME[] AS f1", ["a", "b", "c"]),
("SELECT '{}'::text[];", []),
('SELECT \'{NULL,"NULL",NULL,""}\'::text[];', [None, "NULL", None, ""]),
],
)
def test_string_array_out(cursor, test_input, expected):
cursor.execute(test_input)
assert cursor.fetchall()[0][0] == expected
def test_numeric_array_out(cursor):
cursor.execute("SELECT '{1.1,2.2,3.3}'::numeric[] AS f1")
assert cursor.fetchone()[0] == [
decimal.Decimal("1.1"),
decimal.Decimal("2.2"),
decimal.Decimal("3.3"),
]
def test_numeric_array_roundtrip(cursor):
v = [decimal.Decimal("1.1"), None, decimal.Decimal("3.3")]
cursor.execute("SELECT cast(%s as numeric[])", (v,))
assert cursor.fetchall()[0][0] == v
def test_string_array_roundtrip(cursor):
v = [
"Hello!",
"World!",
"abcdefghijklmnopqrstuvwxyz",
"",
"A bunch of random characters:",
" ~!@#$%^&*()_+`1234567890-=[]\\{}|{;':\",./<>?\t",
"\n",
"\r",
"\t",
"\b",
None,
]
cursor.execute("SELECT cast(%s as varchar[])", (v,))
assert cursor.fetchall()[0][0] == v
def test_array_string_escape():
v = '"'
res = pg8000.converters.array_string_escape(v)
assert res == '"\\""'
def test_empty_array(cursor):
v = []
cursor.execute("SELECT cast(%s as varchar[])", (v,))
assert cursor.fetchall()[0][0] == v
def test_macaddr(cursor):
cursor.execute("SELECT macaddr '08002b:010203'")
assert cursor.fetchall()[0][0] == "08:00:2b:01:02:03"
def test_tsvector_roundtrip(cursor):
cursor.execute(
"SELECT cast(%s as tsvector)", ("a fat cat sat on a mat and ate a fat rat",)
)
retval = cursor.fetchall()
assert retval[0][0] == "'a' 'and' 'ate' 'cat' 'fat' 'mat' 'on' 'rat' 'sat'"
def test_hstore_roundtrip(cursor):
val = '"a"=>"1"'
cursor.execute("SELECT cast(%s as hstore)", (val,))
assert cursor.fetchall()[0][0] == val
def test_json_roundtrip(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as jsonb)", (dumps(val),))
assert cursor.fetchall()[0][0] == val
def test_jsonb_roundtrip(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as jsonb)", (dumps(val),))
retval = cursor.fetchall()
assert retval[0][0] == val
def test_json_access_object(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as json) -> %s", (dumps(val), "name"))
retval = cursor.fetchall()
assert retval[0][0] == "Apollo 11 Cave"
def test_jsonb_access_object(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as jsonb) -> %s", (dumps(val), "name"))
retval = cursor.fetchall()
assert retval[0][0] == "Apollo 11 Cave"
def test_json_access_array(cursor):
val = [-1, -2, -3, -4, -5]
cursor.execute("SELECT cast(%s as json) -> cast(%s as int)", (dumps(val), 2))
assert cursor.fetchall()[0][0] == -3
def test_jsonb_access_array(cursor):
val = [-1, -2, -3, -4, -5]
cursor.execute("SELECT cast(%s as jsonb) -> cast(%s as int)", (dumps(val), 2))
assert cursor.fetchall()[0][0] == -3
def test_jsonb_access_path(cursor):
j = {"a": [1, 2, 3], "b": [4, 5, 6]}
path = ["a", "2"]
cursor.execute("SELECT cast(%s as jsonb) #>> %s", (dumps(j), path))
assert cursor.fetchall()[0][0] == str(j[path[0]][int(path[1])])
def test_infinity_timestamp_roundtrip(cursor):
v = "infinity"
cursor.execute("SELECT cast(%s as timestamp) as f1", (v,))
assert cursor.fetchall()[0][0] == v
def test_point_roundtrip(cursor):
v = 2.3, 1
cursor.execute("SELECT cast(%s as point) as f1", (v,))
assert cursor.fetchall()[0][0] == v
def test_time_in():
actual = pg8000.converters.time_in("12:57:18.000396")
assert actual == Time(12, 57, 18, 396)
# pg8000-1.30.3/test/legacy/auth/test_gss.py
import pytest
from pg8000 import InterfaceError, connect
def test_gss(db_kwargs):
"""This requires a line in pg_hba.conf that requires gss for the database
pg8000_gss
"""
db_kwargs["database"] = "pg8000_gss"
# Should raise an exception saying gss isn't supported
with pytest.raises(
InterfaceError,
match="Authentication method 7 not supported by pg8000.",
):
connect(**db_kwargs)
# pg8000-1.30.3/test/legacy/auth/test_md5.py
def test_md5(con):
"""Called by GitHub Actions with auth method md5.
We just need to check that we can get a connection.
"""
pass
# pg8000-1.30.3/test/legacy/auth/test_md5_ssl.py
import ssl
from pg8000 import connect
def test_ssl(db_kwargs):
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
db_kwargs["ssl_context"] = context
with connect(**db_kwargs):
pass
# pg8000-1.30.3/test/legacy/auth/test_password.py
def test_password(con):
"""Called by GitHub Actions with auth method password.
We just need to check that we can get a connection.
"""
pass
# pg8000-1.30.3/test/legacy/auth/test_scram-sha-256.py
def test_scram_sha_256(con):
"""Called by GitHub Actions with auth method scram-sha-256.
We just need to check that we can get a connection.
"""
pass
pg8000-1.30.3/test/legacy/auth/test_scram-sha-256_ssl.py
import ssl
import pytest
from pg8000 import DatabaseError, connect
# This requires a line in pg_hba.conf that requires scram-sha-256 for the
# database pg8000_scram_sha_256
def test_scram_sha_256_plus(db_kwargs):
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
db_kwargs["ssl_context"] = context
db_kwargs["database"] = "pg8000_scram_sha_256"
# Should only raise an exception saying db doesn't exist
with pytest.raises(DatabaseError, match="3D000"):
with connect(**db_kwargs) as con:
con.close()
pg8000-1.30.3/test/legacy/conftest.py
from os import environ
import pytest
import pg8000
@pytest.fixture(scope="class")
def db_kwargs():
db_connect = {"user": "postgres", "password": "pw"}
for kw, var, f in [
("host", "PGHOST", str),
("password", "PGPASSWORD", str),
("port", "PGPORT", int),
]:
try:
db_connect[kw] = f(environ[var])
except KeyError:
pass
return db_connect
@pytest.fixture
def con(request, db_kwargs):
conn = pg8000.connect(**db_kwargs)
def fin():
try:
conn.rollback()
except pg8000.InterfaceError:
pass
try:
conn.close()
except pg8000.InterfaceError:
pass
request.addfinalizer(fin)
return conn
@pytest.fixture
def cursor(request, con):
cursor = con.cursor()
def fin():
cursor.close()
request.addfinalizer(fin)
return cursor
pg8000-1.30.3/test/legacy/stress.py
from contextlib import closing
import pg8000
from pg8000.tests.connection_settings import db_connect
with closing(pg8000.connect(**db_connect)) as db:
for i in range(100):
cursor = db.cursor()
cursor.execute(
"""
SELECT n.nspname as "Schema",
pg_catalog.format_type(t.oid, NULL) AS "Name",
pg_catalog.obj_description(t.oid, 'pg_type') as "Description"
FROM pg_catalog.pg_type t
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = t.typnamespace
left join pg_catalog.pg_namespace kj on n.oid = t.typnamespace
WHERE (t.typrelid = 0
OR (SELECT c.relkind = 'c'
FROM pg_catalog.pg_class c WHERE c.oid = t.typrelid))
AND NOT EXISTS(
SELECT 1 FROM pg_catalog.pg_type el
WHERE el.oid = t.typelem AND el.typarray = t.oid)
AND pg_catalog.pg_type_is_visible(t.oid)
ORDER BY 1, 2;"""
)
pg8000-1.30.3/test/legacy/test_benchmarks.py
import pytest
@pytest.mark.parametrize(
"txt",
(
"cast(id / 100 as int2)",
"cast(id as int4)",
"cast(id * 100 as int8)",
"(id % 2) = 0",
"N'Static text string'",
"cast(id / 100 as float4)",
"cast(id / 100 as float8)",
"cast(id / 100 as numeric)",
"timestamp '2001-09-28'",
),
)
def test_round_trips(con, benchmark, txt):
def torun():
query = """SELECT {0} AS column1, {0} AS column2, {0} AS column3,
{0} AS column4, {0} AS column5, {0} AS column6, {0} AS column7
FROM (SELECT generate_series(1, 10000) AS id) AS tbl""".format(
txt
)
con.run(query)
benchmark(torun)
pg8000-1.30.3/test/legacy/test_connection.py
import pytest
from pg8000 import DatabaseError, InterfaceError, ProgrammingError, connect
def testUnixSocketMissing():
conn_params = {"unix_sock": "/file-does-not-exist", "user": "doesn't-matter"}
with pytest.raises(InterfaceError):
connect(**conn_params)
def test_internet_socket_connection_refused():
conn_params = {"port": 0, "user": "doesn't-matter"}
with pytest.raises(
InterfaceError,
match="Can't create a connection to host localhost and port 0 "
"\\(timeout is None and source_address is None\\).",
):
connect(**conn_params)
def testDatabaseMissing(db_kwargs):
db_kwargs["database"] = "missing-db"
with pytest.raises(ProgrammingError):
connect(**db_kwargs)
def test_notify(con):
backend_pid = con.run("select pg_backend_pid()")[0][0]
assert list(con.notifications) == []
con.run("LISTEN test")
con.run("NOTIFY test")
con.commit()
con.run("VALUES (1, 2), (3, 4), (5, 6)")
assert len(con.notifications) == 1
assert con.notifications[0] == (backend_pid, "test", "")
def test_notify_with_payload(con):
backend_pid = con.run("select pg_backend_pid()")[0][0]
assert list(con.notifications) == []
con.run("LISTEN test")
con.run("NOTIFY test, 'Parnham'")
con.commit()
con.run("VALUES (1, 2), (3, 4), (5, 6)")
assert len(con.notifications) == 1
assert con.notifications[0] == (backend_pid, "test", "Parnham")
# This requires a line in pg_hba.conf that requires md5 for the database
# pg8000_md5
def testMd5(db_kwargs):
db_kwargs["database"] = "pg8000_md5"
# Should only raise an exception saying db doesn't exist
with pytest.raises(ProgrammingError, match="3D000"):
connect(**db_kwargs)
# This requires a line in pg_hba.conf that requires 'password' for the
# database pg8000_password
def testPassword(db_kwargs):
db_kwargs["database"] = "pg8000_password"
# Should only raise an exception saying db doesn't exist
with pytest.raises(ProgrammingError, match="3D000"):
connect(**db_kwargs)
def testUnicodeDatabaseName(db_kwargs):
db_kwargs["database"] = "pg8000_sn\uFF6Fw"
# Should only raise an exception saying db doesn't exist
with pytest.raises(ProgrammingError, match="3D000"):
connect(**db_kwargs)
def testBytesDatabaseName(db_kwargs):
"""Should only raise an exception saying db doesn't exist"""
db_kwargs["database"] = bytes("pg8000_sn\uFF6Fw", "utf8")
with pytest.raises(ProgrammingError, match="3D000"):
connect(**db_kwargs)
def testBytesPassword(con, db_kwargs):
# Create user
username = "boltzmann"
password = "cha\uFF6Fs"
with con.cursor() as cur:
cur.execute("create user " + username + " with password '" + password + "';")
con.commit()
db_kwargs["user"] = username
db_kwargs["password"] = password.encode("utf8")
db_kwargs["database"] = "pg8000_md5"
with pytest.raises(ProgrammingError, match="3D000"):
connect(**db_kwargs)
cur.execute("drop role " + username)
con.commit()
def test_broken_pipe_read(con, db_kwargs):
db1 = connect(**db_kwargs)
cur1 = db1.cursor()
cur2 = con.cursor()
cur1.execute("select pg_backend_pid()")
pid1 = cur1.fetchone()[0]
cur2.execute("select pg_terminate_backend(%s)", (pid1,))
with pytest.raises(InterfaceError, match="network error"):
cur1.execute("select 1")
try:
db1.close()
except InterfaceError:
pass
def test_broken_pipe_flush(con, db_kwargs):
db1 = connect(**db_kwargs)
cur1 = db1.cursor()
cur2 = con.cursor()
cur1.execute("select pg_backend_pid()")
pid1 = cur1.fetchone()[0]
cur2.execute("select pg_terminate_backend(%s)", (pid1,))
try:
cur1.execute("select 1")
except BaseException:
pass
# Can do an assert_raises when we're on 3.8 or above
try:
db1.close()
except InterfaceError as e:
assert str(e) == "network error"
def test_broken_pipe_unpack(con):
cur = con.cursor()
cur.execute("select pg_backend_pid()")
pid1 = cur.fetchone()[0]
with pytest.raises(InterfaceError, match="network error"):
cur.execute("select pg_terminate_backend(%s)", (pid1,))
def testApplicationName(db_kwargs):
app_name = "my test application name"
db_kwargs["application_name"] = app_name
with connect(**db_kwargs) as db:
cur = db.cursor()
cur.execute(
"select application_name from pg_stat_activity "
" where pid = pg_backend_pid()"
)
application_name = cur.fetchone()[0]
assert application_name == app_name
def test_application_name_integer(db_kwargs):
db_kwargs["application_name"] = 1
with pytest.raises(
InterfaceError,
match="The parameter application_name can't be of type <class 'int'>.",
):
connect(**db_kwargs)
def test_application_name_bytearray(db_kwargs):
db_kwargs["application_name"] = bytearray(b"Philby")
with connect(**db_kwargs):
pass
# This requires a line in pg_hba.conf that requires scram-sha-256 for the
# database pg8000_scram_sha_256
def test_scram_sha_256(db_kwargs):
db_kwargs["database"] = "pg8000_scram_sha_256"
# Should only raise an exception saying db doesn't exist
with pytest.raises(ProgrammingError, match="3D000"):
connect(**db_kwargs)
@pytest.mark.parametrize(
"commit",
[
"commit",
"COMMIT;",
],
)
def test_failed_transaction_commit_sql(cursor, commit):
cursor.execute("create temporary table tt (f1 int primary key)")
cursor.execute("begin")
try:
cursor.execute("insert into tt(f1) values(null)")
except DatabaseError:
pass
with pytest.raises(InterfaceError):
cursor.execute(commit)
def test_failed_transaction_commit_method(con, cursor):
cursor.execute("create temporary table tt (f1 int primary key)")
cursor.execute("begin")
try:
cursor.execute("insert into tt(f1) values(null)")
except DatabaseError:
pass
with pytest.raises(InterfaceError):
con.commit()
@pytest.mark.parametrize(
"rollback",
[
"rollback",
"rollback;",
"ROLLBACK ;",
],
)
def test_failed_transaction_rollback_sql(cursor, rollback):
cursor.execute("create temporary table tt (f1 int primary key)")
cursor.execute("begin")
try:
cursor.execute("insert into tt(f1) values(null)")
except DatabaseError:
pass
cursor.execute(rollback)
def test_failed_transaction_rollback_method(cursor, con):
cursor.execute("create temporary table tt (f1 int primary key)")
cursor.execute("begin")
try:
cursor.execute("insert into tt(f1) values(null)")
except DatabaseError:
pass
con.rollback()
pg8000-1.30.3/test/legacy/test_copy.py
from io import BytesIO
import pytest
@pytest.fixture
def db_table(request, con):
with con.cursor() as cursor:
cursor.execute(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, "
"f2 int not null, f3 varchar(50) null) "
"on commit drop"
)
return con
def test_copy_to_with_table(db_table):
with db_table.cursor() as cursor:
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, 1))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 2, 2))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 3, 3))
stream = BytesIO()
cursor.execute("copy t1 to stdout", stream=stream)
assert stream.getvalue() == b"1\t1\t1\n2\t2\t2\n3\t3\t3\n"
assert cursor.rowcount == 3
def test_copy_to_with_query(db_table):
with db_table.cursor() as cursor:
stream = BytesIO()
cursor.execute(
"COPY (SELECT 1 as One, 2 as Two) TO STDOUT WITH DELIMITER "
"'X' CSV HEADER QUOTE AS 'Y' FORCE QUOTE Two",
stream=stream,
)
assert stream.getvalue() == b"oneXtwo\n1XY2Y\n"
assert cursor.rowcount == 1
def test_copy_from_with_table(db_table):
with db_table.cursor() as cursor:
stream = BytesIO(b"1\t1\t1\n2\t2\t2\n3\t3\t3\n")
cursor.execute("copy t1 from STDIN", stream=stream)
assert cursor.rowcount == 3
cursor.execute("SELECT * FROM t1 ORDER BY f1")
retval = cursor.fetchall()
assert retval == ([1, 1, "1"], [2, 2, "2"], [3, 3, "3"])
def test_copy_from_with_query(db_table):
with db_table.cursor() as cursor:
stream = BytesIO(b"f1Xf2\n1XY1Y\n")
cursor.execute(
"COPY t1 (f1, f2) FROM STDIN WITH DELIMITER 'X' CSV HEADER "
"QUOTE AS 'Y' FORCE NOT NULL f1",
stream=stream,
)
assert cursor.rowcount == 1
cursor.execute("SELECT * FROM t1 ORDER BY f1")
retval = cursor.fetchall()
assert retval == ([1, 1, None],)
def test_copy_from_with_error(db_table):
with db_table.cursor() as cursor:
stream = BytesIO(b"f1Xf2\n\n1XY1Y\n")
with pytest.raises(BaseException) as e:
cursor.execute(
"COPY t1 (f1, f2) FROM STDIN WITH DELIMITER 'X' CSV HEADER "
"QUOTE AS 'Y' FORCE NOT NULL f1",
stream=stream,
)
arg = {
"S": ("ERROR",),
"C": ("22P02",),
"M": (
'invalid input syntax for type integer: ""',
'invalid input syntax for integer: ""',
),
"W": ('COPY t1, line 2, column f1: ""',),
"F": ("numutils.c",),
"R": ("pg_atoi", "pg_strtoint32"),
}
earg = e.value.args[0]
for k, v in arg.items():
assert earg[k] in v
pg8000-1.30.3/test/legacy/test_dbapi.py
import datetime
import os
import time
import pytest
import pg8000
@pytest.fixture
def has_tzset():
# Neither Windows nor Jython 2.5.3 have a time.tzset() so skip
if hasattr(time, "tzset"):
os.environ["TZ"] = "UTC"
time.tzset()
return True
return False
# DBAPI compatible interface tests
@pytest.fixture
def db_table(con, has_tzset):
with con.cursor() as c:
c.execute(
"CREATE TEMPORARY TABLE t1 "
"(f1 int primary key, f2 int not null, f3 varchar(50) null) "
"ON COMMIT DROP"
)
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 10, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 100, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (4, 1000, None))
c.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (5, 10000, None))
return con
def test_parallel_queries(db_table):
with db_table.cursor() as c1, db_table.cursor() as c2:
c1.execute("SELECT f1, f2, f3 FROM t1")
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
c2.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %s", (f1,))
while 1:
row = c2.fetchone()
if row is None:
break
f1, f2, f3 = row
def test_qmark(db_table):
orig_paramstyle = pg8000.paramstyle
try:
pg8000.paramstyle = "qmark"
with db_table.cursor() as c1:
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > ?", (3,))
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
finally:
pg8000.paramstyle = orig_paramstyle
def test_numeric(db_table):
orig_paramstyle = pg8000.paramstyle
try:
pg8000.paramstyle = "numeric"
with db_table.cursor() as c1:
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > :1", (3,))
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
finally:
pg8000.paramstyle = orig_paramstyle
def test_named(db_table):
orig_paramstyle = pg8000.paramstyle
try:
pg8000.paramstyle = "named"
with db_table.cursor() as c1:
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > :f1", {"f1": 3})
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
finally:
pg8000.paramstyle = orig_paramstyle
def test_format(db_table):
orig_paramstyle = pg8000.paramstyle
try:
pg8000.paramstyle = "format"
with db_table.cursor() as c1:
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %s", (3,))
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
finally:
pg8000.paramstyle = orig_paramstyle
def test_pyformat(db_table):
orig_paramstyle = pg8000.paramstyle
try:
pg8000.paramstyle = "pyformat"
with db_table.cursor() as c1:
c1.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %(f1)s", {"f1": 3})
while 1:
row = c1.fetchone()
if row is None:
break
f1, f2, f3 = row
finally:
pg8000.paramstyle = orig_paramstyle
def test_arraysize(db_table):
with db_table.cursor() as c1:
c1.arraysize = 3
c1.execute("SELECT * FROM t1")
retval = c1.fetchmany()
assert len(retval) == c1.arraysize
def test_date():
val = pg8000.Date(2001, 2, 3)
assert val == datetime.date(2001, 2, 3)
def test_time():
val = pg8000.Time(4, 5, 6)
assert val == datetime.time(4, 5, 6)
def test_timestamp():
val = pg8000.Timestamp(2001, 2, 3, 4, 5, 6)
assert val == datetime.datetime(2001, 2, 3, 4, 5, 6)
def test_date_from_ticks(has_tzset):
if has_tzset:
val = pg8000.DateFromTicks(1173804319)
assert val == datetime.date(2007, 3, 13)
def testTimeFromTicks(has_tzset):
if has_tzset:
val = pg8000.TimeFromTicks(1173804319)
assert val == datetime.time(16, 45, 19)
def test_timestamp_from_ticks(has_tzset):
if has_tzset:
val = pg8000.TimestampFromTicks(1173804319)
assert val == datetime.datetime(2007, 3, 13, 16, 45, 19)
def test_binary():
v = pg8000.Binary(b"\x00\x01\x02\x03\x02\x01\x00")
assert v == b"\x00\x01\x02\x03\x02\x01\x00"
assert isinstance(v, pg8000.BINARY)
def test_row_count(db_table):
with db_table.cursor() as c1:
c1.execute("SELECT * FROM t1")
assert 5 == c1.rowcount
c1.execute("UPDATE t1 SET f3 = %s WHERE f2 > 101", ("Hello!",))
assert 2 == c1.rowcount
c1.execute("DELETE FROM t1")
assert 5 == c1.rowcount
def test_fetch_many(db_table):
with db_table.cursor() as cursor:
cursor.arraysize = 2
cursor.execute("SELECT * FROM t1")
assert 2 == len(cursor.fetchmany())
assert 2 == len(cursor.fetchmany())
assert 1 == len(cursor.fetchmany())
assert 0 == len(cursor.fetchmany())
def test_iterator(db_table):
with db_table.cursor() as cursor:
cursor.execute("SELECT * FROM t1 ORDER BY f1")
f1 = 0
for row in cursor:
next_f1 = row[0]
assert next_f1 > f1
f1 = next_f1
# Vacuum can't be run inside a transaction, so we need to turn
# autocommit on.
def test_vacuum(con):
con.autocommit = True
with con.cursor() as cursor:
cursor.execute("vacuum")
def test_prepared_statement(con):
with con.cursor() as cursor:
cursor.execute("PREPARE gen_series AS SELECT generate_series(1, 10);")
cursor.execute("EXECUTE gen_series")
def test_cursor_type(cursor):
assert str(type(cursor)) == "<class 'pg8000.legacy.Cursor'>"
pg8000-1.30.3/test/legacy/test_dbapi20.py
import time
import warnings
import pytest
import pg8000
""" Python DB API 2.0 driver compliance unit test suite.
This software is Public Domain and may be used without restrictions.
"Now we have booze and barflies entering the discussion, plus rumours of
DBAs on drugs... and I won't tell you what flashes through my mind each
time I read the subject line with 'Anal Compliance' in it. All around
this is turning out to be a thoroughly unwholesome unit test."
-- Ian Bicking
"""
__rcs_id__ = "$Id: dbapi20.py,v 1.10 2003/10/09 03:14:14 zenzen Exp $"
__version__ = "$Revision: 1.10 $"[11:-2]
__author__ = "Stuart Bishop <zen@shangri-la.dropbear.id.au>"
# $Log: dbapi20.py,v $
# Revision 1.10 2003/10/09 03:14:14 zenzen
# Add test for DB API 2.0 optional extension, where database exceptions
# are exposed as attributes on the Connection object.
#
# Revision 1.9 2003/08/13 01:16:36 zenzen
# Minor tweak from Stefan Fleiter
#
# Revision 1.8 2003/04/10 00:13:25 zenzen
# Changes, as per suggestions by M.-A. Lemburg
# - Add a table prefix, to ensure namespace collisions can always be avoided
#
# Revision 1.7 2003/02/26 23:33:37 zenzen
# Break out DDL into helper functions, as per request by David Rushby
#
# Revision 1.6 2003/02/21 03:04:33 zenzen
# Stuff from Henrik Ekelund:
# added test_None
# added test_nextset & hooks
#
# Revision 1.5 2003/02/17 22:08:43 zenzen
# Implement suggestions and code from Henrik Eklund - test that
# cursor.arraysize defaults to 1 & generic cursor.callproc test added
#
# Revision 1.4 2003/02/15 00:16:33 zenzen
# Changes, as per suggestions and bug reports by M.-A. Lemburg,
# Matthew T. Kromer, Federico Di Gregorio and Daniel Dittmar
# - Class renamed
# - Now a subclass of TestCase, to avoid requiring the driver stub
# to use multiple inheritance
# - Reversed the polarity of buggy test in test_description
# - Test exception hierarchy correctly
# - self.populate is now self._populate(), so if a driver stub
# overrides self.ddl1 this change propagates
# - VARCHAR columns now have a width, which will hopefully make the
# DDL even more portable (this will be reversed if it causes more problems)
# - cursor.rowcount being checked after various execute and fetchXXX methods
# - Check for fetchall and fetchmany returning empty lists after results
# are exhausted (already checking for empty lists if select retrieved
# nothing)
# - Fix bugs in test_setoutputsize_basic and test_setinputsizes
#
""" Test a database self.driver for DB API 2.0 compatibility.
This implementation tests Gadfly, but the TestCase
is structured so that other self.drivers can subclass this
test case to ensure compliance with the DB-API. It is
expected that this TestCase may be expanded in the future
if ambiguities or edge conditions are discovered.
The 'Optional Extensions' are not yet being tested.
self.drivers should subclass this test, overriding setUp, tearDown,
self.driver, connect_args and connect_kw_args. Class specification
should be as follows:
import dbapi20
class mytest(dbapi20.DatabaseAPI20Test):
[...]
Don't 'import DatabaseAPI20Test from dbapi20', or you will
confuse the unit tester - just 'import dbapi20'.
"""
# The self.driver module. This should be the module where the 'connect'
# method is to be found
driver = pg8000
table_prefix = "dbapi20test_" # If you need to specify a prefix for tables
ddl1 = "create table %sbooze (name varchar(20))" % table_prefix
ddl2 = "create table %sbarflys (name varchar(20))" % table_prefix
xddl1 = "drop table %sbooze" % table_prefix
xddl2 = "drop table %sbarflys" % table_prefix
# Name of stored procedure to convert
# string->lowercase
lowerfunc = "lower"
# Some drivers may need to override these helpers, for example adding
# a 'commit' after the execute.
def executeDDL1(cursor):
cursor.execute(ddl1)
def executeDDL2(cursor):
cursor.execute(ddl2)
@pytest.fixture
def db(request, con):
def fin():
with con.cursor() as cur:
for ddl in (xddl1, xddl2):
try:
cur.execute(ddl)
con.commit()
except driver.Error:
# Assume table didn't exist. Other tests will check if
# execute is busted.
pass
request.addfinalizer(fin)
return con
def test_apilevel():
# Must exist
apilevel = driver.apilevel
# Must equal 2.0
assert apilevel == "2.0"
def test_threadsafety():
try:
# Must exist
threadsafety = driver.threadsafety
# Must be a valid value
assert threadsafety in (0, 1, 2, 3)
except AttributeError:
assert False, "Driver doesn't define threadsafety"
def test_paramstyle():
try:
# Must exist
paramstyle = driver.paramstyle
# Must be a valid value
assert paramstyle in ("qmark", "numeric", "named", "format", "pyformat")
except AttributeError:
assert False, "Driver doesn't define paramstyle"
def test_Exceptions():
# Make sure required exceptions exist, and are in the
# defined hierarchy.
assert issubclass(driver.Warning, Exception)
assert issubclass(driver.Error, Exception)
assert issubclass(driver.InterfaceError, driver.Error)
assert issubclass(driver.DatabaseError, driver.Error)
assert issubclass(driver.OperationalError, driver.Error)
assert issubclass(driver.IntegrityError, driver.Error)
assert issubclass(driver.InternalError, driver.Error)
assert issubclass(driver.ProgrammingError, driver.Error)
assert issubclass(driver.NotSupportedError, driver.Error)
def test_ExceptionsAsConnectionAttributes(con):
# OPTIONAL EXTENSION
# Test for the optional DB API 2.0 extension, where the exceptions
# are exposed as attributes on the Connection object
# I figure this optional extension will be implemented by any
# driver author who is using this test suite, so it is enabled
# by default.
warnings.simplefilter("ignore")
drv = driver
assert con.Warning is drv.Warning
assert con.Error is drv.Error
assert con.InterfaceError is drv.InterfaceError
assert con.DatabaseError is drv.DatabaseError
assert con.OperationalError is drv.OperationalError
assert con.IntegrityError is drv.IntegrityError
assert con.InternalError is drv.InternalError
assert con.ProgrammingError is drv.ProgrammingError
assert con.NotSupportedError is drv.NotSupportedError
warnings.resetwarnings()
def test_commit(con):
# Commit must work, even if it doesn't do anything
con.commit()
def test_rollback(con):
# If rollback is defined, it should either work or throw
# the documented exception
if hasattr(con, "rollback"):
try:
con.rollback()
except driver.NotSupportedError:
pass
def test_cursor(con):
con.cursor()
def test_cursor_isolation(con):
# Make sure cursors created from the same connection have
# the documented transaction isolation level
cur1 = con.cursor()
cur2 = con.cursor()
executeDDL1(cur1)
cur1.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
cur2.execute("select name from %sbooze" % table_prefix)
booze = cur2.fetchall()
assert len(booze) == 1
assert len(booze[0]) == 1
assert booze[0][0] == "Victoria Bitter"
def test_description(con):
cur = con.cursor()
executeDDL1(cur)
assert cur.description is None, (
"cursor.description should be none after executing a "
"statement that can return no rows (such as DDL)"
)
cur.execute("select name from %sbooze" % table_prefix)
assert len(cur.description) == 1, "cursor.description describes too many columns"
assert (
len(cur.description[0]) == 7
), "cursor.description[x] tuples must have 7 elements"
assert (
cur.description[0][0].lower() == "name"
), "cursor.description[x][0] must return column name"
assert cur.description[0][1] == driver.STRING, (
"cursor.description[x][1] must return column type. Got %r"
% cur.description[0][1]
)
# Make sure self.description gets reset
executeDDL2(cur)
assert cur.description is None, (
"cursor.description not being set to None when executing "
"no-result statements (eg. DDL)"
)
def test_rowcount(cursor):
executeDDL1(cursor)
assert cursor.rowcount == -1, (
"cursor.rowcount should be -1 after executing no-result " "statements"
)
cursor.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
assert cursor.rowcount in (-1, 1), (
"cursor.rowcount should == number of rows inserted, or "
"set to -1 after executing an insert statement"
)
cursor.execute("select name from %sbooze" % table_prefix)
assert cursor.rowcount in (-1, 1), (
"cursor.rowcount should == number of rows returned, or "
"set to -1 after executing a select statement"
)
executeDDL2(cursor)
assert cursor.rowcount == -1, (
"cursor.rowcount not being reset to -1 after executing " "no-result statements"
)
lower_func = "lower"
def test_callproc(cursor):
if lower_func and hasattr(cursor, "callproc"):
r = cursor.callproc(lower_func, ("FOO",))
assert len(r) == 1
assert r[0] == "FOO"
r = cursor.fetchall()
assert len(r) == 1, "callproc produced no result set"
assert len(r[0]) == 1, "callproc produced invalid result set"
assert r[0][0] == "foo", "callproc produced invalid results"
def test_close(con):
cur = con.cursor()
con.close()
# cursor.execute should raise an Error if called after connection
# closed
with pytest.raises(driver.Error):
executeDDL1(cur)
# connection.commit should raise an Error if called after connection'
# closed.'
with pytest.raises(driver.Error):
con.commit()
# connection.close should raise an Error if called more than once
with pytest.raises(driver.Error):
con.close()
def test_execute(con):
cur = con.cursor()
_paraminsert(cur)
def _paraminsert(cur):
executeDDL1(cur)
cur.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
assert cur.rowcount in (-1, 1)
if driver.paramstyle == "qmark":
cur.execute("insert into %sbooze values (?)" % table_prefix, ("Cooper's",))
elif driver.paramstyle == "numeric":
cur.execute("insert into %sbooze values (:1)" % table_prefix, ("Cooper's",))
elif driver.paramstyle == "named":
cur.execute(
"insert into %sbooze values (:beer)" % table_prefix, {"beer": "Cooper's"}
)
elif driver.paramstyle == "format":
cur.execute("insert into %sbooze values (%%s)" % table_prefix, ("Cooper's",))
elif driver.paramstyle == "pyformat":
cur.execute(
"insert into %sbooze values (%%(beer)s)" % table_prefix,
{"beer": "Cooper's"},
)
else:
assert False, "Invalid paramstyle"
assert cur.rowcount in (-1, 1)
cur.execute("select name from %sbooze" % table_prefix)
res = cur.fetchall()
assert len(res) == 2, "cursor.fetchall returned too few rows"
beers = [res[0][0], res[1][0]]
beers.sort()
assert beers[0] == "Cooper's", (
"cursor.fetchall retrieved incorrect data, or data inserted " "incorrectly"
)
assert beers[1] == "Victoria Bitter", (
"cursor.fetchall retrieved incorrect data, or data inserted " "incorrectly"
)
def test_executemany(cursor):
executeDDL1(cursor)
largs = [("Cooper's",), ("Boag's",)]
margs = [{"beer": "Cooper's"}, {"beer": "Boag's"}]
if driver.paramstyle == "qmark":
cursor.executemany("insert into %sbooze values (?)" % table_prefix, largs)
elif driver.paramstyle == "numeric":
cursor.executemany("insert into %sbooze values (:1)" % table_prefix, largs)
elif driver.paramstyle == "named":
cursor.executemany("insert into %sbooze values (:beer)" % table_prefix, margs)
elif driver.paramstyle == "format":
cursor.executemany("insert into %sbooze values (%%s)" % table_prefix, largs)
elif driver.paramstyle == "pyformat":
cursor.executemany(
"insert into %sbooze values (%%(beer)s)" % (table_prefix), margs
)
else:
assert False, "Unknown paramstyle"
assert cursor.rowcount in (-1, 2), (
"insert using cursor.executemany set cursor.rowcount to "
"incorrect value %r" % cursor.rowcount
)
cursor.execute("select name from %sbooze" % table_prefix)
res = cursor.fetchall()
assert len(res) == 2, "cursor.fetchall retrieved incorrect number of rows"
beers = [res[0][0], res[1][0]]
beers.sort()
assert beers[0] == "Boag's", "incorrect data retrieved"
assert beers[1] == "Cooper's", "incorrect data retrieved"
def test_fetchone(cursor):
# cursor.fetchone should raise an Error if called before
# executing a select-type query
with pytest.raises(driver.Error):
cursor.fetchone()
# cursor.fetchone should raise an Error if called after
# executing a query that cannot return rows
executeDDL1(cursor)
with pytest.raises(driver.Error):
cursor.fetchone()
cursor.execute("select name from %sbooze" % table_prefix)
assert cursor.fetchone() is None, (
"cursor.fetchone should return None if a query retrieves " "no rows"
)
assert cursor.rowcount in (-1, 0)
# cursor.fetchone should raise an Error if called after
# executing a query that cannot return rows
cursor.execute("insert into %sbooze values ('Victoria Bitter')" % (table_prefix))
with pytest.raises(driver.Error):
cursor.fetchone()
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchone()
assert len(r) == 1, "cursor.fetchone should have retrieved a single row"
assert r[0] == "Victoria Bitter", "cursor.fetchone retrieved incorrect data"
assert (
cursor.fetchone() is None
), "cursor.fetchone should return None if no more rows available"
assert cursor.rowcount in (-1, 1)
samples = [
"Carlton Cold",
"Carlton Draft",
"Mountain Goat",
"Redback",
"Victoria Bitter",
"XXXX",
]
def _populate():
"""Return a list of sql commands to setup the DB for the fetch
tests.
"""
populate = [
"insert into %sbooze values ('%s')" % (table_prefix, s) for s in samples
]
return populate
def test_fetchmany(cursor):
# cursor.fetchmany should raise an Error if called without
# issuing a query
with pytest.raises(driver.Error):
cursor.fetchmany(4)
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchmany()
assert len(r) == 1, (
"cursor.fetchmany retrieved incorrect number of rows, "
"default of arraysize is one."
)
cursor.arraysize = 10
r = cursor.fetchmany(3) # Should get 3 rows
assert len(r) == 3, "cursor.fetchmany retrieved incorrect number of rows"
r = cursor.fetchmany(4) # Should get 2 more
assert len(r) == 2, "cursor.fetchmany retrieved incorrect number of rows"
r = cursor.fetchmany(4) # Should be an empty sequence
assert len(r) == 0, (
"cursor.fetchmany should return an empty sequence after "
"results are exhausted"
)
assert cursor.rowcount in (-1, 6)
# Same as above, using cursor.arraysize
cursor.arraysize = 4
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchmany() # Should get 4 rows
assert len(r) == 4, "cursor.arraysize not being honoured by fetchmany"
r = cursor.fetchmany() # Should get 2 more
assert len(r) == 2
r = cursor.fetchmany() # Should be an empty sequence
assert len(r) == 0
assert cursor.rowcount in (-1, 6)
cursor.arraysize = 6
cursor.execute("select name from %sbooze" % table_prefix)
rows = cursor.fetchmany() # Should get all rows
assert cursor.rowcount in (-1, 6)
assert len(rows) == 6
rows = [row[0] for row in rows]
rows.sort()
# Make sure we get the right data back out
for i in range(0, 6):
assert rows[i] == samples[i], "incorrect data retrieved by cursor.fetchmany"
rows = cursor.fetchmany() # Should return an empty list
assert len(rows) == 0, (
"cursor.fetchmany should return an empty sequence if "
"called after the whole result set has been fetched"
)
assert cursor.rowcount in (-1, 6)
executeDDL2(cursor)
cursor.execute("select name from %sbarflys" % table_prefix)
r = cursor.fetchmany() # Should get empty sequence
assert len(r) == 0, (
"cursor.fetchmany should return an empty sequence if query retrieved no rows"
)
assert cursor.rowcount in (-1, 0)
def test_fetchall(cursor):
# cursor.fetchall should raise an Error if called
# without executing a query that may return rows (such
# as a select)
with pytest.raises(driver.Error):
cursor.fetchall()
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
# cursor.fetchall should raise an Error if called
# after executing a statement that cannot return rows
with pytest.raises(driver.Error):
cursor.fetchall()
cursor.execute("select name from %sbooze" % table_prefix)
rows = cursor.fetchall()
assert cursor.rowcount in (-1, len(samples))
assert len(rows) == len(samples), "cursor.fetchall did not retrieve all rows"
rows = [r[0] for r in rows]
rows.sort()
for i in range(0, len(samples)):
assert rows[i] == samples[i], "cursor.fetchall retrieved incorrect rows"
rows = cursor.fetchall()
assert len(rows) == 0, (
"cursor.fetchall should return an empty list if called "
"after the whole result set has been fetched"
)
assert cursor.rowcount in (-1, len(samples))
executeDDL2(cursor)
cursor.execute("select name from %sbarflys" % table_prefix)
rows = cursor.fetchall()
assert cursor.rowcount in (-1, 0)
assert len(rows) == 0, (
"cursor.fetchall should return an empty list if "
"a select query returns no rows"
)
def test_mixedfetch(cursor):
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
cursor.execute("select name from %sbooze" % table_prefix)
rows1 = cursor.fetchone()
rows23 = cursor.fetchmany(2)
rows4 = cursor.fetchone()
rows56 = cursor.fetchall()
assert cursor.rowcount in (-1, 6)
assert len(rows23) == 2, "fetchmany returned incorrect number of rows"
assert len(rows56) == 2, "fetchall returned incorrect number of rows"
rows = [rows1[0]]
rows.extend([rows23[0][0], rows23[1][0]])
rows.append(rows4[0])
rows.extend([rows56[0][0], rows56[1][0]])
rows.sort()
for i in range(0, len(samples)):
assert rows[i] == samples[i], "incorrect data retrieved or inserted"
def help_nextset_setUp(cur):
"""Should create a procedure called deleteme
that returns two result sets, first the
number of rows in booze then "name from booze"
"""
raise NotImplementedError("Helper not implemented")
def help_nextset_tearDown(cur):
"If cleaning up is needed after nextSetTest"
raise NotImplementedError("Helper not implemented")
def test_nextset(cursor):
if not hasattr(cursor, "nextset"):
return
try:
executeDDL1(cursor)
for sql in _populate():
cursor.execute(sql)
help_nextset_setUp(cursor)
cursor.callproc("deleteme")
numberofrows = cursor.fetchone()
assert numberofrows[0] == len(samples)
assert cursor.nextset()
names = cursor.fetchall()
assert len(names) == len(samples)
s = cursor.nextset()
assert s is None, "No more return sets, should return None"
finally:
help_nextset_tearDown(cursor)
def test_arraysize(cursor):
# Not much here - rest of the tests for this are in test_fetchmany
assert hasattr(cursor, "arraysize"), "cursor.arraysize must be defined"
def test_setinputsizes(cursor):
cursor.setinputsizes(25)
def test_setoutputsize_basic(cursor):
# Basic test is to make sure setoutputsize doesn't blow up
cursor.setoutputsize(1000)
cursor.setoutputsize(2000, 0)
_paraminsert(cursor) # Make sure the cursor still works
def test_None(cursor):
executeDDL1(cursor)
cursor.execute("insert into %sbooze values (NULL)" % table_prefix)
cursor.execute("select name from %sbooze" % table_prefix)
r = cursor.fetchall()
assert len(r) == 1
assert len(r[0]) == 1
assert r[0][0] is None, "NULL value not returned as None"
def test_Date():
driver.Date(2002, 12, 25)
driver.DateFromTicks(time.mktime((2002, 12, 25, 0, 0, 0, 0, 0, 0)))
# Can we assume this? API doesn't specify, but it seems implied
# self.assertEqual(str(d1),str(d2))
def test_Time():
driver.Time(13, 45, 30)
driver.TimeFromTicks(time.mktime((2001, 1, 1, 13, 45, 30, 0, 0, 0)))
# Can we assume this? API doesn't specify, but it seems implied
# self.assertEqual(str(t1),str(t2))
def test_Timestamp():
driver.Timestamp(2002, 12, 25, 13, 45, 30)
driver.TimestampFromTicks(time.mktime((2002, 12, 25, 13, 45, 30, 0, 0, 0)))
# Can we assume this? API doesn't specify, but it seems implied
# self.assertEqual(str(t1),str(t2))
def test_Binary():
driver.Binary(b"Something")
driver.Binary(b"")
def test_STRING():
assert hasattr(driver, "STRING"), "module.STRING must be defined"
def test_BINARY():
assert hasattr(driver, "BINARY"), "module.BINARY must be defined."
def test_NUMBER():
assert hasattr(driver, "NUMBER"), "module.NUMBER must be defined."
def test_DATETIME():
assert hasattr(driver, "DATETIME"), "module.DATETIME must be defined."
def test_ROWID():
assert hasattr(driver, "ROWID"), "module.ROWID must be defined."
# pg8000-1.30.3/test/legacy/test_error_recovery.py
import datetime
import warnings
import pytest
import pg8000
class PG8000TestException(Exception):
pass
def raise_exception(val):
raise PG8000TestException("oh noes!")
def test_py_value_fail(con, mocker):
# Ensure that if types.py_value throws an exception, the original
# exception is raised (PG8000TestException), and the connection is
# still usable after the error.
mocker.patch.object(con, "py_types")
con.py_types = {datetime.time: raise_exception}
with con.cursor() as c, pytest.raises(PG8000TestException):
c.execute("SELECT CAST(%s AS TIME)", (datetime.time(10, 30),))
c.fetchall()
# ensure that the connection is still usable for a new query
c.execute("VALUES ('hw3'::text)")
assert c.fetchone()[0] == "hw3"
def test_no_data_error_recovery(con):
for i in range(1, 4):
with con.cursor() as c, pytest.raises(pg8000.ProgrammingError) as e:
c.execute("DROP TABLE t1")
assert e.value.args[0]["C"] == "42P01"
con.rollback()
def testClosedConnection(db_kwargs):
warnings.simplefilter("ignore")
my_db = pg8000.connect(**db_kwargs)
cursor = my_db.cursor()
my_db.close()
with pytest.raises(my_db.InterfaceError, match="connection is closed"):
cursor.execute("VALUES ('hw1'::text)")
warnings.resetwarnings()
# pg8000-1.30.3/test/legacy/test_paramstyle.py
from pg8000.legacy import convert_paramstyle as convert
# Tests of the convert_paramstyle function.
def test_qmark():
args = 1, 2, 3
new_query, vals = convert(
"qmark",
'SELECT ?, ?, "field_?" FROM t '
"WHERE a='say ''what?''' AND b=? AND c=E'?\\'test\\'?'",
args,
)
expected = (
'SELECT $1, $2, "field_?" FROM t WHERE '
"a='say ''what?''' AND b=$3 AND c=E'?\\'test\\'?'"
)
assert (new_query, vals) == (expected, args)
def test_qmark_2():
args = 1, 2, 3
new_query, vals = convert(
"qmark", "SELECT ?, ?, * FROM t WHERE a=? AND b='are you ''sure?'", args
)
expected = "SELECT $1, $2, * FROM t WHERE a=$3 AND b='are you ''sure?'"
assert (new_query, vals) == (expected, args)
def test_numeric():
args = 1, 2, 3
new_query, vals = convert(
"numeric", "SELECT sum(x)::decimal(5, 2) :2, :1, * FROM t WHERE a=:3", args
)
expected = "SELECT sum(x)::decimal(5, 2) $2, $1, * FROM t WHERE a=$3"
assert (new_query, vals) == (expected, args)
def test_numeric_default_parameter():
args = 1, 2, 3
new_query, vals = convert("numeric", "make_interval(days := 10)", args)
assert (new_query, vals) == ("make_interval(days := 10)", args)
def test_named():
args = {
"f_2": 1,
"f1": 2,
}
new_query, vals = convert(
"named", "SELECT sum(x)::decimal(5, 2) :f_2, :f1 FROM t WHERE a=:f_2", args
)
expected = "SELECT sum(x)::decimal(5, 2) $1, $2 FROM t WHERE a=$1"
assert (new_query, vals) == (expected, (1, 2))
def test_format():
args = 1, 2, 3
new_query, vals = convert(
"format",
"SELECT %s, %s, \"f1_%%\", E'txt_%%' "
"FROM t WHERE a=%s AND b='75%%' AND c = '%' -- Comment with %",
args,
)
expected = (
"SELECT $1, $2, \"f1_%%\", E'txt_%%' FROM t WHERE a=$3 AND "
"b='75%%' AND c = '%' -- Comment with %"
)
assert (new_query, vals) == (expected, args)
sql = (
r"""COMMENT ON TABLE test_schema.comment_test """
r"""IS 'the test % '' " \ table comment'"""
)
new_query, vals = convert("format", sql, args)
assert (new_query, vals) == (sql, args)
def test_format_multiline():
args = 1, 2, 3
new_query, vals = convert("format", "SELECT -- Comment\n%s FROM t", args)
assert (new_query, vals) == ("SELECT -- Comment\n$1 FROM t", args)
def test_py_format():
args = {"f2": 1, "f1": 2, "f3": 3}
new_query, vals = convert(
"pyformat",
"SELECT %(f2)s, %(f1)s, \"f1_%%\", E'txt_%%' "
"FROM t WHERE a=%(f2)s AND b='75%%'",
args,
)
expected = "SELECT $1, $2, \"f1_%%\", E'txt_%%' FROM t WHERE a=$1 AND b='75%%'"
assert (new_query, vals) == (expected, (1, 2))
# pyformat should support %s and an array, too:
args = 1, 2, 3
new_query, vals = convert(
"pyformat",
"SELECT %s, %s, \"f1_%%\", E'txt_%%' FROM t WHERE a=%s AND b='75%%'",
args,
)
expected = "SELECT $1, $2, \"f1_%%\", E'txt_%%' FROM t WHERE a=$3 AND b='75%%'"
assert (new_query, vals) == (expected, args)
# pg8000-1.30.3/test/legacy/test_prepared_statement.py
def test_prepare(con):
con.prepare("SELECT CAST(:v AS INTEGER)")
def test_run(con):
ps = con.prepare("SELECT cast(:v as varchar)")
ps.run(v="speedy")
def test_run_with_no_results(con):
ps = con.prepare("ROLLBACK")
ps.run()
# pg8000-1.30.3/test/legacy/test_query.py
from datetime import datetime as Datetime, timezone as Timezone
from warnings import filterwarnings
import pytest
import pg8000
from pg8000.converters import INET_ARRAY, INTEGER
# Tests relating to the basic operation of the database driver, driven by the
# pg8000 custom interface.
@pytest.fixture
def db_table(request, con):
filterwarnings("ignore", "DB-API extension cursor.next()")
filterwarnings("ignore", "DB-API extension cursor.__iter__()")
con.paramstyle = "format"
with con.cursor() as cursor:
cursor.execute(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, "
"f2 bigint not null, f3 varchar(50) null) "
)
def fin():
try:
with con.cursor() as cursor:
cursor.execute("drop table t1")
except pg8000.ProgrammingError:
pass
request.addfinalizer(fin)
return con
def test_database_error(cursor):
with pytest.raises(pg8000.ProgrammingError):
cursor.execute("INSERT INTO t99 VALUES (1, 2, 3)")
def test_parallel_queries(db_table):
with db_table.cursor() as cursor:
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 10, None))
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 100, None)
)
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (4, 1000, None)
)
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (5, 10000, None)
)
with db_table.cursor() as c1, db_table.cursor() as c2:
c1.execute("SELECT f1, f2, f3 FROM t1")
for row in c1:
f1, f2, f3 = row
c2.execute("SELECT f1, f2, f3 FROM t1 WHERE f1 > %s", (f1,))
for row in c2:
f1, f2, f3 = row
def test_parallel_open_portals(con):
with con.cursor() as c1, con.cursor() as c2:
c1count, c2count = 0, 0
q = "select * from generate_series(1, %s)"
params = (100,)
c1.execute(q, params)
c2.execute(q, params)
for c2row in c2:
c2count += 1
for c1row in c1:
c1count += 1
assert c1count == c2count
# Run a query on a table, alter the structure of the table, then run the
# original query again.
def test_alter(db_table):
with db_table.cursor() as cursor:
cursor.execute("select * from t1")
cursor.execute("alter table t1 drop column f3")
cursor.execute("select * from t1")
# Run a query on a table, drop then re-create the table, then run the
# original query again.
def test_create(db_table):
with db_table.cursor() as cursor:
cursor.execute("select * from t1")
cursor.execute("drop table t1")
cursor.execute("create temporary table t1 (f1 int primary key)")
cursor.execute("select * from t1")
def test_insert_returning(db_table):
with db_table.cursor() as cursor:
cursor.execute("CREATE TABLE t2 (id serial, data text)")
# Test INSERT ... RETURNING with one row...
cursor.execute("INSERT INTO t2 (data) VALUES (%s) RETURNING id", ("test1",))
row_id = cursor.fetchone()[0]
cursor.execute("SELECT data FROM t2 WHERE id = %s", (row_id,))
assert "test1" == cursor.fetchone()[0]
assert cursor.rowcount == 1
# Test with multiple rows...
cursor.execute(
"INSERT INTO t2 (data) VALUES (%s), (%s), (%s) RETURNING id",
("test2", "test3", "test4"),
)
assert cursor.rowcount == 3
ids = tuple([x[0] for x in cursor])
assert len(ids) == 3
def test_row_count(db_table):
with db_table.cursor() as cursor:
expected_count = 57
cursor.executemany(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)",
tuple((i, i, None) for i in range(expected_count)),
)
# Check rowcount after executemany
assert expected_count == cursor.rowcount
cursor.execute("SELECT * FROM t1")
# Check row_count without doing any reading first...
assert expected_count == cursor.rowcount
# Check rowcount after reading some rows, make sure it still
# works...
for i in range(expected_count // 2):
cursor.fetchone()
assert expected_count == cursor.rowcount
with db_table.cursor() as cursor:
# Restart the cursor, read a few rows, and then check rowcount
# again...
cursor.execute("SELECT * FROM t1")
for i in range(expected_count // 3):
cursor.fetchone()
assert expected_count == cursor.rowcount
# Should be -1 for a command with no results
cursor.execute("DROP TABLE t1")
assert -1 == cursor.rowcount
def test_row_count_update(db_table):
with db_table.cursor() as cursor:
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, None))
cursor.execute("INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (2, 10, None))
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (3, 100, None)
)
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (4, 1000, None)
)
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (5, 10000, None)
)
cursor.execute("UPDATE t1 SET f3 = %s WHERE f2 > 101", ("Hello!",))
assert cursor.rowcount == 2
def test_int_oid(cursor):
# https://bugs.launchpad.net/pg8000/+bug/230796
cursor.execute("SELECT typname FROM pg_type WHERE oid = %s", (100,))
def test_unicode_query(cursor):
cursor.execute(
"CREATE TEMPORARY TABLE \u043c\u0435\u0441\u0442\u043e "
"(\u0438\u043c\u044f VARCHAR(50), "
"\u0430\u0434\u0440\u0435\u0441 VARCHAR(250))"
)
def test_executemany(db_table):
with db_table.cursor() as cursor:
cursor.executemany(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)",
((1, 1, "Avast ye!"), (2, 1, None)),
)
cursor.executemany(
"SELECT CAST(%s AS TIMESTAMP)",
((Datetime(2014, 5, 7, tzinfo=Timezone.utc),), (Datetime(2014, 5, 7),)),
)
def test_executemany_setinputsizes(cursor):
"""Make sure that setinputsizes works for all the parameter sets"""
cursor.execute(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, f2 inet[] not null) "
)
cursor.setinputsizes(INTEGER, INET_ARRAY)
cursor.executemany(
"INSERT INTO t1 (f1, f2) VALUES (%s, %s)", ((1, ["1.1.1.1"]), (2, ["0.0.0.0"]))
)
def test_executemany_no_param_sets(cursor):
cursor.executemany("INSERT INTO t1 (f1, f2) VALUES (%s, %s)", [])
assert cursor.rowcount == -1
# Check that autocommit stays off
# We keep track of whether we're in a transaction or not by using the
# READY_FOR_QUERY message.
def test_transactions(db_table):
with db_table.cursor() as cursor:
cursor.execute("commit")
cursor.execute(
"INSERT INTO t1 (f1, f2, f3) VALUES (%s, %s, %s)", (1, 1, "Zombie")
)
cursor.execute("rollback")
cursor.execute("select * from t1")
assert cursor.rowcount == 0
def test_in(cursor):
cursor.execute("SELECT typname FROM pg_type WHERE oid = any(%s)", ([16, 23],))
ret = cursor.fetchall()
assert ret[0][0] == "bool"
def test_no_previous_tpc(con):
con.tpc_begin("Stacey")
with con.cursor() as cursor:
cursor.execute("SELECT * FROM pg_type")
con.tpc_commit()
# Check that tpc_recover() doesn't start a transaction
def test_tpc_recover(con):
con.tpc_recover()
with con.cursor() as cursor:
con.autocommit = True
# If tpc_recover() has started a transaction, this will fail
cursor.execute("VACUUM")
def test_tpc_prepare(con):
xid = "Stacey"
con.tpc_begin(xid)
con.tpc_prepare()
con.tpc_rollback(xid)
def test_empty_query(cursor):
"""No exception raised"""
cursor.execute("")
# rolling back when not in a transaction doesn't generate a warning
def test_rollback_no_transaction(con):
# Remove any existing notices
con.notices.clear()
# First, verify that a raw rollback does produce a notice
con.execute_unnamed("rollback")
assert 1 == len(con.notices)
# 25P01 is the code for no_active_sql_transaction. It has
# a message and severity name, but those might be
# localized/depend on the server version.
assert con.notices.pop().get(b"C") == b"25P01"
# Now going through the rollback method doesn't produce
# any notices because it knows we're not in a transaction.
con.rollback()
assert 0 == len(con.notices)
def test_context_manager_class(con):
assert "__enter__" in pg8000.legacy.Cursor.__dict__
assert "__exit__" in pg8000.legacy.Cursor.__dict__
with con.cursor() as cursor:
cursor.execute("select 1")
def test_close_prepared_statement(con):
ps = con.prepare("select 1")
ps.run()
res = con.run("select count(*) from pg_prepared_statements")
assert res[0][0] == 1 # Should have one prepared statement
ps.close()
res = con.run("select count(*) from pg_prepared_statements")
assert res[0][0] == 0 # Should have no prepared statements
def test_setinputsizes(con):
cursor = con.cursor()
cursor.setinputsizes(20)
cursor.execute("select %s", (None,))
retval = cursor.fetchall()
assert retval[0][0] is None
def test_setinputsizes_class(con):
cursor = con.cursor()
cursor.setinputsizes(bytes)
cursor.execute("select %s", (None,))
retval = cursor.fetchall()
assert retval[0][0] is None
def test_unexecuted_cursor_rowcount(con):
cursor = con.cursor()
assert cursor.rowcount == -1
def test_unexecuted_cursor_description(con):
cursor = con.cursor()
assert cursor.description is None
def test_not_parsed_if_no_params(mocker, cursor):
mock_convert_paramstyle = mocker.patch("pg8000.legacy.convert_paramstyle")
cursor.execute("ROLLBACK")
mock_convert_paramstyle.assert_not_called()
# pg8000-1.30.3/test/legacy/test_typeconversion.py
import decimal
import ipaddress
import os
import time
import uuid
from collections import OrderedDict
from datetime import (
date as Date,
datetime as Datetime,
time as Time,
timedelta as Timedelta,
timezone as Timezone,
)
from enum import Enum
from json import dumps
import pytest
import pytz
import pg8000.converters
from pg8000 import (
Binary,
INTERVAL,
PGInterval,
Range,
pginterval_in,
pginterval_out,
timedelta_in,
)
# Type conversion tests
def test_time_roundtrip(con):
retval = con.run("SELECT cast(:t as time) as f1", t=Time(4, 5, 6))
assert retval[0][0] == Time(4, 5, 6)
def test_date_roundtrip(con):
v = Date(2001, 2, 3)
retval = con.run("SELECT cast(:d as date) as f1", d=v)
assert retval[0][0] == v
def test_bool_roundtrip(con):
retval = con.run("SELECT cast(:b as bool) as f1", b=True)
assert retval[0][0] is True
def test_null_roundtrip(con):
retval = con.run("SELECT :v", v=None)
assert retval[0][0] is None
def test_decimal_roundtrip(cursor):
values = ("1.1", "-1.1", "10000", "20000", "-1000000000.123456789", "1.0", "12.44")
for v in values:
cursor.execute("SELECT CAST(%s AS NUMERIC)", (decimal.Decimal(v),))
retval = cursor.fetchall()
assert str(retval[0][0]) == v
def test_float_roundtrip(con):
val = 1.756e-12
retval = con.run("SELECT cast(:v as double precision)", v=val)
assert retval[0][0] == val
def test_float_plus_infinity_roundtrip(con):
v = float("inf")
retval = con.run("SELECT cast(:v as double precision)", v=v)
assert retval[0][0] == v
def test_str_roundtrip(cursor):
v = "hello world"
cursor.execute("create temporary table test_str (f character varying(255))")
cursor.execute("INSERT INTO test_str VALUES (%s)", (v,))
retval = tuple(cursor.execute("SELECT * from test_str"))
assert retval[0][0] == v
def test_str_then_int(cursor):
v1 = "hello world"
retval = tuple(cursor.execute("SELECT cast(%s as varchar) as f1", (v1,)))
assert retval[0][0] == v1
v2 = 1
retval = tuple(cursor.execute("SELECT cast(%s as varchar) as f1", (v2,)))
assert retval[0][0] == str(v2)
def test_unicode_roundtrip(cursor):
v = "hello \u0173 world"
retval = tuple(cursor.execute("SELECT cast(%s as varchar) as f1", (v,)))
assert retval[0][0] == v
def test_long_roundtrip(con):
v = 50000000000000
retval = con.run("SELECT cast(:v as bigint)", v=v)
assert retval[0][0] == v
def test_int_execute_many_select(cursor):
tuple(cursor.executemany("SELECT CAST(%s AS INTEGER)", ((1,), (40000,))))
def test_int_execute_many_insert(cursor):
v = ([None], [4])
cursor.execute("create temporary table test_int (f integer)")
cursor.executemany("INSERT INTO test_int VALUES (%s)", v)
retval = tuple(cursor.execute("SELECT * from test_int"))
assert retval == v
def test_insert_null(con):
v = None
con.run("CREATE TEMPORARY TABLE test_int (f INTEGER)")
con.run("INSERT INTO test_int VALUES (:v)", v=v)
retval = con.run("SELECT * FROM test_int")
assert retval[0][0] == v
def test_int_roundtrip(con):
int2 = 21
int4 = 23
int8 = 20
MAP = {
int2: "int2",
int4: "int4",
int8: "int8",
}
test_values = [
(0, int2),
(-32767, int2),
(-32768, int4),
(+32767, int2),
(+32768, int4),
(-2147483647, int4),
(-2147483648, int8),
(+2147483647, int4),
(+2147483648, int8),
(-9223372036854775807, int8),
(+9223372036854775807, int8),
]
for value, typoid in test_values:
retval = con.run("SELECT cast(:v as " + MAP[typoid] + ")", v=value)
assert retval[0][0] == value
column_name, column_typeoid = con.description[0][0:2]
assert column_typeoid == typoid
def test_bytea_roundtrip(con):
retval = con.run(
"SELECT cast(:v as bytea)", v=Binary(b"\x00\x01\x02\x03\x02\x01\x00")
)
assert retval[0][0] == b"\x00\x01\x02\x03\x02\x01\x00"
def test_bytearray_round_trip(con):
binary = b"\x00\x01\x02\x03\x02\x01\x00"
retval = con.run("SELECT cast(:v as bytea)", v=bytearray(binary))
assert retval[0][0] == binary
def test_bytearray_subclass_round_trip(con):
class BClass(bytearray):
pass
binary = b"\x00\x01\x02\x03\x02\x01\x00"
retval = con.run("SELECT cast(:v as bytea)", v=BClass(binary))
assert retval[0][0] == binary
def test_timestamp_roundtrip(con):
v = Datetime(2001, 2, 3, 4, 5, 6, 170000)
retval = con.run("SELECT cast(:v as timestamp)", v=v)
assert retval[0][0] == v
# Test that time zone doesn't affect it
orig_tz = os.environ.get("TZ")
os.environ["TZ"] = "America/Edmonton"
time.tzset()
retval = con.run("SELECT cast(:v as timestamp)", v=v)
assert retval[0][0] == v
if orig_tz is None:
del os.environ["TZ"]
else:
os.environ["TZ"] = orig_tz
time.tzset()
def test_interval_repr():
v = PGInterval(microseconds=123456789, days=2, months=24)
assert repr(v) == "<PGInterval 24 months 2 days 123456789 microseconds>"
def test_interval_in_1_year():
assert pginterval_in("1 year") == PGInterval(years=1)
def test_timedelta_in_2_months():
assert timedelta_in("2 hours")
def test_interval_roundtrip(con):
con.register_in_adapter(INTERVAL, pginterval_in)
con.register_out_adapter(PGInterval, pginterval_out)
v = PGInterval(microseconds=123456789, days=2, months=24)
retval = con.run("SELECT cast(:v as interval)", v=v)
assert retval[0][0] == v
def test_timedelta_roundtrip(con):
v = Timedelta(seconds=30)
retval = con.run("SELECT cast(:v as interval)", v=v)
assert retval[0][0] == v
def test_enum_str_round_trip(cursor):
try:
cursor.execute("create type lepton as enum ('electron', 'muon', 'tau')")
v = "muon"
cursor.execute("SELECT cast(%s as lepton) as f1", (v,))
retval = cursor.fetchall()
assert retval[0][0] == v
cursor.execute("CREATE TEMPORARY TABLE testenum (f1 lepton)")
cursor.execute(
"INSERT INTO testenum VALUES (cast(%s as lepton))", ("electron",)
)
finally:
cursor.execute("drop table testenum")
cursor.execute("drop type lepton")
def test_enum_custom_round_trip(con):
class Lepton:
# Implements PEP 435 in the minimal fashion needed
__members__ = OrderedDict()
def __init__(self, name, value, alias=None):
self.name = name
self.value = value
self.__members__[name] = self
setattr(self.__class__, name, self)
if alias:
self.__members__[alias] = self
setattr(self.__class__, alias, self)
def lepton_out(lepton):
return lepton.value
try:
con.run("create type lepton as enum ('1', '2', '3')")
con.register_out_adapter(Lepton, lepton_out)
v = Lepton("muon", "2")
retval = con.run("SELECT CAST(:v AS lepton)", v=v)
assert retval[0][0] == v.value
finally:
con.run("drop type lepton")
def test_enum_py_round_trip(cursor):
class Lepton(Enum):
electron = "1"
muon = "2"
tau = "3"
try:
cursor.execute("create type lepton as enum ('1', '2', '3')")
v = Lepton.muon
retval = tuple(cursor.execute("SELECT cast(%s as lepton) as f1", (v,)))
assert retval[0][0] == v.value
cursor.execute("CREATE TEMPORARY TABLE testenum (f1 lepton)")
cursor.execute(
"INSERT INTO testenum VALUES (cast(%s as lepton))", (Lepton.electron,)
)
finally:
cursor.execute("drop table testenum")
cursor.execute("drop type lepton")
def test_xml_roundtrip(cursor):
v = "gatccgagtac"
retval = tuple(cursor.execute("select xmlparse(content %s) as f1", (v,)))
assert retval[0][0] == v
def test_uuid_roundtrip(con):
v = uuid.UUID("911460f2-1f43-fea2-3e2c-e01fd5b5069d")
retval = con.run("select cast(:v as uuid)", v=v)
assert retval[0][0] == v
def test_inet_roundtrip_network(con):
v = ipaddress.ip_network("192.168.0.0/28")
retval = con.run("select cast(:v as inet)", v=v)
assert retval[0][0] == v
def test_inet_roundtrip_address(con):
v = ipaddress.ip_address("192.168.0.1")
retval = con.run("select cast(:v as inet)", v=v)
assert retval[0][0] == v
def test_xid_roundtrip(cursor):
v = 86722
cursor.execute("select cast(cast(%s as varchar) as xid) as f1", (v,))
retval = cursor.fetchall()
assert retval[0][0] == v
# Should complete without an exception
cursor.execute("select * from pg_locks where transactionid = %s", (97712,))
retval = cursor.fetchall()
def test_int2vector_in(cursor):
retval = tuple(cursor.execute("select cast('1 2' as int2vector) as f1"))
assert retval[0][0] == [1, 2]
# Should complete without an exception
tuple(cursor.execute("select indkey from pg_index"))
def test_range_roundtrip(cursor):
v = Range(1, 3)
retval = tuple(cursor.execute("select cast(%s as int4range)", (v,)))
assert retval[0][0] == v
def test_timestamp_tz_out(cursor):
cursor.execute(
"SELECT '2001-02-03 04:05:06.17 America/Edmonton'::timestamp with time zone"
)
retval = cursor.fetchall()
dt = retval[0][0]
assert dt.tzinfo is not None, "no tzinfo returned"
assert dt.astimezone(Timezone.utc) == Datetime(
2001, 2, 3, 11, 5, 6, 170000, Timezone.utc
), "retrieved value match failed"
def test_timestamp_tz_roundtrip(con):
mst = pytz.timezone("America/Edmonton")
v1 = mst.localize(Datetime(2001, 2, 3, 4, 5, 6, 170000))
retval = con.run("SELECT cast(:v as timestamptz)", v=v1)
v2 = retval[0][0]
assert v2.tzinfo is not None
assert v1 == v2
def test_timestamp_mismatch(cursor):
mst = pytz.timezone("America/Edmonton")
cursor.execute("SET SESSION TIME ZONE 'America/Edmonton'")
try:
cursor.execute(
"CREATE TEMPORARY TABLE TestTz "
"(f1 timestamp with time zone, "
"f2 timestamp without time zone)"
)
cursor.execute(
"INSERT INTO TestTz (f1, f2) VALUES (%s, %s)",
(
# insert timestamp into timestamptz field (v1)
Datetime(2001, 2, 3, 4, 5, 6, 170000),
# insert timestamptz into timestamp field (v2)
mst.localize(Datetime(2001, 2, 3, 4, 5, 6, 170000)),
),
)
cursor.execute("SELECT f1, f2 FROM TestTz")
retval = cursor.fetchall()
# when inserting a timestamp into a timestamptz field,
# postgresql assumes that it is in local time. So the value
# that comes out will be the server's local time interpretation
# of v1. We've set the server's TZ to MST, the time should
# be...
f1 = retval[0][0]
assert f1 == Datetime(2001, 2, 3, 11, 5, 6, 170000, Timezone.utc)
# inserting the timestamptz into a timestamp field, pg8000
# converts the value into UTC, and then the PG server converts
# it into local time for insertion into the field. When we
# query for it, we get the same time back, like the tz was
# dropped.
f2 = retval[0][1]
assert f2 == Datetime(2001, 2, 3, 11, 5, 6, 170000)
finally:
cursor.execute("SET SESSION TIME ZONE DEFAULT")
def test_name_out(cursor):
# select a field that is of "name" type:
tuple(cursor.execute("SELECT usename FROM pg_user"))
# It is sufficient that no errors were encountered.
def test_oid_out(cursor):
tuple(cursor.execute("SELECT oid FROM pg_type"))
# It is sufficient that no errors were encountered.
def test_boolean_in(cursor):
retval = tuple(cursor.execute("SELECT cast('t' as bool)"))
assert retval[0][0]
def test_numeric_out(cursor):
for num in ("5000", "50.34"):
retval = tuple(cursor.execute("SELECT " + num + "::numeric"))
assert str(retval[0][0]) == num
def test_int2_out(cursor):
retval = tuple(cursor.execute("SELECT 5000::smallint"))
assert retval[0][0] == 5000
def test_int4_out(cursor):
retval = tuple(cursor.execute("SELECT 5000::integer"))
assert retval[0][0] == 5000
def test_int8_out(cursor):
retval = tuple(cursor.execute("SELECT 50000000000000::bigint"))
assert retval[0][0] == 50000000000000
def test_float4_out(cursor):
retval = tuple(cursor.execute("SELECT 1.1::real"))
assert retval[0][0] == 1.1
def test_float8_out(cursor):
retval = tuple(cursor.execute("SELECT 1.1::double precision"))
assert retval[0][0] == 1.1000000000000001
def test_varchar_out(cursor):
retval = tuple(cursor.execute("SELECT 'hello'::varchar(20)"))
assert retval[0][0] == "hello"
def test_char_out(cursor):
retval = tuple(cursor.execute("SELECT 'hello'::char(20)"))
assert retval[0][0] == "hello "
def test_text_out(cursor):
retval = tuple(cursor.execute("SELECT 'hello'::text"))
assert retval[0][0] == "hello"
def test_interval_in(con):
con.register_in_adapter(INTERVAL, pginterval_in)
retval = con.run(
"SELECT '1 month 16 days 12 hours 32 minutes 64 seconds'::interval"
)
expected_value = PGInterval(
microseconds=(12 * 60 * 60 * 1000 * 1000)
+ (32 * 60 * 1000 * 1000)
+ (64 * 1000 * 1000),
days=16,
months=1,
)
assert retval[0][0] == expected_value
def test_interval_in_30_seconds(con):
retval = con.run("select interval '30 seconds'")
assert retval[0][0] == Timedelta(seconds=30)
def test_interval_in_12_days_30_seconds(con):
retval = con.run("select interval '12 days 30 seconds'")
assert retval[0][0] == Timedelta(days=12, seconds=30)
def test_timestamp_out(cursor):
cursor.execute("SELECT '2001-02-03 04:05:06.17'::timestamp")
retval = cursor.fetchall()
assert retval[0][0] == Datetime(2001, 2, 3, 4, 5, 6, 170000)
def test_int4_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::INT[] AS f1, '{{1,2,3},{4,5,6}}'::INT[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int2_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::INT2[] AS f1, "
"'{{1,2,3},{4,5,6}}'::INT2[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT2[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int8_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::INT8[] AS f1, "
"'{{1,2,3},{4,5,6}}'::INT8[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT8[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_bool_array_out(cursor):
cursor.execute(
"SELECT '{TRUE,FALSE,FALSE,TRUE}'::BOOL[] AS f1, "
"'{{TRUE,FALSE,TRUE},{FALSE,TRUE,FALSE}}'::BOOL[][] AS f2, "
"'{{{TRUE,FALSE},{FALSE,TRUE}},{{NULL,TRUE},{FALSE,FALSE}}}'"
"::BOOL[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [True, False, False, True]
assert f2 == [[True, False, True], [False, True, False]]
assert f3 == [[[True, False], [False, True]], [[None, True], [False, False]]]
def test_float4_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::FLOAT4[] AS f1, "
"'{{1,2,3},{4,5,6}}'::FLOAT4[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::FLOAT4[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_float8_array_out(cursor):
cursor.execute(
"SELECT '{1,2,3,4}'::FLOAT8[] AS f1, "
"'{{1,2,3},{4,5,6}}'::FLOAT8[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::FLOAT8[][][] AS f3"
)
f1, f2, f3 = cursor.fetchone()
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int_array_roundtrip_small(con):
"""send small int array, should be sent as INT2[]"""
retval = con.run("SELECT cast(:v as int2[])", v=[1, 2, 3])
    assert retval[0][0] == [1, 2, 3]
column_name, column_typeoid = con.description[0][0:2]
assert column_typeoid == 1005, "type should be INT2[]"
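The type OID assertions in this and the following tests reflect a driver picking the narrowest integer array type that can hold every element. A self-contained sketch of that selection rule (the OID constants are PostgreSQL's real array type OIDs; the function itself is illustrative, not pg8000's code):

```python
# PostgreSQL array type OIDs (from pg_type): INT2[]=1005, INT4[]=1007, INT8[]=1016
INT2_ARRAY, INT4_ARRAY, INT8_ARRAY = 1005, 1007, 1016

def int_array_typeoid(values):
    """Pick the narrowest integer array type that can hold every element,
    recursing through nested lists and ignoring NULLs."""
    def flatten(v):
        for item in v:
            if isinstance(item, list):
                yield from flatten(item)
            elif item is not None:
                yield item

    items = list(flatten(values))
    if all(-32768 <= n <= 32767 for n in items):
        return INT2_ARRAY
    if all(-2147483648 <= n <= 2147483647 for n in items):
        return INT4_ARRAY
    return INT8_ARRAY
```

This matches the expectations of the roundtrip tests: small values map to INT2[], a value over 32767 forces INT4[], and a value over 2147483647 forces INT8[].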
def test_int_array_roundtrip_multi(con):
"""test multi-dimensional array, should be sent as INT2[]"""
retval = con.run("SELECT cast(:v as int2[])", v=[[1, 2], [3, 4]])
assert retval[0][0] == [[1, 2], [3, 4]]
column_name, column_typeoid = con.description[0][0:2]
assert column_typeoid == 1005, "type should be INT2[]"
def test_int4_array_roundtrip(con):
"""a larger value should kick it up to INT4[]..."""
retval = con.run("SELECT cast(:v as int4[])", v=[70000, 2, 3])
assert retval[0][0] == [70000, 2, 3]
column_name, column_typeoid = con.description[0][0:2]
assert column_typeoid == 1007, "type should be INT4[]"
def test_int8_array_roundtrip(con):
"""a much larger value should kick it up to INT8[]..."""
retval = con.run("SELECT cast(:v as int8[])", v=[7000000000, 2, 3])
assert retval[0][0] == [7000000000, 2, 3], "retrieved value match failed"
column_name, column_typeoid = con.description[0][0:2]
assert column_typeoid == 1016, "type should be INT8[]"
def test_int_array_with_null_roundtrip(con):
retval = con.run("SELECT cast(:v as int[])", v=[1, None, 3])
assert retval[0][0] == [1, None, 3]
def test_float_array_roundtrip(con):
retval = con.run("SELECT cast(:v as double precision[])", v=[1.1, 2.2, 3.3])
assert retval[0][0] == [1.1, 2.2, 3.3]
def test_bool_array_roundtrip(con):
retval = con.run("SELECT cast(:v as bool[])", v=[True, False, None])
assert retval[0][0] == [True, False, None]
@pytest.mark.parametrize(
"test_input,expected",
[
("SELECT '{a,b,c}'::TEXT[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::CHAR[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::VARCHAR[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::CSTRING[] AS f1", ["a", "b", "c"]),
("SELECT '{a,b,c}'::NAME[] AS f1", ["a", "b", "c"]),
("SELECT '{}'::text[];", []),
('SELECT \'{NULL,"NULL",NULL,""}\'::text[];', [None, "NULL", None, ""]),
],
)
def test_string_array_out(con, test_input, expected):
result = con.run(test_input)
assert result[0][0] == expected
def test_numeric_array_out(cursor):
cursor.execute("SELECT '{1.1,2.2,3.3}'::numeric[] AS f1")
assert cursor.fetchone()[0] == [
decimal.Decimal("1.1"),
decimal.Decimal("2.2"),
decimal.Decimal("3.3"),
]
def test_numeric_array_roundtrip(con):
v = [decimal.Decimal("1.1"), None, decimal.Decimal("3.3")]
retval = con.run("SELECT cast(:v as numeric[])", v=v)
assert retval[0][0] == v
def test_string_array_roundtrip(con):
v = [
"Hello!",
"World!",
"abcdefghijklmnopqrstuvwxyz",
"",
"A bunch of random characters:",
" ~!@#$%^&*()_+`1234567890-=[]\\{}|{;':\",./<>?\t",
None,
]
retval = con.run("SELECT cast(:v as varchar[])", v=v)
assert retval[0][0] == v
def test_array_string_escape():
v = '"'
res = pg8000.converters.array_string_escape(v)
assert res == '"\\""'
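The test above checks that a lone double quote is emitted as a quoted, backslash-escaped array element. A simplified sketch of that kind of escaping (the exact rule set in `pg8000.converters.array_string_escape` may differ):

```python
def array_element_escape(v):
    """Escape a string for use as a PostgreSQL array element: backslashes and
    double quotes are backslash-escaped, and the element is wrapped in double
    quotes when it contains syntax characters, is empty, or needed escaping."""
    escaped = v.replace("\\", "\\\\").replace('"', '\\"')
    if escaped != v or v == "" or v.upper() == "NULL" or any(c in v for c in '{}, '):
        return f'"{escaped}"'
    return v
```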
def test_empty_array(con):
v = []
retval = con.run("SELECT cast(:v as varchar[])", v=v)
assert retval[0][0] == v
def test_macaddr(cursor):
retval = tuple(cursor.execute("SELECT macaddr '08002b:010203'"))
assert retval[0][0] == "08:00:2b:01:02:03"
def test_tsvector_roundtrip(cursor):
cursor.execute(
"SELECT cast(%s as tsvector)", ("a fat cat sat on a mat and ate a fat rat",)
)
retval = cursor.fetchall()
assert retval[0][0] == "'a' 'and' 'ate' 'cat' 'fat' 'mat' 'on' 'rat' 'sat'"
def test_hstore_roundtrip(cursor):
val = '"a"=>"1"'
retval = tuple(cursor.execute("SELECT cast(%s as hstore)", (val,)))
assert retval[0][0] == val
def test_json_roundtrip(con):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
retval = con.run("SELECT cast(:v as jsonb)", v=dumps(val))
assert retval[0][0] == val
def test_jsonb_roundtrip(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as jsonb)", (dumps(val),))
retval = cursor.fetchall()
assert retval[0][0] == val
def test_json_access_object(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as json) -> %s", (dumps(val), "name"))
retval = cursor.fetchall()
assert retval[0][0] == "Apollo 11 Cave"
def test_jsonb_access_object(cursor):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
cursor.execute("SELECT cast(%s as jsonb) -> %s", (dumps(val), "name"))
retval = cursor.fetchall()
assert retval[0][0] == "Apollo 11 Cave"
def test_json_access_array(con):
val = [-1, -2, -3, -4, -5]
retval = con.run(
"SELECT cast(:v1 as json) -> cast(:v2 as int)", v1=dumps(val), v2=2
)
assert retval[0][0] == -3
def test_jsonb_access_array(con):
val = [-1, -2, -3, -4, -5]
retval = con.run(
"SELECT cast(:v1 as jsonb) -> cast(:v2 as int)", v1=dumps(val), v2=2
)
assert retval[0][0] == -3
def test_jsonb_access_path(con):
j = {"a": [1, 2, 3], "b": [4, 5, 6]}
path = ["a", "2"]
retval = con.run("SELECT cast(:v1 as jsonb) #>> :v2", v1=dumps(j), v2=path)
assert retval[0][0] == str(j[path[0]][int(path[1])])
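The `#>>` operator walks a path of keys and array indices and returns the result as text, which is what the test's `str(...)` expression mirrors. A pure-Python equivalent of that semantics (illustrative only, not how the server implements it):

```python
def json_path_as_text(doc, path):
    """Mimic PostgreSQL's #>> operator: follow each path step into the
    document (array steps arrive as text, e.g. "2") and return the result
    as text."""
    current = doc
    for step in path:
        if isinstance(current, list):
            current = current[int(step)]
        else:
            current = current[step]
    return str(current)
```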
def test_infinity_timestamp_roundtrip(cursor):
v = "infinity"
retval = tuple(cursor.execute("SELECT cast(%s as timestamp) as f1", (v,)))
assert retval[0][0] == v
def test_point_roundtrip(cursor):
v = 2.3, 1
retval = tuple(cursor.execute("SELECT cast(%s as point) as f1", (v,)))
assert retval[0][0] == v
def test_time_in():
actual = pg8000.converters.time_in("12:57:18.000396")
assert actual == Time(12, 57, 18, 396)
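`time_in` converts PostgreSQL's text form of a `time` value into `datetime.time`. The same conversion can be sketched with `strptime` (a simplified stand-in for pg8000's parser: it assumes a fractional-seconds part is always present):

```python
from datetime import datetime

def parse_pg_time(text):
    """Parse 'HH:MM:SS.ffffff' into a datetime.time; %f reads the fractional
    part as microseconds, so '.000396' becomes microsecond=396."""
    return datetime.strptime(text, "%H:%M:%S.%f").time()
```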
# pg8000-1.30.3/test/legacy/test_typeobjects.py
from pg8000 import PGInterval
def test_pginterval_constructor_days():
i = PGInterval(days=1)
assert i.months is None
assert i.days == 1
assert i.microseconds is None
# pg8000-1.30.3/test/native/__init__.py (empty)
# pg8000-1.30.3/test/native/auth/__init__.py (empty)
# pg8000-1.30.3/test/native/auth/test_gss.py
import pytest
from pg8000.native import Connection, InterfaceError
def test_gss(db_kwargs):
"""This requires a line in pg_hba.conf that requires gss for the database
pg8000_gss
"""
db_kwargs["database"] = "pg8000_gss"
# Should raise an exception saying gss isn't supported
with pytest.raises(
InterfaceError,
match="Authentication method 7 not supported by pg8000.",
):
Connection(**db_kwargs)
# pg8000-1.30.3/test/native/auth/test_md5.py
def test_md5(con):
"""Called by GitHub Actions with auth method md5.
We just need to check that we can get a connection.
"""
pass
# pg8000-1.30.3/test/native/auth/test_md5_ssl.py
import ssl
from pg8000.native import Connection
def test_ssl(db_kwargs):
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
db_kwargs["ssl_context"] = context
with Connection(**db_kwargs):
pass
# pg8000-1.30.3/test/native/auth/test_password.py
def test_password(con):
"""Called by GitHub Actions with auth method password.
We just need to check that we can get a connection.
"""
pass
# pg8000-1.30.3/test/native/auth/test_scram-sha-256.py
import pytest
from pg8000.native import Connection, DatabaseError
# This requires a line in pg_hba.conf that requires scram-sha-256 for the
# database pg8000_scram_sha_256
def test_scram_sha_256(db_kwargs):
db_kwargs["database"] = "pg8000_scram_sha_256"
# Should only raise an exception saying db doesn't exist
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
# pg8000-1.30.3/test/native/auth/test_scram-sha-256_ssl.py
import ssl
import pytest
from pg8000.native import Connection, DatabaseError
# This requires a line in pg_hba.conf that requires scram-sha-256 for the
# database pg8000_scram_sha_256
def test_scram_sha_256_plus(db_kwargs):
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
db_kwargs["ssl_context"] = context
db_kwargs["database"] = "pg8000_scram_sha_256"
# Should only raise an exception saying db doesn't exist
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
# pg8000-1.30.3/test/native/conftest.py
from os import environ
import pytest
import pg8000.native
@pytest.fixture(scope="class")
def db_kwargs():
db_connect = {"user": "postgres", "password": "pw"}
for kw, var, f in [
("host", "PGHOST", str),
("password", "PGPASSWORD", str),
("port", "PGPORT", int),
]:
try:
db_connect[kw] = f(environ[var])
except KeyError:
pass
return db_connect
@pytest.fixture
def con(request, db_kwargs):
conn = pg8000.native.Connection(**db_kwargs)
def fin():
try:
conn.run("rollback")
except pg8000.native.InterfaceError:
pass
try:
conn.close()
except pg8000.native.InterfaceError:
pass
request.addfinalizer(fin)
return conn
@pytest.fixture
def pg_version(con):
retval = con.run("select current_setting('server_version')")
version = retval[0][0]
idx = version.index(".")
return int(version[:idx])
# pg8000-1.30.3/test/native/test_benchmarks.py
import pytest
@pytest.mark.parametrize(
"txt",
(
("int2", "cast(id / 100 as int2)"),
"cast(id as int4)",
"cast(id * 100 as int8)",
"(id % 2) = 0",
"N'Static text string'",
"cast(id / 100 as float4)",
"cast(id / 100 as float8)",
"cast(id / 100 as numeric)",
"timestamp '2001-09-28'",
),
)
def test_round_trips(con, benchmark, txt):
def torun():
query = """SELECT {0} AS column1, {0} AS column2, {0} AS column3,
{0} AS column4, {0} AS column5, {0} AS column6, {0} AS column7
FROM (SELECT generate_series(1, 10000) AS id) AS tbl""".format(
txt
)
con.run(query)
benchmark(torun)
# pg8000-1.30.3/test/native/test_connection.py
import socket
from datetime import time as Time
import pytest
from pg8000.native import Connection, DatabaseError, InterfaceError, __version__
def test_unix_socket_missing():
conn_params = {"unix_sock": "/file-does-not-exist", "user": "doesn't-matter"}
with pytest.raises(InterfaceError):
Connection(**conn_params)
def test_internet_socket_connection_refused():
conn_params = {"port": 0, "user": "doesn't-matter"}
with pytest.raises(
InterfaceError,
match="Can't create a connection to host localhost and port 0 "
"\\(timeout is None and source_address is None\\).",
):
Connection(**conn_params)
def test_Connection_plain_socket(db_kwargs):
host = db_kwargs.get("host", "localhost")
port = db_kwargs.get("port", 5432)
with socket.create_connection((host, port)) as sock:
user = db_kwargs["user"]
password = db_kwargs["password"]
conn_params = {"sock": sock, "user": user, "password": password}
con = Connection(**conn_params)
res = con.run("SELECT 1")
assert res[0][0] == 1
def test_database_missing(db_kwargs):
db_kwargs["database"] = "missing-db"
with pytest.raises(DatabaseError):
Connection(**db_kwargs)
def test_notify(con):
backend_pid = con.run("select pg_backend_pid()")[0][0]
assert list(con.notifications) == []
con.run("LISTEN test")
con.run("NOTIFY test")
con.run("VALUES (1, 2), (3, 4), (5, 6)")
assert len(con.notifications) == 1
assert con.notifications[0] == (backend_pid, "test", "")
def test_notify_with_payload(con):
backend_pid = con.run("select pg_backend_pid()")[0][0]
assert list(con.notifications) == []
con.run("LISTEN test")
con.run("NOTIFY test, 'Parnham'")
con.run("VALUES (1, 2), (3, 4), (5, 6)")
assert len(con.notifications) == 1
assert con.notifications[0] == (backend_pid, "test", "Parnham")
# This requires a line in pg_hba.conf that requires md5 for the database
# pg8000_md5
def test_md5(db_kwargs):
db_kwargs["database"] = "pg8000_md5"
# Should only raise an exception saying db doesn't exist
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
# This requires a line in pg_hba.conf that requires 'password' for the
# database pg8000_password
def test_password(db_kwargs):
db_kwargs["database"] = "pg8000_password"
# Should only raise an exception saying db doesn't exist
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
def test_unicode_databaseName(db_kwargs):
db_kwargs["database"] = "pg8000_sn\uFF6Fw"
# Should only raise an exception saying db doesn't exist
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
def test_bytes_databaseName(db_kwargs):
"""Should only raise an exception saying db doesn't exist"""
db_kwargs["database"] = bytes("pg8000_sn\uFF6Fw", "utf8")
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
def test_bytes_password(con, db_kwargs):
# Create user
username = "boltzmann"
password = "cha\uFF6Fs"
con.run("create user " + username + " with password '" + password + "';")
db_kwargs["user"] = username
db_kwargs["password"] = password.encode("utf8")
db_kwargs["database"] = "pg8000_md5"
with pytest.raises(DatabaseError, match="3D000"):
Connection(**db_kwargs)
con.run("drop role " + username)
def test_broken_pipe_read(con, db_kwargs):
db1 = Connection(**db_kwargs)
res = db1.run("select pg_backend_pid()")
pid1 = res[0][0]
con.run("select pg_terminate_backend(:v)", v=pid1)
with pytest.raises(InterfaceError, match="network error"):
db1.run("select 1")
try:
db1.close()
except InterfaceError:
pass
def test_broken_pipe_unpack(con):
res = con.run("select pg_backend_pid()")
pid1 = res[0][0]
with pytest.raises(InterfaceError, match="network error"):
con.run("select pg_terminate_backend(:v)", v=pid1)
def test_broken_pipe_flush(con, db_kwargs):
db1 = Connection(**db_kwargs)
res = db1.run("select pg_backend_pid()")
pid1 = res[0][0]
con.run("select pg_terminate_backend(:v)", v=pid1)
try:
db1.run("select 1")
except BaseException:
pass
    # Sometimes raises and sometimes doesn't
try:
db1.close()
except InterfaceError as e:
assert str(e) == "network error"
def test_application_name(db_kwargs):
app_name = "my test application name"
db_kwargs["application_name"] = app_name
with Connection(**db_kwargs) as db:
res = db.run(
"select application_name from pg_stat_activity "
" where pid = pg_backend_pid()"
)
application_name = res[0][0]
assert application_name == app_name
def test_application_name_integer(db_kwargs):
db_kwargs["application_name"] = 1
with pytest.raises(
InterfaceError,
match="The parameter application_name can't be of type .",
):
Connection(**db_kwargs)
def test_application_name_bytearray(db_kwargs):
db_kwargs["application_name"] = bytearray(b"Philby")
with Connection(**db_kwargs):
pass
class PG8000TestException(Exception):
pass
def raise_exception(val):
raise PG8000TestException("oh noes!")
def test_py_value_fail(con, mocker):
# Ensure that if types.py_value throws an exception, the original
# exception is raised (PG8000TestException), and the connection is
# still usable after the error.
mocker.patch.object(con, "py_types")
con.py_types = {Time: raise_exception}
with pytest.raises(PG8000TestException):
con.run("SELECT CAST(:v AS TIME)", v=Time(10, 30))
# ensure that the connection is still usable for a new query
res = con.run("VALUES ('hw3'::text)")
assert res[0][0] == "hw3"
def test_no_data_error_recovery(con):
for i in range(1, 4):
with pytest.raises(DatabaseError) as e:
con.run("DROP TABLE t1")
assert e.value.args[0]["C"] == "42P01"
con.run("ROLLBACK")
def test_closed_connection(con):
con.close()
with pytest.raises(InterfaceError, match="connection is closed"):
con.run("VALUES ('hw1'::text)")
def test_version():
try:
from importlib.metadata import version
except ImportError:
from importlib_metadata import version
v = version("pg8000")
assert __version__ == v
@pytest.mark.parametrize(
"commit",
[
"commit",
"COMMIT;",
],
)
def test_failed_transaction_commit(con, commit):
con.run("create temporary table tt (f1 int primary key)")
con.run("begin")
try:
con.run("insert into tt(f1) values(null)")
except DatabaseError:
pass
with pytest.raises(InterfaceError):
con.run(commit)
@pytest.mark.parametrize(
"rollback",
[
"rollback",
"rollback;",
"ROLLBACK ;",
],
)
def test_failed_transaction_rollback(con, rollback):
con.run("create temporary table tt (f1 int primary key)")
con.run("begin")
try:
con.run("insert into tt(f1) values(null)")
except DatabaseError:
pass
con.run(rollback)
@pytest.mark.parametrize(
"rollback",
[
"rollback to sp",
"rollback to sp;",
"ROLLBACK TO sp ;",
],
)
def test_failed_transaction_rollback_to_savepoint(con, rollback):
con.run("create temporary table tt (f1 int primary key)")
con.run("begin")
con.run("SAVEPOINT sp;")
try:
con.run("insert into tt(f1) values(null)")
except DatabaseError:
pass
con.run(rollback)
@pytest.mark.parametrize(
"sql",
[
"BEGIN",
"select * from tt;",
],
)
def test_failed_transaction_sql(con, sql):
con.run("create temporary table tt (f1 int primary key)")
con.run("begin")
try:
con.run("insert into tt(f1) values(null)")
except DatabaseError:
pass
with pytest.raises(DatabaseError):
con.run(sql)
# pg8000-1.30.3/test/native/test_copy.py
from io import BytesIO, StringIO
import pytest
@pytest.fixture
def db_table(request, con):
con.run("START TRANSACTION")
con.run(
"CREATE TEMPORARY TABLE t1 "
"(f1 int primary key, f2 int not null, f3 varchar(50) null) "
"on commit drop"
)
return con
def test_copy_to_with_table(db_table):
db_table.run("INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v1, :v2)", v1=1, v2="1")
db_table.run("INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v1, :v2)", v1=2, v2="2")
db_table.run("INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v1, :v2)", v1=3, v2="3")
stream = BytesIO()
db_table.run("copy t1 to stdout", stream=stream)
assert stream.getvalue() == b"1\t1\t1\n2\t2\t2\n3\t3\t3\n"
assert db_table.row_count == 3
def test_copy_to_with_query(con):
stream = BytesIO()
con.run(
"COPY (SELECT 1 as One, 2 as Two) TO STDOUT WITH DELIMITER "
"'X' CSV HEADER QUOTE AS 'Y' FORCE QUOTE Two",
stream=stream,
)
assert stream.getvalue() == b"oneXtwo\n1XY2Y\n"
assert con.row_count == 1
def test_copy_to_with_text_stream(con):
stream = StringIO()
con.run(
"COPY (SELECT 1 as One, 2 as Two) TO STDOUT WITH DELIMITER "
"'X' CSV HEADER QUOTE AS 'Y' FORCE QUOTE Two",
stream=stream,
)
assert stream.getvalue() == "oneXtwo\n1XY2Y\n"
assert con.row_count == 1
def test_copy_from_with_table(db_table):
stream = BytesIO(b"1\t1\t1\n2\t2\t2\n3\t3\t3\n")
db_table.run("copy t1 from STDIN", stream=stream)
assert db_table.row_count == 3
retval = db_table.run("SELECT * FROM t1 ORDER BY f1")
assert retval == [[1, 1, "1"], [2, 2, "2"], [3, 3, "3"]]
def test_copy_from_with_text_stream(db_table):
stream = StringIO("1\t1\t1\n2\t2\t2\n3\t3\t3\n")
db_table.run("copy t1 from STDIN", stream=stream)
retval = db_table.run("SELECT * FROM t1 ORDER BY f1")
assert retval == [[1, 1, "1"], [2, 2, "2"], [3, 3, "3"]]
def test_copy_from_with_query(db_table):
stream = BytesIO(b"f1Xf2\n1XY1Y\n")
db_table.run(
"COPY t1 (f1, f2) FROM STDIN WITH DELIMITER 'X' CSV HEADER "
"QUOTE AS 'Y' FORCE NOT NULL f1",
stream=stream,
)
assert db_table.row_count == 1
retval = db_table.run("SELECT * FROM t1 ORDER BY f1")
assert retval == [[1, 1, None]]
def test_copy_from_with_error(db_table):
stream = BytesIO(b"f1Xf2\n\n1XY1Y\n")
with pytest.raises(BaseException) as e:
db_table.run(
"COPY t1 (f1, f2) FROM STDIN WITH DELIMITER 'X' CSV HEADER "
"QUOTE AS 'Y' FORCE NOT NULL f1",
stream=stream,
)
arg = {
"S": ("ERROR",),
"C": ("22P02",),
"M": (
'invalid input syntax for type integer: ""',
'invalid input syntax for integer: ""',
),
"W": ('COPY t1, line 2, column f1: ""',),
"F": ("numutils.c",),
"R": ("pg_atoi", "pg_strtoint32"),
}
earg = e.value.args[0]
for k, v in arg.items():
assert earg[k] in v
def test_copy_from_with_text_iterable(db_table):
stream = ["1\t1\t1\n", "2\t2\t2\n", "3\t3\t3\n"]
db_table.run("copy t1 from STDIN", stream=stream)
retval = db_table.run("SELECT * FROM t1 ORDER BY f1")
assert retval == [[1, 1, "1"], [2, 2, "2"], [3, 3, "3"]]
# pg8000-1.30.3/test/native/test_core.py
from io import BytesIO
import pytest
from pg8000.core import (
Context,
CoreConnection,
NULL_BYTE,
PASSWORD,
_create_message,
_make_socket,
_read,
)
from pg8000.native import InterfaceError
def test_make_socket(mocker):
unix_sock = None
sock = mocker.Mock()
host = "localhost"
port = 5432
timeout = None
source_address = None
tcp_keepalive = True
ssl_context = None
_make_socket(
unix_sock, sock, host, port, timeout, source_address, tcp_keepalive, ssl_context
)
def test_handle_AUTHENTICATION_3(mocker):
"""Shouldn't send a FLUSH message, as FLUSH only used in extended-query"""
mocker.patch.object(CoreConnection, "__init__", lambda x: None)
con = CoreConnection()
password = "barbour".encode("utf8")
con.password = password
con._sock = mocker.Mock()
buf = BytesIO()
con._sock.write = buf.write
CoreConnection.handle_AUTHENTICATION_REQUEST(con, b"\x00\x00\x00\x03", None)
assert buf.getvalue() == _create_message(PASSWORD, password + NULL_BYTE)
def test_create_message():
msg = _create_message(PASSWORD, "barbour".encode("utf8") + NULL_BYTE)
assert msg == b"p\x00\x00\x00\x0cbarbour\x00"
def test_handle_ERROR_RESPONSE(mocker):
"""Check it handles invalid encodings in the error messages"""
mocker.patch.object(CoreConnection, "__init__", lambda x: None)
con = CoreConnection()
con._client_encoding = "utf8"
data = b"S\xc2err" + NULL_BYTE + NULL_BYTE
context = Context(None)
CoreConnection.handle_ERROR_RESPONSE(con, data, context)
assert str(context.error) == "{'S': '�err'}"
def test_read(mocker):
mock_socket = mocker.Mock()
mock_socket.read = mocker.Mock(return_value=b"")
with pytest.raises(InterfaceError, match="network error"):
_read(mock_socket, 5)
# pg8000-1.30.3/test/native/test_prepared_statement.py
def test_prepare(con):
con.prepare("SELECT CAST(:v AS INTEGER)")
def test_run(con):
ps = con.prepare("SELECT cast(:v as varchar)")
ps.run(v="speedy")
# pg8000-1.30.3/test/native/test_query.py
import pytest
from pg8000.native import DatabaseError, to_statement
# Tests relating to the basic operation of the database driver, driven by the
# pg8000 custom interface.
@pytest.fixture
def db_table(request, con):
con.run(
"CREATE TEMPORARY TABLE t1 (f1 int primary key, "
"f2 bigint not null, f3 varchar(50) null) "
)
def fin():
try:
con.run("drop table t1")
except DatabaseError:
pass
request.addfinalizer(fin)
return con
def test_database_error(con):
with pytest.raises(DatabaseError):
con.run("INSERT INTO t99 VALUES (1, 2, 3)")
# Run a query on a table, alter the structure of the table, then run the
# original query again.
def test_alter(db_table):
db_table.run("select * from t1")
db_table.run("alter table t1 drop column f3")
db_table.run("select * from t1")
# Run a query on a table, drop then re-create the table, then run the
# original query again.
def test_create(db_table):
db_table.run("select * from t1")
db_table.run("drop table t1")
db_table.run("create temporary table t1 (f1 int primary key)")
db_table.run("select * from t1")
def test_parametrized(db_table):
res = db_table.run("SELECT f1, f2, f3 FROM t1 WHERE f1 > :f1", f1=3)
for row in res:
f1, f2, f3 = row
def test_insert_returning(db_table):
db_table.run("CREATE TEMPORARY TABLE t2 (id serial, data text)")
# Test INSERT ... RETURNING with one row...
res = db_table.run("INSERT INTO t2 (data) VALUES (:v) RETURNING id", v="test1")
row_id = res[0][0]
res = db_table.run("SELECT data FROM t2 WHERE id = :v", v=row_id)
assert "test1" == res[0][0]
assert db_table.row_count == 1
# Test with multiple rows...
res = db_table.run(
"INSERT INTO t2 (data) VALUES (:v1), (:v2), (:v3) " "RETURNING id",
v1="test2",
v2="test3",
v3="test4",
)
assert db_table.row_count == 3
ids = [x[0] for x in res]
assert len(ids) == 3
def test_row_count_select(db_table):
expected_count = 57
for i in range(expected_count):
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=i, v2=i, v3=None
)
db_table.run("SELECT * FROM t1")
# Check row_count
assert expected_count == db_table.row_count
# Should be -1 for a command with no results
db_table.run("DROP TABLE t1")
assert -1 == db_table.row_count
def test_row_count_delete(db_table):
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=1, v2=1, v3=None
)
db_table.run("DELETE FROM t1")
assert db_table.row_count == 1
def test_row_count_update(db_table):
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=1, v2=1, v3=None
)
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=2, v2=10, v3=None
)
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=3, v2=100, v3=None
)
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=4, v2=1000, v3=None
)
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=5, v2=10000, v3=None
)
db_table.run("UPDATE t1 SET f3 = :v1 WHERE f2 > 101", v1="Hello!")
assert db_table.row_count == 2
def test_int_oid(con):
# https://bugs.launchpad.net/pg8000/+bug/230796
con.run("SELECT typname FROM pg_type WHERE oid = :v", v=100)
def test_unicode_query(con):
con.run(
"CREATE TEMPORARY TABLE \u043c\u0435\u0441\u0442\u043e "
"(\u0438\u043c\u044f VARCHAR(50), "
"\u0430\u0434\u0440\u0435\u0441 VARCHAR(250))"
)
def test_transactions(db_table):
db_table.run("start transaction")
db_table.run(
"INSERT INTO t1 (f1, f2, f3) VALUES (:v1, :v2, :v3)", v1=1, v2=1, v3="Zombie"
)
db_table.run("rollback")
db_table.run("select * from t1")
assert db_table.row_count == 0
def test_in(con):
ret = con.run("SELECT typname FROM pg_type WHERE oid = any(:v)", v=[16, 23])
assert ret[0][0] == "bool"
def test_empty_query(con):
"""No exception thrown"""
con.run("")
def test_rollback_no_transaction(con):
# Remove any existing notices
con.notices.clear()
# First, verify that a raw rollback does produce a notice
con.run("rollback")
assert 1 == len(con.notices)
    # 25P01 is the code for no_active_sql_transaction. It has
# a message and severity name, but those might be
# localized/depend on the server version.
assert con.notices.pop().get(b"C") == b"25P01"
def test_close_prepared_statement(con):
ps = con.prepare("select 1")
ps.run()
res = con.run("select count(*) from pg_prepared_statements")
assert res[0][0] == 1 # Should have one prepared statement
ps.close()
res = con.run("select count(*) from pg_prepared_statements")
assert res[0][0] == 0 # Should have no prepared statements
def test_no_data(con):
assert con.run("START TRANSACTION") is None
def test_multiple_statements(con):
statements = "SELECT 5; SELECT 'Erich Fromm';"
assert con.run(statements) == [[5], ["Erich Fromm"]]
def test_unexecuted_connection_row_count(con):
assert con.row_count is None
def test_unexecuted_connection_columns(con):
assert con.columns is None
def test_sql_prepared_statement(con):
con.run("PREPARE gen_series AS SELECT generate_series(1, 10);")
con.run("EXECUTE gen_series")
def test_to_statement():
new_query, _ = to_statement(
"SELECT sum(x)::decimal(5, 2) :f_2, :f1 FROM t WHERE a=:f_2"
)
expected = "SELECT sum(x)::decimal(5, 2) $1, $2 FROM t WHERE a=$1"
assert new_query == expected
def test_to_statement_quotes():
new_query, _ = to_statement("SELECT $$'$$ = :v")
expected = "SELECT $$'$$ = $1"
assert new_query == expected
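The two tests above pin down `to_statement`'s core rewrite: each distinct `:name` becomes `$N` in order of first appearance, repeated names reuse their number, and `::` casts are left alone. A simplified regex sketch of that rewrite (unlike the real `to_statement`, it does not track string literals or dollar-quoted sections, so a `:name` inside a literal would be wrongly rewritten):

```python
import re

def to_statement_simple(query):
    """Rewrite :name placeholders to $N. The negative lookbehind skips the
    second colon of a ::type cast; a distinct name list preserves first-use
    ordering so repeated names map to the same $N."""
    params = []

    def replace(match):
        name = match.group(1)
        if name not in params:
            params.append(name)
        return f"${params.index(name) + 1}"

    return re.sub(r"(?<!:):([A-Za-z_]\w*)", replace, query), tuple(params)
```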
def test_not_parsed_if_no_params(mocker, con):
mock_to_statement = mocker.patch("pg8000.native.to_statement")
con.run("ROLLBACK")
mock_to_statement.assert_not_called()
# pg8000-1.30.3/test/native/test_typeconversion.py
import os
import time
from collections import OrderedDict
from datetime import (
date as Date,
datetime as Datetime,
time as Time,
timedelta as Timedelta,
timezone as Timezone,
)
from decimal import Decimal
from enum import Enum
from ipaddress import IPv4Address, IPv4Network
from json import dumps
from locale import LC_ALL, localeconv, setlocale
from uuid import UUID
import pytest
import pytz
from pg8000.converters import (
BIGINT,
BIGINT_ARRAY,
BOOLEAN,
CIDR_ARRAY,
DATE,
FLOAT_ARRAY,
INET,
INTEGER_ARRAY,
INTERVAL,
JSON,
JSONB,
JSONB_ARRAY,
JSON_ARRAY,
MONEY,
MONEY_ARRAY,
NUMERIC,
NUMERIC_ARRAY,
PGInterval,
POINT,
Range,
SMALLINT_ARRAY,
TIME,
TIMESTAMP,
TIMESTAMPTZ,
TIMESTAMPTZ_ARRAY,
TIMESTAMP_ARRAY,
UUID_ARRAY,
UUID_TYPE,
XID,
pg_interval_in,
pg_interval_out,
)
def test_str_then_int(con):
v1 = "hello world"
retval = con.run("SELECT cast(:v1 as varchar) as f1", v1=v1)
assert retval[0][0] == v1
v2 = 1
retval = con.run("SELECT cast(:v2 as varchar) as f1", v2=v2)
assert retval[0][0] == str(v2)
def test_insert_null(con):
v = None
con.run("CREATE TEMPORARY TABLE test_int (f INTEGER)")
con.run("INSERT INTO test_int VALUES (:v)", v=v)
retval = con.run("SELECT * FROM test_int")
assert retval[0][0] == v
def test_int_roundtrip(con):
int2 = 21  # pg_type OID for int2
int4 = 23  # pg_type OID for int4
int8 = 20  # pg_type OID for int8
MAP = {
int2: "int2",
int4: "int4",
int8: "int8",
}
test_values = [
(0, int2),
(-32767, int2),
(-32768, int4),
(+32767, int2),
(+32768, int4),
(-2147483647, int4),
(-2147483648, int8),
(+2147483647, int4),
(+2147483648, int8),
(-9223372036854775807, int8),
(+9223372036854775807, int8),
]
for value, typoid in test_values:
retval = con.run("SELECT cast(:v as " + MAP[typoid] + ")", v=value)
assert retval[0][0] == value
column_type_oid = con.columns[0]["type_oid"]
assert column_type_oid == typoid
def test_bytearray_subclass_round_trip(con):
class BClass(bytearray):
pass
binary = b"\x00\x01\x02\x03\x02\x01\x00"
retval = con.run("SELECT cast(:v as bytea)", v=BClass(binary))
assert retval[0][0] == binary
def test_timestamp_roundtrip(con):
v = Datetime(2001, 2, 3, 4, 5, 6, 170000)
retval = con.run("SELECT cast(:v as timestamp)", v=v)
assert retval[0][0] == v
# Test that time zone doesn't affect it
orig_tz = os.environ.get("TZ")
os.environ["TZ"] = "America/Edmonton"
time.tzset()
retval = con.run("SELECT cast(:v as timestamp)", v=v)
assert retval[0][0] == v
if orig_tz is None:
del os.environ["TZ"]
else:
os.environ["TZ"] = orig_tz
time.tzset()
def test_interval_roundtrip(con):
con.register_in_adapter(INTERVAL, pg_interval_in)
con.register_out_adapter(PGInterval, pg_interval_out)
v = PGInterval(microseconds=123456789, days=2, months=24)
retval = con.run("SELECT cast(:v as interval)", v=v)
assert retval[0][0] == v
def test_enum_str_round_trip(con):
try:
con.run("create type lepton as enum ('electron', 'muon', 'tau')")
v = "muon"
retval = con.run("SELECT cast(:v as lepton) as f1", v=v)
assert retval[0][0] == v
con.run("CREATE TEMPORARY TABLE testenum (f1 lepton)")
con.run("INSERT INTO testenum VALUES (cast(:v as lepton))", v="electron")
finally:
con.run("drop table testenum")
con.run("drop type lepton")
def test_enum_custom_round_trip(con):
class Lepton:
# Implements PEP 435 in the minimal fashion needed
__members__ = OrderedDict()
def __init__(self, name, value, alias=None):
self.name = name
self.value = value
self.__members__[name] = self
setattr(self.__class__, name, self)
if alias:
self.__members__[alias] = self
setattr(self.__class__, alias, self)
def lepton_out(lepton):
return lepton.value
try:
con.run("create type lepton as enum ('1', '2', '3')")
con.register_out_adapter(Lepton, lepton_out)
v = Lepton("muon", "2")
retval = con.run("SELECT CAST(:v AS lepton)", v=v)
assert retval[0][0] == v.value
finally:
con.run("drop type lepton")
def test_enum_py_round_trip(con):
class Lepton(Enum):
electron = "1"
muon = "2"
tau = "3"
try:
con.run("create type lepton as enum ('1', '2', '3')")
v = Lepton.muon
retval = con.run("SELECT cast(:v as lepton) as f1", v=v)
assert retval[0][0] == v.value
con.run("CREATE TEMPORARY TABLE testenum (f1 lepton)")
con.run("INSERT INTO testenum VALUES (cast(:v as lepton))", v=Lepton.electron)
finally:
con.run("drop table testenum")
con.run("drop type lepton")
def test_xml_roundtrip(con):
v = "gatccgagtac"
retval = con.run("select xmlparse(content :v) as f1", v=v)
assert retval[0][0] == v
def test_int2vector_in(con):
retval = con.run("select cast('1 2' as int2vector) as f1")
assert retval[0][0] == [1, 2]
# Should complete without an exception
con.run("select indkey from pg_index")
@pytest.mark.parametrize(
"tz, test_input,test_output",
[
[
"UTC",
"2001-02-03 04:05:06.17 America/Edmonton",
Datetime(2001, 2, 3, 11, 5, 6, 170000, Timezone.utc),
],
[
"UTC",
"2001-02-03 04:05:06.17+01:30",
Datetime(2001, 2, 3, 2, 35, 6, 170000, Timezone.utc),
],
[
"01:30",
"2001-02-03 04:05:06.17+01:30",
Datetime(2001, 2, 3, 2, 35, 6, 170000, Timezone.utc),
],
[
"UTC",
"infinity",
"infinity",
],
[
"UTC",
"-infinity",
"-infinity",
],
],
)
def test_timestamptz_in(con, tz, test_input, test_output):
con.run(f"SET TIME ZONE '{tz}'")
retval = con.run(f"SELECT CAST('{test_input}' AS timestamp with time zone)")
dt = retval[0][0]
assert dt == test_output
def test_timestamp_tz_roundtrip(con):
mst = pytz.timezone("America/Edmonton")
v1 = mst.localize(Datetime(2001, 2, 3, 4, 5, 6, 170000))
retval = con.run("SELECT cast(:v as timestamptz)", v=v1)
v2 = retval[0][0]
assert v2.tzinfo is not None
assert v1 == v2
def test_timestamp_mismatch(con):
mst = pytz.timezone("America/Edmonton")
con.run("SET SESSION TIME ZONE 'America/Edmonton'")
try:
con.run(
"CREATE TEMPORARY TABLE TestTz (f1 timestamp with time zone, "
"f2 timestamp without time zone)"
)
con.run(
"INSERT INTO TestTz (f1, f2) VALUES (:v1, :v2)",
# insert timestamp into timestamptz field (v1)
v1=Datetime(2001, 2, 3, 4, 5, 6, 170000),
# insert timestamptz into timestamp field (v2)
v2=mst.localize(Datetime(2001, 2, 3, 4, 5, 6, 170000)),
)
retval = con.run("SELECT f1, f2 FROM TestTz")
# when inserting a timestamp into a timestamptz field,
# postgresql assumes that it is in local time. So the value
# that comes out will be the server's local time interpretation
# of v1. We've set the server's TZ to MST, the time should
# be...
f1 = retval[0][0]
assert f1 == Datetime(2001, 2, 3, 11, 5, 6, 170000, Timezone.utc)
# inserting the timestamptz into a timestamp field, pg8000 converts the
# value into UTC, and then the PG server sends that time back
f2 = retval[0][1]
assert f2 == Datetime(2001, 2, 3, 11, 5, 6, 170000)
finally:
con.run("SET SESSION TIME ZONE DEFAULT")
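The timezone arithmetic behind the assertions above can be checked without a database: 04:05 in America/Edmonton is MST (UTC-7) in February, so it comes out as 11:05 UTC. A standalone sketch using the stdlib `zoneinfo` instead of `pytz` (assumes the tz database is available on the system):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

local = datetime(2001, 2, 3, 4, 5, 6, 170000, tzinfo=ZoneInfo("America/Edmonton"))
as_utc = local.astimezone(timezone.utc)

# MST is UTC-7, so 04:05 local becomes 11:05 UTC.
assert as_utc == datetime(2001, 2, 3, 11, 5, 6, 170000, tzinfo=timezone.utc)
```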
@pytest.mark.parametrize(
"select,expected",
[
["CAST('t' AS bool)", True],
["5000::smallint", 5000],
["5000::numeric", Decimal("5000")],
["50.34::numeric", Decimal("50.34")],
["5000::integer", 5000],
["50000000000000::bigint", 50000000000000],
["1.1::real", 1.1],
["1.1::double precision", 1.1000000000000001],
["'hello'::varchar(20)", "hello"],
["'hello'::char(20)", "hello "],
["'hello'::text", "hello"],
["(1,2)", ("1", "2")],
["CAST('20000-01-01' AS TIMESTAMP WITHOUT TIME ZONE)", "20000-01-01 00:00:00"],
],
)
def test_in(con, select, expected):
retval = con.run(f"SELECT {select}")
assert retval[0][0] == expected
def test_name_out(con):
# select a field that is of "name" type:
con.run("SELECT usename FROM pg_user")
# It is sufficient that no errors were encountered.
def test_oid_out(con):
con.run("SELECT oid FROM pg_type")
# It is sufficient that no errors were encountered.
def test_pg_interval_in(con):
con.register_in_adapter(1186, pg_interval_in)
retval = con.run(
"SELECT CAST('1 month 16 days 12 hours 32 minutes 64 seconds' as INTERVAL)"
)
expected_value = PGInterval(
microseconds=(12 * 60 * 60 * 1000 * 1000)
+ (32 * 60 * 1000 * 1000)
+ (64 * 1000 * 1000),
days=16,
months=1,
)
assert retval[0][0] == expected_value
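The expected microseconds figure above is simply 12 hours, 32 minutes and 64 seconds expressed in microseconds:

```python
# 12 h + 32 min + 64 s, converted to seconds and then microseconds
usecs = (12 * 3600 + 32 * 60 + 64) * 1_000_000
assert usecs == 45_184_000_000
```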
@pytest.mark.parametrize(
"test_input,test_output",
[
["12 days 30 seconds", Timedelta(days=12, seconds=30)],
["30 seconds", Timedelta(seconds=30)],
],
)
def test_interval_in_postgres(con, test_input, test_output):
con.run("SET intervalstyle TO 'postgres'")
retval = con.run(f"SELECT CAST('{test_input}' AS INTERVAL)")
assert retval[0][0] == test_output
@pytest.mark.parametrize(
"iso_8601,output",
[
["P12DT30S", Timedelta(days=12, seconds=30)],
["PT30S", Timedelta(seconds=30)],
[
"P-1Y-2M3DT-4H-5M-6S",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
],
],
)
def test_interval_in_iso_8601(con, iso_8601, output):
con.run("SET intervalstyle TO 'iso_8601'")
retval = con.run(f"SELECT CAST('{iso_8601}' AS INTERVAL)")
assert retval[0][0] == output
@pytest.mark.parametrize(
"postgres_verbose,output",
[
["@ 1 year 2 mons", PGInterval(years=1, months=2)],
[
"@ 3 days 4 hours 5 mins 6 secs",
Timedelta(days=3, hours=4, minutes=5, seconds=6),
],
[
"@ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
],
],
)
def test_interval_in_postgres_verbose(con, postgres_verbose, output):
con.run("SET intervalstyle TO 'postgres_verbose'")
retval = con.run(f"SELECT CAST('{postgres_verbose}' AS INTERVAL)")
assert retval[0][0] == output
@pytest.mark.parametrize(
"sql_standard,output",
[
["1-2", PGInterval(years=1, months=2)],
["3 4:05:06", Timedelta(days=3, hours=4, minutes=5, seconds=6)],
[
"-1-2 +3 -4:05:06",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
],
],
)
def test_interval_in_sql_standard(con, sql_standard, output):
con.run("SET intervalstyle TO 'sql_standard'")
retval = con.run(f"SELECT CAST('{sql_standard}' AS INTERVAL)")
assert retval[0][0] == output
def test_timestamp_out(con):
retval = con.run("SELECT '2001-02-03 04:05:06.17'::timestamp")
assert retval[0][0] == Datetime(2001, 2, 3, 4, 5, 6, 170000)
def test_int4_array_out(con):
retval = con.run(
"SELECT '{1,2,3,4}'::INT[] AS f1, '{{1,2,3},{4,5,6}}'::INT[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT[][][] AS f3"
)
f1, f2, f3 = retval[0]
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int2_array_out(con):
res = con.run(
"SELECT '{1,2,3,4}'::INT2[] AS f1, "
"'{{1,2,3},{4,5,6}}'::INT2[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT2[][][] AS f3"
)
f1, f2, f3 = res[0]
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_int8_array_out(con):
res = con.run(
"SELECT '{1,2,3,4}'::INT8[] AS f1, "
"'{{1,2,3},{4,5,6}}'::INT8[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::INT8[][][] AS f3"
)
f1, f2, f3 = res[0]
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_bool_array_out(con):
res = con.run(
"SELECT '{TRUE,FALSE,FALSE,TRUE}'::BOOL[] AS f1, "
"'{{TRUE,FALSE,TRUE},{FALSE,TRUE,FALSE}}'::BOOL[][] AS f2, "
"'{{{TRUE,FALSE},{FALSE,TRUE}},{{NULL,TRUE},{FALSE,FALSE}}}'"
"::BOOL[][][] AS f3"
)
f1, f2, f3 = res[0]
assert f1 == [True, False, False, True]
assert f2 == [[True, False, True], [False, True, False]]
assert f3 == [[[True, False], [False, True]], [[None, True], [False, False]]]
def test_float4_array_out(con):
res = con.run(
"SELECT '{1,2,3,4}'::FLOAT4[] AS f1, "
"'{{1,2,3},{4,5,6}}'::FLOAT4[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::FLOAT4[][][] AS f3"
)
f1, f2, f3 = res[0]
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
def test_float8_array_out(con):
res = con.run(
"SELECT '{1,2,3,4}'::FLOAT8[] AS f1, "
"'{{1,2,3},{4,5,6}}'::FLOAT8[][] AS f2, "
"'{{{1,2},{3,4}},{{NULL,6},{7,8}}}'::FLOAT8[][][] AS f3"
)
f1, f2, f3 = res[0]
assert f1 == [1, 2, 3, 4]
assert f2 == [[1, 2, 3], [4, 5, 6]]
assert f3 == [[[1, 2], [3, 4]], [[None, 6], [7, 8]]]
# Find the currency string
setlocale(LC_ALL, "")
CURRENCY = localeconv()["currency_symbol"]
if CURRENCY == "":
CURRENCY = "$"
@pytest.mark.parametrize(
"test_input,oid",
[
[[Datetime(2001, 2, 3, 4, 5, 6)], TIMESTAMP_ARRAY], # timestamp[]
[ # timestamptz[]
[Datetime(2001, 2, 3, 4, 5, 6, 0, Timezone.utc)],
TIMESTAMPTZ_ARRAY,
],
[
{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003},
# json
JSON,
],
[{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}, JSONB], # jsonb
[[IPv4Network("192.168.0.0/28")], CIDR_ARRAY], # cidr[]
[[1, 2, 3], SMALLINT_ARRAY], # int2[]
[[[1, 2], [3, 4]], SMALLINT_ARRAY], # int2[] multidimensional
[[1, None, 3], INTEGER_ARRAY], # int4[] with None
[[7000000000, 2, 3], BIGINT_ARRAY], # int8[]
[[1.1, 2.2, 3.3], FLOAT_ARRAY], # float8[]
[[Decimal("1.1"), None, Decimal("3.3")], NUMERIC_ARRAY], # numeric[]
[[f"{CURRENCY}1.10", None, f"{CURRENCY}3.30"], MONEY_ARRAY], # money[]
[[UUID("911460f2-1f43-fea2-3e2c-e01fd5b5069d")], UUID_ARRAY], # uuid[]
[ # json[]
[{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}],
JSON_ARRAY,
],
[ # jsonb[]
[{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}],
JSONB_ARRAY,
],
[Time(4, 5, 6), TIME], # time
[Date(2001, 2, 3), DATE], # date
[Datetime(2001, 2, 3, 4, 5, 6), TIMESTAMP], # timestamp
[Datetime(2001, 2, 3, 4, 5, 6, 0, Timezone.utc), TIMESTAMPTZ], # timestamptz
[True, BOOLEAN], # bool
[None, BOOLEAN], # null
[Decimal("1.1"), NUMERIC], # numeric
[f"{CURRENCY}1.10", MONEY], # money
[f"-{CURRENCY}1.10", MONEY], # money
[50000000000000, BIGINT], # int8
[UUID("911460f2-1f43-fea2-3e2c-e01fd5b5069d"), UUID_TYPE], # uuid
[IPv4Network("192.168.0.0/28"), INET], # inet
[IPv4Address("192.168.0.1"), INET], # inet
[86722, XID], # xid
["infinity", TIMESTAMP], # timestamp
[(2.3, 1), POINT], # point
[{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}, JSON], # json
[{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}, JSONB], # jsonb
],
)
def test_roundtrip_oid(con, test_input, oid):
retval = con.run("SELECT :v", v=test_input, types={"v": oid})
assert retval[0][0] == test_input
assert oid == con.columns[0]["type_oid"]
@pytest.mark.parametrize(
"test_input,typ,req_ver",
[
[[True, False, None], "bool[]", None],
[
[
Range(Date(2023, 6, 1), Date(2023, 6, 6)),
Range(Date(2023, 6, 10), Date(2023, 6, 13)),
],
"datemultirange",
14,
],
[Range(Date(1937, 6, 1), Date(2023, 5, 10)), "daterange", None],
['"a"=>"1"', "hstore", None],
[[IPv4Address("192.168.0.1")], "inet[]", None],
[[Range(3, 7), Range(8, 9)], "int4multirange", 14],
[Range(3, 7), "int4range", None],
[50000000000000, "int8", None],
[[Range(3, 7), Range(8, 9)], "int8multirange", 14],
[Range(3, 7), "int8range", None],
[Range(3, 7), "numrange", None],
[[Range(3, 7), Range(Decimal("9.5"), Decimal("11.4"))], "nummultirange", 14],
[[Date(2021, 3, 1)], "date[]", None],
[[Datetime(2001, 2, 3, 4, 5, 6)], "timestamp[]", None],
[[Datetime(2001, 2, 3, 4, 5, 6, 0, Timezone.utc)], "timestamptz[]", None],
[[Time(4, 5, 6)], "time[]", None],
[
[
Range(Datetime(2001, 2, 3, 4, 5), Datetime(2023, 2, 3, 4, 5)),
Range(Datetime(2024, 6, 1), Datetime(2024, 7, 3)),
],
"tsmultirange",
14,
],
[
Range(Datetime(2001, 2, 3, 4, 5), Datetime(2023, 2, 3, 4, 5)),
"tsrange",
None,
],
[
[
Range(
Datetime(2001, 2, 3, 4, 5, tzinfo=Timezone.utc),
Datetime(2023, 2, 3, 4, 5, tzinfo=Timezone.utc),
),
Range(
Datetime(2024, 6, 1, tzinfo=Timezone.utc),
Datetime(2024, 7, 3, tzinfo=Timezone.utc),
),
],
"tstzmultirange",
14,
],
[
Range(
Datetime(2001, 2, 3, 4, 5, tzinfo=Timezone.utc),
Datetime(2023, 2, 3, 4, 5, tzinfo=Timezone.utc),
),
"tstzrange",
None,
],
[[Timedelta(seconds=30)], "interval[]", None],
[[{"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}], "jsonb[]", None],
[[b"\x00\x01\x02\x03\x02\x01\x00"], "bytea[]", None],
[[Decimal("1.1"), None, Decimal("3.3")], "numeric[]", None],
[[UUID("911460f2-1f43-fea2-3e2c-e01fd5b5069d")], "uuid[]", None],
[
[
"Hello!",
"World!",
"abcdefghijklmnopqrstuvwxyz",
"",
"A bunch of random characters:",
" ~!@#$%^&*()_+`1234567890-=[]\\{}|{;':\",./<>?\t",
"\n",
"\r",
"\t",
"\b",
None,
],
"varchar[]",
None,
],
[[], "varchar[]", None],
[Time(4, 5, 6), "time", None],
[Date(2001, 2, 3), "date", None],
["infinity", "date", None],
[Datetime(2001, 2, 3, 4, 5, 6), "timestamp", None],
[Datetime(2001, 2, 3, 4, 5, 6, 0, Timezone.utc), "timestamptz", None],
[True, "bool", None],
[Decimal("1.1"), "numeric", None],
[1.756e-12, "float8", None],
[float("inf"), "float8", None],
["hello world", "unknown", None],
["hello \u0173 world", "varchar", None],
[50000000000000, "int8", None],
[b"\x00\x01\x02\x03\x02\x01\x00", "bytea", None],
[bytearray(b"\x00\x01\x02\x03\x02\x01\x00"), "bytea", None],
[UUID("911460f2-1f43-fea2-3e2c-e01fd5b5069d"), "uuid", None],
[IPv4Network("192.168.0.0/28"), "inet", None],
[IPv4Address("192.168.0.1"), "inet", None],
],
)
def test_roundtrip_cast(con, pg_version, test_input, typ, req_ver):
if req_ver is None or pg_version >= req_ver:
retval = con.run(f"SELECT CAST(:v AS {typ})", v=test_input)
assert retval[0][0] == test_input
@pytest.mark.parametrize(
"test_input,expected",
[
("SELECT CAST('{a,b,c}' AS TEXT[])", ["a", "b", "c"]),
("SELECT CAST('{a,b,c}' AS CHAR[])", ["a", "b", "c"]),
("SELECT CAST('{a,b,c}' AS VARCHAR[])", ["a", "b", "c"]),
("SELECT CAST('{a,b,c}' AS CSTRING[])", ["a", "b", "c"]),
("SELECT CAST('{a,b,c}' AS NAME[])", ["a", "b", "c"]),
("SELECT CAST('{}' AS text[])", []),
('SELECT CAST(\'{NULL,"NULL",NULL,""}\' AS text[])', [None, "NULL", None, ""]),
],
)
def test_array_in(con, test_input, expected):
result = con.run(test_input)
assert result[0][0] == expected
def test_macaddr(con):
retval = con.run("SELECT macaddr '08002b:010203'")
assert retval[0][0] == "08:00:2b:01:02:03"
def test_tsvector_roundtrip(con):
retval = con.run(
"SELECT cast(:v as tsvector)", v="a fat cat sat on a mat and ate a fat rat"
)
assert retval[0][0] == "'a' 'and' 'ate' 'cat' 'fat' 'mat' 'on' 'rat' 'sat'"
def test_json_access_object(con):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
retval = con.run("SELECT cast(:val as json) -> :name", val=dumps(val), name="name")
assert retval[0][0] == "Apollo 11 Cave"
def test_jsonb_access_object(con):
val = {"name": "Apollo 11 Cave", "zebra": True, "age": 26.003}
retval = con.run("SELECT cast(:val as jsonb) -> :name", val=dumps(val), name="name")
assert retval[0][0] == "Apollo 11 Cave"
def test_json_access_array(con):
val = [-1, -2, -3, -4, -5]
retval = con.run(
"SELECT cast(:v1 as json) -> cast(:v2 as int)", v1=dumps(val), v2=2
)
assert retval[0][0] == -3
def test_jsonb_access_array(con):
val = [-1, -2, -3, -4, -5]
retval = con.run(
"SELECT cast(:v1 as jsonb) -> cast(:v2 as int)", v1=dumps(val), v2=2
)
assert retval[0][0] == -3
def test_jsonb_access_path(con):
j = {"a": [1, 2, 3], "b": [4, 5, 6]}
path = ["a", "2"]
retval = con.run("SELECT cast(:v1 as jsonb) #>> :v2", v1=dumps(j), v2=path)
assert retval[0][0] == str(j[path[0]][int(path[1])])
@pytest.fixture
def duple_type(con):
con.run("CREATE TYPE duple AS (a int, b int);")
yield
con.run("DROP TYPE IF EXISTS duple;")
@pytest.mark.parametrize(
"test_input,expected",
[
((1, 3), "(1,3)"),
((1, None), "(1,)"),
],
)
def test_composite_type(con, duple_type, test_input, expected):
retval = con.run("SELECT CAST(:v AS duple)", v=test_input)
assert retval[0][0] == expected
@pytest.mark.parametrize(
"test_input,expected",
[
([(1, 3)], '{"(1,3)"}'),
],
)
def test_composite_type_array(con, duple_type, test_input, expected):
retval = con.run("SELECT CAST(:v AS duple[])", v=test_input)
assert retval[0][0] == expected
# pg8000-1.30.3/test/test_converters.py
from datetime import (
date as Date,
datetime as DateTime,
time as Time,
timedelta as TimeDelta,
timezone as TimeZone,
)
from decimal import Decimal
from ipaddress import IPv4Address, IPv4Network
import pytest
from pg8000.converters import (
PGInterval,
PY_TYPES,
Range,
array_out,
array_string_escape,
date_in,
datemultirange_in,
identifier,
int4range_in,
interval_in,
literal,
make_param,
null_out,
numeric_in,
numeric_out,
pg_interval_in,
range_out,
string_in,
string_out,
time_in,
timestamp_in,
timestamptz_in,
tsrange_in,
)
from pg8000.native import InterfaceError
@pytest.mark.parametrize(
"value,expected",
[
["2022-03-02", Date(2022, 3, 2)],
["infinity", "infinity"],
["-infinity", "-infinity"],
["20022-03-02", "20022-03-02"],
],
)
def test_date_in(value, expected):
assert date_in(value) == expected
def test_null_out():
assert null_out(None) is None
@pytest.mark.parametrize(
"array,out",
[
[[True, False, None], "{true,false,NULL}"], # bool[]
[[IPv4Address("192.168.0.1")], "{192.168.0.1}"], # inet[]
[[Date(2021, 3, 1)], "{2021-03-01}"], # date[]
[[b"\x00\x01\x02\x03\x02\x01\x00"], '{"\\\\x00010203020100"}'], # bytea[]
[[IPv4Network("192.168.0.0/28")], "{192.168.0.0/28}"], # inet[]
[[1, 2, 3], "{1,2,3}"], # int2[]
[[1, None, 3], "{1,NULL,3}"], # int2[] with None
[[[1, 2], [3, 4]], "{{1,2},{3,4}}"], # int2[] multidimensional
[[70000, 2, 3], "{70000,2,3}"], # int4[]
[[7000000000, 2, 3], "{7000000000,2,3}"], # int8[]
[[0, 7000000000, 2], "{0,7000000000,2}"], # int8[]
[[1.1, 2.2, 3.3], "{1.1,2.2,3.3}"], # float8[]
[["Veni", "vidi", "vici"], "{Veni,vidi,vici}"], # varchar[]
[[("Veni", True, 1)], '{"(Veni,true,1)"}'], # array of composites
],
)
def test_array_out(array, out):
assert array_out(array) == out
@pytest.mark.parametrize(
"value",
[
"1.1",
"-1.1",
"10000",
"20000",
"-1000000000.123456789",
"1.0",
"12.44",
],
)
def test_numeric_out(value):
assert numeric_out(value) == str(value)
@pytest.mark.parametrize(
"value",
[
"1.1",
"-1.1",
"10000",
"20000",
"-1000000000.123456789",
"1.0",
"12.44",
],
)
def test_numeric_in(value):
assert numeric_in(value) == Decimal(value)
@pytest.mark.parametrize(
"data,expected",
[
("[6,3]", Range(6, 3, bounds="[]")),
],
)
def test_int4range_in(data, expected):
assert int4range_in(data) == expected
@pytest.mark.parametrize(
"v,expected",
[
(Range(6, 3, bounds="[]"), "[6,3]"),
],
)
def test_range_out(v, expected):
assert range_out(v) == expected
@pytest.mark.parametrize(
"value",
[
"hello \u0173 world",
],
)
def test_string_out(value):
assert string_out(value) == value
@pytest.mark.parametrize(
"value",
[
"hello \u0173 world",
],
)
def test_string_in(value):
assert string_in(value) == value
def test_time_in():
actual = time_in("12:57:18.000396")
assert actual == Time(12, 57, 18, 396)
@pytest.mark.parametrize(
"value,expected",
[
("1 year", PGInterval(years=1)),
("2 hours", PGInterval(hours=2)),
],
)
def test_pg_interval_in(value, expected):
assert pg_interval_in(value) == expected
@pytest.mark.parametrize(
"value,expected",
[
("2 hours", TimeDelta(hours=2)),
("00:00:30", TimeDelta(seconds=30)),
],
)
def test_interval_in(value, expected):
assert interval_in(value) == expected
@pytest.mark.parametrize(
"value,expected",
[
("a", "a"),
('"', '"\\""'),
("\r", '"\r"'),
("\n", '"\n"'),
("\t", '"\t"'),
],
)
def test_array_string_escape(value, expected):
res = array_string_escape(value)
assert res == expected
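The escaping rule exercised above can be sketched as follows (a hypothetical re-implementation, not pg8000's actual code): elements containing characters special to PostgreSQL array syntax, or whitespace, are double-quoted, with embedded backslashes and quotes backslash-escaped.

```python
def escape_element(s):
    """Quote an array element if it contains characters special to array syntax."""
    if s == "" or any(c in s for c in ' {},"\\\r\n\t'):
        return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'
    return s

assert escape_element("a") == "a"
assert escape_element('"') == '"\\""'
assert escape_element("\t") == '"\t"'
```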
@pytest.mark.parametrize(
"value,expected",
[
[
"2022-10-08 15:01:39+01:30",
DateTime(
2022, 10, 8, 15, 1, 39, tzinfo=TimeZone(TimeDelta(hours=1, minutes=30))
),
],
[
"2022-10-08 15:01:39-01:30",
DateTime(
2022,
10,
8,
15,
1,
39,
tzinfo=TimeZone(TimeDelta(hours=-1, minutes=-30)),
),
],
[
"2022-10-08 15:01:39+02",
DateTime(2022, 10, 8, 15, 1, 39, tzinfo=TimeZone(TimeDelta(hours=2))),
],
[
"2022-10-08 15:01:39-02",
DateTime(2022, 10, 8, 15, 1, 39, tzinfo=TimeZone(TimeDelta(hours=-2))),
],
[
"2022-10-08 15:01:39.597026+01:30",
DateTime(
2022,
10,
8,
15,
1,
39,
597026,
tzinfo=TimeZone(TimeDelta(hours=1, minutes=30)),
),
],
[
"2022-10-08 15:01:39.597026-01:30",
DateTime(
2022,
10,
8,
15,
1,
39,
597026,
tzinfo=TimeZone(TimeDelta(hours=-1, minutes=-30)),
),
],
[
"2022-10-08 15:01:39.597026+02",
DateTime(
2022, 10, 8, 15, 1, 39, 597026, tzinfo=TimeZone(TimeDelta(hours=2))
),
],
[
"2022-10-08 15:01:39.597026-02",
DateTime(
2022, 10, 8, 15, 1, 39, 597026, tzinfo=TimeZone(TimeDelta(hours=-2))
),
],
[
"20022-10-08 15:01:39.597026-02",
"20022-10-08 15:01:39.597026-02",
],
],
)
def test_timestamptz_in(value, expected):
assert timestamptz_in(value) == expected
@pytest.mark.parametrize(
"value,expected",
[
[
"20022-10-08 15:01:39.597026",
"20022-10-08 15:01:39.597026",
],
],
)
def test_timestamp_in(value, expected):
assert timestamp_in(value) == expected
@pytest.mark.parametrize(
"value,expected",
[
[
'["2001-02-03 04:05:00","2023-02-03 04:05:00")',
Range(DateTime(2001, 2, 3, 4, 5), DateTime(2023, 2, 3, 4, 5)),
],
],
)
def test_tsrange_in(value, expected):
assert tsrange_in(value) == expected
@pytest.mark.parametrize(
"value,expected",
[
[
"{[2023-06-01,2023-06-06),[2023-06-10,2023-06-13)}",
[
Range(Date(2023, 6, 1), Date(2023, 6, 6)),
Range(Date(2023, 6, 10), Date(2023, 6, 13)),
],
],
],
)
def test_datemultirange_in(value, expected):
assert datemultirange_in(value) == expected
def test_make_param():
class BClass(bytearray):
pass
val = BClass(b"\x00\x01\x02\x03\x02\x01\x00")
assert make_param(PY_TYPES, val) == "\\x00010203020100"
def test_identifier_int():
with pytest.raises(InterfaceError, match="identifier must be a str"):
identifier(9)
def test_identifier_empty():
with pytest.raises(
InterfaceError, match="identifier must be > 0 characters in length"
):
identifier("")
def test_identifier_quoted_null():
with pytest.raises(
InterfaceError, match="identifier cannot contain the code zero character"
):
identifier("tabl\u0000e")
@pytest.mark.parametrize(
"value,expected",
[
("top_secret", "top_secret"),
(" Table", '" Table"'),
("A Table", '"A Table"'),
('A " Table', '"A "" Table"'),
("Table$", "Table$"),
],
)
def test_identifier_success(value, expected):
assert identifier(value) == expected
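The quoting behaviour those cases exercise can be sketched like this (a hypothetical re-implementation for illustration: names made only of letters, digits, `_` and `$` pass through unchanged, anything else is double-quoted with embedded double quotes doubled):

```python
import re

def quote_ident(name):
    """Quote an SQL identifier only when it contains characters needing quoting."""
    if re.fullmatch(r"[A-Za-z_][A-Za-z0-9_$]*", name):
        return name
    return '"' + name.replace('"', '""') + '"'

assert quote_ident("top_secret") == "top_secret"
assert quote_ident("Table$") == "Table$"
assert quote_ident("A Table") == '"A Table"'
assert quote_ident('A " Table') == '"A "" Table"'
```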
@pytest.mark.parametrize(
"value,expected",
[("top_secret", "'top_secret'"), (["cove"], "'{cove}'")],
)
def test_literal(value, expected):
assert literal(value) == expected
def test_literal_quote():
assert literal("bob's") == "'bob''s'"
def test_literal_int():
assert literal(7) == "7"
def test_literal_float():
assert literal(7.9) == "7.9"
def test_literal_decimal():
assert literal(Decimal("0.1")) == "0.1"
def test_literal_bytes():
assert literal(b"\x03") == "X'03'"
def test_literal_boolean():
assert literal(True) == "TRUE"
def test_literal_None():
assert literal(None) == "NULL"
def test_literal_Time():
assert literal(Time(22, 13, 2)) == "'22:13:02'"
def test_literal_Date():
assert literal(Date(2063, 11, 2)) == "'2063-11-02'"
def test_literal_TimeDelta():
assert literal(TimeDelta(22, 13, 2)) == "'22 days 13 seconds 2 microseconds'"
def test_literal_Datetime():
assert literal(DateTime(2063, 3, 31, 22, 13, 2)) == "'2063-03-31T22:13:02'"
def test_literal_Trojan():
class Trojan:
def __str__(self):
return "A Gift"
assert literal(Trojan()) == "'A Gift'"
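The quote-doubling seen in `test_literal_quote` is the standard SQL escape for string literals; a minimal standalone sketch:

```python
def quote_string_literal(s):
    """Wrap a string in single quotes, doubling any embedded single quotes."""
    return "'" + s.replace("'", "''") + "'"

assert quote_string_literal("bob's") == "'bob''s'"
assert quote_string_literal("plain") == "'plain'"
```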
# pg8000-1.30.3/test/test_readme.py
import doctest
from pathlib import Path
def test_readme():
failure_count, _ = doctest.testfile(
str(Path("..") / "README.rst"), verbose=False, optionflags=doctest.ELLIPSIS
)
assert failure_count == 0
# pg8000-1.30.3/test/test_types.py
import pytest
from pg8000.types import PGInterval, Range
def test_PGInterval_init():
i = PGInterval(days=1)
assert i.months is None
assert i.days == 1
assert i.microseconds is None
def test_PGInterval_repr():
v = PGInterval(microseconds=123456789, days=2, months=24)
assert repr(v) == "<PGInterval 24 months 2 days 123456789 microseconds>"
def test_PGInterval_str():
v = PGInterval(microseconds=123456789, days=2, months=24, millennia=2)
assert str(v) == "2 millennia 24 months 2 days 123456789 microseconds"
@pytest.mark.parametrize(
"value,expected",
[
("P1Y2M", PGInterval(years=1, months=2)),
("P12DT30S", PGInterval(days=12, seconds=30)),
(
"P-1Y-2M3DT-4H-5M-6S",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
),
("PT1M32.32S", PGInterval(minutes=1, seconds=32.32)),
],
)
def test_PGInterval_from_str_iso_8601(value, expected):
interval = PGInterval.from_str_iso_8601(value)
assert interval == expected
@pytest.mark.parametrize(
"value,expected",
[
("@ 1 year 2 mons", PGInterval(years=1, months=2)),
(
"@ 3 days 4 hours 5 mins 6 secs",
PGInterval(days=3, hours=4, minutes=5, seconds=6),
),
(
"@ 1 year 2 mons -3 days 4 hours 5 mins 6 secs ago",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
),
(
"@ 1 millennium -2 mons",
PGInterval(millennia=1, months=-2),
),
],
)
def test_PGInterval_from_str_postgres(value, expected):
interval = PGInterval.from_str_postgres(value)
assert interval == expected
@pytest.mark.parametrize(
"value,expected",
[
["1-2", PGInterval(years=1, months=2)],
["3 4:05:06", PGInterval(days=3, hours=4, minutes=5, seconds=6)],
[
"-1-2 +3 -4:05:06",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
],
["8 4:00:32.32", PGInterval(days=8, hours=4, minutes=0, seconds=32.32)],
],
)
def test_PGInterval_from_str_sql_standard(value, expected):
interval = PGInterval.from_str_sql_standard(value)
assert interval == expected
@pytest.mark.parametrize(
"value,expected",
[
("P12DT30S", PGInterval(days=12, seconds=30)),
("@ 1 year 2 mons", PGInterval(years=1, months=2)),
("1-2", PGInterval(years=1, months=2)),
("3 4:05:06", PGInterval(days=3, hours=4, minutes=5, seconds=6)),
(
"-1-2 +3 -4:05:06",
PGInterval(years=-1, months=-2, days=3, hours=-4, minutes=-5, seconds=-6),
),
("00:00:30", PGInterval(seconds=30)),
],
)
def test_PGInterval_from_str(value, expected):
interval = PGInterval.from_str(value)
assert interval == expected
def test_Range_equals():
pg_range_a = Range("[", 1, 2, ")")
pg_range_b = Range("[", 1, 2, ")")
assert pg_range_a == pg_range_b
def test_Range_str():
v = Range(5, 6)
assert str(v) == "[5,6)"