peewee-2.10.2/.gitignore

*.pyc
build
prof/
playhouse/*.c
playhouse/*.h
playhouse/*.so
playhouse/tests/peewee_test.db
.idea/
MANIFEST
peewee_test.db
closure.so

peewee-2.10.2/.travis.yml

language: python
python:
  - "2.6"
  - "2.7"
  - "3.4"
  - "3.5"
  - "3.6"
dist: trusty
env:
  - PEEWEE_TEST_BACKEND=sqlite
  - PEEWEE_TEST_BACKEND=postgresql
  - PEEWEE_TEST_BACKEND=mysql
addons:
  postgresql: "9.6"
services:
  - postgresql
  - mysql
install: "pip install psycopg2 Cython pymysql"
before_script:
  - python setup.py build_ext -i
  - psql -c 'drop database if exists peewee_test;' -U postgres
  - psql -c 'create database peewee_test;' -U postgres
  - psql peewee_test -c 'create extension hstore;' -U postgres
  - mysql -e 'drop database if exists peewee_test;'
  - mysql -e 'create database peewee_test;'
  - mysql -e 'grant all on *.* to travis@localhost;'
script: "python runtests.py -a"

peewee-2.10.2/CHANGELOG.md

# Changelog

Tracking changes in peewee between versions. For a complete view of all the releases, visit GitHub:

https://github.com/coleifer/peewee/releases

## 2.10.0

The main change in this release is the removal of the `AESEncryptedField`, which was included as part of the `playhouse.fields` extension. It was brought to my attention that there was some serious potential for security vulnerabilities. Rather than give users a false sense of security, I've decided the best course of action is to remove the field.

* Remove the `playhouse.fields.AESEncryptedField` over security concerns described in ticket #1264.
* Correctly resolve explicit table dependencies when creating tables, refs #1076. Thanks @maaaks.
* Implement not equals comparison for `CompositeKey`.

[View commits](https://github.com/coleifer/peewee/compare/2.9.2...2.10.0)

## 2.9.2

* Fixed significant bug in the `savepoint` commit/rollback implementation. Many thanks to @Syeberman for raising the issue. See #1225 for details.
* Added support for postgresql `INTERVAL` columns. The new `IntervalField` in the `postgres_ext` module is suitable for storing `datetime.timedelta`.
* Fixed bug where missing `sqlite3` library was causing other, unrelated libraries to throw errors when attempting to import.
* Added a `case_sensitive` parameter to the SQLite `REGEXP` function implementation. The default is `False`, to preserve backwards-compatibility.
* Fixed bug that caused tables not to be created when using the `dataset` extension. See #1213 for details.
* Modified `drop_table` to raise an exception if the user attempts to drop tables with `CASCADE` when the database backend does not support it.
* Fixed Python3 issue in the `AESEncryptedField`.
* Modified the behavior of string-typed fields to treat the addition operator as concatenation. See #1241 for details.

[View commits](https://github.com/coleifer/peewee/compare/2.9.1...2.9.2)

## 2.9.1

* Fixed #1218, where the use of `playhouse.flask_utils` was requiring the `sqlite3` module to be installed.
* Fixed #1219 regarding the SQL generation for composite key sub-selects, joins, etc.

[View commits](https://github.com/coleifer/peewee/compare/2.9.0...2.9.1)

## 2.9.0

In this release there are two notable changes:

* The ``Model.create_or_get()`` method was removed.
  See the [documentation](http://docs.peewee-orm.com/en/latest/peewee/querying.html#create-or-get) for an example of the code one would write to replicate this functionality.
* The SQLite closure table extension gained support for many-to-many relationships thanks to a nice PR by @necoro. [Docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#ClosureTable).

[View commits](https://github.com/coleifer/peewee/compare/2.8.8...2.9.0)

## 2.8.8

This release contains a single important bugfix for a regression in specifying the type of lock to use when opening a SQLite transaction.

[View commits](https://github.com/coleifer/peewee/compare/2.8.7...2.8.8)

## 2.8.7

This release contains numerous cleanups.

### Bugs fixed

* #1087 - Fixed a misuse of the iteration protocol in the `sqliteq` extension.
* Ensure that driver exceptions are wrapped when calling `commit` and `rollback`.
* #1096 - Fix representation of recursive foreign key relations when using the `model_to_dict` helper.
* #1126 - Allow `pskel` to be installed into `bin` directory.
* #1105 - Added a `Tuple()` type to Peewee to enable expressing arbitrary tuple expressions in SQL.
* #1133 - Fixed bug in the conversion of objects to `Decimal` instances in the `DecimalField`.
* Fixed an issue renaming a unique foreign key in MySQL.
* Remove the join predicate from CROSS JOINs.
* #1148 - Ensure indexes are created when a column is added using a schema migration.
* #1165 - Fix bug where the primary key was being overwritten in queries using the closure-table extension.

### New stuff

* Added properties to the `SqliteExtDatabase` to expose common `PRAGMA` settings. For example, to set the cache size to 4MB, `db.cache_size = 1000`.
* Clarified documentation on calling `commit()` or `rollback()` from within the scope of an atomic block. [See docs](http://docs.peewee-orm.com/en/latest/peewee/transactions.html#transactions).
* Allow table creation dependencies to be specified using new `depends_on` meta option. Refs #1076.
* Allow specification of the lock type used in SQLite transactions. Previously this behavior was only present in `playhouse.sqlite_ext.SqliteExtDatabase`, but it now exists in `peewee.SqliteDatabase`.
* Added support for `CROSS JOIN` expressions in select queries.
* Docs on how to implement [optimistic locking](http://docs.peewee-orm.com/en/latest/peewee/hacks.html#optimistic-locking).
* Documented optional dependencies.
* Generic support for specifying select queries as locking the selected rows `FOR X`, e.g. `FOR UPDATE` or `FOR SHARE`.
* Support for specifying the frame-of-reference in window queries, e.g. specifying `UNBOUNDED PRECEDING`, etc. [See docs](http://docs.peewee-orm.com/en/latest/peewee/api.html#Window).

### Backwards-incompatible changes

* As of 9e76c99, an `OperationalError` is raised if the user calls `connect()` on an already-open Database object. Previously, the existing connection would remain open and a new connection would overwrite it, making it impossible to close the previous connection. If you find this is causing breakage in your application, you can switch the `connect()` call to `get_conn()`, which will only open a connection if necessary. The error **is** indicative of a real issue, though, so audit your code for places where you may be opening a connection without closing it (module-scope operations, e.g.). A short sketch of the migration follows the 2.8.6 note below.

[View commits](https://github.com/coleifer/peewee/compare/2.8.5...2.8.7)

## 2.8.6

This release was later removed due to containing a bug. See notes on 2.8.7.
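
As promised above, a minimal sketch of migrating from the old `connect()` behavior to `get_conn()` (the database name here is hypothetical):

    from peewee import OperationalError, SqliteDatabase

    db = SqliteDatabase('app.db')  # hypothetical database

    db.connect()
    try:
        db.connect()  # calling connect() on an already-open database...
    except OperationalError:
        pass  # ...now raises instead of silently leaking the old connection

    # get_conn() only opens a connection when one is not already open, so it
    # is a drop-in replacement where re-connecting was intentional.
    conn = db.get_conn()
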
## 2.8.5

This release contains two small bugfixes.

* #1081 - fixed the use of parentheses in compound queries on MySQL.
* Fixed some grossness in a helper function used by `prefetch` that was clearing out the `GROUP BY` and `HAVING` clauses of sub-queries.

[View commits](https://github.com/coleifer/peewee/compare/2.8.4...2.8.5)

## 2.8.4

This release contains bugfixes as well as a new playhouse extension module for working with [SQLite in multi-threaded / concurrent environments](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq). The new module is called `playhouse.sqliteq` and it works by serializing queries using a dedicated worker thread (or greenlet). The performance is quite good; hopefully this proves useful to someone besides myself! You can learn more by reading the [sqliteq documentation](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqliteq).

As a miscellaneous note, I did some major refactoring and cleanup in `ExtQueryResultsWrapper` and its corollary in the `speedups` module. The code is much easier to read than before.

### Bugs fixed

* #1061 - @akrs patched a bug in `TimestampField` which affected the accuracy of sub-second timestamps (for resolution > 1).
* #1071, small python 3 fix.
* #1072, allow `DeferredRelation` to be used multiple times if there are multiple references to a given deferred model.
* #1073, fixed regression in the speedups module that caused SQL functions to always coerce return values, regardless of the `coerce` flag.
* #1083, another Python 3 issue - this time regarding the use of `exc.message`.

[View commits](https://github.com/coleifer/peewee/compare/2.8.3...2.8.4)

## 2.8.3

This release contains bugfixes and a small backwards-incompatible change to the way foreign key `ObjectIdDescriptor` is named (issue #1050).

### Bugs fixed and general changes

* #1028 - allow the `ensure_join` method to accept `on` and `join_type` parameters. Thanks @paulbooth.
* #1032 - fix bug related to coercing model instances to database parameters when the model's primary key is a foreign key.
* #1035 - fix bug introduced in 2.8.2, where I had added some logic to try and restrict the base `Model` class from being treated as a "real" Model.
* #1039 - update documentation to clarify that lists *or tuples* are acceptable values when specifying SQLite `PRAGMA` statements.
* #1041 - PyPy user was unable to install Peewee. (Who in their right mind would *ever* use PyPy?!) Bug was fixed by removing the pre-generated C files from the distribution.
* #1043 - fix bug where the `speedups` C extension was not calling the correct model initialization method, resulting in model instances returned as results of a query having their `dirty` flag incorrectly set.
* #1048 - similar to #1043, add logic to ensure that fields with default values are considered dirty when instantiating the model.
* #1049 - update URL to [APSW](https://rogerbinns.github.io/apsw).
* Fixed unreported bug regarding `TimestampField` with zero values reporting the incorrect datetime.

### New stuff

* [djpeewee](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#djpeewee) extension module now works with Django 1.9.
* [TimestampField](http://docs.peewee-orm.com/en/latest/peewee/api.html#TimestampField) is now an officially documented field.
* #1050 - use the `db_column` of a `ForeignKeyField` for the name of the `ObjectIdDescriptor`, except when the `db_column` and field `name` are the same, in which case the ID descriptor will be named `<field_name>_id`.

[View commits](https://github.com/coleifer/peewee/compare/2.8.2...2.8.3)

## 2.8.2

This release contains mostly bug-fixes, clean-ups, and API enhancements.

### Bugs fixed and general cleanups

* #820 - fixed some bugs related to the Cython extension build process.
* #858 - allow blanks and perform type conversion when using the `db_url` extension.
* #922 - ensure that `peewee.OperationalError` is raised consistently when using the `RetryOperationalError` mixin.
* #929 - ensure that `pwiz` will import the appropriate extensions when vendor-specific fields are used.
* #930 - ensure that `pwiz`-generated models containing `UnknownField` placeholders do not blow up when you instantiate them.
* #932 - correctly limit the length of automatically-generated index names.
* #933 - fixed bug where `BlobField` could not be used if its parent model pointed to an uninitialized database `Proxy`.
* #935 - greater consistency with the conversion to Python data-types when performing aggregations, annotations, or calling `scalar()`.
* #939 - ensure the correct data-types are used when initializing a connection pool.
* #947 - fix bug where `Signal` subclasses were not returning rows affected on save.
* #951 - better warnings regarding C extension compilation, thanks @dhaase-de.
* #968 - fix bug where table names starting with numbers generated invalid table names when using `pwiz`.
* #971 - fix bug where parameter was not being used. Thanks @jberkel.
* #974 - fixed the way `SqliteExtDatabase` handles the automatic `rowid` (and `docid`) columns. Thanks for alerting me to the issue and providing a failing test case @jberkel.
* #976 - fix obscure bug relating to cloning foreign key fields twice.
* #981 - allow `set` instances to be used on the right-hand side of `IN` exprs.
* #983 - fix behavior where the default `id` primary key was inherited regardless. When users would inadvertently include it in their queries, it would use the table alias of its parent class.
* #992 - add support for `db_column` in `djpeewee`.
* #995 - fix the behavior of `truncate_date` with Postgresql. Thanks @Zverik.
* #1011 - correctly handle the `bytes` wrapper used by `PasswordField`, coercing it to `bytes`.
* #1012 - when selecting and joining on multiple models, do not create model instances when the foreign key is NULL.
* #1017 - do not coerce the return value of function calls to `COUNT` or `SUM`, since the python driver will already give us the right Python value.
* #1018 - use global state to resolve `DeferredRelations`, allowing for a nicer API. Thanks @brenguyen711.
* #1022 - attempt to avoid creating invalid Python when using `pwiz` with MySQL database columns containing spaces. Yes, fucking spaces.
* #1024 - fix bug in SQLite migrator which had a naive approach to fixing indexes.
* #1025 - explicitly check for `None` when determining if the database has been set on `ModelOptions`. Thanks @joeyespo.

### New stuff

* Added `TimestampField` for storing datetimes using integers. Sub-second resolution is possible by using a higher `resolution` (a power of 10).
* Added `Database.drop_index()` method.
* Added a `max_depth` parameter to the `model_to_dict` function in the `playhouse.shortcuts` extension module.
* `SelectQuery.first()` function accepts a parameter `n` which applies a limit to the query and returns the first row.
  Previously the limit was not applied out of consideration for subsequent iterations, but I believe usage has shown that a limit is more desirable than reserving the option to iterate without a second query. The old behavior is preserved in the new `SelectQuery.peek()` method.
* `group_by()`, `order_by()`, `window()` now accept a keyword argument `extend`, which, when set to `True`, will append to the existing values rather than overwriting them.
* Query results support negative indexing.
* C sources are included now as part of the package. I *think* they should be able to compile for python 2 or 3, on linux or windows...but not positive.
* #895 - added the ability to query using the `_id` attribute.
* #948 - added documentation about SQLite limits and how they affect `insert_many`.
* #1009 - allow `DATABASE_URL` as a recognized parameter to the Flask config.

[View commits](https://github.com/coleifer/peewee/compare/2.8.1...2.8.2)

## 2.8.1

This release is long overdue so apologies if you've been waiting on it and running off master. There are numerous bugfixes contained in this release, so I'll list those first this time.

### Bugs fixed

* #821 - issue warning if Cython is old.
* #822 - better handling of MySQL connections.
* #313 - support equality/inequality with generic foreign key queries, and ensure `get_or_create` works with GFKs.
* #834 - fixed Python3 incompatibilities in the `PasswordField`, thanks @mosquito.
* #836 - fix handling of `last_insert_id()` when using `APSWDatabase`.
* #845 - add connection hooks to `APSWDatabase`.
* #852 - check SQLite library version to avoid calls to missing APIs.
* #857 - allow database definition to be deferred when using the connection pool.
* #878 - formerly `.limit(0)` had no effect. Now adds `LIMIT 0`.
* #879 - implement a `__hash__` method for `Model`.
* #886 - fix `count()` for compound select queries.
* #895 - allow writing to the `foreign_key_id` descriptor to set the foreign key value.
* #893 - fix boolean logic bug in `model_to_dict()`.
* #904 - fix side-effect in `clean_prefetch_query`, thanks to @p.kamayev.
* #907 - package includes `pskel` now.
* #852 - fix sqlite version check in BerkeleyDB backend.
* #919 - add runtime check for `sqlite3` library to match MySQL and Postgres. Thanks @M157q.

### New features

* Added a number of [SQLite user-defined functions and aggregates](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqlite-udf).
* Use the DB-API2 `Binary` type for `BlobField`.
* Implemented the lucene scoring algorithm in the `sqlite_ext` Cython library.
* #825 - allow a custom base class for `ModelOptions`, providing an extension point for advanced use-cases.
* #830 - added `SmallIntegerField` type.
* #838 - allow using a custom descriptor class with `ManyToManyField`.
* #855 - merged change from @lez which included docs on using peewee with Pyramid.
* #858 - allow arguments to be passed on query-string when using the `db_url` module. Thanks @RealSalmon.
* #862 - add support for `truncate table`, thanks @dev-zero for the sample code.
* Allow the `related_name` model `Meta` option to be a callable that accepts the foreign key field instance.

[View commits](https://github.com/coleifer/peewee/compare/2.8.0...2.8.1)

## 2.8.0

This release includes a couple new field types and greatly improved C extension support for both speedups and SQLite enhancements. Also includes some work, suggested by @foxx, to remove some places where `Proxy` was used in favor of more obvious APIs. A short sketch of the replacement `DeferredRelation` API follows below.
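
As an illustration of the `Proxy`-to-`DeferredRelation` change (detailed under the backwards-incompatible notes in this release), here is a minimal sketch of a circular foreign-key pair; the models and database are hypothetical:

    from peewee import (CharField, DeferredRelation, ForeignKeyField, Model,
                        SqliteDatabase)

    db = SqliteDatabase(':memory:')

    # Placeholder for the not-yet-defined Profile model.
    DeferredProfile = DeferredRelation()

    class User(Model):
        username = CharField()
        # References Profile before Profile has been defined.
        favorite_profile = ForeignKeyField(DeferredProfile, null=True)

        class Meta:
            database = db

    class Profile(Model):
        user = ForeignKeyField(User)

        class Meta:
            database = db

    # Resolve the deferred reference now that Profile exists.
    DeferredProfile.set_model(Profile)
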
### New features

* [travis-ci builds](http://travis-ci.org/coleifer/peewee/builds/) now include MySQL and Python 3.5. Dropped support for Python 3.2 and 3.3. Builds will also run the C-extension code.
* C extension speedups now enabled by default, includes faster implementations for `dict` and `tuple` `QueryResultWrapper` classes, faster date formatting, and faster field and model sorting.
* C implementations of SQLite functions are now enabled by default. The SQLite extension is now compatible with APSW and can be used in standalone form directly from Python. See [SqliteExtDatabase](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#SqliteExtDatabase) for more details.
* SQLite C extension now supports `murmurhash2`.
* `UUIDField` is now supported for SQLite and MySQL, using `text` and `varchar` respectively, thanks @foxx!
* Added `BinaryField`, thanks again, @foxx!
* Added `PickledField` to `playhouse.fields`.
* `ManyToManyField` now accepts a list of primary keys when adding or removing values from the through relationship.
* Added support for SQLite [table-valued functions](http://sqlite.org/vtab.html#tabfunc2) using the [sqlite-vtfunc library](https://github.com/coleifer/sqlite-vtfunc).
* Significantly simplified the build process for compiling the C extensions.

### Backwards-incompatible changes

* Instead of using a `Proxy` for defining circular foreign key relationships, you now need to use [DeferredRelation](http://docs.peewee-orm.com/en/latest/peewee/api.html#DeferredRelation).
* Instead of using a `Proxy` for defining many-to-many through tables, you now need to use [DeferredThroughModel](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#DeferredThroughModel).
* SQLite Virtual Models must now use `Meta.extension_module` and `Meta.extension_options` to declare the extension and any options. For more details, see [VirtualModel](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#VirtualModel).
* MySQL database will now issue `COMMIT` statements for `SELECT` queries. This was not necessary, but added due to an influx of confused users creating GitHub tickets. Hint: learn to use your damn database, it's not magic!

### Bugs fixed

Some of these may have been included in a previous release, but since I did not list them I'm listing them here.

* #766, fixed bug with PasswordField and Python3. Fuck Python 3.
* #768, fixed SortedFieldList and `remove_field()`. Thanks @klen!
* #771, clarified docs for APSW.
* #773, added docs for request hooks in Pyramid (who uses Pyramid, by the way?).
* #774, `prefetch()` only loads first `ForeignKeyField` for a given relation.
* #782, fixed typo in docs.
* #791, foreign keys were not correctly handling coercing to the appropriate python value.
* #792, cleaned up some CSV utils code.
* #798, cleaned up iteration protocol in QueryResultWrappers.
* #806, not really a bug, but MySQL users were clowning around and needed help.

[View commits](https://github.com/coleifer/peewee/compare/2.7.4...2.8.0)

## 2.7.4

This is another small release which adds code to automatically build the SQLite C extension if `libsqlite` is available. The release also includes:

* Support for `UUIDField` with SQLite.
* Support for registering additional database classes with the `db_url` module via `register_database` (see the sketch below).
* `prefetch()` supports fetching multiple foreign-keys to the same model class.
* Added method to validate FTS5 search queries.
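
A minimal sketch of `register_database` (the subclass and URL scheme here are hypothetical):

    from playhouse.db_url import connect, register_database
    from playhouse.sqlite_ext import SqliteExtDatabase

    class AppSqliteDatabase(SqliteExtDatabase):
        """Hypothetical subclass with app-specific defaults."""

    # Teach db_url about a custom URL scheme...
    register_database(AppSqliteDatabase, 'app-sqlite')

    # ...so connection strings can select the custom class.
    db = connect('app-sqlite:///app.db')
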
[View commits](https://github.com/coleifer/peewee/compare/2.7.3...2.7.4)

## 2.7.3

Small release which includes some changes to the BM25 sorting algorithm and the addition of a [`JSONField`](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#JSONField) for use with the new [JSON1 extension](http://sqlite.org/json1.html).

## 2.7.2

People were having trouble building the sqlite extension. I figure enough people are having trouble that I made it a separate command: `python setup.py build_sqlite_ext`.

## 2.7.1

Jacked up the setup.py.

## 2.7.0

New APIs, features, and performance improvements.

### Notable changes and new features

* [`PasswordField`](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#PasswordField) that uses the `bcrypt` module.
* Added a new Model [`Meta.only_save_dirty`](http://docs.peewee-orm.com/en/latest/peewee/models.html#model-options-and-table-metadata) flag so that, when enabled, calls to `save()` will by default only save the fields that have been modified.
* Added support for [`upsert()`](http://docs.peewee-orm.com/en/latest/peewee/api.html#InsertQuery.upsert) on MySQL (in addition to SQLite).
* Implemented SQLite ranking functions (``rank`` and ``bm25``) in Cython, and changed both the Cython and Python APIs to accept weight values for every column in the search index. This more closely aligns with the APIs provided by FTS5. In fact, made the APIs for FTS4 and FTS5 result ranking compatible.
* Major changes to the `sqlite_ext` module. Function callbacks implemented in Python were reimplemented in Cython (e.g. date manipulation and regex processing) and will be used if Cython is available when Peewee is installed.
* Support for the experimental new [FTS5](http://sqlite.org/fts5.html) SQLite search extension.
* Added `SearchField` for use with the SQLite FTS extensions.
* Added `RowIDField` for working with the special ``rowid`` column in SQLite.
* Added a model class validation hook to allow model subclasses to perform any validation after class construction. This is currently used to ensure that ``FTS5Model`` subclasses do not violate any rules required by the FTS5 virtual table.

### Bugs fixed

* **#751**, fixed some very broken behavior in the MySQL migrator code. Added more tests.
* **#718**, added a `RetryOperationalError` mixin that will try automatically reconnecting after a failed query. There was a bug in the previous error handler implementation that made this impossible, which is also fixed.

#### Small bugs

* #713, fix column name regular expression in SQLite migrator.
* #724, fixed `NULL` handling with the Postgresql `JSONField`.
* #725, added `__module__` attribute to `DoesNotExist` classes.
* #727, removed the `commit_select` logic for MySQL databases.
* #730, added documentation for `Meta.order_by` API.
* #745, added `cast()` method for casting JSON field values.
* #748, added docs and method override to indicate that SQLite does not support adding foreign key constraints after table creation.
* Check whether pysqlite or libsqlite were compiled with BerkeleyDB support when using the `BerkeleyDatabase`.
* Clean up the options passed to SQLite virtual tables on creation.

### Small features

* #700, use sensible default if field's declared data-type is not present in the field type map.
* #707, allow model to be specified explicitly in `prefetch()`.
* #734, automatic testing against python 3.5.
* #753, added support for `upsert()` with MySQL via the `REPLACE INTO ...` statement.
* #757, `pwiz`, the schema introspection tool, will now generate multi-column index declarations.
* #756, `pwiz` will capture passwords using the `getpass()` function rather than via the command-line.
* Removed `Database.sql_error_handler()`, replaced with the `RetryOperationalError` mixin class.
* Documentation for `Meta.order_by` and `Meta.primary_key`.
* Better documentation around column and table constraints.
* Improved performance for some methods that are called frequently.
* Added `coerce` parameter to `BareField` and added documentation.

[View commits](https://github.com/coleifer/peewee/compare/2.6.4...2.7.0)

## 2.6.4

Updating so some of the new APIs are available on pypi.

### Bugs fixed

* #646, fixed a bug with the Cython speedups not being included in package.
* #654, documented how to create models with no primary key.
* #659, allow bare `INSERT` statements.
* #674, regarding foreign key / one-to-one relationships.
* #676, allow `ArrayField` to accept tuples in addition to lists.
* #679, fix regarding unsaved relations.
* #682, refactored QueryResultWrapper to allow multiple independent iterations over the same underlying result cache.
* #692, fix bug with multiple joins to same table + eager loading.
* #695, fix bug when connection fails while using an execution context.
* #698, use correct column names with non-standard django foreign keys.
* #706, return `datetime.time` instead of `timedelta` for MySQL time fields.
* #712, fixed SQLite migrator regular expressions. Thanks @sroebert.

### New features

* #647, #649, #650, added support for `RETURNING` clauses. Update, Insert and Delete queries can now be called with `RETURNING` to retrieve the rows that were affected. [See docs](http://docs.peewee-orm.com/en/latest/peewee/querying.html#returning-clause).
* #685, added web request hook docs.
* #691, allowed arbitrary model attributes and methods to be serialized by `model_to_dict()`. [Docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#model_to_dict).
* #696, allow `model_to_dict()` to introspect query for which fields to serialize.
* Added backend-agnostic [truncate_date()](http://docs.peewee-orm.com/en/latest/peewee/api.html#Database.truncate_date) implementation.
* Added a `FixedCharField` which uses column type `CHAR`.
* Added support for arbitrary `PRAGMA` statements to be run on new SQLite connections. [Docs](http://docs.peewee-orm.com/en/latest/peewee/databases.html#sqlite-pragma).
* Removed `berkeley_build.sh` script. See instructions [on my blog instead](http://charlesleifer.com/blog/building-the-python-sqlite-driver-for-use-with-berkeleydb/).

[View commits](https://github.com/coleifer/peewee/compare/2.6.2...2.6.4)

## 2.6.2

Just a regular old release.

### Bugs fixed

* #641, fixed bug with exception wrapping and Python 2.6.
* #634, fixed bug where correct query result wrapper was not being used for certain composite queries.
* #625, cleaned up some example code.
* #614, fixed bug with `aggregate_rows()` when there are multiple joins to the same table.

### New features

* Added [create_or_get()](http://docs.peewee-orm.com/en/latest/peewee/querying.html#create-or-get) as a companion to `get_or_create()`.
* Added support for `ON CONFLICT` clauses for `UPDATE` and `INSERT` queries. [Docs](http://docs.peewee-orm.com/en/latest/peewee/api.html#UpdateQuery.on_conflict).
* Added a [JSONKeyStore](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#JSONKeyStore) to `playhouse.kv`.
* Added Cythonized version of `strip_parens()`, with plans to perhaps move more performance-critical code to Cython in the future.
* Added docs on specifying [vendor-specific database parameters](http://docs.peewee-orm.com/en/latest/peewee/database.html#vendor-specific-parameters).
* Added docs on specifying [field default values](http://docs.peewee-orm.com/en/latest/peewee/models.html#default-field-values) (both client and server-side).
* Added docs on [foreign key field back-references](http://docs.peewee-orm.com/en/latest/peewee/models.html#foreignkeyfield).
* Added docs for [models without a primary key](http://docs.peewee-orm.com/en/latest/peewee/models.html#models-without-a-primary-key).
* Cleaned up docs on `prefetch()` and `aggregate_rows()`.

[View commits](https://github.com/coleifer/peewee/compare/2.6.1...2.6.2)

## 2.6.1

This release contains a number of small fixes and enhancements.

### Bugs fixed

* #606, support self-referential joins with `prefetch` and `aggregate_rows()` methods.
* #588, accommodate changes in SQLite's `PRAGMA index_list()` return value.
* #607, fixed bug where `pwiz` was not passing table names to introspector.
* #591, fixed bug with handling of named cursors in older psycopg2 versions.
* Removed some cruft from the `APSWDatabase` implementation.

### New features

* Added [CompressedField](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#CompressedField) and [AESEncryptedField](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#AESEncryptedField).
* #609, #610, added Django-style foreign key ID lookup. [Docs](http://docs.peewee-orm.com/en/latest/peewee/models.html#foreignkeyfield).
* Added support for [Hybrid Attributes](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#hybrid-attributes) (cool idea courtesy of SQLAlchemy).
* Added ``upsert`` keyword argument to the `Model.save()` function (SQLite only).
* #587, added support for ``ON CONFLICT`` SQLite clause for `INSERT` and `UPDATE` queries. [Docs](http://docs.peewee-orm.com/en/latest/peewee/api.html#UpdateQuery.on_conflict)
* #601, added hook for programmatically defining table names. [Model options docs](http://docs.peewee-orm.com/en/latest/peewee/models.html#model-options-and-table-metadata)
* #581, #611, support connection pools with `playhouse.db_url.connect()`. [Docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#connect).
* Added a [Contributing](http://docs.peewee-orm.com/en/latest/peewee/contributing.html) section to the docs.

[View commits](https://github.com/coleifer/peewee/compare/2.6.0...2.6.1)

## 2.6.0

This is a tiny update, mainly consisting of a new-and-improved implementation of ``get_or_create()`` ([docs](http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get_or_create)).

### Backwards-incompatible changes

* ``get_or_create()`` now returns a 2-tuple consisting of the model instance and a boolean indicating whether the instance was created. The function now behaves just like the Django equivalent.

### New features

* #574, better support for setting the character encoding on Postgresql database connections. Thanks @klen!
* Improved implementation of [get_or_create()](http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get_or_create).

[View commits](https://github.com/coleifer/peewee/compare/2.5.1...2.6.0)

## 2.5.1

This is a relatively small release with a few important bugfixes.

### Bugs fixed

* #566, fixed a bug regarding parentheses around compound `SELECT` queries (i.e. `UNION`, `INTERSECT`, etc).
* Fixed unreported bug where table aliases were not generated correctly for compound `SELECT` queries.
* #559, add option to preserve original column order with `pwiz`. Thanks @elgow!
* Fixed unreported bug where selecting all columns from a `ModelAlias` did not use the appropriate `FieldAlias` objects.

### New features

* #561, added an option for bulk insert queries to return the list of auto-generated primary keys. See [docs for InsertQuery.return_id_list](http://docs.peewee-orm.com/en/latest/peewee/api.html#InsertQuery.return_id_list).
* #569, added `parse` function to the `playhouse.db_url` module. Thanks @stt!
* Added [hacks](http://docs.peewee-orm.com/en/latest/peewee/hacks.html) section to the docs. Please contribute your hacks!

### Backwards-incompatible changes

* Calls to `Node.in_()` and `Node.not_in()` do not take `*args` anymore and instead take a single argument.

[View commits](https://github.com/coleifer/peewee/compare/2.5.0...2.5.1)

## 2.5.0

There are a couple new features so I thought I'd bump to 2.5.x. One change Postgres users may be happy to see is the use of `INSERT ... RETURNING` to perform inserts. This should definitely speed up inserts for Postgres, since an extra query is no longer needed to get the new auto-generated primary key.

I also added a [new context manager/decorator](http://docs.peewee-orm.com/en/latest/peewee/database.html#using-multiple-databases) that allows you to use a different database for the duration of the wrapped block.

### Bugs fixed

* #534, CSV utils was erroneously stripping the primary key from CSV data.
* #537, fix upserts when using `insert_many`.
* #541, respect `autorollback` with `PostgresqlExtDatabase`. Thanks @davidmcclure.
* #551, fix for QueryResultWrapper's implementation of the iterator protocol.
* #554, allow SQLite journal_mode to be set at run-time.
* Fixed case-sensitivity issue with `DataSet`.

### New features

* Added support for [CAST expressions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#cast).
* Added a hook for [extending Node](http://docs.peewee-orm.com/en/latest/peewee/api.html#Node.extend) with custom methods.
* `JOIN_` became `JOIN.`, e.g. `.join(JOIN.LEFT_OUTER)`.
* `OP_` became `OP.`.
* #556, allowed using `+` and `-` prefixes to indicate ascending/descending ordering.
* #550, added [Database.initialize_connection()](http://docs.peewee-orm.com/en/latest/peewee/database.html#additional-connection-initialization) hook.
* #549, bind selected columns to a particular model. Thanks @jhorman, nice PR!
* #531, support for swapping databases at run-time via [Using](http://docs.peewee-orm.com/en/latest/peewee/database.html#using-multiple-databases).
* #530, support for SQLCipher and Python3.
* New `RowIDField` for `sqlite_ext` playhouse module. This field can be used to interact with SQLite `rowid` fields.
* Added `LateralJoin` helper to the `postgres_ext` playhouse module.
* New [example blog app](https://github.com/coleifer/peewee/tree/master/examples/blog).

[View commits](https://github.com/coleifer/peewee/compare/2.4.7...2.5.0)

## 2.4.7

### Bugs fixed

* #504, Docs updates.
* #506, Fixed regression in `aggregate_rows()`.
* #510, Fixes bug in pwiz overwriting columns.
* #514, Correctly cast foreign keys in `prefetch()`.
* #515, Simplifies queries issued when doing recursive deletes.
* #516, Fix cloning of Field objects.
* #519, Aggregate rows now correctly preserves ordering of joined instances.
* Unreported, fixed bug to not leave expired connections sitting around in the pool.
### New features

* Added support for Postgresql's ``jsonb`` type with [BinaryJSONField](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#BinaryJSONField).
* Add some basic [Flask helpers](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#flask-utils).
* Add support for `UNION ALL` queries (#512).
* Add `SqlCipherExtDatabase`, which combines the sqlcipher database with the sqlite extensions.
* Add option to print metadata when generating code with ``pwiz``.

[View commits](https://github.com/coleifer/peewee/compare/2.4.6...2.4.7)

## 2.4.6

This is a relatively small release with mostly bug fixes and updates to the documentation. The one new feature I'd like to highlight is the ``ManyToManyField`` ([docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#ManyToManyField)).

### Bugs fixed

* #503, fixes behavior of `aggregate_rows()` when used with a `CompositeKey`.
* #498, fixes value coercion for field aliases.
* #492, fixes bug with pwiz and composite primary keys.
* #486, correctly handle schemas with reflection module.

### New features

* Peewee has a new [ManyToManyField](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#ManyToManyField) available in the ``playhouse.shortcuts`` module.
* Peewee now has proper support for *NOT IN* queries through the ``Node.not_in()`` method.
* Models now support iteration. This is equivalent to ``Model.select()``.

[View commits](https://github.com/coleifer/peewee/compare/2.4.5...2.4.6)

## 2.4.5

I'm excited about this release, as in addition to a number of new features and bugfixes, it also is a step towards cleaner code. I refactored the tests into a number of modules, using a standard set of base test-cases and helpers. I also introduced the `mock` library into the test suite and plan to use it for cleaner tests going forward. There's a lot of work to do to continue cleaning up the tests, but I'm feeling good about the changes. Curiously, the test suite runs faster now.

### Bugs fixed

* #471, #482 and #484, all of which had to do with how joins were handled by the `aggregate_rows()` query result wrapper.
* #472 removed some needless special-casing in `Model.save()`.
* #466 fixed case-sensitivity issues with the SQLite migrator.
* #474 fixed a handful of bugs that cropped up migrating foreign keys with SQLite.
* #475 fixed the behavior of the SQLite migrator regarding auto-generated indexes.
* #479 fixed a bug in the code that stripped extra parentheses in the SQL generator.
* Fixed a handful of bugs in the APSW extension.

### New features

* Added connection abstraction called `ExecutionContext` ([see docs](http://docs.peewee-orm.com/en/latest/peewee/database.html#advanced-connection-management)).
* Made all context managers work as decorators (`atomic`, `transaction`, `savepoint`, `execution_context`).
* Added explicit methods for `IS NULL` and `IS NOT NULL` queries. The latter was actually necessary since the behavior is different from `NOT IS NULL (...)`.
* Allow disabling backref validation (#465).
* Made quite a few improvements to the documentation, particularly sections on transactions.
* Added caching to the [DataSet](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#dataset) extension, which should improve performance.
* Made the SQLite migrator smarter with regards to preserving indexes when a table copy is necessary.

[View commits](https://github.com/coleifer/peewee/compare/2.4.4...2.4.5)

## 2.4.4

Biggest news: peewee has a new logo!
![](http://media.charlesleifer.com/blog/photos/peewee-logo-bold.png)

* Small documentation updates here and there.

### Backwards-incompatible changes

* The argument signature for the `SqliteExtDatabase.aggregate()` decorator changed so that the aggregate name is the first parameter, and the number of parameters is the second parameter. If no values are specified, peewee will choose the name of the class and an un-specified number of arguments (`-1`).
* The logic for saving a model with a composite key changed slightly. Previously, if a model had a composite primary key and you called `save()`, only the dirty fields would be saved.

### Bugs fixed

* #462
* #465, add hook for disabling backref validation.
* #466, fix case-sensitive table names with migration module.
* #469, save only dirty fields.

### New features

* Lots of enhancements and cleanup to the `playhouse.apsw_ext` module.
* The `playhouse.reflection` module now supports introspecting indexes.
* Added a model option for disabling backref validation.
* Added support for the SQLite [closure table extension](http://charlesleifer.com/blog/querying-tree-structures-in-sqlite-using-python-and-the-transitive-closure-extension/).
* Added support for *virtual fields*, which act on dynamically-created virtual table fields.
* Added a new example: a virtual table implementation that exposes Redis as a relational database table.
* Added a module `playhouse.sqlite_aggregates` that contains a handful of aggregates you may find useful when developing with SQLite.

[View commits](https://github.com/coleifer/peewee/compare/2.4.3...2.4.4)

## 2.4.3

This release contains numerous improvements, particularly around the built-in database introspection utilities. Peewee should now also be compatible with PyPy.

### Bugs fixed

* #466, table names are case sensitive in the SQLite migrations module.
* #465, added option to disable backref validation.
* #462, use the schema name consistently with postgres reflection.

### New features

* New model *Meta* option to disable backref validation. [See validate_backrefs](http://docs.peewee-orm.com/en/latest/peewee/models.html#model-options-and-table-metadata).
* Added documentation on ordering by calculated values.
* Added basic PyPy compatibility.
* Added logic to close cursors after they have been exhausted.
* Structured and consolidated database metadata introspection, including improvements for introspecting indexes.
* Added support to [prefetch](http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=prefetch#prefetch) for traversing *up* the query tree.
* Added introspection option to skip invalid models while introspecting.
* Added option to limit the tables introspected.
* Added closed connection detection to the MySQL connection pool.
* Enhancements to passing options when creating virtual tables with SQLite.
* Added factory method for generating Closure tables for use with the `transitive_closure` SQLite extension.
* Added support for loading SQLite extensions.
* Numerous test-suite enhancements and new test-cases.

[View commits](https://github.com/coleifer/peewee/compare/2.4.2...2.4.3)

## 2.4.2

This release contains a number of improvements to the `reflection` and `migrate` extension modules. I also added an encrypted *diary* app to the [examples](https://github.com/coleifer/peewee/tree/master/examples) directory.

### Bugs fixed

* #449, typo in the db_url extension, thanks to @malea for the fix.
* #457 and #458, fixed documentation deficiencies.
### New features

* Added support for [importing data](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#importing-data) when using the [DataSet extension](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#dataset).
* Added an encrypted diary app to the examples.
* Better index reconstruction when altering columns on SQLite databases with the [migrate](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate) module.
* Support for multi-column primary keys in the [reflection](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#reflection) module.
* Close cursors more aggressively when executing SELECT queries.

[View commits](https://github.com/coleifer/peewee/compare/2.4.1...2.4.2)

## 2.4.1

This release contains a few small bugfixes.

### Bugs fixed

* #448, add hook to the connection pool for detecting closed connections.
* #229, fix join attribute detection.
* #447, fixed documentation typo.

[View commits](https://github.com/coleifer/peewee/compare/2.4.0...2.4.1)

## 2.4.0

This release contains a number of enhancements to the `playhouse` collection of extensions.

### Backwards-incompatible changes

As of 2.4.0, most of the introspection logic was moved out of the ``pwiz`` module and into ``playhouse.reflection``.

### New features

* Created a new [reflection](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#reflection) extension for introspecting databases. The *reflection* module additionally can generate actual peewee Model classes dynamically.
* Created a [dataset](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#dataset) library (based on the [SQLAlchemy project](https://dataset.readthedocs.io/) of the same name). For more info check out the blog post [announcing playhouse.dataset](http://charlesleifer.com/blog/saturday-morning-hacks-dataset-for-peewee/).
* Added a [db_url](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#database-url) module which creates `Database` objects from a connection string.
* Added [csv dump](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#dumping-csv) functionality to the [CSV utils](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#csv-utils) extension.
* Added an [atomic](http://docs.peewee-orm.com/en/latest/peewee/transactions.html#nesting-transactions) context manager to support nested transactions.
* Added support for HStore, JSON and TSVector to the `reflection` module.
* More documentation updates.

### Bugs fixed

* Fixed #440, which fixes a bug where `Model.dirty_fields` did not return an empty set for some subclasses of `QueryResultWrapper`.

[View commits](https://github.com/coleifer/peewee/compare/2.3.3...2.4.0)

## 2.3.3

This release contains a lot of improvements to the documentation and a mixed bag of other new features and bugfixes.

### Backwards-incompatible changes

As of 2.3.3, all peewee `Database` instances have a default of `True` for the `threadlocals` parameter. This means that a connection is opened for each thread. It seemed to me that sharing connections across threads caused a lot of confusion to users who weren't aware of (or familiar with) the `threadlocals` parameter. For single-threaded apps the behavior will not be affected, but for multi-threaded applications, if you wish to share your connection across threads you must now specify `threadlocals=False`. For more information, see the [documentation](http://docs.peewee-orm.com/en/latest/peewee/api.html#Database).
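
A minimal sketch of the new default and the explicit opt-out (the database name here is hypothetical):

    from peewee import SqliteDatabase

    # As of 2.3.3, threadlocals=True is the default: each thread that uses
    # the database gets its own connection.
    db = SqliteDatabase('app.db')

    # To share a single connection across threads (the old behavior),
    # opt out explicitly:
    shared_db = SqliteDatabase('app.db', threadlocals=False)
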
I also renamed the `Model.get_id()` and `Model.set_id()` convenience methods so as not to conflict with Flask-Login. These methods should have probably been private anyways, and the new methods are named `_get_pk_value()` and `_set_pk_value()`.

### New features

* Basic support for [Postgresql full-text search](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pg-fts).
* Helper functions for converting models to dictionaries and unpacking dictionaries into model instances. See [docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#model_to_dict).

### Bugs fixed

* Fixed #428, documentation formatting error.
* Fixed #429, which fixes the way default values are initialized for bulk inserts.
* Fixed #432, making the HStore extension optional when using `PostgresqlExtDatabase`.
* Fixed #435, allowing peewee to be used with Flask-Login.
* Fixed #436, allowing the SQLite date_part and date_trunc functions to correctly handle NULL values.
* Fixed #438, in which the ordering of clauses in a Join expression was causing unpredictable behavior when selecting related instances.
* Updated the `berkeley_build.sh` script, which was incompatible with the newest version of `bsddb3`.

[View commits](https://github.com/coleifer/peewee/compare/2.3.2...2.3.3)

## 2.3.2

This release contains mostly bugfixes.

### Changes in 2.3.2

* Fixed #421, allowing division operations to work correctly in py3k.
* Added support for a custom json.dumps command, thanks to @alexlatchford.
* Fixed some foreign key generation bugs with pwiz in #426.
* Fixed a parentheses bug with UNION queries, #422.
* Added support for returning partial JSON data-structures from postgresql.

[View commits](https://github.com/coleifer/peewee/compare/2.3.1...2.3.2)

## 2.3.1

This release contains a fix for a bug introduced in 2.3.0: table names began to be included, unquoted, in update queries, which caused problems when the table name was a keyword.

### Changes in 2.3.1

* [Quote table name / alias](https://github.com/coleifer/peewee/issues/414)

[View commits](https://github.com/coleifer/peewee/compare/2.3.0...2.3.1)

## 2.3.0

This release contains a number of bugfixes, enhancements and a rewrite of much of the documentation.

### Changes in 2.3.0

* [New and improved documentation](http://docs.peewee-orm.com/)
* Added [aggregate_rows()](http://docs.peewee-orm.com/en/latest/peewee/querying.html#list-users-and-all-their-tweets) method for mitigating N+1 queries.
* Query compiler performance improvements and rewrite of table alias internals (51d82fcd and d8d55df04).
* Added context-managers and decorators for [counting queries](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#count_queries) and [asserting query counts](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#assert_query_count).
* Allow `UPDATE` queries to contain subqueries for values ([example](http://docs.peewee-orm.com/en/latest/peewee/querying.html#atomic-updates)).
* Support for `INSERT INTO / SELECT FROM` queries ([docs](http://docs.peewee-orm.com/en/latest/peewee/api.html?highlight=insert_from#Model.insert_from)).
* Allow `SqliteDatabase` to set the database's journal mode.
* Added method for concatenation.
* Moved ``UUIDField`` out of the playhouse and into peewee.
* Added [pskel](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pskel) script.
* Documentation for [BerkeleyDB](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#berkeleydb).

### Bugs fixed

* #340, allow inner query values to be used in outer query joins.
* #380, fixed foreign key handling in SQLite migrations.
* #389, mark foreign keys as dirty on assignment.
* #391, added an ``orwhere()`` method.
* #392, fixed ``order_by`` meta option inheritance bug.
* #394, fixed UUID and conversion of foreign key values (thanks @alexlatchford).
* #395, allow selecting all columns using ``SQL('*')``.
* #396, fixed query compiler bug that was adding unnecessary parentheses around expressions.
* #405, fixed behavior of ``count()`` when query has a limit or offset.

[View commits](https://github.com/coleifer/peewee/compare/2.2.5...2.3.0)

## 2.2.5

This is a small release and contains a handful of fixes.

### Changes in 2.2.5

* Added a `Window` object for creating reusable window definitions.
* Added support for `DISTINCT ON (...)`.
* Added a BerkeleyDB-backed sqlite `Database` and build script.
* Fixed how the `UUIDField` handles `None` values (thanks @alexlatchford).
* Fixed various things in the example app.
* Added 3.4 to the travis build (thanks @frewsxcv).

[View commits](https://github.com/coleifer/peewee/compare/2.2.4...2.2.5)

## 2.2.4

This release contains a complete rewrite of `pwiz` as well as some improvements to the SQLite extension, including support for the BM25 ranking algorithm for full-text searches. I also merged support for sqlcipher, an encrypted SQLite database, with many thanks to @thedod!

### Changes in 2.2.4

* Rewrite of `pwiz`, the schema introspection utility.
* `Model.save()` returns a value indicating the number of modified rows.
* Fixed bug with `PostgresqlDatabase.last_insert_id()` leaving a transaction open in autocommit mode (#353).
* Added BM25 ranking algorithm for full-text searches with SQLite.

[View commits](https://github.com/coleifer/peewee/compare/2.2.3...2.2.4)

## 2.2.3

This release contains a new migrations module in addition to a number of small features and bug fixes.

### Changes in 2.2.3

* New migrations module.
* Added a return value to `Model.save()` indicating number of rows affected.
* Added a `date_trunc()` method that works for Sqlite.
* Added a `Model.sqlall()` class-method to return all the SQL to generate the model / indices.

### Bugs fixed

* #342, allow functions to not coerce parameters automatically.
* #338, fixed unaliased columns when using Array and Json fields with postgres, thanks @mtwesley.
* #331, corrected issue with the way unicode arrays were adapted with psycopg2.
* #328, pwiz / mysql bug.
* #326, fixed calculation of the alias_map when using subqueries.
* #324, bug with `prefetch()` not selecting the correct primary key.

[View commits](https://github.com/coleifer/peewee/compare/2.2.2...2.2.3)

## 2.2.1

I've been looking forward to this release, as it contains a couple new features that I've been wanting to add for some time now. Hope you find them useful.

### Changes in 2.2.1

* Window queries using ``OVER`` syntax.
* Compound query operations ``UNION``, ``INTERSECT``, ``EXCEPT`` as well as symmetric difference.

### Bugs fixed

* #300, pwiz was not correctly interpreting some foreign key constraints in SQLite.
* #298, drop table with cascade API was missing.
* #294, typo.

[View commits](https://github.com/coleifer/peewee/compare/2.2.0...2.2.1)

## 2.2.0

This release contains a large refactoring of the way SQL was generated for both the standard query classes (`Select`, `Insert`, `Update`, `Delete`) as well as for the DDL methods (`create_table`, `create_index`, etc).
Instead of joining strings of SQL and manually quoting things, I've created `Clause` objects containing multiple `Node` objects to represent all parts of the query.

I also changed the way peewee determines the SQL to represent a field. Now a field implements ``__ddl__`` and ``__ddl_column__`` methods. The former creates the entire field definition, e.g.:

    "quoted_column_name" [NOT NULL/PRIMARY KEY/DEFAULT NEXTVAL(...)/CONSTRAINTS...]

The latter method is responsible just for the column type definition. This might return ``VARCHAR(255)`` or simply ``TEXT``.

I've also added support for arbitrary constraints on each field, so you might have:

    price = DecimalField(decimal_places=2, constraints=[Check('price > 0')])

### Changes in 2.2.0

* Refactored query generation for both SQL queries and DDL queries.
* Support for arbitrary column constraints.
* `autorollback` option to the `Database` class that will roll back the transaction before raising an exception.
* Added `JSONField` type to the `postgresql_ext` module.
* Track fields that are explicitly set, allowing faster saves (thanks @soasme).
* Allow the `FROM` clause to be an arbitrary `Node` object (#290).
* `schema` is a new `Model.Meta` option and is used throughout the code.
* Allow indexing operation on HStore fields (thanks @zdxerr, #293).

### Bugs fixed

* #277 (where calls not chainable with update query)
* #278, use `wraps()`, thanks @lucasmarshall.
* #284, call `prepared()` after `create()`, thanks @soasme.
* #286, cursor description issue with pwiz + postgres.

[View commits](https://github.com/coleifer/peewee/compare/2.1.7...2.2.0)

## 2.1.7

### Changes in 2.1.7

* Support for savepoints (Sqlite, Postgresql and MySQL) using an API similar to that of transactions.
* Common set of exceptions to wrap DB-API 2 driver-specific exception classes, e.g. ``peewee.IntegrityError``.
* When pwiz cannot determine the underlying column type, display it in a comment in the generated code.
* Support for circular foreign-keys.
* Moved ``Proxy`` into peewee (previously in ``playhouse.proxy``).
* Renamed ``R()`` to ``SQL()``.
* General code cleanup, some new comments and docstrings.

### Bugs fixed

* Fixed a small bug in the way errors were handled in the transaction context manager.
* #257
* #265, nest multiple calls to functions decorated with `@database.commit_on_success`.
* #266
* #267

Commits: https://github.com/coleifer/peewee/compare/2.1.6...2.1.7

Released 2013-12-25

## 2.1.6

Changes included in 2.1.6:

* [Lightweight Django integration](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#django-integration).
* Added a [csv loader](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#csv-loader) to playhouse.
* Register unicode converters per-connection instead of globally when using `psycopg2`.
* Fix for how the related object cache is invalidated (#243).

Commits: https://github.com/coleifer/peewee/compare/2.1.5...2.1.6

Released 2013-11-19

## 2.1.5

### Summary of new features

* Rewrote the ``playhouse.postgres_ext.ServerSideCursor`` helper to work with a single query. [Docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#server-side-cursors).
* Added error handler hook to the database class, allowing your code to choose how to handle errors executing SQL. [Docs](http://docs.peewee-orm.com/en/latest/peewee/api.html#Database.sql_error_handler).
* Allow arbitrary attributes to be stored in ``Model.Meta`` (a5e13bb26d6196dbd24ff228f99ff63d9c046f79).
* Support for composite primary keys (!!).
  [How-to](http://docs.peewee-orm.com/en/latest/peewee/cookbook.html#composite-primary-keys) and [API docs](http://docs.peewee-orm.com/en/latest/peewee/api.html#CompositeKey).
* Added helper for generating ``CASE`` expressions. [Docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#case).
* Allow the table alias to be specified as a model ``Meta`` option.
* Added ability to specify ``NOWAIT`` when issuing ``SELECT FOR UPDATE`` queries.

### Bug fixes

* #147, SQLite auto-increment behavior.
* #222
* #223, missing call to ``execute()`` in docs.
* #224, python 3 compatibility fix.
* #227, was using wrong column type for boolean with MySQL.

Commits: https://github.com/coleifer/peewee/compare/2.1.4...2.1.5

Released 2013-10-19

## 2.1.4

* Small refactor of some components used to represent expressions (mostly better names).
* Support for [Array fields](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#ArrayField) in postgresql.
* Added notes on [Proxy](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#proxy).
* Support for [Server side cursors](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#server-side-cursors) with postgresql.
* Code cleanups for more consistency.

Commits: https://github.com/coleifer/peewee/compare/2.1.3...2.1.4

Released 2013-08-05

## 2.1.3

* Added the ``sqlite_ext`` module, including support for virtual tables, full-text search, user-defined functions, collations and aggregates, as well as more granular locking.
* Manually convert data-types when doing simple aggregations - fixes issue #208.
* Profiled code and dramatically increased performance of benchmarks.
* Added a proxy object for lazy database initialization - fixes issue #210.

Commits: https://github.com/coleifer/peewee/compare/2.1.2...2.1.3

Released 2013-06-28

-------------------------------------

## 2.0.0

Major rewrite, see notes here: http://docs.peewee-orm.com/en/latest/peewee/upgrading.html#upgrading

peewee-2.10.2/LICENSE

Copyright (c) 2010 Charles Leifer

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
peewee-2.10.2/MANIFEST.in000066400000000000000000000005331316645060400146120ustar00rootroot00000000000000include CHANGELOG.md
include LICENSE
include README.rst
include TODO.rst
include runtests.py
include tests.py
include playhouse/_sqlite_ext.pyx
include playhouse/_sqlite_udf.pyx
include playhouse/_speedups.pyx
include playhouse/pskel
include playhouse/README.md
include playhouse/tests/README
recursive-include examples *
recursive-include docs *
peewee-2.10.2/README.rst000066400000000000000000000167171316645060400145540ustar00rootroot00000000000000Attention
---------

I've pushed the alpha branch of Peewee 3.0, which is a more-or-less total
rewrite. The APIs are all mostly backwards-compatible, though some features
and extension modules have been removed. Check it out if you're curious:

https://github.com/coleifer/peewee/tree/3.0a

.. image:: http://media.charlesleifer.com/blog/photos/p1423749536.32.png

peewee
======

Peewee is a simple and small ORM. It has few (but expressive) concepts, making
it easy to learn and intuitive to use.

* A small, expressive ORM
* Written in Python with support for versions 2.6+ and 3.2+.
* Built-in support for sqlite, mysql and postgresql
* Tons of extensions available in the `playhouse `_

  * `Postgresql HStore, JSON, arrays and more `_
  * `SQLite full-text search, user-defined functions, virtual tables and more `_
  * `Schema migrations `_ and `model code generator `_
  * `Connection pool `_
  * `Encryption `_
  * `and much, much more... `_

.. image:: https://travis-ci.org/coleifer/peewee.svg?branch=master
    :target: https://travis-ci.org/coleifer/peewee

New to peewee? Here is a list of documents you might find most helpful when
getting started:

* `Quickstart guide `_ -- this guide covers all the essentials. It will take
  you between 5 and 10 minutes to go through it.
* `Guide to the various query operators `_ describes how to construct queries
  and combine expressions.
* `Field types table `_ lists the various field types peewee supports and the
  parameters they accept.

For Flask helpers, check out the `flask_utils extension module `_. You can
also use peewee with the popular extension `flask-admin `_ to provide a
Django-like admin interface for managing peewee models.

Examples
--------

Defining models is similar to Django or SQLAlchemy:

.. code-block:: python

    from peewee import *
    from playhouse.sqlite_ext import SqliteExtDatabase
    import datetime

    db = SqliteExtDatabase('my_database.db')

    class BaseModel(Model):
        class Meta:
            database = db

    class User(BaseModel):
        username = CharField(unique=True)

    class Tweet(BaseModel):
        user = ForeignKeyField(User, related_name='tweets')
        message = TextField()
        created_date = DateTimeField(default=datetime.datetime.now)
        is_published = BooleanField(default=True)

Connect to the database and create tables:

.. code-block:: python

    db.connect()
    db.create_tables([User, Tweet])

Create a few rows:

.. code-block:: python

    charlie = User.create(username='charlie')
    huey = User(username='huey')
    huey.save()

    # No need to set `is_published` or `created_date` since they
    # will just use the default values we specified.
    Tweet.create(user=charlie, message='My first tweet')
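Those defaults are filled in as soon as the instance is created -- a minimal
sketch, assuming the models defined above:

.. code-block:: python

    tweet = Tweet.create(user=huey, message='hello')

    # Fields we did not specify were populated from their defaults:
    tweet.is_published   # True
    tweet.created_date   # datetime.datetime of the creation time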
Queries are expressive and composable:

.. code-block:: python

    # A simple query selecting a user.
    User.get(User.username == 'charlie')

    # Get tweets created by one of several users. The "<<" operator
    # corresponds to the SQL "IN" operator.
    usernames = ['charlie', 'huey', 'mickey']
    users = User.select().where(User.username << usernames)
    tweets = Tweet.select().where(Tweet.user << users)

    # We could accomplish the same using a JOIN:
    tweets = (Tweet
              .select()
              .join(User)
              .where(User.username << usernames))

    # How many tweets were published today?
    tweets_today = (Tweet
                    .select()
                    .where(
                        (Tweet.created_date >= datetime.date.today()) &
                        (Tweet.is_published == True))
                    .count())

    # Paginate the user table and show me page 3 (users 41-60).
    User.select().order_by(User.username).paginate(3, 20)

    # Order users by the number of tweets they've created:
    tweet_ct = fn.Count(Tweet.id)
    users = (User
             .select(User, tweet_ct.alias('ct'))
             .join(Tweet, JOIN.LEFT_OUTER)
             .group_by(User)
             .order_by(tweet_ct.desc()))

    # Do an atomic update
    Counter.update(count=Counter.count + 1).where(
        Counter.url == request.url)

Check out the `example app `_ for a working Twitter-clone website written with
Flask.

Learning more
-------------

Check the `documentation `_ for more examples.

Specific question? Come hang out in the #peewee channel on irc.freenode.net,
or post to the mailing list, http://groups.google.com/group/peewee-orm . If
you would like to report a bug, `create a new issue `_ on GitHub.

Still want more info?
---------------------

.. image:: http://media.charlesleifer.com/blog/photos/wat.jpg

I've written a number of blog posts about building applications and
web-services with peewee (and usually Flask). If you'd like to see some
real-life applications that use peewee, the following resources may be useful:

* `Building a note-taking app with Flask and Peewee `_ as well as `Part 2 `_
  and `Part 3 `_.
* `Analytics web service built with Flask and Peewee `_.
* `Personalized news digest (with a boolean query parser!) `_.
* `Structuring Flask apps with Peewee `_.
* `Creating a lastpass clone with Flask and Peewee `_.
* `Creating a bookmarking web-service that takes screenshots of your bookmarks `_.
* `Building a pastebin, wiki and a bookmarking service using Flask and Peewee `_.
* `Encrypted databases with Python and SQLCipher `_.
* `Dear Diary: An Encrypted, Command-Line Diary with Peewee `_.
* `Query Tree Structures in SQLite using Peewee and the Transitive Closure Extension `_.
peewee-2.10.2/TODO.rst000066400000000000000000000010251316645060400143500ustar00rootroot00000000000000todo
====

* Database column defaults?
* Pre-compute foreign keys, attributes and join types (forward-ref or backref)
  in the `AggregateQueryResultWrapper.iterate` method.
* Improve the performance of the `QueryCompiler`.

version 3?
==========

* Follow foreign keys through fields, e.g. Tweet.user.username, or
  Comment.blog.user.username.
* Simplify the node types:

  * Node (base class)
  * Expression
  * Quoted
  * Clause

* Parsing should be context-aware, which would reduce some of the hacks,
  particularly around `IN` + lists.
peewee-2.10.2/docs/000077500000000000000000000000001316645060400140035ustar00rootroot00000000000000peewee-2.10.2/docs/Makefile000066400000000000000000000107561316645060400154520ustar00rootroot00000000000000# Makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
PAPER         =
BUILDDIR      = _build

# Internal variables.
PAPEROPT_a4     = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest

help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	-rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

qthelp:
	$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
	@echo
	@echo "Build finished; now you can run "qcollectiongenerator" with the" \
	      ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
	@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/peewee.qhcp"
	@echo "To view the help file:"
	@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/peewee.qhc"

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/peewee"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/peewee"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	make -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."
peewee-2.10.2/docs/_static/000077500000000000000000000000001316645060400154315ustar00rootroot00000000000000peewee-2.10.2/docs/_static/peewee-white.png000066400000000000000000000402311316645060400205270ustar00rootroot00000000000000[binary PNG image data omitted]
peewee-2.10.2/docs/_themes/000077500000000000000000000000001316645060400154275ustar00rootroot00000000000000peewee-2.10.2/docs/_themes/flask/000077500000000000000000000000001316645060400165275ustar00rootroot00000000000000peewee-2.10.2/docs/_themes/flask/layout.html000066400000000000000000000013561316645060400207370ustar00rootroot00000000000000{%- extends "basic/layout.html" %}
{%- block extrahead %}
  {{ super() }}
  {% if theme_touch_icon %}
  {% endif %}
{% endblock %}
{%- block relbar2 %}{% endblock %}
{% block header %}
  {{ super() }}
  {% if pagename == 'index' %}
  {% endif %}
{% endblock %}
{%- block footer %}
  {% if pagename == 'index' %}
  {% endif %}
{%- endblock %}
peewee-2.10.2/docs/_themes/flask/relations.html000066400000000000000000000011161316645060400214140ustar00rootroot00000000000000

Related Topics

peewee-2.10.2/docs/_themes/flask/static/000077500000000000000000000000001316645060400200165ustar00rootroot00000000000000peewee-2.10.2/docs/_themes/flask/static/flasky.css_t000066400000000000000000000144441316645060400223530ustar00rootroot00000000000000/* * flasky.css_t * ~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. */ {% set page_width = '940px' %} {% set sidebar_width = '220px' %} @import url("basic.css"); /* -- page layout ----------------------------------------------------------- */ body { font-family: 'Georgia', serif; font-size: 17px; background-color: white; color: #000; margin: 0; padding: 0; } div.document { width: {{ page_width }}; margin: 30px auto 0 auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}; } div.sphinxsidebar { width: {{ sidebar_width }}; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 0 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width }}; margin: 20px auto 30px auto; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.related { display: none; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dotted #999; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 14px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 18px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0 0 20px 0; margin: 0; text-align: center; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: 'Garamond', 'Georgia', serif; color: #444; font-size: 24px; font-weight: normal; margin: 0 0 5px 0; padding: 0; } div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: 'Georgia', serif; font-size: 1em; } /* -- body styles ----------------------------------------------------------- */ a { color: #004B6B; text-decoration: underline; } a:hover { color: #6D4100; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: 'Garamond', 'Georgia', serif; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; } {% if theme_index_logo %} div.indexwrapper h1 { text-indent: -999999px; background: url({{ theme_index_logo }}) no-repeat center center; height: {{ theme_index_logo_height }}; } {% endif %} div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; } div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; background: #eaeaea; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition tt.xref, div.admonition a tt { border-bottom: 1px solid #fafafa; } dd div.admonition { margin-left: -60px; padding-left: 60px; } div.admonition p.admonition-title { font-family: 'Garamond', 'Georgia', serif; 
font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight { background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.9em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul, ol { margin: 10px 0 10px 30px; padding: 0; } pre { background: #eee; padding: 7px 30px; margin: 15px -30px; line-height: 1.3em; } dl pre, blockquote pre, li pre { margin-left: -60px; padding-left: 60px; } dl dl pre { margin-left: -90px; padding-left: 90px; } tt { background-color: #ecf0f3; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background-color: #FBFBFB; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dotted #004B6B; } a.reference:hover { border-bottom: 1px solid #6D4100; } a.footnote-reference { text-decoration: none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dotted #004B6B; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } peewee-2.10.2/docs/_themes/flask/static/small_flask.css000066400000000000000000000017201316645060400230200ustar00rootroot00000000000000/* * small_flask.css_t * ~~~~~~~~~~~~~~~~~ * * :copyright: Copyright 2010 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. 
*/ body { margin: 0; padding: 20px 30px; } div.documentwrapper { float: none; background: white; } div.sphinxsidebar { display: block; float: none; width: 102.5%; margin: 50px -30px -20px -30px; padding: 10px 20px; background: #333; color: white; } div.sphinxsidebar h3, div.sphinxsidebar h4, div.sphinxsidebar p, div.sphinxsidebar h3 a { color: white; } div.sphinxsidebar a { color: #aaa; } div.sphinxsidebar p.logo { display: none; } div.document { width: 100%; margin: 0; } div.related { display: block; margin: 0; padding: 10px 0 20px 0; } div.related ul, div.related ul li { margin: 0; padding: 0; } div.footer { display: none; } div.bodywrapper { margin: 0; } div.body { min-height: 0; padding: 0; } peewee-2.10.2/docs/_themes/flask/theme.conf000066400000000000000000000002441316645060400205000ustar00rootroot00000000000000[theme] inherit = basic stylesheet = flasky.css pygments_style = flask_theme_support.FlaskyStyle [options] index_logo = '' index_logo_height = 120px touch_icon = peewee-2.10.2/docs/conf.py000066400000000000000000000160141316645060400153040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # peewee documentation build configuration file, created by # sphinx-quickstart on Fri Nov 26 11:05:15 2010. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. #RTD_NEW_THEME = True import sys, os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ----------------------------------------------------- # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'peewee' copyright = u'charles leifer' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. src_dir = os.path.realpath(os.path.dirname(os.path.dirname(__file__))) sys.path.insert(0, src_dir) from peewee import __version__ version = __version__ # The full version, including alpha/beta/rc tags. release = __version__ # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. 
#default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. #add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'pastie' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = { # 'index_logo': 'peewee-white.png' #} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = ['_themes'] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'peeweedoc' # -- Options for LaTeX output -------------------------------------------------- # The paper size ('letter' or 'a4'). #latex_paper_size = 'letter' # The font size ('10pt', '11pt' or '12pt'). #latex_font_size = '10pt' # Grouping the document tree into LaTeX files. 
List of tuples # (source start file, target name, title, author, documentclass [howto/manual]). latex_documents = [ ('index', 'peewee.tex', u'peewee Documentation', u'charles leifer', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Additional stuff for the LaTeX preamble. #latex_preamble = '' # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output -------------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('index', 'peewee', u'peewee Documentation', [u'charles leifer'], 1) ] peewee-2.10.2/docs/index.rst000066400000000000000000000044051316645060400156470ustar00rootroot00000000000000.. peewee documentation master file, created by sphinx-quickstart on Thu Nov 25 21:20:29 2010. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. peewee ====== .. image:: peewee-logo.png Peewee is a simple and small ORM. It has few (but expressive) concepts, making it easy to learn and intuitive to use. * A small, expressive ORM * Written in python with support for versions 2.6+ and 3.2+. * built-in support for sqlite, mysql and postgresql * :ref:`numerous extensions available ` (:ref:`postgres hstore/json/arrays `, :ref:`sqlite full-text-search `, :ref:`schema migrations `, and much more). .. image:: postgresql.png :target: peewee/database.html#using-postgresql :alt: postgresql .. image:: mysql.png :target: peewee/database.html#using-mysql :alt: mysql .. image:: sqlite.png :target: peewee/database.html#using-sqlite :alt: sqlite Peewee's source code hosted on `GitHub `_. New to peewee? Here is a list of documents you might find most helpful when getting started: * :ref:`Quickstart guide ` -- this guide covers all the bare essentials. It will take you between 5 and 10 minutes to go through it. * :ref:`Guide to the various query operators ` describes how to construct queries and combine expressions. * :ref:`Field types table ` lists the various field types peewee supports and the parameters they accept. There is also an :ref:`extension module ` that contains :ref:`special/custom field types `. Contents: --------- .. toctree:: :maxdepth: 2 :glob: peewee/installation peewee/quickstart peewee/example peewee/more-resources peewee/contributing peewee/database peewee/models peewee/querying peewee/transactions peewee/playhouse peewee/api peewee/hacks Note ---- If you find any bugs, odd behavior, or have an idea for a new feature please don't hesitate to `open an issue `_ on GitHub or `contact me `_. Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` peewee-2.10.2/docs/make.bat000066400000000000000000000100121316645060400154020ustar00rootroot00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 
if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\peewee.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\peewee.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. 
goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. goto end ) :end peewee-2.10.2/docs/mysql.png000066400000000000000000000025011316645060400156540ustar00rootroot00000000000000[binary PNG image data omitted]
peewee-2.10.2/docs/peewee/000077500000000000000000000000001316645060400152555ustar00rootroot00000000000000peewee-2.10.2/docs/peewee/api.rst000066400000000000000000003405221316645060400165660ustar00rootroot00000000000000.. _api: API Reference ============= .. _model-api: Models ------ .. py:class:: Model(**kwargs) Models provide a 1-to-1 mapping to database tables. Subclasses of ``Model`` declare any number of :py:class:`Field` instances as class attributes. These fields correspond to columns on the table. Table-level operations, such as :py:meth:`~Model.select`, :py:meth:`~Model.update`, :py:meth:`~Model.insert`, and :py:meth:`~Model.delete`, are implemented as classmethods. Row-level operations such as :py:meth:`~Model.save` and :py:meth:`~Model.delete_instance` are implemented as instancemethods. :param kwargs: Initialize the model, assigning the given key/values to the appropriate fields. Example: .. code-block:: python class User(Model): username = CharField() join_date = DateTimeField(default=datetime.datetime.now) is_admin = BooleanField() u = User(username='charlie', is_admin=True) .. py:classmethod:: select(*selection) :param selection: A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. :rtype: a :py:class:`SelectQuery` for the given :py:class:`Model`. Examples of selecting all columns (default): .. code-block:: python User.select().where(User.active == True).order_by(User.username) Example of selecting all columns on *Tweet* and the parent model, *User*. When the ``user`` foreign key is accessed on a *Tweet* instance no additional query will be needed (see :ref:`N+1 ` for more details): .. code-block:: python (Tweet .select(Tweet, User) .join(User) .order_by(Tweet.created_date.desc())) .. py:classmethod:: update(**update) :param update: mapping of field-name to expression :rtype: an :py:class:`UpdateQuery` for the given :py:class:`Model` Example showing users being marked inactive if their registration expired: .. code-block:: python q = User.update(active=False).where(User.registration_expired == True) q.execute() # Execute the query, updating the database. Example showing an atomic update: .. code-block:: python q = PageView.update(count=PageView.count + 1).where(PageView.url == url) q.execute() # execute the query, updating the database. .. note:: When an update query is executed, the number of rows modified will be returned. .. py:classmethod:: insert(**insert) Insert a new row into the database. If any fields on the model have default values, these values will be used if the fields are not explicitly set in the ``insert`` dictionary. :param insert: mapping of field or field-name to expression. :rtype: an :py:class:`InsertQuery` for the given :py:class:`Model`. Example showing creation of a new user: .. code-block:: python q = User.insert(username='admin', active=True, registration_expired=False) q.execute() # perform the insert. You can also use :py:class:`Field` objects as the keys: .. code-block:: python User.insert(**{User.username: 'admin'}).execute() If you have a model with a default value on one of the fields, and that field is not specified in the ``insert`` parameter, the default will be used: .. code-block:: python class User(Model): username = CharField() active = BooleanField(default=True) # This INSERT query will automatically specify `active=True`: User.insert(username='charlie') ..
note:: When an insert query is executed on a table with an auto-incrementing primary key, the primary key of the new row will be returned. .. py:method:: insert_many(rows) Insert multiple rows at once. The ``rows`` parameter must be an iterable that yields dictionaries. As with :py:meth:`~Model.insert`, fields that are not specified in the dictionary will use their default value, if one exists. .. note:: Due to the nature of bulk inserts, each row must contain the same fields. The following will not work: .. code-block:: python Person.insert_many([ {'first_name': 'Peewee', 'last_name': 'Herman'}, {'first_name': 'Huey'}, # Missing "last_name"! ]) :param rows: An iterable containing dictionaries of field-name-to-value. :rtype: an :py:class:`InsertQuery` for the given :py:class:`Model`. Example of inserting multiple Users: .. code-block:: python usernames = ['charlie', 'huey', 'peewee', 'mickey'] row_dicts = ({'username': username} for username in usernames) # Insert 4 new rows. User.insert_many(row_dicts).execute() Because the ``rows`` parameter can be an arbitrary iterable, you can also use a generator: .. code-block:: python def get_usernames(): for username in ['charlie', 'huey', 'peewee']: yield {'username': username} User.insert_many(get_usernames()).execute() .. warning:: If you are using SQLite, your SQLite library must be version 3.7.11 or newer to take advantage of bulk inserts. .. note:: SQLite has a default limit of 999 bound variables per statement. This limit can be modified at compile-time or at run-time, **but** if modifying at run-time, you can only specify a *lower* value than the default limit. For more information, check out the following SQLite documents: * `Max variable number limit `_ * `Changing run-time limits `_ * `SQLite compile-time flags `_ .. py:classmethod:: insert_from(fields, query) Insert rows into the table using a query as the data source. This API should be used for *INSERT INTO...SELECT FROM* queries. :param fields: The field objects to map the selected data into. :param query: The source of the new rows. :rtype: an :py:class:`InsertQuery` for the given :py:class:`Model`. Example of inserting data across tables for denormalization purposes: .. code-block:: python source = (User .select(User.username, fn.COUNT(Tweet.id)) .join(Tweet, JOIN.LEFT_OUTER) .group_by(User.username)) UserTweetDenorm.insert_from( [UserTweetDenorm.username, UserTweetDenorm.num_tweets], source).execute() .. py:classmethod:: delete() :rtype: a :py:class:`DeleteQuery` for the given :py:class:`Model`. Example showing the deletion of all inactive users: .. code-block:: python q = User.delete().where(User.active == False) q.execute() # remove the rows .. warning:: This method performs a delete on the *entire table*. To delete a single instance, see :py:meth:`Model.delete_instance`. .. py:classmethod:: raw(sql, *params) :param sql: a string SQL expression :param params: any number of parameters to interpolate :rtype: a :py:class:`RawQuery` for the given ``Model`` Example selecting rows from the User table: .. code-block:: python q = User.raw('select id, username from users') for user in q: print user.id, user.username .. note:: Generally the use of ``raw`` is reserved for those cases where you can significantly optimize a select query. It is useful for select queries since it will return instances of the model. .. 
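note:: When interpolating parameters into a raw query, use the placeholder style of your database driver -- SQLite uses ``'?'`` while most other backends use ``'%s'`` (see :py:class:`RawQuery`). A minimal sketch, assuming a SQLite database and the ``User`` model from the preceding examples:

    .. code-block:: python

        # The value is bound by the driver rather than formatted into the
        # SQL string, which avoids quoting and escaping issues.
        q = User.raw('SELECT id, username FROM users WHERE active = ?', True)
        for user in q:
            print user.username

..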
py:classmethod:: create(**attributes) :param attributes: key/value pairs of model attributes :rtype: a model instance with the provided attributes Example showing the creation of a user (a row will be added to the database): .. code-block:: python user = User.create(username='admin', password='test') .. note:: The create() method is a shorthand for instantiate-then-save. .. py:classmethod:: get(*args) :param args: a list of query expressions, e.g. ``User.username == 'foo'`` :rtype: :py:class:`Model` instance or raises ``DoesNotExist`` exception Get a single row from the database that matches the given query. Raises a ``.DoesNotExist`` if no rows are returned: .. code-block:: python user = User.get(User.username == username, User.active == True) This method is also exposed via the :py:class:`SelectQuery`, though it takes no parameters: .. code-block:: python active = User.select().where(User.active == True) try: user = active.where( (User.username == username) & (User.active == True) ).get() except User.DoesNotExist: user = None .. note:: The :py:meth:`~Model.get` method is shorthand for selecting with a limit of 1. It has the added behavior of raising an exception when no matching row is found. If more than one row is found, the first row returned by the database cursor will be used. .. py:classmethod:: get_or_create([defaults=None[, **kwargs]]) :param dict defaults: A dictionary of values to set on newly-created model instances. :param kwargs: Django-style filters specifying which model to get, and what values to apply to new instances. :returns: A 2-tuple containing the model instance and a boolean indicating whether the instance was created. This function attempts to retrieve a model instance based on the provided filters. If no matching model can be found, a new model is created using the parameters specified by the filters and any values in the ``defaults`` dictionary. .. note:: Use care when calling ``get_or_create`` with ``autocommit=False``, as the ``get_or_create()`` method will call :py:meth:`Database.atomic` to create either a transaction or savepoint. Example **without** ``get_or_create``: .. code-block:: python # Without `get_or_create`, we might write: try: person = Person.get( (Person.first_name == 'John') & (Person.last_name == 'Lennon')) except Person.DoesNotExist: person = Person.create( first_name='John', last_name='Lennon', birthday=datetime.date(1940, 10, 9)) Equivalent code using ``get_or_create``: .. code-block:: python person, created = Person.get_or_create( first_name='John', last_name='Lennon', defaults={'birthday': datetime.date(1940, 10, 9)}) .. py:classmethod:: alias() :rtype: :py:class:`ModelAlias` instance The :py:meth:`alias` method is used to create self-joins. Example: .. code-block:: pycon Parent = Category.alias() sq = (Category .select(Category, Parent) .join(Parent, on=(Category.parent == Parent.id)) .where(Parent.name == 'parent category')) .. note:: When using a :py:class:`ModelAlias` in a join, you must explicitly specify the join condition. .. py:classmethod:: create_table([fail_silently=False]) :param bool fail_silently: If set to ``True``, the method will check for the existence of the table before attempting to create. Create the table for the given model, along with any constraints and indexes. Example: .. code-block:: python database.connect() SomeModel.create_table() # Execute the create table query. .. 
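note:: Since creating a table that already exists will raise an error on most backends, a common pattern is to pass ``fail_silently=True`` when creating tables at application startup. A minimal sketch:

    .. code-block:: python

        database.connect()

        # Only issues the CREATE TABLE if the table is not already present.
        SomeModel.create_table(fail_silently=True)

..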
py:classmethod:: drop_table([fail_silently=False[, cascade=False]]) :param bool fail_silently: If set to ``True``, the query will check for the existence of the table before attempting to remove. :param bool cascade: Drop table with ``CASCADE`` option. Drop the table for the given model. .. py:classmethod:: table_exists() :rtype: Boolean whether the table for this model exists in the database .. py:classmethod:: sqlall() :returns: A list of queries required to create the table and indexes. .. py:method:: save([force_insert=False[, only=None]]) :param bool force_insert: Whether to force execution of an insert :param list only: A list of fields to persist -- when supplied, only the given fields will be persisted. Save the given instance, creating or updating depending on whether it has a primary key. If ``force_insert=True`` an *INSERT* will be issued regardless of whether or not the primary key exists. Example showing saving a model instance: .. code-block:: python user = User() user.username = 'some-user' # does not touch the database user.save() # change is persisted to the db .. py:method:: delete_instance([recursive=False[, delete_nullable=False]]) :param recursive: Delete this instance and anything that depends on it, optionally updating those that have nullable dependencies :param delete_nullable: If doing a recursive delete, delete all dependent objects regardless of whether they could be updated to NULL Delete the given instance. Any foreign keys set to cascade on delete will be deleted automatically. For more programmatic control, you can call with recursive=True, which will delete any non-nullable related models (those that *are* nullable will be set to NULL). If you wish to delete all dependencies regardless of whether they are nullable, set ``delete_nullable=True``. Example: .. code-block:: python some_obj.delete_instance() # it is gone forever .. py:method:: dependencies([search_nullable=False]) :param bool search_nullable: Search models related via a nullable foreign key :rtype: Generator expression yielding queries and foreign key fields Generate a list of queries of dependent models. Yields a 2-tuple containing the query and corresponding foreign key field. Useful for searching dependencies of a model, i.e. things that would be orphaned in the event of a delete. .. py:attribute:: dirty_fields Return a list of fields that were manually set. :rtype: list .. note:: If you just want to persist modified fields, you can call ``model.save(only=model.dirty_fields)``. If you **always** want to only save a model's dirty fields, you can use the Meta option ``only_save_dirty = True``. Then, any time you call :py:meth:`Model.save()`, by default only the dirty fields will be saved, e.g. .. code-block:: python class Person(Model): first_name = CharField() last_name = CharField() dob = DateField() class Meta: database = db only_save_dirty = True .. py:method:: is_dirty() Return whether any fields were manually set. :rtype: bool .. py:method:: prepared() This method provides a hook for performing model initialization *after* the row data has been populated. .. _fields-api: Fields ------ .. py:class:: Field(null=False, index=False, unique=False, verbose_name=None, help_text=None, db_column=None, default=None, choices=None, primary_key=False, sequence=None, constraints=None, schema=None, **kwargs):
:param bool null: whether this column can accept ``None`` or ``NULL`` values :param bool index: whether to create an index for this column when creating the table :param bool unique: whether to create a unique index for this column when creating the table :param string verbose_name: specify a "verbose name" for this field, useful for metadata purposes :param string help_text: specify some instruction text for the usage/meaning of this field :param string db_column: column name to use for underlying storage, useful for compatibility with legacy databases :param default: a value to use as an uninitialized default :param choices: an iterable of 2-tuples mapping ``value`` to ``display`` :param bool primary_key: whether to use this as the primary key for the table :param string sequence: name of sequence (if backend supports it) :param list constraints: a list of constraints, e.g. ``[Check('price > 0')]``. :param string schema: name of schema (if backend supports it) :param kwargs: named attributes containing values that may pertain to specific field subclasses, such as "max_length" or "decimal_places" .. py:attribute:: db_field = '' Attribute used to map this field to a column type, e.g. "string" or "datetime" .. py:attribute:: _is_bound Boolean flag indicating if the field is attached to a model class. .. py:attribute:: model_class The model the field belongs to. *Only applies to bound fields.* .. py:attribute:: name The name of the field. *Only applies to bound fields.* .. py:method:: db_value(value) :param value: python data type to prep for storage in the database :rtype: converted python datatype .. py:method:: python_value(value) :param value: data coming from the backend storage :rtype: python data type .. py:method:: coerce(value) This method is a shorthand that is used, by default, by both ``db_value`` and ``python_value``. You can usually get away with just implementing this. :param value: arbitrary data from app or backend :rtype: python data type .. py:class:: IntegerField Stores: integers .. py:attribute:: db_field = 'int' .. py:class:: BigIntegerField Stores: big integers .. py:attribute:: db_field = 'bigint' .. py:class:: PrimaryKeyField Stores: auto-incrementing integer fields suitable for use as primary key. .. py:attribute:: db_field = 'primary_key' .. py:class:: FloatField Stores: floating-point numbers .. py:attribute:: db_field = 'float' .. py:class:: DoubleField Stores: double-precision floating-point numbers .. py:attribute:: db_field = 'double' .. py:class:: DecimalField Stores: decimal numbers, using python standard library ``Decimal`` objects Additional attributes and values: ================== =================================== ``max_digits`` ``10`` ``decimal_places`` ``5`` ``auto_round`` ``False`` ``rounding`` ``decimal.DefaultContext.rounding`` ================== =================================== .. py:attribute:: db_field = 'decimal' .. py:class:: CharField Stores: small strings (0-255 bytes) Additional attributes and values: ================ ========================= ``max_length`` ``255`` ================ ========================= .. py:attribute:: db_field = 'string' .. py:class:: TextField Stores: arbitrarily large strings .. py:attribute:: db_field = 'text' .. py:class:: DateTimeField Stores: python ``datetime.datetime`` instances Accepts a special parameter ``formats``, which contains a list of formats the datetime can be encoded with. The default behavior is: .. 
code-block:: python '%Y-%m-%d %H:%M:%S.%f' # year-month-day hour-minute-second.microsecond '%Y-%m-%d %H:%M:%S' # year-month-day hour-minute-second '%Y-%m-%d' # year-month-day .. note:: If the incoming value does not match a format, it will be returned as-is .. py:attribute:: db_field = 'datetime' .. py:attribute:: year An expression suitable for extracting the year, for example to retrieve all blog posts from 2013: .. code-block:: python Blog.select().where(Blog.pub_date.year == 2013) .. py:attribute:: month An expression suitable for extracting the month from a stored date. .. py:attribute:: day An expression suitable for extracting the day from a stored date. .. py:attribute:: hour An expression suitable for extracting the hour from a stored time. .. py:attribute:: minute An expression suitable for extracting the minute from a stored time. .. py:attribute:: second An expression suitable for extracting the second from a stored time. .. py:class:: DateField Stores: python ``datetime.date`` instances Accepts a special parameter ``formats``, which contains a list of formats the date can be encoded with. The default behavior is: .. code-block:: python '%Y-%m-%d' # year-month-day '%Y-%m-%d %H:%M:%S' # year-month-day hour-minute-second '%Y-%m-%d %H:%M:%S.%f' # year-month-day hour-minute-second.microsecond .. note:: If the incoming value does not match a format, it will be returned as-is .. py:attribute:: db_field = 'date' .. py:attribute:: year An expression suitable for extracting the year, for example to retrieve all people born in 1983: .. code-block:: python Person.select().where(Person.dob.year == 1983) .. py:attribute:: month Same as :py:attr:`~DateField.year`, except extract month. .. py:attribute:: day Same as :py:attr:`~DateField.year`, except extract day. .. py:class:: TimeField Stores: python ``datetime.time`` instances Accepts a special parameter ``formats``, which contains a list of formats the time can be encoded with. The default behavior is: .. code-block:: python '%H:%M:%S.%f' # hour:minute:second.microsecond '%H:%M:%S' # hour:minute:second '%H:%M' # hour:minute '%Y-%m-%d %H:%M:%S.%f' # year-month-day hour-minute-second.microsecond '%Y-%m-%d %H:%M:%S' # year-month-day hour-minute-second .. note:: If the incoming value does not match a format, it will be returned as-is .. py:attribute:: db_field = 'time' .. py:attribute:: hour Extract the hour from a time, for example to retrieve all events occurring in the evening: .. code-block:: python Event.select().where(Event.time.hour > 17) .. py:attribute:: minute Same as :py:attr:`~TimeField.hour`, except extract minute. .. py:attribute:: second Same as :py:attr:`~TimeField.hour`, except extract second. .. py:class:: TimestampField Stores: python ``datetime.datetime`` instances (stored as integers) Accepts a special parameter ``resolution``, which is a power-of-10 up to ``10^6``. This allows sub-second precision while still using an :py:class:`IntegerField` for storage. Default is ``1`` (second precision). Also accepts a boolean parameter ``utc``, used to indicate whether the timestamps should be UTC. Default is ``False``. Finally, the field ``default`` is the current timestamp. If you do not want this behavior, then explicitly pass in ``default=None``. .. py:class:: BooleanField Stores: ``True`` / ``False`` .. py:attribute:: db_field = 'bool' .. py:class:: BlobField Store arbitrary binary data. .. py:class:: UUIDField Store ``UUID`` values. .. note:: Currently this field is only supported by :py:class:`PostgresqlDatabase`. ..
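note:: The :py:meth:`Field.db_value` and :py:meth:`Field.python_value` hooks documented above can be used to implement custom field types. A minimal sketch (the ``CSVField`` class is a hypothetical example, not part of peewee) which stores a list of strings as comma-separated text:

    .. code-block:: python

        class CSVField(TextField):
            def db_value(self, value):
                # Serialize the Python list to a comma-separated string.
                if value is not None:
                    return ','.join(value)

            def python_value(self, value):
                # Deserialize the stored string back into a list.
                if value is not None:
                    return value.split(',') if value else []

..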
py:class:: BareField Intended to be used only with SQLite. Since data-types are not enforced, you can declare fields without *any* data-type. It is also common for SQLite virtual tables to use meta-columns or untyped columns, so for those cases as well you may wish to use an untyped field. Accepts a special ``coerce`` parameter, a function that takes a value coming from the database and converts it into the appropriate Python type. .. note:: Currently this field is only supported by :py:class:`SqliteDatabase`. .. py:class:: ForeignKeyField(rel_model[, related_name=None[, on_delete=None[, on_update=None[, to_field=None[, ...]]]]]) Stores: relationship to another model :param rel_model: related :py:class:`Model` class or the string 'self' if declaring a self-referential foreign key :param string related_name: attribute to expose on related model :param string on_delete: on delete behavior, e.g. ``on_delete='CASCADE'``. :param string on_update: on update behavior. :param to_field: the field (or field name) on ``rel_model`` the foreign key references. Defaults to the primary key field for ``rel_model``. .. code-block:: python class User(Model): name = CharField() class Tweet(Model): user = ForeignKeyField(User, related_name='tweets') content = TextField() # "user" attribute >>> some_tweet.user # "tweets" related name attribute >>> for tweet in charlie.tweets: ... print tweet.content Some tweet Another tweet Yet another tweet .. note:: Foreign keys do not have a particular ``db_field`` as they will take their field type depending on the type of primary key on the model they are related to. .. note:: If you manually specify a ``to_field``, that field must be either a primary key or have a unique constraint. .. py:class:: CompositeKey(*fields) Specify a composite primary key for a model. Unlike the other fields, a composite key is defined in the model's ``Meta`` class after the fields have been defined. It takes as parameters the string names of the fields to use as the primary key: .. code-block:: python class BlogTagThrough(Model): blog = ForeignKeyField(Blog, related_name='tags') tag = ForeignKeyField(Tag, related_name='blogs') class Meta: primary_key = CompositeKey('blog', 'tag') .. _query-types: Query Types ----------- .. py:class:: Query() The parent class from which all other query classes are derived. While you will not deal with :py:class:`Query` directly in your code, it implements some methods that are common across all query types. .. py:method:: where(*expressions) :param expressions: a list of one or more expressions :rtype: a :py:class:`Query` instance Example selecting users where the username is equal to 'somebody': .. code-block:: python sq = SelectQuery(User).where(User.username == 'somebody') Example selecting tweets made by users who are either editors or administrators: .. code-block:: python sq = SelectQuery(Tweet).join(User).where( (User.is_editor == True) | (User.is_admin == True)) Example of deleting tweets by users who are no longer active: .. code-block:: python dq = DeleteQuery(Tweet).where( Tweet.user << User.select().where(User.active == False)) dq.execute() # perform the delete query .. note:: :py:meth:`~SelectQuery.where` calls are chainable. Multiple calls will be "AND"-ed together. .. py:method:: join(model, join_type=None, on=None) :param model: the model to join on. There must be a :py:class:`ForeignKeyField` between the current ``query context`` and the model passed in.
:param join_type: allows the type of ``JOIN`` used to be specified explicitly, one of ``JOIN.INNER``, ``JOIN.LEFT_OUTER``, ``JOIN.FULL``, ``JOIN.RIGHT_OUTER``, or ``JOIN.CROSS``. :param on: if multiple foreign keys exist between two models, this parameter is the ForeignKeyField to join on. :rtype: a :py:class:`Query` instance Generate a ``JOIN`` clause from the current ``query context`` to the ``model`` passed in, and establish ``model`` as the new ``query context``. Example selecting tweets and joining on user in order to restrict to only those tweets made by "admin" users: .. code-block:: python sq = SelectQuery(Tweet).join(User).where(User.is_admin == True) Example selecting users and joining on a particular foreign key field. See the :py:ref:`example app ` for a real-life usage: .. code-block:: python sq = SelectQuery(User).join(Relationship, on=Relationship.to_user) .. py:method:: switch(model) :param model: model to switch the ``query context`` to. :rtype: a clone of the query with a new query context Switches the ``query context`` to the given model. Raises an exception if the model has not been selected or joined on previously. Useful for performing multiple joins from a single table. The following example selects from blog and joins on both entry and user: .. code-block:: python sq = SelectQuery(Blog).join(Entry).switch(Blog).join(User) .. py:method:: alias(alias=None) :param str alias: A string to alias the result of this query :rtype: a Query instance Assign an alias to the given query, which can be used as part of a subquery. .. py:method:: sql() :rtype: a 2-tuple containing the appropriate SQL query and a tuple of parameters .. warning:: This method should be implemented by subclasses .. py:method:: execute() Execute the given query .. warning:: This method should be implemented by subclasses .. py:method:: scalar([as_tuple=False[, convert=False]]) :param bool as_tuple: return the row as a tuple or a single value :param bool convert: attempt to coerce the selected value to the appropriate data-type based on its associated Field type (assuming one exists). :rtype: the resulting row, either as a single value or tuple Provide a way to retrieve single values from select queries, for instance when performing an aggregation. .. code-block:: pycon >>> PageView.select(fn.Count(fn.Distinct(PageView.url))).scalar() 100 # <-- there are 100 distinct URLs in the pageview table This example illustrates the use of the `convert` argument. When using a SQLite database, datetimes are stored as strings. To select the max datetime, and have it *returned* as a datetime, we will specify ``convert=True``. .. code-block:: pycon >>> PageView.select(fn.MAX(PageView.timestamp)).scalar() '2016-04-20 13:37:00.1234' >>> PageView.select(fn.MAX(PageView.timestamp)).scalar(convert=True) datetime.datetime(2016, 4, 20, 13, 37, 0, 1234) .. py:class:: SelectQuery(model_class, *selection) By far the most complex of the query classes available in peewee. It supports all clauses commonly associated with select queries. Methods on the select query can be chained together. ``SelectQuery`` implements an :py:meth:`~SelectQuery.__iter__` method, allowing it to be iterated to return model instances. :param model: a :py:class:`Model` class to perform the query on :param selection: a list of models, fields, functions or expressions If no selection is provided, it will default to all the fields of the given model. Example selecting some user instances from the database. Only the ``id`` and ``username`` columns are selected.
When iterated, will return instances of the ``User`` model: .. code-block:: python sq = SelectQuery(User, User.id, User.username) for user in sq: print user.username Example selecting users and additionally the number of tweets made by the user. The ``User`` instances returned will have an additional attribute, 'count', that corresponds to the number of tweets made: .. code-block:: python sq = (SelectQuery( User, User, fn.Count(Tweet.id).alias('count')) .join(Tweet) .group_by(User)) .. py:method:: select(*selection) :param selection: a list of expressions, which can be model classes or fields. If left blank, will default to all the fields of the given model. :rtype: :py:class:`SelectQuery` .. note:: Usually the selection will be specified when the instance is created. This method simply exists for the case when you want to modify the SELECT clause independent of instantiating a query. .. code-block:: python query = User.select() query = query.select(User.username) .. py:method:: from_(*args) :param args: one or more expressions, for example :py:class:`Model` or :py:class:`SelectQuery` instance(s). If left blank, will default to the table of the given model. :rtype: :py:class:`SelectQuery` .. code-block:: python # rather than a join, select from both tables and join with where. query = User.select().from_(User, Blog).where(Blog.user == User.id) .. py:method:: group_by(*clauses) :param clauses: a list of expressions, which can be model classes or individual field instances :rtype: :py:class:`SelectQuery` Group by one or more columns. If a model class is provided, all the fields on that model class will be used. Example selecting users, joining on tweets, and grouping by the user so a count of tweets can be calculated for each user: .. code-block:: python sq = (User .select(User, fn.Count(Tweet.id).alias('count')) .join(Tweet) .group_by(User)) .. py:method:: having(*expressions) :param expressions: a list of one or more expressions :rtype: :py:class:`SelectQuery` Here is the above example selecting users and tweet counts, but restricting the results to those users who have created 100 or more tweets: .. code-block:: python sq = (User .select(User, fn.Count(Tweet.id).alias('count')) .join(Tweet) .group_by(User) .having(fn.Count(Tweet.id) > 100)) .. py:method:: order_by(*clauses[, extend=False]) :param clauses: a list of fields, calls to ``field.[asc|desc]()`` or one or more expressions. If called without any arguments, any pre-existing ``ORDER BY`` clause will be removed. :param extend: When called with ``extend=True``, Peewee will append the given clauses to any pre-existing ``ORDER BY`` rather than overwriting it. :rtype: :py:class:`SelectQuery` Example of ordering users by username: .. code-block:: python User.select().order_by(User.username) Example of selecting tweets and ordering them first by user, then newest first: .. code-block:: python query = (Tweet .select() .join(User) .order_by( User.username, Tweet.created_date.desc())) You can also use ``+`` and ``-`` prefixes to indicate ascending or descending order if you prefer: .. code-block:: python query = (Tweet .select() .join(User) .order_by( +User.username, -Tweet.created_date)) A more complex example ordering users by the number of tweets made (greatest to least), then ordered by username in the event of a tie: .. code-block:: python tweet_ct = fn.Count(Tweet.id) sq = (User .select(User, tweet_ct.alias('count')) .join(Tweet) .group_by(User) .order_by(tweet_ct.desc(), User.username)) Example of removing a pre-existing ``ORDER BY`` clause: ..
code-block:: python # Query will be ordered by username. users = User.select().order_by(User.username) # Query will be returned in whatever order the database chooses. unordered_users = users.order_by() .. py:method:: window(*windows) :param Window windows: One or more :py:class:`Window` instances. Add one or more window definitions to this query. .. code-block:: python window = Window(partition_by=[fn.date_trunc('day', PageView.timestamp)]) query = (PageView .select( PageView.url, PageView.timestamp, fn.Count(PageView.id).over(window=window)) .window(window) .order_by(PageView.timestamp)) .. py:method:: limit(num) :param int num: limit results to ``num`` rows .. py:method:: offset(num) :param int num: offset results by ``num`` rows .. py:method:: paginate(page_num, paginate_by=20) :param page_num: a 1-based page number to use for paginating results :param paginate_by: number of results to return per-page :rtype: :py:class:`SelectQuery` Shorthand for applying a ``LIMIT`` and ``OFFSET`` to the query. Page indices are **1-based**, so page 1 is the first page. .. code-block:: python User.select().order_by(User.username).paginate(3, 20) # get users 41-60 .. py:method:: distinct([is_distinct=True]) :param is_distinct: See notes. :rtype: :py:class:`SelectQuery` Indicates that this query should only return distinct rows. Results in a ``SELECT DISTINCT`` query. .. note:: The value for ``is_distinct`` can either be a boolean, in which case the query will (or will not) be ``DISTINCT``, or a list of one or more expressions, in which case a ``DISTINCT ON`` query is generated, e.g. ``.distinct([Model.col1, Model.col2])``. .. py:method:: for_update([for_update=True[, nowait=False]]) :rtype: :py:class:`SelectQuery` Indicate that this query should lock rows for update. If ``nowait`` is ``True`` then the database will raise an ``OperationalError`` if it cannot obtain the lock. .. py:method:: with_lock([lock_type='UPDATE']) :rtype: :py:class:`SelectQuery` Indicates that this query should lock rows. A more generic version of the :py:meth:`~SelectQuery.for_update` method. Example: .. code-block:: python # SELECT * FROM some_model FOR KEY SHARE NOWAIT; SomeModel.select().with_lock('KEY SHARE NOWAIT') .. note:: You do not need to include the word *FOR*. .. py:method:: naive() :rtype: :py:class:`SelectQuery` Flag this query indicating it should only attempt to reconstruct a single model instance for every row returned by the cursor. If multiple tables were queried, the columns returned are patched directly onto the single model instance. Generally this method is useful for speeding up the time needed to construct model instances given a database cursor. .. note:: this can provide a significant speed improvement when doing simple iteration over a large result set. .. py:method:: iterator() :rtype: ``iterable`` By default peewee will cache rows returned by the cursor. This is to prevent things like multiple iterations, slicing and indexing from triggering extra queries. When you are iterating over a large number of rows, however, this cache can take up a lot of memory. Using ``iterator()`` will save memory by not storing all the returned model instances. .. code-block:: python # iterate over large number of rows. for obj in Stats.select().iterator(): # do something. pass .. py:method:: tuples() :rtype: :py:class:`SelectQuery` Flag this query indicating it should simply return raw tuples from the cursor. This method is useful when you either do not want or do not need full model instances. ..
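note:: For large exports or scans over many rows, :py:meth:`~SelectQuery.tuples` combines naturally with :py:meth:`~SelectQuery.iterator`, since neither full model instances nor the row cache are needed. A minimal sketch (``write_row`` is a hypothetical helper):

    .. code-block:: python

        query = Stats.select(Stats.url, Stats.count).tuples().iterator()
        for url, count in query:
            # Each row is a plain tuple and nothing is cached in memory.
            write_row(url, count)

..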
py:method:: dicts() :rtype: :py:class:`SelectQuery` Flag this query indicating it should simply return dictionaries from the cursor. This method is useful when you either do not want or do not need full model instances. .. py:method:: aggregate_rows() :rtype: :py:class:`SelectQuery` This method provides one way to avoid the **N+1** query problem. Consider a webpage where you wish to display a list of users and all of their associated tweets. You could approach this problem by listing the users, then for each user executing a separate query to retrieve their tweets. This is the **N+1** behavior, because the number of queries varies depending on the number of users. Conventional wisdom is that it is preferable to execute fewer queries. Peewee provides several ways to avoid this problem. You can use the :py:func:`prefetch` helper, which uses ``IN`` clauses to retrieve the tweets for the listed users. Another method is to select both the user and the tweet data in a single query, then de-dupe the users, aggregating the tweets in the process. The raw column data might appear like this: .. code-block:: python # user.id, user.username, tweet.id, tweet.user_id, tweet.message [1, 'charlie', 1, 1, 'hello'], [1, 'charlie', 2, 1, 'goodbye'], [2, 'no-tweets', NULL, NULL, NULL], [3, 'huey', 3, 3, 'meow'], [3, 'huey', 4, 3, 'purr'], [3, 'huey', 5, 3, 'hiss'], We can infer from the ``JOIN`` clause that the user data will be duplicated, and therefore by de-duping the users, we can collect their tweets in one go and iterate over the users and tweets transparently. .. code-block:: python query = (User .select(User, Tweet) .join(Tweet, JOIN.LEFT_OUTER) .order_by(User.username, Tweet.id) .aggregate_rows()) # .aggregate_rows() tells peewee to de-dupe the rows. for user in query: print user.username for tweet in user.tweets: print ' ', tweet.message # Producing the following output: charlie hello goodbye huey meow purr hiss no-tweets .. warning:: Be sure that you specify an ``ORDER BY`` clause that ensures duplicated data will appear in consecutive rows. .. note:: You can specify arbitrarily complex joins, though for more complex queries it may be more efficient to use :py:func:`prefetch`. In short, try both and see what works best for your data-set. .. note:: For more information, see the :ref:`nplusone` document and the :ref:`aggregate-rows` sub-section. .. py:method:: annotate(related_model, aggregation=None) :param related_model: related :py:class:`Model` on which to perform aggregation, must be linked by :py:class:`ForeignKeyField`. :param aggregation: the type of aggregation to use, e.g. ``fn.Count(Tweet.id).alias('count')`` :rtype: :py:class:`SelectQuery` Annotate a query with an aggregation performed on a related model, for example, "get a list of users with the number of tweets for each": .. code-block:: python >>> User.select().annotate(Tweet) If ``aggregation`` is None, it will default to ``fn.Count(related_model.id).alias('count')`` but can be anything: .. code-block:: python >>> user_latest = User.select().annotate(Tweet, fn.Max(Tweet.created_date).alias('latest')) .. note:: If the ``ForeignKeyField`` is ``nullable``, then a ``LEFT OUTER`` join may need to be used:: query = (User .select() .join(Tweet, JOIN.LEFT_OUTER) .switch(User) # Switch query context back to `User`. .annotate(Tweet)) .. py:method:: aggregate(aggregation) :param aggregation: a function specifying what aggregation to perform, for example ``fn.Max(Tweet.created_date)``. 
Method to look at an aggregate of rows using a given function and return a scalar value, such as the count of all rows or the average value of a particular column. .. py:method:: count([clear_limit=False]) :param bool clear_limit: Remove any limit or offset clauses from the query before counting. :rtype: an integer representing the number of rows in the current query .. note:: If the query has a GROUP BY, DISTINCT, LIMIT, or OFFSET clause, then the :py:meth:`~SelectQuery.wrapped_count` method will be used instead. .. code-block:: pycon >>> sq = SelectQuery(Tweet) >>> sq.count() 45 # number of tweets >>> deleted_tweets = sq.where(Tweet.status == DELETED) >>> deleted_tweets.count() 3 # number of tweets that are marked as deleted .. py:method:: wrapped_count([clear_limit=False]) :param bool clear_limit: Remove any limit or offset clauses from the query before counting. :rtype: an integer representing the number of rows in the current query Wrap the count query in a subquery. Additional overhead but will give correct counts when performing ``DISTINCT`` queries or those with ``GROUP BY`` clauses. .. note:: :py:meth:`~SelectQuery.count` will automatically default to :py:meth:`~SelectQuery.wrapped_count` in the event the query is distinct or has a grouping. .. py:method:: exists() :rtype: boolean whether the current query will return any rows. Uses an optimized lookup, so use this rather than :py:meth:`~SelectQuery.get`. .. code-block:: python sq = User.select().where(User.active == True) if sq.where(User.username == username, User.active == True).exists(): authenticated = True .. py:method:: get() :rtype: :py:class:`Model` instance or raises ``DoesNotExist`` exception Get a single row from the database that matches the given query. Raises a ``.DoesNotExist`` if no rows are returned: .. code-block:: python active = User.select().where(User.active == True) try: user = active.where(User.username == username).get() except User.DoesNotExist: user = None This method is also exposed via the :py:class:`Model` API, in which case it accepts arguments that are translated to the where clause: .. code-block:: python user = User.get(User.active == True, User.username == username) .. py:method:: first([n=1]) :param int n: Return the first *n* query results after applying a limit of ``n`` records. :rtype: :py:class:`Model` instance, list or ``None`` if no results Fetch the first *n* rows from a query. Behind-the-scenes, a ``LIMIT n`` is applied. The results of the query are then cached on the query result wrapper so subsequent calls to :py:meth:`~SelectQuery.first` will not cause multiple queries. If only one row is requested (default behavior), then the return-type will be either a model instance or ``None``. If multiple rows are requested, the return type will either be a list of one to n model instances, or ``None`` if no results are found. .. py:method:: peek([n=1]) :param int n: Return the first *n* query results. :rtype: :py:class:`Model` instance, list or ``None`` if no results Fetch the first *n* rows from a query. No ``LIMIT`` is applied to the query, so :py:meth:`~SelectQuery.peek` has slightly different semantics from :py:meth:`~SelectQuery.first`, which ensures no more than *n* rows are requested. The ``peek`` method, on the other hand, retains the ability to fetch the entire result set without issuing additional queries. .. py:method:: execute() :rtype: :py:class:`QueryResultWrapper` Executes the query and returns a :py:class:`QueryResultWrapper` for iterating over the result set.
The results are managed internally by the query and whenever a clause is added that would possibly alter the result set, the query is marked for re-execution. .. py:method:: __iter__() Executes the query and returns populated model instances: .. code-block:: python for user in User.select().where(User.active == True): print user.username .. py:method:: __len__() Return the number of items in the result set of this query. If all you need is the count of items and do not intend to do anything with the results, call :py:meth:`~SelectQuery.count`. .. warning:: The ``SELECT`` query will be executed and the result set will be loaded. If you want to obtain the number of results without also loading the query, use :py:meth:`~SelectQuery.count`. .. py:method:: __getitem__(value) :param value: Either an index or a ``slice`` object. Return the model instance(s) at the requested indices. To get the first model, for instance: .. code-block:: python query = User.select().order_by(User.username) first_user = query[0] first_five = query[:5] .. py:method:: __or__(rhs) :param rhs: Either a :py:class:`SelectQuery` or a :py:class:`CompoundSelect` :rtype: :py:class:`CompoundSelect` Create a ``UNION`` query with the right-hand object. The result will contain all values from both the left and right queries. .. code-block:: python customers = Customer.select(Customer.city).where(Customer.state == 'KS') stores = Store.select(Store.city).where(Store.state == 'KS') # Get all cities in kansas where we have either a customer or a store. all_cities = (customers | stores).order_by(SQL('city')) .. note:: SQLite does not allow ``ORDER BY`` or ``LIMIT`` clauses on the components of a compound query, however SQLite does allow these clauses on the final, compound result. This applies to ``UNION (ALL)``, ``INTERSECT``, and ``EXCEPT``. .. py:method:: __and__(rhs) :param rhs: Either a :py:class:`SelectQuery` or a :py:class:`CompoundSelect` :rtype: :py:class:`CompoundSelect` Create an ``INTERSECT`` query. The result will contain values that are in both the left and right queries. .. code-block:: python customers = Customer.select(Customer.city).where(Customer.state == 'KS') stores = Store.select(Store.city).where(Store.state == 'KS') # Get all cities in kansas where we have both customers and stores. cities = (customers & stores).order_by(SQL('city')) .. py:method:: __sub__(rhs) :param rhs: Either a :py:class:`SelectQuery` or a :py:class:`CompoundSelect` :rtype: :py:class:`CompoundSelect` Create an ``EXCEPT`` query. The result will contain values that are in the left-hand query but not in the right-hand query. .. code-block:: python customers = Customer.select(Customer.city).where(Customer.state == 'KS') stores = Store.select(Store.city).where(Store.state == 'KS') # Get all cities in kansas where we have customers but no stores. cities = (customers - stores).order_by(SQL('city')) .. py:method:: __xor__(rhs) :param rhs: Either a :py:class:`SelectQuery` or a :py:class:`CompoundSelect` :rtype: :py:class:`CompoundSelect` Create a symmetric difference query. The result will contain values that are in either the left-hand query or the right-hand query, but not both. .. code-block:: python customers = Customer.select(Customer.city).where(Customer.state == 'KS') stores = Store.select(Store.city).where(Store.state == 'KS') # Get all cities in kansas where we have either customers with no # store, or a store with no customers. cities = (customers ^ stores).order_by(SQL('city')) ..
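note:: Because these operators return :py:class:`CompoundSelect` instances, and (as noted above) ``ORDER BY`` and ``LIMIT`` may be applied to the final compound result, the combined query can be ordered and limited as a whole. A minimal sketch building on the examples above:

    .. code-block:: python

        # First ten cities, alphabetically, with either a customer or a store.
        cities = (customers | stores).order_by(SQL('city')).limit(10)

..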
py:class:: UpdateQuery(model_class, **kwargs) :param model: :py:class:`Model` class on which to perform update :param kwargs: mapping of field/value pairs containing columns and values to update Example in which users are marked inactive if their registration expired: .. code-block:: python uq = UpdateQuery(User, active=False).where(User.registration_expired == True) uq.execute() # Perform the actual update Example of an atomic update: .. code-block:: python atomic_update = UpdateQuery(PageCount, count = PageCount.count + 1).where( PageCount.url == url) atomic_update.execute() # will perform the actual update .. py:method:: execute() :rtype: Number of rows updated Performs the query .. py:method:: returning(*returning) :param returning: A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. To clear any existing values, pass in ``None``. :rtype: a :py:class:`UpdateQuery` for the given :py:class:`Model`. Add a ``RETURNING`` clause to the query, which will cause the ``UPDATE`` to compute return values based on each row that was actually updated. When the query is executed, rather than returning the number of rows updated, an iterator will be returned that yields the updated objects. .. note:: Currently only :py:class:`PostgresqlDatabase` supports this feature. Example: .. code-block:: python # Disable all users whose registration expired, and return the user # objects that were updated. query = (User .update(active=False) .where(User.registration_expired == True) .returning(User)) # We can iterate over the users that were updated. for updated_user in query.execute(): send_activation_email(updated_user.email) For more information, check out :ref:`the RETURNING clause docs `. .. py:method:: tuples() :rtype: :py:class:`UpdateQuery` .. note:: This method should only be used in conjunction with a call to :py:meth:`~UpdateQuery.returning`. When the updated results are returned, they will be returned as row tuples. .. py:method:: dicts() :rtype: :py:class:`UpdateQuery` .. note:: This method should only be used in conjunction with a call to :py:meth:`~UpdateQuery.returning`. When the updated results are returned, they will be returned as dictionaries mapping column to value. .. py:method:: on_conflict([action=None]) Add a SQL ``ON CONFLICT`` clause with the specified action to the given ``UPDATE`` query. `Valid actions `_ are: * ROLLBACK * ABORT * FAIL * IGNORE * REPLACE Specifying ``None`` for the action will execute a normal ``UPDATE`` query. .. note:: This feature is only available on SQLite databases. .. py:class:: InsertQuery(model_class[, field_dict=None[, rows=None[, fields=None[, query=None[, validate_fields=False]]]]]) Creates an ``InsertQuery`` instance for the given model. :param dict field_dict: A mapping of either field or field-name to value. :param iterable rows: An iterable of dictionaries containing a mapping of field or field-name to value. :param list fields: A list of field objects to insert data into (only used in combination with the ``query`` parameter). :param query: A :py:class:`SelectQuery` to use as the source of data. :param bool validate_fields: Check that every column referenced in the insert query has a corresponding field on the model. If validation is enabled and then fails, a ``KeyError`` is raised. Basic example: .. 
code-block:: pycon >>> fields = {'username': 'admin', 'password': 'test', 'active': True} >>> iq = InsertQuery(User, fields) >>> iq.execute() # insert new row and return primary key 2L Example inserting multiple rows: .. code-block:: python users = [ {'username': 'charlie', 'active': True}, {'username': 'peewee', 'active': False}, {'username': 'huey', 'active': True}] iq = InsertQuery(User, rows=users) iq.execute() Example inserting using a query as the data source: .. code-block:: python query = (User .select(User.username, fn.COUNT(Tweet.id)) .join(Tweet, JOIN.LEFT_OUTER) .group_by(User.username)) iq = InsertQuery( UserTweetDenorm, fields=[UserTweetDenorm.username, UserTweetDenorm.num_tweets], query=query) iq.execute() .. py:method:: execute() :rtype: primary key of the new row Performs the query .. py:method:: upsert([upsert=True]) Perform an *INSERT OR REPLACE* query with SQLite. MySQL databases will issue a *REPLACE* query. Currently this feature is not supported for Postgres databases, but the 9.5 syntax will be added soon. .. note:: This feature is only available on SQLite and MySQL databases. .. py:method:: on_conflict([action=None]) Add a SQL ``ON CONFLICT`` clause with the specified action to the given ``INSERT`` query. Specifying ``REPLACE`` is equivalent to using the :py:meth:`~InsertQuery.upsert` method. `Valid actions `_ are: * ROLLBACK * ABORT * FAIL * IGNORE * REPLACE Specifying ``None`` for the action will execute a normal ``INSERT`` query. .. note:: This feature is only available on SQLite databases. .. py:method:: return_id_list([return_id_list=True]) By default, when doing bulk INSERTs, peewee will not return the list of generated primary keys. However, if the database supports returning primary keys via ``INSERT ... RETURNING``, this method instructs peewee to return the generated list of IDs. .. note:: Currently only PostgreSQL supports this behavior. While other databases support bulk inserts, they will simply return ``True`` instead. Example: .. code-block:: python usernames = [ {'username': username} for username in ['charlie', 'huey', 'mickey']] query = User.insert_many(usernames).return_id_list() user_ids = query.execute() print user_ids # prints something like [1, 2, 3] .. py:method:: returning(*returning) :param returning: A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. To clear any existing values, pass in ``None``. :rtype: a :py:class:`InsertQuery` for the given :py:class:`Model`. Add a ``RETURNING`` clause to the query, which will cause the ``INSERT`` to compute return values based on each row that was inserted. When the query is executed, rather than returning the primary key of the new row(s), an iterator will be returned that yields the inserted objects. .. note:: Currently only :py:class:`PostgresqlDatabase` supports this feature. Example: .. code-block:: python # Create some users, retrieving the list of IDs assigned to them. query = (User .insert_many(list_of_user_data) .returning(User)) # We can iterate over the users that were created. for new_user in query.execute(): # Do something with the new user's ID... do_something(new_user.id) For more information, check out :ref:`the RETURNING clause docs `. .. py:method:: tuples() :rtype: :py:class:`InsertQuery` .. note:: This method should only be used in conjunction with a call to :py:meth:`~InsertQuery.returning`. When the inserted results are returned, they will be returned as row tuples. .. 
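note:: The :py:meth:`~InsertQuery.returning` clause can be combined with :py:meth:`~InsertQuery.tuples` or :py:meth:`~InsertQuery.dicts` to control the shape of the returned rows. A minimal sketch, assuming a :py:class:`PostgresqlDatabase` (the only backend that supports ``RETURNING``):

    .. code-block:: python

        query = (User
                 .insert_many(list_of_user_data)
                 .returning(User)
                 .dicts())

        for row in query.execute():
            # Each inserted row comes back as a plain dictionary.
            print row['id'], row['username']

..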
py:method:: dicts() :rtype: :py:class:`InsertQuery` .. note:: This method should only be used in conjunction with a call to :py:meth:`~InsertQuery.returning`. When the inserted results are returned, they will be returned as dictionaries mapping column to value. .. py:class:: DeleteQuery(model_class) Creates a *DELETE* query for the given model. .. note:: DeleteQuery will *not* traverse foreign keys or ensure that constraints are obeyed, so use it with care. Example deleting users whose account is inactive: .. code-block:: python dq = DeleteQuery(User).where(User.active == False) .. py:method:: execute() :rtype: Number of rows deleted Performs the query .. py:method:: returning(*returning) :param returning: A list of model classes, field instances, functions or expressions. If no argument is provided, all columns for the given model will be selected. To clear any existing values, pass in ``None``. :rtype: a :py:class:`DeleteQuery` for the given :py:class:`Model`. Add a ``RETURNING`` clause to the query, which will cause the ``DELETE`` to compute return values based on each row that was removed from the database. When the query is executed, rather than returning the number of rows deleted, an iterator will be returned that yields the deleted objects. .. note:: Currently only :py:class:`PostgresqlDatabase` supports this feature. Example: .. code-block:: python # Delete expired user accounts, returning the rows that were removed. query = (User .delete() .where(User.account_expired == True) .returning(User)) # We can iterate over the user objects that were deleted. for deleted_user in query.execute(): # Do something with the deleted user. notify_account_deleted(deleted_user.email) For more information, check out :ref:`the RETURNING clause docs `. .. py:method:: tuples() :rtype: :py:class:`DeleteQuery` .. note:: This method should only be used in conjunction with a call to :py:meth:`~DeleteQuery.returning`. When the deleted results are returned, they will be returned as row tuples. .. py:method:: dicts() :rtype: :py:class:`DeleteQuery` .. note:: This method should only be used in conjunction with a call to :py:meth:`~DeleteQuery.returning`. When the deleted results are returned, they will be returned as dictionaries mapping column to value. .. py:class:: RawQuery(model_class, sql, *params) Allows execution of an arbitrary query and returns instances of the model via a :py:class:`QueryResultWrapper`. .. note:: Generally you will only need this for executing highly optimized SELECT queries. .. warning:: If you are executing a parameterized query, you must use the correct interpolation string for your database. SQLite uses ``'?'`` and most others use ``'%s'``. Example selecting users with a given username: .. code-block:: pycon >>> rq = RawQuery(User, 'SELECT * FROM users WHERE username = ?', 'admin') >>> for obj in rq.execute(): ... print obj .. py:method:: tuples() :rtype: :py:class:`RawQuery` Flag this query indicating it should simply return raw tuples from the cursor. This method is useful when you either do not want or do not need full model instances. .. py:method:: dicts() :rtype: :py:class:`RawQuery` Flag this query indicating it should simply return raw dicts from the cursor. This method is useful when you either do not want or do not need full model instances. .. py:method:: execute() :rtype: a :py:class:`QueryResultWrapper` for iterating over the result set. The results are instances of the given model. Performs the query ..
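note:: As with select queries, the :py:meth:`~RawQuery.tuples` and :py:meth:`~RawQuery.dicts` methods above avoid the overhead of constructing model instances. A minimal sketch:

    .. code-block:: python

        rq = RawQuery(User, 'SELECT id, username FROM users').dicts()
        for row in rq.execute():
            # Each row is a plain dictionary keyed by column name.
            print row['id'], row['username']

..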
py:class:: CompoundSelect(model_class, lhs, operator, rhs) Compound select query. :param model_class: The type of model to return, by default the model class of the ``lhs`` query. :param lhs: Left-hand query, either a :py:class:`SelectQuery` or a :py:class:`CompoundSelect`. :param operator: A string used to join the two queries, for example ``'UNION'``. :param rhs: Right-hand query, either a :py:class:`SelectQuery` or a :py:class:`CompoundSelect`. .. py:function:: prefetch(sq, *subqueries) :param sq: :py:class:`SelectQuery` instance :param subqueries: one or more :py:class:`SelectQuery` instances to prefetch for ``sq``. You can also pass models, but they will be converted into SelectQueries. If you wish to specify a particular model to join against, you can pass a 2-tuple of ``(query_or_model, join_model)``. :rtype: :py:class:`SelectQuery` with related instances pre-populated Pre-fetch the appropriate instances from the subqueries and apply them to their corresponding parent row in the outer query. This function will eagerly load the related instances specified in the subqueries. This is a technique used to avoid doing O(n) queries for n rows, instead performing O(k) queries for *k* subqueries. For example, consider you have a list of users and want to display all their tweets: .. code-block:: python # let's impose some small restrictions on our queries users = User.select().where(User.active == True) tweets = Tweet.select().where(Tweet.published == True) # this will perform 2 queries users_pf = prefetch(users, tweets) # now we can: for user in users_pf: print user.username for tweet in user.tweets_prefetch: print '- ', tweet.content You can prefetch an arbitrary number of items. For instance, suppose we have a photo site, User -> Photo -> (Comments, Tags). That is, users can post photos, and these photos can have tags and comments on them. If we wanted to fetch a list of users, all their photos, and all the comments and tags on the photos: .. code-block:: python users = User.select() published_photos = Photo.select().where(Photo.published == True) published_comments = Comment.select().where( (Comment.is_spam == False) & (Comment.num_flags < 3)) # note that we are just passing the Tag model -- it will be converted # to a query automatically users_pf = prefetch(users, published_photos, published_comments, Tag) # now we can iterate users, photos, and comments/tags for user in users_pf: for photo in user.photo_set_prefetch: for comment in photo.comment_set_prefetch: # ... for tag in photo.tag_set_prefetch: # ... .. note:: Subqueries must be related by foreign key and can be arbitrarily deep .. note:: For more information, see the :ref:`nplusone` document and the :ref:`prefetch` sub-section. .. warning:: :py:func:`prefetch` can use up lots of RAM when the result set is large, and will not warn you if you are doing something dangerous, so it is up to you to know when to use it. Additionally, because of the semantics of subquerying, there may be some cases when prefetch does not act as you expect (for instance, when applying a ``LIMIT`` to subqueries, but there may be others) -- please report anything you think is a bug to `github `_. Database and its subclasses --------------------------- ..
.. py:class:: Database(database[, threadlocals=True[, autocommit=True[, fields=None[, ops=None[, autorollback=False[, use_speedups=True[, **connect_kwargs]]]]]]])

    :param database: the name of the database (or filename if using sqlite)
    :param bool threadlocals: whether to store connections in a threadlocal
    :param bool autocommit: automatically commit every query executed by
        calling :py:meth:`~Database.execute`
    :param dict fields: a mapping of :py:attr:`~Field.db_field` to database
        column type, e.g. 'string' => 'varchar'
    :param dict ops: a mapping of operations understood by the querycompiler
        to expressions
    :param bool autorollback: automatically roll back when an exception occurs
        while executing a query.
    :param bool use_speedups: use the Cython speedups module to improve
        performance of some queries.
    :param connect_kwargs: any arbitrary parameters to pass to the database
        driver when connecting

    The ``connect_kwargs`` dictionary is used for vendor-specific parameters
    that will be passed back directly to your database driver, allowing you to
    specify the ``user``, ``host`` and ``password``, for instance. For more
    information and examples, see the :ref:`vendor-specific parameters
    document `.

    .. note::
        If your database name is not known when the class is declared, you can
        pass ``None`` in as the database name, which will mark the database as
        "deferred"; any attempt to connect while in this state will raise an
        exception. To initialize your database, call the
        :py:meth:`Database.init` method with the database name.

        For an in-depth discussion of run-time database configuration, see the
        :ref:`deferring_initialization` section.

    A high-level API for working with the supported database engines. The
    database class:

    * Manages the underlying database connection.
    * Executes queries.
    * Manages transactions and savepoints.
    * Creates and drops tables and indexes.
    * Introspects the database.

    .. py:attribute:: commit_select = False

        Whether to issue a commit after executing a select query. With some
        engines this can prevent implicit transactions from piling up.

    .. py:attribute:: compiler_class = QueryCompiler

        A class suitable for compiling queries

    .. py:attribute:: compound_operations = ['UNION', 'INTERSECT', 'EXCEPT']

        Supported compound query operations.

    .. py:attribute:: compound_select_parentheses = False

        Whether ``UNION`` (or other compound ``SELECT`` queries) allow
        parentheses around the queries.

    .. py:attribute:: distinct_on = False

        Whether the database supports ``DISTINCT ON`` statements.

    .. py:attribute:: drop_cascade = False

        Whether the database supports cascading drop table queries.

    .. py:attribute:: field_overrides = {}

        A mapping of field types to database column types, e.g.
        ``{'primary_key': 'SERIAL'}``

    .. py:attribute:: foreign_keys = True

        Whether the given backend enforces foreign key constraints.

    .. py:attribute:: for_update = False

        Whether the given backend supports selecting rows for update

    .. py:attribute:: for_update_nowait = False

        Whether the given backend supports selecting rows for update without
        waiting (``FOR UPDATE NOWAIT``)

    .. py:attribute:: insert_many = True

        Whether the database supports multiple ``VALUES`` clauses for
        ``INSERT`` queries.

    .. py:attribute:: insert_returning = False

        Whether the database supports returning the primary key for newly
        inserted rows.

    .. py:attribute:: interpolation = '?'

        The string used by the driver to interpolate query parameters

    .. py:attribute:: op_overrides = {}

        A mapping of operation codes to string operations, e.g.
        ``{OP.LIKE: 'LIKE BINARY'}``
    .. py:attribute:: quote_char = '"'

        The string used by the driver to quote names

    .. py:attribute:: reserved_tables = []

        Table names that are reserved by the backend -- if encountered in the
        application a warning will be issued.

    .. py:attribute:: returning_clause = False

        Whether the database supports ``RETURNING`` clauses for ``UPDATE``,
        ``INSERT`` and ``DELETE`` queries.

        .. note:: Currently only :py:class:`PostgresqlDatabase` supports this.

        See the following for more information:

        * :py:meth:`UpdateQuery.returning`
        * :py:meth:`InsertQuery.returning`
        * :py:meth:`DeleteQuery.returning`

    .. py:attribute:: savepoints = True

        Whether the given backend supports savepoints.

    .. py:attribute:: sequences = False

        Whether the given backend supports sequences

    .. py:attribute:: subquery_delete_same_table = True

        Whether the given backend supports deleting rows using a subquery that
        selects from the same table

    .. py:attribute:: window_functions = False

        Whether the given backend supports window functions.

    .. py:method:: init(database[, **connect_kwargs])

        This method is used to initialize a deferred database. For details on
        configuring your database at run-time, see the
        :ref:`deferring_initialization` section.

        :param database: the name of the database (or filename if using sqlite)
        :param connect_kwargs: any arbitrary parameters to pass to the database
            driver when connecting

    .. py:method:: connect()

        Establishes a connection to the database

        .. note::
            By default, connections will be stored on a threadlocal, ensuring
            connections are not shared across threads. To disable this
            behavior, initialize the database with ``threadlocals=False``.

    .. py:method:: close()

        Closes the connection to the database (if one is open)

        .. note::
            If you initialized with ``threadlocals=True``, only a connection
            local to the calling thread will be closed.

    .. py:method:: initialize_connection(conn)

        Perform additional initialization on a newly-opened connection. For
        example, if you are using SQLite you may want to enable foreign key
        constraint enforcement (off by default).

        Here is how you might use this hook to load a SQLite extension:

        .. code-block:: python

            class CustomSqliteDatabase(SqliteDatabase):
                def initialize_connection(self, conn):
                    # Extension loading must be enabled on the connection
                    # before load_extension() may be called.
                    conn.enable_load_extension(True)
                    conn.load_extension('fts5')

    .. py:method:: get_conn()

        :rtype: a connection to the database; a new connection is created if
            one does not exist

    .. py:method:: get_cursor()

        :rtype: a cursor for executing queries

    .. py:method:: last_insert_id(cursor, model)

        :param cursor: the database cursor used to perform the insert query
        :param model: the model class that was just created
        :rtype: the primary key of the most recently inserted instance

    .. py:method:: rows_affected(cursor)

        :rtype: number of rows affected by the last query

    .. py:method:: compiler()

        :rtype: an instance of :py:class:`QueryCompiler` using the field and
            op overrides specified.

    .. py:method:: execute(clause)

        :param Node clause: a :py:class:`Node` instance or subclass (e.g. a
            :py:class:`SelectQuery`).

        The clause will be compiled into SQL then sent to the
        :py:meth:`~Database.execute_sql` method.

    .. py:method:: execute_sql(sql[, params=None[, require_commit=True]])

        :param sql: a string sql query
        :param params: a list or tuple of parameters to interpolate

        .. note::
            You can configure whether queries will automatically commit by
            using the :py:meth:`~Database.set_autocommit` and
            :py:meth:`Database.get_autocommit` methods.
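        For example, a minimal sketch of executing raw SQL and reading rows
        from the cursor (the table and values here are illustrative):

        .. code-block:: python

            # Remember to use the parameter interpolation appropriate for
            # your driver ('?' for sqlite, '%s' for most others).
            cursor = db.execute_sql(
                'SELECT id, username FROM users WHERE active = ?', (True,))
            for row in cursor.fetchall():
                print row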
    .. py:method:: begin([lock_type=None])

        Initiate a new transaction. By default this is **not** implemented, as
        it is not part of DB-API 2.0; it is provided for API compatibility and
        to allow SQLite users to specify the isolation level when beginning
        transactions.

        For SQLite users, the valid isolation levels for ``lock_type`` are:

        * ``exclusive``
        * ``immediate``
        * ``deferred``

        Example usage:

        .. code-block:: python

            # Calling transaction() in turn calls begin('exclusive').
            with db.transaction('exclusive'):
                # No other readers or writers allowed while this is active.
                (Account
                 .update(balance=Account.balance - 100)
                 .where(Account.id == from_acct)
                 .execute())

                (Account
                 .update(balance=Account.balance + 100)
                 .where(Account.id == to_acct)
                 .execute())

    .. py:method:: commit()

        Call ``commit()`` on the active connection, committing the current
        transaction.

    .. py:method:: rollback()

        Call ``rollback()`` on the active connection, rolling back the current
        transaction.

    .. py:method:: set_autocommit(autocommit)

        :param autocommit: a boolean value indicating whether to turn on/off
            autocommit.

    .. py:method:: get_autocommit()

        :rtype: a boolean value indicating whether autocommit is enabled.

    .. py:method:: get_tables([schema=None])

        :rtype: a list of table names in the database.

    .. py:method:: get_indexes(table[, schema=None])

        :rtype: a list of :py:class:`IndexMetadata` instances, representing
            the indexes for the given table.

    .. py:method:: get_columns(table[, schema=None])

        :rtype: a list of :py:class:`ColumnMetadata` instances, representing
            the columns for the given table.

    .. py:method:: get_primary_keys(table[, schema=None])

        :rtype: a list containing the primary key column name(s) for the given
            table.

    .. py:method:: get_foreign_keys(table[, schema=None])

        :rtype: a list of :py:class:`ForeignKeyMetadata` instances,
            representing the foreign keys for the given table.

    .. py:method:: sequence_exists(sequence_name)

        :rtype: boolean

    .. py:method:: create_table(model_class[, safe=True])

        :param model_class: :py:class:`Model` class.
        :param bool safe: If `True`, the table will not be created if it
            already exists.

        .. warning::
            Unlike :py:meth:`Model.create_table`, this method does not create
            indexes or constraints. This method will only create the table
            itself. If you wish to create the table along with any indexes and
            constraints, use either :py:meth:`Model.create_table` or
            :py:meth:`Database.create_tables`.

    .. py:method:: create_index(model_class, fields[, unique=False])

        :param model_class: :py:class:`Model` table on which to create index
        :param fields: field(s) to create index on (either field instances or
            field names)
        :param unique: whether the index should enforce uniqueness
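        For example, a minimal sketch, assuming the ``User`` and ``Tweet``
        models used throughout these docs:

        .. code-block:: python

            # Create a unique index on a single column.
            db.create_index(User, [User.username], unique=True)

            # Create a multi-column index.
            db.create_index(Tweet, [Tweet.user, Tweet.created_date])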
    .. py:method:: create_foreign_key(model_class, field[, constraint=None])

        :param model_class: :py:class:`Model` table on which to create foreign
            key constraint
        :param field: :py:class:`Field` object
        :param str constraint: Name to give foreign key constraint.

        Manually create a foreign key constraint using an ``ALTER TABLE``
        query. This is primarily used when creating a circular foreign key
        dependency, for example:

        .. code-block:: python

            DeferredPost = DeferredRelation()

            class User(Model):
                username = CharField()
                favorite_post = ForeignKeyField(DeferredPost, null=True)

            class Post(Model):
                title = CharField()
                author = ForeignKeyField(User, related_name='posts')

            DeferredPost.set_model(Post)

            # Create tables. The foreign key from Post -> User will be created
            # automatically, but the foreign key from User -> Post must be
            # added manually.
            User.create_table()
            Post.create_table()

            # Manually add the foreign key constraint on `User`, since we
            # could not add it until we had created the `Post` table.
            db.create_foreign_key(User, User.favorite_post)

    .. py:method:: create_sequence(sequence_name)

        :param sequence_name: name of sequence to create

        .. note:: only works with database engines that support sequences

    .. py:method:: drop_table(model_class[, fail_silently=False[, cascade=False]])

        :param model_class: :py:class:`Model` table to drop
        :param bool fail_silently: if ``True``, the query will add an
            ``IF EXISTS`` clause
        :param bool cascade: drop table with ``CASCADE`` option.

    .. py:method:: drop_sequence(sequence_name)

        :param sequence_name: name of sequence to drop

        .. note:: only works with database engines that support sequences

    .. py:method:: create_tables(models[, safe=False])

        :param list models: A list of models.
        :param bool safe: Check first whether the table exists before
            attempting to create it.

        This method should be used for creating tables as it will resolve the
        model dependency graph and ensure the tables are created in the correct
        order. This method will also create any indexes and constraints defined
        on the models.

        Usage:

        .. code-block:: python

            db.create_tables([User, Tweet, Something], safe=True)

    .. py:method:: drop_tables(models[, safe=False[, cascade=False]])

        :param list models: A list of models.
        :param bool safe: Check the table exists before attempting to drop it.
        :param bool cascade: drop table with ``CASCADE`` option.

        This method should be used for dropping tables, as it will resolve the
        model dependency graph and ensure the tables are dropped in the correct
        order.

        Usage:

        .. code-block:: python

            db.drop_tables([User, Tweet, Something], safe=True)

    .. py:method:: atomic([transaction_type=None])

        Execute statements in either a transaction or a savepoint. The
        outer-most call to *atomic* will use a transaction, and any subsequent
        nested calls will use savepoints.

        :param str transaction_type: Specify isolation level. This parameter
            only has effect on **SQLite databases**, and furthermore, only
            affects the outer-most call to :py:meth:`~Database.atomic`. For
            more information, see :py:meth:`~Database.transaction`.

        ``atomic`` can be used as either a context manager or a decorator.

        .. note::
            For most use-cases, it makes the most sense to always use
            :py:meth:`~Database.atomic` when you wish to execute queries in a
            transaction. The benefit of using ``atomic`` is that you do not
            need to manually keep track of the transaction stack depth, as this
            will be managed for you.

        Context manager example code:

        .. code-block:: python

            with db.atomic() as txn:
                perform_some_operations()

                with db.atomic() as nested_txn:
                    do_other_things()
                    if something_bad_happened():
                        # Roll back these changes, but preserve the changes
                        # made in the outer block.
                        nested_txn.rollback()

        Decorator example code:

        .. code-block:: python

            @db.atomic()
            def create_user(username):
                # This function will execute in a transaction/savepoint.
                return User.create(username=username)
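        As noted above, SQLite users can pass a ``transaction_type`` to the
        outer-most call. A minimal sketch (``update_account_balances`` is an
        illustrative function):

        .. code-block:: python

            # SQLite only: open the outer-most transaction with an immediate
            # lock rather than the default deferred lock.
            with db.atomic('immediate'):
                update_account_balances()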
    .. py:method:: transaction([transaction_type=None])

        Execute statements in a transaction using either a context manager or
        decorator. If an error is raised inside the wrapped block, the
        transaction will be rolled back, otherwise statements are committed
        when exiting. Transactions can also be explicitly rolled back or
        committed within the transaction block by calling
        :py:meth:`~transaction.rollback` or :py:meth:`~transaction.commit`. If
        you manually commit or roll back, a new transaction will be started
        automatically.

        Nested blocks can be wrapped with ``transaction`` - the database will
        keep a stack and only commit when it reaches the end of the outermost
        function / block.

        :param str transaction_type: Specify isolation level, **SQLite only**.

        Context manager example code:

        .. code-block:: python

            # delete a blog instance and all its associated entries, but
            # do so within a transaction
            with database.transaction():
                blog.delete_instance(recursive=True)

            # Explicitly roll back a transaction.
            with database.transaction() as txn:
                do_some_stuff()
                if something_bad_happened():
                    # Roll back any changes made within this block.
                    txn.rollback()

        Decorator example code:

        .. code-block:: python

            @database.transaction()
            def transfer_money(from_acct, to_acct, amt):
                from_acct.charge(amt)
                to_acct.pay(amt)
                return amt

        SQLite users can specify the isolation level by specifying one of the
        following values for ``transaction_type``:

        * ``exclusive``
        * ``immediate``
        * ``deferred``

        Example usage:

        .. code-block:: python

            with db.transaction('exclusive'):
                # No other readers or writers allowed while this is active.
                (Account
                 .update(balance=Account.balance - 100)
                 .where(Account.id == from_acct)
                 .execute())

                (Account
                 .update(balance=Account.balance + 100)
                 .where(Account.id == to_acct)
                 .execute())

    .. py:method:: commit_on_success(func)

        .. note::
            Use :py:meth:`~Database.atomic` or :py:meth:`~Database.transaction`
            instead.

    .. py:method:: savepoint([sid=None])

        Execute statements in a savepoint using either a context manager or
        decorator. If an error is raised inside the wrapped block, the
        savepoint will be rolled back, otherwise statements are committed when
        exiting. Like :py:meth:`~Database.transaction`, a savepoint can also be
        explicitly rolled back or committed by calling
        :py:meth:`~savepoint.rollback` or :py:meth:`~savepoint.commit`. If you
        manually commit or roll back, a new savepoint **will not** be created.

        Savepoints can be thought of as nested transactions.

        :param str sid: An optional string identifier for the savepoint.

        Context manager example code:

        .. code-block:: python

            with db.transaction() as txn:
                do_some_stuff()
                with db.savepoint() as sp1:
                    do_more_things()

                with db.savepoint() as sp2:
                    even_more()
                    # Oops, something bad happened, roll back
                    # just the changes made in this block.
                    if something_bad_happened():
                        sp2.rollback()

    .. py:method:: execution_context([with_transaction=True])

        Create an :py:class:`ExecutionContext` context manager or decorator.
        Blocks wrapped with an *ExecutionContext* will run using their own
        connection. By default, the wrapped block will also run in a
        transaction, although this can be disabled by specifying
        ``with_transaction=False``.

        For more explanation of :py:class:`ExecutionContext`, see the
        :ref:`advanced_connection_management` section.

        .. warning::
            ExecutionContext is very new and has not been tested extensively.

    .. py:classmethod:: register_fields(fields)

        Register a mapping of field overrides for the database class. Used to
        register custom fields or override the defaults.

        :param dict fields: A mapping of :py:attr:`~Field.db_field` to column
            type

    .. py:classmethod:: register_ops(ops)

        Register a mapping of operations understood by the QueryCompiler to
        their SQL equivalent, e.g. ``{OP.EQ: '='}``. Used to extend the types
        of field comparisons.

        :param dict ops: A mapping of operation codes to their SQL equivalents
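        For example, a hypothetical sketch of registering a custom column
        type (the ``'uuid'`` field type here is illustrative, not a built-in
        ``db_field``):

        .. code-block:: python

            # Map a custom "uuid" db_field to Postgres' native UUID column.
            PostgresqlDatabase.register_fields({'uuid': 'UUID'})

..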
py:method:: extract_date(date_part, date_field) Return an expression suitable for extracting a date part from a date field. For instance, extract the year from a :py:class:`DateTimeField`. :param str date_part: The date part attribute to retrieve. Valid options are: "year", "month", "day", "hour", "minute" and "second". :param Field date_field: field instance storing a datetime, date or time. :rtype: an expression object. .. py:method:: truncate_date(date_part, date_field) Return an expression suitable for truncating a date / datetime to the given resolution. This can be used, for example, to group a collection of timestamps by day. :param str date_part: The date part to truncate to. Valid options are: "year", "month", "day", "hour", "minute" and "second". :param Field date_field: field instance storing a datetime, date or time. :rtype: an expression object. Example: .. code-block:: python # Get tweets from today. tweets = Tweet.select().where( db.truncate_date('day', Tweet.timestamp) == datetime.date.today()) .. py:class:: SqliteDatabase(Database) :py:class:`Database` subclass that works with the ``sqlite3`` driver (or ``pysqlite2``). In addition to the default database parameters, :py:class:`SqliteDatabase` also accepts a *journal_mode* parameter which will configure the journaling mode. .. note:: If you have both ``sqlite3`` and ``pysqlite2`` installed on your system, peewee will use whichever points at a newer version of SQLite. .. note:: SQLite is unique among the databases supported by Peewee in that it allows a high degree of customization by the host application. This means you can do things like write custom functions or aggregates *in Python* and then call them from your SQL queries. This feature, and many more, are available through the :py:class:`SqliteExtDatabase`, part of ``playhouse.sqlite_ext``. I *strongly* recommend you use :py:class:`SqliteExtDatabase` as it exposes many of the features that make SQLite so powerful. Custom parameters: :param str journal_mode: Journaling mode. :param list pragmas: List of 2-tuples containing ``PRAGMA`` statements to run against new connections. SQLite allows run-time configuration of a number of parameters through ``PRAGMA`` statements (`documentation `_). These statements are typically run against a new database connection. To run one or more ``PRAGMA`` statements against new connections, you can specify them as a list of 2-tuples containing the pragma name and value: .. code-block:: python db = SqliteDatabase('my_app.db', pragmas=( ('journal_mode', 'WAL'), ('cache_size', 10000), ('mmap_size', 1024 * 1024 * 32), )) .. py:attribute:: insert_many = True *if* using SQLite 3.7.11.0 or newer. .. py:class:: MySQLDatabase(Database) :py:class:`Database` subclass that works with either "MySQLdb" or "pymysql". .. py:attribute:: commit_select = True .. py:attribute:: compound_operations = ['UNION'] .. py:attribute:: for_update = True .. py:attribute:: subquery_delete_same_table = False .. py:class:: PostgresqlDatabase(Database) :py:class:`Database` subclass that works with the "psycopg2" driver .. py:attribute:: commit_select = True .. py:attribute:: compound_select_parentheses = True .. py:attribute:: distinct_on = True .. py:attribute:: for_update = True .. py:attribute:: for_update_nowait = True .. py:attribute:: insert_returning = True .. py:attribute:: returning_clause = True .. py:attribute:: sequences = True .. py:attribute:: window_functions = True .. 
py:attribute:: register_unicode = True

        Control whether the ``UNICODE`` and ``UNICODEARRAY`` psycopg2
        extensions are loaded automatically.

Transaction, Savepoint and ExecutionContext
-------------------------------------------

The easiest way to create transactions and savepoints is to use
:py:meth:`Database.atomic`. The :py:meth:`~Database.atomic` method will create
a transaction or savepoint depending on the level of nesting.

.. code-block:: python

    with db.atomic() as txn:
        # The outer-most call will be a transaction.
        with db.atomic() as sp:
            # Nested calls will be savepoints instead.
            execute_some_statements()

.. py:class:: transaction(database)

    Context manager that encapsulates a database transaction. Statements
    executed within the wrapped block will be committed at the end of the block
    unless an exception occurs, in which case any changes will be rolled back.

    .. warning::
        Transactions should not be nested as this could lead to unpredictable
        behavior in the event of an exception in a nested block. If you wish to
        use nested transactions, use the :py:meth:`~Database.atomic` method,
        which will create a transaction at the outer-most layer and use
        savepoints for nested blocks.

    .. note::
        In practice you should not create :py:class:`transaction` objects
        directly, but rather use the :py:meth:`Database.transaction` method.

    .. py:method:: commit()

        Manually commit any pending changes and begin a new transaction.

    .. py:method:: rollback()

        Manually roll back any pending changes and begin a new transaction.

.. py:class:: savepoint(database[, sid=None])

    Context manager that encapsulates a savepoint (nested transaction).
    Statements executed within the wrapped block will be committed at the end
    of the block unless an exception occurs, in which case any changes will be
    rolled back.

    .. warning::
        Savepoints must be created within a transaction. It is recommended that
        you use :py:meth:`~Database.atomic` instead of manually managing the
        transaction+savepoint stack.

    .. note::
        In practice you should not create :py:class:`savepoint` objects
        directly, but rather use the :py:meth:`Database.savepoint` method.

    .. py:method:: commit()

        Manually commit any pending changes. If the savepoint is manually
        committed and additional changes are made, they will be executed in the
        context of the outer block.

    .. py:method:: rollback()

        Manually roll back any pending changes. If the savepoint is manually
        rolled back and additional changes are made, they will be executed in
        the context of the outer block.

.. py:class:: ExecutionContext(database[, with_transaction=True])

    ExecutionContext provides a way to explicitly run statements in a dedicated
    connection. Typically a single database connection is maintained
    per-thread, but in some situations you may wish to explicitly force a new,
    separate connection. To accomplish this, you can create an
    :py:class:`ExecutionContext`. Statements executed in the wrapped block will
    be run in a transaction by default, though you can disable this by
    specifying ``with_transaction=False``.

    .. note::
        Rather than instantiating ``ExecutionContext`` directly, use
        :py:meth:`Database.execution_context`.

    Example code:

    .. code-block:: python

        # This will return the connection associated with the current thread.
        conn = db.get_conn()

        with db.execution_context():
            # This will be a new connection object. If you are using the
            # connection pool, it may be an unused connection from the pool.
            ctx_conn = db.get_conn()

            # This statement is executed using the new `ctx_conn`.
            User.create(username='huey')

        # At the end of the wrapped block, the connection will be closed and
        # the transaction, if one exists, will be committed.

        # This statement is executed using the regular `conn`.
        User.create(username='mickey')

.. py:class:: Using(database, models[, with_transaction=True])

    For the duration of the wrapped block, all queries against the given
    ``models`` will use the specified ``database``. Optionally these queries
    can be run outside a transaction by specifying ``with_transaction=False``.

    ``Using`` provides, in short, a way to run queries on a list of models
    using a manually specified database.

    :param database: a :py:class:`Database` instance.
    :param models: a list of :py:class:`Model` classes to use with the given
        database.
    :param with_transaction: Whether the wrapped block should be run in a
        transaction.

    .. warning::
        The :py:class:`Using` context manager does not do anything to manage
        the database connections, so it is the user's responsibility to make
        sure the database is closed explicitly.

    Example:

    .. code-block:: python

        master = PostgresqlDatabase('master')
        replica = PostgresqlDatabase('replica')

        class Data(Model):
            value = IntegerField()

            class Meta:
                database = master

        # All these queries use the "master" database,
        # since that is what our Data model was configured
        # to use.
        for i in range(10):
            Data.create(value=i)

        Data.insert_many({Data.value: j} for j in range(100, 200)).execute()

        # To use the read replica, we can use the Using context manager.
        with Using(replica, [Data]):
            # Query is executed against the read replica.
            n_data = Data.select().count()

            # Since we did not specify this model in the list passed
            # to Using, it will use whatever database it was defined with.
            other_count = SomeOtherModel.select().count()

Metadata Types
--------------

.. py:class:: IndexMetadata(name, sql, columns, unique, table)

    .. py:attribute:: name

        The name of the index.

    .. py:attribute:: sql

        The SQL query used to generate the index.

    .. py:attribute:: columns

        A list of columns that are covered by the index.

    .. py:attribute:: unique

        A boolean value indicating whether the index has a unique constraint.

    .. py:attribute:: table

        The name of the table containing this index.

.. py:class:: ColumnMetadata(name, data_type, null, primary_key, table)

    .. py:attribute:: name

        The name of the column.

    .. py:attribute:: data_type

        The data type of the column.

    .. py:attribute:: null

        A boolean value indicating whether ``NULL`` is permitted in this
        column.

    .. py:attribute:: primary_key

        A boolean value indicating whether this column is a primary key.

    .. py:attribute:: table

        The name of the table containing this column.

.. py:class:: ForeignKeyMetadata(column, dest_table, dest_column, table)

    .. py:attribute:: column

        The column containing the foreign key (the "source").

    .. py:attribute:: dest_table

        The table referenced by the foreign key.

    .. py:attribute:: dest_column

        The column referenced by the foreign key (on ``dest_table``).

    .. py:attribute:: table

        The name of the table containing this foreign key.

Misc
----

.. py:class:: fn()

    A helper class that will convert arbitrary function calls to SQL function
    calls.

    To express functions in peewee, use the :py:class:`fn` object. The way it
    works is anything to the right of the "dot" operator will be treated as a
    function. You can pass that function arbitrary parameters which can be
    other valid expressions.
For example: ============================================ ============================================ Peewee expression Equivalent SQL ============================================ ============================================ ``fn.Count(Tweet.id).alias('count')`` ``Count(t1."id") AS count`` ``fn.Lower(fn.Substr(User.username, 1, 1))`` ``Lower(Substr(t1."username", 1, 1))`` ``fn.Rand().alias('random')`` ``Rand() AS random`` ``fn.Stddev(Employee.salary).alias('sdv')`` ``Stddev(t1."salary") AS sdv`` ============================================ ============================================ .. py:method:: over([partition_by=None[, order_by=None[, start=None[, end=None[, window=None]]]]]) Basic support for SQL window functions. :param list partition_by: List of :py:class:`Node` instances to partition by. :param list order_by: List of :py:class:`Node` instances to use for ordering. :param start: The start of the *frame* of the window query. :param end: The end of the *frame* of the window query. :param Window window: A :py:class:`Window` instance to use for this aggregate. Examples: .. code-block:: python # Get the list of employees and the average salary for their dept. query = (Employee .select( Employee.name, Employee.department, Employee.salary, fn.Avg(Employee.salary).over( partition_by=[Employee.department])) .order_by(Employee.name)) # Rank employees by salary. query = (Employee .select( Employee.name, Employee.salary, fn.rank().over( order_by=[Employee.salary]))) # Get a list of page-views, along with avg pageviews for that day. query = (PageView .select( PageView.url, PageView.timestamp, fn.Count(PageView.id).over( partition_by=[fn.date_trunc( 'day', PageView.timestamp)])) .order_by(PageView.timestamp)) # Same as above but using a window class. window = Window(partition_by=[fn.date_trunc('day', PageView.timestamp)]) query = (PageView .select( PageView.url, PageView.timestamp, fn.Count(PageView.id).over(window=window)) .window(window) # Need to include our Window here. .order_by(PageView.timestamp)) # Get the list of times along with the last time. query = (Times .select( Times.time, fn.LAST_VALUE(Times.time).over( order_by=[Times.time], start=Window.preceding(), end=Window.following()))) .. py:class:: SQL(sql, *params) Add fragments of SQL to a peewee query. For example you might want to reference an aliased name. :param str sql: Arbitrary SQL string. :param params: Arbitrary query parameters. .. code-block:: python # Retrieve user table and "annotate" it with a count of tweets for each # user. query = (User .select(User, fn.Count(Tweet.id).alias('ct')) .join(Tweet, JOIN.LEFT_OUTER) .group_by(User)) # Sort the users by number of tweets. query = query.order_by(SQL('ct DESC')) .. py:class:: Window([partition_by=None[, order_by=None[, start=None[, end=None]]]]) Create a ``WINDOW`` definition. :param list partition_by: List of :py:class:`Node` instances to partition by. :param list order_by: List of :py:class:`Node` instances to use for ordering. :param start: The start of the *frame* of the window query. :param end: The end of the *frame* of the window query. Examples: .. code-block:: python # Get the list of employees and the average salary for their dept. window = Window(partition_by=[Employee.department]).alias('dept_w') query = (Employee .select( Employee.name, Employee.department, Employee.salary, fn.Avg(Employee.salary).over(window)) .window(window) .order_by(Employee.name)) .. 
py:staticmethod:: preceding([value=None]) Return an expression appropriate for passing in to the ``start`` or ``end`` clause of a :py:class:`Window` object. If ``value`` is not provided, then it will be ``UNBOUNDED PRECEDING``. .. py:staticmethod:: following([value=None]) Return an expression appropriate for passing in to the ``start`` or ``end`` clause of a :py:class:`Window` object. If ``value`` is not provided, then it will be ``UNBOUNDED FOLLOWING``. .. py:class:: DeferredRelation() Used to reference a not-yet-created model class. Stands in as a placeholder for the related model of a foreign key. Useful for circular references. .. code-block:: python DeferredPost = DeferredRelation() class User(Model): username = CharField() # `Post` is not available yet, it is declared below. favorite_post = ForeignKeyField(DeferredPost, null=True) class Post(Model): # `Post` comes after `User` since it refers to `User`. user = ForeignKeyField(User) title = CharField() DeferredPost.set_model(Post) # Post is now available. .. py:method:: set_model(model) Replace the placeholder with the correct model class. .. py:class:: Proxy() Proxy class useful for situations when you wish to defer the initialization of an object. For instance, you want to define your models but you do not know what database engine you will be using until runtime. Example: .. code-block:: python database_proxy = Proxy() # Create a proxy for our db. class BaseModel(Model): class Meta: database = database_proxy # Use proxy for our DB. class User(BaseModel): username = CharField() # Based on configuration, use a different database. if app.config['DEBUG']: database = SqliteDatabase('local.db') elif app.config['TESTING']: database = SqliteDatabase(':memory:') else: database = PostgresqlDatabase('mega_production_db') # Configure our proxy to use the db we specified in config. database_proxy.initialize(database) .. py:method:: initialize(obj) :param obj: The object to proxy to. Once initialized, the attributes and methods on ``obj`` can be accessed directly via the :py:class:`Proxy` instance. .. py:class:: Node() The :py:class:`Node` class is the parent class for all composable parts of a query, and forms the basis of peewee's expression API. The following classes extend :py:class:`Node`: * :py:class:`SelectQuery`, :py:class:`UpdateQuery`, :py:class:`InsertQuery`, :py:class:`DeleteQuery`, and :py:class:`RawQuery`. * :py:class:`Field` * :py:class:`Func` (and :py:func:`fn`) * :py:class:`SQL` * :py:class:`Expression` * :py:class:`Param` * :py:class:`Window` * :py:class:`Clause` * :py:class:`Entity` * :py:class:`Check` Overridden operators: * Bitwise and- and or- (``&`` and ``|``): combine multiple nodes using the given conjunction. * ``+``, ``-``, ``*``, ``/`` and ``^`` (add, subtract, multiply, divide and exclusive-or). * ``==``, ``!=``, ``<``, ``<=``, ``>``, ``>=``: create a binary expression using the given comparator. * ``<<``: create an *IN* expression. * ``>>``: create an *IS* expression. * ``%`` and ``**``: *LIKE* and *ILIKE*. .. py:method:: contains(rhs) Create a binary expression using case-insensitive string search. .. py:method:: startswith(rhs) Create a binary expression using case-insensitive prefix search. .. py:method:: endswith(rhs) Create a binary expression using case-insensitive suffix search. .. py:method:: between(low, high) Create an expression that will match values between ``low`` and ``high``. .. py:method:: regexp(expression) Match based on regular expression. .. 
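For example, a minimal sketch showing a few of the helpers and overloaded
operators described above, using the ``User`` and ``Tweet`` models from
earlier examples (``yesterday`` and ``today`` stand in for ``datetime``
values):

.. code-block:: python

    # Case-insensitive prefix search.
    a_users = User.select().where(User.username.startswith('a'))

    # IN expression using the << operator.
    staff = User.select().where(User.username << ['huey', 'mickey'])

    # Match a range of values.
    recent = Tweet.select().where(
        Tweet.created_date.between(yesterday, today))

..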
py:method:: concat(rhs)

        Concatenate the current node with the provided ``rhs``.

        .. warning::
            In order for this method to work with MySQL, the MySQL session must
            be set to use ``PIPES_AS_CONCAT``. To reliably concatenate strings
            with MySQL, use ``fn.CONCAT(s1, s2...)`` instead.

    .. py:method:: is_null([is_null=True])

        Create an expression testing whether the ``Node`` is (or is not)
        ``NULL``.

        .. code-block:: python

            # Find all categories whose parent column is NULL.
            root_nodes = Category.select().where(Category.parent.is_null())

            # Find all categories whose parent is NOT NULL.
            child_nodes = Category.select().where(Category.parent.is_null(False))

        To simplify things, peewee will generate the correct SQL for equality
        and inequality. The :py:meth:`~Node.is_null` method is provided simply
        for readability.

        .. code-block:: python

            # Equivalent to the previous queries -- peewee will translate these
            # into `IS NULL` and `IS NOT NULL`:
            root_nodes = Category.select().where(Category.parent == None)
            child_nodes = Category.select().where(Category.parent != None)

    .. py:method:: __invert__()

        Negate the node. This translates roughly into ``NOT (<node>)``.

    .. py:method:: alias([name=None])

        Apply an alias to the given node. This translates into
        ``<node> AS <name>``.

    .. py:method:: asc()

        Apply ascending ordering to the given node. This translates into
        ``<node> ASC``.

    .. py:method:: desc()

        Apply descending ordering to the given node. This translates into
        ``<node> DESC``.

    .. py:method:: bind_to(model_class)

        Bind the results of an expression to a specific model type. Useful when
        adding expressions to a select, where the result of the expression
        should be placed on a particular joined instance.

    .. py:classmethod:: extend([name=None[, clone=False]])

        Decorator for adding the decorated function as a new method on
        :py:class:`Node` and its subclasses. Useful for adding
        implementation-specific features to all node types.

        :param str name: Method name. If not provided the name of the wrapped
            function will be used.
        :param bool clone: Whether this method should return a clone. This is
            generally true when the method mutates the internal state of the
            node.

        Example:

        .. code-block:: python

            # Add a `cast()` method to all nodes using the '::' operator.
            PostgresqlDatabase.register_ops({'::': '::'})

            @Node.extend()
            def cast(self, as_type):
                return Expression(self, '::', SQL(as_type))

            # Let's pretend we want to find all data points whose numbers
            # are palindromes. Note that we can use the new *cast* method
            # on both fields and with the `fn` helper:
            reverse_val = fn.REVERSE(DataPoint.value.cast('str')).cast('int')

            query = (DataPoint
                     .select()
                     .where(DataPoint.value == reverse_val))

        .. note::
            To remove an extended method, simply call ``delattr`` on the class
            the method was originally added to.

peewee-2.10.2/docs/peewee/contributing.rst

.. _contributing:

Contributing
============

In order to continually improve, Peewee needs the help of developers like you.
Whether it's contributing patches, submitting bug reports, or just asking and
answering questions, you are helping to make Peewee a better library.

In this document I'll describe some of the ways you can help.

Patches
-------

Do you have an idea for a new feature, or is there a clunky API you'd like to
improve? Before coding it up and submitting a pull-request, `open a new issue
`_ on GitHub describing your proposed changes. This doesn't have to be
anything formal, just a description of what you'd like to do and why.
When you're ready, you can submit a pull-request with your changes. Successful
patches will have the following:

* Unit tests.
* Documentation, both prose form and general :ref:`API documentation `.
* Code that conforms stylistically with the rest of the Peewee codebase.

Bugs
----

If you've found a bug, please check to see if it has `already been reported
`_, and if not `create an issue on GitHub `_. The more information
you include, the more quickly the bug will get fixed, so please try to include
the following:

* Traceback and the error message (please `format your code `_!)
* Relevant portions of your code or code to reproduce the error
* Peewee version: ``python -c "from peewee import __version__; print(__version__)"``
* Which database you're using

If you have found a bug in the code and submit a failing test-case, then
hats-off to you, you are a hero!

Questions
---------

If you have questions about how to do something with peewee, then I recommend
either:

* Ask on StackOverflow. I check SO just about every day for new peewee
  questions and try to answer them. This has the benefit also of preserving
  the question and answer for other people to find.
* Ask in IRC, ``#peewee`` on freenode. I always answer questions, but it may
  take a bit to get to them.
* Ask on the mailing list, https://groups.google.com/group/peewee-orm

peewee-2.10.2/docs/peewee/database.rst

.. _databases:

Managing your Database
======================

This document describes how to perform typical database-related tasks with
peewee. Throughout this document we will use the following example models:

.. code-block:: python

    from peewee import *

    class User(Model):
        username = CharField(unique=True)

    class Tweet(Model):
        user = ForeignKeyField(User, related_name='tweets')
        message = TextField()
        created_date = DateTimeField(default=datetime.datetime.now)
        is_published = BooleanField(default=True)

Creating a database connection and tables
-----------------------------------------

While it is not necessary to explicitly connect to the database before using
it, **managing connections explicitly is a good practice**. This way if the
connection fails, the exception can be caught during the *connect* step,
rather than some arbitrary time later when a query is executed. Furthermore,
if you're using a :ref:`connection pool `, it is actually necessary to call
:py:meth:`~Database.connect` and :py:meth:`~Database.close` to ensure
connections are recycled correctly.

For web-apps you will typically open a connection when a request is started
and close it when the response is delivered:

.. code-block:: python

    database = SqliteDatabase('my_app.db')

    def before_request_handler():
        database.connect()

    def after_request_handler():
        database.close()

.. note::
    For examples of configuring connection hooks for several popular web
    frameworks, see the :ref:`adding_request_hooks` section.

.. note::
    For advanced connection management techniques, see the :ref:`advanced
    connection management ` section.

To use this database with your models, set the ``database`` attribute on an
inner :ref:`Meta ` class:

.. code-block:: python

    class MyModel(Model):
        some_field = CharField()

        class Meta:
            database = database

**Best practice:** define a base model class that points at the database
object you wish to use, and then all your models will extend it:

..
code-block:: python database = SqliteDatabase('my_app.db') class BaseModel(Model): class Meta: database = database class User(BaseModel): username = CharField() class Tweet(BaseModel): user = ForeignKeyField(User, related_name='tweets') message = TextField() # etc, etc .. note:: Remember to specify a database on your model classes, otherwise peewee will fall back to a default sqlite database named "peewee.db". .. _vendor-specific-parameters: Vendor-specific Parameters ^^^^^^^^^^^^^^^^^^^^^^^^^^ Some database drivers accept special parameters when being initialized. Rather than try to accommodate all these parameters, Peewee will pass back unrecognized parameters directly to the database driver. For instance, with Postgresql it is common to need to specify the ``host``, ``user`` and ``password`` when creating your connection. These are not standard Peewee :py:class:`Database` parameters, so they will be passed directly back to ``psycopg2`` when creating connections: .. code-block:: python db = PostgresqlDatabase( 'database_name', # Required by Peewee. user='postgres', # Will be passed directly to psycopg2. password='secret', # Ditto. host='db.mysite.com', # Ditto. ) As another example, the ``pymysql`` driver accepts a ``charset`` parameter which is not a standard Peewee :py:class:`Database` parameter. To set this value, simply pass in ``charset`` alongside your other values: .. code-block:: python db = MySQLDatabase('database_name', user='www-data', charset='utf8mb4') Consult your database driver's documentation for the available parameters: * Postgres: `psycopg2 `_ * MySQL: `MySQLdb `_ * MySQL: `pymysql `_ * SQLite: `sqlite3 `_ .. _using_postgresql: Using Postgresql ---------------- To connect to a Postgresql database, we will use :py:class:`PostgresqlDatabase`. The first parameter is always the name of the database, and after that you can specify arbitrary `psycopg2 parameters `_. .. code-block:: python psql_db = PostgresqlDatabase('my_database', user='postgres') class BaseModel(Model): """A base model that will use our Postgresql database""" class Meta: database = psql_db class User(BaseModel): username = CharField() The :ref:`playhouse` contains a :ref:`Postgresql extension module ` which provides many postgres-specific features such as: * :ref:`Arrays ` * :ref:`HStore ` * :ref:`JSON ` * :ref:`Server-side cursors ` * And more! If you would like to use these awesome features, use the :py:class:`PostgresqlExtDatabase` from the ``playhouse.postgres_ext`` module: .. code-block:: python from playhouse.postgres_ext import PostgresqlExtDatabase psql_db = PostgresqlExtDatabase('my_database', user='postgres') .. _using_sqlite: Using SQLite ------------ To connect to a SQLite database, we will use :py:class:`SqliteDatabase`. The first parameter is the filename containing the database, or the string *:memory:* to create an in-memory database. After the database filename, you can specify arbitrary `sqlite3 parameters `_. .. code-block:: python sqlite_db = SqliteDatabase('my_app.db') class BaseModel(Model): """A base model that will use our Sqlite database.""" class Meta: database = sqlite_db class User(BaseModel): username = CharField() # etc, etc The :ref:`playhouse` contains a :ref:`SQLite extension module ` which provides many SQLite-specific features such as: * :ref:`Full-text search ` with :ref:`BM25 ranking `. * Support for custom functions, aggregates and collations * Advanced transaction support * And more! 
If you would like to use these awesome features, use the
:py:class:`SqliteExtDatabase` from the ``playhouse.sqlite_ext`` module:

.. code-block:: python

    from playhouse.sqlite_ext import SqliteExtDatabase

    sqlite_db = SqliteExtDatabase('my_app.db', journal_mode='WAL')

.. _sqlite-pragma:

PRAGMA statements
^^^^^^^^^^^^^^^^^

.. versionadded:: 2.6.4

SQLite allows run-time configuration of a number of parameters through
``PRAGMA`` statements (`documentation `_). These statements are typically
run against a new database connection. To run one or more ``PRAGMA``
statements against new connections, you can specify them as a list or tuple of
2-tuples containing the pragma name and value:

.. code-block:: python

    db = SqliteDatabase('my_app.db', pragmas=(
        ('journal_mode', 'WAL'),
        ('cache_size', 10000),
        ('mmap_size', 1024 * 1024 * 32),
    ))

SQLite and Autocommit
^^^^^^^^^^^^^^^^^^^^^

.. versionchanged:: 2.4.5

In version 2.4.5, the default isolation level for SQLite databases is
``None``, which equates to *autocommit*. The reason for this change has to do
with some idiosyncrasies of ``pysqlite`` (or the standard library
``sqlite3``).

If you are using your database in autocommit mode (the default) then you
should not need to make any changes to your code. If you are using
``autocommit=False``, you will need to explicitly call
:py:meth:`~Database.begin` before executing queries.

.. note::
    This does not apply to code executed within
    :py:meth:`~Database.transaction` or :py:meth:`~Database.atomic`.

.. warning::
    If you are using peewee with autocommit disabled, you must explicitly call
    :py:meth:`~Database.begin`, otherwise statements **will** be executed in
    autocommit mode.

Example code:

.. code-block:: python

    # Define a database with autocommit turned off.
    db = SqliteDatabase('my_app.db', autocommit=False)

    # You must call begin()
    db.begin()
    User.create(username='charlie')
    db.commit()

    # If using a transaction, then no changes are necessary.
    with db.transaction():
        User.create(username='huey')

    # If using a function decorated by transaction, no changes are necessary.
    @db.transaction()
    def create_user(username):
        User.create(username=username)

APSW, an Advanced SQLite Driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Peewee also comes with an alternate SQLite database that uses :ref:`apsw`, an
advanced Python SQLite driver. More information on APSW can be obtained on the
`APSW project website `_. APSW provides special features like:

* Virtual tables, virtual file-systems, Blob I/O, backups and file control.
* Connections can be shared across threads without any additional locking.
* Transactions are managed explicitly by your code.
* Unicode is handled *correctly*.
* APSW is faster than the standard library sqlite3 module.
* Exposes pretty much the entire SQLite C API to your Python app.

If you would like to use APSW, use the :py:class:`APSWDatabase` from the
`apsw_ext` module:

.. code-block:: python

    from playhouse.apsw_ext import APSWDatabase

    apsw_db = APSWDatabase('my_app.db')

.. _using_berkeleydb:

Using BerkeleyDB
----------------

The :ref:`playhouse ` contains a special extension module for using a
:ref:`BerkeleyDB database `. BerkeleyDB can be compiled with a
SQLite-compatible API, then the python SQLite driver can be compiled to use
the Berkeley version of SQLite.

You can find up-to-date `step by step instructions `_ on my blog for
compiling the BerkeleyDB + SQLite library, then building a statically-linked
`pysqlite `_ that uses the custom sqlite library.
To connect to a BerkeleyDB database, we will use :py:class:`BerkeleyDatabase`.
Like :py:class:`SqliteDatabase`, the first parameter is the filename
containing the database or the string *:memory:* to create an in-memory
database.

.. code-block:: python

    from playhouse.berkeleydb import BerkeleyDatabase

    berkeley_db = BerkeleyDatabase('my_app.db')

    class BaseModel(Model):
        """A base model that will use our BDB database."""
        class Meta:
            database = berkeley_db

    class User(BaseModel):
        username = CharField()
        # etc, etc

.. _using_mysql:

Using MySQL
-----------

To connect to a MySQL database, we will use :py:class:`MySQLDatabase`. After
the database name, you can specify arbitrary connection parameters that will
be passed back to the driver (either MySQLdb or pymysql).

.. code-block:: python

    mysql_db = MySQLDatabase('my_database')

    class BaseModel(Model):
        """A base model that will use our MySQL database"""
        class Meta:
            database = mysql_db

    class User(BaseModel):
        username = CharField()
        # etc, etc

Error 2006: MySQL server has gone away
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This particular error can occur when MySQL kills an idle database connection.
This typically happens with web apps that do not explicitly manage database
connections. What happens is your application starts, a connection is opened
to handle the first query that executes, and, since that connection is never
closed, it remains open, waiting for more queries.

To fix this, make sure you are explicitly connecting to the database when you
need to execute queries, and close your connection when you are done. In a
web-application, this typically means you will open a connection when a
request comes in, and close the connection when you return a response. See the
:ref:`adding_request_hooks` for more information.

If you would like to automatically reconnect and retry queries that fail due
to an ``OperationalError``, peewee provides a :py:class:`Database` mixin
:py:class:`RetryOperationalError` that will handle reconnecting and retrying
the query automatically. For more information see :ref:`automatic-reconnect`.

Connecting using a Database URL
-------------------------------

The playhouse module :ref:`db_url` provides a helper :py:func:`connect`
function that accepts a database URL and returns a :py:class:`Database`
instance.

Example code:

.. code-block:: python

    import os

    from peewee import *
    from playhouse.db_url import connect

    # Connect to the database URL defined in the environment, falling
    # back to a local Sqlite database if no database URL is specified.
    db = connect(os.environ.get('DATABASE') or 'sqlite:///default.db')

    class BaseModel(Model):
        class Meta:
            database = db

Example database URLs:

* *sqlite:///my_database.db* will create a :py:class:`SqliteDatabase` instance
  for the file ``my_database.db`` in the current directory.
* *sqlite:///:memory:* will create an in-memory :py:class:`SqliteDatabase`
  instance.
* *postgresql://postgres:my_password@localhost:5432/my_database* will create a
  :py:class:`PostgresqlDatabase` instance. A username and password are
  provided, as well as the host and port to connect to.
* *mysql://user:passwd@ip:port/my_db* will create a :py:class:`MySQLDatabase`
  instance for the MySQL database *my_db* on the given host and port.
* :ref:`More examples in the db_url documentation `.

Multi-threaded applications
---------------------------

peewee stores the connection state in a thread local, so each thread gets its
own separate connection.
If you prefer to manage the connections yourself, you can disable this behavior by initializing your database with ``threadlocals=False``. .. _deferring_initialization: Run-time database configuration ------------------------------- Sometimes the database connection settings are not known until run-time, when these values may be loaded from a configuration file or the environment. In these cases, you can *defer* the initialization of the database by specifying ``None`` as the database_name. .. code-block:: python database = SqliteDatabase(None) # Un-initialized database. class SomeModel(Model): class Meta: database = database If you try to connect or issue any queries while your database is uninitialized you will get an exception: .. code-block:: python >>> database.connect() Exception: Error, database not properly initialized before opening connection To initialize your database, call the :py:meth:`~Database.init` method with the database name and any additional keyword arguments: .. code-block:: python database_name = raw_input('What is the name of the db? ') database.init(database_name, host='localhost', user='postgres') For even more control over initializing your database, see the next section, :ref:`dynamic_db`. .. _dynamic_db: Dynamically defining a database ------------------------------- For even more control over how your database is defined/initialized, you can use the :py:class:`Proxy` helper. :py:class:`Proxy` objects act as a placeholder, and then at run-time you can swap it out for a different object. In the example below, we will swap out the database depending on how the app is configured: .. code-block:: python database_proxy = Proxy() # Create a proxy for our db. class BaseModel(Model): class Meta: database = database_proxy # Use proxy for our DB. class User(BaseModel): username = CharField() # Based on configuration, use a different database. if app.config['DEBUG']: database = SqliteDatabase('local.db') elif app.config['TESTING']: database = SqliteDatabase(':memory:') else: database = PostgresqlDatabase('mega_production_db') # Configure our proxy to use the db we specified in config. database_proxy.initialize(database) .. warning:: Only use this method if your actual database driver varies at run-time. For instance, if your tests and local dev environment run on SQLite, but your deployed app uses PostgreSQL, you can use the :py:class:`Proxy` to swap out engines at run-time. However, if it is only connection values that vary at run-time, such as the path to the database file, or the database host, you should instead use :py:meth:`Database.init`. See :ref:`deferring_initialization` for more details. .. _connection_pooling: Connection Pooling ------------------ Connection pooling is provided by the :ref:`pool module `, included in the :ref:`playhouse` extensions library. The pool supports: * Timeout after which connections will be recycled. * Upper bound on the number of open connections. The connection pool module comes with support for Postgres and MySQL (though adding support for other databases is trivial). .. 
code-block:: python from playhouse.pool import PooledPostgresqlExtDatabase db = PooledPostgresqlExtDatabase( 'my_database', max_connections=8, stale_timeout=300, user='postgres') class BaseModel(Model): class Meta: database = db The following pooled database classes are available: * :py:class:`PooledPostgresqlDatabase` * :py:class:`PooledPostgresqlExtDatabase` * :py:class:`PooledMySQLDatabase` * :py:class:`PooledSqliteDatabase` * :py:class:`PooledSqliteExtDatabase` For an in-depth discussion of peewee's connection pool, see the :ref:`pool` section of the :ref:`playhouse` documentation. .. _using_read_slaves: Read Slaves ----------- Peewee can automatically run *SELECT* queries against one or more read replicas. The :ref:`read_slave module `, included in the :ref:`playhouse` extensions library, contains a :py:class:`Model` subclass which provides this behavior. Here is how you might use the :py:class:`ReadSlaveModel`: .. code-block:: python from peewee import * from playhouse.read_slave import ReadSlaveModel # Declare a master and two read-replicas. master = PostgresqlDatabase('master') replica_1 = PostgresqlDatabase('replica', host='192.168.1.2') replica_2 = PostgresqlDatabase('replica', host='192.168.1.3') class BaseModel(ReadSlaveModel): class Meta: database = master read_slaves = (replica_1, replica_2) class User(BaseModel): username = CharField() Now when you execute writes (or deletes), they will be run on the master, while all read-only queries will be executed against one of the replicas. Queries are dispatched among the read slaves in round-robin fashion. Schema migrations ----------------- Currently peewee does not have support for *automatic* schema migrations, but you can use the :ref:`migrate` module to create simple migration scripts. The schema migrations module works with SQLite, MySQL and Postgres, and will even allow you to do things like drop or rename columns in SQLite! Here is an example of how you might write a migration script: .. code-block:: python from playhouse.migrate import * my_db = SqliteDatabase('my_database.db') migrator = SqliteMigrator(my_db) title_field = CharField(default='') status_field = IntegerField(null=True) with my_db.transaction(): migrate( migrator.add_column('some_table', 'title', title_field), migrator.add_column('some_table', 'status', status_field), migrator.drop_column('some_table', 'old_column'), ) Check the :ref:`migrate` documentation for more details. Generating Models from Existing Databases ----------------------------------------- If you'd like to generate peewee model definitions for an existing database, you can try out the database introspection tool :ref:`pwiz` that comes with peewee. *pwiz* is capable of introspecting Postgresql, MySQL and SQLite databases. Introspecting a Postgresql database: .. code-block:: console python -m pwiz --engine=postgresql my_postgresql_database Introspecting a SQLite database: .. code-block:: console python -m pwiz --engine=sqlite test.db pwiz will generate: * Database connection object * A *BaseModel* class to use with the database * *Model* classes for each table in the database. The generated code is written to stdout, and can easily be redirected to a file: .. code-block:: console python -m pwiz -e postgresql my_postgresql_db > models.py .. note:: pwiz generally works quite well with even large and complex database schemas, but in some cases it will not be able to introspect a column. 
You may need to go through the generated code to add indexes, fix unrecognized column types, and resolve any circular references that were found.

.. _adding_request_hooks:

Adding Request Hooks
--------------------

When building web-applications, it is very important that you manage your database connections correctly. In this section I will describe how to add hooks to your web app to ensure the database connection is handled properly.

These steps will ensure that regardless of whether you're using a simple SQLite database, or a pool of multiple Postgres connections, peewee will handle the connections correctly.

Flask
^^^^^

Flask and peewee are a great combo and my go-to for projects of any size. Flask provides two hooks which we will use to open and close our db connection. We'll open the connection when a request is received, then close it when the response is returned.

.. code-block:: python

    from flask import Flask
    from peewee import *

    database = SqliteDatabase('my_app.db')
    app = Flask(__name__)

    # This hook ensures that a connection is opened to handle any queries
    # generated by the request.
    @app.before_request
    def _db_connect():
        database.connect()

    # This hook ensures that the connection is closed when we've finished
    # processing the request.
    @app.teardown_request
    def _db_close(exc):
        if not database.is_closed():
            database.close()

Django
^^^^^^

While it's less common to see peewee used with Django, it is actually very easy to use the two. To manage your peewee database connections with Django, the easiest way in my opinion is to add a middleware to your app. The middleware should be the very first in the list of middlewares, to ensure it runs first when a request is handled, and last when the response is returned.

If you have a django project named *my_blog* and your peewee database is defined in the module ``my_blog.db``, you might add the following middleware class:

.. code-block:: python

    # middleware.py
    from my_blog.db import database  # Import the peewee database instance.

    class PeeweeConnectionMiddleware(object):
        def process_request(self, request):
            database.connect()

        def process_response(self, request, response):
            if not database.is_closed():
                database.close()
            return response

To ensure this middleware gets executed, add it to your ``settings`` module:

.. code-block:: python

    # settings.py
    MIDDLEWARE_CLASSES = (
        # Our custom middleware appears first in the list.
        'my_blog.middleware.PeeweeConnectionMiddleware',

        # These are the default Django 1.7 middlewares. Yours may differ,
        # but the important thing is that our Peewee middleware comes first.
        'django.middleware.common.CommonMiddleware',
        'django.contrib.sessions.middleware.SessionMiddleware',
        'django.middleware.csrf.CsrfViewMiddleware',
        'django.contrib.auth.middleware.AuthenticationMiddleware',
        'django.contrib.messages.middleware.MessageMiddleware',
    )

    # ... other Django settings ...

Bottle
^^^^^^

I haven't used bottle myself, but looking at the documentation I believe the following code should ensure the database connections are properly managed:

.. code-block:: python

    # app.py
    from bottle import hook  #, route, etc, etc.
    from peewee import *

    db = SqliteDatabase('my-bottle-app.db')

    @hook('before_request')
    def _connect_db():
        db.connect()

    @hook('after_request')
    def _close_db():
        if not db.is_closed():
            db.close()

    # Rest of your bottle app goes here.

Web.py
^^^^^^

See `application processors `_.

..
code-block:: python db = SqliteDatabase('my_webpy_app.db') def connection_processor(handler): db.connect() try: return handler() finally: if not db.is_closed(): db.close() app.add_processor(connection_processor) Tornado ^^^^^^^ It looks like Tornado's ``RequestHandler`` class implements two hooks which can be used to open and close connections when a request is handled. .. code-block:: python from tornado.web import RequestHandler db = SqliteDatabase('my_db.db') class PeeweeRequestHandler(RequestHandler): def prepare(self): db.connect() return super(PeeweeRequestHandler, self).prepare() def on_finish(self): if not db.is_closed(): db.close() return super(PeeweeRequestHandler, self).on_finish() In your app, instead of extending the default ``RequestHandler``, now you can extend ``PeeweeRequestHandler``. Note that this does not address how to use peewee asynchronously with Tornado or another event loop. Wheezy.web ^^^^^^^^^^ The connection handling code can be placed in a `middleware `_. .. code-block:: python def peewee_middleware(request, following): db.connect() try: response = following(request) finally: if not db.is_closed(): db.close() return response app = WSGIApplication(middleware=[ lambda x: peewee_middleware, # ... other middlewares ... ]) Thanks to GitHub user *@tuukkamustonen* for submitting this code. Falcon ^^^^^^ The connection handling code can be placed in a `middleware component `_. .. code-block:: python import falcon from peewee import * database = SqliteDatabase('my_app.db') class PeeweeConnectionMiddleware(object): def process_request(self, req, resp): database.connect() def process_response(self, req, resp, resource): if not database.is_closed(): database.close() application = falcon.API(middleware=[ PeeweeConnectionMiddleware(), # ... other middlewares ... ]) Pyramid ^^^^^^^ Set up a Request factory that handles database connection lifetime as follows: .. code-block:: python from pyramid.request import Request db = SqliteDatabase('pyramidapp.db') class MyRequest(Request): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) db.connect() self.add_finished_callback(self.finish) def finish(self, request): if not db.is_closed(): db.close() In your application `main()` make sure `MyRequest` is used as `request_factory`: .. code-block:: python def main(global_settings, **settings): config = Configurator(settings=settings, ...) config.set_request_factory(MyRequest) CherryPy ^^^^^^^^ See `Publish/Subscribe pattern `_. .. code-block:: python def _db_connect(): db.connect() def _db_close(): if not db.is_closed(): db.close() cherrypy.engine.subscribe('before_request', _db_connect) cherrypy.engine.subscribe('after_request', _db_close) Other frameworks ^^^^^^^^^^^^^^^^ Don't see your framework here? Please `open a GitHub ticket `_ and I'll see about adding a section, or better yet, submit a documentation pull-request. Additional connection initialization ------------------------------------ Peewee does a few basic things depending on your database to initialize a connection. For SQLite this means registering custom user-defined functions, for Postgresql this means registering unicode support. You may find it necessary to add additional initialization when a new connection is opened, however. For example you may want to tell SQLite to enforce all foreign key constraints (off by default). To do this, you can subclass the database and override the :py:meth:`~Database.initialize_connection` method. 
This method contains no implementation on the base database classes, so you do not need to call ``super()`` with it.

Example turning on SQLite foreign keys:

.. code-block:: python

    class SqliteFKDatabase(SqliteDatabase):
        def initialize_connection(self, conn):
            self.execute_sql('PRAGMA foreign_keys=ON;')

.. _advanced_connection_management:

Advanced Connection Management
------------------------------

Managing your database connections is as simple as calling :py:meth:`~Database.connect` when you need to open a connection, and :py:meth:`~Database.close` when you are finished. In a web-app, you would typically connect when you receive a request, and close the connection when you return a response. Because connection state is stored in a thread-local, you do not need to worry about juggling connection objects -- peewee will handle it for you.

In some situations, however, you may want to manage your connections more explicitly. Since peewee stores the active connection in a threadlocal, this typically would mean that there could only ever be one connection open per thread. For most applications this is desirable, but if you would like to manually manage multiple connections you can create an :py:class:`ExecutionContext`.

Execution contexts allow finer-grained control over managing multiple connections to the database. When an execution context is initialized (either as a context manager or as a decorated function), a separate connection will be used for the duration of the wrapped block. You can also choose whether to wrap the block in a transaction.

Execution context examples:

.. code-block:: python

    with db.execution_context() as ctx:
        # A new connection will be opened or, if using a connection pool,
        # pulled from the pool of available connections. Additionally, a
        # transaction will be started.
        user = User.create(username='charlie')

    # When the block ends, the transaction will be committed and the connection
    # will be closed (or returned to the pool).

    @db.execution_context(with_transaction=False)
    def do_something(foo, bar):
        # When this function is called, a separate connection is made and will
        # be closed when the function returns.

If you are using the peewee connection pool, then the new connections used by the :py:class:`ExecutionContext` will be pulled from the pool of available connections and recycled appropriately.

Using multiple databases
------------------------

With peewee you can use as many databases as you want. Each model can define its database by specifying a :ref:`Meta.database `. What if you want to use the same model with multiple databases, though? Depending on your use-case, peewee provides several options.

If you have a Master/Slave setup and want all writes to go to the master, but reads can go to any number of replicated copies, check out the :ref:`Read Slave extension `. For finer-grained control, check out the :py:class:`Using` context manager / decorator. This allows you to specify the database to use with a given list of models for the duration of the wrapped block.

Here is an example of how you might use the :py:class:`Using` context manager:

.. code-block:: python

    master = PostgresqlDatabase('master')
    read_replica = PostgresqlDatabase('replica')

    class Data(Model):
        value = IntegerField()

        class Meta:
            database = master

    # By default all queries go to the master, since that is what
    # is defined on our model.
    for i in range(10):
        Data.create(value=i)

    # But what if we want to explicitly use the read replica?
with Using(read_replica, [Data]): # Query is executed against the read replica. Data.get(Data.value == 5) # Since we did not specify this model in the list of overrides # it will use whatever database it was defined with. SomeOtherModel.get(SomeOtherModel.field == 3) .. note:: For simple master/slave configurations, check out the :ref:`read_slaves` extension. This extension ensures writes are sent to the master database and reads occur from any of the listed read replicas. .. _database-errors: Database Errors --------------- The Python DB-API 2.0 spec describes `several types of exceptions `_. Because most database drivers have their own implementations of these exceptions, Peewee simplifies things by providing its own wrappers around any implementation-specific exception classes. That way, you don't need to worry about importing any special exception classes, you can just use the ones from peewee: * ``DatabaseError`` * ``DataError`` * ``IntegrityError`` * ``InterfaceError`` * ``InternalError`` * ``NotSupportedError`` * ``OperationalError`` * ``ProgrammingError`` .. note:: All of these error classes extend ``PeeweeException``. .. _automatic-reconnect: Automatic Reconnect ------------------- Peewee provides very basic support for automatic reconnecting in the :ref:`shortcuts` module, through the use of the :py:class:`RetryOperationalError` mixin. This mixin will automatically reconnect to the database and retry any queries that fail with an ``OperationalError``. The query that failed will be retried only once, and if it fails twice an exception will be raised. Usage: .. code-block:: python from peewee import * from playhouse.shortcuts import RetryOperationalError class MyRetryDB(RetryOperationalError, MySQLDatabase): pass db = MyRetryDB('my_app') Logging queries --------------- All queries are logged to the *peewee* namespace using the standard library ``logging`` module. Queries are logged using the *DEBUG* level. If you're interested in doing something with the queries, you can simply register a handler. .. code-block:: python # Print all queries to stderr. import logging logger = logging.getLogger('peewee') logger.setLevel(logging.DEBUG) logger.addHandler(logging.StreamHandler()) Generating skeleton code ------------------------ For writing quick scripts, peewee comes with a helper script :ref:`pskel` which generates database connection and model boilerplate code. If you find yourself frequently writing small programs, :ref:`pskel` can really save you time. To generate a script, you can simply run: .. code-block:: console pskel User Tweet SomeModel AnotherModel > my_script.py ``pskel`` will generate code to connect to an in-memory SQLite database, as well as blank model definitions for the model names specified on the command line. Here is a more complete example, which will use the :py:class:`PostgresqlExtDatabase` with query logging enabled: .. code-block:: console pskel -l -e postgres_ext -d my_database User Tweet > my_script.py You can now fill in the model definitions and get to hacking! Adding a new Database Driver ---------------------------- Peewee comes with built-in support for Postgres, MySQL and SQLite. These databases are very popular and run the gamut from fast, embeddable databases to heavyweight servers suitable for large-scale deployments. That being said, there are a ton of cool databases out there and adding support for your database-of-choice should be really easy, provided the driver supports the `DB-API 2.0 spec `_. 
The db-api 2.0 spec should be familiar to you if you've used the standard library sqlite3 driver, psycopg2 or the like. Peewee currently relies on a handful of parts:

* `Connection.commit`
* `Connection.execute`
* `Connection.rollback`
* `Cursor.description`
* `Cursor.fetchone`

These methods are generally wrapped up in higher-level abstractions and exposed by the :py:class:`Database`, so even if your driver doesn't do these exactly you can still get a lot of mileage out of peewee. An example is the `apsw sqlite driver `_ in the "playhouse" module.

The first thing is to provide a subclass of :py:class:`Database` that will open a connection.

.. code-block:: python

    from peewee import Database

    import foodb  # Our fictional DB-API 2.0 driver.

    class FooDatabase(Database):
        def _connect(self, database, **kwargs):
            return foodb.connect(database, **kwargs)

The :py:class:`Database` provides a higher-level API and is responsible for executing queries, creating tables and indexes, and introspecting the database to get lists of tables. The above implementation is the absolute minimum needed, though some features will not work -- for best results you will want to additionally add a method for extracting a list of tables and indexes for a table from the database. We'll pretend that ``FooDB`` is a lot like MySQL and has special "SHOW" statements:

.. code-block:: python

    class FooDatabase(Database):
        def _connect(self, database, **kwargs):
            return foodb.connect(database, **kwargs)

        def get_tables(self):
            res = self.execute_sql('SHOW TABLES;')
            return [r[0] for r in res.fetchall()]

Other things the database handles that are not covered here include:

* :py:meth:`~Database.last_insert_id` and :py:meth:`~Database.rows_affected`
* :py:attr:`~Database.interpolation` and :py:attr:`~Database.quote_char`
* :py:attr:`~Database.op_overrides` for mapping operations such as "LIKE/ILIKE" to their database equivalent

Refer to the :py:class:`Database` API reference or the `source code `_ for details.
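To make the last two bullet points a little more concrete, here is a hedged sketch of what such settings might look like on our fictional ``FooDatabase``. The placeholder style, quote character, and operation override below are illustrative assumptions about ``foodb``, not values taken from any real driver -- consult your own driver's documentation for the correct settings:

.. code-block:: python

    from peewee import Database, OP

    import foodb  # Our fictional DB-API 2.0 driver.

    class FooDatabase(Database):
        # Assumption: foodb uses qmark-style parameter placeholders and
        # backticks for quoting identifiers, and supports LIKE but has
        # no case-insensitive ILIKE operator.
        interpolation = '?'
        quote_char = '`'
        op_overrides = {OP.ILIKE: 'LIKE'}

        def _connect(self, database, **kwargs):
            return foodb.connect(database, **kwargs)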
.. note:: If your driver conforms to the DB-API 2.0 spec, there shouldn't be much work needed to get up and running.

Our new database can be used just like any of the other database subclasses:

.. code-block:: python

    from peewee import *
    from foodb_ext import FooDatabase

    db = FooDatabase('my_database', user='foo', password='secret')

    class BaseModel(Model):
        class Meta:
            database = db

    class Blog(BaseModel):
        title = CharField()
        contents = TextField()
        pub_date = DateTimeField()

peewee-2.10.2/docs/peewee/example.rst000066400000000000000000000340501316645060400174440ustar00rootroot00000000000000.. _example-app:

Example app
===========

We'll be building a simple *twitter*-like site. The source code for the example can be found in the ``examples/twitter`` directory. You can also `browse the source-code `_ on github. There is also an example `blog app `_ if that's more to your liking.

The example app uses the `flask `_ web framework which is very easy to get started with. If you don't have flask already, you will need to install it to run the example:

.. code-block:: console

    pip install flask

Running the example
-------------------

.. image:: tweepee.jpg

After ensuring that flask is installed, ``cd`` into the twitter example directory and execute the ``run_example.py`` script:

.. code-block:: console

    python run_example.py

The example app will be accessible at http://localhost:5000/

Diving into the code
--------------------

For simplicity all example code is contained within a single module, ``examples/twitter/app.py``. For a guide on structuring larger Flask apps with peewee, check out `Structuring Flask Apps `_.

.. _example-app-models:

Models
^^^^^^

In the spirit of the popular web framework Django, peewee uses declarative model definitions. If you're not familiar with Django, the idea is that you declare a model class for each table. The model class then defines one or more field attributes which correspond to the table's columns. For the twitter clone, there are just three models:

*User*: Represents a user account and stores the username and password, an email address for generating avatars using *gravatar*, and a datetime field indicating when that account was created.

*Relationship*: This is a utility model that contains two foreign-keys to the *User* model and stores which users follow one another.

*Message*: Analogous to a tweet. The Message model stores the text content of the tweet, when it was created, and who posted it (foreign key to User).

If you like UML, these are the tables and relationships:

.. image:: schema.jpg

In order to create these models we need to instantiate a :py:class:`SqliteDatabase` object. Then we define our model classes, specifying the columns as :py:class:`Field` instances on the class.

.. code-block:: python

    # create a peewee database instance -- our models will use this database to
    # persist information
    database = SqliteDatabase(DATABASE)

    # model definitions -- the standard "pattern" is to define a base model class
    # that specifies which database to use.  then, any subclasses will automatically
    # use the correct storage.
    class BaseModel(Model):
        class Meta:
            database = database

    # the user model specifies its fields (or columns) declaratively, like django
    class User(BaseModel):
        username = CharField(unique=True)
        password = CharField()
        email = CharField()
        join_date = DateTimeField()

        class Meta:
            order_by = ('username',)

    # this model contains two foreign keys to user -- it essentially allows us to
    # model a "many-to-many" relationship between users.  by querying and joining
    # on different columns we can expose who a user is "related to" and who is
    # "related to" a given user
    class Relationship(BaseModel):
        from_user = ForeignKeyField(User, related_name='relationships')
        to_user = ForeignKeyField(User, related_name='related_to')

        class Meta:
            indexes = (
                # Specify a unique multi-column index on from/to-user.
                (('from_user', 'to_user'), True),
            )

    # a dead simple one-to-many relationship: one user has 0..n messages, exposed by
    # the foreign key. because we didn't specify, a user's messages will be accessible
    # as a special attribute, User.message_set
    class Message(BaseModel):
        user = ForeignKeyField(User)
        content = TextField()
        pub_date = DateTimeField()

        class Meta:
            order_by = ('-pub_date',)

.. note:: Note that we create a *BaseModel* class that simply defines what database we would like to use. All other models then extend this class and will also use the correct database connection.

Peewee supports many different :ref:`field types ` which map to different column types commonly supported by database engines. Conversion between python types and those used in the database is handled transparently, allowing you to use the following in your application (a short round-trip example follows this list):

* Strings (unicode or otherwise)
* Integers, floats, and ``Decimal`` numbers.
* Boolean values
* Dates, times and datetimes
* ``None`` (NULL)
* Binary data
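To make the transparent conversion concrete, here is a small illustrative snippet. It assumes the models above and an existing ``User`` instance named ``some_user``:

.. code-block:: python

    import datetime

    # Native Python values go in...
    msg = Message.create(
        user=some_user,
        content='peewee + flask == fun',
        pub_date=datetime.datetime(2017, 1, 1, 12, 30))

    # ...and the same Python types come back out.
    msg_db = Message.get(Message.id == msg.id)
    print(type(msg_db.pub_date))  # datetime.datetime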
Creating tables
^^^^^^^^^^^^^^^

In order to start using the models, it's necessary to create the tables. This is a one-time operation and can be done quickly using the interactive interpreter. We can create a small helper function to accomplish this:

.. code-block:: python

    def create_tables():
        database.connect()
        database.create_tables([User, Relationship, Message])

Open a python shell in the directory alongside the example app and execute the following:

.. code-block:: python

    >>> from app import *
    >>> create_tables()

.. note:: If you encounter an *ImportError* it means that either *flask* or *peewee* was not found and may not be installed correctly. Check the :ref:`installation` document for instructions on installing peewee.

Every model has a :py:meth:`~Model.create_table` classmethod which runs a SQL *CREATE TABLE* statement in the database. This method will create the table, including all columns, foreign-key constraints, indexes, and sequences. Usually this is something you'll only do once, whenever a new model is added.

Peewee provides a helper method :py:meth:`Database.create_tables` which will resolve inter-model dependencies and call :py:meth:`~Model.create_table` on each model.

.. note:: Adding fields after the table has been created will require you to either drop the table and re-create it or manually add the columns using an *ALTER TABLE* query. Alternatively, you can use the :ref:`schema migrations ` extension to alter your database schema using Python.

.. note:: You can also write ``database.create_tables([User, ...], True)`` and peewee will first check to see if the table exists before creating it.

Establishing a database connection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You may have noticed in the above model code that there is a class defined on the base model named *Meta* that sets the ``database`` attribute. Peewee allows every model to specify which database it uses. There are many :ref:`Meta options ` you can specify which control the behavior of your model.

This is a peewee idiom:

.. code-block:: python

    DATABASE = 'tweepee.db'

    # Create a database instance that will manage the connection and
    # execute queries
    database = SqliteDatabase(DATABASE, threadlocals=True)

When developing a web application, it's common to open a connection when a request starts, and close it when the response is returned. **You should always manage your connections explicitly**. For instance, if you are using a :ref:`connection pool `, connections will only be recycled correctly if you call :py:meth:`~Database.connect` and :py:meth:`~Database.close`.

We will tell flask that during the request/response cycle we need to create a connection to the database. Flask provides some handy decorators to make this a snap:

.. code-block:: python

    @app.before_request
    def before_request():
        database.connect()

    @app.after_request
    def after_request(response):
        database.close()
        return response

.. note:: Peewee uses thread local storage to manage connection state, so this pattern can be used with multi-threaded WSGI servers.

Making queries
^^^^^^^^^^^^^^

In the *User* model there are a few instance methods that encapsulate some user-specific functionality:

* ``following()``: who is this user following?
* ``followers()``: who is following this user?

These methods are similar in their implementation but with an important difference in the SQL *JOIN* and *WHERE* clauses:

.. code-block:: python

    def following(self):
        # query other users through the "relationship" table
        return (User
                .select()
                .join(Relationship, on=Relationship.to_user)
                .where(Relationship.from_user == self))

    def followers(self):
        return (User
                .select()
                .join(Relationship, on=Relationship.from_user)
                .where(Relationship.to_user == self))

Creating new objects
^^^^^^^^^^^^^^^^^^^^

When a new user wants to join the site we need to make sure the username is available, and if so, create a new *User* record. Looking at the *join()* view, we can see that our application attempts to create the User using :py:meth:`Model.create`. We defined the *User.username* field with a unique constraint, so if the username is taken the database will raise an ``IntegrityError``.

.. code-block:: python

    try:
        with database.transaction():
            # Attempt to create the user. If the username is taken, due to the
            # unique constraint, the database will raise an IntegrityError.
            user = User.create(
                username=request.form['username'],
                password=md5(request.form['password']).hexdigest(),
                email=request.form['email'],
                join_date=datetime.datetime.now()
            )

        # mark the user as being 'authenticated' by setting the session vars
        auth_user(user)
        return redirect(url_for('homepage'))

    except IntegrityError:
        flash('That username is already taken')

We will use a similar approach when a user wishes to follow someone. To indicate a following relationship, we create a row in the *Relationship* table pointing from one user to another. Due to the unique index on ``from_user`` and ``to_user``, we will be sure not to end up with duplicate rows:

.. code-block:: python

    user = get_object_or_404(User, User.username == username)
    try:
        with database.transaction():
            Relationship.create(
                from_user=get_current_user(),
                to_user=user)
    except IntegrityError:
        pass

Performing subqueries
^^^^^^^^^^^^^^^^^^^^^

If you are logged-in and visit the twitter homepage, you will see tweets from the users that you follow. In order to implement this cleanly, we can use a subquery:

.. code-block:: python

    # python code
    messages = Message.select().where(Message.user << user.following())

This code corresponds to the following SQL query:

.. code-block:: sql

    SELECT t1."id", t1."user_id", t1."content", t1."pub_date"
    FROM "message" AS t1
    WHERE t1."user_id" IN (
        SELECT t2."id"
        FROM "user" AS t2
        INNER JOIN "relationship" AS t3
            ON t2."id" = t3."to_user_id"
        WHERE t3."from_user_id" = ?
    )

Other topics of interest
^^^^^^^^^^^^^^^^^^^^^^^^

There are a couple other neat things going on in the example app that are worth mentioning briefly.

* Support for paginating lists of results is implemented in a simple function called ``object_list`` (after its corollary in Django). This function is used by all the views that return lists of objects.

.. code-block:: python

    def object_list(template_name, qr, var_name='object_list', **kwargs):
        kwargs.update(
            page=int(request.args.get('page', 1)),
            pages=qr.count() / 20 + 1
        )
        kwargs[var_name] = qr.paginate(kwargs['page'])
        return render_template(template_name, **kwargs)
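For example, a hypothetical view that pages through all users might use ``object_list`` like so (the route and template name here are made up for illustration):

.. code-block:: python

    @app.route('/users/')
    def user_list():
        users = User.select().order_by(User.username)
        # ``?page=N`` in the query-string selects the page to display.
        return object_list('user_list.html', users)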
* Simple authentication system with a ``login_required`` decorator. The first function simply adds user data into the current session when a user successfully logs in. The decorator ``login_required`` can be used to wrap view functions, checking whether the session is authenticated and, if not, redirecting to the login page.

.. code-block:: python

    def auth_user(user):
        session['logged_in'] = True
        session['user'] = user
        session['username'] = user.username
        flash('You are logged in as %s' % (user.username))

    def login_required(f):
        @wraps(f)
        def inner(*args, **kwargs):
            if not session.get('logged_in'):
                return redirect(url_for('login'))
            return f(*args, **kwargs)
        return inner
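Putting the decorator to work, a protected view might look like the following sketch (the route and template are hypothetical, shown only to illustrate how the decorator is applied):

.. code-block:: python

    @app.route('/private/')
    @login_required
    def private_page():
        # Anonymous visitors never reach this point -- the decorator
        # redirects them to the login view instead.
        return render_template('private.html')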
* Return a 404 response instead of throwing exceptions when an object is not found in the database.

.. code-block:: python

    def get_object_or_404(model, *expressions):
        try:
            return model.get(*expressions)
        except model.DoesNotExist:
            abort(404)

More examples
-------------

There are more examples included in the peewee `examples directory `_, including:

* `Example blog app `_ using Flask and peewee. Also see `accompanying blog post `_.
* `An encrypted command-line diary `_. There is a `companion blog post `_ you might enjoy as well.
* `Analytics web-service `_ (like a lite version of Google Analytics). Also check out the `companion blog post `_.

.. note:: Like these snippets and interested in more? Check out `flask-peewee `_ - a flask plugin that provides a django-like Admin interface, RESTful API, Authentication and more for your peewee models.

peewee-2.10.2/docs/peewee/hacks.rst000066400000000000000000000356101316645060400171050ustar00rootroot00000000000000.. _hacks:

Hacks
=====

Collected hacks using peewee. Have a cool hack you'd like to share? Open `an issue on GitHub `_ or `contact me `_.

.. _optimistic_locking:

Optimistic Locking
------------------

Optimistic locking is useful in situations where you might ordinarily use a *SELECT FOR UPDATE* (or in SQLite, *BEGIN IMMEDIATE*). For example, you might fetch a user record from the database, make some modifications, then save the modified user record. Typically this scenario would require us to lock the user record for the duration of the transaction, from the moment we select it, to the moment we save our changes.

In optimistic locking, on the other hand, we do *not* acquire any lock and instead rely on an internal *version* column in the row we're modifying. At read time, we see what version the row is currently at, and on save, we ensure that the update takes place only if the version is the same as the one we initially read. If the version is higher, then some other process must have snuck in and changed the row -- to save our modified version could result in the loss of important changes.

It's quite simple to implement optimistic locking in Peewee; here is a base class that you can use as a starting point:

.. code-block:: python

    from peewee import *

    class ConflictDetectedException(Exception):
        # Raised when another process has saved a new version of the row
        # we are trying to update. (Defined here so the example is
        # self-contained.)
        pass

    class BaseVersionedModel(Model):
        version = IntegerField(default=1, index=True)

        def save_optimistic(self):
            if not self.id:
                # This is a new record, so the default logic is to perform an
                # INSERT. Ideally your model would also have a unique
                # constraint that made it impossible for two INSERTs to happen
                # at the same time.
                return self.save()

            # Update any data that has changed and bump the version counter.
            field_data = dict(self._data)
            current_version = field_data.pop('version', 1)
            field_data = self._prune_fields(field_data, self.dirty_fields)
            if not field_data:
                raise ValueError('No changes have been made.')

            ModelClass = type(self)
            field_data['version'] = ModelClass.version + 1  # Atomic increment.

            query = ModelClass.update(**field_data).where(
                (ModelClass.version == current_version) &
                (ModelClass.id == self.id))
            if query.execute() == 0:
                # No rows were updated, indicating another process has saved
                # a new version. How you handle this situation is up to you,
                # but for simplicity I'm just raising an exception.
                raise ConflictDetectedException()
            else:
                # Increment local version to match what is now in the db.
                self.version += 1
                return True

Here's an example of how this works. Let's assume we have the following model definition. Note that there's a unique constraint on the username -- this is important as it provides a way to prevent double-inserts.

.. code-block:: python

    class User(BaseVersionedModel):
        username = CharField(unique=True)
        favorite_animal = CharField()

Example:

.. code-block:: pycon

    >>> u = User(username='charlie', favorite_animal='cat')
    >>> u.save_optimistic()
    True

    >>> u.version
    1

    >>> u.save_optimistic()
    Traceback (most recent call last):
      File "", line 1, in
      File "x.py", line 18, in save_optimistic
        raise ValueError('No changes have been made.')
    ValueError: No changes have been made.

    >>> u.favorite_animal = 'kitten'
    >>> u.save_optimistic()
    True

    # Simulate a separate thread coming in and updating the model.
    >>> u2 = User.get(User.username == 'charlie')
    >>> u2.favorite_animal = 'macaw'
    >>> u2.save_optimistic()
    True

    # Now, attempt to change and re-save the original instance:
    >>> u.favorite_animal = 'little parrot'
    >>> u.save_optimistic()
    Traceback (most recent call last):
      File "", line 1, in
      File "x.py", line 30, in save_optimistic
        raise ConflictDetectedException()
    ConflictDetectedException: current version is out of sync
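What you do when a conflict is detected is entirely application-specific. One option, sketched below, is a naive retry loop that re-fetches the latest row and re-applies the change. The helper name and retry policy here are illustrative, not part of the hack itself; note too that ``save_optimistic()`` will raise ``ValueError`` if the refreshed row already holds the desired value:

.. code-block:: python

    def set_favorite_animal(user_id, animal, max_attempts=3):
        for attempt in range(max_attempts):
            # Re-read the latest version of the row each time through.
            user = User.get(User.id == user_id)
            user.favorite_animal = animal
            try:
                return user.save_optimistic()
            except ConflictDetectedException:
                continue  # Another process saved first; retry with fresh data.
        raise ConflictDetectedException('Gave up after %d attempts.' % max_attempts)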
.. _top_item_per_group:

Top object per group
--------------------

These examples describe several ways to query the single top item per group. For a thorough discussion of various techniques, check out my blog post `Querying the top item by group with Peewee ORM `_. If you are interested in the more general problem of querying the top *N* items, see the section below :ref:`top_n_per_group`.

In these examples we will use the *User* and *Tweet* models to find each user and their most-recent tweet.

The most efficient method I found in my testing uses the ``MAX()`` aggregate function.

We will perform the aggregation in a non-correlated subquery, so we can be confident this method will be performant. The idea is that we will select the posts, grouped by their author, whose timestamp is equal to the max observed timestamp for that user.

.. code-block:: python

    # When referencing a table multiple times, we'll call Model.alias() to create
    # a secondary reference to the table.
    TweetAlias = Tweet.alias()

    # Create a subquery that will calculate the maximum Tweet create_date for each
    # user.
    subquery = (TweetAlias
                .select(
                    TweetAlias.user,
                    fn.MAX(TweetAlias.create_date).alias('max_ts'))
                .group_by(TweetAlias.user)
                .alias('tweet_max_subquery'))

    # Query for tweets and join using the subquery to match the tweet's user
    # and create_date.
    query = (Tweet
             .select(Tweet, User)
             .join(User)
             .switch(Tweet)
             .join(subquery, on=(
                 (Tweet.create_date == subquery.c.max_ts) &
                 (Tweet.user == subquery.c.user_id))))

SQLite and MySQL are a bit more lax and permit grouping by a subset of the columns that are selected. This means we can do away with the subquery and express it quite concisely:

.. code-block:: python

    query = (Tweet
             .select(Tweet, User)
             .join(User)
             .group_by(Tweet.user)
             .having(Tweet.create_date == fn.MAX(Tweet.create_date)))

.. _top_n_per_group:

Top N objects per group
-----------------------

These examples describe several ways to query the top *N* items per group reasonably efficiently.
For a thorough discussion of various techniques, check out my blog post `Querying the top N objects per group with Peewee ORM `_. In these examples we will use the *User* and *Tweet* models to find each user and their three most-recent tweets. Postgres lateral joins ^^^^^^^^^^^^^^^^^^^^^^ `Lateral joins `_ are a neat Postgres feature that allow reasonably efficient correlated subqueries. They are often described as SQL ``for each`` loops. The desired SQL is: .. code-block:: sql SELECT * FROM (SELECT t2.id, t2.username FROM user AS t2) AS uq LEFT JOIN LATERAL (SELECT t2.message, t2.create_date FROM tweet AS t2 WHERE (t2.user_id = uq.id) ORDER BY t2.create_date DESC LIMIT 3) AS pq ON true To accomplish this with peewee we'll need to express the lateral join as a :py:class:`Clause`, which gives us greater flexibility than the :py:meth:`~Query.join` method. .. code-block:: python # We'll reference `Tweet` twice, so keep an alias handy. TweetAlias = Tweet.alias() # The "outer loop" will be iterating over the users whose # tweets we are trying to find. user_query = User.select(User.id, User.username).alias('uq') # The inner loop will select tweets and is correlated to the # outer loop via the WHERE clause. Note that we are using a # LIMIT clause. tweet_query = (TweetAlias .select(TweetAlias.message, TweetAlias.create_date) .where(TweetAlias.user == user_query.c.id) .order_by(TweetAlias.create_date.desc()) .limit(3) .alias('pq')) # Now we join the outer and inner queries using the LEFT LATERAL # JOIN. The join predicate is *ON TRUE*, since we're effectively # joining in the tweet subquery's WHERE clause. join_clause = Clause( user_query, SQL('LEFT JOIN LATERAL'), tweet_query, SQL('ON %s', True)) # Finally, we'll wrap these up and SELECT from the result. query = (Tweet .select(SQL('*')) .from_(join_clause)) Window functions ^^^^^^^^^^^^^^^^ `Window functions `_, which are :ref:`supported by peewee `, provide scalable, efficient performance. The desired SQL is: .. code-block:: sql SELECT subq.message, subq.username FROM ( SELECT t2.message, t3.username, RANK() OVER ( PARTITION BY t2.user_id ORDER BY t2.create_date DESC ) AS rnk FROM tweet AS t2 INNER JOIN user AS t3 ON (t2.user_id = t3.id) ) AS subq WHERE (subq.rnk <= 3) To accomplish this with peewee, we will wrap the ranked Tweets in an outer query that performs the filtering. .. code-block:: python TweetAlias = Tweet.alias() # The subquery will select the relevant data from the Tweet and # User table, as well as ranking the tweets by user from newest # to oldest. subquery = (TweetAlias .select( TweetAlias.message, User.username, fn.RANK().over( partition_by=[TweetAlias.user], order_by=[TweetAlias.create_date.desc()]).alias('rnk')) .join(User, on=(TweetAlias.user == User.id)) .alias('subq')) # Since we can't filter on the rank, we are wrapping it in a query # and performing the filtering in the outer query. query = (Tweet .select(subquery.c.message, subquery.c.username) .from_(subquery) .where(subquery.c.rnk <= 3)) Other methods ^^^^^^^^^^^^^ If you're not using Postgres, then unfortunately you're left with options that exhibit less-than-ideal performance. For a more complete overview of common methods, check out `this blog post `_. Below I will summarize the approaches and the corresponding SQL. Using ``COUNT``, we can get all tweets where there exist less than *N* tweets with more recent timestamps: .. 
code-block:: python

    TweetAlias = Tweet.alias()

    # Create a correlated subquery that calculates the number of
    # tweets with a higher (newer) timestamp than the tweet we're
    # looking at in the outer query.
    subquery = (TweetAlias
                .select(fn.COUNT(TweetAlias.id))
                .where(
                    (TweetAlias.create_date >= Tweet.create_date) &
                    (TweetAlias.user == Tweet.user)))

    # Wrap the subquery and filter on the count.
    query = (Tweet
             .select(Tweet, User)
             .join(User)
             .where(subquery <= 3))

We can achieve similar results by doing a self-join and performing the filtering in the ``HAVING`` clause:

.. code-block:: python

    TweetAlias = Tweet.alias()

    # Use a self-join and join predicates to count the number of
    # newer tweets.
    query = (Tweet
             .select(Tweet.id, Tweet.message, Tweet.user, User.username)
             .join(User)
             .switch(Tweet)
             .join(TweetAlias, on=(
                 (TweetAlias.user == Tweet.user) &
                 (TweetAlias.create_date >= Tweet.create_date)))
             .group_by(Tweet.id, Tweet.message, Tweet.user, User.username)
             .having(fn.COUNT(Tweet.id) <= 3))

The last example uses a ``LIMIT`` clause in a correlated subquery.

.. code-block:: python

    TweetAlias = Tweet.alias()

    # The subquery here will calculate, for the user who created the
    # tweet in the outer loop, the three newest tweets. The expression
    # will evaluate to `True` if the outer-loop tweet is in the set of
    # tweets represented by the inner query.
    query = (Tweet
             .select(Tweet, User)
             .join(User)
             .where(Tweet.id << (
                 TweetAlias
                 .select(TweetAlias.id)
                 .where(TweetAlias.user == Tweet.user)
                 .order_by(TweetAlias.create_date.desc())
                 .limit(3))))

Writing custom functions with SQLite
------------------------------------

SQLite is very easy to extend with custom functions written in Python that are then callable from your SQL statements. By using the :py:class:`SqliteExtDatabase` and the :py:meth:`~SqliteExtDatabase.func` decorator, you can very easily define your own functions.

Here is an example function that generates a hashed version of a user-supplied password. We can also use this to implement ``login`` functionality for matching a user and password.

.. code-block:: python

    from hashlib import sha1
    from random import random
    from playhouse.sqlite_ext import SqliteExtDatabase

    db = SqliteExtDatabase('my-blog.db')

    def get_hexdigest(salt, raw_password):
        data = salt + raw_password
        return sha1(data.encode('utf8')).hexdigest()

    @db.func()
    def make_password(raw_password):
        salt = get_hexdigest(str(random()), str(random()))[:5]
        hsh = get_hexdigest(salt, raw_password)
        return '%s$%s' % (salt, hsh)

    @db.func()
    def check_password(raw_password, enc_password):
        salt, hsh = enc_password.split('$', 1)
        return hsh == get_hexdigest(salt, raw_password)

Here is how you can use the function to add a new user, storing a hashed password:

.. code-block:: python

    query = User.insert(
        username='charlie',
        password=fn.make_password('testing')).execute()

If we retrieve the user from the database, the password that's stored is hashed and salted:

.. code-block:: pycon

    >>> user = User.get(User.username == 'charlie')
    >>> print user.password
    b76fa$88be1adcde66a1ac16054bc17c8a297523170949

To implement ``login``-type functionality, you could write something like this:

.. code-block:: python

    def login(username, password):
        try:
            return (User
                    .select()
                    .where(
                        (User.username == username) &
                        (fn.check_password(password, User.password) == True))
                    .get())
        except User.DoesNotExist:
            # Incorrect username and/or password.
            return False

peewee-2.10.2/docs/peewee/installation.rst000066400000000000000000000073011316645060400205110ustar00rootroot00000000000000..
_installation: Installing and Testing ====================== Most users will want to simply install the latest version, hosted on PyPI: .. code-block:: console pip install peewee Peewee comes with two C extensions that can optionally be compiled: * Speedups, which includes miscellaneous functions re-implemented with Cython. This module will be built automatically if Cython is installed. * Sqlite extensions, which includes Cython implementations of the SQLite date manipulation functions, the REGEXP operator, and full-text search result ranking algorithms. This module should be built using the ``build_sqlite_ext`` command. .. note:: If you have Cython installed, then the ``speedups`` module will automatically be built. If you wish to also build the SQLite Cython extension, you must manually run: .. code-block:: console python setup.py build_sqlite_ext python setup.py install Installing with git ------------------- The project is hosted at https://github.com/coleifer/peewee and can be installed using git: .. code-block:: console git clone https://github.com/coleifer/peewee.git cd peewee python setup.py install If you would like to build the SQLite extension in a git checkout, you can run: .. code-block:: console # Build the sqlite extension and place the shared library alongside the other modules. python setup.py build_sqlite_ext -i .. note:: On some systems you may need to use ``sudo python setup.py install`` to install peewee system-wide. Running tests ------------- You can test your installation by running the test suite. .. code-block:: console python setup.py test # Or use the test runner: python runtests.py You can test specific features or specific database drivers using the ``runtests.py`` script. By default the test suite is run using SQLite and the ``playhouse`` extension tests are not run. To view the available test runner options, use: .. code-block:: console python runtests.py --help Optional dependencies --------------------- .. note:: To use Peewee, you typically won't need anything outside the standard library, since most Python distributions are compiled with SQLite support. You can test by running ``import sqlite3`` in the Python console. If you wish to use another database, there are many DB-API 2.0-compatible drivers out there, such as ``pymysql`` or ``psycopg2`` for MySQL and Postgres respectively. * `Cython `_: used for various speedups. Can give a big boost to certain operations, particularly if you use SQLite. * `apsw `_: an optional 3rd-party SQLite binding offering greater performance and much, much saner semantics than the standard library ``pysqlite``. Use with :py:class:`APSWDatabase`. * `pycrypto `_ is used for the :py:class:`AESEncryptedField`. * ``bcrypt`` module is used for the :py:class:`PasswordField`. * `vtfunc ` is used to provide some table-valued functions for Sqlite as part of the ``sqlite_udf`` extensions module. * `gevent `_ is an optional dependency for :py:class:`SqliteQueueDatabase` (though it works with ``threading`` just fine). * `BerkeleyDB `_ can be compiled with a SQLite frontend, which works with Peewee. Compiling can be tricky so `here are instructions `_. * Lastly, if you use the *Flask* or *Django* frameworks, there are helper extension modules available. peewee-2.10.2/docs/peewee/models.rst000066400000000000000000001067641316645060400173100ustar00rootroot00000000000000.. 
_models:

Models and Fields
=================

:py:class:`Model` classes, :py:class:`Field` instances and model instances all map to database concepts:

================= =================================
Thing             Corresponds to...
================= =================================
Model class       Database table
Field instance    Column on a table
Model instance    Row in a database table
================= =================================

The following code shows the typical way you will define your database connection and model classes.

.. _blog-models:

.. code-block:: python

    import datetime

    from peewee import *

    db = SqliteDatabase('my_app.db')

    class BaseModel(Model):
        class Meta:
            database = db

    class User(BaseModel):
        username = CharField(unique=True)

    class Tweet(BaseModel):
        user = ForeignKeyField(User, related_name='tweets')
        message = TextField()
        created_date = DateTimeField(default=datetime.datetime.now)
        is_published = BooleanField(default=True)

1. Create an instance of a :py:class:`Database`.

   .. code-block:: python

       db = SqliteDatabase('my_app.db')

   The ``db`` object will be used to manage the connections to the Sqlite database. In this example we're using :py:class:`SqliteDatabase`, but you could also use one of the other :ref:`database engines `.

2. Create a base model class which specifies our database.

   .. code-block:: python

       class BaseModel(Model):
           class Meta:
               database = db

   It is good practice to define a base model class which establishes the database connection. This makes your code DRY as you will not have to specify the database for subsequent models.

   Model configuration is kept namespaced in a special class called ``Meta``. This convention is borrowed from Django. :ref:`Meta ` configuration is passed on to subclasses, so our project's models will all subclass *BaseModel*. There are :ref:`many different attributes ` you can configure using *Model.Meta*.

3. Define a model class.

   .. code-block:: python

       class User(BaseModel):
           username = CharField(unique=True)

   Model definition uses the declarative style seen in other popular ORMs like SQLAlchemy or Django. Note that we are extending the *BaseModel* class so the *User* model will inherit the database connection.

   We have explicitly defined a single *username* column with a unique constraint. Because we have not specified a primary key, peewee will automatically add an auto-incrementing integer primary key field named *id*.

.. note:: If you would like to start using peewee with an existing database, you can use :ref:`pwiz` to automatically generate model definitions.

.. _fields:

Fields
------

The :py:class:`Field` class is used to describe the mapping of :py:class:`Model` attributes to database columns. Each field type has a corresponding SQL storage class (i.e. varchar, int), and conversion between python data types and underlying storage is handled transparently.

When creating a :py:class:`Model` class, fields are defined as class attributes. This should look familiar to users of the django framework. Here's an example:

.. code-block:: python

    class User(Model):
        username = CharField()
        join_date = DateTimeField()
        about_me = TextField()

There is one special type of field, :py:class:`ForeignKeyField`, which allows you to represent foreign-key relationships between models in an intuitive way:

.. code-block:: python

    class Message(Model):
        user = ForeignKeyField(User, related_name='messages')
        body = TextField()
        send_date = DateTimeField()

This allows you to write code like the following:

..
code-block:: python >>> print some_message.user.username Some User >>> for message in some_user.messages: ... print message.body some message another message yet another message For full documentation on fields, see the :ref:`Fields API notes ` .. _field_types_table: Field types table ^^^^^^^^^^^^^^^^^ ===================== ================= ================= ================= Field Type Sqlite Postgresql MySQL ===================== ================= ================= ================= ``CharField`` varchar varchar varchar ``FixedCharField`` char char char ``TextField`` text text longtext ``DateTimeField`` datetime timestamp datetime ``IntegerField`` integer integer integer ``BooleanField`` integer boolean bool ``FloatField`` real real real ``DoubleField`` real double precision double precision ``BigIntegerField`` integer bigint bigint ``SmallIntegerField`` integer smallint smallint ``DecimalField`` decimal numeric numeric ``PrimaryKeyField`` integer serial integer ``ForeignKeyField`` integer integer integer ``DateField`` date date date ``TimeField`` time time time ``TimestampField`` integer integer integer ``BlobField`` blob bytea blob ``UUIDField`` text uuid varchar(40) ``BareField`` untyped not supported not supported ===================== ================= ================= ================= .. note:: Don't see the field you're looking for in the above table? It's easy to create custom field types and use them with your models. * :ref:`custom-fields` * :py:class:`Database`, particularly the ``fields`` parameter. Field initialization arguments ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Parameters accepted by all field types and their default values: * ``null = False`` -- boolean indicating whether null values are allowed to be stored * ``index = False`` -- boolean indicating whether to create an index on this column * ``unique = False`` -- boolean indicating whether to create a unique index on this column. See also :ref:`adding composite indexes `. * ``verbose_name = None`` -- string representing the "user-friendly" name of this field * ``help_text = None`` -- string representing any helpful text for this field * ``db_column = None`` -- string representing the underlying column to use if different, useful for legacy databases * ``default = None`` -- any value to use as a default for uninitialized models; If ``callable``, will be called to produce value * ``choices = None`` -- an optional iterable containing 2-tuples of ``value``, ``display`` * ``primary_key = False`` -- whether this field is the primary key for the table * ``sequence = None`` -- sequence to populate field (if backend supports it) * ``constraints = None`` - a list of one or more constraints, e.g. ``[Check('price > 0')]`` * ``schema = None`` -- optional name of the schema to use, if your db supports this. Some fields take special parameters... 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +--------------------------------+------------------------------------------------+ | Field type | Special Parameters | +================================+================================================+ | :py:class:`CharField` | ``max_length`` | +--------------------------------+------------------------------------------------+ | :py:class:`FixedCharField` | ``max_length`` | +--------------------------------+------------------------------------------------+ | :py:class:`DateTimeField` | ``formats`` | +--------------------------------+------------------------------------------------+ | :py:class:`DateField` | ``formats`` | +--------------------------------+------------------------------------------------+ | :py:class:`TimeField` | ``formats`` | +--------------------------------+------------------------------------------------+ | :py:class:`TimestampField` | ``resolution``, ``utc`` | +--------------------------------+------------------------------------------------+ | :py:class:`DecimalField` | ``max_digits``, ``decimal_places``, | | | ``auto_round``, ``rounding`` | +--------------------------------+------------------------------------------------+ | :py:class:`ForeignKeyField` | ``rel_model``, ``related_name``, ``to_field``, | | | ``on_delete``, ``on_update``, ``extra`` | +--------------------------------+------------------------------------------------+ | :py:class:`BareField` | ``coerce`` | +--------------------------------+------------------------------------------------+ .. note:: Both ``default`` and ``choices`` could be implemented at the database level as *DEFAULT* and *CHECK CONSTRAINT* respectively, but any application change would require a schema change. Because of this, ``default`` is implemented purely in python and ``choices`` are not validated but exist for metadata purposes only. To add database (server-side) constraints, use the ``constraints`` parameter. Default field values ^^^^^^^^^^^^^^^^^^^^ Peewee can provide default values for fields when objects are created. For example to have an ``IntegerField`` default to zero rather than ``NULL``, you could declare the field with a default value: .. code-block:: python class Message(Model): context = TextField() read_count = IntegerField(default=0) In some instances it may make sense for the default value to be dynamic. A common scenario is using the current date and time. Peewee allows you to specify a function in these cases, whose return value will be used when the object is created. Note we only provide the function, we do not actually *call* it: .. code-block:: python class Message(Model): context = TextField() timestamp = DateTimeField(default=datetime.datetime.now) .. note:: If you are using a field that accepts a mutable type (`list`, `dict`, etc), and would like to provide a default, it is a good idea to wrap your default value in a simple function so that multiple model instances are not sharing a reference to the same underlying object: .. code-block:: python def house_defaults(): return {'beds': 0, 'baths': 0} class House(Model): number = TextField() street = TextField() attributes = JSONField(default=house_defaults) The database can also provide the default value for a field. While peewee does not explicitly provide an API for setting a server-side default value, you can use the ``constraints`` parameter to specify the server default: .. code-block:: python class Message(Model): context = TextField() timestamp = DateTimeField(constraints=[SQL('DEFAULT CURRENT_TIMESTAMP')]) .. 
note:: **Remember:** when using the ``default`` parameter, the values are set by Peewee rather than being a part of the actual table and column definition. ForeignKeyField ^^^^^^^^^^^^^^^ :py:class:`ForeignKeyField` is a special field type that allows one model to reference another. Typically a foreign key will contain the primary key of the model it relates to (but you can specify a particular column by specifying a ``to_field``). Foreign keys allow data to be `normalized `_. In our example models, there is a foreign key from ``Tweet`` to ``User``. This means that all the users are stored in their own table, as are the tweets, and the foreign key from tweet to user allows each tweet to *point* to a particular user object. In peewee, accessing the value of a :py:class:`ForeignKeyField` will return the entire related object, e.g.: .. code-block:: python tweets = Tweet.select(Tweet, User).join(User).order_by(Tweet.create_date.desc()) for tweet in tweets: print(tweet.user.username, tweet.message) In the example above the ``User`` data was selected as part of the query. For more examples of this technique, see the :ref:`Avoiding N+1 ` document. If we did not select the ``User``, though, then an additional query would be issued to fetch the associated ``User`` data: .. code-block:: python tweets = Tweet.select().order_by(Tweet.create_date.desc()) for tweet in tweets: # WARNING: an additional query will be issued for EACH tweet # to fetch the associated User data. print(tweet.user.username, tweet.message) Sometimes you only need the associated primary key value from the foreign key column. In this case, Peewee follows the convention established by Django, of allowing you to access the raw foreign key value by appending ``"_id"`` to the foreign key field's name: .. code-block:: python tweets = Tweet.select() for tweet in tweets: # Instead of "tweet.user", we will just get the raw ID value stored # in the column. print(tweet.user_id, tweet.message) :py:class:`ForeignKeyField` allows for a backreferencing property to be bound to the target model. Implicitly, this property will be named `classname_set`, where `classname` is the lowercase name of the class, but can be overridden via the parameter ``related_name``: .. code-block:: python class Message(Model): from_user = ForeignKeyField(User) to_user = ForeignKeyField(User, related_name='received_messages') text = TextField() for message in some_user.message_set: # We are iterating over all Messages whose from_user is some_user. print message for message in some_user.received_messages: # We are iterating over all Messages whose to_user is some_user print message DateTimeField, DateField and TimeField ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ The three fields devoted to working with dates and times have special properties which allow access to things like the year, month, hour, etc. :py:class:`DateField` has properties for: * ``year`` * ``month`` * ``day`` :py:class:`TimeField` has properties for: * ``hour`` * ``minute`` * ``second`` :py:class:`DateTimeField` has all of the above. These properties can be used just like any other expression. Let's say we have an events calendar and want to highlight all the days in the current month that have an event attached: .. code-block:: python # Get the current time. now = datetime.datetime.now() # Get days that have events for the current month. Event.select(Event.event_date.day.alias('day')).where( (Event.event_date.year == now.year) & (Event.event_date.month == now.month)) .. 
note:: SQLite does not have a native date type, so dates are stored in formatted text columns. To ensure that comparisons work correctly, the dates need to be formatted so they are sorted lexicographically. That is why they are stored, by default, as ``YYYY-MM-DD HH:MM:SS``. BareField ^^^^^^^^^ The :py:class:`BareField` class is intended to be used only with SQLite. Since SQLite uses dynamic typing and data-types are not enforced, it can be perfectly fine to declare fields without *any* data-type. In those cases you can use :py:class:`BareField`. It is also common for SQLite virtual tables to use meta-columns or untyped columns, so for those cases as well you may wish to use an untyped field. :py:class:`BareField` accepts a special parameter ``coerce``. This parameter is a function that takes a value coming from the database and converts it into the appropriate Python type. For instance, if you have a virtual table with an un-typed column but you know that it will return ``int`` objects, you can specify ``coerce=int``. .. _custom-fields: Creating a custom field ^^^^^^^^^^^^^^^^^^^^^^^ It isn't too difficult to add support for custom field types in peewee. In this example we will create a UUID field for postgresql (which has a native UUID column type). To add a custom field type you need to first identify what type of column the field data will be stored in. If you just want to add python behavior atop, say, a decimal field (for instance to make a currency field), you would just subclass :py:class:`DecimalField`. On the other hand, if the database offers a custom column type, you will need to let peewee know. This is controlled by the :py:attr:`Field.db_field` attribute. Let's start by defining our UUID field: .. code-block:: python class UUIDField(Field): db_field = 'uuid' We will store the UUIDs in a native UUID column. Since psycopg2 treats the data as a string by default, we will add two methods to the field to handle: * The data coming out of the database to be used in our application * The data from our python app going into the database .. code-block:: python import uuid class UUIDField(Field): db_field = 'uuid' def db_value(self, value): return str(value) # convert UUID to str def python_value(self, value): return uuid.UUID(value) # convert str to UUID Now, we need to let the database know how to map this *uuid* label to an actual *uuid* column type in the database. There are two ways of doing this: 1. Specify the overrides in the :py:class:`Database` constructor: .. code-block:: python db = PostgresqlDatabase('my_db', fields={'uuid': 'uuid'}) 2. Register them class-wide using :py:meth:`Database.register_fields`: .. code-block:: python # Will affect all instances of PostgresqlDatabase PostgresqlDatabase.register_fields({'uuid': 'uuid'}) That is it! Some fields may support exotic operations; the postgresql HStore field, for example, acts like a key/value store and has custom operators for things like *contains* and *update*. You can specify :ref:`custom operations ` as well. For example code, check out the source code for the :py:class:`HStoreField`, in ``playhouse.postgres_ext``. Creating model tables --------------------- In order to start using our models, it's necessary to open a connection to the database and create the tables first. Peewee will run the necessary *CREATE TABLE* queries, additionally creating any constraints and indexes. .. code-block:: python # Connect to our database. db.connect() # Create the tables. db.create_tables([User, Tweet])
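To double-check that everything worked, you can ask the database for its table listing. A minimal sketch (the names reflect each model's default ``db_table``, the lowercased class name):

.. code-block:: pycon

    >>> db.get_tables()
    ['tweet', 'user']

..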
note:: Strictly speaking, it is not necessary to call :py:meth:`~Database.connect` but it is good practice to be explicit. That way if something goes wrong, the error occurs at the connect step, rather than some arbitrary time later. .. note:: Peewee can determine if your tables already exist, and conditionally create them: .. code-block:: python # Only create the tables if they do not exist. db.create_tables([User, Tweet], safe=True) After you have created your tables, if you choose to modify your database schema (by adding, removing or otherwise changing the columns) you will need to either: * Drop the table and re-create it. * Run one or more *ALTER TABLE* queries. Peewee comes with a schema migration tool which can greatly simplify this. Check the :ref:`schema migrations ` docs for details. .. _model-options: Model options and table metadata -------------------------------- In order not to pollute the model namespace, model-specific configuration is placed in a special class called *Meta* (a convention borrowed from the django framework): .. code-block:: python from peewee import * contacts_db = SqliteDatabase('contacts.db') class Person(Model): name = CharField() class Meta: database = contacts_db This instructs peewee that whenever a query is executed on *Person* to use the contacts database. .. note:: Take a look at :ref:`the sample models ` - you will notice that we created a ``BaseModel`` that defined the database, and then extended. This is the preferred way to define a database and create models. Once the class is defined, you should not access ``ModelClass.Meta``, but instead use ``ModelClass._meta``: .. code-block:: pycon

    >>> Person.Meta
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: type object 'Person' has no attribute 'Meta'

    >>> Person._meta
    <peewee.ModelOptions object at 0x...>

The :py:class:`ModelOptions` class implements several methods which may be of use for retrieving model metadata (such as lists of fields, foreign key relationships, and more). .. code-block:: pycon

    >>> Person._meta.fields
    {'id': <peewee.PrimaryKeyField object at 0x...>,
     'name': <peewee.CharField object at 0x...>}

    >>> Person._meta.primary_key
    <peewee.PrimaryKeyField object at 0x...>

    >>> Person._meta.database
    <peewee.SqliteDatabase object at 0x...>

There are several options you can specify as ``Meta`` attributes. While most options are inheritable, some are table-specific and will not be inherited by subclasses.

===================== ====================================================== ============
Option                Meaning                                                Inheritable?
===================== ====================================================== ============
``database``          database for model                                     yes
``db_table``          name of the table to store data                        no
``db_table_func``     function that accepts model and returns a table name   yes
``indexes``           a list of fields to index                              yes
``order_by``          a list of fields to use for default ordering           yes
``primary_key``       a :py:class:`CompositeKey` instance                    yes
``table_alias``       an alias to use for the table in queries               no
``schema``            the database schema for the model                      yes
``constraints``       a list of table constraints                            yes
``validate_backrefs`` ensure backrefs do not conflict with other attributes  yes
``only_save_dirty``   when calling model.save(), only save dirty fields      yes
===================== ====================================================== ============

Here is an example showing inheritable versus non-inheritable attributes: .. code-block:: pycon

    >>> db = SqliteDatabase(':memory:')
    >>> class ModelOne(Model):
    ...     class Meta:
    ...         database = db
    ...         db_table = 'model_one_tbl'
    ...
    >>> class ModelTwo(ModelOne):
    ...     pass
    ...
>>> ModelOne._meta.database is ModelTwo._meta.database True >>> ModelOne._meta.db_table == ModelTwo._meta.db_table False Meta.order_by ^^^^^^^^^^^^^ Specifying a default ordering is, in my opinion, a bad idea. It's better to be explicit in your code when you want to sort your results. That said, to specify a default ordering, the syntax is similar to that of Django. ``Meta.order_by`` is a tuple of field names, and to indicate descending ordering, the field name is prefixed by a ``'-'``. .. code-block:: python class Person(Model): first_name = CharField() last_name = CharField() dob = DateField() class Meta: # Order people by last name, first name. If two people have the # same first and last, order them youngest to oldest. order_by = ('last_name', 'first_name', '-dob') Meta.primary_key ^^^^^^^^^^^^^^^^ The ``Meta.primary_key`` attribute is used to specify either a :py:class:`CompositeKey` or to indicate that the model has *no* primary key. Composite primary keys are discussed in more detail here: :ref:`composite-key`. To indicate that a model should not have a primary key, then set ``primary_key = False``. Examples: .. code-block:: python class BlogToTag(Model): """A simple "through" table for many-to-many relationship.""" blog = ForeignKeyField(Blog) tag = ForeignKeyField(Tag) class Meta: primary_key = CompositeKey('blog', 'tag') class NoPrimaryKey(Model): data = IntegerField() class Meta: primary_key = False .. _model_indexes: Indexes and Constraints ----------------------- Peewee can create indexes on single or multiple columns, optionally including a *UNIQUE* constraint. Peewee also supports user-defined constraints on both models and fields. Single-column indexes and constraints ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Single column indexes are defined using field initialization parameters. The following example adds a unique index on the *username* field, and a normal index on the *email* field: .. code-block:: python class User(Model): username = CharField(unique=True) email = CharField(index=True) To add a user-defined constraint on a column, you can pass it in using the ``constraints`` parameter. You may wish to specify a default value as part of the schema, or add a ``CHECK`` constraint, for example: .. code-block:: python class Product(Model): name = CharField(unique=True) price = DecimalField(constraints=[Check('price < 10000')]) created = DateTimeField( constraints=[SQL("DEFAULT (datetime('now'))")]) Multi-column indexes ^^^^^^^^^^^^^^^^^^^^ Multi-column indexes are defined as *Meta* attributes using a nested tuple. Each database index is a 2-tuple, the first part of which is a tuple of the names of the fields, the second part a boolean indicating whether the index should be unique. .. code-block:: python class Transaction(Model): from_acct = CharField() to_acct = CharField() amount = DecimalField() date = DateTimeField() class Meta: indexes = ( # create a unique on from/to/date (('from_acct', 'to_acct', 'date'), True), # create a non-unique on from/to (('from_acct', 'to_acct'), False), ) .. note:: Remember to add a **trailing comma** if your tuple of indexes contains only one item: .. code-block:: python class Meta: indexes = ( (('first_name', 'last_name'), True), # Note the trailing comma! ) Table constraints ^^^^^^^^^^^^^^^^^ Peewee allows you to add arbitrary constraints to your :py:class:`Model`, that will be part of the table definition when the schema is created. 
For instance, suppose you have a *people* table with a composite primary key of two columns, the person's first and last name. You wish to have another table relate to the *people* table, and to do this, you will need to define a foreign key constraint: .. code-block:: python class Person(Model): first = CharField() last = CharField() class Meta: primary_key = CompositeKey('first', 'last') class Pet(Model): owner_first = CharField() owner_last = CharField() pet_name = CharField() class Meta: constraints = [SQL('FOREIGN KEY(owner_first, owner_last) ' 'REFERENCES person(first, last)')] You can also implement ``CHECK`` constraints at the table level: .. code-block:: python class Product(Model): name = CharField(unique=True) price = DecimalField() class Meta: constraints = [Check('price < 10000')] .. _non_integer_primary_keys: Non-integer Primary Keys, Composite Keys and other Tricks --------------------------------------------------------- Non-integer primary keys ^^^^^^^^^^^^^^^^^^^^^^^^ If you would like to use a non-integer primary key (which I generally don't recommend), you can specify ``primary_key=True`` when creating a field. When you wish to create a new instance for a model using a non-autoincrementing primary key, you need to be sure to call :py:meth:`~Model.save` specifying ``force_insert=True``. .. code-block:: python from peewee import * class UUIDModel(Model): id = UUIDField(primary_key=True) Auto-incrementing IDs are, as their name says, automatically generated for you when you insert a new row into the database. When you call :py:meth:`~Model.save`, peewee determines whether to do an *INSERT* versus an *UPDATE* based on the presence of a primary key value. Since, with our uuid example, the database driver won't generate a new ID, we need to specify it manually. When we call save() for the first time, pass in ``force_insert = True``: .. code-block:: python # This works because .create() will specify `force_insert=True`. obj1 = UUIDModel.create(id=uuid.uuid4()) # This will not work, however. Peewee will attempt to do an update: obj2 = UUIDModel(id=uuid.uuid4()) obj2.save() # WRONG obj2.save(force_insert=True) # CORRECT # Once the object has been created, you can call save() normally. obj2.save() .. note:: Any ``ForeignKeyField`` pointing to a model with a non-integer primary key will use the same underlying storage type as the primary key it references. .. _composite-key: Composite primary keys ^^^^^^^^^^^^^^^^^^^^^^ Peewee has very basic support for composite keys. In order to use a composite key, you must set the ``primary_key`` attribute of the model options to a :py:class:`CompositeKey` instance: .. code-block:: python class BlogToTag(Model): """A simple "through" table for many-to-many relationship.""" blog = ForeignKeyField(Blog) tag = ForeignKeyField(Tag) class Meta: primary_key = CompositeKey('blog', 'tag')
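Rows in such a table are identified by the combination of both column values. As a brief, hedged sketch (assuming ``Blog`` and ``Tag`` instances named ``blog`` and ``tag``), rows are created and looked up using both fields together:

.. code-block:: python

    # Create a row in the through-table; both halves of the composite
    # key are simply passed as regular field values.
    BlogToTag.create(blog=blog, tag=tag)

    # Look the row back up using both components of the key.
    link = BlogToTag.get(
        (BlogToTag.blog == blog) &
        (BlogToTag.tag == tag))

Manually specifying primary keys ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Sometimes you do not want the database to automatically generate a value for the primary key, for instance when bulk loading relational data. To handle this on a *one-off* basis, you can simply tell peewee to turn off ``auto_increment`` during the import: ..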
code-block:: python data = load_user_csv() # load up a bunch of data User._meta.auto_increment = False # turn off auto incrementing IDs with db.transaction(): for row in data: u = User(id=row[0], username=row[1]) u.save(force_insert=True) # <-- force peewee to insert row User._meta.auto_increment = True If you *always* want to have control over the primary key, simply do not use the :py:class:`PrimaryKeyField` field type, but use a normal :py:class:`IntegerField` (or other column type): .. code-block:: python class User(BaseModel): id = IntegerField(primary_key=True) username = CharField() >>> u = User.create(id=999, username='somebody') >>> u.id 999 >>> User.get(User.username == 'somebody').id 999 Models without a Primary Key ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you wish to create a model with no primary key, you can specify ``primary_key = False`` in the inner ``Meta`` class: .. code-block:: python class MyData(BaseModel): timestamp = DateTimeField() value = IntegerField() class Meta: primary_key = False This will yield the following DDL: .. code-block:: sql CREATE TABLE "mydata" ( "timestamp" DATETIME NOT NULL, "value" INTEGER NOT NULL ) .. warning:: Some model APIs may not work correctly for models without a primary key, for instance :py:meth:`~Model.save` and :py:meth:`~Model.delete_instance` (you can instead use :py:meth:`~Model.insert`, :py:meth:`~Model.update` and :py:meth:`~Model.delete`). Self-referential foreign keys ----------------------------- When creating a hierarchical structure, it is necessary to create a self-referential foreign key which links a child object to its parent. Because the model class is not defined at the time you instantiate the self-referential foreign key, use the special string ``'self'`` to indicate a self-referential foreign key: .. code-block:: python class Category(Model): name = CharField() parent = ForeignKeyField('self', null=True, related_name='children') As you can see, the foreign key points *upward* to the parent object and the back-reference is named *children*. .. attention:: Self-referential foreign-keys should always be ``null=True``. When querying against a model that contains a self-referential foreign key you may sometimes need to perform a self-join. In those cases you can use :py:meth:`Model.alias` to create a table reference. Here is how you might query the category and parent model using a self-join: .. code-block:: python Parent = Category.alias() GrandParent = Category.alias() query = (Category .select(Category, Parent) .join(Parent, on=(Category.parent == Parent.id)) .join(GrandParent, on=(Parent.parent == GrandParent.id)) .where(GrandParent.name == 'some category') .order_by(Category.name)) Circular foreign key dependencies --------------------------------- Sometimes it happens that you will create a circular dependency between two tables. .. note:: My personal opinion is that circular foreign keys are a code smell and should be refactored (by adding an intermediary table, for instance). Adding circular foreign keys with peewee is a bit tricky because at the time you are defining either foreign key, the model it points to will not have been defined yet, causing a ``NameError``. .. code-block:: python class User(Model): username = CharField() favorite_tweet = ForeignKeyField(Tweet, null=True) # NameError!! class Tweet(Model): message = TextField() user = ForeignKeyField(User, related_name='tweets') One option is to simply use an :py:class:`IntegerField` to store the raw ID: .. 
code-block:: python class User(Model): username = CharField() favorite_tweet_id = IntegerField(null=True) By using :py:class:`DeferredRelation` we can get around the problem and still use a foreign key field: .. code-block:: python # Create a reference object to stand in for our as-yet-undefined Tweet model. DeferredTweet = DeferredRelation() class User(Model): username = CharField() # Tweet has not been defined yet so use the deferred reference. favorite_tweet = ForeignKeyField(DeferredTweet, null=True) class Tweet(Model): message = TextField() user = ForeignKeyField(User, related_name='tweets') # Now that Tweet is defined, we can initialize the reference. DeferredTweet.set_model(Tweet) After initializing the deferred relation, the foreign key fields are now correctly set up. There is one more quirk to watch out for, though. When you call :py:class:`~Model.create_table` we will again encounter the same issue. For this reason peewee will not automatically create a foreign key constraint for any *deferred* foreign keys. Here is how to create the tables: .. code-block:: python # Foreign key constraint from User -> Tweet will NOT be created because the # Tweet table does not exist yet. `favorite_tweet` will just be a regular # integer field: User.create_table() # Foreign key constraint from Tweet -> User will be created normally. Tweet.create_table() # Now that both tables exist, we can create the foreign key from User -> Tweet: # NOTE: this will not work in SQLite! db.create_foreign_key(User, User.favorite_tweet) .. warning:: SQLite does not support adding constraints to existing tables through the ``ALTER TABLE`` statement. peewee-2.10.2/docs/peewee/more-resources.rst000066400000000000000000000042661316645060400207710ustar00rootroot00000000000000.. _more-resources: Additional Resources ==================== I've written a number of blog posts about building applications and web-services with peewee (and usually Flask). If you'd like to see some "real-life" applications that use peewee, the following resources may be useful: * `How to make a Flask blog in one hour or less `_. * `Building a note-taking app with Flask and Peewee `_ as well as `Part 2 `_ and `Part 3 `_. * `Analytics web service built with Flask and Peewee `_. * `Personalized news digest (with a boolean query parser!) `_. * `Using peewee to explore CSV files `_. * `Structuring Flask apps with Peewee `_. * `Creating a lastpass clone with Flask and Peewee `_. * `Building a web-based encrypted file manager with Flask, peewee and S3 `_. * `Creating a bookmarking web-service that takes screenshots of your bookmarks `_. * `Building a pastebin, wiki and a bookmarking service using Flask and Peewee `_. * `Encrypted databases with Python and SQLCipher `_. * `Dear Diary, an Encrypted Command-Line Diary `_. peewee-2.10.2/docs/peewee/playhouse.rst000066400000000000000000005105441316645060400200310ustar00rootroot00000000000000.. _playhouse: Playhouse, extensions to Peewee =============================== Peewee comes with numerous extension modules which are collected under the ``playhouse`` namespace. Despite the silly name, there are some very useful extensions, particularly those that expose vendor-specific database features like the :ref:`sqlite_ext` and :ref:`postgres_ext` extensions. Below you will find a loosely organized listing of the various modules that make up the ``playhouse``. 
**Database drivers / vendor-specific database functionality** * :ref:`sqlite_ext` * :ref:`sqliteq` * :ref:`sqlite_udf` * :ref:`apsw` * :ref:`berkeleydb` * :ref:`sqlcipher_ext` * :ref:`postgres_ext` **High-level features** * :ref:`extra-fields` * :ref:`shortcuts` * :ref:`hybrid` * :ref:`signals` * :ref:`dataset` * :ref:`kv` * :ref:`gfk` * :ref:`csv_utils` **Database management and framework integration** * :ref:`pwiz` * :ref:`migrate` * :ref:`pool` * :ref:`reflection` * :ref:`db_url` * :ref:`read_slaves` * :ref:`test_utils` * :ref:`pskel` * :ref:`flask_utils` * :ref:`djpeewee` .. _sqlite_ext: Sqlite Extensions ----------------- The SQLite extensions module provides support for some interesting sqlite-only features: * Define custom aggregates, collations and functions. * Support for FTS3/4 (sqlite full-text search) with :ref:`BM25 ranking `. * C extension providing fast implementations of ranking and other utility functions. * Support for the new FTS5 search extension. * Specify isolation level in transactions. * Support for virtual tables and SQLite C extensions. * Support for the `closure table `_ extension, which allows efficient querying of hierarchical tables. sqlite_ext API notes ^^^^^^^^^^^^^^^^^^^^ .. py:class:: SqliteExtDatabase(database, pragmas=(), c_extensions=True, **kwargs) :param pragmas: A list or tuple of 2-tuples containing ``PRAGMA`` settings to configure on a per-connection basis. :param bool c_extensions: Boolean flag indicating whether to use the fast implementations of various SQLite user-defined functions. If Cython was installed when you built ``peewee``, then these functions should be available. If not, Peewee will fall back to using the slower pure-Python functions. Subclass of the :py:class:`SqliteDatabase` that provides some advanced features only offered by Sqlite. * Register custom aggregates, collations and functions * Support for SQLite virtual tables and C extensions * Specify a row factory * Advanced transactions (specify isolation level) .. py:method:: aggregate([name=None[, num_params=-1]]) Class-decorator for registering custom aggregation functions. :param name: string name for the aggregate, defaults to the name of the class. :param num_params: integer representing number of parameters the aggregate function accepts. The default value, ``-1``, indicates the aggregate can accept any number of parameters. .. code-block:: python @db.aggregate('product', 1) class Product(object): """Like sum, except calculate the product of a series of numbers.""" def __init__(self): self.product = 1 def step(self, value): self.product *= value def finalize(self): return self.product # To use this aggregate: product = (Score .select(fn.product(Score.value)) .scalar()) .. py:method:: unregister_aggregate(name) Unregister the given aggregate function. .. py:method:: collation([name]) Function decorator for registering a custom collation. :param name: string name to use for this collation. .. code-block:: python @db.collation() def collate_reverse(s1, s2): return -cmp(s1, s2) # To use this collation: Book.select().order_by(collate_reverse.collation(Book.title)) As you might have noticed, the original ``collate_reverse`` function has a special attribute called ``collation`` attached to it. This extra attribute provides a shorthand way to generate the SQL necessary to use our custom collation. .. py:method:: unregister_collation(name) Unregister the given collation function. .. py:method:: func([name[, num_params]]) Function decorator for registering user-defined functions. 
:param name: name to use for this function. :param num_params: number of parameters this function accepts. If not provided, peewee will introspect the function for you. .. code-block:: python @db.func() def title_case(s): return s.title() # Use in the select clause... titled_books = Book.select(fn.title_case(Book.title)) @db.func() def sha1(s): return hashlib.sha1(s).hexdigest() # Use in the where clause... user = User.select().where( (User.username == username) & (fn.sha1(User.password) == password_hash)).get() .. py:method:: unregister_function(name) Unregister the given user-defined function. .. py:method:: load_extension(extension) Load the given C extension. If a connection is currently open in the calling thread, then the extension will be loaded for that connection as well as all subsequent connections. For example, if you've compiled the closure table extension and wish to use it in your application, you might write: .. code-block:: python db = SqliteExtDatabase('my_app.db') db.load_extension('closure') .. py:method:: unload_extension(name) Unload the given SQLite extension. .. py:class:: VirtualModel Subclass of :py:class:`Model` that signifies the model operates using a virtual table provided by a sqlite extension. Creating a virtual model is easy: simply subclass ``VirtualModel`` and specify the extension module and any options: .. code-block:: python class MyVirtualModel(VirtualModel): class Meta: database = db extension_module = 'nextchar' extension_options = {} .. py:attribute:: Meta.extension_module = 'name of sqlite extension' .. py:attribute:: Meta.extension_options = {'tokenize': 'porter', etc} SQLite virtual tables often support configuration via arbitrary key/value options which are included in the ``CREATE TABLE`` statement. To configure a virtual table, you can specify options like this: .. code-block:: python class SearchIndex(FTSModel): content = SearchField() metadata = SearchField() class Meta: database = my_db extension_options = { 'prefix': [2, 3], 'tokenize': 'porter', } .. _sqlite_fts: .. py:class:: FTSModel Model class that provides support for Sqlite's full-text search extension. Models should be defined normally; however, there are a couple of caveats: * Unique constraints, not null constraints, check constraints and foreign keys are not supported. * Indexes on fields and multi-column indexes are ignored completely. * Sqlite will treat all column types as ``TEXT`` (although you can store other data types, Sqlite will treat them as text). * FTS models contain a ``docid`` field which is automatically created and managed by SQLite (unless you choose to explicitly set it during model creation). Lookups on this column **are performant**. ``sqlite_ext`` provides a :py:class:`SearchField` field class which should be used on ``FTSModel`` implementations instead of the regular peewee field types. This will help prevent you from accidentally creating invalid column constraints. Because of the lack of secondary indexes, it usually makes sense to use the ``docid`` primary key as a pointer to a row in a regular table. For example: .. code-block:: python class Document(Model): author = ForeignKeyField(User, related_name='documents') title = TextField(null=False, unique=True) content = TextField(null=False) timestamp = DateTimeField() class Meta: database = db class DocumentIndex(FTSModel): title = SearchField() content = SearchField() class Meta: database = db # Use the porter stemming algorithm to tokenize content. 
extension_options = {'tokenize': 'porter'} To store a document in the document index, we will ``INSERT`` a row into the ``DocumentIndex`` table, manually setting the ``docid``: .. code-block:: python def store_document(document): DocumentIndex.insert({ DocumentIndex.docid: document.id, DocumentIndex.title: document.title, DocumentIndex.content: document.content}).execute() To perform a search and return ranked results, we can query the ``Document`` table and join on the ``DocumentIndex``: .. code-block:: python def search(phrase): # Query the search index and join the corresponding Document # object on each search result. return (Document .select() .join( DocumentIndex, on=(Document.id == DocumentIndex.docid)) .where(DocumentIndex.match(phrase)) .order_by(DocumentIndex.bm25())) .. warning:: All SQL queries on ``FTSModel`` classes will be slow **except** full-text searches and ``docid`` lookups. Continued examples: .. code-block:: python # Use the "match" operation for FTS queries. matching_docs = (DocumentIndex .select() .where(DocumentIndex.match('some query'))) # To sort by best match, use the custom "rank" function. best = (DocumentIndex .select() .where(DocumentIndex.match('some query')) .order_by(DocumentIndex.rank())) # Or use the shortcut method: best = DocumentIndex.search('some phrase') # Peewee allows you to specify weights for columns. # Matches in the title will be 2x more valuable than matches # in the content field: best = DocumentIndex.search( 'some phrase', weights=[2.0, 1.0], ) Examples using the BM25 ranking algorithm: .. code-block:: python # you can also use the BM25 algorithm to rank documents: best = (DocumentIndex .select() .where(DocumentIndex.match('some query')) .order_by(DocumentIndex.bm25())) # There is a shortcut method for bm25 as well: best_bm25 = DocumentIndex.search_bm25('some phrase') # BM25 allows you to specify weights for columns. # Matches in the title will be 2x more valuable than matches # in the content field: best_bm25 = DocumentIndex.search_bm25( 'some phrase', weights=[2.0, 1.0], ) If the primary source of the content you are indexing exists in a separate table, you can save some disk space by instructing SQLite to not store an additional copy of the search index content. SQLite will still create the metadata and data-structures needed to perform searches on the content, but the content itself will not be stored in the search index. To accomplish this, you can specify a table or column using the ``content`` option. The `FTS4 documentation `_ has more information. Here is a short code snippet illustrating how to implement this with peewee: .. code-block:: python class Blog(Model): title = CharField() pub_date = DateTimeField() content = TextField() # we want to search this. class Meta: database = db class BlogIndex(FTSModel): content = SearchField() class Meta: database = db extension_options = {'content': Blog.content} db.create_tables([Blog, BlogIndex]) # Now, we can manage content in the FTSBlog. To populate it with # content: BlogIndex.rebuild() # Optimize the index. BlogIndex.optimize() The ``content`` option accepts either a single :py:class:`Field` or a :py:class:`Model` and can reduce the amount of storage used. However, content will need to be manually moved to/from the associated ``FTSModel``. **FTSModel API methods:** .. py:classmethod:: create_table([fail_silently=False[, **options]]) :param boolean fail_silently: do not re-create if table already exists. :param options: options passed along when creating the table, e.g. ``content``. 
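A hedged sketch, reusing the ``Document`` and ``DocumentIndex`` models from above: table-creation options such as ``content`` can be passed here instead of being declared in ``Meta.extension_options``:

.. code-block:: python

    # Roughly equivalent to declaring
    # extension_options = {'content': Document.content}
    # in the Meta of the DocumentIndex model.
    DocumentIndex.create_table(
        fail_silently=True,
        content=Document.content)
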
.. py:classmethod:: match(term) Shorthand for generating a ``MATCH`` expression for the given term(s). .. code-block:: python query = (DocumentIndex .select() .where(DocumentIndex.match('search phrase'))) for doc in query: print 'match: ', doc.title .. py:classmethod:: search(term[, weights=None[, with_score=False[, score_alias='score']]]) Shorthand way of searching for a term and sorting results by the quality of the match. This is equivalent to the :py:meth:`~FTSModel.rank` example code presented below. :param str term: Search term to use. :param weights: A list of weights for the columns, ordered with respect to the column's position in the table. **Or**, a dictionary keyed by the field or field name and mapped to a value. :param with_score: Whether the score should be returned as part of the ``SELECT`` statement. :param str score_alias: Alias to use for the calculated rank score. This is the attribute you will use to access the score if ``with_score=True``. .. code-block:: python # Simple search. docs = DocumentIndex.search('search term') for result in docs: print result.title # More complete example. docs = DocumentIndex.search( 'search term', weights={'title': 2.0, 'content': 1.0}, with_score=True, score_alias='search_score') for result in docs: print result.title, result.search_score .. py:classmethod:: rank([col1_weight, col2_weight...coln_weight]) Generate an expression that will calculate and return the quality of the search match. This ``rank`` can be used to sort the search results. The lower the ``rank``, the better the match. The ``rank`` function accepts optional parameters that allow you to specify weights for the various columns. If no weights are specified, all columns are considered of equal importance. .. code-block:: python query = (DocumentIndex .select( DocumentIndex, DocumentIndex.rank().alias('score')) .where(DocumentIndex.match('search phrase')) .order_by(DocumentIndex.rank())) for search_result in query: print search_result.title, search_result.score .. _sqlite_bm25: .. py:classmethod:: search_bm25(term[, weights=None[, with_score=False[, score_alias='score']]]) Shorthand way of searching for a term and sorting results by the quality of the match, as determined by the BM25 algorithm. This is equivalent to the :py:meth:`~FTSModel.bm25` example code presented below. :param str term: Search term to use. :param weights: A list of weights for the columns, ordered with respect to the column's position in the table. **Or**, a dictionary keyed by the field or field name and mapped to a value. :param with_score: Whether the score should be returned as part of the ``SELECT`` statement. :param str score_alias: Alias to use for the calculated rank score. This is the attribute you will use to access the score if ``with_score=True``. .. code-block:: python # Simple search. docs = DocumentIndex.search('search term') for result in docs: print result.title # More complete example. docs = DocumentIndex.search( 'search term', weights={'title': 2.0, 'content': 1.0}, with_score=True, score_alias='search_score') for result in docs: print result.title, result.search_score .. py:classmethod:: bm25([col1_weight, col2_weight...coln_weight]) Generate an expression that will calculate and return the quality of the search match using the `BM25 algorithm `_. This value can be used to sort the search results, and the lower the value the better the match. The ``bm25`` function accepts optional parameters that allow you to specify weights for the various columns. 
If no weights are specified, all columns are considered of equal importance. .. code-block:: python query = (DocumentIndex .select( DocumentIndex, DocumentIndex.bm25().alias('score')) .where(DocumentIndex.match('search phrase')) .order_by(DocumentIndex.bm25())) for search_result in query: print search_result.title, search_result.score .. py:classmethod:: rebuild() Rebuild the search index -- this only works when the ``content`` option was specified during table creation. .. py:classmethod:: optimize() Optimize the search index. .. py:class:: SearchField([unindexed=False[, db_column=None[, coerce=None]]]) :param unindexed: Whether the contents of this field should be excluded from the full-text search index. :param db_column: Name of the underlying database column. :param coerce: Function used to convert the value from the database into the appropriate Python format. .. py:class:: JSONField() Field class suitable for working with JSON stored and manipulated using the `JSON1 extension `_. Most functions that operate on JSON fields take a ``path`` argument. The JSON documents specify that the path should begin with ``'$'`` followed by zero or more instances of ``'.objectlabel'`` or ``'[arrayindex]'``. Peewee simplifies this by allowing you to omit the ``'$'`` character and just specify the path you need or ``None`` for an empty path: * ``path=''`` --> ``'$'`` * ``path='tags'`` --> ``'$.tags'`` * ``path='[0][1].bar'`` --> ``'$[0][1].bar'`` * ``path='metadata[0]'`` --> ``'$.metadata[0]'`` * ``path='user.data.email'`` --> ``'$.user.data.email'`` .. py:method:: length([path=None]) Return the number of items in a JSON array at the given path. If the path is omitted, then return the number of items in the top-level array. `SQLite documentation `_. .. py:method:: extract(path) Return the value at the given path. If the value is a JSON object or array, it will be decoded into a ``dict`` or ``list``. If the value is a scalar type, string or ``null`` then it will be returned as the appropriate Python type. `SQLite documentation `_. Example: .. code-block:: python # data looks like {'post': {'title': 'post 1', 'body': '...'}, ...} query = (Post .select(Post.data.json_extract('post.title')) .tuples()) # Only the `title` value is extracted from the JSON data. for title, in query: print title .. py:method:: set(path, value[, path2, value2...]) Set values stored in the input JSON string using the given path/value pairs. The ``set`` function returns a **new** JSON string formed by updating the input JSON with the given path/value pairs. If the path does not exist, it **will** be created. Similarly, if the path does exist, it **will** be overwritten. `SQLite documentation `_. .. _updating-json: Example: .. code-block:: python PostAlias = Post.alias() set_query = (PostAlias .select(PostAlias.data.set( 'title', 'New title', 'tags', ['list', 'of', 'new', 'tags'], 'totally.new.field', 3, 'status.published', True)) .where(PostAlias.id == Post.id)) # Update multiple fields at one time on the Post # with the title "Old title". query = (Post .update(data=set_query) .where(Post.data.extract('title') == 'Old title')) query.execute() post = (Post .select() .where(Post.data.extract('title') == 'New title') .get()) # Our new data has been added, even nested objects that did not # exist before. Any pre-existing data has also been preserved, # provided it was not over-written. 
assert post.data == { 'title': 'New title', 'tags': ['list', 'of', 'new', 'tags'], 'totally': {'new': {'field': 3}}, 'status': {'published': True, 'draft': False}, 'other-field': ['this', 'was', 'here', 'before'], 'another-old-field': 'etc, etc'} .. py:method:: insert(path, value[, path2, value2...]) Insert the given path/value pairs into the JSON string stored in the field. The ``insert`` function returns a **new** JSON string formed by updating the input JSON with the given path/value pairs. If the path already exists, it will **not** be overwritten. `SQLite documentation `_. .. py:method:: replace(path, value[, path2, value2...]) Replace values stored in the input JSON string using the given path/value pairs. The ``replace`` function returns a **new** JSON string formed by updating the input JSON with the given path/value pairs. If the path does not exist, it will **not** be created. `SQLite documentation `_. .. py:method:: remove(*paths) Remove values referenced by the given path(s). The ``remove`` function returns a **new** JSON string formed by removing the specified paths from the input JSON string. The process for removing fields from a JSON column is similar to the way you :py:meth:`~JSONField.set` them. For a code example, see :ref:`updating JSON data `. `SQLite documentation `_. .. py:method:: json_type([path=None]) Return a string indicating the type of object stored in the field. You can optionally supply a path to specify a sub-item. The types of objects are: * object * array * integer * real * true * false * text * null <-- the string "null" means an actual NULL value * NULL <-- an actual NULL value means the path was not found `SQLite documentation `_. .. py:method:: children([path=None]) The ``children`` function corresponds to ``json_each``, a table-valued function that walks the JSON value provided and returns the immediate children of the top-level array or object. If a path is specified, then that path is treated as the top-most element. The rows returned by calls to ``children()`` have the following attributes: * ``key``: the key of the current element relative to its parent. * ``value``: the value of the current element. * ``type``: one of the data-types (see :py:meth:`~JSONField.json_type`). * ``atom``: the scalar value for primitive types, ``NULL`` for arrays and objects. * ``id``: a unique ID referencing the current node in the tree. * ``parent``: the ID of the containing node. * ``fullkey``: the full path describing the current element. * ``path``: the path to the container of the current row. For examples, see `my blog post on JSON1 `_. `SQLite documentation `_. .. py:method:: tree([path=None]) The ``tree`` function corresponds to ``json_tree``, a table-valued function that walks the JSON value provided and recursively returns all descendants of the given root node. If a path is specified, then that path is treated as the root node element. The rows returned by calls to ``tree()`` have the same attributes as rows returned by calls to :py:meth:`~JSONField.children`. For examples, see `my blog post on JSON1 `_. `SQLite documentation `_. .. py:class:: PrimaryKeyAutoIncrementField() Subclass of :py:class:`PrimaryKeyField` that uses a monotonically-increasing value for the primary key. This differs from the default SQLite primary key, which simply uses the "max + 1" approach to determining the next ID.
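A minimal sketch of its use (``db`` is assumed to be a :py:class:`SqliteExtDatabase` instance); the field is simply declared as the model's primary key:

.. code-block:: python

    class LogEntry(Model):
        # IDs increase monotonically, even when rows are deleted.
        id = PrimaryKeyAutoIncrementField()
        message = TextField()

        class Meta:
            database = db

.. py:class:: RowIDField() Subclass of :py:class:`PrimaryKeyField` that provides access to the underlying ``rowid`` field used internally by SQLite. ..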
note:: When added to a Model, this field will act as the primary key. However, this field will not be included by default when selecting rows from the table. .. py:class:: DocIDField() Subclass of :py:class:`PrimaryKeyField` that provides access to the underlying ``docid`` field used internally by SQLite's FTS3/4 virtual tables. .. note:: This field should not be created manually, as it is only needed on ``FTSModel`` classes, which include it already. .. py:function:: match(lhs, rhs) Generate a SQLite `MATCH` expression for use in full-text searches. .. code-block:: python Document.select().where(match(Document.content, 'search term')) .. py:class:: FTS5Model() Model class that should be used to implement virtual tables using the FTS5 extension. Documentation on the FTS5 extension `can be found here `_. This extension behaves very similarly to the FTS3 and FTS4 extensions, and the ``FTS5Model`` supports many of the same APIs as :py:class:`FTSModel`. The ``FTS5`` extension is more strict, enforcing that no column may define any type or constraints. For this reason, only :py:class:`SearchField` objects can be used with ``FTS5Model`` implementations. Additionally, ``FTS5`` comes with a built-in implementation of the BM25 ranking function. Therefore, the ``search`` and ``search_bm25`` methods have been overridden to use the builtin ranking functions rather than user-defined functions. .. py:classmethod:: fts5_installed() Return a boolean indicating whether the FTS5 extension is installed. If it is not installed, an attempt will be made to load the extension. .. py:classmethod:: search(term[, weights=None[, with_score=False[, score_alias='score']]]) Shorthand way of searching for a term and sorting results by the quality of the match. This is equivalent to the built-in ``rank`` value provided by the ``FTS5`` extension. :param str term: Search term to use. :param weights: A list of weights for the columns, ordered with respect to the column's position in the table. **Or**, a dictionary keyed by the field or field name and mapped to a value. :param with_score: Whether the score should be returned as part of the ``SELECT`` statement. :param str score_alias: Alias to use for the calculated rank score. This is the attribute you will use to access the score if ``with_score=True``. .. code-block:: python # Simple search. docs = DocumentIndex.search('search term') for result in docs: print result.title # More complete example. docs = DocumentIndex.search( 'search term', weights={'title': 2.0, 'content': 1.0}, with_score=True, score_alias='search_score') for result in docs: print result.title, result.search_score .. py:classmethod:: search_bm25(term[, weights=None[, with_score=False[, score_alias='score']]]) With FTS5, the ``search_bm25`` method is the same as the :py:meth:`FTS5Model.search` method. .. py:classmethod:: VocabModel([table_type='row'|'col'[, table_name=None]]) :param table_type: Either ``'row'`` or ``'col'``. :param table_name: Name for the vocab table. If not specified, will be "fts5tablename_v". .. _sqlite_closure: .. py:function:: ClosureTable(model_class[, foreign_key=None[, referencing_class=None, referencing_key=None]]) Factory function for creating a model class suitable for working with a `transitive closure `_ table. Closure tables are :py:class:`VirtualModel` subclasses that work with the transitive closure SQLite extension. These special tables are designed to make it easy to efficiently query hierarchical data. 
The SQLite extension manages an AVL tree behind-the-scenes, transparently updating the tree when your table changes and making it easy to perform common queries on hierarchical data. To use the closure table extension in your project, you need: 1. A copy of the SQLite extension. The source code can be found in the `SQLite code repository `_ or by cloning `this gist `_: .. code-block:: console $ git clone https://gist.github.com/coleifer/7f3593c5c2a645913b92 closure $ cd closure/ 2. Compile the extension as a shared library, e.g. .. code-block:: console $ gcc -g -fPIC -shared closure.c -o closure.so 3. Create a model for your hierarchical data. The only requirement here is that the model has an integer primary key and a self-referential foreign key. Any additional fields are fine. .. code-block:: python class Category(Model): name = CharField() metadata = TextField() parent = ForeignKeyField('self', index=True, null=True) # Required. # Generate a model for the closure virtual table. CategoryClosure = ClosureTable(Category) The self-referentiality can also be achieved via an intermediate table (for a many-to-many relation). .. code-block:: python class User(Model): name = CharField() class UserRelations(Model): user = ForeignKeyField(User) knows = ForeignKeyField(User, related_name='_known_by') class Meta: primary_key = CompositeKey('user', 'knows') # Alternatively, a unique index on both columns. # Generate a model for the closure virtual table, specifying the # UserRelations model as the referencing table. UserClosure = ClosureTable( User, referencing_class=UserRelations, foreign_key=UserRelations.knows, referencing_key=UserRelations.user) 4. In your application code, make sure you load the extension when you instantiate your :py:class:`Database` object. This is done by passing the path to the shared library to the :py:meth:`~SqliteExtDatabase.load_extension` method. .. code-block:: python db = SqliteExtDatabase('my_database.db') db.load_extension('/path/to/closure') :param model_class: The model class containing the nodes in the tree. :param foreign_key: The self-referential parent-node field on the model class. If not provided, peewee will introspect the model to find a suitable key. :param referencing_class: The intermediate table for a many-to-many relationship. :param referencing_key: For a many-to-many relationship: the originating side of the relation. :return: Returns a :py:class:`VirtualModel` for working with a closure table. .. warning:: There are two caveats you should be aware of when using the ``transitive_closure`` extension. First, it requires that your *source model* have an integer primary key. Second, it is strongly recommended that you create an index on the self-referential foreign key. Example code: .. code-block:: python db = SqliteExtDatabase('my_database.db') db.load_extension('/path/to/closure') class Category(Model): name = CharField() parent = ForeignKeyField('self', index=True, null=True) # Required. class Meta: database = db CategoryClosure = ClosureTable(Category) # Create the tables if they do not exist. db.create_tables([Category, CategoryClosure], True) It is now possible to perform interesting queries using the data from the closure table: .. code-block:: python # Get all ancestors for a particular node. laptops = Category.get(Category.name == 'Laptops') for parent in CategoryClosure.ancestors(laptops): print parent.name # Computer Hardware # Computers # Electronics # All products # Get all descendants for a particular node. 
hardware = Category.get(Category.name == 'Computer Hardware') for node in CategoryClosure.descendants(hardware): print node.name # Laptops # Desktops # Hard-drives # Monitors # LCD Monitors # LED Monitors The :py:class:`VirtualModel` returned by this function contains a handful of interesting methods. The model will be a subclass of :py:class:`BaseClosureTable`. .. py:class:: BaseClosureTable() .. py:attribute:: id A field for the primary key of the given node. .. py:attribute:: depth A field representing the relative depth of the given node. .. py:attribute:: root A field representing the relative root node. .. py:method:: descendants(node[, depth=None[, include_node=False]]) Retrieve all descendants of the given node. If a depth is specified, only nodes at that depth (relative to the given node) will be returned. .. code-block:: python node = Category.get(Category.name == 'Electronics') # Direct child categories. children = CategoryClosure.descendants(node, depth=1) # Grand-child categories. children = CategoryClosure.descendants(node, depth=2) # Descendants at all depths. all_descendants = CategoryClosure.descendants(node) .. py:method:: ancestors(node[, depth=None[, include_node=False]]) Retrieve all ancestors of the given node. If a depth is specified, only nodes at that depth (relative to the given node) will be returned. .. code-block:: python node = Category.get(Category.name == 'Laptops') # All ancestors. all_ancestors = CategoryClosure.ancestors(node) # Grand-parent category. grandparent = CategoryClosure.ancestors(node, depth=2) .. py:method:: siblings(node[, include_node=False]) Retrieve all nodes that are children of the specified node's parent. .. note:: For an in-depth discussion of the SQLite transitive closure extension, check out this blog post, `Querying Tree Structures in SQLite using Python and the Transitive Closure Extension `_. .. _sqliteq: SqliteQ ------- The ``playhouse.sqliteq`` module provides a subclass of :py:class:`SqliteExtDatabase` that will serialize concurrent writes to a SQLite database. :py:class:`SqliteQueueDatabase` can be used as a drop-in replacement for the regular :py:class:`SqliteDatabase` if you want simple **read and write** access to a SQLite database from **multiple threads**. SQLite only allows one connection to write to the database at any given time. As a result, if you have a multi-threaded application (like a web-server, for example) that needs to write to the database, you may see occasional errors when one or more of the threads attempting to write cannot acquire the lock. :py:class:`SqliteQueueDatabase` is designed to simplify things by sending all write queries through a single, long-lived connection. The benefit is that you get the appearance of multiple threads writing to the database without conflicts or timeouts. The downside, however, is that you cannot issue write transactions that encompass multiple queries -- all writes run in autocommit mode, essentially. .. note:: The module gets its name from the fact that all write queries get put into a thread-safe queue. A single worker thread listens to the queue and executes all queries that are sent to it. Transactions ^^^^^^^^^^^^ Because all queries are serialized and executed by a single worker thread, it is possible for transactional SQL from separate threads to be executed out-of-order. 
In the example below, the transaction started by thread "B" is rolled back by thread "A" (with bad consequences!): * Thread A: UPDATE transplants SET organ='liver', ...; * Thread B: BEGIN TRANSACTION; * Thread B: UPDATE life_support_system SET timer += 60 ...; * Thread A: ROLLBACK; -- Oh no.... Since there is a potential for queries from separate transactions to be interleaved, the :py:meth:`~SqliteQueueDatabase.transaction` and :py:meth:`~SqliteQueueDatabase.atomic` methods are disabled on :py:class:`SqliteQueueDatabase`. For cases when you wish to temporarily write to the database from a different thread, you can use the :py:meth:`~SqliteQueueDatabase.pause` and :py:meth:`~SqliteQueueDatabase.unpause` methods. These methods block the caller until the writer thread is finished with its current workload. The writer then disconnects and the caller takes over until ``unpause`` is called. The :py:meth:`~SqliteQueueDatabase.stop`, :py:meth:`~SqliteQueueDatabase.start`, and :py:meth:`~SqliteQueueDatabase.is_stopped` methods can also be used to control the writer thread. .. note:: Take a look at SQLite's `isolation `_ documentation for more information about how SQLite handles concurrent connections. Code sample ^^^^^^^^^^^ Creating a database instance does not require any special handling. The :py:class:`SqliteQueueDatabase` accepts some special parameters which you should be aware of, though. If you are using `gevent `_, you must specify ``use_gevent=True`` when instantiating your database -- this way Peewee will know to use the appropriate objects for handling queueing, thread creation, and locking. .. code-block:: python from playhouse.sqliteq import SqliteQueueDatabase db = SqliteQueueDatabase( 'my_app.db', use_gevent=False, # Use the standard library "threading" module. autostart=False, # The worker thread now must be started manually. queue_max_size=64, # Max. # of pending writes that can accumulate. results_timeout=5.0) # Max. time to wait for query to be executed. If ``autostart=False``, as in the above example, you will need to call :py:meth:`~SqliteQueueDatabase.start` to bring up the worker threads that will do the actual write query execution. .. code-block:: python @app.before_first_request def _start_worker_threads(): db.start() If you plan on performing SELECT queries or generally want to access the database, you will need to call :py:meth:`~Database.connect` and :py:meth:`~Database.close` as you would with any other database instance. When your application is ready to terminate, use the :py:meth:`~SqliteQueueDatabase.stop` method to shut down the worker thread. If there was a backlog of work, then this method will block until all pending work is finished (though no new work is allowed). .. code-block:: python import atexit @atexit.register def _stop_worker_threads(): db.stop() Lastly, the :py:meth:`~SqliteQueueDatabase.is_stopped` method can be used to determine whether the database writer is up and running.
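For one-off writes from another thread, here is a hedged sketch of the ``pause``/``unpause`` methods described above (assuming the ``db`` from the preceding example and a hypothetical ``User`` model bound to it):

.. code-block:: python

    # Block until the worker thread has drained its queue and disconnected.
    db.pause()

    # The calling thread now writes directly -- no queueing involved.
    User.create(username='huey')

    # Hand the write connection back to the worker thread.
    db.unpause()

.. _sqlite_udf: Sqlite User-Defined Functions ----------------------------- The ``sqlite_udf`` playhouse module contains a number of user-defined functions, aggregates, and table-valued functions, which you may find useful. The functions are grouped in collections and you can register these user-defined extensions individually, by collection, or register everything. Scalar functions are functions which take a number of parameters and return a single value. For example, converting a string to upper-case, or calculating the MD5 hex digest. 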
Aggregate functions are like scalar functions that operate on multiple rows of data, producing a single result. For example, calculating the sum of a list of integers, or finding the smallest value in a particular column. Table-valued functions are simply functions that can return multiple rows of data. For example, a regular-expression search function that returns all the matches in a given string, or a function that accepts two dates and generates all the intervening days. .. note:: To use table-valued functions, you will need to install the ``vtfunc`` module. The ``vtfunc`` module is available `on GitHub `_ or can be installed using ``pip``. Functions, listed by collection name ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Scalar functions are indicated by ``(f)``, aggregate functions by ``(a)``, and table-valued functions by ``(t)``. * ``CONTROL_FLOW`` * :py:func:`if_then_else` (f) * ``DATE`` * :py:func:`strip_tz` (f) * :py:func:`human_delta` (f) * :py:func:`mintdiff` (a) * :py:func:`avgtdiff` (a) * :py:func:`duration` (a) * :py:func:`date_series` (t) * ``FILE`` * :py:func:`file_ext` (f) * :py:func:`file_read` (f) * ``HELPER`` * :py:func:`gzip` (f) * :py:func:`gunzip` (f) * :py:func:`hostname` (f) * :py:func:`toggle` (f) * :py:func:`setting` (f) * :py:func:`clear_toggles` (f) * :py:func:`clear_settings` (f) * ``MATH`` * :py:func:`randomrange` (f) * :py:func:`gauss_distribution` (f) * :py:func:`sqrt` (f) * :py:func:`tonumber` (f) * :py:func:`mode` (a) * :py:func:`minrange` (a) * :py:func:`avgrange` (a) * :py:func:`range` (a) * :py:func:`median` (a) (requires cython) * ``STRING`` * :py:func:`substr_count` (f) * :py:func:`strip_chars` (f) * :py:func:`md5` (f) * :py:func:`sha1` (f) * :py:func:`sha256` (f) * :py:func:`sha512` (f) * :py:func:`adler32` (f) * :py:func:`crc32` (f) * :py:func:`damerau_levenshtein_dist` (f) (requires cython) * :py:func:`levenshtein_dist` (f) (requires cython) * :py:func:`str_dist` (f) (requires cython) * :py:func:`regex_search` (t) .. _apsw: apsw, an advanced sqlite driver ------------------------------- The ``apsw_ext`` module contains a database class suitable for use with the apsw sqlite driver. APSW Project page: https://github.com/rogerbinns/apsw APSW is a really neat library that provides a thin wrapper on top of SQLite's C interface, making it possible to use all of SQLite's advanced features. Here are just a few reasons to use APSW, taken from the documentation: * APSW gives all functionality of SQLite, including virtual tables, virtual file system, blob i/o, backups and file control. * Connections can be shared across threads without any additional locking. * Transactions are managed explicitly by your code. * APSW can handle nested transactions. * Unicode is handled correctly. * APSW is faster. For more information on the differences between apsw and pysqlite, check `the apsw docs `_. How to use the APSWDatabase ^^^^^^^^^^^^^^^^^^^^^^^^^^^ .. code-block:: python from apsw_ext import * db = APSWDatabase(':memory:') class BaseModel(Model): class Meta: database = db class SomeModel(BaseModel): col1 = CharField() col2 = DateTimeField() apsw_ext API notes ^^^^^^^^^^^^^^^^^^ :py:class:`APSWDatabase` extends the :py:class:`SqliteExtDatabase` and inherits its advanced features. .. py:class:: APSWDatabase(database, **connect_kwargs) :param string database: filename of sqlite database :param connect_kwargs: keyword arguments passed to apsw when opening a connection .. py:method:: register_module(mod_name, mod_inst) Provides a way of globally registering a module. 
.. _apsw:

apsw, an advanced sqlite driver
-------------------------------

The ``apsw_ext`` module contains a database class suitable for use with the apsw sqlite driver.

APSW Project page: https://github.com/rogerbinns/apsw

APSW is a really neat library that provides a thin wrapper on top of SQLite's C interface, making it possible to use all of SQLite's advanced features. Here are just a few reasons to use APSW, taken from the documentation:

* APSW gives all functionality of SQLite, including virtual tables, virtual file system, blob i/o, backups and file control.
* Connections can be shared across threads without any additional locking.
* Transactions are managed explicitly by your code.
* APSW can handle nested transactions.
* Unicode is handled correctly.
* APSW is faster.

For more information on the differences between apsw and pysqlite, check `the apsw docs <https://rogerbinns.github.io/apsw/pysqlite.html>`_.

How to use the APSWDatabase
^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: python

    from apsw_ext import *

    db = APSWDatabase(':memory:')

    class BaseModel(Model):
        class Meta:
            database = db

    class SomeModel(BaseModel):
        col1 = CharField()
        col2 = DateTimeField()

apsw_ext API notes
^^^^^^^^^^^^^^^^^^

:py:class:`APSWDatabase` extends the :py:class:`SqliteExtDatabase` and inherits its advanced features.

.. py:class:: APSWDatabase(database, **connect_kwargs)

    :param string database: filename of sqlite database
    :param connect_kwargs: keyword arguments passed to apsw when opening a connection

    .. py:method:: register_module(mod_name, mod_inst)

        Provides a way of globally registering a module. For more information, see the `documentation on virtual tables <https://rogerbinns.github.io/apsw/vtable.html>`_.

        :param string mod_name: name to use for module
        :param object mod_inst: an object implementing the Virtual Table interface

    .. py:method:: unregister_module(mod_name)

        Unregister a module.

        :param string mod_name: name to use for module

.. note:: Be sure to use the ``Field`` subclasses defined in the ``apsw_ext`` module, as they will properly handle adapting the data types for storage. For example, instead of using ``peewee.DateTimeField``, be sure you are importing and using ``playhouse.apsw_ext.DateTimeField``.

.. _berkeleydb:

BerkeleyDB backend
------------------

BerkeleyDB provides a SQLite-compatible API. BerkeleyDB's SQL API has many advantages over SQLite:

* Higher transactions-per-second in multi-threaded environments.
* Built-in replication and hot backup.
* Fewer system calls, less resource utilization.
* Multi-version concurrency control.

For more details, Oracle has published a short technical overview.

In order to use peewee with BerkeleyDB, you need to compile BerkeleyDB with the SQL API enabled. Then compile the Python SQLite driver against BerkeleyDB's sqlite replacement.

Begin by downloading and compiling BerkeleyDB:

.. code-block:: console

    wget http://download.oracle.com/berkeley-db/db-6.0.30.tar.gz
    tar xzf db-6.0.30.tar.gz
    cd db-6.0.30/build_unix
    export CFLAGS='-DSQLITE_ENABLE_FTS3=1 -DSQLITE_ENABLE_FTS3_PARENTHESIS=1 -DSQLITE_ENABLE_UPDATE_DELETE_LIMIT -DSQLITE_SECURE_DELETE -DSQLITE_SOUNDEX -DSQLITE_ENABLE_RTREE=1 -fPIC'
    ../dist/configure --enable-static --enable-shared --enable-sql --enable-sql-compat
    make
    sudo make prefix=/usr/local/ install

Then get a copy of the standard library SQLite driver and build it against BerkeleyDB:

.. code-block:: console

    git clone https://github.com/ghaering/pysqlite
    cd pysqlite
    sed -i "s|#||g" setup.cfg
    python setup.py build
    sudo python setup.py install

You can also find up-to-date step by step instructions on my blog.

.. py:class:: BerkeleyDatabase(database, **kwargs)

    :param bool multiversion: Enable multiversion concurrency control. Default is ``False``.
    :param int page_size: Set the page size ``PRAGMA``. This option only works on new databases.
    :param int cache_size: Set the cache size ``PRAGMA``.

    Subclass of the :py:class:`SqliteExtDatabase` that supports connecting to a BerkeleyDB-backed version of SQLite.

    .. py:classmethod:: check_pysqlite()

        Check whether ``pysqlite2`` was compiled against the BerkeleyDB SQLite. Returns ``True`` or ``False``.

    .. py:classmethod:: check_libsqlite()

        Check whether ``libsqlite3`` is the BerkeleyDB SQLite implementation. Returns ``True`` or ``False``.
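Connecting then looks like any other peewee database. A minimal sketch, assuming the custom BerkeleyDB build described above is installed:

.. code-block:: python

    from playhouse.berkeleydb import BerkeleyDatabase

    # Verify that pysqlite2 was compiled against BerkeleyDB's SQLite
    # before connecting (see the classmethods documented above).
    if not BerkeleyDatabase.check_pysqlite():
        raise RuntimeError('pysqlite2 was not built against BerkeleyDB.')

    db = BerkeleyDatabase('my_app.bdb', multiversion=True)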
.. _sqlcipher_ext:

Sqlcipher backend
-----------------

* Although this extension's code is short, it has not been properly peer-reviewed yet and may have introduced vulnerabilities.
* The code contains minimum values for `passphrase` length and `kdf_iter`, as well as a default value for the latter. **Do not** regard these numbers as advice. Consult the docs at http://sqlcipher.net/sqlcipher-api/ and security experts.

Also note that this code relies on pysqlcipher_ and sqlcipher_, and the code there might have vulnerabilities as well, but since these are widely used crypto modules, we can expect "short zero days" there.

.. _pysqlcipher: https://pypi.python.org/pypi/pysqlcipher
.. _sqlcipher: http://sqlcipher.net

sqlcipher_ext API notes
^^^^^^^^^^^^^^^^^^^^^^^

.. py:class:: SqlCipherDatabase(database, passphrase, kdf_iter=64000, **kwargs)

    Subclass of :py:class:`SqliteDatabase` that stores the database encrypted. Instead of the standard ``sqlite3`` backend, it uses pysqlcipher_: a python wrapper for sqlcipher_, which -- in turn -- is an encrypted wrapper around ``sqlite3``, so the API is *identical* to :py:class:`SqliteDatabase`'s, except for object construction parameters:

    :param database: Path to encrypted database filename to open [or create].
    :param passphrase: Database encryption passphrase: should be at least 8 characters long (or an error is raised), but it is *strongly advised* to enforce better `passphrase strength`_ criteria in your implementation.
    :param kdf_iter: [Optional] number of PBKDF2_ iterations.

    * If the ``database`` file doesn't exist, it will be *created* with encryption by a key derived from ``passphrase`` with ``kdf_iter`` PBKDF2_ iterations.
    * When trying to open an existing database, ``passphrase`` and ``kdf_iter`` should be *identical* to the ones used when it was created.

.. _PBKDF2: https://en.wikipedia.org/wiki/PBKDF2
.. _passphrase strength: https://en.wikipedia.org/wiki/Password_strength

Notes:

* [Hopefully] there's no way to tell whether the passphrase is wrong or the file is corrupt. In both cases -- *the first time we try to access the database* -- a :py:class:`DatabaseError` error is raised, with the *exact* message: ``"file is encrypted or is not a database"``. As mentioned above, this only happens when you *access* the database, so if you need to know *right away* whether the passphrase was correct, you can trigger this check by calling [e.g.] :py:meth:`~Database.get_tables()` (see example below).
* Most applications can expect failed attempts to open the database (common case: prompting the user for ``passphrase``), so the database can't be hardwired into the :py:class:`Meta` of model classes. To defer initialization, pass ``None`` in as the database.

Example:

.. code-block:: python

    db = SqlCipherDatabase(None)

    class BaseModel(Model):
        """Parent for all app's models"""
        class Meta:
            # We won't have a valid db until user enters passphrase.
            database = db

    # Derive our model subclasses
    class Person(BaseModel):
        name = CharField(primary_key=True)

    right_passphrase = False
    while not right_passphrase:
        db.init(
            'testsqlcipher.db',
            passphrase=get_passphrase_from_user())
        try:
            # Actually execute a query against the db to test passphrase.
            db.get_tables()
        except DatabaseError as exc:
            # We only allow a specific [somewhat cryptic] error message.
            if exc.args[0] != 'file is encrypted or is not a database':
                raise exc
            else:
                tell_user_the_passphrase_was_wrong()
                db.init(None)  # Reset the db.
        else:
            # The password was correct.
            right_passphrase = True

See also: a slightly more elaborate example.

.. _postgres_ext:

Postgresql Extensions
---------------------

The postgresql extensions module provides a number of "postgres-only" functions, currently:

* :ref:`hstore support <hstore>`
* :ref:`json support <pgjson>`, including ``jsonb`` for Postgres 9.4.
* :ref:`server-side cursors <server_side_cursors>`
* :ref:`full-text search <pg_fts>`
* :py:class:`ArrayField` field type, for storing arrays.
* :py:class:`HStoreField` field type, for storing key/value pairs.
* :py:class:`IntervalField` field type, for storing ``timedelta`` objects.
* :py:class:`JSONField` field type, for storing JSON data.
* :py:class:`BinaryJSONField` field type for the ``jsonb`` JSON data type.
* :py:class:`TSVectorField` field type, for storing full-text search data.
* :py:class:`DateTimeTZField` field type, a timezone-aware datetime field.
In the future I would like to add support for more of postgresql's features. If there is a particular feature you would like to see added, please `open a Github issue <https://github.com/coleifer/peewee/issues>`_.

.. warning:: In order to start using the features described below, you will need to use the extension :py:class:`PostgresqlExtDatabase` class instead of :py:class:`PostgresqlDatabase`.

The code below will assume you are using the following database and base model:

.. code-block:: python

    from playhouse.postgres_ext import *

    ext_db = PostgresqlExtDatabase('peewee_test', user='postgres')

    class BaseExtModel(Model):
        class Meta:
            database = ext_db

.. _hstore:

hstore support
^^^^^^^^^^^^^^

`Postgresql hstore <https://www.postgresql.org/docs/current/hstore.html>`_ is an embedded key/value store. With hstore, you can store arbitrary key/value pairs in your database alongside structured relational data.

Currently the ``postgres_ext`` module supports the following operations:

* Store and retrieve arbitrary dictionaries
* Filter by key(s) or partial dictionary
* Update/add one or more keys to an existing dictionary
* Delete one or more keys from an existing dictionary
* Select keys, values, or zip keys and values
* Retrieve a slice of keys/values
* Test for the existence of a key
* Test that a key has a non-NULL value

Using hstore
^^^^^^^^^^^^

To start with, you will need to import the custom database class and the hstore functions from ``playhouse.postgres_ext`` (see above code snippet). Then, it is as simple as adding a :py:class:`HStoreField` to your model:

.. code-block:: python

    class House(BaseExtModel):
        address = CharField()
        features = HStoreField()

You can now store arbitrary key/value pairs on ``House`` instances:

.. code-block:: pycon

    >>> h = House.create(address='123 Main St', features={'garage': '2 cars', 'bath': '2 bath'})
    >>> h_from_db = House.get(House.id == h.id)
    >>> h_from_db.features
    {'bath': '2 bath', 'garage': '2 cars'}

You can filter by keys or partial dictionary:

.. code-block:: pycon

    >>> f = House.features
    >>> House.select().where(f.contains('garage'))  # <-- all houses w/garage key
    >>> House.select().where(f.contains(['garage', 'bath']))  # <-- all houses w/garage & bath
    >>> House.select().where(f.contains({'garage': '2 cars'}))  # <-- houses w/2-car garage

Suppose you want to do an atomic update to the house:

.. code-block:: pycon

    >>> f = House.features
    >>> new_features = House.features.update({'bath': '2.5 bath', 'sqft': '1100'})
    >>> query = House.update(features=new_features)
    >>> query.where(House.id == h.id).execute()
    1
    >>> h = House.get(House.id == h.id)
    >>> h.features
    {'bath': '2.5 bath', 'garage': '2 cars', 'sqft': '1100'}

Or, alternatively an atomic delete:

.. code-block:: pycon

    >>> query = House.update(features=f.delete('bath'))
    >>> query.where(House.id == h.id).execute()
    1
    >>> h = House.get(House.id == h.id)
    >>> h.features
    {'garage': '2 cars', 'sqft': '1100'}

Multiple keys can be deleted at the same time:

.. code-block:: pycon

    >>> query = House.update(features=f.delete('garage', 'sqft'))

You can select just keys, just values, or zip the two:

.. code-block:: pycon

    >>> f = House.features
    >>> for h in House.select(House.address, f.keys().alias('keys')):
    ...     print h.address, h.keys
    123 Main St [u'bath', u'garage']

    >>> for h in House.select(House.address, f.values().alias('vals')):
    ...     print h.address, h.vals
    123 Main St [u'2 bath', u'2 cars']

    >>> for h in House.select(House.address, f.items().alias('mtx')):
    ...     print h.address, h.mtx
    123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]
You can retrieve a slice of data, for example, all the garage data:

.. code-block:: pycon

    >>> f = House.features
    >>> for h in House.select(House.address, f.slice('garage').alias('garage_data')):
    ...     print h.address, h.garage_data
    123 Main St {'garage': '2 cars'}

You can check for the existence of a key and filter rows accordingly:

.. code-block:: pycon

    >>> for h in House.select(House.address, f.exists('garage').alias('has_garage')):
    ...     print h.address, h.has_garage
    123 Main St True

    >>> for h in House.select().where(f.exists('garage')):
    ...     print h.address, h.features['garage']  # <-- just houses w/garage data
    123 Main St 2 cars

Interval support
^^^^^^^^^^^^^^^^

Postgres supports durations through the ``INTERVAL`` data-type (`docs <https://www.postgresql.org/docs/current/datatype-datetime.html>`_).

.. py:class:: IntervalField([null=False, [...]])

    Field class capable of storing Python ``datetime.timedelta`` instances.

Example:

.. code-block:: python

    from datetime import timedelta

    from playhouse.postgres_ext import *

    db = PostgresqlExtDatabase('my_db')

    class Event(Model):
        location = CharField()
        duration = IntervalField()
        start_time = DateTimeField()

        class Meta:
            database = db

        @classmethod
        def get_long_meetings(cls):
            return cls.select().where(cls.duration > timedelta(hours=1))

.. _pgjson:

JSON Support
^^^^^^^^^^^^

peewee has basic support for Postgres' native JSON data type, in the form of :py:class:`JSONField`. As of version 2.4.7, peewee also supports the Postgres 9.4 binary json ``jsonb`` type, via :py:class:`BinaryJSONField`.

.. warning:: Postgres supports a JSON data type natively as of 9.2 (full support in 9.3). In order to use this functionality you must be using the correct version of Postgres with `psycopg2` version 2.5 or greater. To use :py:class:`BinaryJSONField`, which has many performance and querying advantages, you must have Postgres 9.4 or later.

.. note:: You must be sure your database is an instance of :py:class:`PostgresqlExtDatabase` in order to use the :py:class:`JSONField`.

Here is an example of how you might declare a model with a JSON field:

.. code-block:: python

    import json
    import urllib2
    from playhouse.postgres_ext import *

    db = PostgresqlExtDatabase('my_database')  # Note: the extension database class.

    class APIResponse(Model):
        url = CharField()
        response = JSONField()

        class Meta:
            database = db

        @classmethod
        def request(cls, url):
            fh = urllib2.urlopen(url)
            return cls.create(url=url, response=json.loads(fh.read()))

    APIResponse.create_table()

    # Store a JSON response.
    offense = APIResponse.request('http://wtf.charlesleifer.com/api/offense/')
    booking = APIResponse.request('http://wtf.charlesleifer.com/api/booking/')

    # Query a JSON data structure using a nested key lookup:
    offense_responses = APIResponse.select().where(
        APIResponse.response['meta']['model'] == 'offense')

    # Retrieve a sub-key for each APIResponse. By calling .as_json(), the
    # data at the sub-key will be returned as Python objects (dicts, lists,
    # etc) instead of serialized JSON.
    q = (APIResponse
         .select(
             APIResponse.response['booking']['person'].as_json().alias('person'))
         .where(
             APIResponse.response['meta']['model'] == 'booking'))

    for result in q:
        print result.person['name'], result.person['dob']

The :py:class:`BinaryJSONField` works the same and supports the same operations as the regular :py:class:`JSONField`, but provides several additional operations for testing *containment*. Using the binary json field, you can test whether your JSON data contains other partial JSON structures (:py:meth:`~BinaryJSONField.contains`, :py:meth:`~BinaryJSONField.contains_any`, :py:meth:`~BinaryJSONField.contains_all`), or whether it is a subset of a larger JSON document (:py:meth:`~BinaryJSONField.contained_by`).

For more examples, see the :py:class:`JSONField` and :py:class:`BinaryJSONField` API documents below.

.. _server_side_cursors:

Server-side cursors
^^^^^^^^^^^^^^^^^^^

When psycopg2 executes a query, normally all results are fetched and returned to the client by the backend. This can cause your application to use a lot of memory when making large queries. Using server-side cursors, results are returned a little at a time (by default 2000 records). For the definitive reference, please see the psycopg2 documentation.

.. note:: To use server-side (or named) cursors, you must be using :py:class:`PostgresqlExtDatabase`.

To execute a query using a server-side cursor, simply wrap your select query using the :py:func:`ServerSide` helper:

.. code-block:: python

    large_query = PageView.select()  # Build query normally.

    # Iterate over large query inside a transaction.
    for page_view in ServerSide(large_query):
        # do some interesting analysis here.
        pass

    # Server-side resources are released.

If you would like all ``SELECT`` queries to automatically use a server-side cursor, you can specify this when creating your :py:class:`PostgresqlExtDatabase`:

.. code-block:: python

    from playhouse.postgres_ext import PostgresqlExtDatabase

    ss_db = PostgresqlExtDatabase('my_db', server_side_cursors=True)

.. note:: Server-side cursors live only as long as the transaction, so for this reason peewee will not automatically call ``commit()`` after executing a ``SELECT`` query. If you do not ``commit`` after you are done iterating, you will not release the server-side resources until the connection is closed (or the transaction is committed later). Furthermore, since peewee will by default cache rows returned by the cursor, you should always call ``.iterator()`` when iterating over a large query. If you are using the :py:func:`ServerSide` helper, the transaction and call to ``iterator()`` will be handled transparently.
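If you manage the transaction yourself rather than using :py:func:`ServerSide`, the pattern looks roughly like this (a sketch, reusing the ``ss_db`` and ``PageView`` examples from above):

.. code-block:: python

    # Keep the transaction open while iterating, and use .iterator() so
    # rows are not cached in memory as they stream back.
    with ss_db.transaction():
        for page_view in PageView.select().iterator():
            pass  # Analyze each row here.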
.. _pg_fts:

Full-text search
^^^^^^^^^^^^^^^^

Postgresql provides `sophisticated full-text search <https://www.postgresql.org/docs/current/textsearch.html>`_ using special data-types (``tsvector`` and ``tsquery``). Documents should be stored or converted to the ``tsvector`` type, and search queries should be converted to ``tsquery``.

For simple cases, you can simply use the :py:func:`Match` function, which will automatically perform the appropriate conversions, and requires no schema changes:

.. code-block:: python

    def blog_search(query):
        return Blog.select().where(
            (Blog.status == Blog.STATUS_PUBLISHED) &
            Match(Blog.content, query))

The :py:func:`Match` function will automatically convert the left-hand operand to a ``tsvector``, and the right-hand operand to a ``tsquery``. For better performance, it is recommended you create a ``GIN`` index on the column you plan to search:

.. code-block:: sql

    CREATE INDEX blog_full_text_search ON blog USING gin(to_tsvector(content));

Alternatively, you can use the :py:class:`TSVectorField` to maintain a dedicated column for storing ``tsvector`` data:

.. code-block:: python

    class Blog(Model):
        content = TextField()
        search_content = TSVectorField()

You will need to explicitly convert the incoming text data to ``tsvector`` when inserting or updating the ``search_content`` field:

.. code-block:: python

    content = 'Excellent blog post about peewee ORM.'
    blog_entry = Blog.create(
        content=content,
        search_content=fn.to_tsvector(content))

.. note:: If you are using the :py:class:`TSVectorField`, it will automatically be created with a GIN index.
postgres_ext API notes
^^^^^^^^^^^^^^^^^^^^^^

.. py:class:: PostgresqlExtDatabase(database[, server_side_cursors=False[, register_hstore=True[, ...]]])

    Identical to :py:class:`PostgresqlDatabase` but required in order to support:

    * :ref:`server_side_cursors`
    * :py:class:`ArrayField`
    * :py:class:`DateTimeTZField`
    * :py:class:`JSONField`
    * :py:class:`BinaryJSONField`
    * :py:class:`HStoreField`
    * :py:class:`TSVectorField`

    :param str database: Name of database to connect to.
    :param bool server_side_cursors: Whether ``SELECT`` queries should utilize server-side cursors.
    :param bool register_hstore: Register the HStore extension with the connection.

    If using ``server_side_cursors``, also be sure to wrap your queries with :py:func:`ServerSide`.

    If you do not wish to use the HStore extension, you can specify ``register_hstore=False``.

    .. warning:: The :py:class:`PostgresqlExtDatabase` by default will attempt to register the ``HSTORE`` extension. Most distributions and recent versions include this, but in some cases the extension may not be available. If you **do not** plan to use the :ref:`HStore features of peewee <hstore>`, you can pass ``register_hstore=False`` when initializing your :py:class:`PostgresqlExtDatabase`.

.. py:function:: ServerSide(select_query)

    Wrap the given select query in a transaction, and call its :py:meth:`~SelectQuery.iterator` method to avoid caching row instances. In order for the server-side resources to be released, be sure to exhaust the generator (iterate over all the rows).

    :param select_query: a :py:class:`SelectQuery` instance.
    :rtype: ``generator``

    Usage:

    .. code-block:: python

        large_query = PageView.select()
        for page_view in ServerSide(large_query):
            # Do something interesting.
            pass

        # At this point server side resources are released.

.. _pgarrays:

.. py:class:: ArrayField([field_class=IntegerField[, dimensions=1]])

    Field capable of storing arrays of the provided `field_class`.

    :param field_class: a subclass of :py:class:`Field`, e.g. :py:class:`IntegerField`.
    :param int dimensions: dimensions of array.

    You can store and retrieve lists (or lists-of-lists):

    .. code-block:: python

        class BlogPost(BaseModel):
            content = TextField()
            tags = ArrayField(CharField)

        post = BlogPost(content='awesome', tags=['foo', 'bar', 'baz'])

    Additionally, you can use the ``__getitem__`` API to query values or slices in the database:

    .. code-block:: python

        # Get the first tag on a given blog post.
        first_tag = (BlogPost
                     .select(BlogPost.tags[0].alias('first_tag'))
                     .where(BlogPost.id == 1)
                     .dicts()
                     .get())

        # first_tag = {'first_tag': 'foo'}

    Get a slice of values:

    .. code-block:: python

        # Get the first two tags.
        two_tags = (BlogPost
                    .select(BlogPost.tags[:2].alias('two'))
                    .dicts()
                    .get())
        # two_tags = {'two': ['foo', 'bar']}

    .. py:method:: contains(*items)

        :param items: One or more items that must be in the given array field.

        .. code-block:: python

            # Get all blog posts that are tagged with both "python" and "django".
            Blog.select().where(Blog.tags.contains('python', 'django'))

    .. py:method:: contains_any(*items)

        :param items: One or more items to search for in the given array field.

        Like :py:meth:`~ArrayField.contains`, except it will match rows where the array contains *any* of the given items.

        .. code-block:: python

            # Get all blog posts that are tagged with "flask" and/or "django".
            Blog.select().where(Blog.tags.contains_any('flask', 'django'))
.. py:class:: DateTimeTZField(*args, **kwargs)

    A timezone-aware subclass of :py:class:`DateTimeField`.

.. py:class:: HStoreField(*args, **kwargs)

    A field for storing and retrieving arbitrary key/value pairs. For details on usage, see :ref:`hstore`.

    .. py:method:: keys()

        Returns the keys for a given row.

        .. code-block:: pycon

            >>> f = House.features
            >>> for h in House.select(House.address, f.keys().alias('keys')):
            ...     print h.address, h.keys
            123 Main St [u'bath', u'garage']

    .. py:method:: values()

        Return the values for a given row.

        .. code-block:: pycon

            >>> for h in House.select(House.address, f.values().alias('vals')):
            ...     print h.address, h.vals
            123 Main St [u'2 bath', u'2 cars']

    .. py:method:: items()

        Like python's ``dict``, return the keys and values in a list-of-lists:

        .. code-block:: pycon

            >>> for h in House.select(House.address, f.items().alias('mtx')):
            ...     print h.address, h.mtx
            123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]

    .. py:method:: slice(*args)

        Return a slice of data given a list of keys.

        .. code-block:: pycon

            >>> f = House.features
            >>> for h in House.select(House.address, f.slice('garage').alias('garage_data')):
            ...     print h.address, h.garage_data
            123 Main St {'garage': '2 cars'}

    .. py:method:: exists(key)

        Query for whether the given key exists.

        .. code-block:: pycon

            >>> for h in House.select(House.address, f.exists('garage').alias('has_garage')):
            ...     print h.address, h.has_garage
            123 Main St True

            >>> for h in House.select().where(f.exists('garage')):
            ...     print h.address, h.features['garage']  # <-- just houses w/garage data
            123 Main St 2 cars

    .. py:method:: defined(key)

        Query for whether the given key has a value associated with it.

    .. py:method:: update(**data)

        Perform an atomic update to the keys/values for a given row or rows.

        .. code-block:: pycon

            >>> query = House.update(features=House.features.update(
            ...     sqft=2000,
            ...     year_built=2012))
            >>> query.where(House.id == 1).execute()

    .. py:method:: delete(*keys)

        Delete the provided keys for a given row or rows.

        .. note:: We will use an ``UPDATE`` query.

        .. code-block:: pycon

            >>> query = House.update(features=House.features.delete(
            ...     'sqft', 'year_built'))
            >>> query.where(House.id == 1).execute()

    .. py:method:: contains(value)

        :param value: Either a ``dict``, a ``list`` of keys, or a single key.

        Query rows for the existence of either:

        * a partial dictionary.
        * a list of keys.
        * a single key.

        .. code-block:: pycon

            >>> f = House.features
            >>> House.select().where(f.contains('garage'))  # <-- all houses w/garage key
            >>> House.select().where(f.contains(['garage', 'bath']))  # <-- all houses w/garage & bath
            >>> House.select().where(f.contains({'garage': '2 cars'}))  # <-- houses w/2-car garage

    .. py:method:: contains_any(*keys)

        :param keys: One or more keys to search for.

        Query rows for the existence of *any* key.

.. py:class:: JSONField(dumps=None, *args, **kwargs)

    Field class suitable for storing and querying arbitrary JSON. When using this on a model, set the field's value to a Python object (either a ``dict`` or a ``list``). When you retrieve your value from the database it will be returned as a Python data structure.

    :param dumps: The function used to serialize data to JSON; defaults to ``json.dumps``. You can override this to customize how values are encoded.

    .. note:: You must be using Postgres 9.2 / psycopg2 2.5 or greater.

    .. note:: If you are using Postgres 9.4, strongly consider using the :py:class:`BinaryJSONField` instead as it offers better performance and more powerful querying options.
    Example model declaration:

    .. code-block:: python

        db = PostgresqlExtDatabase('my_db')

        class APIResponse(Model):
            url = CharField()
            response = JSONField()

            class Meta:
                database = db

    Example of storing JSON data:

    .. code-block:: python

        url = 'http://foo.com/api/resource/'
        resp = json.loads(urllib2.urlopen(url).read())
        APIResponse.create(url=url, response=resp)

        APIResponse.create(url='http://foo.com/baz/', response={'key': 'value'})

    To query, use Python's ``[]`` operators to specify nested key or array lookups:

    .. code-block:: python

        APIResponse.select().where(
            APIResponse.response['key1']['nested-key'] == 'some-value')

    To illustrate the use of the ``[]`` operators, imagine we have the following data stored in an ``APIResponse``:

    .. code-block:: javascript

        {
          "foo": {
            "bar": ["i1", "i2", "i3"],
            "baz": {
              "huey": "mickey",
              "peewee": "nugget"
            }
          }
        }

    Here are the results of a few queries:

    .. code-block:: python

        def get_data(expression):
            # Helper function to just retrieve the results of a
            # particular expression.
            query = (APIResponse
                     .select(expression.alias('my_data'))
                     .dicts()
                     .get())
            return query['my_data']

        # Accessing the foo -> bar subkey will return a JSON
        # representation of the list.
        get_data(APIResponse.response['foo']['bar'])
        # '["i1", "i2", "i3"]'

        # In order to retrieve this list as a Python list,
        # we will call .as_json() on the expression.
        get_data(APIResponse.response['foo']['bar'].as_json())
        # ['i1', 'i2', 'i3']

        # Similarly, accessing the foo -> baz subkey will
        # return a JSON representation of the dictionary.
        get_data(APIResponse.response['foo']['baz'])
        # '{"huey": "mickey", "peewee": "nugget"}'

        # Again, calling .as_json() will return an actual
        # python dictionary.
        get_data(APIResponse.response['foo']['baz'].as_json())
        # {'huey': 'mickey', 'peewee': 'nugget'}

        # When dealing with simple values, either way works as
        # you expect.
        get_data(APIResponse.response['foo']['bar'][0])
        # 'i1'

        # Calling .as_json() when the result is a simple value
        # will return the same thing as the previous example.
        get_data(APIResponse.response['foo']['bar'][0].as_json())
        # 'i1'

.. py:class:: BinaryJSONField(dumps=None, *args, **kwargs)

    Store and query arbitrary JSON documents. Data should be stored using normal Python ``dict`` and ``list`` objects, and when data is returned from the database, it will be returned using ``dict`` and ``list`` as well.

    For examples of basic query operations, see the above code samples for :py:class:`JSONField`. The example queries below will use the same ``APIResponse`` model described above.

    :param dumps: The function used to serialize data to JSON; defaults to ``json.dumps``. You can override this to customize how values are encoded.

    .. note:: You must be using Postgres 9.4 / psycopg2 2.5 or newer. If you are using Postgres 9.2 or 9.3, you can use the regular :py:class:`JSONField` instead.

    .. py:method:: contains(other)

        Test whether the given JSON data contains the given JSON fragment or key.

        Example:

        .. code-block:: python

            search_fragment = {
                'foo': {'bar': ['i2']}
            }
            query = (APIResponse
                     .select()
                     .where(APIResponse.response.contains(search_fragment)))

            # If we're searching for a list, the list items do not need to
            # be ordered in a particular way:
            query = (APIResponse
                     .select()
                     .where(APIResponse.response.contains({
                         'foo': {'bar': ['i2', 'i1']}})))
        We can pass in simple keys as well. To find APIResponses that contain the key ``foo`` at the top-level:

        .. code-block:: python

            APIResponse.select().where(APIResponse.response.contains('foo'))

        We can also search sub-keys using square-brackets:

        .. code-block:: python

            APIResponse.select().where(
                APIResponse.response['foo']['bar'].contains(['i2', 'i1']))

    .. py:method:: contains_any(*items)

        Search for the presence of one or more of the given items.

        .. code-block:: python

            APIResponse.select().where(
                APIResponse.response.contains_any('foo', 'baz', 'nugget'))

        Like :py:meth:`~BinaryJSONField.contains`, we can also search sub-keys:

        .. code-block:: python

            APIResponse.select().where(
                APIResponse.response['foo']['bar'].contains_any('i2', 'ix'))

    .. py:method:: contains_all(*items)

        Search for the presence of all of the given items.

        .. code-block:: python

            APIResponse.select().where(
                APIResponse.response.contains_all('foo'))

        Like :py:meth:`~BinaryJSONField.contains_any`, we can also search sub-keys:

        .. code-block:: python

            APIResponse.select().where(
                APIResponse.response['foo']['bar'].contains_all('i1', 'i2', 'i3'))

    .. py:method:: contained_by(other)

        Test whether the column's JSON data is contained by (is a subset of) the given JSON document. This method is the inverse of :py:meth:`~BinaryJSONField.contains`.

        .. code-block:: python

            big_doc = {
                'foo': {
                    'bar': ['i1', 'i2', 'i3'],
                    'baz': {
                        'huey': 'mickey',
                        'peewee': 'nugget',
                    }
                },
                'other_key': ['nugget', 'bear', 'kitten'],
            }
            APIResponse.select().where(
                APIResponse.response.contained_by(big_doc))

.. py:function:: Match(field, query)

    Generate a full-text search expression, automatically converting the left-hand operand to a ``tsvector``, and the right-hand operand to a ``tsquery``.

    Example:

    .. code-block:: python

        def blog_search(query):
            return Blog.select().where(
                (Blog.status == Blog.STATUS_PUBLISHED) &
                Match(Blog.content, query))

.. py:class:: TSVectorField

    Field type suitable for storing ``tsvector`` data. This field will automatically be created with a ``GIN`` index for improved search performance.

    .. note:: Data stored in this field will still need to be manually converted to the ``tsvector`` type.

    Example usage:

    .. code-block:: python

        class Blog(Model):
            content = TextField()
            search_content = TSVectorField()

        content = 'this is a sample blog entry.'
        blog_entry = Blog.create(
            content=content,
            search_content=fn.to_tsvector(content))  # Note `to_tsvector()`.

.. _dataset:

DataSet
-------

The *dataset* module contains a high-level API for working with databases modeled after the popular `project of the same name <https://dataset.readthedocs.io/>`_.

The aims of the *dataset* module are to provide:

* A simplified API for working with relational data, along the lines of working with JSON.
* An easy way to export relational data as JSON or CSV.
* An easy way to import JSON or CSV data into a relational database.

A minimal data-loading script might look like this:

.. code-block:: python

    from playhouse.dataset import DataSet

    db = DataSet('sqlite:///:memory:')

    table = db['sometable']
    table.insert(name='Huey', age=3)
    table.insert(name='Mickey', age=5, gender='male')

    huey = table.find_one(name='Huey')
    print huey
    # {'age': 3, 'gender': None, 'id': 1, 'name': 'Huey'}

    for obj in table:
        print obj
    # {'age': 3, 'gender': None, 'id': 1, 'name': 'Huey'}
    # {'age': 5, 'gender': 'male', 'id': 2, 'name': 'Mickey'}

You can export or import data using :py:meth:`~DataSet.freeze` and :py:meth:`~DataSet.thaw`:

.. code-block:: python

    # Export table content to the `users.json` file.
    db.freeze(table.all(), format='json', filename='users.json')

    # Import data from a CSV file into a new table. Columns will be automatically
    # created for each field in the CSV file.
    new_table = db['stats']
    new_table.thaw(format='csv', filename='monthly_stats.csv')
Getting started
^^^^^^^^^^^^^^^

:py:class:`DataSet` objects are initialized by passing in a database URL of the format ``dialect://user:password@host/dbname``. See the :ref:`db_url` section for examples of connecting to various databases.

.. code-block:: python

    # Create an in-memory SQLite database.
    db = DataSet('sqlite:///:memory:')

Storing data
^^^^^^^^^^^^

To store data, we must first obtain a reference to a table. If the table does not exist, it will be created automatically:

.. code-block:: python

    # Get a table reference, creating the table if it does not exist.
    table = db['users']

We can now :py:meth:`~Table.insert` new rows into the table. If the columns do not exist, they will be created automatically:

.. code-block:: python

    table.insert(name='Huey', age=3, color='white')
    table.insert(name='Mickey', age=5, gender='male')

To update existing entries in the table, pass in a dictionary containing the new values and filter conditions. The list of columns to use as filters is specified in the *columns* argument. If no filter columns are specified, then all rows will be updated.

.. code-block:: python

    # Update the gender for "Huey".
    table.update(name='Huey', gender='male', columns=['name'])

    # Update all records. If the column does not exist, it will be created.
    table.update(favorite_orm='peewee')

Importing data
^^^^^^^^^^^^^^

To import data from an external source, such as a JSON or CSV file, you can use the :py:meth:`~Table.thaw` method. By default, new columns will be created for any attributes encountered. If you wish to only populate columns that are already defined on a table, you can pass in ``strict=True``.

.. code-block:: python

    # Load data from a JSON file containing a list of objects.
    table = db['stock_prices']
    table.thaw(filename='stocks.json', format='json')
    table.all()[:3]

    # Might print...
    [{'id': 1, 'ticker': 'GOOG', 'price': 703},
     {'id': 2, 'ticker': 'AAPL', 'price': 109},
     {'id': 3, 'ticker': 'AMZN', 'price': 300}]

Using transactions
^^^^^^^^^^^^^^^^^^

DataSet supports nesting transactions using a simple context manager.

.. code-block:: python

    table = db['users']
    with db.transaction() as txn:
        table.insert(name='Charlie')

        with db.transaction() as nested_txn:
            # Set Charlie's favorite ORM to Django.
            table.update(name='Charlie', favorite_orm='django', columns=['name'])

            # jk/lol
            nested_txn.rollback()

Inspecting the database
^^^^^^^^^^^^^^^^^^^^^^^

You can use the :py:meth:`tables` method to list the tables in the current database:

.. code-block:: pycon

    >>> print db.tables
    ['sometable', 'user']

And for a given table, you can print the columns:

.. code-block:: pycon

    >>> table = db['user']
    >>> print table.columns
    ['id', 'age', 'name', 'gender', 'favorite_orm']

We can also find out how many rows are in a table:

.. code-block:: pycon

    >>> print len(db['user'])
    3

Reading data
^^^^^^^^^^^^

To retrieve all rows, you can use the :py:meth:`~Table.all` method:

.. code-block:: python

    # Retrieve all the users.
    users = db['user'].all()

    # We can iterate over all rows without calling `.all()`
    for user in db['user']:
        print user['name']

Specific objects can be retrieved using :py:meth:`~Table.find` and :py:meth:`~Table.find_one`.

.. code-block:: python

    # Find all the users who like peewee.
    peewee_users = db['user'].find(favorite_orm='peewee')

    # Find Huey.
    huey = db['user'].find_one(name='Huey')
Exporting data
^^^^^^^^^^^^^^

To export data, use the :py:meth:`~DataSet.freeze` method, passing in the query you wish to export:

.. code-block:: python

    peewee_users = db['user'].find(favorite_orm='peewee')
    db.freeze(peewee_users, format='json', filename='peewee_users.json')

API
^^^

.. py:class:: DataSet(url)

    The *DataSet* class provides a high-level API for working with relational databases.

    :param str url: A database URL. See :ref:`db_url` for examples.

    .. py:attribute:: tables

        Return a list of tables stored in the database. This list is computed dynamically each time it is accessed.

    .. py:method:: __getitem__(table_name)

        Provide a :py:class:`Table` reference to the specified table. If the table does not exist, it will be created.

    .. py:method:: query(sql[, params=None[, commit=True]])

        :param str sql: A SQL query.
        :param list params: Optional parameters for the query.
        :param bool commit: Whether the query should be committed upon execution.
        :return: A database cursor.

        Execute the provided query against the database.
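        For example, running a raw query against the ``users`` table from the earlier examples (a sketch; SQLite uses ``?`` as its parameter placeholder):

        .. code-block:: python

            # Issue a parameterized query and iterate over the raw cursor.
            cursor = db.query('SELECT name, age FROM users WHERE age < ?', [5])
            for name, age in cursor.fetchall():
                print name, age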
    .. py:method:: transaction()

        Create a context manager representing a new transaction (or savepoint).

    .. py:method:: freeze(query[, format='csv'[, filename=None[, file_obj=None[, **kwargs]]]])

        :param query: A :py:class:`SelectQuery`, generated using :py:meth:`~Table.all` or :py:meth:`~Table.find`.
        :param format: Output format. By default, *csv* and *json* are supported.
        :param filename: Filename to write output to.
        :param file_obj: File-like object to write output to.
        :param kwargs: Arbitrary parameters for export-specific functionality.

    .. py:method:: thaw(table[, format='csv'[, filename=None[, file_obj=None[, strict=False[, **kwargs]]]]])

        :param str table: The name of the table to load data into.
        :param format: Input format. By default, *csv* and *json* are supported.
        :param filename: Filename to read data from.
        :param file_obj: File-like object to read data from.
        :param bool strict: If ``True``, do not store values for columns that do not already exist on the table.
        :param kwargs: Arbitrary parameters for import-specific functionality.

    .. py:method:: connect()

        Open a connection to the underlying database. If a connection is not opened explicitly, one will be opened the first time a query is executed.

    .. py:method:: close()

        Close the connection to the underlying database.

.. py:class:: Table(dataset, name, model_class)

    The *Table* class provides a high-level API for working with rows in a given table.

    .. py:attribute:: columns

        Return a list of columns in the given table.

    .. py:attribute:: model_class

        A dynamically-created :py:class:`Model` class.

    .. py:method:: create_index(columns[, unique=False])

        Create an index on the given columns:

        .. code-block:: python

            # Create a unique index on the `username` column.
            db['users'].create_index(['username'], unique=True)

    .. py:method:: insert(**data)

        Insert the given data dictionary into the table, creating new columns as needed.

    .. py:method:: update(columns=None, conjunction=None, **data)

        Update the table using the provided data. If one or more columns are specified in the *columns* parameter, then those columns' values in the *data* dictionary will be used to determine which rows to update.

        .. code-block:: python

            # Update all rows.
            db['users'].update(favorite_orm='peewee')

            # Only update Huey's record, setting his age to 3.
            db['users'].update(name='Huey', age=3, columns=['name'])

    .. py:method:: find(**query)

        Query the table for rows matching the specified equality conditions. If no query is specified, then all rows are returned.

        .. code-block:: python

            peewee_users = db['users'].find(favorite_orm='peewee')

    .. py:method:: find_one(**query)

        Return a single row matching the specified equality conditions. If no matching row is found then ``None`` will be returned.

        .. code-block:: python

            huey = db['users'].find_one(name='Huey')

    .. py:method:: all()

        Return all rows in the given table.

    .. py:method:: delete(**query)

        Delete all rows matching the given equality conditions. If no query is provided, then all rows will be deleted.

        .. code-block:: python

            # Adios, Django!
            db['users'].delete(favorite_orm='Django')

            # Delete all the secret messages.
            db['secret_messages'].delete()

    .. py:method:: freeze([format='csv'[, filename=None[, file_obj=None[, **kwargs]]]])

        :param format: Output format. By default, *csv* and *json* are supported.
        :param filename: Filename to write output to.
        :param file_obj: File-like object to write output to.
        :param kwargs: Arbitrary parameters for export-specific functionality.

    .. py:method:: thaw([format='csv'[, filename=None[, file_obj=None[, strict=False[, **kwargs]]]]])

        :param format: Input format. By default, *csv* and *json* are supported.
        :param filename: Filename to read data from.
        :param file_obj: File-like object to read data from.
        :param bool strict: If ``True``, do not store values for columns that do not already exist on the table.
        :param kwargs: Arbitrary parameters for import-specific functionality.

.. _djpeewee:

Django Integration
------------------

The Django ORM provides a very high-level abstraction over SQL and as a consequence is in some ways limited in terms of flexibility or expressiveness. I wrote a blog post describing my search for a "missing link" between Django's ORM and the SQL it generates, concluding that no such layer exists. The ``djpeewee`` module attempts to provide an easy-to-use, structured layer for generating SQL queries for use with Django's ORM.

A couple of use-cases might be:

* Joining on fields that are not related by foreign key (for example UUID fields).
* Performing aggregate queries on calculated values.
* Features that Django does not support such as ``CASE`` statements.
* Utilizing SQL functions that Django does not support, such as ``SUBSTR``.
* Replacing nearly-identical SQL queries with reusable, composable data-structures.

Below is an example of how you might use this:

.. code-block:: python

    # Django model.
    class Event(models.Model):
        start_time = models.DateTimeField()
        end_time = models.DateTimeField()
        title = models.CharField(max_length=255)

    # Suppose we want to find all events that are longer than an hour. Django
    # does not support this, but we can use peewee.
    from playhouse.djpeewee import translate
    P = translate(Event)
    query = (P.Event
             .select()
             .where(
                 (P.Event.end_time - P.Event.start_time) > timedelta(hours=1)))

    # Now feed our peewee query into Django's `raw()` method:
    sql, params = query.sql()
    Event.objects.raw(sql, params)

Foreign keys and Many-to-many relationships
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The :py:func:`translate` function will recursively traverse the graph of models and return a dictionary populated with everything it finds. Back-references are not searched by default, but can be included by specifying ``backrefs=True``.

Example:
.. code-block:: pycon

    >>> from django.contrib.auth.models import User, Group
    >>> from playhouse.djpeewee import translate
    >>> translate(User, Group)
    {'ContentType': peewee.ContentType,
     'Group': peewee.Group,
     'Group_permissions': peewee.Group_permissions,
     'Permission': peewee.Permission,
     'User': peewee.User,
     'User_groups': peewee.User_groups,
     'User_user_permissions': peewee.User_user_permissions}

As you can see in the example above, although only `User` and `Group` were passed in to :py:func:`translate`, several other models which are related by foreign key were also created. Additionally, the many-to-many "through" tables were created as separate models since peewee does not abstract away these types of relationships.

Using the above models it is possible to construct joins. The following example will get all users who belong to a group that starts with the letter "A":

.. code-block:: pycon

    >>> P = translate(User, Group)
    >>> query = P.User.select().join(P.User_groups).join(P.Group).where(
    ...     fn.Lower(fn.Substr(P.Group.name, 1, 1)) == 'a')
    >>> sql, params = query.sql()
    >>> print sql  # formatted for legibility
    SELECT t1."id", t1."password", ...
    FROM "auth_user" AS t1
    INNER JOIN "auth_user_groups" AS t2 ON (t1."id" = t2."user_id")
    INNER JOIN "auth_group" AS t3 ON (t2."group_id" = t3."id")
    WHERE (Lower(Substr(t3."name", %s, %s)) = %s)

djpeewee API
^^^^^^^^^^^^

.. py:function:: translate(*models, **options)

    Translate the given Django models into roughly equivalent peewee models suitable for use in constructing queries. Foreign keys and many-to-many relationships will be followed and models generated, although back-references are not traversed by default.

    :param models: One or more Django model classes.
    :param options: A dictionary of options, see note below.
    :returns: A dict-like object containing the generated models, but which supports dotted-name style lookups.

    The following are valid options:

    * ``recurse``: Follow foreign keys and many to many (default: ``True``).
    * ``max_depth``: Maximum depth to recurse (default: ``None``, unlimited).
    * ``backrefs``: Follow backrefs (default: ``False``).
    * ``exclude``: A list of models to exclude.
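    For instance, to follow back-references while capping recursion depth, reusing the Django ``Event`` model from the example above:

    .. code-block:: python

        from playhouse.djpeewee import translate

        # backrefs and max_depth are the options documented above.
        P = translate(Event, backrefs=True, max_depth=1)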
.. _extra-fields:

Fields
------

This module also contains several field classes that implement additional logic like encryption and compression. There is also a :py:class:`ManyToManyField` that makes it easy to work with simple many-to-many relationships.

These fields can be found in the ``playhouse.fields`` module.

.. py:class:: ManyToManyField(rel_model[, related_name=None[, through_model=None]])

    :param rel_model: :py:class:`Model` class.
    :param str related_name: Name for the automatically-created backref. If not provided, the pluralized version of the model will be used.
    :param through_model: :py:class:`Model` to use for the intermediary table. If not provided, a simple through table will be automatically created.

The :py:class:`ManyToManyField` provides a simple interface for working with many-to-many relationships, inspired by Django. A many-to-many relationship is typically implemented by creating a junction table with foreign keys to the two models being related. For instance, if you were building a syllabus manager for college students, the relationship between students and courses would be many-to-many. Here is the schema using standard APIs:

.. code-block:: python

    class Student(Model):
        name = CharField()

    class Course(Model):
        name = CharField()

    class StudentCourse(Model):
        student = ForeignKeyField(Student)
        course = ForeignKeyField(Course)

To query the courses for a particular student, you would join through the junction table:

.. code-block:: python

    # List the courses that "Huey" is enrolled in:
    courses = (Course
               .select()
               .join(StudentCourse)
               .join(Student)
               .where(Student.name == 'Huey'))

    for course in courses:
        print course.name

The :py:class:`ManyToManyField` is designed to simplify this use-case by providing a *field-like* API for querying and modifying data in the junction table. Here is how our code looks using :py:class:`ManyToManyField`:

.. code-block:: python

    class Student(Model):
        name = CharField()

    class Course(Model):
        name = CharField()
        students = ManyToManyField(Student, related_name='courses')

.. note:: It does not matter from Peewee's perspective which model the :py:class:`ManyToManyField` goes on, since the back-reference is just the mirror image. In order to write valid Python, though, you will need to add the ``ManyToManyField`` on the second model so that the name of the first model is in the scope.

We still need a junction table to store the relationships between students and courses. This model can be accessed by calling the :py:meth:`~ManyToManyField.get_through_model` method. This is useful when creating tables.

.. code-block:: python

    # Create tables for the students, courses, and relationships between
    # the two.
    db.create_tables([
        Student,
        Course,
        Course.students.get_through_model()])

When accessed from a model instance, the :py:class:`ManyToManyField` exposes a :py:class:`SelectQuery` representing the set of related objects. Let's use the interactive shell to see how all this works:

.. code-block:: pycon

    >>> huey = Student.get(Student.name == 'huey')
    >>> [course.name for course in huey.courses]
    ['English 101', 'CS 101']

    >>> engl_101 = Course.get(Course.name == 'English 101')
    >>> [student.name for student in engl_101.students]
    ['Huey', 'Mickey', 'Zaizee']

To add new relationships between objects, you can either assign the objects directly to the ``ManyToManyField`` attribute, or call the :py:meth:`~ManyToManyField.add` method. The difference between the two is that simply assigning will clear out any existing relationships, whereas ``add()`` can preserve existing relationships.

.. code-block:: pycon

    >>> huey.courses = Course.select().where(Course.name.contains('english'))
    >>> for course in huey.courses.order_by(Course.name):
    ...     print course.name
    English 101
    English 151
    English 201
    English 221

    >>> cs_101 = Course.get(Course.name == 'CS 101')
    >>> cs_151 = Course.get(Course.name == 'CS 151')
    >>> huey.courses.add([cs_101, cs_151])
    >>> [course.name for course in huey.courses.order_by(Course.name)]
    ['CS 101', 'CS 151', 'English 101', 'English 151', 'English 201',
     'English 221']

This is quite a few courses, so let's remove the 200-level english courses. To remove objects, use the :py:meth:`~ManyToManyField.remove` method.

.. code-block:: pycon

    >>> huey.courses.remove(Course.select().where(Course.name.contains('2')))
    2
    >>> [course.name for course in huey.courses.order_by(Course.name)]
    ['CS 101', 'CS 151', 'English 101', 'English 151']

To remove all relationships from a collection, you can use the :py:meth:`~SelectQuery.clear` method. Let's say that English 101 is canceled, so we need to remove all the students from it:

.. code-block:: pycon

    >>> engl_101 = Course.get(Course.name == 'English 101')
    >>> engl_101.students.clear()
.. note:: For an overview of implementing many-to-many relationships using standard Peewee APIs, check out the :ref:`manytomany` section. For all but the simplest cases, you will be better off implementing many-to-many using the standard APIs.

.. py:method:: add(value[, clear_existing=False])

    :param value: Either a :py:class:`Model` instance, a list of model instances, or a :py:class:`SelectQuery`.
    :param bool clear_existing: Whether to remove existing relationships first.

    Associate ``value`` with the current instance. You can pass in a single model instance, a list of model instances, or even a :py:class:`SelectQuery`.

    Example code:

    .. code-block:: python

        # Huey needs to enroll in a bunch of courses, including all
        # the English classes, and a couple Comp-Sci classes.
        huey = Student.get(Student.name == 'Huey')

        # We can add all the objects represented by a query.
        english_courses = Course.select().where(
            Course.name.contains('english'))
        huey.courses.add(english_courses)

        # We can also add lists of individual objects.
        cs101 = Course.get(Course.name == 'CS 101')
        cs151 = Course.get(Course.name == 'CS 151')
        huey.courses.add([cs101, cs151])

.. py:method:: remove(value)

    :param value: Either a :py:class:`Model` instance, a list of model instances, or a :py:class:`SelectQuery`.

    Disassociate ``value`` from the current instance. Like :py:meth:`~ManyToManyField.add`, you can pass in a model instance, a list of model instances, or even a :py:class:`SelectQuery`.

    Example code:

    .. code-block:: python

        # Huey is currently enrolled in a lot of english classes
        # as well as some Comp-Sci. He is changing majors, so we
        # will remove all his courses.
        english_courses = Course.select().where(
            Course.name.contains('english'))
        huey.courses.remove(english_courses)

        # Remove the two Comp-Sci classes Huey is enrolled in.
        cs101 = Course.get(Course.name == 'CS 101')
        cs151 = Course.get(Course.name == 'CS 151')
        huey.courses.remove([cs101, cs151])

.. py:method:: clear()

    Remove all associated objects.

    Example code:

    .. code-block:: python

        # English 101 is canceled this semester, so remove all
        # the enrollments.
        english_101 = Course.get(Course.name == 'English 101')
        english_101.students.clear()

.. py:method:: get_through_model()

    Return the :py:class:`Model` representing the many-to-many junction table. This can be specified manually when the field is being instantiated using the ``through_model`` parameter. If a ``through_model`` is not specified, one will automatically be created.

    When creating tables for an application that uses :py:class:`ManyToManyField`, **you must create the through table explicitly**.

    .. code-block:: python

        # Get a reference to the automatically-created through table.
        StudentCourseThrough = Course.students.get_through_model()

        # Create tables for our two models as well as the through model.
        db.create_tables([
            Student,
            Course,
            StudentCourseThrough])

.. py:class:: DeferredThroughModel()

    In some instances, you may need to obtain a reference to a through model before that model is actually defined. In order to avoid weird circular logic, you can use the ``DeferredThroughModel`` as a placeholder, then "fill it in" when you're ready.

    Example:

    .. code-block:: python

        class User(Model):
            username = CharField()

        NoteThroughDeferred = DeferredThroughModel()  # Create placeholder.

        class Note(Model):
            text = TextField()
            users = ManyToManyField(User, through_model=NoteThroughDeferred)

        class NoteThrough(Model):
            user = ForeignKeyField(User)
            note = ForeignKeyField(Note)
            sort_order = IntegerField(default=0)

        # Now that all the models are defined, we can replace the placeholder
        # with the actual through model implementation.
        NoteThroughDeferred.set_model(NoteThrough)
    .. py:method:: set_model(model_class)

        Initialize the deferred placeholder with the appropriate model class.

.. py:class:: CompressedField([compression_level=6[, algorithm='zlib'[, **kwargs]]])

    ``CompressedField`` stores compressed data using the specified algorithm. This field extends :py:class:`BlobField`, transparently storing a compressed representation of the data in the database.

    :param int compression_level: A value from 0 to 9.
    :param str algorithm: Either ``'zlib'`` or ``'bz2'``.

.. py:class:: PasswordField([iterations=12[, **kwargs]])

    ``PasswordField`` stores a password hash and lets you verify it. The password is hashed when it is saved to the database, and after reading it from the database you can call ``check_password(password) -> bool`` on it.

    :param int iterations: Indicates the work factor, it does 2^n iterations.

    .. note:: This field requires `bcrypt <https://github.com/pyca/bcrypt>`_, which can be installed by running ``pip install bcrypt``.
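    A short sketch of how this looks in practice, using a hypothetical model and values (assumes ``bcrypt`` is installed):

    .. code-block:: python

        from playhouse.fields import PasswordField

        class User(Model):
            username = CharField()
            password = PasswordField()

        u = User.create(username='huey', password='meow')

        # After re-reading the row, the stored hash can verify a candidate
        # password without the plaintext ever being stored.
        u_db = User.get(User.username == 'huey')
        u_db.password.check_password('meow')  # -> True
        u_db.password.check_password('woof')  # -> False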
AND (t1."object_id" = ?)) [u'blog', 1] >>> [x.tag for x in b.tags] [u'awesome'] >>> [x.tag for x in b2.tags] [u'whiny'] >>> p = Photo.create(name='picture of cat') >>> Tag.create(object=p, tag='kitties') >>> Tag.create(object=p, tag='cats') >>> [x.tag for x in p.tags] [u'kitties', u'cats'] >>> [x.tag for x in Blog.tags] [u'awesome', u'whiny'] >>> t = Tag.get(Tag.tag == 'awesome') >>> t.object <__main__.Blog at 0x268f450> >>> t.object.name u'awesome post' GFK API ^^^^^^^ .. py:class:: GFKField([model_type_field='object_type'[, model_id_field='object_id']]) Provide a clean API for storing "generic" foreign keys. Generic foreign keys are comprised of an object type, which maps to a model class, and an object id, which maps to the primary key of the related model class. Setting the GFKField on a model will automatically populate the ``model_type_field`` and ``model_id_field``. Similarly, getting the GFKField on a model instance will "resolve" the two fields, first looking up the model class, then looking up the instance by ID. .. py:class:: ReverseGFK(model, [model_type_field='object_type'[, model_id_field='object_id']]) Back-reference support for :py:class:`GFKField`. .. _hybrid: Hybrid Attributes ----------------- Hybrid attributes encapsulate functionality that operates at both the Python *and* SQL levels. The idea for hybrid attributes comes from a feature of the `same name in SQLAlchemy `_. Consider the following example: .. code-block:: python class Interval(Model): start = IntegerField() end = IntegerField() @hybrid_property def length(self): return self.end - self.start @hybrid_method def contains(self, point): return (self.start <= point) & (point < self.end) The *hybrid attribute* gets its name from the fact that the ``length`` attribute will behave differently depending on whether it is accessed via the ``Interval`` class or an ``Interval`` instance. If accessed via an instance, then it behaves just as you would expect. If accessed via the ``Interval.length`` class attribute, however, the length calculation will be expressed as a SQL expression. For example: .. code-block:: python query = Interval.select().where(Interval.length > 5) This query will be equivalent to the following SQL: .. code-block:: sql SELECT "t1"."id", "t1"."start", "t1"."end" FROM "interval" AS t1 WHERE (("t1"."end" - "t1"."start") > 5) The ``hybrid`` module also contains a decorator for implementing hybrid methods which can accept parameters. As with hybrid properties, when accessed via a model instance, then the function executes normally as-written. When the hybrid method is called on the class, however, it will generate a SQL expression. Example: .. code-block:: python query = Interval.select().where(Interval.contains(2)) This query is equivalent to the following SQL: .. code-block:: sql SELECT "t1"."id", "t1"."start", "t1"."end" FROM "interval" AS t1 WHERE (("t1"."start" <= 2) AND (2 < "t1"."end")) There is an additional API for situations where the python implementation differs slightly from the SQL implementation. Let's add a ``radius`` method to the ``Interval`` model. Because this method calculates an absolute value, we will use the Python ``abs()`` function for the instance portion and the ``fn.ABS()`` SQL function for the class portion. .. 
code-block:: python class Interval(Model): start = IntegerField() end = IntegerField() @hybrid_property def length(self): return self.end - self.start @hybrid_property def radius(self): return abs(self.length) / 2 @radius.expression def radius(cls): return fn.ABS(cls.length) / 2 What is neat is that both ``radius`` implementations refer to the ``length`` hybrid attribute! When accessed via an ``Interval`` instance, the radius calculation will be executed in Python. When invoked via the ``Interval`` class, we will get the appropriate SQL. Example: .. code-block:: python query = Interval.select().where(Interval.radius < 3) This query is equivalent to the following SQL: .. code-block:: sql SELECT "t1"."id", "t1"."start", "t1"."end" FROM "interval" AS t1 WHERE ((abs("t1"."end" - "t1"."start") / 2) < 3) Pretty neat, right? Thanks for the cool idea, SQLAlchemy! Hybrid API ^^^^^^^^^^ .. py:class:: hybrid_method(func[, expr=None]) Method decorator that allows the definition of a Python object method with both instance-level and class-level behavior. Example: .. code-block:: python class Interval(Model): start = IntegerField() end = IntegerField() @hybrid_method def contains(self, point): return (self.start <= point) & (point < self.end) When called with an ``Interval`` instance, the ``contains`` method will behave as you would expect. When called as a classmethod, though, a SQL expression will be generated: .. code-block:: python query = Interval.select().where(Interval.contains(2)) Would generate the following SQL: .. code-block:: sql SELECT "t1"."id", "t1"."start", "t1"."end" FROM "interval" AS t1 WHERE (("t1"."start" <= 2) AND (2 < "t1"."end")) .. py:method:: expression(expr) Method decorator for specifying the SQL-expression-producing method. .. py:class:: hybrid_property(fget[, fset=None[, fdel=None[, expr=None]]]) Method decorator that allows the definition of a Python object property with both instance-level and class-level behavior. Examples: .. code-block:: python class Interval(Model): start = IntegerField() end = IntegerField() @hybrid_property def length(self): return self.end - self.start @hybrid_property def radius(self): return abs(self.length) / 2 @radius.expression def radius(cls): return fn.ABS(cls.length) / 2 When accessed on an ``Interval`` instance, the ``length`` and ``radius`` properties will behave as you would expect. When accessed as class attributes, though, a SQL expression will be generated instead: .. code-block:: python query = (Interval .select() .where( (Interval.length > 6) & (Interval.radius >= 3))) Would generate the following SQL: .. code-block:: sql SELECT "t1"."id", "t1"."start", "t1"."end" FROM "interval" AS t1 WHERE ( (("t1"."end" - "t1"."start") > 6) AND ((abs("t1"."end" - "t1"."start") / 2) >= 3) ) .. _kv: Key/Value Store --------------- Provides a simple key/value store with a dictionary API. By default the :py:class:`KeyStore` will use an in-memory SQLite database, but any database will work. To start using the key-store, create an instance and pass it a field to use for the values. .. code-block:: pycon >>> kv = KeyStore(TextField()) >>> kv['a'] = 'A' >>> kv['a'] 'A' .. note:: To store arbitrary Python objects, use the :py:class:`PickledKeyStore`, which stores values in a pickled :py:class:`BlobField`. If your objects are JSON-serializable, you can also use the :py:class:`JSONKeyStore`, which stores the values as JSON-encoded strings. Using the :py:class:`KeyStore` it is possible to use "expressions" to retrieve values from the dictionary.
For instance, imagine you want to get all keys which contain a certain substring: .. code-block:: python >>> keys_matching_substr = kv[kv.key % '%substr%'] >>> keys_start_with_a = kv[fn.Lower(fn.Substr(kv.key, 1, 1)) == 'a'] KeyStore API ^^^^^^^^^^^^ .. py:class:: KeyStore(value_field[, ordered=False[, database=None]]) Lightweight dictionary interface to a model containing a key and value. Implements common dictionary methods, such as ``__getitem__``, ``__setitem__``, ``get``, ``pop``, ``items``, ``keys``, and ``values``. :param Field value_field: Field instance to use as value field, e.g. an instance of :py:class:`TextField`. :param boolean ordered: Whether the keys should be returned in sorted order. :param Database database: :py:class:`Database` class to use for the storage backend. If none is supplied, an in-memory SQLite database will be used. Example: .. code-block:: pycon >>> from playhouse.kv import KeyStore >>> kv = KeyStore(TextField()) >>> kv['a'] = 'foo' >>> for k, v in kv: ... print k, v a foo >>> 'a' in kv True >>> 'b' in kv False .. py:class:: JSONKeyStore([ordered=False[, database=None]]) Identical to the :py:class:`KeyStore` except the values are stored as JSON-encoded strings, so you can store complex data-types like dictionaries and lists. Example: .. code-block:: pycon >>> from playhouse.kv import JSONKeyStore >>> jkv = JSONKeyStore() >>> jkv['a'] = 'A' >>> jkv['b'] = [1, 2, 3] >>> list(jkv.items()) [(u'a', 'A'), (u'b', [1, 2, 3])] .. py:class:: PickledKeyStore([ordered=False[, database=None]]) Identical to the :py:class:`KeyStore` except *anything* can be stored as a value in the dictionary. The storage for the value will be a pickled :py:class:`BlobField`. Example: .. code-block:: pycon >>> from playhouse.kv import PickledKeyStore >>> pkv = PickledKeyStore() >>> pkv['a'] = 'A' >>> pkv['b'] = 1.0 >>> list(pkv.items()) [(u'a', 'A'), (u'b', 1.0)] .. _shortcuts: Shortcuts --------- This module contains helper functions for expressing things that would otherwise be somewhat verbose or cumbersome using peewee's APIs. There are also helpers for serializing models to dictionaries and vice-versa. .. py:function:: case(predicate, expression_tuples, default=None) :param predicate: A SQL expression, or ``None``. :param expression_tuples: An iterable containing one or more 2-tuples comprised of an expression and return value. :param default: The default value if none of the cases match. Example SQL case statements: .. code-block:: sql -- case with predicate -- SELECT "username", CASE "user_id" WHEN 1 THEN "one" WHEN 2 THEN "two" ELSE "?" END FROM "users"; -- case with no predicate (inline expressions) -- SELECT "username", CASE WHEN "user_id" = 1 THEN "one" WHEN "user_id" = 2 THEN "two" ELSE "?" END FROM "users"; Equivalent function invocations: .. code-block:: python User.select(User.username, case(User.user_id, ( (1, "one"), (2, "two")), "?")) User.select(User.username, case(None, ( (User.user_id == 1, "one"), # note the double equals (User.user_id == 2, "two")), "?")) You can specify an alias for the CASE expression using the ``alias()`` method: .. code-block:: python User.select(User.username, case(User.user_id, ( (1, "one"), (2, "two")), "?").alias("id_string")) .. py:function:: cast(node, as_type) :param node: A peewee :py:class:`Node`, for instance a :py:class:`Field` or an :py:class:`Expression`. :param str as_type: The type name to cast to, e.g. ``'int'``. :returns: a function call to cast the node as the given type. Example: ..
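code-block:: python

    # A minimal sketch: compare a character column numerically by
    # casting it to an integer. The ``Registration`` model and its
    # ``code`` CharField are hypothetical.
    query = (Registration
             .select()
             .where(cast(Registration.code, 'int') > 1000))

A more involved example, combining ``cast()`` with SQL functions: ..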
code-block:: python # Find all data points whose numbers are palindromes. We do this by # casting the number to string, reversing it, then casting the reversed # string back to an integer. reverse_val = cast(fn.REVERSE(cast(DataPoint.value, 'str')), 'int') query = (DataPoint .select() .where(DataPoint.value == reverse_val)) .. py:function:: model_to_dict(model[, recurse=True[, backrefs=False[, only=None[, exclude=None[, extra_attrs=None[, fields_from_query=None]]]]]]) Convert a model instance (and optionally any related instances) to a dictionary. :param bool recurse: Whether foreign-keys should be recursed. :param bool backrefs: Whether lists of related objects should be recursed. :param only: A list (or set) of field instances which should be included in the result dictionary. :param exclude: A list (or set) of field instances which should be excluded from the result dictionary. :param extra_attrs: A list of attribute or method names on the instance which should be included in the dictionary. :param SelectQuery fields_from_query: The :py:class:`SelectQuery` that created this model instance. Only the fields and values explicitly selected by the query will be serialized. Examples: .. code-block:: pycon >>> user = User.create(username='charlie') >>> model_to_dict(user) {'id': 1, 'username': 'charlie'} >>> model_to_dict(user, backrefs=True) {'id': 1, 'tweets': [], 'username': 'charlie'} >>> t1 = Tweet.create(user=user, message='tweet-1') >>> t2 = Tweet.create(user=user, message='tweet-2') >>> model_to_dict(user, backrefs=True) { 'id': 1, 'tweets': [ {'id': 1, 'message': 'tweet-1'}, {'id': 2, 'message': 'tweet-2'}, ], 'username': 'charlie' } >>> model_to_dict(t1) { 'id': 1, 'message': 'tweet-1', 'user': { 'id': 1, 'username': 'charlie' } } >>> model_to_dict(t2, recurse=False) {'id': 2, 'message': 'tweet-2', 'user': 1} .. py:function:: dict_to_model(model_class, data[, ignore_unknown=False]) Convert a dictionary of data to a model instance, creating related instances where appropriate. :param Model model_class: The model class to construct. :param dict data: A dictionary of data. Foreign keys can be included as nested dictionaries, and back-references as lists of dictionaries. :param bool ignore_unknown: Whether to allow unrecognized (non-field) attributes. Examples: .. code-block:: pycon >>> user_data = {'id': 1, 'username': 'charlie'} >>> user = dict_to_model(User, user_data) >>> user <__main__.User at 0x7fea8fa4d490> >>> user.username 'charlie' >>> note_data = {'id': 2, 'text': 'note text', 'user': user_data} >>> note = dict_to_model(Note, note_data) >>> note.text 'note text' >>> note.user.username 'charlie' >>> user_with_notes = { ... 'id': 1, ... 'username': 'charlie', ... 'notes': [{'id': 1, 'text': 'note-1'}, {'id': 2, 'text': 'note-2'}]} >>> user = dict_to_model(User, user_with_notes) >>> user.notes[0].text 'note-1' >>> user.notes[0].user.username 'charlie' .. py:class:: RetryOperationalError() When mixed-in with a vendor-specific :py:class:`Database` subclass, this class overrides the :py:meth:`~Database.execute_sql` method to automatically reconnect and retry queries that fail due to an ``OperationalError``. The query that failed will be retried only once, and if it fails twice an exception will be raised. Usage: .. code-block:: python from peewee import * from playhouse.shortcuts import RetryOperationalError class MyRetryDB(RetryOperationalError, MySQLDatabase): pass db = MyRetryDB('my_app') ..
_signals: Signal support -------------- Models with hooks for signals (à la Django) are provided in ``playhouse.signals``. To use the signals, you will need all of your project's models to be subclasses of ``playhouse.signals.Model``, which overrides the necessary methods to provide support for the various signals. .. highlight:: python .. code-block:: python from playhouse.signals import Model, post_save class MyModel(Model): data = IntegerField() @post_save(sender=MyModel) def on_save_handler(model_class, instance, created): put_data_in_cache(instance.data) .. warning:: For what I hope are obvious reasons, Peewee signals do not work when you use the :py:meth:`Model.insert`, :py:meth:`Model.update`, or :py:meth:`Model.delete` methods. These methods generate queries that execute beyond the scope of the ORM, and the ORM does not know about which model instances might or might not be affected when the query executes. Signals work by hooking into the higher-level peewee APIs like :py:meth:`Model.save` and :py:meth:`Model.delete_instance`, where the affected model instance is known ahead of time. The following signals are provided: ``pre_save`` Called immediately before an object is saved to the database. Provides an additional keyword argument ``created``, indicating whether the model is being saved for the first time or updated. ``post_save`` Called immediately after an object is saved to the database. Provides an additional keyword argument ``created``, indicating whether the model is being saved for the first time or updated. ``pre_delete`` Called immediately before an object is deleted from the database when :py:meth:`Model.delete_instance` is used. ``post_delete`` Called immediately after an object is deleted from the database when :py:meth:`Model.delete_instance` is used. ``pre_init`` Called when a model is first instantiated. ``post_init`` Called after a model has been instantiated and the fields have been populated, for example when being selected as part of a database query. Connecting handlers ^^^^^^^^^^^^^^^^^^^ Whenever a signal is dispatched, it will call any handlers that have been registered. This allows totally separate code to respond to events like model save and delete. The :py:class:`Signal` class provides a :py:meth:`~Signal.connect` method, which takes a callback function and two optional parameters for "sender" and "name". If specified, the "sender" parameter should be a single model class and allows your callback to only receive signals from that one model class. The "name" parameter is used as a convenient alias in the event you wish to unregister your signal handler. Example usage: .. code-block:: python from playhouse.signals import * def post_save_handler(sender, instance, created): print '%s was just saved' % instance # our handler will only be called when we save instances of SomeModel post_save.connect(post_save_handler, sender=SomeModel) All signal handlers accept as their first two arguments ``sender`` and ``instance``, where ``sender`` is the model class and ``instance`` is the actual model being acted upon. If you'd like, you can also use a decorator to connect signal handlers. This is functionally equivalent to the above example: .. code-block:: python @post_save(sender=SomeModel) def post_save_handler(sender, instance, created): print '%s was just saved' % instance Signal API ^^^^^^^^^^ .. py:class:: Signal() Stores a list of receivers (callbacks) and calls them when the "send" method is invoked.
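Signals are plain objects, so you can also define your own. A minimal sketch; the ``low_stock`` signal, the ``send_email`` helper, and the ``product`` instance are all made up:

.. code-block:: python

    from playhouse.signals import Signal

    low_stock = Signal()

    def notify_purchasing(sender, instance):
        # `sender` is the model class, `instance` the model instance.
        send_email('purchasing@example.com', 'Restock %s' % instance.sku)

    low_stock.connect(notify_purchasing)

    # Somewhere in application code, dispatch the signal:
    low_stock.send(product)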
.. py:method:: connect(receiver[, sender=None[, name=None]]) Add the receiver to the internal list of receivers, which will be called whenever the signal is sent. :param callable receiver: a callable that takes at least two parameters, a "sender", which is the Model subclass that triggered the signal, and an "instance", which is the actual model instance. :param Model sender: if specified, only instances of this model class will trigger the receiver callback. :param string name: a short alias. .. code-block:: python from playhouse.signals import post_save from project.handlers import cache_buster post_save.connect(cache_buster, name='project.cache_buster') .. py:method:: disconnect([receiver=None[, name=None]]) Disconnect the given receiver (or the receiver with the given name alias) so that it no longer is called. Either the receiver or the name must be provided. :param callable receiver: the callback to disconnect :param string name: a short alias. .. code-block:: python post_save.disconnect(name='project.cache_buster') .. py:method:: send(instance, *args, **kwargs) Iterates over the receivers and will call them in the order in which they were connected. If the receiver specified a sender, it will only be called if the instance is an instance of the sender. :param instance: a model instance .. py:method:: __call__([sender=None[, name=None]]) Function decorator that is an alias for a signal's connect method: .. code-block:: python from playhouse.signals import post_save @post_save(name='project.cache_buster') def cache_bust_handler(sender, instance, *args, **kwargs): # bust the cache for this instance cache.delete(cache_key_for(instance)) .. _pwiz: pwiz, a model generator ----------------------- ``pwiz`` is a little script that ships with peewee and is capable of introspecting an existing database and generating model code suitable for interacting with the underlying data. If you have a database already, pwiz can give you a nice boost by generating skeleton code with correct column affinities and foreign keys. If you install peewee using ``setup.py install``, pwiz will be installed as a "script" and you can just run: .. highlight:: console .. code-block:: console python -m pwiz -e postgresql -u postgres my_postgres_db This will print a bunch of models to standard output. So you can do this: .. code-block:: console python -m pwiz -e postgresql my_postgres_db > mymodels.py python # <-- fire up an interactive shell .. highlight:: pycon .. code-block:: pycon >>> from mymodels import Blog, Entry, Tag, Whatever >>> print [blog.name for blog in Blog.select()]

====== ========================= ============================================
Option Meaning                   Example
====== ========================= ============================================
-h     show help
-e     database backend          -e mysql
-H     host to connect to        -H remote.db.server
-p     port to connect on        -p 9001
-u     database user             -u postgres
-P     database password         -P secret
-s     postgres schema           -s public
====== ========================= ============================================

The following are valid values for the engine (``-e``) option: * sqlite * mysql * postgresql .. _migrate: Schema Migrations ----------------- Peewee now supports schema migrations, with well-tested support for Postgresql, SQLite and MySQL. Unlike other schema migration tools, peewee's migrations do not handle introspection and database "versioning". Rather, peewee provides a number of helper functions for generating and running schema-altering statements.
This engine provides the basis on which a more sophisticated tool could someday be built. Migrations can be written as simple Python scripts and executed from the command-line. Since the migrations only depend on your application's :py:class:`Database` object, it should be easy to manage changing your model definitions and maintaining a set of migration scripts without introducing dependencies. Example usage ^^^^^^^^^^^^^ Begin by importing the helpers from the `migrate` module: .. code-block:: python from playhouse.migrate import * Instantiate a ``migrator``. The :py:class:`SchemaMigrator` class is responsible for generating schema-altering operations, which can then be run sequentially by the :py:func:`migrate` helper. .. code-block:: python # Postgres example: my_db = PostgresqlDatabase(...) migrator = PostgresqlMigrator(my_db) # SQLite example: my_db = SqliteDatabase('my_database.db') migrator = SqliteMigrator(my_db) Use :py:func:`migrate` to execute one or more operations: .. code-block:: python title_field = CharField(default='') status_field = IntegerField(null=True) migrate( migrator.add_column('some_table', 'title', title_field), migrator.add_column('some_table', 'status', status_field), migrator.drop_column('some_table', 'old_column'), ) .. warning:: Migrations are not run inside a transaction. If you wish the migration to run in a transaction, you will need to wrap the call to `migrate` in a transaction block, e.g. .. code-block:: python with my_db.transaction(): migrate(...) Supported Operations ^^^^^^^^^^^^^^^^^^^^ Add new field(s) to an existing model: .. code-block:: python # Create your field instances. For non-null fields you must specify a # default value. pubdate_field = DateTimeField(null=True) comment_field = TextField(default='') # Run the migration, specifying the database table, field name and field. migrate( migrator.add_column('comment_tbl', 'pub_date', pubdate_field), migrator.add_column('comment_tbl', 'comment', comment_field), ) Renaming a field: .. code-block:: python # Specify the table, original name of the column, and its new name. migrate( migrator.rename_column('story', 'pub_date', 'publish_date'), migrator.rename_column('story', 'mod_date', 'modified_date'), ) Dropping a field: .. code-block:: python migrate( migrator.drop_column('story', 'some_old_field'), ) Making a field nullable or not nullable: .. code-block:: python # Note that when making a field not null that field must not have any # NULL values present. migrate( # Make `pub_date` allow NULL values. migrator.drop_not_null('story', 'pub_date'), # Prevent `modified_date` from containing NULL values. migrator.add_not_null('story', 'modified_date'), ) Renaming a table: .. code-block:: python migrate( migrator.rename_table('story', 'stories_tbl'), ) Adding an index: .. code-block:: python # Specify the table, column names, and whether the index should be # UNIQUE or not. migrate( # Create an index on the `pub_date` column. migrator.add_index('story', ('pub_date',), False), # Create a multi-column index on the `pub_date` and `status` fields. migrator.add_index('story', ('pub_date', 'status'), False), # Create a unique index on the category and title fields. migrator.add_index('story', ('category_id', 'title'), True), ) Dropping an index: .. code-block:: python # Specify the index name. migrate(migrator.drop_index('story', 'story_pub_date_status')) Migrations API ^^^^^^^^^^^^^^ .. py:function:: migrate(*operations) Execute one or more schema-altering operations. Usage: ..
code-block:: python migrate( migrator.add_column('some_table', 'new_column', CharField(default='')), migrator.add_index('some_table', ('new_column',), False), ) .. py:class:: SchemaMigrator(database) :param database: a :py:class:`Database` instance. The :py:class:`SchemaMigrator` is responsible for generating schema-altering statements. .. py:method:: add_column(table, column_name, field) :param str table: Name of the table to add column to. :param str column_name: Name of the new column. :param Field field: A :py:class:`Field` instance. Add a new column to the provided table. The ``field`` provided will be used to generate the appropriate column definition. .. note:: If the field is not nullable it must specify a default value. .. note:: For non-null fields, the field will initially be added as a null field, then an ``UPDATE`` statement will be executed to populate the column with the default value. Finally, the column will be marked as not null. .. py:method:: drop_column(table, column_name[, cascade=True]) :param str table: Name of the table to drop column from. :param str column_name: Name of the column to drop. :param bool cascade: Whether the column should be dropped with `CASCADE`. .. py:method:: rename_column(table, old_name, new_name) :param str table: Name of the table containing column to rename. :param str old_name: Current name of the column. :param str new_name: New name for the column. .. py:method:: add_not_null(table, column) :param str table: Name of table containing column. :param str column: Name of the column to make not nullable. .. py:method:: drop_not_null(table, column) :param str table: Name of table containing column. :param str column: Name of the column to make nullable. .. py:method:: rename_table(old_name, new_name) :param str old_name: Current name of the table. :param str new_name: New name for the table. .. py:method:: add_index(table, columns[, unique=False]) :param str table: Name of table on which to create the index. :param list columns: List of columns which should be indexed. :param bool unique: Whether the new index should specify a unique constraint. .. py:method:: drop_index(table, index_name) :param str table: Name of the table containing the index to be dropped. :param str index_name: Name of the index to be dropped. .. py:class:: PostgresqlMigrator(database) Generate migrations for Postgresql databases. .. py:class:: SqliteMigrator(database) Generate migrations for SQLite databases. .. py:class:: MySQLMigrator(database) Generate migrations for MySQL databases. .. _reflection: Reflection ---------- The reflection module contains helpers for introspecting existing databases. This module is used internally by several other modules in the playhouse, including :ref:`dataset` and :ref:`pwiz`. .. py:class:: Introspector(metadata[, schema=None]) Metadata can be extracted from a database by instantiating an :py:class:`Introspector`. Rather than instantiating this class directly, it is recommended to use the factory method :py:meth:`~Introspector.from_database`. .. py:classmethod:: from_database(database[, schema=None]) Creates an :py:class:`Introspector` instance suitable for use with the given database. :param database: a :py:class:`Database` instance. :param str schema: an optional schema (supported by some databases). Usage: ..
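code-block:: python

    # A minimal sketch: introspect a Postgres database, restricting the
    # introspection to a single schema. The database and schema names
    # here are hypothetical.
    db = PostgresqlDatabase('my_app', user='postgres')
    introspector = Introspector.from_database(db, schema='analytics')

A fuller example, generating model classes from an introspected SQLite database: ..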
code-block:: python db = SqliteDatabase('my_app.db') introspector = Introspector.from_database(db) models = introspector.generate_models() # User and Tweet (assumed to exist in the database) are # peewee Model classes generated from the database schema. User = models['user'] Tweet = models['tweet'] .. py:method:: generate_models() Introspect the database, reading in the tables, columns, and foreign key constraints, then generate a dictionary mapping each database table to a dynamically-generated :py:class:`Model` class. :return: A dictionary mapping table-names to model classes. .. _db_url: Database URL ------------ This module contains a helper function to generate a database connection from a URL connection string. .. py:function:: connect(url, **connect_params) Create a :py:class:`Database` instance from the given connection URL. Examples: * *sqlite:///my_database.db* will create a :py:class:`SqliteDatabase` instance for the file ``my_database.db`` in the current directory. * *sqlite:///:memory:* will create an in-memory :py:class:`SqliteDatabase` instance. * *postgresql://postgres:my_password@localhost:5432/my_database* will create a :py:class:`PostgresqlDatabase` instance. A username and password are provided, as well as the host and port to connect to. * *mysql://user:passwd@ip:port/my_db* will create a :py:class:`MySQLDatabase` instance for the MySQL database *my_db*. * *mysql+pool://user:passwd@ip:port/my_db?max_connections=20&stale_timeout=300* will create a :py:class:`PooledMySQLDatabase` instance for the MySQL database *my_db* with max_connections set to 20 and a stale_timeout setting of 300 seconds. Supported schemes: * ``apsw``: :py:class:`APSWDatabase` * ``mysql``: :py:class:`MySQLDatabase` * ``mysql+pool``: :py:class:`PooledMySQLDatabase` * ``postgres``: :py:class:`PostgresqlDatabase` * ``postgres+pool``: :py:class:`PooledPostgresqlDatabase` * ``postgresext``: :py:class:`PostgresqlExtDatabase` * ``postgresext+pool``: :py:class:`PooledPostgresqlExtDatabase` * ``sqlite``: :py:class:`SqliteDatabase` * ``sqliteext``: :py:class:`SqliteExtDatabase` * ``sqlite+pool``: :py:class:`PooledSqliteDatabase` * ``sqliteext+pool``: :py:class:`PooledSqliteExtDatabase` Usage: .. code-block:: python import os from playhouse.db_url import connect # Connect to the database URL defined in the environment, falling # back to a local Sqlite database if no database URL is specified. db = connect(os.environ.get('DATABASE') or 'sqlite:///default.db') .. py:function:: parse(url) Parse the information in the given URL into a dictionary containing ``database``, ``host``, ``port``, ``user`` and/or ``password``. Additional connection arguments can be passed in the URL query string. If you are using a custom database class, you can use the ``parse()`` function to extract information from a URL which can then be passed in to your database object. .. py:function:: register_database(db_class, *names) :param db_class: A subclass of :py:class:`Database`. :param names: A list of names to use as the scheme in the URL, e.g. 'sqlite' or 'firebird'. Register an additional database class under the specified names. This function can be used to extend the ``connect()`` function to support additional schemes. Suppose you have a custom database class for ``Firebird`` named ``FirebirdDatabase``. .. code-block:: python from playhouse.db_url import connect, register_database register_database(FirebirdDatabase, 'firebird') db = connect('firebird://my-firebird-db')
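When working with a custom database class, the ``parse()`` function can also do the heavy lifting of splitting a URL into connection parameters. A minimal sketch, assuming the parsed keys map directly onto the database constructor (the URL shown is made up):

.. code-block:: python

    from playhouse.db_url import parse

    params = parse('postgres://postgres:secret@db.internal:5432/analytics')
    # params -> {'database': 'analytics', 'host': 'db.internal',
    #            'port': 5432, 'user': 'postgres', 'password': 'secret'}
    db = PostgresqlDatabase(**params)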
.. _csv_utils: CSV Utils --------- This module contains helpers for dumping queries into CSV, and for loading CSV data into a database. CSV files can be introspected to generate an appropriate model class for working with the data. This makes it really easy to explore the data in a CSV file using Peewee and SQL. Here is how you would load a CSV file into an in-memory SQLite database. The call to :py:func:`load_csv` returns a :py:class:`Model` class suitable for working with the CSV data: .. code-block:: python from peewee import * from playhouse.csv_loader import load_csv db = SqliteDatabase(':memory:') ZipToTZ = load_csv(db, 'zip_to_tz.csv') Now we can run queries using the new model. .. code-block:: pycon # Get the timezone for a zipcode. >>> ZipToTZ.get(ZipToTZ.zip == 66047).timezone 'US/Central' # Get all the zipcodes for my town. >>> [row.zip for row in ZipToTZ.select().where( ... (ZipToTZ.city == 'Lawrence') & (ZipToTZ.state == 'KS'))] [66044, 66045, 66046, 66047, 66049] CSV Loader API ^^^^^^^^^^^^^^ .. py:function:: load_csv(db_or_model, filename[, fields=None[, field_names=None[, has_header=True[, sample_size=10[, converter=None[, db_table=None[, **reader_kwargs]]]]]]]) Load a CSV file into the provided database or model class, returning a :py:class:`Model` suitable for working with the CSV data. :param db_or_model: Either a :py:class:`Database` instance or a :py:class:`Model` class. If a model is not provided, one will be automatically generated for you. :param str filename: Path of CSV file to load. :param list fields: A list of :py:class:`Field` instances mapping to each column in the CSV. This allows you to manually specify the column types. If not provided, and a model is not provided, the field types will be determined automatically. :param list field_names: A list of strings to use as field names for each column in the CSV. If not provided, and a model is not provided, the field names will be determined by looking at the header row of the file. If no header exists, then the fields will be given generic names. :param bool has_header: Whether the first row is a header. :param int sample_size: Number of rows to look at when introspecting data types. If set to ``0``, then a generic field type will be used for all fields. :param RowConverter converter: a :py:class:`RowConverter` instance to use for introspecting the CSV. If not provided, one will be created. :param str db_table: The name of the database table to load data into. If this value is not provided, it will be determined using the filename of the CSV file. If a model is provided, this value is ignored. :param reader_kwargs: Arbitrary keyword arguments to pass to the ``csv.reader`` object, such as the dialect, separator, etc. :rtype: A :py:class:`Model` suitable for querying the CSV data. Basic example -- field names and types will be introspected: .. code-block:: python from peewee import * from playhouse.csv_loader import * db = SqliteDatabase(':memory:') User = load_csv(db, 'users.csv') Using a pre-defined model: .. code-block:: python class ZipToTZ(Model): zip = IntegerField() timezone = CharField() load_csv(ZipToTZ, 'zip_to_tz.csv') Specifying fields: .. code-block:: python fields = [DecimalField(), IntegerField(), IntegerField(), DateField()] field_names = ['amount', 'from_acct', 'to_acct', 'timestamp'] Payments = load_csv(db, 'payments.csv', fields=fields, field_names=field_names, has_header=False) Dumping CSV ^^^^^^^^^^^ ..
py:function:: dump_csv(query, file_or_name[, include_header=True[, close_file=True[, append=True[, csv_writer=None]]]]) :param query: A peewee :py:class:`SelectQuery` to dump as CSV. :param file_or_name: Either a filename or a file-like object. :param include_header: Whether to generate a CSV header row consisting of the names of the selected columns. :param close_file: Whether the file should be closed after writing the query data. :param append: Whether new data should be appended to the end of the file. :param csv_writer: A python ``csv.writer`` instance to use. Example usage: .. code-block:: python with open('account-export.csv', 'w') as fh: query = Account.select().order_by(Account.id) dump_csv(query, fh) .. _pool: Connection pool --------------- The ``pool`` module contains a number of :py:class:`Database` classes that provide connection pooling for PostgreSQL and MySQL databases. The pool works by overriding the methods on the :py:class:`Database` class that open and close connections to the backend. The pool can specify a timeout after which connections are recycled, as well as an upper bound on the number of open connections. In a multi-threaded application, up to `max_connections` will be opened. Each thread (or, if using gevent, greenlet) will have its own connection. In a single-threaded application, only one connection will be created. It will be continually recycled until either it exceeds the stale timeout or is closed explicitly (using `.manual_close()`). **By default, all your application needs to do is ensure that connections are closed when you are finished with them, and they will be returned to the pool**. For web applications, this typically means that at the beginning of a request, you will open a connection, and when you return a response, you will close the connection. Simple Postgres pool example code: .. code-block:: python # Use the special postgresql extensions. from playhouse.pool import PooledPostgresqlExtDatabase db = PooledPostgresqlExtDatabase( 'my_app', max_connections=32, stale_timeout=300, # 5 minutes. user='postgres') class BaseModel(Model): class Meta: database = db That's it! If you would like finer-grained control over the pool of connections, check out the :ref:`advanced_connection_management` section. Pool APIs ^^^^^^^^^ .. py:class:: PooledDatabase(database[, max_connections=20[, stale_timeout=None[, timeout=None[, **kwargs]]]]) Mixin class intended to be used with a subclass of :py:class:`Database`. :param str database: The name of the database or database file. :param int max_connections: Maximum number of connections. Provide ``None`` for unlimited. :param int stale_timeout: Number of seconds to allow connections to be used. :param int timeout: Number of seconds to block when the pool is full. By default peewee does not block when the pool is full but simply throws an exception. To block indefinitely set this value to ``0``. :param kwargs: Arbitrary keyword arguments passed to database class. .. note:: Connections will not be closed exactly when they exceed their `stale_timeout`. Instead, stale connections are only closed when a new connection is requested. .. note:: If the number of open connections exceeds `max_connections`, a `ValueError` will be raised. .. py:method:: _connect(*args, **kwargs) Request a connection from the pool. If there are no available connections a new one will be opened. .. py:method:: _close(conn[, close_conn=False]) By default `conn` will not be closed and instead will be returned to the pool of available connections.
If `close_conn=True`, then `conn` will be closed and *not* be returned to the pool. .. py:method:: manual_close() Close the currently-open connection without returning it to the pool. .. py:class:: PooledPostgresqlDatabase Subclass of :py:class:`PostgresqlDatabase` that mixes in the :py:class:`PooledDatabase` helper. .. py:class:: PooledPostgresqlExtDatabase Subclass of :py:class:`PostgresqlExtDatabase` that mixes in the :py:class:`PooledDatabase` helper. The :py:class:`PostgresqlExtDatabase` is a part of the :ref:`postgres_ext` module and provides support for many Postgres-specific features. .. py:class:: PooledMySQLDatabase Subclass of :py:class:`MySQLDatabase` that mixes in the :py:class:`PooledDatabase` helper. .. py:class:: PooledSqliteDatabase Persistent connections for SQLite apps. .. py:class:: PooledSqliteExtDatabase Persistent connections for SQLite apps, using the :ref:`sqlite_ext` advanced database driver :py:class:`SqliteExtDatabase`. .. _read_slaves: Read Slaves ----------- The ``read_slave`` module contains a :py:class:`Model` subclass that can be used to automatically execute ``SELECT`` queries against different database(s). This might be useful if you have your databases in a master / slave configuration. .. py:class:: ReadSlaveModel Model subclass that will route ``SELECT`` queries to a different database. Master and read-slaves are specified using ``Model.Meta``: .. code-block:: python # Declare a master and two read-replicas. master = PostgresqlDatabase('master') replica_1 = PostgresqlDatabase('replica_1') replica_2 = PostgresqlDatabase('replica_2') # Declare a BaseModel, the normal best-practice. class BaseModel(ReadSlaveModel): class Meta: database = master read_slaves = (replica_1, replica_2) # Declare your models. class User(BaseModel): username = CharField() When you execute writes (or deletes), they will be executed against the master database: .. code-block:: python User.create(username='Peewee') # Executed against master. When you execute a read query, it will run against one of the replicas: .. code-block:: python users = User.select().where(User.username == 'Peewee') .. note:: To force a ``SELECT`` query against the master database, manually create the :py:class:`SelectQuery`. .. code-block:: python SelectQuery(User) # master database. .. note:: Queries will be dispatched among the ``read_slaves`` in round-robin fashion. .. _test_utils: Test Utils ---------- Contains utilities helpful when testing peewee projects. .. py:class:: test_database(db, models[, create_tables=True[, fail_silently=False]]) Context manager that lets you use a different database with a set of models. Models can also be automatically created and dropped. This context manager helps make it possible to test your peewee models using a "test-only" database. :param Database db: Database to use with the given models :param models: a ``list`` or ``tuple`` of :py:class:`Model` classes to use with the ``db`` :param boolean create_tables: Whether tables should be automatically created and dropped. :param boolean fail_silently: Whether the table create / drop should fail silently. Example: .. code-block:: python from unittest import TestCase from playhouse.test_utils import test_database from peewee import * from my_app.models import User, Tweet test_db = SqliteDatabase(':memory:') class TestUsersTweets(TestCase): def create_test_data(self): # ... 
create a bunch of users and tweets for i in range(10): User.create(username='user-%d' % i) def test_timeline(self): with test_database(test_db, (User, Tweet)): # This data will be created in `test_db` self.create_test_data() # Perform assertions on test data inside ctx manager. self.assertEqual(Tweet.timeline('user-0') [...]) with test_database(test_db, (User,)): # Test something that just affects user. self.test_some_user_thing() # once we exit the context manager, we're back to using the normal database .. py:class:: count_queries([only_select=False]) Context manager that will count the number of queries executed within the context. :param bool only_select: Only count *SELECT* queries. .. code-block:: python with count_queries() as counter: huey = User.get(User.username == 'huey') huey_tweets = [tweet.message for tweet in huey.tweets] assert counter.count == 2 .. py:attribute:: count The number of queries executed. .. py:method:: get_queries() Return a list of 2-tuples consisting of the SQL query and a list of parameters. .. py:function:: assert_query_count(expected[, only_select=False]) Function or method decorator that will raise an ``AssertionError`` if the number of queries executed in the decorated function does not equal the expected number. .. code-block:: python class TestMyApp(unittest.TestCase): @assert_query_count(1) def test_get_popular_blogs(self): popular_blogs = Blog.get_popular() self.assertEqual( [blog.title for blog in popular_blogs], ["Peewee's Playhouse!", "All About Huey", "Mickey's Adventures"]) This function can also be used as a context manager: .. code-block:: python class TestMyApp(unittest.TestCase): def test_expensive_operation(self): with assert_query_count(1): perform_expensive_operation() .. _pskel: pskel ----- I often find myself writing very small scripts with peewee. *pskel* will generate the boilerplate code for a basic peewee script. Usage:: pskel [options] model1 model2 ... *pskel* accepts the following options:

================= ============= =======================================
Option            Default       Description
================= ============= =======================================
``-l,--logging``  False         Log all queries to stdout.
``-e,--engine``   sqlite        Database driver to use.
``-d,--database`` ``:memory:``  Database to connect to.
================= ============= =======================================

Example:: $ pskel -e postgres -d my_database User Tweet This will print the following code to *stdout* (which you can redirect into a file using ``>``): .. code-block:: python #!/usr/bin/env python import logging from peewee import * from peewee import create_model_tables db = PostgresqlDatabase('my_database') class BaseModel(Model): class Meta: database = db class User(BaseModel): pass class Tweet(BaseModel): pass def main(): create_model_tables([User, Tweet], fail_silently=True) if __name__ == '__main__': main() .. _flask_utils: Flask Utils ----------- The ``playhouse.flask_utils`` module contains several helpers for integrating peewee with the Flask web framework. Database Wrapper ^^^^^^^^^^^^^^^^ The :py:class:`FlaskDB` class is a wrapper for configuring and referencing a Peewee database from within a Flask application. Don't let its name fool you: it is **not the same thing as a peewee database**. ``FlaskDB`` is designed to remove the following boilerplate from your Flask app: * Dynamically create a Peewee database instance based on app config data. * Create a base class from which all your application's models will descend.
* Register hooks at the start and end of a request to handle opening and closing a database connection. Basic usage: .. code-block:: python import datetime from flask import Flask from peewee import * from playhouse.flask_utils import FlaskDB DATABASE = 'postgresql://postgres:password@localhost:5432/my_database' app = Flask(__name__) app.config.from_object(__name__) db_wrapper = FlaskDB(app) class User(db_wrapper.Model): username = CharField(unique=True) class Tweet(db_wrapper.Model): user = ForeignKeyField(User, related_name='tweets') content = TextField() timestamp = DateTimeField(default=datetime.datetime.now) The above code example will create a peewee :py:class:`PostgresqlDatabase` instance from the given database URL. Request hooks will be configured to establish a connection when a request is received, and automatically close the connection when the response is sent. Lastly, the :py:class:`FlaskDB` class exposes a :py:attr:`FlaskDB.Model` property which can be used as a base for your application's models. Here is how you can access the wrapped Peewee database instance that is configured for you by the ``FlaskDB`` wrapper: .. code-block:: python # Obtain a reference to the Peewee database instance. peewee_db = db_wrapper.database @app.route('/transfer-funds/', methods=['POST']) def transfer_funds(): with peewee_db.atomic(): # ... return jsonify({'transfer-id': xid}) .. note:: The actual peewee database can be accessed using the ``FlaskDB.database`` attribute. Here is another way to configure a Peewee database using ``FlaskDB``: .. code-block:: python app = Flask(__name__) db_wrapper = FlaskDB(app, 'sqlite:///my_app.db') While the above examples show using a database URL, for more advanced usages you can specify a dictionary of configuration options, or simply pass in a peewee :py:class:`Database` instance: .. code-block:: python DATABASE = { 'name': 'my_app_db', 'engine': 'playhouse.pool.PooledPostgresqlDatabase', 'user': 'postgres', 'max_connections': 32, 'stale_timeout': 600, } app = Flask(__name__) app.config.from_object(__name__) wrapper = FlaskDB(app) pooled_postgres_db = wrapper.database Using a peewee :py:class:`Database` object: .. code-block:: python peewee_db = PostgresqlExtDatabase('my_app') app = Flask(__name__) db_wrapper = FlaskDB(app, peewee_db) Database with Application Factory ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If you prefer to use the application factory pattern, the :py:class:`FlaskDB` class implements an ``init_app()`` method. Using as a factory: .. code-block:: python db_wrapper = FlaskDB() # Even though the database is not yet initialized, you can still use the # `Model` property to create model classes. class User(db_wrapper.Model): username = CharField(unique=True) def create_app(): app = Flask(__name__) app.config['DATABASE'] = 'sqlite:////home/code/apps/my-database.db' db_wrapper.init_app(app) return app Query utilities ^^^^^^^^^^^^^^^ The ``flask_utils`` module provides several helpers for managing queries in your web app. Some common patterns include: .. py:function:: get_object_or_404(query_or_model, *query) Retrieve the object matching the given query, or return a 404 not found response. A common use-case might be a detail page for a weblog. You want to either retrieve the post matching the given URL, or return a 404. :param query_or_model: Either a :py:class:`Model` class or a pre-filtered :py:class:`SelectQuery`. :param query: An arbitrarily complex peewee expression. Example: ..
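code-block:: python

    # A minimal sketch: look up a user by username or return a 404.
    # The ``User`` model and the template name are assumptions.
    @app.route('/users/<username>/')
    def user_detail(username):
        user = get_object_or_404(User, User.username == username)
        return render_template('user_detail.html', user=user)

A more realistic example, restricting the lookup to published posts: ..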
code-block:: python @app.route('/blog/<slug>/') def post_detail(slug): public_posts = Post.select().where(Post.published == True) post = get_object_or_404(public_posts, (Post.slug == slug)) return render_template('post_detail.html', post=post) .. py:function:: object_list(template_name, query[, context_variable='object_list'[, paginate_by=20[, page_var='page'[, check_bounds=True[, **kwargs]]]]]) Retrieve a paginated list of objects specified by the given query. The paginated object list will be dropped into the context using the given ``context_variable``, as well as metadata about the current page and total number of pages, and finally any arbitrary context data passed as keyword-arguments. The page is specified using the ``page`` ``GET`` argument, e.g. ``/my-object-list/?page=3`` would return the third page of objects. :param template_name: The name of the template to render. :param query: A :py:class:`SelectQuery` instance to paginate. :param context_variable: The context variable name to use for the paginated object list. :param paginate_by: Number of objects per page. :param page_var: The name of the ``GET`` argument which contains the page. :param check_bounds: Whether to check that the given page is a valid page. If ``check_bounds`` is ``True`` and an invalid page is specified, then a 404 will be returned. :param kwargs: Arbitrary key/value pairs to pass into the template context. Example: .. code-block:: python @app.route('/blog/') def post_index(): public_posts = (Post .select() .where(Post.published == True) .order_by(Post.timestamp.desc())) return object_list( 'post_index.html', query=public_posts, context_variable='post_list', paginate_by=10) The template will have the following context: * ``post_list``, which contains a list of up to 10 posts. * ``page``, which contains the current page based on the value of the ``page`` ``GET`` parameter. * ``pagination``, a :py:class:`PaginatedQuery` instance. .. py:class:: PaginatedQuery(query_or_model, paginate_by[, page_var='page'[, check_bounds=False]]) Helper class to perform pagination based on ``GET`` arguments. :param query_or_model: Either a :py:class:`Model` or a :py:class:`SelectQuery` instance containing the collection of records you wish to paginate. :param paginate_by: Number of objects per page. :param page_var: The name of the ``GET`` argument which contains the page. :param check_bounds: Whether to check that the given page is a valid page. If ``check_bounds`` is ``True`` and an invalid page is specified, then a 404 will be returned. .. py:method:: get_page() Return the currently selected page, as indicated by the value of the ``page_var`` ``GET`` parameter. If no page is explicitly selected, then this method will return 1, indicating the first page. .. py:method:: get_page_count() Return the total number of possible pages. .. py:method:: get_object_list() Using the value of :py:meth:`~PaginatedQuery.get_page`, return the page of objects requested by the user. The return value is a :py:class:`SelectQuery` with the appropriate ``LIMIT`` and ``OFFSET`` clauses. If ``check_bounds`` was set to ``True`` and the requested page contains no objects, then a 404 will be raised. peewee-2.10.2/docs/peewee/querying.rst000066400000000000000000001766521316645060400176700ustar00rootroot00000000000000.. _querying: Querying ======== This section will cover the basic CRUD operations commonly performed on a relational database: * :py:meth:`Model.create`, for executing *INSERT* queries.
* :py:meth:`Model.save` and :py:meth:`Model.update`, for executing *UPDATE* queries. * :py:meth:`Model.delete_instance` and :py:meth:`Model.delete`, for executing *DELETE* queries. * :py:meth:`Model.select`, for executing *SELECT* queries. Creating a new record --------------------- You can use :py:meth:`Model.create` to create a new model instance. This method accepts keyword arguments, where the keys correspond to the names of the model's fields. A new instance is returned and a row is added to the table. .. code-block:: pycon >>> User.create(username='Charlie') <__main__.User object at 0x2529350> This will *INSERT* a new row into the database. The primary key will automatically be retrieved and stored on the model instance. Alternatively, you can build up a model instance programmatically and then call :py:meth:`~Model.save`: .. code-block:: pycon >>> user = User(username='Charlie') >>> user.save() # save() returns the number of rows modified. 1 >>> user.id 1 >>> huey = User() >>> huey.username = 'Huey' >>> huey.save() 1 >>> huey.id 2 When a model has a foreign key, you can directly assign a model instance to the foreign key field when creating a new record. .. code-block:: pycon >>> tweet = Tweet.create(user=huey, message='Hello!') You can also use the value of the related object's primary key: .. code-block:: pycon >>> tweet = Tweet.create(user=2, message='Hello again!') If you simply wish to insert data and do not need to create a model instance, you can use :py:meth:`Model.insert`: .. code-block:: pycon >>> User.insert(username='Mickey').execute() 3 After executing the insert query, the primary key of the new row is returned. .. note:: There are several ways you can speed up bulk insert operations. Check out the :ref:`bulk_inserts` recipe section for more information. .. _bulk_inserts: Bulk inserts ------------ There are a couple of ways you can load lots of data quickly. The naive approach is to simply call :py:meth:`Model.create` in a loop: .. code-block:: python data_source = [ {'field1': 'val1-1', 'field2': 'val1-2'}, {'field1': 'val2-1', 'field2': 'val2-2'}, # ... ] for data_dict in data_source: Model.create(**data_dict) The above approach is slow for several reasons: 1. If you are using autocommit (the default), then each call to :py:meth:`~Model.create` happens in its own transaction. That is going to be really slow! 2. There is a decent amount of Python logic getting in your way, and each :py:class:`InsertQuery` must be generated and parsed into SQL. 3. That's a lot of data (in terms of raw bytes of SQL) you are sending to your database to parse. 4. We are retrieving the *last insert id*, which causes an additional query to be executed in some cases. You can get a **very significant speedup** by simply wrapping this in a :py:meth:`~Database.atomic`. .. code-block:: python # This is much faster. with db.atomic(): for data_dict in data_source: Model.create(**data_dict) The above code still suffers from points 2, 3 and 4. We can get another big boost by calling :py:meth:`~Model.insert_many`. This method accepts a list of dictionaries to insert. .. code-block:: python # Fastest. with db.atomic(): Model.insert_many(data_source).execute() Depending on the number of rows in your data source, you may need to break it up into chunks: .. code-block:: python # Insert rows 100 at a time. with db.atomic(): for idx in range(0, len(data_source), 100): Model.insert_many(data_source[idx:idx+100]).execute() .. note:: SQLite users should be aware of some caveats when using bulk inserts.
Specifically, your SQLite3 version must be 3.7.11.0 or newer to take advantage of the bulk insert API. Additionally, by default SQLite limits the number of bound variables in a SQL query to ``999``. This value can be modified by setting the ``SQLITE_MAX_VARIABLE_NUMBER`` flag. If the data you would like to bulk load is stored in another table, you can also create *INSERT* queries whose source is a *SELECT* query. Use the :py:meth:`Model.insert_from` method: .. code-block:: python query = (TweetArchive .insert_from( fields=[Tweet.user, Tweet.message], query=Tweet.select(Tweet.user, Tweet.message)) .execute()) Updating existing records ------------------------- Once a model instance has a primary key, any subsequent call to :py:meth:`~Model.save` will result in an *UPDATE* rather than another *INSERT*. The model's primary key will not change: .. code-block:: pycon >>> user.save() # save() returns the number of rows modified. 1 >>> user.id 1 >>> user.save() >>> user.id 1 >>> huey.save() 1 >>> huey.id 2 If you want to update multiple records, issue an *UPDATE* query. The following example will update all ``Tweet`` objects, marking them as *published*, if they were created before today. :py:meth:`Model.update` accepts keyword arguments where the keys correspond to the model's field names: .. code-block:: pycon >>> today = datetime.today() >>> query = Tweet.update(is_published=True).where(Tweet.creation_date < today) >>> query.execute() # Returns the number of rows that were updated. 4 For more information, see the documentation on :py:meth:`Model.update` and :py:class:`UpdateQuery`. .. note:: If you would like more information on performing atomic updates (such as incrementing the value of a column), check out the :ref:`atomic update <atomic_updates>` recipes. .. _atomic_updates: Atomic updates -------------- Peewee allows you to perform atomic updates. Let's suppose we need to update some counters. The naive approach would be to write something like this: .. code-block:: pycon >>> for stat in Stat.select().where(Stat.url == request.url): ... stat.counter += 1 ... stat.save() **Do not do this!** Not only is this slow, but it is also vulnerable to race conditions if multiple processes are updating the counter at the same time. Instead, you can update the counters atomically using :py:meth:`~Model.update`: .. code-block:: pycon >>> query = Stat.update(counter=Stat.counter + 1).where(Stat.url == request.url) >>> query.execute() You can make these update statements as complex as you like. Let's give all our employees a bonus equal to their previous bonus plus 10% of their salary: .. code-block:: pycon >>> query = Employee.update(bonus=(Employee.bonus + (Employee.salary * .1))) >>> query.execute() # Give everyone a bonus! We can even use a subquery to update the value of a column. Suppose we had a denormalized column on the ``User`` model that stored the number of tweets a user had made, and we updated this value periodically. Here is how you might write such a query: .. code-block:: pycon >>> subquery = Tweet.select(fn.COUNT(Tweet.id)).where(Tweet.user == User.id) >>> update = User.update(num_tweets=subquery) >>> update.execute() Deleting records ---------------- To delete a single model instance, you can use the :py:meth:`Model.delete_instance` shortcut. :py:meth:`~Model.delete_instance` will delete the given model instance and can optionally delete any dependent objects recursively (by specifying `recursive=True`). ..
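code-block:: pycon

    >>> # A sketch: delete huey and all of his dependent rows (e.g. his
    >>> # tweets) in one call; returns the number of rows deleted.
    >>> huey = User.get(User.username == 'Huey')
    >>> huey.delete_instance(recursive=True)

Basic usage, along with the error raised when attempting to re-fetch the deleted row: ..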
code-block:: pycon >>> user = User.get(User.id == 1) >>> user.delete_instance() # Returns the number of rows deleted. 1 >>> User.get(User.id == 1) UserDoesNotExist: instance matching query does not exist: SQL: SELECT t1."id", t1."username" FROM "user" AS t1 WHERE t1."id" = ? PARAMS: [1] To delete an arbitrary set of rows, you can issue a *DELETE* query. The following will delete all ``Tweet`` objects that are over one year old: .. code-block:: pycon >>> query = Tweet.delete().where(Tweet.creation_date < one_year_ago) >>> query.execute() # Returns the number of rows deleted. 7 For more information, see the documentation on: * :py:meth:`Model.delete_instance` * :py:meth:`Model.delete` * :py:class:`DeleteQuery` Selecting a single record ------------------------- You can use the :py:meth:`Model.get` method to retrieve a single instance matching the given query. This method is a shortcut that calls :py:meth:`Model.select` with the given query, but limits the result set to a single row. Additionally, if no model matches the given query, a ``DoesNotExist`` exception will be raised. .. code-block:: pycon >>> User.get(User.id == 1) <__main__.User object at 0x25294d0> >>> User.get(User.id == 1).username u'Charlie' >>> User.get(User.username == 'Charlie') <__main__.User object at 0x2529410> >>> User.get(User.username == 'nobody') UserDoesNotExist: instance matching query does not exist: SQL: SELECT t1."id", t1."username" FROM "user" AS t1 WHERE t1."username" = ? PARAMS: ['nobody'] For more advanced operations, you can use :py:meth:`SelectQuery.get`. The following query retrieves the latest tweet from the user named *charlie*: .. code-block:: pycon >>> (Tweet ... .select() ... .join(User) ... .where(User.username == 'charlie') ... .order_by(Tweet.created_date.desc()) ... .get()) <__main__.Tweet object at 0x2623410> For more information, see the documentation on: * :py:meth:`Model.get` * :py:meth:`Model.select` * :py:meth:`SelectQuery.get` Create or get ------------- Peewee has one helper method for performing "get/create" type operations: * :py:meth:`Model.get_or_create`, which first attempts to retrieve the matching row. Failing that, a new row will be created. For "create or get" type logic, typically one would rely on a *unique* constraint or primary key to prevent the creation of duplicate objects. As an example, let's say we wish to implement registering a new user account using the example ``User`` model. The *User* model has a *unique* constraint on the username field, so we will rely on the database's integrity guarantees to ensure we don't end up with duplicate usernames: .. code-block:: python try: with db.atomic(): return User.create(username=username) except peewee.IntegrityError: # `username` is a unique column, so this username already exists, # making it safe to call .get(). return User.get(User.username == username) You can easily encapsulate this type of logic as a ``classmethod`` on your own ``Model`` classes. The above example first attempts creation, then falls back to retrieval, relying on the database to enforce a unique constraint. If you prefer to attempt to retrieve the record first, you can use :py:meth:`~Model.get_or_create`. This method is implemented along the same lines as the Django function of the same name. You can use the Django-style keyword argument filters to specify your ``WHERE`` conditions. The function returns a 2-tuple containing the instance and a boolean value indicating if the object was created.
Here is how you might implement user account creation using :py:meth:`~Model.get_or_create`: .. code-block:: python user, created = User.get_or_create(username=username) Suppose we have a different model ``Person`` and would like to get or create a person object. The only conditions we care about when retrieving the ``Person`` are their first and last names, **but** if we end up needing to create a new record, we will also specify their date-of-birth and favorite color: .. code-block:: python person, created = Person.get_or_create( first_name=first_name, last_name=last_name, defaults={'dob': dob, 'favorite_color': 'green'}) Any keyword argument passed to :py:meth:`~Model.get_or_create` will be used in the ``get()`` portion of the logic, except for the ``defaults`` dictionary, which will be used to populate values on newly-created instances. For more details check out the documentation for :py:meth:`Model.get_or_create`. Selecting multiple records -------------------------- We can use :py:meth:`Model.select` to retrieve rows from the table. When you construct a *SELECT* query, the database will return any rows that correspond to your query. Peewee allows you to iterate over these rows, as well as use indexing and slicing operations. In the following example, we will simply call :py:meth:`~Model.select` and iterate over the return value, which is an instance of :py:class:`SelectQuery`. This will return all the rows in the *User* table: .. code-block:: pycon >>> for user in User.select(): ... print user.username ... Charlie Huey Peewee .. note:: Subsequent iterations of the same query will not hit the database as the results are cached. To disable this behavior (to reduce memory usage), call :py:meth:`SelectQuery.iterator` when iterating. When iterating over a model that contains a foreign key, be careful with the way you access values on related models. Accidentally resolving a foreign key or iterating over a back-reference can cause :ref:`N+1 query behavior `. When you create a foreign key, such as ``Tweet.user``, you can use the *related_name* to create a back-reference (``User.tweets``). Back-references are exposed as :py:class:`SelectQuery` instances: .. code-block:: pycon >>> tweet = Tweet.get() >>> tweet.user # Accessing a foreign key returns the related model. >>> user = User.get() >>> user.tweets # Accessing a back-reference returns a query. SELECT t1."id", t1."user_id", t1."message", t1."created_date", t1."is_published" FROM "tweet" AS t1 WHERE (t1."user_id" = ?) [1] You can iterate over the ``user.tweets`` back-reference just like any other :py:class:`SelectQuery`: .. code-block:: pycon >>> for tweet in user.tweets: ... print tweet.message ... hello world this is fun look at this picture of my food Filtering records ----------------- You can filter for particular records using normal python operators. Peewee supports a wide variety of :ref:`query operators `. .. code-block:: pycon >>> user = User.get(User.username == 'Charlie') >>> for tweet in Tweet.select().where(Tweet.user == user, Tweet.is_published == True): ... print '%s: %s' % (tweet.user.username, tweet.message) ... Charlie: hello world Charlie: this is fun >>> for tweet in Tweet.select().where(Tweet.created_date < datetime.datetime(2011, 1, 1)): ... print tweet.message, tweet.created_date ... Really old tweet 2010-01-01 00:00:00 You can also filter across joins: .. code-block:: pycon >>> for tweet in Tweet.select().join(User).where(User.username == 'Charlie'): ... 
print tweet.message hello world this is fun look at this picture of my food If you want to express a complex query, use parentheses and python's bitwise *or* and *and* operators: .. code-block:: pycon >>> Tweet.select().join(User).where( ... (User.username == 'Charlie') | ... (User.username == 'Peewee Herman') ... ) Check out :ref:`the table of query operations ` to see what types of queries are possible. .. note:: A lot of fun things can go in the where clause of a query, such as: * A field expression, e.g. ``User.username == 'Charlie'`` * A function expression, e.g. ``fn.Lower(fn.Substr(User.username, 1, 1)) == 'a'`` * A comparison of one column to another, e.g. ``Employee.salary < (Employee.tenure * 1000) + 40000`` You can also nest queries, for example tweets by users whose username starts with "a": .. code-block:: python # get users whose username starts with "a" a_users = User.select().where(fn.Lower(fn.Substr(User.username, 1, 1)) == 'a') # the "<<" operator signifies an "IN" query a_user_tweets = Tweet.select().where(Tweet.user << a_users) More query examples ^^^^^^^^^^^^^^^^^^^ Get active users: .. code-block:: python User.select().where(User.active == True) Get users who are either staff or superusers: .. code-block:: python User.select().where( (User.is_staff == True) | (User.is_superuser == True)) Get tweets by user named "charlie": .. code-block:: python Tweet.select().join(User).where(User.username == 'charlie') Get tweets by staff or superusers (assumes FK relationship): .. code-block:: python Tweet.select().join(User).where( (User.is_staff == True) | (User.is_superuser == True)) Get tweets by staff or superusers using a subquery: .. code-block:: python staff_super = User.select(User.id).where( (User.is_staff == True) | (User.is_superuser == True)) Tweet.select().where(Tweet.user << staff_super) Sorting records --------------- To return rows in order, use the :py:meth:`~SelectQuery.order_by` method: .. code-block:: pycon >>> for t in Tweet.select().order_by(Tweet.created_date): ... print t.pub_date ... 2010-01-01 00:00:00 2011-06-07 14:08:48 2011-06-07 14:12:57 >>> for t in Tweet.select().order_by(Tweet.created_date.desc()): ... print t.pub_date ... 2011-06-07 14:12:57 2011-06-07 14:08:48 2010-01-01 00:00:00 You can also use ``+`` and ``-`` prefix operators to indicate ordering: .. code-block:: python # The following queries are equivalent: Tweet.select().order_by(Tweet.created_date.desc()) Tweet.select().order_by(-Tweet.created_date) # Note the "-" prefix. # Similarly you can use "+" to indicate ascending order: User.select().order_by(+User.username) You can also order across joins. Assuming you want to order tweets by the username of the author, then by created_date: .. code-block:: pycon >>> qry = Tweet.select().join(User).order_by(User.username, Tweet.created_date.desc()) .. code-block:: sql SELECT t1."id", t1."user_id", t1."message", t1."is_published", t1."created_date" FROM "tweet" AS t1 INNER JOIN "user" AS t2 ON t1."user_id" = t2."id" ORDER BY t2."username", t1."created_date" DESC When sorting on a calculated value, you can either include the necessary SQL expressions, or reference the alias assigned to the value. Here are two examples illustrating these methods: .. code-block:: python # Let's start with our base query. We want to get all usernames and the number of # tweets they've made. We wish to sort this list from users with most tweets to # users with fewest tweets. 
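# Note that the LEFT OUTER join below is what keeps users with zero tweets
# in the result: COUNT(Tweet.id) yields 0 for them, since COUNT only counts
# non-NULL values.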
query = (User
         .select(User.username, fn.COUNT(Tweet.id).alias('num_tweets'))
         .join(Tweet, JOIN.LEFT_OUTER)
         .group_by(User.username))

You can order using the same ``COUNT`` expression used in the ``select`` clause. In the example below we are ordering by the ``COUNT()`` of tweet ids descending:

.. code-block:: python

    query = (User
             .select(User.username, fn.COUNT(Tweet.id).alias('num_tweets'))
             .join(Tweet, JOIN.LEFT_OUTER)
             .group_by(User.username)
             .order_by(fn.COUNT(Tweet.id).desc()))

Alternatively, you can reference the alias assigned to the calculated value in the ``select`` clause. This method has the benefit of being a bit easier to read. Note that we are not referring to the named alias directly, but are wrapping it using the :py:class:`SQL` helper:

.. code-block:: python

    query = (User
             .select(User.username, fn.COUNT(Tweet.id).alias('num_tweets'))
             .join(Tweet, JOIN.LEFT_OUTER)
             .group_by(User.username)
             .order_by(SQL('num_tweets').desc()))

Getting random records
----------------------

Occasionally you may want to pull a random record from the database. You can accomplish this by ordering by the *random* or *rand* function (depending on your database):

Postgresql and Sqlite use the *Random* function:

.. code-block:: python

    # Pick 5 lucky winners:
    LotteryNumber.select().order_by(fn.Random()).limit(5)

MySQL uses *Rand*:

.. code-block:: python

    # Pick 5 lucky winners:
    LotteryNumber.select().order_by(fn.Rand()).limit(5)

Paginating records
------------------

The :py:meth:`~SelectQuery.paginate` method makes it easy to grab a *page* of records. :py:meth:`~SelectQuery.paginate` takes two parameters, ``page_number`` and ``items_per_page``.

.. attention::
    Page numbers are 1-based, so the first page of results will be page 1.

.. code-block:: pycon

    >>> for tweet in Tweet.select().order_by(Tweet.id).paginate(2, 10):
    ...     print tweet.message
    ...
    tweet 10
    tweet 11
    tweet 12
    tweet 13
    tweet 14
    tweet 15
    tweet 16
    tweet 17
    tweet 18
    tweet 19

If you would like more granular control, you can always use :py:meth:`~SelectQuery.limit` and :py:meth:`~SelectQuery.offset`.

Counting records
----------------

You can count the number of rows in any select query:

.. code-block:: pycon

    >>> Tweet.select().count()
    100
    >>> Tweet.select().where(Tweet.id > 50).count()
    50

In some cases it may be necessary to wrap your query and apply a count to the rows of the inner query (such as when using *DISTINCT* or *GROUP BY*). Peewee will usually do this automatically, but in some cases you may need to manually call :py:meth:`~SelectQuery.wrapped_count` instead.

Aggregating records
-------------------

Suppose you have some users and want to get a list of them along with the count of tweets in each. The :py:meth:`~SelectQuery.annotate` method provides a shorthand for creating these types of queries:

.. code-block:: python

    query = User.select().annotate(Tweet)

The above query is equivalent to:

.. code-block:: python

    query = (User
             .select(User, fn.Count(Tweet.id).alias('count'))
             .join(Tweet)
             .group_by(User))

The resulting query will return *User* objects with all their normal attributes plus an additional attribute *count* which will contain the count of tweets for each user. By default it uses an inner join if the foreign key is not nullable, which means users without tweets won't appear in the list. To remedy this, manually specify the type of join to include users with 0 tweets:
.. code-block:: python

    query = (User
             .select()
             .join(Tweet, JOIN.LEFT_OUTER)
             .switch(User)
             .annotate(Tweet))

You can also specify a custom aggregator, such as *MIN* or *MAX*:

.. code-block:: python

    query = (User
             .select()
             .annotate(
                 Tweet,
                 fn.Max(Tweet.created_date).alias('latest_tweet_date')))

Let's assume you have a tagging application and want to find tags that have a certain number of related objects. For this example we'll use some different models in a :ref:`many-to-many <manytomany>` configuration:

.. code-block:: python

    class Photo(Model):
        image = CharField()

    class Tag(Model):
        name = CharField()

    class PhotoTag(Model):
        photo = ForeignKeyField(Photo)
        tag = ForeignKeyField(Tag)

Now say we want to find tags that have at least 5 photos associated with them:

.. code-block:: python

    query = (Tag
             .select()
             .join(PhotoTag)
             .join(Photo)
             .group_by(Tag)
             .having(fn.Count(Photo.id) > 5))

This query is equivalent to the following SQL:

.. code-block:: sql

    SELECT t1."id", t1."name"
    FROM "tag" AS t1
    INNER JOIN "phototag" AS t2 ON t1."id" = t2."tag_id"
    INNER JOIN "photo" AS t3 ON t2."photo_id" = t3."id"
    GROUP BY t1."id", t1."name"
    HAVING Count(t3."id") > 5

Suppose we want to grab the associated count and store it on the tag:

.. code-block:: python

    query = (Tag
             .select(Tag, fn.Count(Photo.id).alias('count'))
             .join(PhotoTag)
             .join(Photo)
             .group_by(Tag)
             .having(fn.Count(Photo.id) > 5))

Retrieving Scalar Values
------------------------

You can retrieve scalar values by calling :py:meth:`Query.scalar`. For instance:

.. code-block:: pycon

    >>> PageView.select(fn.Count(fn.Distinct(PageView.url))).scalar()
    100

You can retrieve multiple scalar values by passing ``as_tuple=True``:

.. code-block:: pycon

    >>> Employee.select(
    ...     fn.Min(Employee.salary), fn.Max(Employee.salary)
    ... ).scalar(as_tuple=True)
    (30000, 50000)

SQL Functions, Subqueries and "Raw expressions"
-----------------------------------------------

Suppose you want to get a list of all users whose username begins with *a*. There are a couple of ways to do this, but one method might be to use some SQL functions like *LOWER* and *SUBSTR*. To use arbitrary SQL functions, use the special :py:func:`fn` object to construct queries:

.. code-block:: python

    # Select the user's id, username and the first letter of their username, lower-cased
    query = User.select(User, fn.Lower(fn.Substr(User.username, 1, 1)).alias('first_letter'))

    # Alternatively we could select only users whose username begins with 'a'
    a_users = User.select().where(fn.Lower(fn.Substr(User.username, 1, 1)) == 'a')

.. code-block:: pycon

    >>> for user in a_users:
    ...     print user.username

There are times when you may want to simply pass in some arbitrary sql. You can do this using the special :py:class:`SQL` class. One use-case is when referencing an alias:

.. code-block:: python

    # We'll query the user table and annotate it with a count of tweets for
    # the given user
    query = User.select(User, fn.Count(Tweet.id).alias('ct')).join(Tweet).group_by(User)

    # Now we will order by the count, which was aliased to "ct"
    query = query.order_by(SQL('ct'))

There are two ways to execute hand-crafted SQL statements with peewee:

1. :py:meth:`Database.execute_sql` for executing any type of query
2. :py:class:`RawQuery` for executing ``SELECT`` queries and *returning model instances*.

Example:

.. code-block:: python

    db = SqliteDatabase(':memory:')

    class Person(Model):
        name = CharField()

        class Meta:
            database = db

    # let's pretend we want to do an "upsert", something that SQLite can
    # do, but peewee cannot.
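    # (REPLACE INTO resolves the conflict via a UNIQUE constraint or primary
    # key; this example presumes `name` is unique -- without a unique index,
    # each REPLACE would simply insert another row.)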
for name in ('charlie', 'mickey', 'huey'): db.execute_sql('REPLACE INTO person (name) VALUES (?)', (name,)) # now let's iterate over the people using our own query. for person in Person.raw('select * from person'): print person.name # .raw() will return model instances. Security and SQL Injection -------------------------- By default peewee will parameterize queries, so any parameters passed in by the user will be escaped. The only exception to this rule is if you are writing a raw SQL query or are passing in a ``SQL`` object which may contain untrusted data. To mitigate this, ensure that any user-defined data is passed in as a query parameter and not part of the actual SQL query: .. code-block:: python # Bad! query = MyModel.raw('SELECT * FROM my_table WHERE data = %s' % (user_data,)) # Good. `user_data` will be treated as a parameter to the query. query = MyModel.raw('SELECT * FROM my_table WHERE data = %s', user_data) # Bad! query = MyModel.select().where(SQL('Some SQL expression %s' % user_data)) # Good. `user_data` will be treated as a parameter. query = MyModel.select().where(SQL('Some SQL expression %s', user_data)) .. note:: MySQL and Postgresql use ``'%s'`` to denote parameters. SQLite, on the other hand, uses ``'?'``. Be sure to use the character appropriate to your database. You can also find this parameter by checking :py:attr:`Database.interpolation`. .. _window-functions: Window functions ---------------- peewee comes with basic support for SQL window functions, which can be created by calling :py:meth:`fn.over` and passing in your partitioning or ordering parameters. .. code-block:: python # Get the list of employees and the average salary for their dept. query = (Employee .select( Employee.name, Employee.department, Employee.salary, fn.Avg(Employee.salary).over( partition_by=[Employee.department])) .order_by(Employee.name)) # Rank employees by salary. query = (Employee .select( Employee.name, Employee.salary, fn.rank().over( order_by=[Employee.salary]))) For general information on window functions, check out the `postgresql docs `_. Retrieving raw tuples / dictionaries ------------------------------------ Sometimes you do not need the overhead of creating model instances and simply want to iterate over the row tuples. To do this, call :py:meth:`SelectQuery.tuples` or :py:meth:`RawQuery.tuples`: .. code-block:: python stats = Stat.select(Stat.url, fn.Count(Stat.url)).group_by(Stat.url).tuples() # iterate over a list of 2-tuples containing the url and count for stat_url, stat_count in stats: print stat_url, stat_count Similarly, you can return the rows from the cursor as dictionaries using :py:meth:`SelectQuery.dicts` or :py:meth:`RawQuery.dicts`: .. code-block:: python stats = Stat.select(Stat.url, fn.Count(Stat.url).alias('ct')).group_by(Stat.url).dicts() # iterate over a list of 2-tuples containing the url and count for stat in stats: print stat['url'], stat['ct'] .. _returning-clause: Returning Clause ---------------- :py:class:`PostgresqlDatabase` supports a ``RETURNING`` clause on ``UPDATE``, ``INSERT`` and ``DELETE`` queries. Specifying a ``RETURNING`` clause allows you to iterate over the rows accessed by the query. For example, let's say you have an :py:class:`UpdateQuery` that deactivates all user accounts whose registration has expired. After deactivating them, you want to send each user an email letting them know their account was deactivated. 
Rather than writing two queries, a ``SELECT`` and an ``UPDATE``, you can do this in a single ``UPDATE`` query with a ``RETURNING`` clause:

.. code-block:: python

    query = (User
             .update(is_active=False)
             .where(User.registration_expired == True)
             .returning(User))

    # Send an email to every user that was deactivated.
    for deactivated_user in query.execute():
        send_deactivation_email(deactivated_user)

The ``RETURNING`` clause is also available on :py:class:`InsertQuery` and :py:class:`DeleteQuery`. When used with ``INSERT``, the newly-created rows will be returned. When used with ``DELETE``, the deleted rows will be returned.

The only limitation of the ``RETURNING`` clause is that it can only consist of columns from tables listed in the query's ``FROM`` clause. To select all columns from a particular table, you can simply pass in the :py:class:`Model` class.

For more information, see:

* :py:meth:`UpdateQuery.returning`
* :py:meth:`InsertQuery.returning`
* :py:meth:`DeleteQuery.returning`

.. _query-operators:

Query operators
===============

The following types of comparisons are supported by peewee:

================ =======================================
Comparison       Meaning
================ =======================================
``==``           x equals y
``<``            x is less than y
``<=``           x is less than or equal to y
``>``            x is greater than y
``>=``           x is greater than or equal to y
``!=``           x is not equal to y
``<<``           x IN y, where y is a list or query
``>>``           x IS y, where y is None/NULL
``%``            x LIKE y where y may contain wildcards
``**``           x ILIKE y where y may contain wildcards
``~``            Negation
================ =======================================

Because I ran out of operators to override, there are some additional query operations available as methods:

======================= ===============================================
Method                  Meaning
======================= ===============================================
``.contains(substr)``   Wild-card search for substring.
``.startswith(prefix)`` Search for values beginning with ``prefix``.
``.endswith(suffix)``   Search for values ending with ``suffix``.
``.between(low, high)`` Search for values between ``low`` and ``high``.
``.regexp(exp)``        Regular expression match.
``.bin_and(value)``     Binary AND.
``.bin_or(value)``      Binary OR.
``.in_(value)``         IN lookup (identical to ``<<``).
``.not_in(value)``      NOT IN lookup.
``.is_null(is_null)``   IS NULL or IS NOT NULL. Accepts boolean param.
``.concat(other)``      Concatenate two strings using ``||``.
======================= ===============================================

To combine clauses using logical operators, use:

================ ==================== ======================================================
Operator         Meaning              Example
================ ==================== ======================================================
``&``            AND                  ``(User.is_active == True) & (User.is_admin == True)``
``|`` (pipe)     OR                   ``(User.is_admin) | (User.is_superuser)``
``~``            NOT (unary negation) ``~(User.username << ['foo', 'bar', 'baz'])``
================ ==================== ======================================================

Here is how you might use some of these query operators:

.. code-block:: python

    # Find the user whose username is "charlie".
    User.select().where(User.username == 'charlie')

    # Find the users whose username is in [charlie, huey, mickey]
    User.select().where(User.username << ['charlie', 'huey', 'mickey'])

    Employee.select().where(Employee.salary.between(50000, 60000))

    Employee.select().where(Employee.name.startswith('C'))

    Blog.select().where(Blog.title.contains(search_string))

Here is how you might combine expressions. Comparisons can be arbitrarily complex.

.. note::
    Note that the actual comparisons are wrapped in parentheses: Python's operator precedence necessitates this.

.. code-block:: python

    # Find any users who are active administrators.
    User.select().where(
        (User.is_admin == True) &
        (User.is_active == True))

    # Find any users who are either administrators or super-users.
    User.select().where(
        (User.is_admin == True) |
        (User.is_superuser == True))

    # Find any Tweets by users who are not admins (NOT IN).
    admins = User.select().where(User.is_admin == True)
    non_admin_tweets = Tweet.select().where(
        ~(Tweet.user << admins))

    # Find any users who are not my friends (strangers).
    friends = User.select().where(
        User.username << ['charlie', 'huey', 'mickey'])
    strangers = User.select().where(~(User.id << friends))

.. warning::
    Although you may be tempted to use python's ``in``, ``and``, ``or`` and ``not`` operators in your query expressions, these **will not work.** The return value of an ``in`` expression is always coerced to a boolean value. Similarly, ``and``, ``or`` and ``not`` all treat their arguments as boolean values and cannot be overloaded.

    So just remember:

    * Use ``<<`` instead of ``in``
    * Use ``&`` instead of ``and``
    * Use ``|`` instead of ``or``
    * Use ``~`` instead of ``not``
    * Don't forget to wrap your comparisons in parentheses when using logical operators.

For more examples, see the :ref:`expressions` section.

.. note::
    **LIKE and ILIKE with SQLite**

    Because SQLite's ``LIKE`` operation is case-insensitive by default, peewee will use the SQLite ``GLOB`` operation for case-sensitive searches. The glob operation uses asterisks for wildcards as opposed to the usual percent-sign. If you are using SQLite and want case-sensitive partial string matching, remember to use asterisks for the wildcard.

Three valued logic
------------------

Because of the way SQL handles ``NULL``, there are some special operations available for expressing:

* ``IS NULL``
* ``IS NOT NULL``
* ``IN``
* ``NOT IN``

While it would be possible to use the ``IS NULL`` and ``IN`` operators with the negation operator (``~``), sometimes to get the correct semantics you will need to explicitly use ``IS NOT NULL`` and ``NOT IN``.

The simplest way to use ``IS NULL`` and ``IN`` is to use the operator overloads:

.. code-block:: python

    # Get all User objects whose last login is NULL.
    User.select().where(User.last_login >> None)

    # Get users whose username is in the given list.
    usernames = ['charlie', 'huey', 'mickey']
    User.select().where(User.username << usernames)

If you don't like operator overloads, you can call the Field methods instead:

.. code-block:: python

    # Get all User objects whose last login is NULL.
    User.select().where(User.last_login.is_null(True))

    # Get users whose username is in the given list.
    usernames = ['charlie', 'huey', 'mickey']
    User.select().where(User.username.in_(usernames))

To negate the above queries, you can use unary negation, but for the correct semantics you may need to use the special ``IS NOT`` and ``NOT IN`` operators:
code-block:: python # Get all User objects whose last login is *NOT* NULL. User.select().where(User.last_login.is_null(False)) # Using unary negation instead. User.select().where(~(User.last_login >> None)) # Get users whose username is *NOT* in the given list. usernames = ['charlie', 'huey', 'mickey'] User.select().where(User.username.not_in(usernames)) # Using unary negation instead. usernames = ['charlie', 'huey', 'mickey'] User.select().where(~(User.username << usernames)) .. _custom-operators: Adding user-defined operators ----------------------------- Because I ran out of python operators to overload, there are some missing operators in peewee, for instance `modulo `_. If you find that you need to support an operator that is not in the table above, it is very easy to add your own. Here is how you might add support for ``modulo`` in SQLite: .. code-block:: python from peewee import * from peewee import Expression # the building block for expressions OP['MOD'] = 'mod' def mod(lhs, rhs): return Expression(lhs, OP.MOD, rhs) SqliteDatabase.register_ops({OP.MOD: '%'}) Now you can use these custom operators to build richer queries: .. code-block:: python # Users with even ids. User.select().where(mod(User.id, 2) == 0) For more examples check out the source to the ``playhouse.postgresql_ext`` module, as it contains numerous operators specific to postgresql's hstore. .. _expressions: Expressions ----------- Peewee is designed to provide a simple, expressive, and pythonic way of constructing SQL queries. This section will provide a quick overview of some common types of expressions. There are two primary types of objects that can be composed to create expressions: * :py:class:`Field` instances * SQL aggregations and functions using :py:class:`fn` We will assume a simple "User" model with fields for username and other things. It looks like this: .. code-block:: python class User(Model): username = CharField() is_admin = BooleanField() is_active = BooleanField() last_login = DateTimeField() login_count = IntegerField() failed_logins = IntegerField() Comparisons use the :ref:`query-operators`: .. code-block:: python # username is equal to 'charlie' User.username == 'charlie' # user has logged in less than 5 times User.login_count < 5 Comparisons can be combined using bitwise *and* and *or*. Operator precedence is controlled by python and comparisons can be nested to an arbitrary depth: .. code-block:: python # User is both and admin and has logged in today (User.is_admin == True) & (User.last_login >= today) # User's username is either charlie or charles (User.username == 'charlie') | (User.username == 'charles') Comparisons can be used with functions as well: .. code-block:: python # user's username starts with a 'g' or a 'G': fn.Lower(fn.Substr(User.username, 1, 1)) == 'g' We can do some fairly interesting things, as expressions can be compared against other expressions. Expressions also support arithmetic operations: .. code-block:: python # users who entered the incorrect more than half the time and have logged # in at least 10 times (User.failed_logins > (User.login_count * .5)) & (User.login_count > 10) Expressions allow us to do atomic updates: .. code-block:: python # when a user logs in we want to increment their login count: User.update(login_count=User.login_count + 1).where(User.id == user_id) Expressions can be used in all parts of a query, so experiment! Foreign Keys ============ Foreign keys are created using a special field class :py:class:`ForeignKeyField`. 
Each foreign key also creates a back-reference on the related model using the specified *related_name*. Traversing foreign keys ----------------------- Referring back to the :ref:`User and Tweet models `, note that there is a :py:class:`ForeignKeyField` from *Tweet* to *User*. The foreign key can be traversed, allowing you access to the associated user instance: .. code-block:: pycon >>> tweet.user.username 'charlie' .. note:: Unless the *User* model was explicitly selected when retrieving the *Tweet*, an additional query will be required to load the *User* data. To learn how to avoid the extra query, see the :ref:`N+1 query documentation `. The reverse is also true, and we can iterate over the tweets associated with a given *User* instance: .. code-block:: python >>> for tweet in user.tweets: ... print tweet.message ... http://www.youtube.com/watch?v=xdhLQCYQ-nQ Under the hood, the *tweets* attribute is just a :py:class:`SelectQuery` with the *WHERE* clause pre-populated to point to the given *User* instance: .. code-block:: python >>> user.tweets SELECT t1."id", t1."user_id", t1."message", ... Joining tables -------------- Use the :py:meth:`~Query.join` method to *JOIN* additional tables. When a foreign key exists between the source model and the join model, you do not need to specify any additional parameters: .. code-block:: pycon >>> my_tweets = Tweet.select().join(User).where(User.username == 'charlie') By default peewee will use an *INNER* join, but you can use *LEFT OUTER*, *RIGHT OUTER*, *FULL*, or *CROSS* joins as well: .. code-block:: python users = (User .select(User, fn.Count(Tweet.id).alias('num_tweets')) .join(Tweet, JOIN.LEFT_OUTER) .group_by(User) .order_by(fn.Count(Tweet.id).desc())) for user in users: print user.username, 'has created', user.num_tweets, 'tweet(s).' Multiple Foreign Keys to the Same Model ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ When there are multiple foreign keys to the same model, it is good practice to explicitly specify which field you are joining on. Referring back to the :ref:`example app's models `, consider the *Relationship* model, which is used to denote when one user follows another. Here is the model definition: .. code-block:: python class Relationship(BaseModel): from_user = ForeignKeyField(User, related_name='relationships') to_user = ForeignKeyField(User, related_name='related_to') class Meta: indexes = ( # Specify a unique multi-column index on from/to-user. (('from_user', 'to_user'), True), ) Since there are two foreign keys to *User*, we should always specify which field we are using in a join. For example, to determine which users I am following, I would write: .. code-block:: python (User .select() .join(Relationship, on=Relationship.to_user) .where(Relationship.from_user == charlie)) On the other hand, if I wanted to determine which users are following me, I would instead join on the *from_user* column and filter on the relationship's *to_user*: .. code-block:: python (User .select() .join(Relationship, on=Relationship.from_user) .where(Relationship.to_user == charlie)) Joining on arbitrary fields ^^^^^^^^^^^^^^^^^^^^^^^^^^^ If a foreign key does not exist between two tables you can still perform a join, but you must manually specify the join predicate. In the following example, there is no explicit foreign-key between *User* and *ActivityLog*, but there is an implied relationship between the *ActivityLog.object_id* field and *User.id*. Rather than joining on a specific :py:class:`Field`, we will join using an :py:class:`Expression`. 
.. code-block:: python

    user_log = (User
                .select(User, ActivityLog)
                .join(
                    ActivityLog,
                    on=(User.id == ActivityLog.object_id).alias('log'))
                .where(
                    (ActivityLog.activity_type == 'user_activity') &
                    (User.username == 'charlie')))

    for user in user_log:
        print user.username, user.log.description

    #### Print something like ####
    charlie logged in
    charlie posted a tweet
    charlie retweeted
    charlie posted a tweet
    charlie logged out

.. note::
    By specifying an alias on the join condition, you can control the attribute peewee will assign the joined instance to. In the previous example, we used the following *join*:

    .. code-block:: python

        (User.id == ActivityLog.object_id).alias('log')

    Then when iterating over the query, we were able to directly access the joined *ActivityLog* without incurring an additional query:

    .. code-block:: python

        for user in user_log:
            print user.username, user.log.description

Joining on Multiple Tables
^^^^^^^^^^^^^^^^^^^^^^^^^^

When calling :py:meth:`~Query.join`, peewee will use the *last joined table* as the source table. For example:

.. code-block:: python

    User.select().join(Tweet).join(Comment)

This query will result in a join from *User* to *Tweet*, and another join from *Tweet* to *Comment*.

If you would like to join the same table twice, use the :py:meth:`~Query.switch` method:

.. code-block:: python

    # Join the Artist table on both `Album` and `Genre`.
    Artist.select().join(Album).switch(Artist).join(Genre)

.. _manytomany:

Implementing Many to Many
-------------------------

Peewee does not provide a *field* for many to many relationships the way that django does -- this is because the field really is hiding an intermediary table. To implement many-to-many with peewee, you will therefore create the intermediary table yourself and query through it:

.. code-block:: python

    class Student(Model):
        name = CharField()

    class Course(Model):
        name = CharField()

    class StudentCourse(Model):
        student = ForeignKeyField(Student)
        course = ForeignKeyField(Course)

To query, let's say we want to find students who are enrolled in math class:

.. code-block:: python

    query = (Student
             .select()
             .join(StudentCourse)
             .join(Course)
             .where(Course.name == 'math'))

    for student in query:
        print student.name

To query what classes a given student is enrolled in:

.. code-block:: python

    courses = (Course
               .select()
               .join(StudentCourse)
               .join(Student)
               .where(Student.name == 'da vinci'))

    for course in courses:
        print course.name

To efficiently iterate over a many-to-many relation, i.e., list all students and their respective courses, we will query the *through* model ``StudentCourse`` and *precompute* the Student and Course:

.. code-block:: python

    query = (StudentCourse
             .select(StudentCourse, Student, Course)
             .join(Course)
             .switch(StudentCourse)
             .join(Student)
             .order_by(Student.name))

To print a list of students and their courses you might do the following:

.. code-block:: python

    last = None
    for student_course in query:
        student = student_course.student
        if student != last:
            last = student
            print 'Student: %s' % student.name
        print '    - %s' % student_course.course.name

Since we selected all fields from ``Student`` and ``Course`` in the *select* clause of the query, these foreign key traversals are "free" and we've done the whole iteration with just 1 query.

ManyToManyField
^^^^^^^^^^^^^^^

The :py:class:`ManyToManyField` provides a *field-like* API over many-to-many fields. For all but the simplest many-to-many situations, you're better off using the standard peewee APIs.
But, if your models are very simple and your querying needs are not very complex, you can get a big boost by using :py:class:`ManyToManyField`. Check out the :ref:`extra-fields` extension module for details. Modeling students and courses using :py:class:`ManyToManyField`: .. code-block:: python from peewee import * from playhouse.fields import ManyToManyField db = SqliteDatabase('school.db') class BaseModel(Model): class Meta: database = db class Student(BaseModel): name = CharField() class Course(BaseModel): name = CharField() students = ManyToManyField(Student, related_name='courses') StudentCourse = Course.students.get_through_model() db.create_tables([ Student, Course, StudentCourse]) # Get all classes that "huey" is enrolled in: huey = Student.get(Student.name == 'Huey') for course in huey.courses.order_by(Course.name): print course.name # Get all students in "English 101": engl_101 = Course.get(Course.name == 'English 101') for student in engl_101.students: print student.name # When adding objects to a many-to-many relationship, we can pass # in either a single model instance, a list of models, or even a # query of models: huey.courses.add(Course.select().where(Course.name.contains('English'))) engl_101.students.add(Student.get(Student.name == 'Mickey')) engl_101.students.add([ Student.get(Student.name == 'Charlie'), Student.get(Student.name == 'Zaizee')]) # The same rules apply for removing items from a many-to-many: huey.courses.remove(Course.select().where(Course.name.startswith('CS'))) engl_101.students.remove(huey) # Calling .clear() will remove all associated objects: cs_150.students.clear() For more examples, see: * :py:meth:`ManyToManyField.add` * :py:meth:`ManyToManyField.remove` * :py:meth:`ManyToManyField.clear` * :py:meth:`ManyToManyField.get_through_model` Self-joins ---------- Peewee supports several methods for constructing queries containing a self-join. Using model aliases ^^^^^^^^^^^^^^^^^^^ To join on the same model (table) twice, it is necessary to create a model alias to represent the second instance of the table in a query. Consider the following model: .. code-block:: python class Category(Model): name = CharField() parent = ForeignKeyField('self', related_name='children') What if we wanted to query all categories whose parent category is *Electronics*. One way would be to perform a self-join: .. code-block:: python Parent = Category.alias() query = (Category .select() .join(Parent, on=(Category.parent == Parent.id)) .where(Parent.name == 'Electronics')) When performing a join that uses a :py:class:`ModelAlias`, it is necessary to specify the join condition using the ``on`` keyword argument. In this case we are joining the category with its parent category. Using subqueries ^^^^^^^^^^^^^^^^ Another less common approach involves the use of subqueries. Here is another way we might construct a query to get all the categories whose parent category is *Electronics* using a subquery: .. code-block:: python join_query = Category.select().where(Category.name == 'Electronics') # Subqueries used as JOINs need to have an alias. join_query = join_query.alias('jq') query = (Category .select() .join(join_query, on=(Category.parent == join_query.c.id))) This will generate the following SQL query: .. code-block:: sql SELECT t1."id", t1."name", t1."parent_id" FROM "category" AS t1 INNER JOIN ( SELECT t3."id" FROM "category" AS t3 WHERE (t3."name" = ?) 
) AS jq ON (t1."parent_id" = "jq"."id")

To access the ``id`` value from the subquery, we use the ``.c`` magic lookup which will generate the appropriate SQL expression:

.. code-block:: python

    Category.parent == join_query.c.id
    # Becomes: (t1."parent_id" = "jq"."id")

Performance Techniques
======================

This section outlines some techniques for improving performance when using peewee.

.. _nplusone:

Avoiding N+1 queries
--------------------

The term *N+1 queries* refers to a situation where an application performs a query, then for each row of the result set, the application performs at least one other query (another way to conceptualize this is as a nested loop). In many cases, these *n* queries can be avoided through the use of a SQL join or subquery. The database itself may do a nested loop, but it will usually be more performant than doing *n* queries in your application code, which involves latency communicating with the database and may not take advantage of indices or other optimizations employed by the database when joining or executing a subquery.

Peewee provides several APIs for mitigating *N+1* query behavior. Recalling the models used throughout this document, *User* and *Tweet*, this section will try to outline some common *N+1* scenarios, and how peewee can help you avoid them.

.. note::
    In some cases, N+1 queries will not result in a significant or measurable performance hit. It all depends on the data you are querying, the database you are using, and the latency involved in executing queries and retrieving results. As always when making optimizations, profile before and after to ensure the changes do what you expect them to.

List recent tweets
^^^^^^^^^^^^^^^^^^

The twitter timeline displays a list of tweets from multiple users. In addition to the tweet's content, the username of the tweet's author is also displayed. The N+1 scenario here would be:

1. Fetch the 10 most recent tweets.
2. For each tweet, select the author (10 queries).

By selecting both tables and using a *join*, peewee makes it possible to accomplish this in a single query:

.. code-block:: python

    query = (Tweet
             .select(Tweet, User)  # Note that we are selecting both models.
             .join(User)  # Use an INNER join because every tweet has an author.
             .order_by(Tweet.id.desc())  # Get the most recent tweets.
             .limit(10))

    for tweet in query:
        print tweet.user.username, '-', tweet.message

Without the join, accessing ``tweet.user.username`` would trigger a query to resolve the foreign key ``tweet.user`` and retrieve the associated user. But since we have selected and joined on ``User``, peewee will automatically resolve the foreign-key for us.

List users and all their tweets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Let's say you want to build a page that shows several users and all of their tweets. The N+1 scenario would be:

1. Fetch some users.
2. For each user, fetch their tweets.

This situation is similar to the previous example, but there is one important difference: when we selected tweets, they only have a single associated user, so we could directly assign the foreign key. The reverse is not true, however, as one user may have any number of tweets (or none at all).

Peewee provides two approaches to avoiding *O(n)* queries in this situation. We can either:

* Fetch users first, then fetch all the tweets associated with those users. Once peewee has the big list of tweets, it will assign them out, matching them with the appropriate user. This method is usually faster but will involve a query for each table being selected.
* Fetch both users and tweets in a single query. User data will be duplicated, so peewee will de-dupe it and aggregate the tweets as it iterates through the result set. This method involves a lot of data being transferred over the wire and a lot of logic in Python to de-duplicate rows.

Each solution has its place and, depending on the size and shape of the data you are querying, one may be more performant than the other.

.. _prefetch:

Using prefetch
^^^^^^^^^^^^^^

peewee supports pre-fetching related data using sub-queries. This method requires the use of a special API, :py:func:`prefetch`. Pre-fetch, as its name indicates, will eagerly load the appropriate tweets for the given users using subqueries. This means instead of *O(n)* queries for *n* rows, we will do *O(k)* queries for *k* tables.

Here is an example of how we might fetch several users and any tweets they created within the past week.

.. code-block:: python

    week_ago = datetime.date.today() - datetime.timedelta(days=7)
    users = User.select()
    tweets = (Tweet
              .select()
              .where(
                  (Tweet.is_published == True) &
                  (Tweet.created_date >= week_ago)))

    # This will perform two queries.
    users_with_tweets = prefetch(users, tweets)

    for user in users_with_tweets:
        print user.username
        for tweet in user.tweets_prefetch:
            print '  ', tweet.message

.. note::
    Note that neither the ``User`` query, nor the ``Tweet`` query contained a JOIN clause. When using :py:func:`prefetch` you do not need to specify the join.

:py:func:`prefetch` can be used to query an arbitrary number of tables. Check the API documentation for more examples.

Some things to consider when using :py:func:`prefetch`:

* Foreign keys must exist between the models being prefetched.
* In general it is more performant than :py:meth:`~SelectQuery.aggregate_rows`.
* Typically a lot less data is transferred over the wire since data is not duplicated.
* There is less Python overhead since we don't have to de-dupe things.
* ``LIMIT`` works as you'd expect on the outer-most query, but may be difficult to implement correctly if trying to limit the size of the sub-selects.

.. _aggregate-rows:

Using aggregate_rows
^^^^^^^^^^^^^^^^^^^^

The :py:meth:`~SelectQuery.aggregate_rows` approach selects all data in one go and de-dupes things in-memory. Like :py:func:`prefetch`, it can work with arbitrarily complex queries. To use this feature, we will use a special flag, :py:meth:`~SelectQuery.aggregate_rows`, when creating our query. This method tells peewee to de-duplicate any rows that, due to the structure of the JOINs, may be duplicated.

.. warning::
    Because there is a lot of computation involved in de-duping data, it is possible that for some queries :py:meth:`~SelectQuery.aggregate_rows` will be **significantly less performant** than using :py:func:`prefetch` (described in the previous section) or even issuing *O(n)* simple queries! Profile your code if you're not sure.

.. code-block:: python

    query = (User
             .select(User, Tweet)  # As in the previous example, we select both tables.
             .join(Tweet, JOIN.LEFT_OUTER)
             .order_by(User.username)  # We need to specify an ordering here.
             .aggregate_rows())  # Tell peewee to de-dupe and aggregate results.

    for user in query:
        print user.username
        for tweet in user.tweets:
            print '  ', tweet.message

Ordinarily, ``user.tweets`` would be a :py:class:`SelectQuery` and iterating over it would trigger an additional query. By using :py:meth:`~SelectQuery.aggregate_rows`, though, ``user.tweets`` is a Python ``list`` and no additional query occurs.
.. note::
    We used a *LEFT OUTER* join to ensure that users with zero tweets would also be included in the result set.

Below is an example of how we might fetch several users and any tweets they created within the past week. Because we are filtering the tweets and the user may not have any tweets, we need our *WHERE* clause to allow *NULL* tweet IDs.

.. code-block:: python

    week_ago = datetime.date.today() - datetime.timedelta(days=7)
    query = (User
             .select(User, Tweet)
             .join(Tweet, JOIN.LEFT_OUTER)
             .where(
                 (Tweet.id >> None) | (
                     (Tweet.is_published == True) &
                     (Tweet.created_date >= week_ago)))
             .order_by(User.username, Tweet.created_date.desc())
             .aggregate_rows())

    for user in query:
        print user.username
        for tweet in user.tweets:
            print '  ', tweet.message

Some things to consider when using :py:meth:`~SelectQuery.aggregate_rows`:

* You must specify an ordering for each table that is joined on so the rows can be aggregated correctly, sort of similar to `itertools.groupby <https://docs.python.org/3/library/itertools.html#itertools.groupby>`_.
* Do not mix calls to :py:meth:`~SelectQuery.aggregate_rows` with ``LIMIT`` or ``OFFSET`` clauses, or with :py:meth:`~SelectQuery.get` (which applies a ``LIMIT 1`` SQL clause). Since the aggregate result set may contain more than one item due to rows being duplicated, limits can lead to incorrect behavior. Imagine you have three users, each of whom has 10 tweets. If you run a query with a ``LIMIT 5``, then you will only receive the first user and their first 5 tweets.
* In general the Python overhead of de-duplicating data can make this method less performant than :py:func:`prefetch`, and sometimes even less performant than simply issuing *O(n)* simple queries! When in doubt, profile.
* Because every column from every table is included in each row tuple returned by the cursor, this approach can use a lot more bandwidth than :py:func:`prefetch`.

Iterating over lots of rows
---------------------------

By default peewee will cache the rows returned when iterating over a :py:class:`SelectQuery`. This is an optimization to allow multiple iterations as well as indexing and slicing without causing additional queries. This caching can be problematic, however, when you plan to iterate over a large number of rows.

To reduce the amount of memory used by peewee when iterating over a query, use the :py:meth:`~SelectQuery.iterator` method. This method allows you to iterate without caching each model returned, using much less memory when iterating over large result sets.

.. code-block:: python

    # Let's assume we've got 10 million stat objects to dump to a csv file.
    stats = Stat.select()

    # Our imaginary serializer class
    serializer = CSVSerializer()

    # Loop over all the stats and serialize.
    for stat in stats.iterator():
        serializer.serialize_object(stat)

For simple queries you can see further speed improvements by using the :py:meth:`~SelectQuery.naive` method. This method speeds up the construction of peewee model instances from raw cursor data. See the :py:meth:`~SelectQuery.naive` documentation for more details on this optimization.

.. code-block:: python

    for stat in stats.naive().iterator():
        serializer.serialize_object(stat)

You can also see performance improvements by using the :py:meth:`~SelectQuery.dicts` and :py:meth:`~SelectQuery.tuples` methods. When iterating over a large number of rows that contain columns from multiple tables, peewee will reconstruct the model graph for each row returned. This operation can be slow for complex graphs.
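For instance, here is a minimal sketch (reusing the ``Stat`` table from the example above, with the standard-library ``csv`` module standing in for the imaginary serializer) that combines :py:meth:`~SelectQuery.tuples` with :py:meth:`~SelectQuery.iterator` so that neither model instances nor result rows are cached:

.. code-block:: python

    import csv

    # Stream url/count pairs straight from the cursor into a csv file.
    query = (Stat
             .select(Stat.url, fn.Count(Stat.url))
             .group_by(Stat.url)
             .tuples())

    with open('stats.csv', 'w') as fh:
        writer = csv.writer(fh)
        for url, count in query.iterator():
            writer.writerow((url, count))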
To speed up model creation, you can:

* Call :py:meth:`~SelectQuery.naive`, which will not construct a graph and simply patch all attributes from the row directly onto a model instance.
* Use :py:meth:`~SelectQuery.dicts` or :py:meth:`~SelectQuery.tuples`.

Speeding up Bulk Inserts
------------------------

See the :ref:`bulk_inserts` section for details on speeding up bulk insert operations.

peewee-2.10.2/docs/peewee/quickstart.rst

.. _quickstart:

Quickstart
==========

This document presents a brief, high-level overview of Peewee's primary features. This guide will cover:

* :ref:`model-definition`
* :ref:`storing-data`
* :ref:`retrieving-data`

.. note::
    If you'd like something a bit more meaty, there is a thorough tutorial on :ref:`creating a "twitter"-style web app ` using peewee and the Flask framework.

    I **strongly** recommend opening an interactive shell session and running the code. That way you can get a feel for typing in queries.

.. _model-definition:

Model Definition
----------------

Model classes, fields and model instances all map to database concepts:

================= =================================
Thing             Corresponds to...
================= =================================
Model class       Database table
Field instance    Column on a table
Model instance    Row in a database table
================= =================================

When starting a project with peewee, it's typically best to begin with your data model, by defining one or more :py:class:`Model` classes:

.. code-block:: python

    from peewee import *

    db = SqliteDatabase('people.db')

    class Person(Model):
        name = CharField()
        birthday = DateField()
        is_relative = BooleanField()

        class Meta:
            database = db  # This model uses the "people.db" database.

.. note::
    Note that we named our model ``Person`` instead of ``People``. This is a convention you should follow -- even though the table will contain multiple people, we always name the class using the singular form.

There are lots of :ref:`field types ` suitable for storing various types of data. Peewee handles converting between *pythonic* values and those used by the database, so you can use Python types in your code without having to worry.

Things get interesting when we set up relationships between models using `foreign keys (wikipedia) <https://en.wikipedia.org/wiki/Foreign_key>`_. This is easy to do with peewee:

.. code-block:: python

    class Pet(Model):
        owner = ForeignKeyField(Person, related_name='pets')
        name = CharField()
        animal_type = CharField()

        class Meta:
            database = db  # this model uses the "people.db" database

Now that we have our models, let's connect to the database. Although it's not necessary to open the connection explicitly, it is good practice since it will reveal any errors with your database connection immediately, as opposed to some arbitrary time later when the first query is executed. It is also good to close the connection when you are done -- for instance, a web app might open a connection when it receives a request, and close the connection when it sends the response.

.. code-block:: pycon

    >>> db.connect()

We'll begin by creating the tables in the database that will store our data. This will create the tables with the appropriate columns, indexes, sequences, and foreign key constraints:

.. code-block:: pycon

    >>> db.create_tables([Person, Pet])

.. _storing-data:

Storing data
------------

Let's begin by populating the database with some people.
We will use the :py:meth:`~Model.save` and :py:meth:`~Model.create` methods to add and update people's records. .. code-block:: pycon >>> from datetime import date >>> uncle_bob = Person(name='Bob', birthday=date(1960, 1, 15), is_relative=True) >>> uncle_bob.save() # bob is now stored in the database 1 .. note:: When you call :py:meth:`~Model.save`, the number of rows modified is returned. You can also add a person by calling the :py:meth:`~Model.create` method, which returns a model instance: .. code-block:: pycon >>> grandma = Person.create(name='Grandma', birthday=date(1935, 3, 1), is_relative=True) >>> herb = Person.create(name='Herb', birthday=date(1950, 5, 5), is_relative=False) To update a row, modify the model instance and call :py:meth:`~Model.save` to persist the changes. Here we will change Grandma's name and then save the changes in the database: .. code-block:: pycon >>> grandma.name = 'Grandma L.' >>> grandma.save() # Update grandma's name in the database. 1 Now we have stored 3 people in the database. Let's give them some pets. Grandma doesn't like animals in the house, so she won't have any, but Herb is an animal lover: .. code-block:: pycon >>> bob_kitty = Pet.create(owner=uncle_bob, name='Kitty', animal_type='cat') >>> herb_fido = Pet.create(owner=herb, name='Fido', animal_type='dog') >>> herb_mittens = Pet.create(owner=herb, name='Mittens', animal_type='cat') >>> herb_mittens_jr = Pet.create(owner=herb, name='Mittens Jr', animal_type='cat') After a long full life, Mittens sickens and dies. We need to remove him from the database: .. code-block:: pycon >>> herb_mittens.delete_instance() # he had a great life 1 .. note:: The return value of :py:meth:`~Model.delete_instance` is the number of rows removed from the database. Uncle Bob decides that too many animals have been dying at Herb's house, so he adopts Fido: .. code-block:: pycon >>> herb_fido.owner = uncle_bob >>> herb_fido.save() >>> bob_fido = herb_fido # rename our variable for clarity .. _retrieving-data: Retrieving Data --------------- The real strength of our database is in how it allows us to retrieve data through *queries*. Relational databases are excellent for making ad-hoc queries. Getting single records ^^^^^^^^^^^^^^^^^^^^^^ Let's retrieve Grandma's record from the database. To get a single record from the database, use :py:meth:`SelectQuery.get`: .. code-block:: pycon >>> grandma = Person.select().where(Person.name == 'Grandma L.').get() We can also use the equivalent shorthand :py:meth:`Model.get`: .. code-block:: pycon >>> grandma = Person.get(Person.name == 'Grandma L.') Lists of records ^^^^^^^^^^^^^^^^ Let's list all the people in the database: .. code-block:: pycon >>> for person in Person.select(): ... print person.name, person.is_relative ... Bob True Grandma L. True Herb False Let's list all the cats and their owner's name: .. code-block:: pycon >>> query = Pet.select().where(Pet.animal_type == 'cat') >>> for pet in query: ... print pet.name, pet.owner.name ... Kitty Bob Mittens Jr Herb There is a big problem with the previous query: because we are accessing ``pet.owner.name`` and we did not select this value in our original query, peewee will have to perform an additional query to retrieve the pet's owner. This behavior is referred to as :ref:`N+1 ` and it should generally be avoided. We can avoid the extra queries by selecting both *Pet* and *Person*, and adding a *join*. .. code-block:: pycon >>> query = (Pet ... .select(Pet, Person) ... .join(Person) ... 
.where(Pet.animal_type == 'cat')) >>> for pet in query: ... print pet.name, pet.owner.name ... Kitty Bob Mittens Jr Herb Let's get all the pets owned by Bob: .. code-block:: pycon >>> for pet in Pet.select().join(Person).where(Person.name == 'Bob'): ... print pet.name ... Kitty Fido We can do another cool thing here to get bob's pets. Since we already have an object to represent Bob, we can do this instead: .. code-block:: pycon >>> for pet in Pet.select().where(Pet.owner == uncle_bob): ... print pet.name Let's make sure these are sorted alphabetically by adding an :py:meth:`~SelectQuery.order_by` clause: .. code-block:: pycon >>> for pet in Pet.select().where(Pet.owner == uncle_bob).order_by(Pet.name): ... print pet.name ... Fido Kitty Let's list all the people now, youngest to oldest: .. code-block:: pycon >>> for person in Person.select().order_by(Person.birthday.desc()): ... print person.name, person.birthday ... Bob 1960-01-15 Herb 1950-05-05 Grandma L. 1935-03-01 Now let's list all the people *and* some info about their pets: .. code-block:: pycon >>> for person in Person.select(): ... print person.name, person.pets.count(), 'pets' ... for pet in person.pets: ... print ' ', pet.name, pet.animal_type ... Bob 2 pets Kitty cat Fido dog Grandma L. 0 pets Herb 1 pets Mittens Jr cat Once again we've run into a classic example of :ref:`N+1 ` query behavior. We can avoid this by performing a *JOIN* and aggregating the records: .. code-block:: pycon >>> subquery = Pet.select(fn.COUNT(Pet.id)).where(Pet.owner == Person.id) >>> query = (Person ... .select(Person, Pet, subquery.alias('pet_count')) ... .join(Pet, JOIN.LEFT_OUTER) ... .order_by(Person.name)) >>> for person in query.aggregate_rows(): # Note the `aggregate_rows()` call. ... print person.name, person.pet_count, 'pets' ... for pet in person.pets: ... print ' ', pet.name, pet.animal_type ... Bob 2 pets Kitty cat Fido dog Grandma L. 0 pets Herb 1 pets Mittens Jr cat Even though we created the subquery separately, **only one** query is actually executed. Finally, let's do a complicated one. Let's get all the people whose birthday was either: * before 1940 (grandma) * after 1959 (bob) .. code-block:: pycon >>> d1940 = date(1940, 1, 1) >>> d1960 = date(1960, 1, 1) >>> query = (Person ... .select() ... .where((Person.birthday < d1940) | (Person.birthday > d1960))) ... >>> for person in query: ... print person.name, person.birthday ... Bob 1960-01-15 Grandma L. 1935-03-01 Now let's do the opposite. People whose birthday is between 1940 and 1960: .. code-block:: pycon >>> query = (Person ... .select() ... .where((Person.birthday > d1940) & (Person.birthday < d1960))) ... >>> for person in query: ... print person.name, person.birthday ... Herb 1950-05-05 One last query. This will use a SQL function to find all people whose names start with either an upper or lower-case *G*: .. code-block:: pycon >>> expression = (fn.Lower(fn.Substr(Person.name, 1, 1)) == 'g') >>> for person in Person.select().where(expression): ... print person.name ... Grandma L. We're done with our database, let's close the connection: .. code-block:: pycon >>> db.close() This is just the basics! You can make your queries as complex as you like. All the other SQL clauses are available as well, such as: * :py:meth:`~SelectQuery.group_by` * :py:meth:`~SelectQuery.having` * :py:meth:`~SelectQuery.limit` and :py:meth:`~SelectQuery.offset` Check the documentation on :ref:`querying` for more info. 
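As a parting example, here is a small sketch (not from the guide above; the ``num_pets`` alias is our own) showing ``group_by`` and ``having`` with the ``Pet`` model we defined earlier -- counting pets per ``animal_type`` and keeping only the types with more than one pet:

.. code-block:: pycon

    >>> query = (Pet
    ...          .select(Pet.animal_type, fn.COUNT(Pet.id).alias('num_pets'))
    ...          .group_by(Pet.animal_type)
    ...          .having(fn.COUNT(Pet.id) > 1))
    >>> for result in query:
    ...     print result.animal_type, result.num_pets
    ...
    cat 2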
Working with existing databases
-------------------------------

If you already have a database, you can autogenerate peewee models using
:ref:`pwiz`. For instance, if I have a postgresql database named
*charles_blog*, I might run:

.. code-block:: console

    python -m pwiz -e postgresql charles_blog > blog_models.py

What next?
----------

That's it for the quickstart. If you want to look at a full web-app, check out
the :ref:`example-app`.
peewee-2.10.2/docs/peewee/schema.jpg000066400000000000000000000265701316645060400172310ustar00rootroot00000000000000[binary data: JPEG image omitted]
peewee-2.10.2/docs/peewee/transactions.rst000066400000000000000000000171261316645060400205260ustar00rootroot00000000000000.. _transactions:

Transactions
============

Peewee provides several interfaces for working with transactions. The most
general is the :py:meth:`Database.atomic` method, which also supports nested
transactions. :py:meth:`~Database.atomic` blocks will be run in a transaction
or savepoint, depending on the level of nesting.

If an exception occurs in a wrapped block, the current transaction/savepoint
will be rolled back. Otherwise the statements will be committed at the end of
the wrapped block.

.. note::
    While inside a block wrapped by the :py:meth:`~Database.atomic` context
    manager, you can explicitly rollback or commit at any point by calling
    :py:meth:`Transaction.rollback` or :py:meth:`Transaction.commit`. When you
    do this inside a wrapped block of code, a new transaction will be started
    automatically.

    Consider this code:

    .. code-block:: python

        db.begin()  # Open a new transaction.
        error_saving = False  # Ensure the flag is defined even on success.
        try:
            save_some_objects()
        except ErrorSavingData:
            db.rollback()  # Uh-oh! Let's roll-back any partial changes.
            error_saving = True

        create_report(error_saving=error_saving)
        db.commit()  # What happens here??

    If the ``ErrorSavingData`` exception gets raised, we call rollback, but
    because we are not using the :py:meth:`~Database.atomic` context manager,
    **no new transaction is begun**. The call to ``commit()`` will fail
    because no transaction is active!

    On the other hand, consider this:

    .. code-block:: python

        error_saving = False  # Ensure the flag is defined even on success.
        with db.atomic() as transaction:  # Opens new transaction.
            try:
                save_some_objects()
            except ErrorSavingData:
                # Because this block of code is wrapped with "atomic", a
                # new transaction will begin automatically after the call
                # to rollback().
                db.rollback()
                error_saving = True

            create_report(error_saving=error_saving)
            # Note: no need to call commit. Since this marks the end of the
            # wrapped block of code, the `atomic` context manager will
            # automatically call commit for us.

.. note::
    :py:meth:`~Database.atomic` can be used as either a **context manager** or
    a **decorator**.

Context manager
---------------

Using ``atomic`` as context manager:

.. code-block:: python

    db = SqliteDatabase(':memory:')

    with db.atomic() as txn:
        # This is the outer-most level, so this block corresponds to
        # a transaction.
        User.create(username='charlie')

        with db.atomic() as nested_txn:
            # This block corresponds to a savepoint.
            User.create(username='huey')

            # This will roll back the above create() query.
            nested_txn.rollback()

        User.create(username='mickey')

    # When the block ends, the transaction is committed (assuming no error
    # occurs). At that point there will be two users, "charlie" and "mickey".

You can use the ``atomic`` method to perform *get or create* operations as
well:

.. code-block:: python

    try:
        with db.atomic():
            user = User.create(username=username)
        return 'Success'
    except peewee.IntegrityError:
        return 'Failure: %s is already in use.' % username

Decorator
---------

Using ``atomic`` as a decorator:

.. code-block:: python

    @db.atomic()
    def create_user(username):
        # This statement will run in a transaction. If the caller is already
        # running in an `atomic` block, then a savepoint will be used instead.
        return User.create(username=username)

    create_user('charlie')

Nesting Transactions
--------------------

:py:meth:`~Database.atomic` provides transparent nesting of transactions.
When using :py:meth:`~Database.atomic`, the outer-most call will be wrapped in
a transaction, and any nested calls will use savepoints.

.. code-block:: python

    with db.atomic() as txn:
        perform_operation()

        with db.atomic() as nested_txn:
            perform_another_operation()

Peewee supports nested transactions through the use of savepoints (for more
information, see :py:meth:`~Database.savepoint`).

Explicit transaction
--------------------

If you wish to explicitly run code in a transaction, you can use
:py:meth:`~Database.transaction`. Like :py:meth:`~Database.atomic`,
:py:meth:`~Database.transaction` can be used as a context manager or as a
decorator.

If an exception occurs in a wrapped block, the transaction will be rolled
back. Otherwise the statements will be committed at the end of the wrapped
block.

.. code-block:: python

    db = SqliteDatabase(':memory:')

    with db.transaction():
        # Delete the user and their associated tweets.
        user.delete_instance(recursive=True)

Transactions can be explicitly committed or rolled back within the wrapped
block. When this happens, a new transaction will be started.

.. code-block:: python

    with db.transaction() as txn:
        User.create(username='mickey')
        txn.commit()  # Changes are saved and a new transaction begins.
        User.create(username='huey')

        # Roll back. "huey" will not be saved, but since "mickey" was already
        # committed, that row will remain in the database.
        txn.rollback()

    with db.transaction() as txn:
        User.create(username='whiskers')
        # Roll back changes, which removes "whiskers".
        txn.rollback()

        # Create a new row for "mr. whiskers" which will be implicitly committed
        # at the end of the `with` block.
        User.create(username='mr. whiskers')

.. note::
    If you attempt to nest transactions with peewee using the
    :py:meth:`~Database.transaction` context manager, only the outer-most
    transaction will be used. However, if an exception occurs in a nested
    block, this can lead to unpredictable behavior, so it is strongly
    recommended that you use :py:meth:`~Database.atomic`.

Explicit Savepoints
^^^^^^^^^^^^^^^^^^^

Just as you can explicitly create transactions, you can also explicitly create
savepoints using the :py:meth:`~Database.savepoint` method. Savepoints must
occur within a transaction, but can be nested arbitrarily deep.

.. code-block:: python

    with db.transaction() as txn:
        with db.savepoint() as sp:
            User.create(username='mickey')

            with db.savepoint() as sp2:
                User.create(username='zaizee')
                sp2.rollback()  # "zaizee" will not be saved, but "mickey" will be.

.. note::
    If you manually commit or roll back a savepoint, a new savepoint **will
    not** automatically be created. This differs from the behavior of
    :py:class:`transaction`, which will automatically open a new transaction
    after manual commit/rollback.

Autocommit Mode
---------------

By default, databases are initialized with ``autocommit=True``; you can turn
this on and off at runtime if you like. If you choose to disable autocommit,
then you must explicitly call :py:meth:`Database.begin` to begin a
transaction, and commit or roll back.

The behavior below is roughly the same as the context manager and decorator:

.. code-block:: python

    db.set_autocommit(False)
    db.begin()
    try:
        user.delete_instance(recursive=True)
    except:
        db.rollback()
        raise
    else:
        try:
            db.commit()
        except:
            db.rollback()
            raise
    finally:
        db.set_autocommit(True)

If you would like to manually control *every* transaction, simply turn
autocommit off when instantiating your database:
.. code-block:: python

    db = SqliteDatabase(':memory:', autocommit=False)

    db.begin()
    User.create(username='somebody')
    db.commit()
peewee-2.10.2/docs/peewee/tweepee.jpg000066400000000000000000000740021316645060400174200ustar00rootroot00000000000000[binary data: JPEG image omitted]
peewee-2.10.2/docs/sqlite.png000066400000000000000000000046471316645060400160210ustar00rootroot00000000000000[binary data: PNG image omitted]
peewee-2.10.2/peewee.py000066400000000000000000005507621316645060400147140ustar00rootroot00000000000000# May you do good and not evil # May you find forgiveness for yourself and forgive others # May you share freely, never taking more than you give. -- SQLite source code # # As we enjoy great advantages from the inventions of others, we should be glad # of an opportunity to serve others by an invention of ours, and this we should # do freely and generously. -- Ben Franklin # # (\ # ( \ /(o)\ caw! # ( \/ ()/ /) # ( `;.))'".)
# `(/////.-' # =====))=))===() # ///' # // # ' import calendar import datetime import decimal import hashlib import itertools import logging import operator import re import sys import threading import time import uuid import weakref from bisect import bisect_left from bisect import bisect_right from collections import deque from collections import namedtuple try: from collections import OrderedDict except ImportError: OrderedDict = dict from copy import deepcopy from functools import wraps from inspect import isclass __version__ = '2.10.2' __all__ = [ 'BareField', 'BigIntegerField', 'BlobField', 'BooleanField', 'CharField', 'Check', 'Clause', 'CompositeKey', 'DatabaseError', 'DataError', 'DateField', 'DateTimeField', 'DecimalField', 'DeferredRelation', 'DoesNotExist', 'DoubleField', 'DQ', 'Field', 'FixedCharField', 'FloatField', 'fn', 'ForeignKeyField', 'ImproperlyConfigured', 'IntegerField', 'IntegrityError', 'InterfaceError', 'InternalError', 'JOIN', 'JOIN_FULL', 'JOIN_INNER', 'JOIN_LEFT_OUTER', 'Model', 'MySQLDatabase', 'NotSupportedError', 'OperationalError', 'Param', 'PostgresqlDatabase', 'prefetch', 'PrimaryKeyField', 'ProgrammingError', 'Proxy', 'R', 'SmallIntegerField', 'SqliteDatabase', 'SQL', 'TextField', 'TimeField', 'TimestampField', 'Tuple', 'Using', 'UUIDField', 'Window', ] # Set default logging handler to avoid "No handlers could be found for logger # "peewee"" warnings. try: # Python 2.7+ from logging import NullHandler except ImportError: class NullHandler(logging.Handler): def emit(self, record): pass # All peewee-generated logs are logged to this namespace. logger = logging.getLogger('peewee') logger.addHandler(NullHandler()) # Python 2/3 compatibility helpers. These helpers are used internally and are # not exported. _METACLASS_ = '_metaclass_helper_' def with_metaclass(meta, base=object): return meta(_METACLASS_, (base,), {}) PY2 = sys.version_info[0] == 2 PY3 = sys.version_info[0] == 3 PY26 = sys.version_info[:2] == (2, 6) if PY3: import builtins from collections import Callable from functools import reduce callable = lambda c: isinstance(c, Callable) unicode_type = str string_type = bytes basestring = str print_ = getattr(builtins, 'print') binary_construct = lambda s: bytes(s.encode('raw_unicode_escape')) long = int def reraise(tp, value, tb=None): if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value elif PY2: unicode_type = unicode string_type = basestring binary_construct = buffer def print_(s): sys.stdout.write(s) sys.stdout.write('\n') exec('def reraise(tp, value, tb=None): raise tp, value, tb') else: raise RuntimeError('Unsupported python version.') if PY26: _D, _M = 24 * 3600., 10**6 total_seconds = lambda t: (t.microseconds+(t.seconds+t.days*_D)*_M)/_M else: total_seconds = lambda t: t.total_seconds() # By default, peewee supports Sqlite, MySQL and Postgresql. try: from pysqlite2 import dbapi2 as pysq3 except ImportError: pysq3 = None try: import sqlite3 except ImportError: sqlite3 = pysq3 else: if pysq3 and pysq3.sqlite_version_info >= sqlite3.sqlite_version_info: sqlite3 = pysq3 try: from psycopg2cffi import compat compat.register() except ImportError: pass try: import psycopg2 from psycopg2 import extensions as pg_extensions except ImportError: psycopg2 = None try: import MySQLdb as mysql # prefer the C module. 
except ImportError: try: import pymysql as mysql except ImportError: mysql = None try: from playhouse._speedups import format_date_time from playhouse._speedups import sort_models_topologically from playhouse._speedups import strip_parens except ImportError: def format_date_time(value, formats, post_process=None): post_process = post_process or (lambda x: x) for fmt in formats: try: return post_process(datetime.datetime.strptime(value, fmt)) except ValueError: pass return value def sort_models_topologically(models): """Sort models topologically so that parents will precede children.""" models = set(models) seen = set() ordering = [] def dfs(model): # Omit models which are already sorted # or should not be in the list at all if model in models and model not in seen: seen.add(model) # First create models on which current model depends # (either through foreign keys or through depends_on), # then create current model itself for foreign_key in model._meta.rel.values(): dfs(foreign_key.rel_model) if model._meta.depends_on: for dependency in model._meta.depends_on: dfs(dependency) ordering.append(model) # Order models by name and table initially to guarantee total ordering. names = lambda m: (m._meta.name, m._meta.db_table) for m in sorted(models, key=names): dfs(m) return ordering def strip_parens(s): # Quick sanity check. if not s or s[0] != '(': return s ct = i = 0 l = len(s) while i < l: if s[i] == '(' and s[l - 1] == ')': ct += 1 i += 1 l -= 1 else: break if ct: # If we ever end up with negatively-balanced parentheses, then we # know that one of the outer parentheses was required. unbalanced_ct = 0 required = 0 for i in range(ct, l - ct): if s[i] == '(': unbalanced_ct += 1 elif s[i] == ')': unbalanced_ct -= 1 if unbalanced_ct < 0: required += 1 unbalanced_ct = 0 if required == ct: break ct -= required if ct > 0: return s[ct:-ct] return s try: from playhouse._speedups import _DictQueryResultWrapper from playhouse._speedups import _ModelQueryResultWrapper from playhouse._speedups import _SortedFieldList from playhouse._speedups import _TuplesQueryResultWrapper except ImportError: _DictQueryResultWrapper = _ModelQueryResultWrapper = _SortedFieldList =\ _TuplesQueryResultWrapper = None if sqlite3: sqlite3.register_adapter(decimal.Decimal, str) sqlite3.register_adapter(datetime.date, str) sqlite3.register_adapter(datetime.time, str) DATETIME_PARTS = ['year', 'month', 'day', 'hour', 'minute', 'second'] DATETIME_LOOKUPS = set(DATETIME_PARTS) # Sqlite does not support the `date_part` SQL function, so we will define an # implementation in python. 
SQLITE_DATETIME_FORMATS = ( '%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d', '%H:%M:%S', '%H:%M:%S.%f', '%H:%M') def _sqlite_date_part(lookup_type, datetime_string): assert lookup_type in DATETIME_LOOKUPS if not datetime_string: return dt = format_date_time(datetime_string, SQLITE_DATETIME_FORMATS) return getattr(dt, lookup_type) SQLITE_DATE_TRUNC_MAPPING = { 'year': '%Y', 'month': '%Y-%m', 'day': '%Y-%m-%d', 'hour': '%Y-%m-%d %H', 'minute': '%Y-%m-%d %H:%M', 'second': '%Y-%m-%d %H:%M:%S'} MYSQL_DATE_TRUNC_MAPPING = SQLITE_DATE_TRUNC_MAPPING.copy() MYSQL_DATE_TRUNC_MAPPING['minute'] = '%Y-%m-%d %H:%i' MYSQL_DATE_TRUNC_MAPPING['second'] = '%Y-%m-%d %H:%i:%S' def _sqlite_date_trunc(lookup_type, datetime_string): assert lookup_type in SQLITE_DATE_TRUNC_MAPPING if not datetime_string: return dt = format_date_time(datetime_string, SQLITE_DATETIME_FORMATS) return dt.strftime(SQLITE_DATE_TRUNC_MAPPING[lookup_type]) def _sqlite_regexp(regex, value, case_sensitive=False): flags = 0 if case_sensitive else re.I return re.search(regex, value, flags) is not None class attrdict(dict): def __getattr__(self, attr): try: return self[attr] except KeyError: raise AttributeError(attr) SENTINEL = object() # Operators used in binary expressions. OP = attrdict( AND='and', OR='or', ADD='+', SUB='-', MUL='*', DIV='/', BIN_AND='&', BIN_OR='|', XOR='^', MOD='%', EQ='=', LT='<', LTE='<=', GT='>', GTE='>=', NE='!=', IN='in', NOT_IN='not in', IS='is', IS_NOT='is not', LIKE='like', ILIKE='ilike', BETWEEN='between', REGEXP='regexp', CONCAT='||', ) JOIN = attrdict( INNER='INNER', LEFT_OUTER='LEFT OUTER', RIGHT_OUTER='RIGHT OUTER', FULL='FULL', CROSS='CROSS', ) JOIN_INNER = JOIN.INNER JOIN_LEFT_OUTER = JOIN.LEFT_OUTER JOIN_FULL = JOIN.FULL RESULTS_NAIVE = 1 RESULTS_MODELS = 2 RESULTS_TUPLES = 3 RESULTS_DICTS = 4 RESULTS_AGGREGATE_MODELS = 5 RESULTS_NAMEDTUPLES = 6 # To support "django-style" double-underscore filters, create a mapping between # operation name and operation code, e.g. "__eq" == OP.EQ. DJANGO_MAP = { 'eq': OP.EQ, 'lt': OP.LT, 'lte': OP.LTE, 'gt': OP.GT, 'gte': OP.GTE, 'ne': OP.NE, 'in': OP.IN, 'is': OP.IS, 'like': OP.LIKE, 'ilike': OP.ILIKE, 'regexp': OP.REGEXP, } # Helper functions that are used in various parts of the codebase. def merge_dict(source, overrides): merged = source.copy() merged.update(overrides) return merged def returns_clone(func): """ Method decorator that will "clone" the object before applying the given method. This ensures that state is mutated in a more predictable fashion, and promotes the use of method-chaining. """ def inner(self, *args, **kwargs): clone = self.clone() # Assumes object implements `clone`. func(clone, *args, **kwargs) return clone inner.call_local = func # Provide a way to call without cloning. return inner def not_allowed(func): """ Method decorator to indicate a method is not allowed to be called. Will raise a `NotImplementedError`. """ def inner(self, *args, **kwargs): raise NotImplementedError('%s is not allowed on %s instances' % ( func, type(self).__name__)) return inner class Proxy(object): """ Proxy class useful for situations when you wish to defer the initialization of an object. 
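    A minimal usage sketch (the database filename here is illustrative):
    create the proxy before the real object exists, then hand it the real
    object once it is known. Attribute access on the proxy is forwarded to
    the initialized object:

        database_proxy = Proxy()
        # ... later, once the real database is configured:
        database_proxy.initialize(SqliteDatabase('app.db'))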
""" __slots__ = ('obj', '_callbacks') def __init__(self): self._callbacks = [] self.initialize(None) def initialize(self, obj): self.obj = obj for callback in self._callbacks: callback(obj) def attach_callback(self, callback): self._callbacks.append(callback) return callback def __getattr__(self, attr): if self.obj is None: raise AttributeError('Cannot use uninitialized Proxy.') return getattr(self.obj, attr) def __setattr__(self, attr, value): if attr not in self.__slots__: raise AttributeError('Cannot set attribute on proxy.') return super(Proxy, self).__setattr__(attr, value) class DeferredRelation(object): _unresolved = set() def __init__(self, rel_model_name=None): self.fields = [] if rel_model_name is not None: self._rel_model_name = rel_model_name.lower() self._unresolved.add(self) def set_field(self, model_class, field, name): self.fields.append((model_class, field, name)) def set_model(self, rel_model): for model, field, name in self.fields: field.rel_model = rel_model field.add_to_class(model, name) @staticmethod def resolve(model_cls): unresolved = list(DeferredRelation._unresolved) for dr in unresolved: if dr._rel_model_name == model_cls.__name__.lower(): dr.set_model(model_cls) DeferredRelation._unresolved.discard(dr) class _CDescriptor(object): def __get__(self, instance, instance_type=None): if instance is not None: return Entity(instance._alias) return self # Classes representing the query tree. class Node(object): """Base-class for any part of a query which shall be composable.""" c = _CDescriptor() _node_type = 'node' def __init__(self): self._negated = False self._alias = None self._bind_to = None self._ordering = None # ASC or DESC. @classmethod def extend(cls, name=None, clone=False): def decorator(method): method_name = name or method.__name__ if clone: method = returns_clone(method) setattr(cls, method_name, method) return method return decorator def clone_base(self): return type(self)() def clone(self): inst = self.clone_base() inst._negated = self._negated inst._alias = self._alias inst._ordering = self._ordering inst._bind_to = self._bind_to return inst @returns_clone def __invert__(self): self._negated = not self._negated @returns_clone def alias(self, a=None): self._alias = a @returns_clone def bind_to(self, bt): """ Bind the results of an expression to a specific model type. Useful when adding expressions to a select, where the result of the expression should be placed on a joined instance. """ self._bind_to = bt @returns_clone def asc(self): self._ordering = 'ASC' @returns_clone def desc(self): self._ordering = 'DESC' def __pos__(self): return self.asc() def __neg__(self): return self.desc() def _e(op, inv=False): """ Lightweight factory which returns a method that builds an Expression consisting of the left-hand and right-hand operands, using `op`. 
""" def inner(self, rhs): if inv: return Expression(rhs, op, self) return Expression(self, op, rhs) return inner __and__ = _e(OP.AND) __or__ = _e(OP.OR) __add__ = _e(OP.ADD) __sub__ = _e(OP.SUB) __mul__ = _e(OP.MUL) __div__ = __truediv__ = _e(OP.DIV) __xor__ = _e(OP.XOR) __radd__ = _e(OP.ADD, inv=True) __rsub__ = _e(OP.SUB, inv=True) __rmul__ = _e(OP.MUL, inv=True) __rdiv__ = __rtruediv__ = _e(OP.DIV, inv=True) __rand__ = _e(OP.AND, inv=True) __ror__ = _e(OP.OR, inv=True) __rxor__ = _e(OP.XOR, inv=True) def __eq__(self, rhs): if rhs is None: return Expression(self, OP.IS, None) return Expression(self, OP.EQ, rhs) def __ne__(self, rhs): if rhs is None: return Expression(self, OP.IS_NOT, None) return Expression(self, OP.NE, rhs) __lt__ = _e(OP.LT) __le__ = _e(OP.LTE) __gt__ = _e(OP.GT) __ge__ = _e(OP.GTE) __lshift__ = _e(OP.IN) __rshift__ = _e(OP.IS) __mod__ = _e(OP.LIKE) __pow__ = _e(OP.ILIKE) bin_and = _e(OP.BIN_AND) bin_or = _e(OP.BIN_OR) # Special expressions. def in_(self, rhs): return Expression(self, OP.IN, rhs) def not_in(self, rhs): return Expression(self, OP.NOT_IN, rhs) def is_null(self, is_null=True): if is_null: return Expression(self, OP.IS, None) return Expression(self, OP.IS_NOT, None) def contains(self, rhs): return Expression(self, OP.ILIKE, '%%%s%%' % rhs) def startswith(self, rhs): return Expression(self, OP.ILIKE, '%s%%' % rhs) def endswith(self, rhs): return Expression(self, OP.ILIKE, '%%%s' % rhs) def between(self, low, high): return Expression(self, OP.BETWEEN, Clause(low, R('AND'), high)) def regexp(self, expression): return Expression(self, OP.REGEXP, expression) def concat(self, rhs): return StringExpression(self, OP.CONCAT, rhs) class SQL(Node): """An unescaped SQL string, with optional parameters.""" _node_type = 'sql' def __init__(self, value, *params): self.value = value self.params = params super(SQL, self).__init__() def clone_base(self): return SQL(self.value, *self.params) R = SQL # backwards-compat. class Entity(Node): """A quoted-name or entity, e.g. "table"."column".""" _node_type = 'entity' def __init__(self, *path): super(Entity, self).__init__() self.path = path def clone_base(self): return Entity(*self.path) def __getattr__(self, attr): return Entity(*filter(None, self.path + (attr,))) class Func(Node): """An arbitrary SQL function call.""" _node_type = 'func' _no_coerce = set(('count', 'sum')) def __init__(self, name, *arguments): self.name = name self.arguments = arguments self._coerce = (name.lower() not in self._no_coerce) if name else False super(Func, self).__init__() @returns_clone def coerce(self, coerce=True): self._coerce = coerce def clone_base(self): res = Func(self.name, *self.arguments) res._coerce = self._coerce return res def over(self, partition_by=None, order_by=None, start=None, end=None, window=None): if isinstance(partition_by, Window) and window is None: window = partition_by if start is not None and not isinstance(start, SQL): start = SQL(*start) if end is not None and not isinstance(end, SQL): end = SQL(*end) if window is None: sql = Window(partition_by=partition_by, order_by=order_by, start=start, end=end).__sql__() else: sql = SQL(window._alias) return Clause(self, SQL('OVER'), sql) def __getattr__(self, attr): def dec(*args, **kwargs): return Func(attr, *args, **kwargs) return dec # fn is a factory for creating `Func` objects and supports a more friendly # API. So instead of `Func("LOWER", param)`, `fn.LOWER(param)`. 
fn = Func(None) class Expression(Node): """A binary expression, e.g `foo + 1` or `bar < 7`.""" _node_type = 'expression' def __init__(self, lhs, op, rhs, flat=False): super(Expression, self).__init__() self.lhs = lhs self.op = op self.rhs = rhs self.flat = flat def clone_base(self): return Expression(self.lhs, self.op, self.rhs, self.flat) class StringExpression(Expression): def __add__(self, other): return self.concat(other) def __radd__(self, other): return other.concat(self) class Param(Node): """ Arbitrary parameter passed into a query. Instructs the query compiler to specifically treat this value as a parameter, useful for `list` which is special-cased for `IN` lookups. """ _node_type = 'param' def __init__(self, value, adapt=None): self.value = value self.adapt = adapt super(Param, self).__init__() def clone_base(self): return Param(self.value, self.adapt) class Passthrough(Param): _node_type = 'passthrough' class Clause(Node): """A SQL clause, one or more Node objects joined by spaces.""" _node_type = 'clause' glue = ' ' parens = False def __init__(self, *nodes, **kwargs): if 'glue' in kwargs: self.glue = kwargs['glue'] if 'parens' in kwargs: self.parens = kwargs['parens'] super(Clause, self).__init__() self.nodes = list(nodes) def clone_base(self): clone = Clause(*self.nodes) clone.glue = self.glue clone.parens = self.parens return clone class CommaClause(Clause): """One or more Node objects joined by commas, no parens.""" glue = ', ' class EnclosedClause(CommaClause): """One or more Node objects joined by commas and enclosed in parens.""" parens = True Tuple = EnclosedClause class Window(Node): CURRENT_ROW = 'CURRENT ROW' def __init__(self, partition_by=None, order_by=None, start=None, end=None): super(Window, self).__init__() self.partition_by = partition_by self.order_by = order_by self.start = start self.end = end if self.start is None and self.end is not None: raise ValueError('Cannot specify WINDOW end without start.') self._alias = self._alias or 'w' @staticmethod def following(value=None): if value is None: return SQL('UNBOUNDED FOLLOWING') return SQL('%d FOLLOWING' % value) @staticmethod def preceding(value=None): if value is None: return SQL('UNBOUNDED PRECEDING') return SQL('%d PRECEDING' % value) def __sql__(self): over_clauses = [] if self.partition_by: over_clauses.append(Clause( SQL('PARTITION BY'), CommaClause(*self.partition_by))) if self.order_by: over_clauses.append(Clause( SQL('ORDER BY'), CommaClause(*self.order_by))) if self.start is not None and self.end is not None: over_clauses.append(Clause( SQL('RANGE BETWEEN'), self.start, SQL('AND'), self.end)) elif self.start is not None: over_clauses.append(Clause(SQL('RANGE'), self.start)) return EnclosedClause(Clause(*over_clauses)) def clone_base(self): return Window(self.partition_by, self.order_by) def Check(value): return SQL('CHECK (%s)' % value) class DQ(Node): """A "django-style" filter expression, e.g. {'foo__eq': 'x'}.""" def __init__(self, **query): super(DQ, self).__init__() self.query = query def clone_base(self): return DQ(**self.query) class _StripParens(Node): _node_type = 'strip_parens' def __init__(self, node): super(_StripParens, self).__init__() self.node = node JoinMetadata = namedtuple('JoinMetadata', ( 'src_model', # Source Model class. 'dest_model', # Dest Model class. 'src', # Source, may be Model, ModelAlias 'dest', # Dest, may be Model, ModelAlias, or SelectQuery. 'attr', # Attribute name joined instance(s) should be assigned to. 'primary_key', # Primary key being joined on. 
'foreign_key', # Foreign key being joined from. 'is_backref', # Is this a backref, i.e. 1 -> N. 'alias', # Explicit alias given to join expression. 'is_self_join', # Is this a self-join? 'is_expression', # Is the join ON clause an Expression? )) class Join(namedtuple('_Join', ('src', 'dest', 'join_type', 'on'))): def get_foreign_key(self, source, dest, field=None): if isinstance(source, SelectQuery) or isinstance(dest, SelectQuery): return None, None fk_field = source._meta.rel_for_model(dest, field) if fk_field is not None: return fk_field, False reverse_rel = source._meta.reverse_rel_for_model(dest, field) if reverse_rel is not None: return reverse_rel, True return None, None def get_join_type(self): return self.join_type or JOIN.INNER def model_from_alias(self, model_or_alias): if isinstance(model_or_alias, ModelAlias): return model_or_alias.model_class elif isinstance(model_or_alias, SelectQuery): return model_or_alias.model_class return model_or_alias def _join_metadata(self): # Get the actual tables being joined. src = self.model_from_alias(self.src) dest = self.model_from_alias(self.dest) join_alias = isinstance(self.on, Node) and self.on._alias or None is_expression = isinstance(self.on, (Expression, Func, SQL)) on_field = isinstance(self.on, (Field, FieldProxy)) and self.on or None if on_field: fk_field = on_field is_backref = on_field.name not in src._meta.fields else: fk_field, is_backref = self.get_foreign_key(src, dest, self.on) if fk_field is None and self.on is not None: fk_field, is_backref = self.get_foreign_key(src, dest) if fk_field is not None: primary_key = fk_field.to_field else: primary_key = None if not join_alias: if fk_field is not None: if is_backref: target_attr = dest._meta.db_table else: target_attr = fk_field.name else: try: target_attr = self.on.lhs.name except AttributeError: target_attr = dest._meta.db_table else: target_attr = None return JoinMetadata( src_model=src, dest_model=dest, src=self.src, dest=self.dest, attr=join_alias or target_attr, primary_key=primary_key, foreign_key=fk_field, is_backref=is_backref, alias=join_alias, is_self_join=src is dest, is_expression=is_expression) @property def metadata(self): if not hasattr(self, '_cached_metadata'): self._cached_metadata = self._join_metadata() return self._cached_metadata class FieldDescriptor(object): # Fields are exposed as descriptors in order to control access to the # underlying "raw" data. def __init__(self, field): self.field = field self.att_name = self.field.name def __get__(self, instance, instance_type=None): if instance is not None: return instance._data.get(self.att_name) return self.field def __set__(self, instance, value): instance._data[self.att_name] = value instance._dirty.add(self.att_name) class Field(Node): """A column on a table.""" _field_counter = 0 _order = 0 _node_type = 'field' db_field = 'unknown' def __init__(self, null=False, index=False, unique=False, verbose_name=None, help_text=None, db_column=None, default=None, choices=None, primary_key=False, sequence=None, constraints=None, schema=None, undeclared=False): self.null = null self.index = index self.unique = unique self.verbose_name = verbose_name self.help_text = help_text self.db_column = db_column self.default = default self.choices = choices # Used for metadata purposes, not enforced. self.primary_key = primary_key self.sequence = sequence # Name of sequence, e.g. foo_id_seq. self.constraints = constraints # List of column constraints. self.schema = schema # Name of schema, e.g. 'public'. 
self.undeclared = undeclared # Whether this field is part of schema. # Used internally for recovering the order in which Fields were defined # on the Model class. Field._field_counter += 1 self._order = Field._field_counter self._sort_key = (self.primary_key and 1 or 2), self._order self._is_bound = False # Whether the Field is "bound" to a Model. super(Field, self).__init__() def clone_base(self, **kwargs): inst = type(self)( null=self.null, index=self.index, unique=self.unique, verbose_name=self.verbose_name, help_text=self.help_text, db_column=self.db_column, default=self.default, choices=self.choices, primary_key=self.primary_key, sequence=self.sequence, constraints=self.constraints, schema=self.schema, undeclared=self.undeclared, **kwargs) if self._is_bound: inst.name = self.name inst.model_class = self.model_class inst._is_bound = self._is_bound return inst def add_to_class(self, model_class, name): """ Hook that replaces the `Field` attribute on a class with a named `FieldDescriptor`. Called by the metaclass during construction of the `Model`. """ self.name = name self.model_class = model_class self.db_column = self.db_column or self.name if not self.verbose_name: self.verbose_name = re.sub('_+', ' ', name).title() model_class._meta.add_field(self) setattr(model_class, name, FieldDescriptor(self)) self._is_bound = True def get_database(self): return self.model_class._meta.database def get_column_type(self): field_type = self.get_db_field() return self.get_database().compiler().get_column_type(field_type) def get_db_field(self): return self.db_field def get_modifiers(self): return None def coerce(self, value): return value def db_value(self, value): """Convert the python value for storage in the database.""" return value if value is None else self.coerce(value) def python_value(self, value): """Convert the database value to a pythonic value.""" return value if value is None else self.coerce(value) def as_entity(self, with_table=False): if with_table: return Entity(self.model_class._meta.db_table, self.db_column) return Entity(self.db_column) def __ddl_column__(self, column_type): """Return the column type, e.g. VARCHAR(255) or REAL.""" modifiers = self.get_modifiers() if modifiers: return SQL( '%s(%s)' % (column_type, ', '.join(map(str, modifiers)))) return SQL(column_type) def __ddl__(self, column_type): """Return a list of Node instances that defines the column.""" ddl = [self.as_entity(), self.__ddl_column__(column_type)] if not self.null: ddl.append(SQL('NOT NULL')) if self.primary_key: ddl.append(SQL('PRIMARY KEY')) if self.sequence: ddl.append(SQL("DEFAULT NEXTVAL('%s')" % self.sequence)) if self.constraints: ddl.extend(self.constraints) return ddl def __hash__(self): return hash(self.name + '.' 
+ self.model_class.__name__) class BareField(Field): db_field = 'bare' def __init__(self, coerce=None, *args, **kwargs): super(BareField, self).__init__(*args, **kwargs) if coerce is not None: self.coerce = coerce def clone_base(self, **kwargs): return super(BareField, self).clone_base(coerce=self.coerce, **kwargs) class IntegerField(Field): db_field = 'int' coerce = int class BigIntegerField(IntegerField): db_field = 'bigint' class SmallIntegerField(IntegerField): db_field = 'smallint' class PrimaryKeyField(IntegerField): db_field = 'primary_key' def __init__(self, *args, **kwargs): kwargs['primary_key'] = True super(PrimaryKeyField, self).__init__(*args, **kwargs) class _AutoPrimaryKeyField(PrimaryKeyField): _column_name = None def __init__(self, *args, **kwargs): if 'undeclared' in kwargs and not kwargs['undeclared']: raise ValueError('%r must be created with undeclared=True.' % self) kwargs['undeclared'] = True super(_AutoPrimaryKeyField, self).__init__(*args, **kwargs) def add_to_class(self, model_class, name): if name != self._column_name: raise ValueError('%s must be named `%s`.' % (type(self), name)) super(_AutoPrimaryKeyField, self).add_to_class(model_class, name) class FloatField(Field): db_field = 'float' coerce = float class DoubleField(FloatField): db_field = 'double' class DecimalField(Field): db_field = 'decimal' def __init__(self, max_digits=10, decimal_places=5, auto_round=False, rounding=None, *args, **kwargs): self.max_digits = max_digits self.decimal_places = decimal_places self.auto_round = auto_round self.rounding = rounding or decimal.DefaultContext.rounding self._exp = decimal.Decimal(10) ** (-self.decimal_places) super(DecimalField, self).__init__(*args, **kwargs) def clone_base(self, **kwargs): return super(DecimalField, self).clone_base( max_digits=self.max_digits, decimal_places=self.decimal_places, auto_round=self.auto_round, rounding=self.rounding, **kwargs) def get_modifiers(self): return [self.max_digits, self.decimal_places] def db_value(self, value): D = decimal.Decimal if not value: return value if value is None else D(0) elif self.auto_round or not isinstance(value, D): value = D(str(value)) if value.is_normal() and self.auto_round: value = value.quantize(self._exp, rounding=self.rounding) return value def python_value(self, value): if value is not None: if isinstance(value, decimal.Decimal): return value return decimal.Decimal(str(value)) def coerce_to_unicode(s, encoding='utf-8'): if isinstance(s, unicode_type): return s elif isinstance(s, string_type): try: return s.decode(encoding) except UnicodeDecodeError: return s return unicode_type(s) class _StringField(Field): def coerce(self, value): return coerce_to_unicode(value or '') def __add__(self, other): return self.concat(other) def __radd__(self, other): return other.concat(self) class CharField(_StringField): db_field = 'string' def __init__(self, max_length=255, *args, **kwargs): self.max_length = max_length super(CharField, self).__init__(*args, **kwargs) def clone_base(self, **kwargs): return super(CharField, self).clone_base( max_length=self.max_length, **kwargs) def get_modifiers(self): return self.max_length and [self.max_length] or None class FixedCharField(CharField): db_field = 'fixed_char' def python_value(self, value): value = super(FixedCharField, self).python_value(value) if value: value = value.strip() return value class TextField(_StringField): db_field = 'text' class BlobField(Field): db_field = 'blob' _constructor = binary_construct def add_to_class(self, model_class, name): if 
isinstance(model_class._meta.database, Proxy): model_class._meta.database.attach_callback(self._set_constructor) return super(BlobField, self).add_to_class(model_class, name) def _set_constructor(self, database): self._constructor = database.get_binary_type() def db_value(self, value): if isinstance(value, unicode_type): value = value.encode('raw_unicode_escape') if isinstance(value, basestring): return self._constructor(value) return value class UUIDField(Field): db_field = 'uuid' def db_value(self, value): if isinstance(value, uuid.UUID): return value.hex try: return uuid.UUID(value).hex except: return value def python_value(self, value): if isinstance(value, uuid.UUID): return value return None if value is None else uuid.UUID(value) def _date_part(date_part): def dec(self): return self.model_class._meta.database.extract_date(date_part, self) return dec class _BaseFormattedField(Field): formats = None def __init__(self, formats=None, *args, **kwargs): if formats is not None: self.formats = formats super(_BaseFormattedField, self).__init__(*args, **kwargs) def clone_base(self, **kwargs): return super(_BaseFormattedField, self).clone_base( formats=self.formats, **kwargs) class DateTimeField(_BaseFormattedField): db_field = 'datetime' formats = [ '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M:%S', '%Y-%m-%d', ] def python_value(self, value): if value and isinstance(value, basestring): return format_date_time(value, self.formats) return value year = property(_date_part('year')) month = property(_date_part('month')) day = property(_date_part('day')) hour = property(_date_part('hour')) minute = property(_date_part('minute')) second = property(_date_part('second')) class DateField(_BaseFormattedField): db_field = 'date' formats = [ '%Y-%m-%d', '%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M:%S.%f', ] def python_value(self, value): if value and isinstance(value, basestring): pp = lambda x: x.date() return format_date_time(value, self.formats, pp) elif value and isinstance(value, datetime.datetime): return value.date() return value year = property(_date_part('year')) month = property(_date_part('month')) day = property(_date_part('day')) class TimeField(_BaseFormattedField): db_field = 'time' formats = [ '%H:%M:%S.%f', '%H:%M:%S', '%H:%M', '%Y-%m-%d %H:%M:%S.%f', '%Y-%m-%d %H:%M:%S', ] def python_value(self, value): if value: if isinstance(value, basestring): pp = lambda x: x.time() return format_date_time(value, self.formats, pp) elif isinstance(value, datetime.datetime): return value.time() if value is not None and isinstance(value, datetime.timedelta): return (datetime.datetime.min + value).time() return value hour = property(_date_part('hour')) minute = property(_date_part('minute')) second = property(_date_part('second')) class TimestampField(IntegerField): # Support second -> microsecond resolution. 
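    # Note: `resolution` must be a power of ten between 1 (seconds) and
    # 10**6 (microseconds). For example, TimestampField(resolution=1000)
    # stores milliseconds: the timestamp is multiplied by 1000 in
    # db_value() before being rounded to an integer.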
valid_resolutions = [10**i for i in range(7)] zero_value = None def __init__(self, *args, **kwargs): self.resolution = kwargs.pop('resolution', 1) or 1 if self.resolution not in self.valid_resolutions: raise ValueError('TimestampField resolution must be one of: %s' % ', '.join(str(i) for i in self.valid_resolutions)) self.utc = kwargs.pop('utc', False) or False _dt = datetime.datetime self._conv = _dt.utcfromtimestamp if self.utc else _dt.fromtimestamp _default = _dt.utcnow if self.utc else _dt.now kwargs.setdefault('default', _default) self.zero_value = kwargs.pop('zero_value', None) super(TimestampField, self).__init__(*args, **kwargs) def get_db_field(self): # For second resolution we can get away (for a while) with using # 4 bytes to store the timestamp (as long as they're not > ~2038). # Otherwise we'll need to use a BigInteger type. return (self.db_field if self.resolution == 1 else BigIntegerField.db_field) def db_value(self, value): if value is None: return if isinstance(value, datetime.datetime): pass elif isinstance(value, datetime.date): value = datetime.datetime(value.year, value.month, value.day) else: return int(round(value * self.resolution)) if self.utc: timestamp = calendar.timegm(value.utctimetuple()) else: timestamp = time.mktime(value.timetuple()) timestamp += (value.microsecond * .000001) if self.resolution > 1: timestamp *= self.resolution return int(round(timestamp)) def python_value(self, value): if value is not None and isinstance(value, (int, float, long)): if value == 0: return self.zero_value elif self.resolution > 1: ticks_to_microsecond = 1000000 // self.resolution value, ticks = divmod(value, self.resolution) microseconds = ticks * ticks_to_microsecond return self._conv(value).replace(microsecond=microseconds) else: return self._conv(value) return value class BooleanField(Field): db_field = 'bool' coerce = bool class RelationDescriptor(FieldDescriptor): """Foreign-key abstraction to replace a related PK with a related model.""" def __init__(self, field, rel_model): self.rel_model = rel_model super(RelationDescriptor, self).__init__(field) def get_object_or_id(self, instance): rel_id = instance._data.get(self.att_name) if rel_id is not None or self.att_name in instance._obj_cache: if self.att_name not in instance._obj_cache: obj = self.rel_model.get(self.field.to_field == rel_id) instance._obj_cache[self.att_name] = obj return instance._obj_cache[self.att_name] elif not self.field.null: raise self.rel_model.DoesNotExist return rel_id def __get__(self, instance, instance_type=None): if instance is not None: return self.get_object_or_id(instance) return self.field def __set__(self, instance, value): if isinstance(value, self.rel_model): instance._data[self.att_name] = getattr( value, self.field.to_field.name) instance._obj_cache[self.att_name] = value else: orig_value = instance._data.get(self.att_name) instance._data[self.att_name] = value if orig_value != value and self.att_name in instance._obj_cache: del instance._obj_cache[self.att_name] instance._dirty.add(self.att_name) class ReverseRelationDescriptor(object): """Back-reference to expose related objects as a `SelectQuery`.""" def __init__(self, field): self.field = field self.rel_model = field.model_class def __get__(self, instance, instance_type=None): if instance is not None: return self.rel_model.select().where( self.field == getattr(instance, self.field.to_field.name)) return self class ObjectIdDescriptor(object): """Gives direct access to the underlying id""" def __init__(self, field): self.attr_name 
= field.name self.field = weakref.ref(field) def __get__(self, instance, instance_type=None): if instance is not None: return instance._data.get(self.attr_name) return self.field() def __set__(self, instance, value): setattr(instance, self.attr_name, value) class ForeignKeyField(IntegerField): def __init__(self, rel_model, related_name=None, on_delete=None, on_update=None, extra=None, to_field=None, object_id_name=None, *args, **kwargs): if rel_model != 'self' and not \ isinstance(rel_model, (Proxy, DeferredRelation)) and not \ issubclass(rel_model, Model): raise TypeError('Unexpected value for `rel_model`. Expected ' '`Model`, `Proxy`, `DeferredRelation`, or "self"') self.rel_model = rel_model self._related_name = related_name self.deferred = isinstance(rel_model, (Proxy, DeferredRelation)) self.on_delete = on_delete self.on_update = on_update self.extra = extra self.to_field = to_field self.object_id_name = object_id_name super(ForeignKeyField, self).__init__(*args, **kwargs) def clone_base(self, **kwargs): return super(ForeignKeyField, self).clone_base( rel_model=self.rel_model, related_name=self._get_related_name(), on_delete=self.on_delete, on_update=self.on_update, extra=self.extra, to_field=self.to_field, object_id_name=self.object_id_name, **kwargs) def _get_descriptor(self): return RelationDescriptor(self, self.rel_model) def _get_id_descriptor(self): return ObjectIdDescriptor(self) def _get_backref_descriptor(self): return ReverseRelationDescriptor(self) def _get_related_name(self): if self._related_name and callable(self._related_name): return self._related_name(self) return self._related_name or ('%s_set' % self.model_class._meta.name) def add_to_class(self, model_class, name): if isinstance(self.rel_model, Proxy): def callback(rel_model): self.rel_model = rel_model self.add_to_class(model_class, name) self.rel_model.attach_callback(callback) return elif isinstance(self.rel_model, DeferredRelation): self.rel_model.set_field(model_class, self, name) return self.name = name self.model_class = model_class self.db_column = self.db_column or '%s_id' % self.name obj_id_name = self.object_id_name if not obj_id_name: obj_id_name = self.db_column if obj_id_name == self.name: obj_id_name += '_id' elif obj_id_name == self.name: raise ValueError('Cannot set a foreign key object_id_name to ' 'the same name as the field itself.') if not self.verbose_name: self.verbose_name = re.sub('_+', ' ', name).title() model_class._meta.add_field(self) self.related_name = self._get_related_name() if self.rel_model == 'self': self.rel_model = self.model_class if self.to_field is not None: if not isinstance(self.to_field, Field): self.to_field = getattr(self.rel_model, self.to_field) else: self.to_field = self.rel_model._meta.primary_key # TODO: factor into separate method. 
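        # When Meta.validate_backrefs is enabled, the checks below ensure
        # that neither the related_name nor the object-id descriptor would
        # shadow an existing field or attribute on the models involved.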
if model_class._meta.validate_backrefs: def invalid(msg, **context): context.update( field='%s.%s' % (model_class._meta.name, name), backref=self.related_name, obj_id_name=obj_id_name) raise AttributeError(msg % context) if self.related_name in self.rel_model._meta.fields: invalid('The related_name of %(field)s ("%(backref)s") ' 'conflicts with a field of the same name.') elif self.related_name in self.rel_model._meta.reverse_rel: invalid('The related_name of %(field)s ("%(backref)s") ' 'is already in use by another foreign key.') if obj_id_name in model_class._meta.fields: invalid('The object id descriptor of %(field)s conflicts ' 'with a field named %(obj_id_name)s') elif obj_id_name in model_class.__dict__: invalid('Model attribute "%(obj_id_name)s" would be shadowed ' 'by the object id descriptor of %(field)s.') setattr(model_class, name, self._get_descriptor()) setattr(model_class, obj_id_name, self._get_id_descriptor()) setattr(self.rel_model, self.related_name, self._get_backref_descriptor()) self._is_bound = True model_class._meta.rel[self.name] = self self.rel_model._meta.reverse_rel[self.related_name] = self def get_db_field(self): """ Overridden to ensure Foreign Keys use same column type as the primary key they point to. """ if not isinstance(self.to_field, PrimaryKeyField): return self.to_field.get_db_field() return super(ForeignKeyField, self).get_db_field() def get_modifiers(self): if not isinstance(self.to_field, PrimaryKeyField): return self.to_field.get_modifiers() return super(ForeignKeyField, self).get_modifiers() def coerce(self, value): return self.to_field.coerce(value) def db_value(self, value): if isinstance(value, self.rel_model): value = value._get_pk_value() return self.to_field.db_value(value) def python_value(self, value): if isinstance(value, self.rel_model): return value return self.to_field.python_value(value) class CompositeKey(object): """A primary key composed of multiple columns.""" _node_type = 'composite_key' sequence = None def __init__(self, *field_names): self.field_names = field_names def add_to_class(self, model_class, name): self.name = name self.model_class = model_class setattr(model_class, name, self) def __get__(self, instance, instance_type=None): if instance is not None: return tuple([getattr(instance, field_name) for field_name in self.field_names]) return self def __set__(self, instance, value): pass def __eq__(self, other): expressions = [(self.model_class._meta.fields[field] == value) for field, value in zip(self.field_names, other)] return reduce(operator.and_, expressions) def __ne__(self, other): return ~(self == other) def __hash__(self): return hash((self.model_class.__name__, self.field_names)) class AliasMap(object): prefix = 't' def __init__(self, start=0): self._alias_map = {} self._counter = start def __repr__(self): return '<AliasMap: %s>' % self._alias_map def add(self, obj, alias=None): if obj in self._alias_map: return self._counter += 1 self._alias_map[obj] = alias or '%s%s' % (self.prefix, self._counter) def __getitem__(self, obj): if obj not in self._alias_map: self.add(obj) return self._alias_map[obj] def __contains__(self, obj): return obj in self._alias_map def update(self, alias_map): if alias_map: for obj, alias in alias_map._alias_map.items(): if obj not in self: self._alias_map[obj] = alias return self class QueryCompiler(object): # Mapping of `db_type` to actual column type used by database driver. # Database classes may provide additional column types or overrides.
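    # For instance, a Postgres-specific compiler can replace 'primary_key'
    # with 'SERIAL' by passing `field_overrides` to the constructor below.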
field_map = { 'bare': '', 'bigint': 'BIGINT', 'blob': 'BLOB', 'bool': 'SMALLINT', 'date': 'DATE', 'datetime': 'DATETIME', 'decimal': 'DECIMAL', 'double': 'REAL', 'fixed_char': 'CHAR', 'float': 'REAL', 'int': 'INTEGER', 'primary_key': 'INTEGER', 'smallint': 'SMALLINT', 'string': 'VARCHAR', 'text': 'TEXT', 'time': 'TIME', } # Mapping of OP. to actual SQL operation. For most databases this will be # the same, but some column types or databases may support additional ops. # Like `field_map`, Database classes may extend or override these. op_map = { OP.EQ: '=', OP.LT: '<', OP.LTE: '<=', OP.GT: '>', OP.GTE: '>=', OP.NE: '!=', OP.IN: 'IN', OP.NOT_IN: 'NOT IN', OP.IS: 'IS', OP.IS_NOT: 'IS NOT', OP.BIN_AND: '&', OP.BIN_OR: '|', OP.LIKE: 'LIKE', OP.ILIKE: 'ILIKE', OP.BETWEEN: 'BETWEEN', OP.ADD: '+', OP.SUB: '-', OP.MUL: '*', OP.DIV: '/', OP.XOR: '#', OP.AND: 'AND', OP.OR: 'OR', OP.MOD: '%', OP.REGEXP: 'REGEXP', OP.CONCAT: '||', } join_map = { JOIN.INNER: 'INNER JOIN', JOIN.LEFT_OUTER: 'LEFT OUTER JOIN', JOIN.RIGHT_OUTER: 'RIGHT OUTER JOIN', JOIN.FULL: 'FULL JOIN', JOIN.CROSS: 'CROSS JOIN', } alias_map_class = AliasMap def __init__(self, quote_char='"', interpolation='?', field_overrides=None, op_overrides=None): self.quote_char = quote_char self.interpolation = interpolation self._field_map = merge_dict(self.field_map, field_overrides or {}) self._op_map = merge_dict(self.op_map, op_overrides or {}) self._parse_map = self.get_parse_map() self._unknown_types = set(['param']) def get_parse_map(self): # To avoid O(n) lookups when parsing nodes, use a lookup table for # common node types O(1). return { 'expression': self._parse_expression, 'param': self._parse_param, 'passthrough': self._parse_passthrough, 'func': self._parse_func, 'clause': self._parse_clause, 'entity': self._parse_entity, 'field': self._parse_field, 'sql': self._parse_sql, 'select_query': self._parse_select_query, 'compound_select_query': self._parse_compound_select_query, 'strip_parens': self._parse_strip_parens, 'composite_key': self._parse_composite_key, } def quote(self, s): return '%s%s%s' % (self.quote_char, s, self.quote_char) def get_column_type(self, f): return self._field_map[f] if f in self._field_map else f.upper() def get_op(self, q): return self._op_map[q] def _sorted_fields(self, field_dict): return sorted(field_dict.items(), key=lambda i: i[0]._sort_key) def _parse_default(self, node, alias_map, conv): return self.interpolation, [node] def _parse_expression(self, node, alias_map, conv): if isinstance(node.lhs, Field): conv = node.lhs lhs, lparams = self.parse_node(node.lhs, alias_map, conv) rhs, rparams = self.parse_node(node.rhs, alias_map, conv) if node.op == OP.IN and rhs == '()' and not rparams: return ('0 = 1' if node.flat else '(0 = 1)'), [] template = '%s %s %s' if node.flat else '(%s %s %s)' sql = template % (lhs, self.get_op(node.op), rhs) return sql, lparams + rparams def _parse_passthrough(self, node, alias_map, conv): if node.adapt: return self.parse_node(node.adapt(node.value), alias_map, None) return self.interpolation, [node.value] def _parse_param(self, node, alias_map, conv): if node.adapt: if conv and conv.db_value is node.adapt: conv = None return self.parse_node(node.adapt(node.value), alias_map, conv) elif conv is not None: return self.parse_node(conv.db_value(node.value), alias_map) else: return self.interpolation, [node.value] def _parse_func(self, node, alias_map, conv): conv = node._coerce and conv or None sql, params = self.parse_node_list(node.arguments, alias_map, conv) return '%s(%s)' % 
(node.name, strip_parens(sql)), params def _parse_clause(self, node, alias_map, conv): sql, params = self.parse_node_list( node.nodes, alias_map, conv, node.glue) if node.parens: sql = '(%s)' % strip_parens(sql) return sql, params def _parse_entity(self, node, alias_map, conv): return '.'.join(map(self.quote, node.path)), [] def _parse_sql(self, node, alias_map, conv): return node.value, list(node.params) def _parse_field(self, node, alias_map, conv): if alias_map: sql = '.'.join(( self.quote(alias_map[node.model_class]), self.quote(node.db_column))) else: sql = self.quote(node.db_column) return sql, [] def _parse_composite_key(self, node, alias_map, conv): fields = [] for field_name in node.field_names: fields.append(node.model_class._meta.fields[field_name]) return self._parse_clause(CommaClause(*fields), alias_map, conv) def _parse_compound_select_query(self, node, alias_map, conv): csq = 'compound_select_query' lhs, rhs = node.lhs, node.rhs inv = rhs._node_type == csq and lhs._node_type != csq if inv: lhs, rhs = rhs, lhs new_map = self.alias_map_class() if lhs._node_type == csq: new_map._counter = alias_map._counter sql1, p1 = self.generate_select(lhs, new_map) sql2, p2 = self.generate_select(rhs, self.calculate_alias_map(rhs, new_map)) # We add outer parentheses in the event the compound query is used in # the `from_()` clause, in which case we'll need them. if node.database.compound_select_parentheses: if lhs._node_type != csq: sql1 = '(%s)' % sql1 if rhs._node_type != csq: sql2 = '(%s)' % sql2 if inv: sql1, p1, sql2, p2 = sql2, p2, sql1, p1 return '(%s %s %s)' % (sql1, node.operator, sql2), (p1 + p2) def _parse_select_query(self, node, alias_map, conv): clone = node.clone() if not node._explicit_selection: if conv and isinstance(conv, ForeignKeyField): clone._select = (conv.to_field,) else: clone._select = clone.model_class._meta.get_primary_key_fields() sub, params = self.generate_select(clone, alias_map) return '(%s)' % strip_parens(sub), params def _parse_strip_parens(self, node, alias_map, conv): sql, params = self.parse_node(node.node, alias_map, conv) return strip_parens(sql), params def _parse(self, node, alias_map, conv): # By default treat the incoming node as a raw value that should be # parameterized. node_type = getattr(node, '_node_type', None) unknown = False if node_type in self._parse_map: sql, params = self._parse_map[node_type](node, alias_map, conv) unknown = (node_type in self._unknown_types and node.adapt is None and conv is None) elif isinstance(node, (list, tuple, set)): # If you're wondering how to pass a list into your query, simply # wrap it in Param(). 
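            # (A bare list/tuple/set is expanded into a parenthesized,
            # comma-separated SQL list, whereas Param([...]) binds the
            # whole list as a single query parameter.)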
sql, params = self.parse_node_list(node, alias_map, conv) sql = '(%s)' % sql elif isinstance(node, Model): sql = self.interpolation if conv and isinstance(conv, ForeignKeyField): to_field = conv.to_field if isinstance(to_field, ForeignKeyField): value = conv.db_value(node) else: value = to_field.db_value(getattr(node, to_field.name)) else: value = node._get_pk_value() params = [value] elif (isclass(node) and issubclass(node, Model)) or \ isinstance(node, ModelAlias): entity = node.as_entity().alias(alias_map[node]) sql, params = self.parse_node(entity, alias_map, conv) elif conv is not None: value = conv.db_value(node) sql, params, _ = self._parse(value, alias_map, None) else: sql, params = self._parse_default(node, alias_map, None) unknown = True return sql, params, unknown def parse_node(self, node, alias_map=None, conv=None): sql, params, unknown = self._parse(node, alias_map, conv) if unknown and (conv is not None) and params: params = [conv.db_value(i) for i in params] if isinstance(node, Node): if node._negated: sql = 'NOT %s' % sql if node._alias: sql = ' '.join((sql, 'AS', node._alias)) if node._ordering: sql = ' '.join((sql, node._ordering)) if params and any(isinstance(p, Node) for p in params): clean_params = [] clean_sql = [] for idx, param in enumerate(params): if isinstance(param, Node): csql, cparams = self.parse_node(param) return sql, params def parse_node_list(self, nodes, alias_map, conv=None, glue=', '): sql = [] params = [] for node in nodes: node_sql, node_params = self.parse_node(node, alias_map, conv) sql.append(node_sql) params.extend(node_params) return glue.join(sql), params def calculate_alias_map(self, query, alias_map=None): new_map = self.alias_map_class() if alias_map is not None: new_map._counter = alias_map._counter new_map.add(query.model_class, query.model_class._meta.table_alias) for src_model, joined_models in query._joins.items(): new_map.add(src_model, src_model._meta.table_alias) for join_obj in joined_models: if isinstance(join_obj.dest, Node): new_map.add(join_obj.dest, join_obj.dest.alias) else: new_map.add(join_obj.dest, join_obj.dest._meta.table_alias) return new_map.update(alias_map) def build_query(self, clauses, alias_map=None): return self.parse_node(Clause(*clauses), alias_map) def generate_joins(self, joins, model_class, alias_map): # Joins are implemented as an adjacency-list graph. Perform a # depth-first search of the graph to generate all the necessary JOINs. clauses = [] seen = set() q = [model_class] while q: curr = q.pop() if curr not in joins or curr in seen: continue seen.add(curr) for join in joins[curr]: src = curr dest = join.dest join_type = join.get_join_type() if isinstance(join.on, (Expression, Func, Clause, Entity)): # Clear any alias on the join expression. constraint = join.on.clone().alias() elif join_type != JOIN.CROSS: metadata = join.metadata if metadata.is_backref: fk_model = join.dest pk_model = join.src else: fk_model = join.src pk_model = join.dest fk = metadata.foreign_key if fk: lhs = getattr(fk_model, fk.name) rhs = getattr(pk_model, fk.to_field.name) if metadata.is_backref: lhs, rhs = rhs, lhs constraint = (lhs == rhs) else: raise ValueError('Missing required join predicate.') if isinstance(dest, Node): # TODO: ensure alias?
dest_n = dest else: q.append(dest) dest_n = dest.as_entity().alias(alias_map[dest]) join_sql = SQL(self.join_map.get(join_type) or join_type) if join_type == JOIN.CROSS: clauses.append(Clause(join_sql, dest_n)) else: clauses.append(Clause(join_sql, dest_n, SQL('ON'), constraint)) return clauses def generate_select(self, query, alias_map=None): model = query.model_class db = model._meta.database alias_map = self.calculate_alias_map(query, alias_map) if isinstance(query, CompoundSelect): clauses = [_StripParens(query)] else: if not query._distinct: clauses = [SQL('SELECT')] else: clauses = [SQL('SELECT DISTINCT')] if query._distinct not in (True, False): clauses += [SQL('ON'), EnclosedClause(*query._distinct)] select_clause = Clause(*query._select) select_clause.glue = ', ' clauses.extend((select_clause, SQL('FROM'))) if query._from is None: clauses.append(model.as_entity().alias(alias_map[model])) else: clauses.append(CommaClause(*query._from)) join_clauses = self.generate_joins(query._joins, model, alias_map) if join_clauses: clauses.extend(join_clauses) if query._where is not None: clauses.extend([SQL('WHERE'), query._where]) if query._group_by: clauses.extend([SQL('GROUP BY'), CommaClause(*query._group_by)]) if query._having: clauses.extend([SQL('HAVING'), query._having]) if query._windows is not None: clauses.append(SQL('WINDOW')) clauses.append(CommaClause(*[ Clause( SQL(window._alias), SQL('AS'), window.__sql__()) for window in query._windows])) if query._order_by: clauses.extend([SQL('ORDER BY'), CommaClause(*query._order_by)]) if query._limit is not None or (query._offset and db.limit_max): limit = query._limit if query._limit is not None else db.limit_max clauses.append(SQL('LIMIT %d' % limit)) if query._offset is not None: clauses.append(SQL('OFFSET %d' % query._offset)) if query._for_update: clauses.append(SQL(query._for_update)) return self.build_query(clauses, alias_map) def generate_update(self, query): model = query.model_class alias_map = self.alias_map_class() alias_map.add(model, model._meta.db_table) if query._on_conflict: statement = 'UPDATE OR %s' % query._on_conflict else: statement = 'UPDATE' clauses = [SQL(statement), model.as_entity(), SQL('SET')] update = [] for field, value in self._sorted_fields(query._update): if not isinstance(value, (Node, Model)): value = Param(value, adapt=field.db_value) update.append(Expression( field.as_entity(with_table=False), OP.EQ, value, flat=True)) # No outer parens, no table alias. clauses.append(CommaClause(*update)) if query._where: clauses.extend([SQL('WHERE'), query._where]) if query._returning is not None: returning_clause = Clause(*query._returning) returning_clause.glue = ', ' clauses.extend([SQL('RETURNING'), returning_clause]) return self.build_query(clauses, alias_map) def _get_field_clause(self, fields, clause_type=EnclosedClause): return clause_type(*[ field.as_entity(with_table=False) for field in fields]) def generate_insert(self, query): model = query.model_class meta = model._meta alias_map = self.alias_map_class() alias_map.add(model, model._meta.db_table) if query._upsert: statement = meta.database.upsert_sql elif query._on_conflict: statement = 'INSERT OR %s INTO' % query._on_conflict else: statement = 'INSERT INTO' clauses = [SQL(statement), model.as_entity()] if query._query is not None: # This INSERT query is of the form INSERT INTO ... SELECT FROM. 
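            # e.g. INSERT INTO "dest" ("a", "b") SELECT ... In this form
            # the optional column list is followed by the sub-select, whose
            # outer parentheses are stripped.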
if query._fields: clauses.append(self._get_field_clause(query._fields)) clauses.append(_StripParens(query._query)) elif query._rows is not None: fields, value_clauses = [], [] have_fields = False for row_dict in query._iter_rows(): if not have_fields: fields = sorted( row_dict.keys(), key=operator.attrgetter('_sort_key')) have_fields = True values = [] for field in fields: value = row_dict[field] if not isinstance(value, (Node, Model)): value = Param(value, adapt=field.db_value) values.append(value) value_clauses.append(EnclosedClause(*values)) if fields: clauses.extend([ self._get_field_clause(fields), SQL('VALUES'), CommaClause(*value_clauses)]) elif query.model_class._meta.auto_increment: # Bare insert, use default value for primary key. clauses.append(query.database.default_insert_clause( query.model_class)) if query.is_insert_returning: clauses.extend([ SQL('RETURNING'), self._get_field_clause( meta.get_primary_key_fields(), clause_type=CommaClause)]) elif query._returning is not None: returning_clause = Clause(*query._returning) returning_clause.glue = ', ' clauses.extend([SQL('RETURNING'), returning_clause]) return self.build_query(clauses, alias_map) def generate_delete(self, query): model = query.model_class clauses = [SQL('DELETE FROM'), model.as_entity()] if query._where: clauses.extend([SQL('WHERE'), query._where]) if query._returning is not None: returning_clause = Clause(*query._returning) returning_clause.glue = ', ' clauses.extend([SQL('RETURNING'), returning_clause]) return self.build_query(clauses) def field_definition(self, field): column_type = self.get_column_type(field.get_db_field()) ddl = field.__ddl__(column_type) return Clause(*ddl) def foreign_key_constraint(self, field): ddl = [ SQL('FOREIGN KEY'), EnclosedClause(field.as_entity()), SQL('REFERENCES'), field.rel_model.as_entity(), EnclosedClause(field.to_field.as_entity())] if field.on_delete: ddl.append(SQL('ON DELETE %s' % field.on_delete)) if field.on_update: ddl.append(SQL('ON UPDATE %s' % field.on_update)) return Clause(*ddl) def return_parsed_node(function_name): # TODO: treat all `generate_` functions as returning clauses, instead # of SQL/params. 
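    # This class-body helper returns a method which invokes the named
    # clause-building function and immediately compiles the resulting
    # Clause into a (sql, params) 2-tuple via parse_node().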
def inner(self, *args, **kwargs): fn = getattr(self, function_name) return self.parse_node(fn(*args, **kwargs)) return inner def _create_foreign_key(self, model_class, field, constraint=None): constraint = constraint or 'fk_%s_%s_refs_%s' % ( model_class._meta.db_table, field.db_column, field.rel_model._meta.db_table) fk_clause = self.foreign_key_constraint(field) return Clause( SQL('ALTER TABLE'), model_class.as_entity(), SQL('ADD CONSTRAINT'), Entity(constraint), *fk_clause.nodes) create_foreign_key = return_parsed_node('_create_foreign_key') def _create_table(self, model_class, safe=False): statement = 'CREATE TABLE IF NOT EXISTS' if safe else 'CREATE TABLE' meta = model_class._meta columns, constraints = [], [] if meta.composite_key: pk_cols = [meta.fields[f].as_entity() for f in meta.primary_key.field_names] constraints.append(Clause( SQL('PRIMARY KEY'), EnclosedClause(*pk_cols))) for field in meta.declared_fields: columns.append(self.field_definition(field)) if isinstance(field, ForeignKeyField) and not field.deferred: constraints.append(self.foreign_key_constraint(field)) if model_class._meta.constraints: for constraint in model_class._meta.constraints: if not isinstance(constraint, Node): constraint = SQL(constraint) constraints.append(constraint) return Clause( SQL(statement), model_class.as_entity(), EnclosedClause(*(columns + constraints))) create_table = return_parsed_node('_create_table') def _drop_table(self, model_class, fail_silently=False, cascade=False): statement = 'DROP TABLE IF EXISTS' if fail_silently else 'DROP TABLE' ddl = [SQL(statement), model_class.as_entity()] if cascade: ddl.append(SQL('CASCADE')) return Clause(*ddl) drop_table = return_parsed_node('_drop_table') def _truncate_table(self, model_class, restart_identity=False, cascade=False): ddl = [SQL('TRUNCATE TABLE'), model_class.as_entity()] if restart_identity: ddl.append(SQL('RESTART IDENTITY')) if cascade: ddl.append(SQL('CASCADE')) return Clause(*ddl) truncate_table = return_parsed_node('_truncate_table') def index_name(self, table, columns): index = '%s_%s' % (table, '_'.join(columns)) if len(index) > 64: index_hash = hashlib.md5(index.encode('utf-8')).hexdigest() index = '%s_%s' % (table[:55], index_hash[:8]) # 55 + 1 + 8 = 64 return index def _create_index(self, model_class, fields, unique, *extra): tbl_name = model_class._meta.db_table statement = 'CREATE UNIQUE INDEX' if unique else 'CREATE INDEX' index_name = self.index_name(tbl_name, [f.db_column for f in fields]) return Clause( SQL(statement), Entity(index_name), SQL('ON'), model_class.as_entity(), EnclosedClause(*[field.as_entity() for field in fields]), *extra) create_index = return_parsed_node('_create_index') def _drop_index(self, model_class, fields, fail_silently=False): tbl_name = model_class._meta.db_table statement = 'DROP INDEX IF EXISTS' if fail_silently else 'DROP INDEX' index_name = self.index_name(tbl_name, [f.db_column for f in fields]) return Clause(SQL(statement), Entity(index_name)) drop_index = return_parsed_node('_drop_index') def _create_sequence(self, sequence_name): return Clause(SQL('CREATE SEQUENCE'), Entity(sequence_name)) create_sequence = return_parsed_node('_create_sequence') def _drop_sequence(self, sequence_name): return Clause(SQL('DROP SEQUENCE'), Entity(sequence_name)) drop_sequence = return_parsed_node('_drop_sequence') class SqliteQueryCompiler(QueryCompiler): def truncate_table(self, model_class, restart_identity=False, cascade=False): return model_class.delete().sql() class ResultIterator(object): def 
__init__(self, qrw): self.qrw = qrw self._idx = 0 def next(self): if self._idx < self.qrw._ct: obj = self.qrw._result_cache[self._idx] elif not self.qrw._populated: obj = self.qrw.iterate() self.qrw._result_cache.append(obj) self.qrw._ct += 1 else: raise StopIteration self._idx += 1 return obj __next__ = next class QueryResultWrapper(object): """ Provides an iterator over the results of a raw Query, additionally doing two things: - converts rows from the database into python representations - ensures that multiple iterations do not result in multiple queries """ def __init__(self, model, cursor, meta=None): self.model = model self.cursor = cursor self._ct = 0 self._idx = 0 self._result_cache = [] self._populated = False self._initialized = False if meta is not None: self.column_meta, self.join_meta = meta else: self.column_meta = self.join_meta = None def __iter__(self): if self._populated: return iter(self._result_cache) else: return ResultIterator(self) @property def count(self): self.fill_cache() return self._ct def __len__(self): return self.count def process_row(self, row): return row def iterate(self): row = self.cursor.fetchone() if not row: self._populated = True if not getattr(self.cursor, 'name', None): self.cursor.close() raise StopIteration elif not self._initialized: self.initialize(self.cursor.description) self._initialized = True return self.process_row(row) def iterator(self): while True: yield self.iterate() def next(self): if self._idx < self._ct: inst = self._result_cache[self._idx] self._idx += 1 return inst elif self._populated: raise StopIteration obj = self.iterate() self._result_cache.append(obj) self._ct += 1 self._idx += 1 return obj __next__ = next def fill_cache(self, n=None): n = n or float('Inf') if n < 0: raise ValueError('Negative values are not supported.') self._idx = self._ct while not self._populated and (n > self._ct): try: next(self) except StopIteration: break class ExtQueryResultWrapper(QueryResultWrapper): def initialize(self, description): n_cols = len(description) self.conv = conv = [] if self.column_meta is not None: n_meta = len(self.column_meta) for i, node in enumerate(self.column_meta): if not self._initialize_node(node, i): self._initialize_by_name(description[i][0], i) if n_cols == n_meta: return else: i = 0 for i in range(i, n_cols): self._initialize_by_name(description[i][0], i) def _initialize_by_name(self, name, i): model_cols = self.model._meta.columns if name in model_cols: field = model_cols[name] self.conv.append((i, field.name, field.python_value)) else: self.conv.append((i, name, None)) def _initialize_node(self, node, i): if isinstance(node, Field): self.conv.append((i, node._alias or node.name, node.python_value)) return True elif isinstance(node, Func) and len(node.arguments): arg = node.arguments[0] if isinstance(arg, Field): name = node._alias or arg._alias or arg.name func = node._coerce and arg.python_value or None self.conv.append((i, name, func)) return True return False class TuplesQueryResultWrapper(ExtQueryResultWrapper): def process_row(self, row): return tuple([col if self.conv[i][2] is None else self.conv[i][2](col) for i, col in enumerate(row)]) if _TuplesQueryResultWrapper is None: _TuplesQueryResultWrapper = TuplesQueryResultWrapper class NaiveQueryResultWrapper(ExtQueryResultWrapper): def process_row(self, row): instance = self.model() for i, column, f in self.conv: setattr(instance, column, f(row[i]) if f is not None else row[i]) instance._prepare_instance() return instance if _ModelQueryResultWrapper is None: 
_ModelQueryResultWrapper = NaiveQueryResultWrapper class DictQueryResultWrapper(ExtQueryResultWrapper): def process_row(self, row): res = {} for i, column, f in self.conv: res[column] = f(row[i]) if f is not None else row[i] return res if _DictQueryResultWrapper is None: _DictQueryResultWrapper = DictQueryResultWrapper class NamedTupleQueryResultWrapper(ExtQueryResultWrapper): def initialize(self, description): super(NamedTupleQueryResultWrapper, self).initialize(description) columns = [column for _, column, _ in self.conv] self.constructor = namedtuple('Row', columns) def process_row(self, row): return self.constructor(*[f(row[i]) if f is not None else row[i] for i, _, f in self.conv]) class ModelQueryResultWrapper(QueryResultWrapper): def initialize(self, description): self.column_map, model_set = self.generate_column_map() self._col_set = set(col for col in self.column_meta if isinstance(col, Field)) self.join_list = self.generate_join_list(model_set) def generate_column_map(self): column_map = [] models = set([self.model]) for i, node in enumerate(self.column_meta): attr = conv = None if isinstance(node, Field): if isinstance(node, FieldProxy): key = node._model_alias constructor = node.model conv = node.field_instance.python_value else: key = constructor = node.model_class conv = node.python_value attr = node._alias or node.name else: if node._bind_to is None: key = constructor = self.model else: key = constructor = node._bind_to if isinstance(node, Node) and node._alias: attr = node._alias elif isinstance(node, Entity): attr = node.path[-1] column_map.append((key, constructor, attr, conv)) models.add(key) return column_map, models def generate_join_list(self, models): join_list = [] joins = self.join_meta stack = [self.model] while stack: current = stack.pop() if current not in joins: continue for join in joins[current]: metadata = join.metadata if metadata.dest in models or metadata.dest_model in models: if metadata.foreign_key is not None: fk_present = metadata.foreign_key in self._col_set pk_present = metadata.primary_key in self._col_set check = metadata.foreign_key.null and (fk_present or pk_present) else: check = fk_present = pk_present = False join_list.append(( metadata, check, fk_present, pk_present)) stack.append(join.dest) return join_list def process_row(self, row): collected = self.construct_instances(row) instances = self.follow_joins(collected) for i in instances: i._prepare_instance() return instances[0] def construct_instances(self, row, keys=None): collected_models = {} for i, (key, constructor, attr, conv) in enumerate(self.column_map): if keys is not None and key not in keys: continue value = row[i] if key not in collected_models: collected_models[key] = constructor() instance = collected_models[key] if attr is None: attr = self.cursor.description[i][0] setattr(instance, attr, value if conv is None else conv(value)) return collected_models def follow_joins(self, collected): prepared = [collected[self.model]] for (metadata, check_null, fk_present, pk_present) in self.join_list: inst = collected[metadata.src] try: joined_inst = collected[metadata.dest] except KeyError: joined_inst = collected[metadata.dest_model] has_fk = True if check_null: if fk_present: has_fk = inst._data.get(metadata.foreign_key.name) elif pk_present: has_fk = joined_inst._data.get(metadata.primary_key.name) if not has_fk: continue # Can we populate a value on the joined instance using the current? 
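                # (Illustrative: when selecting Tweet joined to User, the
                # tweet's stored user_id can seed the primary key of the
                # joined User instance if it was not selected explicitly.)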
mpk = metadata.primary_key is not None can_populate_joined_pk = ( mpk and (metadata.attr in inst._data) and (getattr(joined_inst, metadata.primary_key.name) is None)) if can_populate_joined_pk: setattr( joined_inst, metadata.primary_key.name, inst._data[metadata.attr]) if metadata.is_backref: can_populate_joined_fk = ( mpk and (metadata.foreign_key is not None) and (getattr(inst, metadata.primary_key.name) is not None) and (joined_inst._data.get(metadata.foreign_key.name) is None)) if can_populate_joined_fk: setattr( joined_inst, metadata.foreign_key.name, inst) setattr(inst, metadata.attr, joined_inst) prepared.append(joined_inst) return prepared JoinCache = namedtuple('JoinCache', ('metadata', 'attr')) class AggregateQueryResultWrapper(ModelQueryResultWrapper): def __init__(self, *args, **kwargs): self._row = [] super(AggregateQueryResultWrapper, self).__init__(*args, **kwargs) def initialize(self, description): super(AggregateQueryResultWrapper, self).initialize(description) # Collect the set of all models (and ModelAlias objects) queried. self.all_models = set() for key, _, _, _ in self.column_map: self.all_models.add(key) # Prepare data structures for analyzing unique rows. Also cache # foreign key and attribute names for joined models. self.models_with_aggregate = set() self.back_references = {} self.source_to_dest = {} self.dest_to_source = {} for (metadata, _, _, _) in self.join_list: if metadata.is_backref: att_name = metadata.foreign_key.related_name else: att_name = metadata.attr is_backref = metadata.is_backref or metadata.is_self_join if is_backref: self.models_with_aggregate.add(metadata.src) else: self.dest_to_source.setdefault(metadata.dest, set()) self.dest_to_source[metadata.dest].add(metadata.src) self.source_to_dest.setdefault(metadata.src, {}) self.source_to_dest[metadata.src][metadata.dest] = JoinCache( metadata=metadata, attr=metadata.alias or att_name) # Determine which columns could contain "duplicate" data, e.g. if # getting Users and their Tweets, this would be the User columns. self.columns_to_compare = {} key_to_columns = {} for idx, (key, model_class, col_name, _) in enumerate(self.column_map): if key in self.models_with_aggregate: self.columns_to_compare.setdefault(key, []) self.columns_to_compare[key].append((idx, col_name)) key_to_columns.setdefault(key, []) key_to_columns[key].append((idx, col_name)) # Also compare columns for joins -> many-related model. 
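        # When the "one" side of a one-to-many join repeats across rows,
        # the columns of the models joined to it are added to its
        # comparison set so duplicate rows can be detected reliably.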
for model_or_alias in self.models_with_aggregate: if model_or_alias not in self.columns_to_compare: continue sources = self.dest_to_source.get(model_or_alias, ()) for joined_model in sources: self.columns_to_compare[model_or_alias].extend( key_to_columns[joined_model]) def read_model_data(self, row): models = {} for model_class, column_data in self.columns_to_compare.items(): models[model_class] = [] for idx, col_name in column_data: models[model_class].append(row[idx]) return models def iterate(self): if self._row: row = self._row.pop() else: row = self.cursor.fetchone() if not row: self._populated = True if not getattr(self.cursor, 'name', None): self.cursor.close() raise StopIteration elif not self._initialized: self.initialize(self.cursor.description) self._initialized = True def _get_pk(instance): if instance._meta.composite_key: return tuple([ instance._data[field_name] for field_name in instance._meta.primary_key.field_names]) return instance._get_pk_value() identity_map = {} _constructed = self.construct_instances(row) primary_instance = _constructed[self.model] for model_or_alias, instance in _constructed.items(): identity_map[model_or_alias] = OrderedDict() identity_map[model_or_alias][_get_pk(instance)] = instance model_data = self.read_model_data(row) while True: cur_row = self.cursor.fetchone() if cur_row is None: break duplicate_models = set() cur_row_data = self.read_model_data(cur_row) for model_class, data in cur_row_data.items(): if model_data[model_class] == data: duplicate_models.add(model_class) if not duplicate_models: self._row.append(cur_row) break different_models = self.all_models - duplicate_models new_instances = self.construct_instances(cur_row, different_models) for model_or_alias, instance in new_instances.items(): # Do not include any instances which are comprised solely of # NULL values. all_none = True for value in instance._data.values(): if value is not None: all_none = False if not all_none: identity_map[model_or_alias][_get_pk(instance)] = instance stack = [self.model] instances = [primary_instance] while stack: current = stack.pop() if current not in self.join_meta: continue for join in self.join_meta[current]: try: metadata, attr = self.source_to_dest[current][join.dest] except KeyError: continue if metadata.is_backref or metadata.is_self_join: for instance in identity_map[current].values(): setattr(instance, attr, []) if join.dest not in identity_map: continue for pk, inst in identity_map[join.dest].items(): if pk is None: continue try: # XXX: if no FK exists, unable to join. joined_inst = identity_map[current][ inst._data[metadata.foreign_key.name]] except KeyError: continue getattr(joined_inst, attr).append(inst) instances.append(inst) elif attr: if join.dest not in identity_map: continue for pk, instance in identity_map[current].items(): # XXX: if no FK exists, unable to join. joined_inst = identity_map[join.dest][ instance._data[metadata.foreign_key.name]] setattr( instance, metadata.foreign_key.name, joined_inst) instances.append(joined_inst) stack.append(join.dest) for instance in instances: instance._prepare_instance() return primary_instance class Query(Node): """Base class representing a database query on one or more tables.""" require_commit = True def __init__(self, model_class): super(Query, self).__init__() self.model_class = model_class self.database = model_class._meta.database self._dirty = True self._query_ctx = model_class self._joins = {self.model_class: []} # Join graph as adjacency list. 
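        # Maps each model class to the list of Join objects departing from
        # it; generate_joins() performs a depth-first walk of this graph.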
self._where = None def __repr__(self): sql, params = self.sql() return '%s %s %s' % (self.model_class, sql, params) def clone(self): query = type(self)(self.model_class) query.database = self.database return self._clone_attributes(query) def _clone_attributes(self, query): if self._where is not None: query._where = self._where.clone() query._joins = self._clone_joins() query._query_ctx = self._query_ctx return query def _clone_joins(self): return dict( (mc, list(j)) for mc, j in self._joins.items()) def _add_query_clauses(self, initial, expressions, conjunction=None): reduced = reduce(operator.and_, expressions) if initial is None: return reduced conjunction = conjunction or operator.and_ return conjunction(initial, reduced) def _model_shorthand(self, args): accum = [] for arg in args: if isinstance(arg, Node): accum.append(arg) elif isinstance(arg, Query): accum.append(arg) elif isinstance(arg, ModelAlias): accum.extend(arg.get_proxy_fields()) elif isclass(arg) and issubclass(arg, Model): accum.extend(arg._meta.declared_fields) return accum @returns_clone def where(self, *expressions): self._where = self._add_query_clauses(self._where, expressions) @returns_clone def orwhere(self, *expressions): self._where = self._add_query_clauses( self._where, expressions, operator.or_) @returns_clone def join(self, dest, join_type=None, on=None): src = self._query_ctx if on is None: require_join_condition = join_type != JOIN.CROSS and ( isinstance(dest, SelectQuery) or (isclass(dest) and not src._meta.rel_exists(dest))) if require_join_condition: raise ValueError('A join condition must be specified.') elif join_type == JOIN.CROSS: raise ValueError('A CROSS join cannot have a constraint.') elif isinstance(on, basestring): on = src._meta.fields[on] self._joins.setdefault(src, []) self._joins[src].append(Join(src, dest, join_type, on)) if not isinstance(dest, SelectQuery): self._query_ctx = dest @returns_clone def switch(self, model_class=None): """Change or reset the query context.""" self._query_ctx = model_class or self.model_class def ensure_join(self, lm, rm, on=None, **join_kwargs): ctx = self._query_ctx for join in self._joins.get(lm, []): if join.dest == rm: return self return self.switch(lm).join(rm, on=on, **join_kwargs).switch(ctx) def convert_dict_to_node(self, qdict): accum = [] joins = [] relationship = (ForeignKeyField, ReverseRelationDescriptor) for key, value in sorted(qdict.items()): curr = self.model_class if '__' in key and key.rsplit('__', 1)[1] in DJANGO_MAP: key, op = key.rsplit('__', 1) op = DJANGO_MAP[op] elif value is None: op = OP.IS else: op = OP.EQ for piece in key.split('__'): model_attr = getattr(curr, piece) if value is not None and isinstance(model_attr, relationship): curr = model_attr.rel_model joins.append(model_attr) accum.append(Expression(model_attr, op, value)) return accum, joins def filter(self, *args, **kwargs): # normalize args and kwargs into a new expression dq_node = Node() if args: dq_node &= reduce(operator.and_, [a.clone() for a in args]) if kwargs: dq_node &= DQ(**kwargs) # dq_node should now be an Expression, lhs = Node(), rhs = ... q = deque([dq_node]) dq_joins = set() while q: curr = q.popleft() if not isinstance(curr, Expression): continue for side, piece in (('lhs', curr.lhs), ('rhs', curr.rhs)): if isinstance(piece, DQ): query, joins = self.convert_dict_to_node(piece.query) dq_joins.update(joins) expression = reduce(operator.and_, query) # Apply values from the DQ object. 
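# e.g. a negated DQ (~DQ(username='foo')) carries _negated=True, which
# is copied onto the combined expression here so the negation survives
# the reduce() above.
#
# Editor's usage sketch (hypothetical User model; the __suffix lookups
# come from DJANGO_MAP, e.g. __lt, __in, __ilike):
#
#     User.filter(DQ(username__ilike='%pee%') | DQ(id__lt=10))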
expression._negated = piece._negated expression._alias = piece._alias setattr(curr, side, expression) else: q.append(piece) dq_node = dq_node.rhs query = self.clone() for field in dq_joins: if isinstance(field, ForeignKeyField): lm, rm = field.model_class, field.rel_model field_obj = field elif isinstance(field, ReverseRelationDescriptor): lm, rm = field.field.rel_model, field.rel_model field_obj = field.field query = query.ensure_join(lm, rm, field_obj) return query.where(dq_node) def compiler(self): return self.database.compiler() def sql(self): raise NotImplementedError def _execute(self): sql, params = self.sql() return self.database.execute_sql(sql, params, self.require_commit) def execute(self): raise NotImplementedError def scalar(self, as_tuple=False, convert=False): if convert: row = self.tuples().first() else: row = self._execute().fetchone() if row and not as_tuple: return row[0] else: return row class RawQuery(Query): """ Execute a SQL query, returning a standard iterable interface that returns model instances. """ def __init__(self, model, query, *params): self._sql = query self._params = list(params) self._qr = None self._tuples = False self._dicts = False super(RawQuery, self).__init__(model) def clone(self): query = RawQuery(self.model_class, self._sql, *self._params) query._tuples = self._tuples query._dicts = self._dicts return query join = not_allowed('joining') where = not_allowed('where') switch = not_allowed('switch') @returns_clone def tuples(self, tuples=True): self._tuples = tuples @returns_clone def dicts(self, dicts=True): self._dicts = dicts def sql(self): return self._sql, self._params def execute(self): if self._qr is None: if self._tuples: QRW = self.database.get_result_wrapper(RESULTS_TUPLES) elif self._dicts: QRW = self.database.get_result_wrapper(RESULTS_DICTS) else: QRW = self.database.get_result_wrapper(RESULTS_NAIVE) self._qr = QRW(self.model_class, self._execute(), None) return self._qr def __iter__(self): return iter(self.execute()) def allow_extend(orig, new_val, **kwargs): extend = kwargs.pop('extend', False) if kwargs: raise ValueError('"extend" is the only valid keyword argument.') if extend: return ((orig or []) + new_val) or None elif new_val: return new_val class SelectQuery(Query): _node_type = 'select_query' def __init__(self, model_class, *selection): super(SelectQuery, self).__init__(model_class) self.require_commit = self.database.commit_select self.__select(*selection) self._from = None self._group_by = None self._having = None self._order_by = None self._windows = None self._limit = None self._offset = None self._distinct = False self._for_update = None self._naive = False self._tuples = False self._dicts = False self._namedtuples = False self._aggregate_rows = False self._alias = None self._qr = None def _clone_attributes(self, query): query = super(SelectQuery, self)._clone_attributes(query) query._explicit_selection = self._explicit_selection query._select = list(self._select) if self._from is not None: query._from = [] for f in self._from: if isinstance(f, Node): query._from.append(f.clone()) else: query._from.append(f) if self._group_by is not None: query._group_by = list(self._group_by) if self._having: query._having = self._having.clone() if self._order_by is not None: query._order_by = list(self._order_by) if self._windows is not None: query._windows = list(self._windows) query._limit = self._limit query._offset = self._offset query._distinct = self._distinct query._for_update = self._for_update query._naive = self._naive 
query._tuples = self._tuples query._dicts = self._dicts query._namedtuples = self._namedtuples query._aggregate_rows = self._aggregate_rows query._alias = self._alias return query def compound_op(operator): def inner(self, other): supported_ops = self.model_class._meta.database.compound_operations if operator not in supported_ops: raise ValueError( 'Your database does not support %s' % operator) return CompoundSelect(self.model_class, self, operator, other) return inner _compound_op_static = staticmethod(compound_op) __or__ = compound_op('UNION') __and__ = compound_op('INTERSECT') __sub__ = compound_op('EXCEPT') def __xor__(self, rhs): # Symmetric difference, should just be (self | rhs) - (self & rhs)... wrapped_rhs = self.model_class.select(SQL('*')).from_( EnclosedClause((self & rhs)).alias('_')).order_by() return (self | rhs) - wrapped_rhs def union_all(self, rhs): return SelectQuery._compound_op_static('UNION ALL')(self, rhs) def __select(self, *selection): self._explicit_selection = len(selection) > 0 selection = selection or self.model_class._meta.declared_fields self._select = self._model_shorthand(selection) select = returns_clone(__select) @returns_clone def from_(self, *args): self._from = list(args) if args else None @returns_clone def group_by(self, *args, **kwargs): self._group_by = self._model_shorthand(args) if args else None @returns_clone def having(self, *expressions): self._having = self._add_query_clauses(self._having, expressions) @returns_clone def order_by(self, *args, **kwargs): self._order_by = allow_extend(self._order_by, list(args), **kwargs) @returns_clone def window(self, *windows, **kwargs): self._windows = allow_extend(self._windows, list(windows), **kwargs) @returns_clone def limit(self, lim): self._limit = lim @returns_clone def offset(self, off): self._offset = off @returns_clone def paginate(self, page, paginate_by=20): if page > 0: page -= 1 self._limit = paginate_by self._offset = page * paginate_by @returns_clone def distinct(self, is_distinct=True): self._distinct = is_distinct @returns_clone def for_update(self, for_update=True, nowait=False): self._for_update = 'FOR UPDATE NOWAIT' if for_update and nowait else \ 'FOR UPDATE' if for_update else None @returns_clone def with_lock(self, lock_type='UPDATE'): self._for_update = ('FOR %s' % lock_type) if lock_type else None @returns_clone def naive(self, naive=True): self._naive = naive @returns_clone def tuples(self, tuples=True): self._tuples = tuples if tuples: self._dicts = self._namedtuples = False @returns_clone def dicts(self, dicts=True): self._dicts = dicts if dicts: self._tuples = self._namedtuples = False @returns_clone def namedtuples(self, namedtuples=True): self._namedtuples = namedtuples if namedtuples: self._dicts = self._tuples = False @returns_clone def aggregate_rows(self, aggregate_rows=True): self._aggregate_rows = aggregate_rows @returns_clone def alias(self, alias=None): self._alias = alias def annotate(self, rel_model, annotation=None): if annotation is None: annotation = fn.Count(rel_model._meta.primary_key).alias('count') if self._query_ctx == rel_model: query = self.switch(self.model_class) else: query = self.clone() query = query.ensure_join(query._query_ctx, rel_model) if not query._group_by: query._group_by = [x.alias() for x in query._select] query._select = tuple(query._select) + (annotation,) return query def _aggregate(self, aggregation=None): if aggregation is None: aggregation = fn.Count(SQL('*')) query = self.order_by() query._select = [aggregation] return query def 
aggregate(self, aggregation=None, convert=True): return self._aggregate(aggregation).scalar(convert=convert) def count(self, clear_limit=False): if self._distinct or self._group_by or self._limit or self._offset: return self.wrapped_count(clear_limit=clear_limit) # defaults to a count() of the primary key return self.aggregate(convert=False) or 0 def wrapped_count(self, clear_limit=False): clone = self.order_by() if clear_limit: clone._limit = clone._offset = None sql, params = clone.sql() wrapped = 'SELECT COUNT(1) FROM (%s) AS wrapped_select' % sql rq = self.model_class.raw(wrapped, *params) return rq.scalar() or 0 def exists(self): clone = self.paginate(1, 1) clone._select = [SQL('1')] return bool(clone.scalar()) def get(self): clone = self.paginate(1, 1) try: return next(clone.execute()) except StopIteration: raise self.model_class.DoesNotExist( 'Instance matching query does not exist:\nSQL: %s\nPARAMS: %s' % self.sql()) def peek(self, n=1): res = self.execute() res.fill_cache(n) models = res._result_cache[:n] if models: return models[0] if n == 1 else models def first(self, n=1): if self._limit != n: self._limit = n self._dirty = True return self.peek(n=n) def sql(self): return self.compiler().generate_select(self) def verify_naive(self): model_class = self.model_class for node in self._select: if isinstance(node, Field) and node.model_class != model_class: return False elif isinstance(node, Node) and node._bind_to is not None: if node._bind_to != model_class: return False return True def get_query_meta(self): return (self._select, self._joins) def _get_result_wrapper(self): if self._tuples: return self.database.get_result_wrapper(RESULTS_TUPLES) elif self._dicts: return self.database.get_result_wrapper(RESULTS_DICTS) elif self._namedtuples: return self.database.get_result_wrapper(RESULTS_NAMEDTUPLES) elif self._naive or not self._joins or self.verify_naive(): return self.database.get_result_wrapper(RESULTS_NAIVE) elif self._aggregate_rows: return self.database.get_result_wrapper(RESULTS_AGGREGATE_MODELS) else: return self.database.get_result_wrapper(RESULTS_MODELS) def execute(self): if self._dirty or self._qr is None: model_class = self.model_class query_meta = self.get_query_meta() ResultWrapper = self._get_result_wrapper() self._qr = ResultWrapper(model_class, self._execute(), query_meta) self._dirty = False return self._qr else: return self._qr def __iter__(self): return iter(self.execute()) def iterator(self): return iter(self.execute().iterator()) def __getitem__(self, value): res = self.execute() if isinstance(value, slice): index = value.stop else: index = value if index is not None: index = index + 1 if index >= 0 else None res.fill_cache(index) return res._result_cache[value] def __len__(self): return len(self.execute()) if PY3: def __hash__(self): return id(self) class NoopSelectQuery(SelectQuery): def sql(self): return (self.database.get_noop_sql(), ()) def get_query_meta(self): return None, None def _get_result_wrapper(self): return self.database.get_result_wrapper(RESULTS_TUPLES) class CompoundSelect(SelectQuery): _node_type = 'compound_select_query' def __init__(self, model_class, lhs=None, operator=None, rhs=None): self.lhs = lhs self.operator = operator self.rhs = rhs super(CompoundSelect, self).__init__(model_class, []) def _clone_attributes(self, query): query = super(CompoundSelect, self)._clone_attributes(query) query.lhs = self.lhs query.operator = self.operator query.rhs = self.rhs return query def count(self, clear_limit=False): return 
self.wrapped_count(clear_limit=clear_limit) def get_query_meta(self): return self.lhs.get_query_meta() def verify_naive(self): return self.lhs.verify_naive() and self.rhs.verify_naive() def _get_result_wrapper(self): if self._tuples: return self.database.get_result_wrapper(RESULTS_TUPLES) elif self._dicts: return self.database.get_result_wrapper(RESULTS_DICTS) elif self._namedtuples: return self.database.get_result_wrapper(RESULTS_NAMEDTUPLES) elif self._aggregate_rows: return self.database.get_result_wrapper(RESULTS_AGGREGATE_MODELS) has_joins = self.lhs._joins or self.rhs._joins is_naive = self.lhs._naive or self.rhs._naive or self._naive if is_naive or not has_joins or self.verify_naive(): return self.database.get_result_wrapper(RESULTS_NAIVE) else: return self.database.get_result_wrapper(RESULTS_MODELS) class _WriteQuery(Query): def __init__(self, model_class): self._returning = None self._tuples = False self._dicts = False self._namedtuples = False self._qr = None super(_WriteQuery, self).__init__(model_class) def _clone_attributes(self, query): query = super(_WriteQuery, self)._clone_attributes(query) if self._returning: query._returning = list(self._returning) query._tuples = self._tuples query._dicts = self._dicts query._namedtuples = self._namedtuples return query def requires_returning(method): def inner(self, *args, **kwargs): db = self.model_class._meta.database if not db.returning_clause: raise ValueError('RETURNING is not supported by your ' 'database: %s' % type(db)) return method(self, *args, **kwargs) return inner @requires_returning @returns_clone def returning(self, *selection): if len(selection) == 1 and selection[0] is None: self._returning = None else: if not selection: selection = self.model_class._meta.declared_fields self._returning = self._model_shorthand(selection) @requires_returning @returns_clone def tuples(self, tuples=True): self._tuples = tuples if tuples: self._dicts = self._namedtuples = False @requires_returning @returns_clone def dicts(self, dicts=True): self._dicts = dicts if dicts: self._tuples = self._namedtuples = False @requires_returning @returns_clone def namedtuples(self, namedtuples=True): self._namedtuples = namedtuples if namedtuples: self._dicts = self._tuples = False def get_result_wrapper(self): if self._returning is not None: if self._tuples: return self.database.get_result_wrapper(RESULTS_TUPLES) elif self._dicts: return self.database.get_result_wrapper(RESULTS_DICTS) elif self._namedtuples: return self.database.get_result_wrapper(RESULTS_NAMEDTUPLES) return self.database.get_result_wrapper(RESULTS_NAIVE) def _execute_with_result_wrapper(self): ResultWrapper = self.get_result_wrapper() meta = (self._returning, {self.model_class: []}) self._qr = ResultWrapper(self.model_class, self._execute(), meta) return self._qr class UpdateQuery(_WriteQuery): def __init__(self, model_class, update=None): self._update = update self._on_conflict = None super(UpdateQuery, self).__init__(model_class) def _clone_attributes(self, query): query = super(UpdateQuery, self)._clone_attributes(query) query._update = dict(self._update) query._on_conflict = self._on_conflict return query @returns_clone def on_conflict(self, action=None): self._on_conflict = action join = not_allowed('joining') def sql(self): return self.compiler().generate_update(self) def execute(self): if self._returning is not None and self._qr is None: return self._execute_with_result_wrapper() elif self._qr is not None: return self._qr else: return self.database.rows_affected(self._execute()) 
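# Editor's example of iterating an UPDATE (a sketch, assuming a
# hypothetical User model on a backend with returning_clause support,
# e.g. Postgres):
#
#     uq = (User
#           .update(is_active=False)
#           .where(User.last_login < cutoff)  # `cutoff` assumed defined
#           .returning(User))
#     for user in uq:
#         print(user.id, user.is_active)
#
# Without RETURNING, execute() above simply returns the number of rows
# modified.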
def __iter__(self): if not self.model_class._meta.database.returning_clause: raise ValueError('UPDATE queries cannot be iterated over unless ' 'they specify a RETURNING clause, which is not ' 'supported by your database.') return iter(self.execute()) def iterator(self): return iter(self.execute().iterator()) class InsertQuery(_WriteQuery): def __init__(self, model_class, field_dict=None, rows=None, fields=None, query=None, validate_fields=False): super(InsertQuery, self).__init__(model_class) self._upsert = False self._is_multi_row_insert = rows is not None or query is not None self._return_id_list = False if rows is not None: self._rows = rows else: self._rows = [field_dict or {}] self._fields = fields self._query = query self._validate_fields = validate_fields self._on_conflict = None def _iter_rows(self): model_meta = self.model_class._meta if self._validate_fields: valid_fields = model_meta.valid_fields def validate_field(field): if field not in valid_fields: raise KeyError('"%s" is not a recognized field.' % field) defaults = model_meta._default_dict callables = model_meta._default_callables for row_dict in self._rows: field_row = defaults.copy() seen = set() for key in row_dict: if self._validate_fields: validate_field(key) if key in model_meta.fields: field = model_meta.fields[key] else: field = key field_row[field] = row_dict[key] seen.add(field) if callables: for field in callables: if field not in seen: field_row[field] = callables[field]() yield field_row def _clone_attributes(self, query): query = super(InsertQuery, self)._clone_attributes(query) query._rows = self._rows query._upsert = self._upsert query._is_multi_row_insert = self._is_multi_row_insert query._fields = self._fields query._query = self._query query._return_id_list = self._return_id_list query._validate_fields = self._validate_fields query._on_conflict = self._on_conflict return query join = not_allowed('joining') where = not_allowed('where clause') @returns_clone def upsert(self, upsert=True): self._upsert = upsert @returns_clone def on_conflict(self, action=None): self._on_conflict = action @returns_clone def return_id_list(self, return_id_list=True): self._return_id_list = return_id_list @property def is_insert_returning(self): if self.database.insert_returning: if not self._is_multi_row_insert or self._return_id_list: return True return False def sql(self): return self.compiler().generate_insert(self) def _insert_with_loop(self): id_list = [] last_id = None return_id_list = self._return_id_list for row in self._rows: last_id = (InsertQuery(self.model_class, row) .upsert(self._upsert) .execute()) if return_id_list: id_list.append(last_id) if return_id_list: return id_list else: return last_id def execute(self): insert_with_loop = ( self._is_multi_row_insert and self._query is None and self._returning is None and not self.database.insert_many) if insert_with_loop: return self._insert_with_loop() if self._returning is not None and self._qr is None: return self._execute_with_result_wrapper() elif self._qr is not None: return self._qr else: cursor = self._execute() if not self._is_multi_row_insert: if self.database.insert_returning: pk_row = cursor.fetchone() meta = self.model_class._meta clean_data = [ field.python_value(column) for field, column in zip(meta.get_primary_key_fields(), pk_row)] if self.model_class._meta.composite_key: return clean_data return clean_data[0] return self.database.last_insert_id(cursor, self.model_class) elif self._return_id_list: return map(operator.itemgetter(0), cursor.fetchall()) 
else: return True class DeleteQuery(_WriteQuery): join = not_allowed('joining') def sql(self): return self.compiler().generate_delete(self) def execute(self): if self._returning is not None and self._qr is None: return self._execute_with_result_wrapper() elif self._qr is not None: return self._qr else: return self.database.rows_affected(self._execute()) IndexMetadata = namedtuple( 'IndexMetadata', ('name', 'sql', 'columns', 'unique', 'table')) ColumnMetadata = namedtuple( 'ColumnMetadata', ('name', 'data_type', 'null', 'primary_key', 'table')) ForeignKeyMetadata = namedtuple( 'ForeignKeyMetadata', ('column', 'dest_table', 'dest_column', 'table')) class PeeweeException(Exception): pass class ImproperlyConfigured(PeeweeException): pass class DatabaseError(PeeweeException): pass class DataError(DatabaseError): pass class IntegrityError(DatabaseError): pass class InterfaceError(PeeweeException): pass class InternalError(DatabaseError): pass class NotSupportedError(DatabaseError): pass class OperationalError(DatabaseError): pass class ProgrammingError(DatabaseError): pass class ExceptionWrapper(object): __slots__ = ['exceptions'] def __init__(self, exceptions): self.exceptions = exceptions def __enter__(self): pass def __exit__(self, exc_type, exc_value, traceback): if exc_type is None: return if exc_type.__name__ in self.exceptions: new_type = self.exceptions[exc_type.__name__] if PY26: exc_args = exc_value else: exc_args = exc_value.args reraise(new_type, new_type(*exc_args), traceback) class _BaseConnectionLocal(object): def __init__(self, **kwargs): super(_BaseConnectionLocal, self).__init__(**kwargs) self.autocommit = None self.closed = True self.conn = None self.context_stack = [] self.transactions = [] class _ConnectionLocal(_BaseConnectionLocal, threading.local): pass class Database(object): commit_select = False compiler_class = QueryCompiler compound_operations = ['UNION', 'INTERSECT', 'EXCEPT', 'UNION ALL'] compound_select_parentheses = False distinct_on = False drop_cascade = False field_overrides = {} foreign_keys = True for_update = False for_update_nowait = False insert_many = True insert_returning = False interpolation = '?' 
limit_max = None op_overrides = {} quote_char = '"' reserved_tables = [] returning_clause = False savepoints = True sequences = False subquery_delete_same_table = True upsert_sql = None window_functions = False exceptions = { 'ConstraintError': IntegrityError, 'DatabaseError': DatabaseError, 'DataError': DataError, 'IntegrityError': IntegrityError, 'InterfaceError': InterfaceError, 'InternalError': InternalError, 'NotSupportedError': NotSupportedError, 'OperationalError': OperationalError, 'ProgrammingError': ProgrammingError} def __init__(self, database, threadlocals=True, autocommit=True, fields=None, ops=None, autorollback=False, use_speedups=True, **connect_kwargs): self.connect_kwargs = {} if threadlocals: self._local = _ConnectionLocal() else: self._local = _BaseConnectionLocal() self.init(database, **connect_kwargs) self._conn_lock = threading.Lock() self.autocommit = autocommit self.autorollback = autorollback self.use_speedups = use_speedups self.field_overrides = merge_dict(self.field_overrides, fields or {}) self.op_overrides = merge_dict(self.op_overrides, ops or {}) self.exception_wrapper = ExceptionWrapper(self.exceptions) def init(self, database, **connect_kwargs): if not self.is_closed(): self.close() self.deferred = database is None self.database = database self.connect_kwargs.update(connect_kwargs) def connect(self): with self._conn_lock: if self.deferred: raise OperationalError('Database has not been initialized') if not self._local.closed: raise OperationalError('Connection already open') self._local.conn = self._create_connection() self._local.closed = False with self.exception_wrapper: self.initialize_connection(self._local.conn) def initialize_connection(self, conn): pass def close(self): with self._conn_lock: if self.deferred: raise Exception('Error, database not properly initialized ' 'before closing connection') try: with self.exception_wrapper: self._close(self._local.conn) finally: self._local.closed = True def get_conn(self): if self._local.context_stack: conn = self._local.context_stack[-1].connection if conn is not None: return conn if self._local.closed: self.connect() return self._local.conn def _create_connection(self): with self.exception_wrapper: return self._connect(self.database, **self.connect_kwargs) def is_closed(self): return self._local.closed def get_cursor(self): return self.get_conn().cursor() def _close(self, conn): conn.close() def _connect(self, database, **kwargs): raise NotImplementedError @classmethod def register_fields(cls, fields): cls.field_overrides = merge_dict(cls.field_overrides, fields) @classmethod def register_ops(cls, ops): cls.op_overrides = merge_dict(cls.op_overrides, ops) def get_result_wrapper(self, wrapper_type): if wrapper_type == RESULTS_NAIVE: return (_ModelQueryResultWrapper if self.use_speedups else NaiveQueryResultWrapper) elif wrapper_type == RESULTS_MODELS: return ModelQueryResultWrapper elif wrapper_type == RESULTS_TUPLES: return (_TuplesQueryResultWrapper if self.use_speedups else TuplesQueryResultWrapper) elif wrapper_type == RESULTS_DICTS: return (_DictQueryResultWrapper if self.use_speedups else DictQueryResultWrapper) elif wrapper_type == RESULTS_NAMEDTUPLES: return NamedTupleQueryResultWrapper elif wrapper_type == RESULTS_AGGREGATE_MODELS: return AggregateQueryResultWrapper else: return (_ModelQueryResultWrapper if self.use_speedups else NaiveQueryResultWrapper) def last_insert_id(self, cursor, model): if model._meta.auto_increment: return cursor.lastrowid def rows_affected(self, cursor): return 
cursor.rowcount def compiler(self): return self.compiler_class( self.quote_char, self.interpolation, self.field_overrides, self.op_overrides) def execute(self, clause): return self.execute_sql(*self.compiler().parse_node(clause)) def execute_sql(self, sql, params=None, require_commit=True): logger.debug((sql, params)) with self.exception_wrapper: cursor = self.get_cursor() try: cursor.execute(sql, params or ()) except Exception: if self.autorollback and self.get_autocommit(): self.rollback() raise else: if require_commit and self.get_autocommit(): self.commit() return cursor def begin(self): pass def commit(self): with self.exception_wrapper: self.get_conn().commit() def rollback(self): with self.exception_wrapper: self.get_conn().rollback() def set_autocommit(self, autocommit): self._local.autocommit = autocommit def get_autocommit(self): if self._local.autocommit is None: self.set_autocommit(self.autocommit) return self._local.autocommit def push_execution_context(self, transaction): self._local.context_stack.append(transaction) def pop_execution_context(self): self._local.context_stack.pop() def execution_context_depth(self): return len(self._local.context_stack) def execution_context(self, with_transaction=True, transaction_type=None): return ExecutionContext(self, with_transaction, transaction_type) __call__ = execution_context def push_transaction(self, transaction): self._local.transactions.append(transaction) def pop_transaction(self): self._local.transactions.pop() def transaction_depth(self): return len(self._local.transactions) def transaction(self, transaction_type=None): return transaction(self, transaction_type) commit_on_success = property(transaction) def savepoint(self, sid=None): if not self.savepoints: raise NotImplementedError return savepoint(self, sid) def atomic(self, transaction_type=None): return _atomic(self, transaction_type) def get_tables(self, schema=None): raise NotImplementedError def get_indexes(self, table, schema=None): raise NotImplementedError def get_columns(self, table, schema=None): raise NotImplementedError def get_primary_keys(self, table, schema=None): raise NotImplementedError def get_foreign_keys(self, table, schema=None): raise NotImplementedError def sequence_exists(self, seq): raise NotImplementedError def create_table(self, model_class, safe=False): qc = self.compiler() return self.execute_sql(*qc.create_table(model_class, safe)) def create_tables(self, models, safe=False): create_model_tables(models, fail_silently=safe) def create_index(self, model_class, fields, unique=False): qc = self.compiler() if not isinstance(fields, (list, tuple)): raise ValueError('Fields passed to "create_index" must be a list ' 'or tuple: "%s"' % fields) fobjs = [ model_class._meta.fields[f] if isinstance(f, basestring) else f for f in fields] return self.execute_sql(*qc.create_index(model_class, fobjs, unique)) def drop_index(self, model_class, fields, safe=False): qc = self.compiler() if not isinstance(fields, (list, tuple)): raise ValueError('Fields passed to "drop_index" must be a list ' 'or tuple: "%s"' % fields) fobjs = [ model_class._meta.fields[f] if isinstance(f, basestring) else f for f in fields] return self.execute_sql(*qc.drop_index(model_class, fobjs, safe)) def create_foreign_key(self, model_class, field, constraint=None): qc = self.compiler() return self.execute_sql(*qc.create_foreign_key( model_class, field, constraint)) def create_sequence(self, seq): if self.sequences: qc = self.compiler() return self.execute_sql(*qc.create_sequence(seq)) def 
drop_table(self, model_class, fail_silently=False, cascade=False): qc = self.compiler() if cascade and not self.drop_cascade: raise ValueError('Database does not support DROP TABLE..CASCADE.') return self.execute_sql(*qc.drop_table( model_class, fail_silently, cascade)) def drop_tables(self, models, safe=False, cascade=False): drop_model_tables(models, fail_silently=safe, cascade=cascade) def truncate_table(self, model_class, restart_identity=False, cascade=False): qc = self.compiler() return self.execute_sql(*qc.truncate_table( model_class, restart_identity, cascade)) def truncate_tables(self, models, restart_identity=False, cascade=False): for model in reversed(sort_models_topologically(models)): model.truncate_table(restart_identity, cascade) def drop_sequence(self, seq): if self.sequences: qc = self.compiler() return self.execute_sql(*qc.drop_sequence(seq)) def extract_date(self, date_part, date_field): return fn.EXTRACT(Clause(date_part, R('FROM'), date_field)) def truncate_date(self, date_part, date_field): return fn.DATE_TRUNC(date_part, date_field) def default_insert_clause(self, model_class): return SQL('DEFAULT VALUES') def get_noop_sql(self): return 'SELECT 0 WHERE 0' def get_binary_type(self): return binary_construct def __pragma__(name): def __get__(self): return self.pragma(name) def __set__(self, value): return self.pragma(name, value) return property(__get__, __set__) class SqliteDatabase(Database): compiler_class = SqliteQueryCompiler field_overrides = { 'bool': 'INTEGER', 'smallint': 'INTEGER', 'uuid': 'TEXT', } foreign_keys = False insert_many = sqlite3 and sqlite3.sqlite_version_info >= (3, 7, 11, 0) limit_max = -1 op_overrides = { OP.LIKE: 'GLOB', OP.ILIKE: 'LIKE', } upsert_sql = 'INSERT OR REPLACE INTO' def __init__(self, database, pragmas=None, *args, **kwargs): self._pragmas = pragmas or [] journal_mode = kwargs.pop('journal_mode', None) # Backwards-compat. 
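# i.e. SqliteDatabase('app.db', journal_mode='WAL') behaves like
# SqliteDatabase('app.db', pragmas=[('journal_mode', 'WAL')]).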
if journal_mode: self._pragmas.append(('journal_mode', journal_mode)) super(SqliteDatabase, self).__init__(database, *args, **kwargs) def _connect(self, database, **kwargs): if not sqlite3: raise ImproperlyConfigured('pysqlite or sqlite3 must be installed.') conn = sqlite3.connect(database, **kwargs) conn.isolation_level = None try: self._add_conn_hooks(conn) except: conn.close() raise return conn def _add_conn_hooks(self, conn): self._set_pragmas(conn) conn.create_function('date_part', 2, _sqlite_date_part) conn.create_function('date_trunc', 2, _sqlite_date_trunc) conn.create_function('regexp', -1, _sqlite_regexp) def _set_pragmas(self, conn): if self._pragmas: cursor = conn.cursor() for pragma, value in self._pragmas: cursor.execute('PRAGMA %s = %s;' % (pragma, value)) cursor.close() def pragma(self, key, value=SENTINEL): sql = 'PRAGMA %s' % key if value is not SENTINEL: sql += ' = %s' % value return self.execute_sql(sql).fetchone() cache_size = __pragma__('cache_size') foreign_keys = __pragma__('foreign_keys') journal_mode = __pragma__('journal_mode') journal_size_limit = __pragma__('journal_size_limit') mmap_size = __pragma__('mmap_size') page_size = __pragma__('page_size') read_uncommitted = __pragma__('read_uncommitted') synchronous = __pragma__('synchronous') wal_autocheckpoint = __pragma__('wal_autocheckpoint') def begin(self, lock_type=None): statement = 'BEGIN %s' % lock_type if lock_type else 'BEGIN' self.execute_sql(statement, require_commit=False) def transaction(self, transaction_type=None): return transaction_sqlite(self, transaction_type) def create_foreign_key(self, model_class, field, constraint=None): raise OperationalError('SQLite does not support ALTER TABLE ' 'statements to add constraints.') def get_tables(self, schema=None): cursor = self.execute_sql('SELECT name FROM sqlite_master WHERE ' 'type = ? ORDER BY name;', ('table',)) return [row[0] for row in cursor.fetchall()] def get_indexes(self, table, schema=None): query = ('SELECT name, sql FROM sqlite_master ' 'WHERE tbl_name = ? AND type = ? ORDER BY name') cursor = self.execute_sql(query, (table, 'index')) index_to_sql = dict(cursor.fetchall()) # Determine which indexes have a unique constraint. unique_indexes = set() cursor = self.execute_sql('PRAGMA index_list("%s")' % table) for row in cursor.fetchall(): name = row[1] is_unique = int(row[2]) == 1 if is_unique: unique_indexes.add(name) # Retrieve the indexed columns. 
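# PRAGMA index_info(...) yields (seqno, cid, name) tuples; only the
# column name (row[2]) is kept below.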
index_columns = {} for index_name in sorted(index_to_sql): cursor = self.execute_sql('PRAGMA index_info("%s")' % index_name) index_columns[index_name] = [row[2] for row in cursor.fetchall()] return [ IndexMetadata( name, index_to_sql[name], index_columns[name], name in unique_indexes, table) for name in sorted(index_to_sql)] def get_columns(self, table, schema=None): cursor = self.execute_sql('PRAGMA table_info("%s")' % table) return [ColumnMetadata(row[1], row[2], not row[3], bool(row[5]), table) for row in cursor.fetchall()] def get_primary_keys(self, table, schema=None): cursor = self.execute_sql('PRAGMA table_info("%s")' % table) return [row[1] for row in cursor.fetchall() if row[-1]] def get_foreign_keys(self, table, schema=None): cursor = self.execute_sql('PRAGMA foreign_key_list("%s")' % table) return [ForeignKeyMetadata(row[3], row[2], row[4], table) for row in cursor.fetchall()] def savepoint(self, sid=None): return savepoint_sqlite(self, sid) def extract_date(self, date_part, date_field): return fn.date_part(date_part, date_field) def truncate_date(self, date_part, date_field): return fn.strftime(SQLITE_DATE_TRUNC_MAPPING[date_part], date_field) def get_binary_type(self): return sqlite3.Binary class PostgresqlDatabase(Database): commit_select = True compound_select_parentheses = True distinct_on = True drop_cascade = True field_overrides = { 'blob': 'BYTEA', 'bool': 'BOOLEAN', 'datetime': 'TIMESTAMP', 'decimal': 'NUMERIC', 'double': 'DOUBLE PRECISION', 'primary_key': 'SERIAL', 'uuid': 'UUID', } for_update = True for_update_nowait = True insert_returning = True interpolation = '%s' op_overrides = { OP.REGEXP: '~', } reserved_tables = ['user'] returning_clause = True sequences = True window_functions = True register_unicode = True def _connect(self, database, encoding=None, **kwargs): if not psycopg2: raise ImproperlyConfigured('psycopg2 must be installed.') conn = psycopg2.connect(database=database, **kwargs) if self.register_unicode: pg_extensions.register_type(pg_extensions.UNICODE, conn) pg_extensions.register_type(pg_extensions.UNICODEARRAY, conn) if encoding: conn.set_client_encoding(encoding) return conn def _get_pk_sequence(self, model): meta = model._meta if meta.primary_key is not False and meta.primary_key.sequence: return meta.primary_key.sequence elif meta.auto_increment: return '%s_%s_seq' % (meta.db_table, meta.primary_key.db_column) def last_insert_id(self, cursor, model): sequence = self._get_pk_sequence(model) if not sequence: return meta = model._meta if meta.schema: schema = '%s.' 
% meta.schema else: schema = '' cursor.execute("SELECT CURRVAL('%s\"%s\"')" % (schema, sequence)) result = cursor.fetchone()[0] if self.get_autocommit(): self.commit() return result def get_tables(self, schema='public'): query = ('SELECT tablename FROM pg_catalog.pg_tables ' 'WHERE schemaname = %s ORDER BY tablename') return [r for r, in self.execute_sql(query, (schema,)).fetchall()] def get_indexes(self, table, schema='public'): query = """ SELECT i.relname, idxs.indexdef, idx.indisunique, array_to_string(array_agg(cols.attname), ',') FROM pg_catalog.pg_class AS t INNER JOIN pg_catalog.pg_index AS idx ON t.oid = idx.indrelid INNER JOIN pg_catalog.pg_class AS i ON idx.indexrelid = i.oid INNER JOIN pg_catalog.pg_indexes AS idxs ON (idxs.tablename = t.relname AND idxs.indexname = i.relname) LEFT OUTER JOIN pg_catalog.pg_attribute AS cols ON (cols.attrelid = t.oid AND cols.attnum = ANY(idx.indkey)) WHERE t.relname = %s AND t.relkind = %s AND idxs.schemaname = %s GROUP BY i.relname, idxs.indexdef, idx.indisunique ORDER BY idx.indisunique DESC, i.relname;""" cursor = self.execute_sql(query, (table, 'r', schema)) return [IndexMetadata(row[0], row[1], row[3].split(','), row[2], table) for row in cursor.fetchall()] def get_columns(self, table, schema='public'): query = """ SELECT column_name, is_nullable, data_type FROM information_schema.columns WHERE table_name = %s AND table_schema = %s ORDER BY ordinal_position""" cursor = self.execute_sql(query, (table, schema)) pks = set(self.get_primary_keys(table, schema)) return [ColumnMetadata(name, dt, null == 'YES', name in pks, table) for name, null, dt in cursor.fetchall()] def get_primary_keys(self, table, schema='public'): query = """ SELECT kc.column_name FROM information_schema.table_constraints AS tc INNER JOIN information_schema.key_column_usage AS kc ON ( tc.table_name = kc.table_name AND tc.table_schema = kc.table_schema AND tc.constraint_name = kc.constraint_name) WHERE tc.constraint_type = %s AND tc.table_name = %s AND tc.table_schema = %s""" cursor = self.execute_sql(query, ('PRIMARY KEY', table, schema)) return [row for row, in cursor.fetchall()] def get_foreign_keys(self, table, schema='public'): sql = """ SELECT kcu.column_name, ccu.table_name, ccu.column_name FROM information_schema.table_constraints AS tc JOIN information_schema.key_column_usage AS kcu ON (tc.constraint_name = kcu.constraint_name AND tc.constraint_schema = kcu.constraint_schema) JOIN information_schema.constraint_column_usage AS ccu ON (ccu.constraint_name = tc.constraint_name AND ccu.constraint_schema = tc.constraint_schema) WHERE tc.constraint_type = 'FOREIGN KEY' AND tc.table_name = %s AND tc.table_schema = %s""" cursor = self.execute_sql(sql, (table, schema)) return [ForeignKeyMetadata(row[0], row[1], row[2], table) for row in cursor.fetchall()] def sequence_exists(self, sequence): res = self.execute_sql(""" SELECT COUNT(*) FROM pg_class, pg_namespace WHERE relkind='S' AND pg_class.relnamespace = pg_namespace.oid AND relname=%s""", (sequence,)) return bool(res.fetchone()[0]) def set_search_path(self, *search_path): path_params = ','.join(['%s'] * len(search_path)) self.execute_sql('SET search_path TO %s' % path_params, search_path) def get_noop_sql(self): return 'SELECT 0 WHERE false' def get_binary_type(self): return psycopg2.Binary class MySQLDatabase(Database): commit_select = True compound_select_parentheses = True compound_operations = ['UNION', 'UNION ALL'] field_overrides = { 'bool': 'BOOL', 'decimal': 'NUMERIC', 'double': 'DOUBLE PRECISION', 'float': 
'FLOAT', 'primary_key': 'INTEGER AUTO_INCREMENT', 'text': 'LONGTEXT', 'uuid': 'VARCHAR(40)', } for_update = True interpolation = '%s' limit_max = 2 ** 64 - 1 # MySQL quirk op_overrides = { OP.LIKE: 'LIKE BINARY', OP.ILIKE: 'LIKE', OP.XOR: 'XOR', } quote_char = '`' subquery_delete_same_table = False upsert_sql = 'REPLACE INTO' def _connect(self, database, **kwargs): if not mysql: raise ImproperlyConfigured('MySQLdb or PyMySQL must be installed.') conn_kwargs = { 'charset': 'utf8', 'use_unicode': True, } conn_kwargs.update(kwargs) if 'password' in conn_kwargs: conn_kwargs['passwd'] = conn_kwargs.pop('password') return mysql.connect(db=database, **conn_kwargs) def get_tables(self, schema=None): return [row for row, in self.execute_sql('SHOW TABLES')] def get_indexes(self, table, schema=None): cursor = self.execute_sql('SHOW INDEX FROM `%s`' % table) unique = set() indexes = {} for row in cursor.fetchall(): if not row[1]: unique.add(row[2]) indexes.setdefault(row[2], []) indexes[row[2]].append(row[4]) return [IndexMetadata(name, None, indexes[name], name in unique, table) for name in indexes] def get_columns(self, table, schema=None): sql = """ SELECT column_name, is_nullable, data_type FROM information_schema.columns WHERE table_name = %s AND table_schema = DATABASE()""" cursor = self.execute_sql(sql, (table,)) pks = set(self.get_primary_keys(table)) return [ColumnMetadata(name, dt, null == 'YES', name in pks, table) for name, null, dt in cursor.fetchall()] def get_primary_keys(self, table, schema=None): cursor = self.execute_sql('SHOW INDEX FROM `%s`' % table) return [row[4] for row in cursor.fetchall() if row[2] == 'PRIMARY'] def get_foreign_keys(self, table, schema=None): query = """ SELECT column_name, referenced_table_name, referenced_column_name FROM information_schema.key_column_usage WHERE table_name = %s AND table_schema = DATABASE() AND referenced_table_name IS NOT NULL AND referenced_column_name IS NOT NULL""" cursor = self.execute_sql(query, (table,)) return [ ForeignKeyMetadata(column, dest_table, dest_column, table) for column, dest_table, dest_column in cursor.fetchall()] def extract_date(self, date_part, date_field): return fn.EXTRACT(Clause(R(date_part), R('FROM'), date_field)) def truncate_date(self, date_part, date_field): return fn.DATE_FORMAT(date_field, MYSQL_DATE_TRUNC_MAPPING[date_part]) def default_insert_clause(self, model_class): return Clause( EnclosedClause(model_class._meta.primary_key), SQL('VALUES (DEFAULT)')) def get_noop_sql(self): return 'DO 0' def get_binary_type(self): return mysql.Binary class _callable_context_manager(object): __slots__ = () def __call__(self, fn): @wraps(fn) def inner(*args, **kwargs): with self: return fn(*args, **kwargs) return inner class ExecutionContext(_callable_context_manager): def __init__(self, database, with_transaction=True, transaction_type=None): self.database = database self.with_transaction = with_transaction self.transaction_type = transaction_type self.connection = None def __enter__(self): with self.database._conn_lock: self.database.push_execution_context(self) self.connection = self.database._connect( self.database.database, **self.database.connect_kwargs) if self.with_transaction: self.txn = self.database.transaction() self.txn.__enter__() return self def __exit__(self, exc_type, exc_val, exc_tb): with self.database._conn_lock: if self.connection is None: self.database.pop_execution_context() else: try: if self.with_transaction: if not exc_type: self.txn.commit(False) self.txn.__exit__(exc_type, exc_val, exc_tb) 
finally: self.database.pop_execution_context() self.database._close(self.connection) class Using(ExecutionContext): def __init__(self, database, models, with_transaction=True): super(Using, self).__init__(database, with_transaction) self.models = models def __enter__(self): self._orig = [] for model in self.models: self._orig.append(model._meta.database) model._meta.database = self.database return super(Using, self).__enter__() def __exit__(self, exc_type, exc_val, exc_tb): super(Using, self).__exit__(exc_type, exc_val, exc_tb) for i, model in enumerate(self.models): model._meta.database = self._orig[i] class _atomic(_callable_context_manager): __slots__ = ('db', 'transaction_type', 'context_manager') def __init__(self, db, transaction_type=None): self.db = db self.transaction_type = transaction_type def __enter__(self): if self.db.transaction_depth() == 0: self.context_manager = self.db.transaction(self.transaction_type) else: self.context_manager = self.db.savepoint() return self.context_manager.__enter__() def __exit__(self, exc_type, exc_val, exc_tb): return self.context_manager.__exit__(exc_type, exc_val, exc_tb) class transaction(_callable_context_manager): __slots__ = ('db', 'autocommit', 'transaction_type') def __init__(self, db, transaction_type=None): self.db = db self.transaction_type = transaction_type def _begin(self): if self.transaction_type: self.db.begin(self.transaction_type) else: self.db.begin() def commit(self, begin=True): self.db.commit() if begin: self._begin() def rollback(self, begin=True): self.db.rollback() if begin: self._begin() def __enter__(self): self.autocommit = self.db.get_autocommit() self.db.set_autocommit(False) if self.db.transaction_depth() == 0: self._begin() self.db.push_transaction(self) return self def __exit__(self, exc_type, exc_val, exc_tb): try: if exc_type: self.rollback(False) elif self.db.transaction_depth() == 1: try: self.commit(False) except: self.rollback(False) raise finally: self.db.set_autocommit(self.autocommit) self.db.pop_transaction() class savepoint(_callable_context_manager): __slots__ = ('db', 'sid', 'quoted_sid', 'autocommit') def __init__(self, db, sid=None): self.db = db _compiler = db.compiler() self.sid = sid or 's' + uuid.uuid4().hex self.quoted_sid = _compiler.quote(self.sid) def _execute(self, query): self.db.execute_sql(query, require_commit=False) def _begin(self): self._execute('SAVEPOINT %s;' % self.quoted_sid) def commit(self, begin=True): self._execute('RELEASE SAVEPOINT %s;' % self.quoted_sid) if begin: self._begin() def rollback(self): self._execute('ROLLBACK TO SAVEPOINT %s;' % self.quoted_sid) def __enter__(self): self.autocommit = self.db.get_autocommit() self.db.set_autocommit(False) self._begin() return self def __exit__(self, exc_type, exc_val, exc_tb): try: if exc_type: self.rollback() else: try: self.commit(begin=False) except: self.rollback() raise finally: self.db.set_autocommit(self.autocommit) class transaction_sqlite(transaction): __slots__ = () def _begin(self): self.db.begin(lock_type=self.transaction_type) class savepoint_sqlite(savepoint): __slots__ = ('isolation_level',) def __enter__(self): conn = self.db.get_conn() # For sqlite, the connection's isolation_level *must* be set to None. # The act of setting it, though, will break any existing savepoints, # so only write to it if necessary. 
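# (With a non-None isolation_level, pysqlite begins implicit
# transactions around DML, which interferes with manually-issued
# SAVEPOINT statements.)
#
# Editor's usage sketch, assuming an already-initialized SqliteDatabase
# `db`; nesting atomic() exercises this savepoint path:
#
#     with db.atomic():        # outermost block -> BEGIN
#         with db.atomic():    # nested block -> SAVEPOINT (this class)
#             ...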
if conn.isolation_level is not None: self.isolation_level = conn.isolation_level conn.isolation_level = None else: self.isolation_level = None return super(savepoint_sqlite, self).__enter__() def __exit__(self, exc_type, exc_val, exc_tb): try: return super(savepoint_sqlite, self).__exit__( exc_type, exc_val, exc_tb) finally: if self.isolation_level is not None: self.db.get_conn().isolation_level = self.isolation_level class FieldProxy(Field): def __init__(self, alias, field_instance): self._model_alias = alias self.model = self._model_alias.model_class self.field_instance = field_instance def clone_base(self): return FieldProxy(self._model_alias, self.field_instance) def coerce(self, value): return self.field_instance.coerce(value) def python_value(self, value): return self.field_instance.python_value(value) def db_value(self, value): return self.field_instance.db_value(value) def __getattr__(self, attr): if attr == 'model_class': return self._model_alias return getattr(self.field_instance, attr) class ModelAlias(object): def __init__(self, model_class): self.__dict__['model_class'] = model_class def __getattr__(self, attr): model_attr = getattr(self.model_class, attr) if isinstance(model_attr, Field): return FieldProxy(self, model_attr) return model_attr def __setattr__(self, attr, value): raise AttributeError('Cannot set attributes on ModelAlias instances') def get_proxy_fields(self, declared_fields=False): mm = self.model_class._meta fields = mm.declared_fields if declared_fields else mm.sorted_fields return [FieldProxy(self, f) for f in fields] def select(self, *selection): if not selection: selection = self.get_proxy_fields() query = SelectQuery(self, *selection) if self._meta.order_by: query = query.order_by(*self._meta.order_by) return query def __call__(self, **kwargs): return self.model_class(**kwargs) if _SortedFieldList is None: class _SortedFieldList(object): __slots__ = ('_keys', '_items') def __init__(self): self._keys = [] self._items = [] def __getitem__(self, i): return self._items[i] def __iter__(self): return iter(self._items) def __contains__(self, item): k = item._sort_key i = bisect_left(self._keys, k) j = bisect_right(self._keys, k) return item in self._items[i:j] def index(self, field): return self._keys.index(field._sort_key) def insert(self, item): k = item._sort_key i = bisect_left(self._keys, k) self._keys.insert(i, k) self._items.insert(i, item) def remove(self, item): idx = self.index(item) del self._items[idx] del self._keys[idx] class DoesNotExist(Exception): pass if sqlite3: default_database = SqliteDatabase('peewee.db') else: default_database = None class ModelOptions(object): def __init__(self, cls, database=None, db_table=None, db_table_func=None, indexes=None, order_by=None, primary_key=None, table_alias=None, constraints=None, schema=None, validate_backrefs=True, only_save_dirty=False, depends_on=None, **kwargs): self.model_class = cls self.name = cls.__name__.lower() self.fields = {} self.columns = {} self.defaults = {} self._default_by_name = {} self._default_dict = {} self._default_callables = {} self._default_callable_list = [] self._sorted_field_list = _SortedFieldList() self.sorted_fields = [] self.sorted_field_names = [] self.valid_fields = set() self.declared_fields = [] self.database = database if database is not None else default_database self.db_table = db_table self.db_table_func = db_table_func self.indexes = list(indexes or []) self.order_by = order_by self.primary_key = primary_key self.table_alias = table_alias self.constraints = 
constraints self.schema = schema self.validate_backrefs = validate_backrefs self.only_save_dirty = only_save_dirty self.depends_on = depends_on self.auto_increment = None self.composite_key = False self.rel = {} self.reverse_rel = {} for key, value in kwargs.items(): setattr(self, key, value) self._additional_keys = set(kwargs.keys()) if self.db_table_func and not self.db_table: self.db_table = self.db_table_func(cls) def __repr__(self): return '<%s: %s>' % (self.__class__.__name__, self.name) def prepared(self): if self.order_by: norm_order_by = [] for item in self.order_by: if isinstance(item, Field): prefix = '-' if item._ordering == 'DESC' else '' item = prefix + item.name field = self.fields[item.lstrip('-')] if item.startswith('-'): norm_order_by.append(field.desc()) else: norm_order_by.append(field.asc()) self.order_by = norm_order_by def _update_field_lists(self): self.sorted_fields = list(self._sorted_field_list) self.sorted_field_names = [f.name for f in self.sorted_fields] self.valid_fields = (set(self.fields.keys()) | set(self.fields.values()) | set((self.primary_key,))) self.declared_fields = [field for field in self.sorted_fields if not field.undeclared] def add_field(self, field): self.remove_field(field.name) self.fields[field.name] = field self.columns[field.db_column] = field self._sorted_field_list.insert(field) self._update_field_lists() if field.default is not None: self.defaults[field] = field.default if callable(field.default): self._default_callables[field] = field.default self._default_callable_list.append((field.name, field.default)) else: self._default_dict[field] = field.default self._default_by_name[field.name] = field.default def remove_field(self, field_name): if field_name not in self.fields: return original = self.fields.pop(field_name) del self.columns[original.db_column] self._sorted_field_list.remove(original) self._update_field_lists() if original.default is not None: del self.defaults[original] if self._default_callables.pop(original, None): for i, (name, _) in enumerate(self._default_callable_list): if name == field_name: self._default_callable_list.pop(i) break else: self._default_dict.pop(original, None) self._default_by_name.pop(original.name, None) def get_default_dict(self): dd = self._default_by_name.copy() for field_name, default in self._default_callable_list: dd[field_name] = default() return dd def get_field_index(self, field): try: return self._sorted_field_list.index(field) except ValueError: return -1 def get_primary_key_fields(self): if self.composite_key: return [ self.fields[field_name] for field_name in self.primary_key.field_names] return [self.primary_key] def rel_for_model(self, model, field_obj=None, multi=False): is_field = isinstance(field_obj, Field) is_node = not is_field and isinstance(field_obj, Node) if multi: accum = [] for field in self.sorted_fields: if isinstance(field, ForeignKeyField) and field.rel_model == model: is_match = ( (field_obj is None) or (is_field and field_obj.name == field.name) or (is_node and field_obj._alias == field.name)) if is_match: if not multi: return field accum.append(field) if multi: return accum def reverse_rel_for_model(self, model, field_obj=None, multi=False): return model._meta.rel_for_model(self.model_class, field_obj, multi) def rel_exists(self, model): return self.rel_for_model(model) or self.reverse_rel_for_model(model) def related_models(self, backrefs=False): models = [] stack = [self.model_class] while stack: model = stack.pop() if model in models: continue models.append(model) 
for fk in model._meta.rel.values(): stack.append(fk.rel_model) if backrefs: for fk in model._meta.reverse_rel.values(): stack.append(fk.model_class) return models class BaseModel(type): inheritable = set([ 'constraints', 'database', 'db_table_func', 'indexes', 'order_by', 'primary_key', 'schema', 'validate_backrefs', 'only_save_dirty']) def __new__(cls, name, bases, attrs): if name == _METACLASS_ or bases[0].__name__ == _METACLASS_: return super(BaseModel, cls).__new__(cls, name, bases, attrs) meta_options = {} meta = attrs.pop('Meta', None) if meta: for k, v in meta.__dict__.items(): if not k.startswith('_'): meta_options[k] = v model_pk = getattr(meta, 'primary_key', None) parent_pk = None # inherit any field descriptors by deep copying the underlying field # into the attrs of the new model, additionally see if the bases define # inheritable model options and swipe them for b in bases: if not hasattr(b, '_meta'): continue base_meta = getattr(b, '_meta') if parent_pk is None: parent_pk = deepcopy(base_meta.primary_key) all_inheritable = cls.inheritable | base_meta._additional_keys for (k, v) in base_meta.__dict__.items(): if k in all_inheritable and k not in meta_options: meta_options[k] = v for (k, v) in b.__dict__.items(): if k in attrs: continue if isinstance(v, FieldDescriptor): if not v.field.primary_key: attrs[k] = deepcopy(v.field) # initialize the new class and set the magic attributes cls = super(BaseModel, cls).__new__(cls, name, bases, attrs) ModelOptionsBase = meta_options.get('model_options_base', ModelOptions) cls._meta = ModelOptionsBase(cls, **meta_options) cls._data = None cls._meta.indexes = list(cls._meta.indexes) if not cls._meta.db_table: cls._meta.db_table = re.sub('[^\w]+', '_', cls.__name__.lower()) # replace fields with field descriptors, calling the add_to_class hook fields = [] for name, attr in cls.__dict__.items(): if isinstance(attr, Field): if attr.primary_key and model_pk: raise ValueError('primary key is overdetermined.') elif attr.primary_key: model_pk, pk_name = attr, name else: fields.append((attr, name)) composite_key = False if model_pk is None: if parent_pk: model_pk, pk_name = parent_pk, parent_pk.name else: model_pk, pk_name = PrimaryKeyField(primary_key=True), 'id' if isinstance(model_pk, CompositeKey): pk_name = '_composite_key' composite_key = True if model_pk is not False: model_pk.add_to_class(cls, pk_name) cls._meta.primary_key = model_pk cls._meta.auto_increment = ( isinstance(model_pk, PrimaryKeyField) or bool(model_pk.sequence)) cls._meta.composite_key = composite_key for field, name in fields: field.add_to_class(cls, name) # create a repr and error class before finalizing if hasattr(cls, '__unicode__'): setattr(cls, '__repr__', lambda self: '<%s: %r>' % ( cls.__name__, self.__unicode__())) exc_name = '%sDoesNotExist' % cls.__name__ exc_attrs = {'__module__': cls.__module__} exception_class = type(exc_name, (DoesNotExist,), exc_attrs) cls.DoesNotExist = exception_class cls._meta.prepared() if hasattr(cls, 'validate_model'): cls.validate_model() DeferredRelation.resolve(cls) return cls def __iter__(self): return iter(self.select()) class Model(with_metaclass(BaseModel)): def __init__(self, *args, **kwargs): self._data = self._meta.get_default_dict() self._dirty = set(self._data) self._obj_cache = {} for k, v in kwargs.items(): setattr(self, k, v) @classmethod def alias(cls): return ModelAlias(cls) @classmethod def select(cls, *selection): query = SelectQuery(cls, *selection) if cls._meta.order_by: query = query.order_by(*cls._meta.order_by) 
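# A Meta-level default ordering is applied just above; editor's sketch
# (hypothetical model):
#
#     class Tweet(Model):
#         class Meta:
#             order_by = ('-created_date',)
#
#     Tweet.select()   # ORDER BY created_date DESC by default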
return query @classmethod def update(cls, __data=None, **update): fdict = __data or {} fdict.update([(cls._meta.fields[f], update[f]) for f in update]) return UpdateQuery(cls, fdict) @classmethod def insert(cls, __data=None, **insert): fdict = __data or {} fdict.update([(cls._meta.fields[f], insert[f]) for f in insert]) return InsertQuery(cls, fdict) @classmethod def insert_many(cls, rows, validate_fields=True): return InsertQuery(cls, rows=rows, validate_fields=validate_fields) @classmethod def insert_from(cls, fields, query): return InsertQuery(cls, fields=fields, query=query) @classmethod def delete(cls): return DeleteQuery(cls) @classmethod def raw(cls, sql, *params): return RawQuery(cls, sql, *params) @classmethod def create(cls, **query): inst = cls(**query) inst.save(force_insert=True) inst._prepare_instance() return inst @classmethod def get(cls, *query, **kwargs): sq = cls.select().naive() if query: sq = sq.where(*query) if kwargs: sq = sq.filter(**kwargs) return sq.get() @classmethod def get_or_create(cls, **kwargs): defaults = kwargs.pop('defaults', {}) query = cls.select() for field, value in kwargs.items(): if '__' in field: query = query.filter(**{field: value}) else: query = query.where(getattr(cls, field) == value) try: return query.get(), False except cls.DoesNotExist: try: params = dict((k, v) for k, v in kwargs.items() if '__' not in k) params.update(defaults) with cls._meta.database.atomic(): return cls.create(**params), True except IntegrityError as exc: try: return query.get(), False except cls.DoesNotExist: raise exc @classmethod def filter(cls, *dq, **query): return cls.select().filter(*dq, **query) @classmethod def table_exists(cls): kwargs = {} if cls._meta.schema: kwargs['schema'] = cls._meta.schema return cls._meta.db_table in cls._meta.database.get_tables(**kwargs) @classmethod def create_table(cls, fail_silently=False): if fail_silently and cls.table_exists(): return db = cls._meta.database pk = cls._meta.primary_key if db.sequences and pk is not False and pk.sequence: if not db.sequence_exists(pk.sequence): db.create_sequence(pk.sequence) db.create_table(cls) cls._create_indexes() @classmethod def _fields_to_index(cls): fields = [] for field in cls._meta.sorted_fields: if field.primary_key: continue requires_index = any(( field.index, field.unique, isinstance(field, ForeignKeyField))) if requires_index: fields.append(field) return fields @classmethod def _index_data(cls): return itertools.chain( [((field,), field.unique) for field in cls._fields_to_index()], cls._meta.indexes or ()) @classmethod def _create_indexes(cls): for field_list, is_unique in cls._index_data(): cls._meta.database.create_index(cls, field_list, is_unique) @classmethod def _drop_indexes(cls, safe=False): for field_list, is_unique in cls._index_data(): cls._meta.database.drop_index(cls, field_list, safe) @classmethod def sqlall(cls): queries = [] compiler = cls._meta.database.compiler() pk = cls._meta.primary_key if cls._meta.database.sequences and pk.sequence: queries.append(compiler.create_sequence(pk.sequence)) queries.append(compiler.create_table(cls)) for field in cls._fields_to_index(): queries.append(compiler.create_index(cls, [field], field.unique)) if cls._meta.indexes: for field_names, unique in cls._meta.indexes: fields = [cls._meta.fields[f] for f in field_names] queries.append(compiler.create_index(cls, fields, unique)) return [sql for sql, _ in queries] @classmethod def drop_table(cls, fail_silently=False, cascade=False): cls._meta.database.drop_table(cls, fail_silently, 
cascade) @classmethod def truncate_table(cls, restart_identity=False, cascade=False): cls._meta.database.truncate_table(cls, restart_identity, cascade) @classmethod def as_entity(cls): if cls._meta.schema: return Entity(cls._meta.schema, cls._meta.db_table) return Entity(cls._meta.db_table) @classmethod def noop(cls, *args, **kwargs): return NoopSelectQuery(cls, *args, **kwargs) def _get_pk_value(self): return getattr(self, self._meta.primary_key.name) get_id = _get_pk_value # Backwards-compatibility. def _set_pk_value(self, value): if not self._meta.composite_key: setattr(self, self._meta.primary_key.name, value) set_id = _set_pk_value # Backwards-compatibility. def _pk_expr(self): return self._meta.primary_key == self._get_pk_value() def _prepare_instance(self): self._dirty.clear() self.prepared() def prepared(self): pass def _prune_fields(self, field_dict, only): new_data = {} for field in only: if field.name in field_dict: new_data[field.name] = field_dict[field.name] return new_data def _populate_unsaved_relations(self, field_dict): for key in self._meta.rel: conditions = ( key in self._dirty and key in field_dict and field_dict[key] is None and self._obj_cache.get(key) is not None) if conditions: setattr(self, key, getattr(self, key)) field_dict[key] = self._data[key] def save(self, force_insert=False, only=None): field_dict = dict(self._data) if self._meta.primary_key is not False: pk_field = self._meta.primary_key pk_value = self._get_pk_value() else: pk_field = pk_value = None if only: field_dict = self._prune_fields(field_dict, only) elif self._meta.only_save_dirty and not force_insert: field_dict = self._prune_fields( field_dict, self.dirty_fields) if not field_dict: self._dirty.clear() return False self._populate_unsaved_relations(field_dict) if pk_value is not None and not force_insert: if self._meta.composite_key: for pk_part_name in pk_field.field_names: field_dict.pop(pk_part_name, None) else: field_dict.pop(pk_field.name, None) rows = self.update(**field_dict).where(self._pk_expr()).execute() elif pk_field is None: self.insert(**field_dict).execute() rows = 1 else: pk_from_cursor = self.insert(**field_dict).execute() if pk_from_cursor is not None: pk_value = pk_from_cursor self._set_pk_value(pk_value) rows = 1 self._dirty.clear() return rows def is_dirty(self): return bool(self._dirty) @property def dirty_fields(self): return [f for f in self._meta.sorted_fields if f.name in self._dirty] def dependencies(self, search_nullable=False): model_class = type(self) query = self.select().where(self._pk_expr()) stack = [(type(self), query)] seen = set() while stack: klass, query = stack.pop() if klass in seen: continue seen.add(klass) for rel_name, fk in klass._meta.reverse_rel.items(): rel_model = fk.model_class if fk.rel_model is model_class: node = (fk == self._data[fk.to_field.name]) subquery = rel_model.select().where(node) else: node = fk << query subquery = rel_model.select().where(node) if not fk.null or search_nullable: stack.append((rel_model, subquery)) yield (node, fk) def delete_instance(self, recursive=False, delete_nullable=False): if recursive: dependencies = self.dependencies(delete_nullable) for query, fk in reversed(list(dependencies)): model = fk.model_class if fk.null and not delete_nullable: model.update(**{fk.name: None}).where(query).execute() else: model.delete().where(query).execute() return self.delete().where(self._pk_expr()).execute() def __hash__(self): return hash((self.__class__, self._get_pk_value())) def __eq__(self, other): return ( 
other.__class__ == self.__class__ and self._get_pk_value() is not None and other._get_pk_value() == self._get_pk_value()) def __ne__(self, other): return not self == other def prefetch_add_subquery(sq, subqueries): fixed_queries = [PrefetchResult(sq)] for i, subquery in enumerate(subqueries): if isinstance(subquery, tuple): subquery, target_model = subquery else: target_model = None if not isinstance(subquery, Query) and issubclass(subquery, Model): subquery = subquery.select() subquery_model = subquery.model_class fks = backrefs = None for j in reversed(range(i + 1)): prefetch_result = fixed_queries[j] last_query = prefetch_result.query last_model = prefetch_result.model rels = subquery_model._meta.rel_for_model(last_model, multi=True) if rels: fks = [getattr(subquery_model, fk.name) for fk in rels] pks = [getattr(last_model, fk.to_field.name) for fk in rels] else: backrefs = last_model._meta.rel_for_model( subquery_model, multi=True) if (fks or backrefs) and ((target_model is last_model) or (target_model is None)): break if not (fks or backrefs): tgt_err = ' using %s' % target_model if target_model else '' raise AttributeError('Error: unable to find foreign key for ' 'query: %s%s' % (subquery, tgt_err)) if fks: expr = reduce(operator.or_, [ (fk << last_query.select(pk)) for (fk, pk) in zip(fks, pks)]) subquery = subquery.where(expr) fixed_queries.append(PrefetchResult(subquery, fks, False)) elif backrefs: expr = reduce(operator.or_, [ (backref.to_field << last_query.select(backref)) for backref in backrefs]) subquery = subquery.where(expr) fixed_queries.append(PrefetchResult(subquery, backrefs, True)) return fixed_queries __prefetched = namedtuple('__prefetched', ( 'query', 'fields', 'backref', 'rel_models', 'field_to_name', 'model')) class PrefetchResult(__prefetched): def __new__(cls, query, fields=None, backref=None, rel_models=None, field_to_name=None, model=None): if fields: if backref: rel_models = [field.model_class for field in fields] foreign_key_attrs = [field.to_field.name for field in fields] else: rel_models = [field.rel_model for field in fields] foreign_key_attrs = [field.name for field in fields] field_to_name = list(zip(fields, foreign_key_attrs)) model = query.model_class return super(PrefetchResult, cls).__new__( cls, query, fields, backref, rel_models, field_to_name, model) def populate_instance(self, instance, id_map): if self.backref: for field in self.fields: identifier = instance._data[field.name] key = (field, identifier) if key in id_map: setattr(instance, field.name, id_map[key]) else: for field, attname in self.field_to_name: identifier = instance._data[field.to_field.name] key = (field, identifier) rel_instances = id_map.get(key, []) dest = '%s_prefetch' % field.related_name for inst in rel_instances: setattr(inst, attname, instance) setattr(instance, dest, rel_instances) def store_instance(self, instance, id_map): for field, attname in self.field_to_name: identity = field.to_field.python_value(instance._data[attname]) key = (field, identity) if self.backref: id_map[key] = instance else: id_map.setdefault(key, []) id_map[key].append(instance) def prefetch(sq, *subqueries): if not subqueries: return sq fixed_queries = prefetch_add_subquery(sq, subqueries) deps = {} rel_map = {} for prefetch_result in reversed(fixed_queries): query_model = prefetch_result.model if prefetch_result.fields: for rel_model in prefetch_result.rel_models: rel_map.setdefault(rel_model, []) rel_map[rel_model].append(prefetch_result) deps[query_model] = {} id_map = deps[query_model] 
        has_relations = bool(rel_map.get(query_model))
        for instance in prefetch_result.query:
            if prefetch_result.fields:
                prefetch_result.store_instance(instance, id_map)
            if has_relations:
                for rel in rel_map[query_model]:
                    rel.populate_instance(instance, deps[rel.model])

    return prefetch_result.query

def create_model_tables(models, **create_table_kwargs):
    """Create tables for all given models (in the right order)."""
    for m in sort_models_topologically(models):
        m.create_table(**create_table_kwargs)

def drop_model_tables(models, **drop_table_kwargs):
    """Drop tables for all given models (in the right order)."""
    for m in reversed(sort_models_topologically(models)):
        m.drop_table(**drop_table_kwargs)
peewee-2.10.2/playhouse/000077500000000000000000000000001316645060400150645ustar00rootroot00000000000000peewee-2.10.2/playhouse/README.md000066400000000000000000000072501316645060400163470ustar00rootroot00000000000000
## Playhouse

The `playhouse` namespace contains numerous extensions to Peewee. These include vendor-specific database extensions, high-level abstractions to simplify working with databases, and tools for low-level database operations and introspection.

### Vendor extensions

* [SQLite extensions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqlite-ext)
  * User-defined aggregates, collations, and functions
  * Full-text search (FTS3/4/5)
  * BM25 ranking algorithm implemented as SQLite C extension, backported to FTS4
  * Virtual tables and C extensions
  * Closure tables
* [APSW extensions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#apsw-an-advanced-sqlite-driver): use Peewee with the powerful [APSW](https://github.com/rogerbinns/apsw) SQLite driver.
* [BerkeleyDB](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#berkeleydb-backend): compile BerkeleyDB with SQLite compatibility API, then use with Peewee.
* [SQLCipher](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#sqlcipher-backend): encrypted SQLite databases.
* [Postgresql extensions](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#postgresql-extensions)
  * JSON and JSONB
  * HStore
  * Arrays
  * Server-side cursors
  * Full-text search

### High-level libraries

* [Extra fields](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#extra-fields)
  * Many-to-many field
  * Compressed field
  * Password field
  * AES encrypted field
* [Shortcuts / helpers](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#shortcuts) (see the sketch below)
  * `CASE` statement constructor
  * `CAST`
  * Model to dict serializer
  * Dict to model deserializer
  * Retry query with backoff
* [Hybrid attributes](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#hybrid)
* [Signals](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#signals): pre/post-save, pre/post-delete, pre/post-init.
* [Dataset](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#dataset): high-level API for working with databases popularized by the [project of the same name](https://dataset.readthedocs.io/).
* [Key/Value Store](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#kv): key/value store using SQLite. Supports *smart indexing*, for *Pandas*-style queries.
* [Generic foreign key](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#gfk): made popular by Django.
* [CSV utilities](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#csv-utils): load CSV directly into database, generate models from CSV, and more.
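As a quick taste of the shortcuts helpers, here is a minimal sketch (the `User` model is invented for illustration) that round-trips a model instance through the dict serializer and deserializer:

```python
from peewee import *
from playhouse.shortcuts import model_to_dict, dict_to_model

db = SqliteDatabase(':memory:')

class User(Model):
    username = CharField()

    class Meta:
        database = db

db.connect()
User.create_table()

user = User.create(username='charlie')
data = model_to_dict(user)          # Typically {'id': 1, 'username': 'charlie'}.
clone = dict_to_model(User, data)   # Rebuild an (unsaved) model instance.
```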
### Database management and framework support

* [pwiz](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pwiz): generate model code from a pre-existing database.
* [Schema migrations](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate): modify your schema using high-level APIs. Even supports dropping or renaming columns in SQLite.
* [Connection pool](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pool): simple connection pooling.
* [Reflection](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#reflection): low-level, cross-platform database introspection.
* [Database URLs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#db-url): use URLs to connect to a database.
* [Read slave](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#read-slaves)
* [Flask utils](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#flask-utils): paginated object lists, database connection management, and more.
* [Django integration](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#djpeewee): generate peewee models from Django models, use Peewee alongside your Django ORM code.
peewee-2.10.2/playhouse/__init__.py000066400000000000000000000000001316645060400171630ustar00rootroot00000000000000peewee-2.10.2/playhouse/_speedups.pyx000066400000000000000000000220321316645060400176140ustar00rootroot00000000000000
from bisect import bisect_left
from bisect import bisect_right
from collections import deque

from cpython cimport datetime


cdef basestring _strip_parens(basestring s):
    if not s or s[0] != '(':
        return s

    cdef int ct = 0, i = 0, unbalanced_ct = 0, required = 0
    cdef int l = len(s)

    while i < l:
        if s[i] == '(' and s[l - 1] == ')':
            ct += 1
            i += 1
            l -= 1
        else:
            break
    if ct:
        for i in range(ct, l - ct):
            if s[i] == '(':
                unbalanced_ct += 1
            elif s[i] == ')':
                unbalanced_ct -= 1
            if unbalanced_ct < 0:
                required += 1
                unbalanced_ct = 0
            if required == ct:
                break
        ct -= required
    if ct > 0:
        return s[ct:-ct]
    return s

def strip_parens(basestring s):
    return _strip_parens(s)

cdef tuple SQLITE_DATETIME_FORMATS = (
    '%Y-%m-%d %H:%M:%S',
    '%Y-%m-%d %H:%M:%S.%f',
    '%Y-%m-%d',
    '%H:%M:%S',
    '%H:%M:%S.%f',
    '%H:%M')

cdef dict SQLITE_DATE_TRUNC_MAPPING = {
    'year': '%Y',
    'month': '%Y-%m',
    'day': '%Y-%m-%d',
    'hour': '%Y-%m-%d %H',
    'minute': '%Y-%m-%d %H:%M',
    'second': '%Y-%m-%d %H:%M:%S'}

cpdef format_date_time(date_value, formats, post_fn=None):
    cdef:
        datetime.datetime date_obj
        tuple formats_t = tuple(formats)
    for date_format in formats_t:
        try:
            date_obj = datetime.datetime.strptime(date_value, date_format)
        except ValueError:
            pass
        else:
            if post_fn:
                return post_fn(date_obj)
            return date_obj
    return date_value

cpdef datetime.datetime format_date_time_sqlite(date_value):
    return format_date_time(date_value, SQLITE_DATETIME_FORMATS)

cdef class _QueryResultWrapper(object)  # Forward decl.
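# The bare declaration above forward-declares _QueryResultWrapper so that
# _ResultIterator (defined next) can declare a statically-typed
# ``_QueryResultWrapper qrw`` attribute before the full class body appears;
# the two cdef classes reference one another.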
cdef class _ResultIterator(object): cdef: int _idx _QueryResultWrapper qrw def __init__(self, _QueryResultWrapper qrw): self.qrw = qrw self._idx = 0 def __next__(self): if self._idx < self.qrw._ct: obj = self.qrw._result_cache[self._idx] elif not self.qrw._populated: obj = self.qrw.iterate() self.qrw._result_cache.append(obj) self.qrw._ct += 1 else: raise StopIteration self._idx += 1 return obj cdef class _QueryResultWrapper(object): cdef: bint _initialized dict join_meta int _idx int row_size list column_names, converters readonly bint _populated readonly int _ct readonly list _result_cache readonly object column_meta, cursor, model def __init__(self, model, cursor, meta=None): self.model = model self.cursor = cursor self._ct = self._idx = 0 self._populated = self._initialized = False self._result_cache = [] if meta is not None: self.column_meta, self.join_meta = meta else: self.column_meta = self.join_meta = None def __iter__(self): if self._populated: return iter(self._result_cache) return _ResultIterator(self) @property def count(self): self.fill_cache() return self._ct def __len__(self): return self.count cdef initialize(self, cursor_description): cdef: bint found int i = 0 int n = len(cursor_description) int n_cm self.row_size = n self.column_names = [] self.converters = [] if self.column_meta is not None: n_cm = len(self.column_meta) for i, node in enumerate(self.column_meta): if not self._initialize_node(node, i): self._initialize_by_name(cursor_description[i][0], i) if n_cm == n: return for i in range(i, n): self._initialize_by_name(cursor_description[i][0], i) def _initialize_by_name(self, name, int i): if name in self.model._meta.columns: field = self.model._meta.columns[name] self.converters.append(field.python_value) else: self.converters.append(None) self.column_names.append(name) cdef bint _initialize_node(self, node, int i): try: node_type = node._node_type except AttributeError: return False if (node_type == 'field') is True: self.column_names.append(node._alias or node.name) self.converters.append(node.python_value) return True if node_type != 'func' or not len(node.arguments): return False arg = node.arguments[0] try: node_type = arg._node_type except AttributeError: return False if (node_type == 'field') is True: self.column_names.append(node._alias or arg._alias or arg.name) self.converters.append(arg.python_value if node._coerce else None) return True return False cdef process_row(self, tuple row): return row cdef iterate(self): cdef: tuple row = self.cursor.fetchone() if not row: self._populated = True if not getattr(self.cursor, 'name', None): self.cursor.close() raise StopIteration elif not self._initialized: self.initialize(self.cursor.description) self._initialized = True return self.process_row(row) def iterator(self): while True: yield self.iterate() def __next__(self): cdef object inst if self._idx < self._ct: inst = self._result_cache[self._idx] self._idx += 1 return inst elif self._populated: raise StopIteration inst = self.iterate() self._result_cache.append(inst) self._ct += 1 self._idx += 1 return inst cpdef fill_cache(self, n=None): cdef: int counter = -1 if n is None else n if counter > 0: counter = counter - self._ct self._idx = self._ct while not self._populated and counter: try: next(self) except StopIteration: break else: counter -= 1 cdef class _TuplesQueryResultWrapper(_QueryResultWrapper): cdef process_row(self, tuple row): cdef: int i = 0 list ret = [] for i in range(self.row_size): func = self.converters[i] if func is None: ret.append(row[i]) 
            else:
                ret.append(func(row[i]))
        return tuple(ret)

cdef class _DictQueryResultWrapper(_QueryResultWrapper):
    cdef dict _make_dict(self, tuple row):
        cdef:
            dict result = {}
            int i = 0
        for i in range(self.row_size):
            func = self.converters[i]
            if func is not None:
                result[self.column_names[i]] = func(row[i])
            else:
                result[self.column_names[i]] = row[i]
        return result

    cdef process_row(self, tuple row):
        return self._make_dict(row)

cdef class _ModelQueryResultWrapper(_DictQueryResultWrapper):
    cdef process_row(self, tuple row):
        inst = self.model(**self._make_dict(row))
        inst._prepare_instance()
        return inst

cdef class _SortedFieldList(object):
    cdef:
        list _items, _keys

    def __init__(self):
        self._items = []
        self._keys = []

    def __getitem__(self, i):
        return self._items[i]

    def __iter__(self):
        return iter(self._items)

    def __contains__(self, item):
        k = item._sort_key
        i = bisect_left(self._keys, k)
        j = bisect_right(self._keys, k)
        return item in self._items[i:j]

    def index(self, field):
        return self._keys.index(field._sort_key)

    def insert(self, item):
        k = item._sort_key
        i = bisect_left(self._keys, k)
        self._keys.insert(i, k)
        self._items.insert(i, item)

    def remove(self, item):
        idx = self.index(item)
        del self._items[idx]
        del self._keys[idx]

cdef tuple _sort_key(model):
    return (model._meta.name, model._meta.db_table)

cdef _sort_models(model, set model_set, set seen, list accum):
    if model in model_set and model not in seen:
        seen.add(model)
        for foreign_key in model._meta.rel.values():
            _sort_models(foreign_key.rel_model, model_set, seen, accum)
        if model._meta.depends_on is not None:
            for dependency in model._meta.depends_on:
                _sort_models(dependency, model_set, seen, accum)
        accum.append(model)

def sort_models_topologically(models):
    cdef:
        set model_set = set(models)
        set seen = set()
        list accum = []
    for model in sorted(model_set, key=_sort_key):
        _sort_models(model, model_set, seen, accum)
    return accum
peewee-2.10.2/playhouse/_sqlite_ext.pyx000066400000000000000000000170441316645060400201540ustar00rootroot00000000000000
import re

from cpython cimport datetime
from libc.math cimport log, sqrt
from libc.stdlib cimport free, malloc

cdef tuple SQLITE_DATETIME_FORMATS = (
    '%Y-%m-%d %H:%M:%S',
    '%Y-%m-%d %H:%M:%S.%f',
    '%Y-%m-%d',
    '%H:%M:%S',
    '%H:%M:%S.%f',
    '%H:%M')

cdef dict SQLITE_DATE_TRUNC_MAPPING = {
    'year': '%Y',
    'month': '%Y-%m',
    'day': '%Y-%m-%d',
    'hour': '%Y-%m-%d %H',
    'minute': '%Y-%m-%d %H:%M',
    'second': '%Y-%m-%d %H:%M:%S'}

cdef tuple validate_and_format_datetime(lookup, date_str):
    if not date_str or not lookup:
        return

    lookup = lookup.lower()
    if lookup not in SQLITE_DATE_TRUNC_MAPPING:
        return

    cdef datetime.datetime date_obj
    cdef bint success = False

    for date_format in SQLITE_DATETIME_FORMATS:
        try:
            date_obj = datetime.datetime.strptime(date_str, date_format)
        except ValueError:
            pass
        else:
            return (date_obj, lookup)

cpdef peewee_date_part(lookup, date_str):
    cdef:
        tuple result = validate_and_format_datetime(lookup, date_str)
    if result:
        return getattr(result[0], result[1])

cpdef peewee_date_trunc(lookup, date_str):
    cdef:
        tuple result = validate_and_format_datetime(lookup, date_str)
    if result:
        return result[0].strftime(SQLITE_DATE_TRUNC_MAPPING[result[1]])

def peewee_regexp(regex_str, value, case_sensitive=False):
    if value is None or regex_str is None:
        return

    flags = 0 if case_sensitive else re.I
    return re.search(regex_str, value, flags) is not None

def peewee_rank(py_match_info, *raw_weights):
    cdef:
        unsigned int *match_info
        unsigned int *phrase_info
        bytes _match_info_buf = bytes(py_match_info)
        char *match_info_buf = _match_info_buf
        int
argc = len(raw_weights) int ncol, nphrase, icol, iphrase, hits, global_hits int P_O = 0, C_O = 1, X_O = 2 double score = 0.0, weight double *weights match_info = match_info_buf nphrase = match_info[P_O] ncol = match_info[C_O] weights = malloc(sizeof(double) * ncol) for icol in range(ncol): if icol < argc: weights[icol] = raw_weights[icol] else: weights[icol] = 1.0 for iphrase in range(nphrase): phrase_info = &match_info[X_O + iphrase * ncol * 3] for icol in range(ncol): weight = weights[icol] if weight == 0: continue hits = phrase_info[3 * icol] global_hits = phrase_info[3 * icol + 1] if hits > 0: score += weight * (hits / global_hits) free(weights) return -1 * score def peewee_lucene(py_match_info, *raw_weights): # Usage: peewee_lucene(matchinfo(table, 'pcxnal'), 1) cdef: unsigned int *match_info unsigned int *phrase_info bytes _match_info_buf = bytes(py_match_info) char *match_info_buf = _match_info_buf int argc = len(raw_weights) int term_count, col_count double total_docs, term_frequency, double doc_length, docs_with_term, avg_length double idf, weight, rhs, denom double *weights int P_O = 0, C_O = 1, N_O = 2, L_O, X_O int i, j, x double score = 0.0 match_info = match_info_buf term_count = match_info[P_O] col_count = match_info[C_O] total_docs = match_info[N_O] L_O = 3 + col_count X_O = L_O + col_count weights = malloc(sizeof(double) * col_count) for i in range(col_count): if i < argc: weights[i] = raw_weights[i] else: weights[i] = 0 for i in range(term_count): for j in range(col_count): weight = weights[j] if weight == 0: continue doc_length = match_info[L_O + j] x = X_O + (3 * j * (i + 1)) term_frequency = match_info[x] docs_with_term = match_info[x + 2] idf = log(total_docs / (docs_with_term + 1.)) tf = sqrt(term_frequency) fieldNorms = 1.0 / sqrt(doc_length) score += (idf * tf * fieldNorms) free(weights) return -1 * score def peewee_bm25(py_match_info, *raw_weights): # Usage: peewee_bm25(matchinfo(table, 'pcxnal'), 1) # where the second parameter is the index of the column and # the 3rd and 4th specify k and b. cdef: unsigned int *match_info unsigned int *phrase_info bytes _match_info_buf = bytes(py_match_info) char *match_info_buf = _match_info_buf int argc = len(raw_weights) int term_count, col_count double B = 0.75, K = 1.2, D double total_docs, term_frequency, double doc_length, docs_with_term, avg_length double idf, weight, rhs, denom double *weights int P_O = 0, C_O = 1, N_O = 2, A_O = 3, L_O, X_O int i, j, x double score = 0.0 match_info = match_info_buf term_count = match_info[P_O] col_count = match_info[C_O] total_docs = match_info[N_O] L_O = A_O + col_count X_O = L_O + col_count weights = malloc(sizeof(double) * col_count) for i in range(col_count): if argc == 0: weights[i] = 1. 
elif i < argc: weights[i] = raw_weights[i] else: weights[i] = 0 for i in range(term_count): for j in range(col_count): weight = weights[j] if weight == 0: continue avg_length = match_info[A_O + j] doc_length = match_info[L_O + j] if avg_length == 0: D = 0 else: D = 1 - B + (B * (doc_length / avg_length)) x = X_O + (3 * j * (i + 1)) term_frequency = match_info[x] docs_with_term = match_info[x + 2] idf = max( log( (total_docs - docs_with_term + 0.5) / (docs_with_term + 0.5)), 0) denom = term_frequency + (K * D) if denom == 0: rhs = 0 else: rhs = (term_frequency * (K + 1)) / denom score += (idf * rhs) * weight free(weights) return -1 * score cdef unsigned int murmurhash2(const char *key, int nlen, unsigned int seed): cdef: unsigned int m = 0x5bd1e995 int r = 24 unsigned int l = nlen unsigned char *data = key unsigned int h = seed unsigned int k unsigned int t = 0 while nlen >= 4: k = (data)[0] # mmix(h, k). k *= m k = k ^ (k >> r) k *= m h *= m h = h ^ k data += 4 nlen -= 4 if nlen == 3: t = t ^ (data[2] << 16) if nlen >= 2: t = t ^ (data[1] << 8) if nlen >= 1: t = t ^ (data[0]) # mmix(h, t). t *= m t = t ^ (t >> r) t *= m h *= m h = h ^ t # mmix(h, l). l *= m l = l ^ (l >> r) l *= m h *= m h = h ^ l h = h ^ (h >> 13) h *= m h = h ^ (h >> 15) return h def peewee_murmurhash(key, seed=None): if key is None: return cdef: bytes bkey int nseed = seed or 0 if isinstance(key, unicode): bkey = key.encode('utf-8') else: bkey = key if key: return murmurhash2(bkey, len(bkey), nseed) return 0 peewee-2.10.2/playhouse/_sqlite_udf.pyx000066400000000000000000000066271316645060400201370ustar00rootroot00000000000000import sys from difflib import SequenceMatcher from random import randint IS_PY3K = sys.version_info[0] == 3 # String UDF. def damerau_levenshtein_dist(s1, s2): cdef: int i, j, del_cost, add_cost, sub_cost int s1_len = len(s1), s2_len = len(s2) list one_ago, two_ago, current_row list zeroes = [0] * (s2_len + 1) if IS_PY3K: current_row = list(range(1, s2_len + 2)) else: current_row = range(1, s2_len + 2) current_row[-1] = 0 one_ago = None for i in range(s1_len): two_ago = one_ago one_ago = current_row current_row = list(zeroes) current_row[-1] = i + 1 for j in range(s2_len): del_cost = one_ago[j] + 1 add_cost = current_row[j - 1] + 1 sub_cost = one_ago[j - 1] + (s1[i] != s2[j]) current_row[j] = min(del_cost, add_cost, sub_cost) # Handle transpositions. if (i > 0 and j > 0 and s1[i] == s2[j - 1] and s1[i-1] == s2[j] and s1[i] != s2[j]): current_row[j] = min(current_row[j], two_ago[j - 2] + 1) return current_row[s2_len - 1] # String UDF. def levenshtein_dist(a, b): cdef: int add, delete, change int i, j int n = len(a), m = len(b) list current, previous list zeroes if n > m: a, b = b, a n, m = m, n zeroes = [0] * (m + 1) if IS_PY3K: current = list(range(n + 1)) else: current = range(n + 1) for i in range(1, m + 1): previous = current current = list(zeroes) current[0] = i for j in range(1, n + 1): add = previous[j] + 1 delete = current[j - 1] + 1 change = previous[j - 1] if a[j - 1] != b[i - 1]: change +=1 current[j] = min(add, delete, change) return current[n] # String UDF. def str_dist(a, b): cdef: int t = 0 for i in SequenceMatcher(None, a, b).get_opcodes(): if i[0] == 'equal': continue t = t + max(i[4] - i[3], i[2] - i[1]) return t # Math Aggregate. 
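# The ``median`` aggregate below buffers every value it sees and, in
# finalize(), locates the middle element with quickselect (selectKth /
# partition_k, using a random pivot), which runs in O(n) average time
# rather than the O(n log n) cost of sorting the buffered values.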
cdef class median(object): cdef: int ct list items def __init__(self): self.ct = 0 self.items = [] cdef selectKth(self, int k, int s=0, int e=-1): cdef: int idx if e < 0: e = len(self.items) idx = randint(s, e-1) idx = self.partition_k(idx, s, e) if idx > k: return self.selectKth(k, s, idx) elif idx < k: return self.selectKth(k, idx + 1, e) else: return self.items[idx] cdef int partition_k(self, int pi, int s, int e): cdef: int i, x val = self.items[pi] # Swap pivot w/last item. self.items[e - 1], self.items[pi] = self.items[pi], self.items[e - 1] x = s for i in range(s, e): if self.items[i] < val: self.items[i], self.items[x] = self.items[x], self.items[i] x += 1 self.items[x], self.items[e-1] = self.items[e-1], self.items[x] return x def step(self, item): self.items.append(item) self.ct += 1 def finalize(self): if self.ct == 0: return None elif self.ct < 3: return self.items[0] else: return self.selectKth(self.ct / 2) peewee-2.10.2/playhouse/apsw_ext.py000066400000000000000000000113511316645060400172710ustar00rootroot00000000000000""" Peewee integration with APSW, "another python sqlite wrapper". Project page: https://rogerbinns.github.io/apsw/ APSW is a really neat library that provides a thin wrapper on top of SQLite's C interface. Here are just a few reasons to use APSW, taken from the documentation: * APSW gives all functionality of SQLite, including virtual tables, virtual file system, blob i/o, backups and file control. * Connections can be shared across threads without any additional locking. * Transactions are managed explicitly by your code. * APSW can handle nested transactions. * Unicode is handled correctly. * APSW is faster. """ import apsw from peewee import * from peewee import _sqlite_date_part from peewee import _sqlite_date_trunc from peewee import _sqlite_regexp from peewee import BooleanField as _BooleanField from peewee import DateField as _DateField from peewee import DateTimeField as _DateTimeField from peewee import DecimalField as _DecimalField from peewee import logger from peewee import savepoint from peewee import TimeField as _TimeField from peewee import transaction from playhouse.sqlite_ext import SqliteExtDatabase from playhouse.sqlite_ext import VirtualCharField from playhouse.sqlite_ext import VirtualField from playhouse.sqlite_ext import VirtualFloatField from playhouse.sqlite_ext import VirtualIntegerField from playhouse.sqlite_ext import VirtualModel class APSWDatabase(SqliteExtDatabase): def __init__(self, database, timeout=None, **kwargs): self.timeout = timeout self._modules = {} super(APSWDatabase, self).__init__(database, **kwargs) def register_module(self, mod_name, mod_inst): self._modules[mod_name] = mod_inst def unregister_module(self, mod_name): del(self._modules[mod_name]) def _connect(self, database, **kwargs): conn = apsw.Connection(database, **kwargs) if self.timeout is not None: conn.setbusytimeout(self.timeout) try: self._add_conn_hooks(conn) except: conn.close() raise return conn def _add_conn_hooks(self, conn): super(APSWDatabase, self)._add_conn_hooks(conn) self._load_modules(conn) # APSW-only. 
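    # The _load_* hooks below mirror SqliteExtDatabase's connection hooks,
    # but are expressed in terms of APSW's connection API (createmodule,
    # createaggregatefunction, createcollation, createscalarfunction).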
def _load_modules(self, conn): for mod_name, mod_inst in self._modules.items(): conn.createmodule(mod_name, mod_inst) return conn def _load_aggregates(self, conn): for name, (klass, num_params) in self._aggregates.items(): def make_aggregate(): instance = klass() return (instance, instance.step, instance.finalize) conn.createaggregatefunction(name, make_aggregate) def _load_collations(self, conn): for name, fn in self._collations.items(): conn.createcollation(name, fn) def _load_functions(self, conn): for name, (fn, num_params) in self._functions.items(): conn.createscalarfunction(name, fn, num_params) def _load_extensions(self, conn): conn.enableloadextension(True) for extension in self._extensions: conn.loadextension(extension) def load_extension(self, extension): self._extensions.add(extension) if not self.is_closed(): conn = self.get_conn() conn.enableloadextension(True) conn.loadextension(extension) def _execute_sql(self, cursor, sql, params): cursor.execute(sql, params or ()) return cursor def execute_sql(self, sql, params=None, require_commit=True): logger.debug((sql, params)) with self.exception_wrapper: cursor = self.get_cursor() self._execute_sql(cursor, sql, params) return cursor def last_insert_id(self, cursor, model): if model._meta.auto_increment: return cursor.getconnection().last_insert_rowid() def rows_affected(self, cursor): return cursor.getconnection().changes() def begin(self, lock_type='deferred'): self.get_cursor().execute('begin %s;' % lock_type) def commit(self): self.get_cursor().execute('commit;') def rollback(self): self.get_cursor().execute('rollback;') def transaction(self, lock_type='deferred'): return transaction(self, lock_type) def savepoint(self, sid=None): return savepoint(self, sid) def nh(s, v): if v is not None: return str(v) class BooleanField(_BooleanField): def db_value(self, v): v = super(BooleanField, self).db_value(v) if v is not None: return v and 1 or 0 class DateField(_DateField): db_value = nh class TimeField(_TimeField): db_value = nh class DateTimeField(_DateTimeField): db_value = nh class DecimalField(_DecimalField): db_value = nh peewee-2.10.2/playhouse/berkeleydb.py000066400000000000000000000100171316645060400175450ustar00rootroot00000000000000import ctypes import datetime import decimal import sys from peewee import ImproperlyConfigured from playhouse.sqlite_ext import * # Peewee assumes that the `pysqlite2` module was compiled against the # BerkeleyDB SQLite libraries. try: from pysqlite2 import dbapi2 as berkeleydb except ImportError: import sqlite3 as berkeleydb sqlite3_lib_version = sqlite3.sqlite_version_info berkeleydb.register_adapter(decimal.Decimal, str) berkeleydb.register_adapter(datetime.date, str) berkeleydb.register_adapter(datetime.time, str) class BerkeleyDatabase(SqliteExtDatabase): def __init__(self, database, pragmas=None, cache_size=None, page_size=None, multiversion=None, *args, **kwargs): super(BerkeleyDatabase, self).__init__( database, pragmas=pragmas, *args, **kwargs) if multiversion: self._pragmas.append(('multiversion', 'on')) if page_size: self._pragmas.append(('page_size', page_size)) if cache_size: self._pragmas.append(('cache_size', cache_size)) def _connect(self, database, **kwargs): if not PYSQLITE_BERKELEYDB: message = ('Your Python SQLite driver (%s) does not appear to ' 'have been compiled against the BerkeleyDB SQLite ' 'library.' % berkeleydb) if LIBSQLITE_BERKELEYDB: message += (' However, the libsqlite on your system is the ' 'BerkeleyDB implementation. 
Try recompiling ' 'pysqlite.') else: message += (' Additionally, the libsqlite on your system ' 'does not appear to be the BerkeleyDB ' 'implementation.') raise ImproperlyConfigured(message) conn = berkeleydb.connect(database, **kwargs) conn.isolation_level = None self._add_conn_hooks(conn) return conn def _set_pragmas(self, conn): # `multiversion` is weird. It checks first whether another connection # from the BTree cache is available, and then switches to that, which # may have the handle of the DB_Env. If that happens, then we get # an error stating that you cannot set `multiversion` despite the # fact we have not done any operations and it's a brand new conn. if self._pragmas: cursor = conn.cursor() for pragma, value in self._pragmas: if pragma == 'multiversion': try: cursor.execute('PRAGMA %s = %s;' % (pragma, value)) except berkeleydb.OperationalError: pass else: cursor.execute('PRAGMA %s = %s;' % (pragma, value)) cursor.close() @classmethod def check_pysqlite(cls): try: from pysqlite2 import dbapi2 as sqlite3 except ImportError: import sqlite3 conn = sqlite3.connect(':memory:') try: results = conn.execute('PRAGMA compile_options;').fetchall() finally: conn.close() for option, in results: if option == 'BERKELEY_DB': return True return False @classmethod def check_libsqlite(cls): # Checking compile options is not supported. if sys.platform.startswith('win'): library = 'libsqlite3.dll' elif sys.platform == 'darwin': library = 'libsqlite3.dylib' else: library = 'libsqlite3.so' try: libsqlite = ctypes.CDLL(library) except OSError: return False return libsqlite.sqlite3_compileoption_used('BERKELEY_DB') == 1 if sqlite3_lib_version < (3, 6, 23): # Checking compile flags is not supported in older SQLite versions. PYSQLITE_BERKELEYDB = False LIBSQLITE_BERKELEYDB = False else: PYSQLITE_BERKELEYDB = BerkeleyDatabase.check_pysqlite() LIBSQLITE_BERKELEYDB = BerkeleyDatabase.check_libsqlite() peewee-2.10.2/playhouse/csv_loader.py000066400000000000000000000001131316645060400175520ustar00rootroot00000000000000from playhouse.csv_utils import * # Provided for backwards-compatibility. peewee-2.10.2/playhouse/csv_utils.py000066400000000000000000000265301316645060400174570ustar00rootroot00000000000000""" Peewee helper for loading CSV data into a database. 
Load the users CSV file into the database and return a Model for accessing the data: from playhouse.csv_loader import load_csv db = SqliteDatabase(':memory:') User = load_csv(db, 'users.csv') Provide explicit field types and/or field names: fields = [IntegerField(), IntegerField(), DateTimeField(), DecimalField()] field_names = ['from_acct', 'to_acct', 'timestamp', 'amount'] Payments = load_csv(db, 'payments.csv', fields, field_names) """ import csv import datetime import os import re from contextlib import contextmanager try: from StringIO import StringIO except ImportError: from io import StringIO from peewee import * from peewee import Database from peewee import Func from peewee import PY3 if PY3: basestring = str decode_value = False else: decode_value = True class _CSVReader(object): @contextmanager def get_reader(self, file_or_name, **reader_kwargs): is_file = False if isinstance(file_or_name, basestring): fh = open(file_or_name, 'r') elif isinstance(file_or_name, StringIO): fh = file_or_name fh.seek(0) else: fh = file_or_name fh.seek(0) is_file = True reader = csv.reader(fh, **reader_kwargs) yield reader if is_file: fh.close() def convert_field(field_class, **field_kwargs): def decorator(fn): fn.field = lambda: field_class(**field_kwargs) return fn return decorator class RowConverter(_CSVReader): """ Simple introspection utility to convert a CSV file into a list of headers and column types. :param database: a peewee Database object. :param bool has_header: whether the first row of CSV is a header row. :param int sample_size: number of rows to introspect """ date_formats = [ '%Y-%m-%d', '%m/%d/%Y'] datetime_formats = [ '%Y-%m-%d %H:%M:%S', '%Y-%m-%d %H:%M:%S.%f'] def __init__(self, database, has_header=True, sample_size=10): self.database = database self.has_header = has_header self.sample_size = sample_size def matches_date(self, value, formats): for fmt in formats: try: datetime.datetime.strptime(value, fmt) except ValueError: pass else: return True @convert_field(IntegerField, default=0) def is_integer(self, value): return value.isdigit() @convert_field(FloatField, default=0) def is_float(self, value): try: float(value) except (ValueError, TypeError): pass else: return True @convert_field(DateTimeField, null=True) def is_datetime(self, value): return self.matches_date(value, self.datetime_formats) @convert_field(DateField, null=True) def is_date(self, value): return self.matches_date(value, self.date_formats) @convert_field(BareField, default='') def default(self, value): return True def extract_rows(self, file_or_name, **reader_kwargs): """ Extract `self.sample_size` rows from the CSV file and analyze their data-types. :param str file_or_name: A string filename or a file handle. :param reader_kwargs: Arbitrary parameters to pass to the CSV reader. :returns: A 2-tuple containing a list of headers and list of rows read from the CSV file. """ rows = [] rows_to_read = self.sample_size with self.get_reader(file_or_name, **reader_kwargs) as reader: if self.has_header: rows_to_read += 1 for i, row in enumerate(reader): rows.append(row) if i == self.sample_size: break if self.has_header: header, rows = rows[0], rows[1:] else: header = ['field_%d' % i for i in range(len(rows[0]))] return header, rows def get_checks(self): """Return a list of functions to use when testing values.""" return [ self.is_date, self.is_datetime, self.is_integer, self.is_float, self.default] def analyze(self, rows): """ Analyze the given rows and try to determine the type of value stored. 
:param list rows: A list-of-lists containing one or more rows from a csv file. :returns: A list of peewee Field objects for each column in the CSV. """ transposed = zip(*rows) checks = self.get_checks() column_types = [] for i, column in enumerate(transposed): # Remove any empty values. col_vals = [val for val in column if val != ''] for check in checks: results = set(check(val) for val in col_vals) if all(results): column_types.append(check.field()) break return column_types class Loader(_CSVReader): """ Load the contents of a CSV file into a database and return a model class suitable for working with the CSV data. :param db_or_model: a peewee Database instance or a Model class. :param file_or_name: the filename of the CSV file *or* a file handle. :param list fields: A list of peewee Field() instances appropriate to the values in the CSV file. :param list field_names: A list of names to use for the fields. :param bool has_header: Whether the first row of the CSV file is a header. :param int sample_size: Number of rows to introspect if fields are not defined. :param converter: A RowConverter instance to use. :param str db_table: Name of table to store data in (if not specified, the table name will be derived from the CSV filename). :param reader_kwargs: Arbitrary arguments to pass to the CSV reader. """ def __init__(self, db_or_model, file_or_name, fields=None, field_names=None, has_header=True, sample_size=10, converter=None, db_table=None, pk_in_csv=False, **reader_kwargs): self.file_or_name = file_or_name self.fields = fields self.field_names = field_names self.has_header = has_header self.sample_size = sample_size self.converter = converter self.reader_kwargs = reader_kwargs if isinstance(file_or_name, basestring): self.filename = file_or_name elif isinstance(file_or_name, StringIO): self.filename = 'data.csv' else: self.filename = file_or_name.name if isinstance(db_or_model, Database): self.database = db_or_model self.model = None self.db_table = ( db_table or os.path.splitext(os.path.basename(self.filename))[0]) else: self.model = db_or_model self.database = self.model._meta.database self.db_table = self.model._meta.db_table self.fields = self.model._meta.sorted_fields self.field_names = self.model._meta.sorted_field_names # If using an auto-incrementing primary key, ignore it unless we # are told the primary key is included in the CSV. 
if self.model._meta.auto_increment and not pk_in_csv: self.fields = self.fields[1:] self.field_names = self.field_names[1:] def clean_field_name(self, s): return re.sub('[^a-z0-9]+', '_', s.lower()) def get_converter(self): return self.converter or RowConverter( self.database, has_header=self.has_header, sample_size=self.sample_size) def analyze_csv(self): converter = self.get_converter() header, rows = converter.extract_rows( self.file_or_name, **self.reader_kwargs) if rows: self.fields = converter.analyze(rows) else: self.fields = [converter.default.field() for _ in header] if not self.field_names: self.field_names = [self.clean_field_name(col) for col in header] def get_model_class(self, field_names, fields): if self.model: return self.model attrs = dict(zip(field_names, fields)) if 'id' not in attrs: attrs['_auto_pk'] = PrimaryKeyField() elif isinstance(attrs['id'], IntegerField): attrs['id'] = PrimaryKeyField() klass = type(self.db_table.title(), (Model,), attrs) klass._meta.database = self.database klass._meta.db_table = self.db_table return klass def load(self): if not self.fields: self.analyze_csv() if not self.field_names and not self.has_header: self.field_names = [ 'field_%d' % i for i in range(len(self.fields))] reader_obj = self.get_reader(self.file_or_name, **self.reader_kwargs) with reader_obj as reader: if not self.field_names: row = next(reader) self.field_names = [self.clean_field_name(col) for col in row] elif self.has_header: next(reader) ModelClass = self.get_model_class(self.field_names, self.fields) with self.database.transaction(): ModelClass.create_table(True) for row in reader: insert = {} for field_name, value in zip(self.field_names, row): if value: if decode_value: value = value.decode('utf-8') insert[field_name] = value if insert: ModelClass.insert(**insert).execute() return ModelClass def load_csv(db_or_model, file_or_name, fields=None, field_names=None, has_header=True, sample_size=10, converter=None, db_table=None, pk_in_csv=False, **reader_kwargs): loader = Loader( db_or_model=db_or_model, file_or_name=file_or_name, fields=fields, field_names=field_names, has_header=has_header, sample_size=sample_size, converter=converter, db_table=db_table, pk_in_csv=pk_in_csv, **reader_kwargs) return loader.load() load_csv.__doc__ = Loader.__doc__ def dump_csv(query, file_or_name, include_header=True, close_file=True, append=True, csv_writer=None): """ Create a CSV dump of a query. 
""" if isinstance(file_or_name, basestring): fh = open(file_or_name, append and 'a' or 'w') else: fh = file_or_name if append: fh.seek(0, 2) writer = csv_writer or csv.writer( fh, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL) if include_header: header = [] for idx, node in enumerate(query._select): if node._alias: header.append(node._alias) elif isinstance(node, (Field, Func)): header.append(node.name) else: header.append('col_%s' % idx) writer.writerow(header) for row in query.tuples().iterator(): writer.writerow(row) if close_file: fh.close() return fh peewee-2.10.2/playhouse/dataset.py000066400000000000000000000260441316645060400170710ustar00rootroot00000000000000import csv import datetime from decimal import Decimal import json import operator try: from urlparse import urlparse except ImportError: from urllib.parse import urlparse import sys from peewee import * from playhouse.db_url import connect from playhouse.migrate import migrate from playhouse.migrate import SchemaMigrator from playhouse.reflection import Introspector if sys.version_info[0] == 3: basestring = str from functools import reduce class DataSet(object): def __init__(self, url): self._url = url parse_result = urlparse(url) self._database_path = parse_result.path[1:] # Connect to the database. self._database = connect(url) self._database.connect() # Introspect the database and generate models. self._introspector = Introspector.from_database(self._database) self._models = self._introspector.generate_models( skip_invalid=True, literal_column_names=True) self._migrator = SchemaMigrator.from_database(self._database) class BaseModel(Model): class Meta: database = self._database self._base_model = BaseModel self._export_formats = self.get_export_formats() self._import_formats = self.get_import_formats() def __repr__(self): return '' % self._database_path def get_export_formats(self): return { 'csv': CSVExporter, 'json': JSONExporter} def get_import_formats(self): return { 'csv': CSVImporter, 'json': JSONImporter} def __getitem__(self, table): if table not in self._models and table in self.tables: self.update_cache(table) return Table(self, table, self._models.get(table)) @property def tables(self): return self._database.get_tables() def __contains__(self, table): return table in self.tables def connect(self): self._database.connect() def close(self): self._database.close() def update_cache(self, table=None): if table: dependencies = [table] if table in self._models: model_class = self._models[table] dependencies.extend([ related._meta.db_table for related in model_class._meta.related_models(backrefs=True)]) else: dependencies = None # Update all tables. updated = self._introspector.generate_models( skip_invalid=True, table_names=dependencies, literal_column_names=True) self._models.update(updated) def __enter__(self): self.connect() return self def __exit__(self, exc_type, exc_val, exc_tb): if not self._database.is_closed(): self.close() def query(self, sql, params=None, commit=True): return self._database.execute_sql(sql, params, commit) def transaction(self): if self._database.transaction_depth() == 0: return self._database.transaction() else: return self._database.savepoint() def _check_arguments(self, filename, file_obj, format, format_dict): if filename and file_obj: raise ValueError('file is over-specified. 
Please use either ' 'filename or file_obj, but not both.') if not filename and not file_obj: raise ValueError('A filename or file-like object must be ' 'specified.') if format not in format_dict: valid_formats = ', '.join(sorted(format_dict.keys())) raise ValueError('Unsupported format "%s". Use one of %s.' % ( format, valid_formats)) def freeze(self, query, format='csv', filename=None, file_obj=None, **kwargs): self._check_arguments(filename, file_obj, format, self._export_formats) if filename: file_obj = open(filename, 'w') exporter = self._export_formats[format](query) exporter.export(file_obj, **kwargs) if filename: file_obj.close() def thaw(self, table, format='csv', filename=None, file_obj=None, strict=False, **kwargs): self._check_arguments(filename, file_obj, format, self._export_formats) if filename: file_obj = open(filename, 'r') importer = self._import_formats[format](self[table], strict) count = importer.load(file_obj, **kwargs) if filename: file_obj.close() return count class Table(object): def __init__(self, dataset, name, model_class): self.dataset = dataset self.name = name if model_class is None: model_class = self._create_model() model_class.create_table() self.dataset._models[name] = model_class @property def model_class(self): return self.dataset._models[self.name] def __repr__(self): return '' % self.name def __len__(self): return self.find().count() def __iter__(self): return iter(self.find().iterator()) def _create_model(self): class Meta: db_table = self.name return type( str(self.name), (self.dataset._base_model,), {'Meta': Meta}) def create_index(self, columns, unique=False): self.dataset._database.create_index( self.model_class, columns, unique=unique) def _guess_field_type(self, value): if isinstance(value, basestring): return TextField if isinstance(value, (datetime.date, datetime.datetime)): return DateTimeField elif value is True or value is False: return BooleanField elif isinstance(value, int): return IntegerField elif isinstance(value, float): return FloatField elif isinstance(value, Decimal): return DecimalField return TextField @property def columns(self): return [f.name for f in self.model_class._meta.sorted_fields] def _migrate_new_columns(self, data): new_keys = set(data) - set(self.model_class._meta.fields) if new_keys: operations = [] for key in new_keys: field_class = self._guess_field_type(data[key]) field = field_class(null=True) operations.append( self.dataset._migrator.add_column(self.name, key, field)) field.add_to_class(self.model_class, key) migrate(*operations) self.dataset.update_cache(self.name) def insert(self, **data): self._migrate_new_columns(data) return self.model_class.insert(**data).execute() def _apply_where(self, query, filters, conjunction=None): conjunction = conjunction or operator.and_ if filters: expressions = [ (self.model_class._meta.fields[column] == value) for column, value in filters.items()] query = query.where(reduce(conjunction, expressions)) return query def update(self, columns=None, conjunction=None, **data): self._migrate_new_columns(data) filters = {} if columns: for column in columns: filters[column] = data.pop(column) return self._apply_where( self.model_class.update(**data), filters, conjunction).execute() def _query(self, **query): return self._apply_where(self.model_class.select(), query) def find(self, **query): return self._query(**query).dicts() def find_one(self, **query): try: return self.find(**query).get() except self.model_class.DoesNotExist: return None def all(self): return self.find() def 
delete(self, **query): return self._apply_where(self.model_class.delete(), query).execute() def freeze(self, *args, **kwargs): return self.dataset.freeze(self.all(), *args, **kwargs) def thaw(self, *args, **kwargs): return self.dataset.thaw(self.name, *args, **kwargs) class Exporter(object): def __init__(self, query): self.query = query def export(self, file_obj): raise NotImplementedError class JSONExporter(Exporter): @staticmethod def default(o): if isinstance(o, (datetime.datetime, datetime.date, datetime.time)): return o.isoformat() elif isinstance(o, Decimal): return str(o) raise TypeError('Unable to serialize %r as JSON.' % o) def export(self, file_obj, **kwargs): json.dump( list(self.query), file_obj, default=JSONExporter.default, **kwargs) class CSVExporter(Exporter): def export(self, file_obj, header=True, **kwargs): writer = csv.writer(file_obj, **kwargs) if header and hasattr(self.query, '_select'): writer.writerow([field.name for field in self.query._select]) for row in self.query.tuples(): writer.writerow(row) class Importer(object): def __init__(self, table, strict=False): self.table = table self.strict = strict model = self.table.model_class self.columns = model._meta.columns self.columns.update(model._meta.fields) def load(self, file_obj): raise NotImplementedError class JSONImporter(Importer): def load(self, file_obj, **kwargs): data = json.load(file_obj, **kwargs) count = 0 for row in data: if self.strict: obj = {} for key in row: field = self.columns.get(key) if field is not None: obj[field.name] = field.python_value(row[key]) else: obj = row if obj: self.table.insert(**obj) count += 1 return count class CSVImporter(Importer): def load(self, file_obj, header=True, **kwargs): count = 0 reader = csv.reader(file_obj, **kwargs) if header: try: header_keys = next(reader) except StopIteration: return count if self.strict: header_fields = [] for idx, key in enumerate(header_keys): if key in self.columns: header_fields.append((idx, self.columns[key])) else: header_fields = list(enumerate(header_keys)) else: header_fields = list(enumerate(self.model._meta.sorted_fields)) if not header_fields: return count for row in reader: obj = {} for idx, field in header_fields: if self.strict: obj[field.name] = field.python_value(row[idx]) else: obj[field] = row[idx] self.table.insert(**obj) count += 1 return count peewee-2.10.2/playhouse/db_url.py000066400000000000000000000074651316645060400167210ustar00rootroot00000000000000try: from urlparse import urlparse, parse_qsl except ImportError: from urllib.parse import urlparse, parse_qsl from peewee import * from playhouse.pool import PooledMySQLDatabase from playhouse.pool import PooledPostgresqlDatabase from playhouse.pool import PooledSqliteDatabase from playhouse.pool import PooledSqliteExtDatabase from playhouse.sqlite_ext import SqliteExtDatabase schemes = { 'mysql': MySQLDatabase, 'mysql+pool': PooledMySQLDatabase, 'postgres': PostgresqlDatabase, 'postgresql': PostgresqlDatabase, 'postgres+pool': PooledPostgresqlDatabase, 'postgresql+pool': PooledPostgresqlDatabase, 'sqlite': SqliteDatabase, 'sqliteext': SqliteExtDatabase, 'sqlite+pool': PooledSqliteDatabase, 'sqliteext+pool': PooledSqliteExtDatabase, } def register_database(db_class, *names): global schemes for name in names: schemes[name] = db_class def parseresult_to_dict(parsed): # urlparse in python 2.6 is broken so query will be empty and instead # appended to path complete with '?' 
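    # For example, a hypothetical URL 'postgresql://user:pass@host:5432/db'
    # yields {'database': 'db', 'user': 'user', 'password': 'pass',
    # 'host': 'host', 'port': 5432}; query-string parameters, if present,
    # are folded in as extra connection keyword arguments below.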
path_parts = parsed.path[1:].split('?') try: query = path_parts[1] except IndexError: query = parsed.query connect_kwargs = {'database': path_parts[0]} if parsed.username: connect_kwargs['user'] = parsed.username if parsed.password: connect_kwargs['password'] = parsed.password if parsed.hostname: connect_kwargs['host'] = parsed.hostname if parsed.port: connect_kwargs['port'] = parsed.port # Adjust parameters for MySQL. if parsed.scheme == 'mysql' and 'password' in connect_kwargs: connect_kwargs['passwd'] = connect_kwargs.pop('password') elif 'sqlite' in parsed.scheme and not connect_kwargs['database']: connect_kwargs['database'] = ':memory:' # Get additional connection args from the query string qs_args = parse_qsl(query, keep_blank_values=True) for key, value in qs_args: if value.lower() == 'false': value = False elif value.lower() == 'true': value = True elif value.isdigit(): value = int(value) elif '.' in value and all(p.isdigit() for p in value.split('.', 1)): try: value = float(value) except ValueError: pass elif value.lower() in ('null', 'none'): value = None connect_kwargs[key] = value return connect_kwargs def parse(url): parsed = urlparse(url) return parseresult_to_dict(parsed) def connect(url, **connect_params): parsed = urlparse(url) connect_kwargs = parseresult_to_dict(parsed) connect_kwargs.update(connect_params) database_class = schemes.get(parsed.scheme) if database_class is None: if parsed.scheme in schemes: raise RuntimeError('Attempted to use "%s" but a required library ' 'could not be imported.' % parsed.scheme) else: raise RuntimeError('Unrecognized or unsupported scheme: "%s".' % parsed.scheme) return database_class(**connect_kwargs) # Conditionally register additional databases. try: from playhouse.pool import PooledPostgresqlExtDatabase except ImportError: pass else: register_database( PooledPostgresqlExtDatabase, 'postgresext+pool', 'postgresqlext+pool') try: from playhouse.apsw_ext import APSWDatabase except ImportError: pass else: register_database(APSWDatabase, 'apsw') try: from playhouse.berkeleydb import BerkeleyDatabase except ImportError: pass else: register_database(BerkeleyDatabase, 'berkeleydb') try: from playhouse.postgres_ext import PostgresqlExtDatabase except ImportError: pass else: register_database(PostgresqlExtDatabase, 'postgresext', 'postgresqlext') peewee-2.10.2/playhouse/djpeewee.py000066400000000000000000000173401316645060400172330ustar00rootroot00000000000000""" Simple translation of Django model classes to peewee model classes. """ from functools import partial import logging from peewee import * logger = logging.getLogger('peewee.playhouse.djpeewee') class AttrDict(dict): def __getattr__(self, attr): return self[attr] class DjangoTranslator(object): def __init__(self): self._field_map = self.get_django_field_map() def get_django_field_map(self): from django.db.models import fields as djf return [ (djf.AutoField, PrimaryKeyField), (djf.BigIntegerField, BigIntegerField), # (djf.BinaryField, BlobField), (djf.BooleanField, BooleanField), (djf.CharField, CharField), (djf.DateTimeField, DateTimeField), # Extends DateField.
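# Note: convert_field() returns the first isinstance() match, so subclasses
# (e.g. DateTimeField) must appear in this list before their parent classes
# (e.g. DateField).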
(djf.DateField, DateField), (djf.DecimalField, DecimalField), (djf.FilePathField, CharField), (djf.FloatField, FloatField), (djf.IntegerField, IntegerField), (djf.NullBooleanField, partial(BooleanField, null=True)), (djf.TextField, TextField), (djf.TimeField, TimeField), (djf.related.ForeignKey, ForeignKeyField), ] def convert_field(self, field): converted = None for django_field, peewee_field in self._field_map: if isinstance(field, django_field): converted = peewee_field break return converted def _translate_model(self, model, mapping, max_depth=None, backrefs=False, exclude=None): if exclude and model in exclude: return if max_depth is None: max_depth = -1 from django.db.models import fields as djf options = model._meta if mapping.get(options.object_name): return mapping[options.object_name] = None attrs = {} # Sort fields such that nullable fields appear last. field_key = lambda field: (field.null and 1 or 0, field) for model_field in sorted(options.fields, key=field_key): # Get peewee equivalent for this field type. converted = self.convert_field(model_field) # Special-case ForeignKey fields. if converted is ForeignKeyField: if max_depth != 0: related_model = model_field.rel.to model_name = related_model._meta.object_name # If we haven't processed the related model yet, do so now. if model_name not in mapping: mapping[model_name] = None # Avoid endless recursion. self._translate_model( related_model, mapping, max_depth=max_depth - 1, backrefs=backrefs, exclude=exclude) if mapping[model_name] is None: # Cycle detected, put an integer field here. logger.warn('Cycle detected: %s: %s', model_field.name, model_name) attrs[model_field.name] = IntegerField( db_column=model_field.column) else: related_name = (model_field.rel.related_name or model_field.related_query_name()) if related_name.endswith('+'): related_name = '__%s:%s:%s' % ( options, model_field.name, related_name.strip('+')) attrs[model_field.name] = ForeignKeyField( mapping[model_name], related_name=related_name, db_column=model_field.column, ) else: attrs[model_field.name] = IntegerField( db_column=model_field.column) elif converted: attrs[model_field.name] = converted( db_column=model_field.db_column) klass = type(options.object_name, (Model,), attrs) klass._meta.db_table = options.db_table klass._meta.database.interpolation = '%s' mapping[options.object_name] = klass if backrefs: # Follow back-references for foreign keys. try: all_related = [(f, f.model) for f in options.get_all_related_objects()] except AttributeError: all_related = [(f, f.field.model) for f in options.get_fields() if (f.one_to_many or f.one_to_one) and f.auto_created and not f.concrete] for rel_obj, model in all_related: if model._meta.object_name in mapping: continue self._translate_model( model, mapping, max_depth=max_depth - 1, backrefs=backrefs, exclude=exclude) # Load up many-to-many relationships. for many_to_many in options.many_to_many: if not isinstance(many_to_many, djf.related.ManyToManyField): continue self._translate_model( many_to_many.rel.through, mapping, max_depth=max_depth, # Do not decrement. backrefs=backrefs, exclude=exclude) def translate_models(self, *models, **options): """ Generate a group of peewee models analagous to the provided Django models for the purposes of creating queries. :param model: A Django model class. :param options: A dictionary of options, see note below. :returns: A dictionary mapping model names to peewee model classes. 
:rtype: dict Recognized options: `recurse`: Follow foreign keys (default: True) `max_depth`: Max depth to recurse (default: None, unlimited) `backrefs`: Follow backrefs (default: False) `exclude`: A list of models to exclude Example:: # Map Django models to peewee models. Foreign keys and M2M will be # traversed as well. peewee = translate(Account) # Generate query using peewee. PUser = peewee['User'] PAccount = peewee['Account'] query = (PUser .select() .join(PAccount) .where(PAccount.acct_type == 'foo')) # Django raw query. users = User.objects.raw(*query.sql()) """ mapping = AttrDict() recurse = options.get('recurse', True) max_depth = options.get('max_depth', None) backrefs = options.get('backrefs', False) exclude = options.get('exclude', None) if not recurse and max_depth: raise ValueError('Error, you cannot specify a max_depth when ' 'recurse=False.') elif not recurse: max_depth = 0 elif recurse and max_depth is None: max_depth = -1 for model in models: self._translate_model( model, mapping, max_depth=max_depth, backrefs=backrefs, exclude=exclude) return mapping try: import django translate = DjangoTranslator().translate_models except ImportError: pass peewee-2.10.2/playhouse/fields.py000066400000000000000000000241201316645060400167030ustar00rootroot00000000000000try: import cPickle as pickle except ImportError: import pickle import re import sys PY2 = sys.version_info[0] == 2 # Conditional standard library imports. try: from cStringIO import StringIO except ImportError: if sys.version_info[0] == 2: from StringIO import StringIO else: from io import StringIO try: import bz2 except ImportError: bz2 = None try: import zlib except ImportError: zlib = None try: from bcrypt import hashpw, gensalt except ImportError: hashpw = gensalt = None from peewee import * from peewee import binary_construct from peewee import Field from peewee import FieldDescriptor from peewee import SelectQuery from peewee import unicode_type if hashpw and gensalt: class PasswordHash(bytes): def check_password(self, password): password = password.encode('utf-8') return hashpw(password, self) == self class PasswordField(BlobField): def __init__(self, iterations=12, *args, **kwargs): if None in (hashpw, gensalt): raise ValueError('Missing library required for PasswordField: bcrypt') self.bcrypt_iterations = iterations self.raw_password = None super(PasswordField, self).__init__(*args, **kwargs) def db_value(self, value): """Convert the python value for storage in the database.""" if isinstance(value, PasswordHash): return bytes(value) if isinstance(value, unicode_type): value = value.encode('utf-8') salt = gensalt(self.bcrypt_iterations) return value if value is None else hashpw(value, salt) def python_value(self, value): """Convert the database value to a pythonic value.""" if isinstance(value, unicode_type): value = value.encode('utf-8') return PasswordHash(value) class DeferredThroughModel(object): def set_field(self, model_class, field, name): self.model_class = model_class self.field = field self.name = name def set_model(self, through_model): self.field._through_model = through_model self.field.add_to_class(self.model_class, self.name) class ManyToManyField(Field): def __init__(self, rel_model, related_name=None, through_model=None, _is_backref=False): if through_model is not None and not ( isinstance(through_model, (Proxy, DeferredThroughModel)) or issubclass(through_model, Model)): raise TypeError('Unexpected value for `through_model`. 
Expected ' '`Model`, `Proxy` or `DeferredThroughModel`.') self.rel_model = rel_model self._related_name = related_name self._through_model = through_model self._is_backref = _is_backref self.primary_key = False self.verbose_name = None def _get_descriptor(self): return ManyToManyFieldDescriptor(self) def add_to_class(self, model_class, name): if isinstance(self._through_model, Proxy): def callback(through_model): self._through_model = through_model self.add_to_class(model_class, name) self._through_model.attach_callback(callback) return elif isinstance(self._through_model, DeferredThroughModel): self._through_model.set_field(model_class, self, name) return self.name = name self.model_class = model_class if not self.verbose_name: self.verbose_name = re.sub('_+', ' ', name).title() setattr(model_class, name, self._get_descriptor()) if not self._is_backref: backref = ManyToManyField( self.model_class, through_model=self._through_model, _is_backref=True) related_name = self._related_name or model_class._meta.name + 's' backref.add_to_class(self.rel_model, related_name) def get_models(self): return [model for _, model in sorted(( (self._is_backref, self.model_class), (not self._is_backref, self.rel_model)))] def get_through_model(self): if not self._through_model: lhs, rhs = self.get_models() tables = [model._meta.db_table for model in (lhs, rhs)] class Meta: database = self.model_class._meta.database db_table = '%s_%s_through' % tuple(tables) indexes = ( ((lhs._meta.name, rhs._meta.name), True),) validate_backrefs = False attrs = { lhs._meta.name: ForeignKeyField(rel_model=lhs), rhs._meta.name: ForeignKeyField(rel_model=rhs)} attrs['Meta'] = Meta self._through_model = type( '%s%sThrough' % (lhs.__name__, rhs.__name__), (Model,), attrs) return self._through_model class ManyToManyFieldDescriptor(FieldDescriptor): def __init__(self, field): super(ManyToManyFieldDescriptor, self).__init__(field) self.model_class = field.model_class self.rel_model = field.rel_model self.through_model = field.get_through_model() self.src_fk = self.through_model._meta.rel_for_model(self.model_class) self.dest_fk = self.through_model._meta.rel_for_model(self.rel_model) def __get__(self, instance, instance_type=None): if instance is not None: return (ManyToManyQuery(instance, self, self.rel_model) .select() .join(self.through_model) .join(self.model_class) .where(self.src_fk == instance)) return self.field def __set__(self, instance, value): query = self.__get__(instance) query.add(value, clear_existing=True) class ManyToManyQuery(SelectQuery): def __init__(self, instance, field_descriptor, *args, **kwargs): self._instance = instance self._field_descriptor = field_descriptor super(ManyToManyQuery, self).__init__(*args, **kwargs) def clone(self): query = type(self)( self._instance, self._field_descriptor, self.model_class) query.database = self.database return self._clone_attributes(query) def _id_list(self, model_or_id_list): if isinstance(model_or_id_list[0], Model): return [obj.get_id() for obj in model_or_id_list] return model_or_id_list def add(self, value, clear_existing=False): if clear_existing: self.clear() fd = self._field_descriptor if isinstance(value, SelectQuery): query = value.select( SQL(str(self._instance.get_id())), fd.rel_model._meta.primary_key) fd.through_model.insert_from( fields=[fd.src_fk, fd.dest_fk], query=query).execute() else: if not isinstance(value, (list, tuple)): value = [value] if not value: return inserts = [{ fd.src_fk.name: self._instance.get_id(), fd.dest_fk.name: rel_id} for rel_id 
in self._id_list(value)] fd.through_model.insert_many(inserts).execute() def remove(self, value): fd = self._field_descriptor if isinstance(value, SelectQuery): subquery = value.select(value.model_class._meta.primary_key) return (fd.through_model .delete() .where( (fd.dest_fk << subquery) & (fd.src_fk == self._instance.get_id())) .execute()) else: if not isinstance(value, (list, tuple)): value = [value] if not value: return return (fd.through_model .delete() .where( (fd.dest_fk << self._id_list(value)) & (fd.src_fk == self._instance.get_id())) .execute()) def clear(self): return (self._field_descriptor.through_model .delete() .where(self._field_descriptor.src_fk == self._instance) .execute()) class CompressedField(BlobField): ZLIB = 'zlib' BZ2 = 'bz2' algorithm_to_import = { ZLIB: zlib, BZ2: bz2, } def __init__(self, compression_level=6, algorithm=ZLIB, *args, **kwargs): self.compression_level = compression_level if algorithm not in self.algorithm_to_import: raise ValueError('Unrecognized algorithm %s' % algorithm) compress_module = self.algorithm_to_import[algorithm] if compress_module is None: raise ValueError('Missing library required for %s.' % algorithm) self.algorithm = algorithm self.compress = compress_module.compress self.decompress = compress_module.decompress super(CompressedField, self).__init__(*args, **kwargs) if PY2: def db_value(self, value): if value is not None: return binary_construct( self.compress(value, self.compression_level)) def python_value(self, value): if value is not None: return self.decompress(value) else: def db_value(self, value): if value is not None: return self.compress( binary_construct(value), self.compression_level) def python_value(self, value): if value is not None: return self.decompress(value).decode('utf-8') class PickledField(BlobField): def db_value(self, value): if value is not None: return pickle.dumps(value) def python_value(self, value): if value is not None: if isinstance(value, unicode_type): value = value.encode('raw_unicode_escape') return pickle.loads(value) peewee-2.10.2/playhouse/flask_utils.py000066400000000000000000000130371316645060400177620ustar00rootroot00000000000000import math import sys from flask import abort from flask import render_template from flask import request from peewee import Database from peewee import DoesNotExist from peewee import Model from peewee import Proxy from peewee import SelectQuery from playhouse.db_url import connect as db_url_connect class PaginatedQuery(object): def __init__(self, query_or_model, paginate_by, page_var='page', check_bounds=False): self.paginate_by = paginate_by self.page_var = page_var self.check_bounds = check_bounds if isinstance(query_or_model, SelectQuery): self.query = query_or_model self.model = self.query.model_class else: self.model = query_or_model self.query = self.model.select() def get_page(self): curr_page = request.args.get(self.page_var) if curr_page and curr_page.isdigit(): return max(1, int(curr_page)) return 1 def get_page_count(self): return int(math.ceil(float(self.query.count()) / self.paginate_by)) def get_object_list(self): if self.check_bounds and self.get_page() > self.get_page_count(): abort(404) return self.query.paginate(self.get_page(), self.paginate_by) def get_object_or_404(query_or_model, *query): if not isinstance(query_or_model, SelectQuery): query_or_model = query_or_model.select() try: return query_or_model.where(*query).get() except DoesNotExist: abort(404) def object_list(template_name, query, context_variable='object_list', paginate_by=20, 
page_var='page', check_bounds=True, **kwargs): paginated_query = PaginatedQuery( query, paginate_by, page_var, check_bounds) kwargs[context_variable] = paginated_query.get_object_list() return render_template( template_name, pagination=paginated_query, page=paginated_query.get_page(), **kwargs) def get_current_url(): if not request.query_string: return request.path return '%s?%s' % (request.path, request.query_string) def get_next_url(default='/'): if request.args.get('next'): return request.args['next'] elif request.form.get('next'): return request.form['next'] return default class FlaskDB(object): def __init__(self, app=None, database=None): self.database = None # Reference to actual Peewee database instance. self._app = app self._db = database # dict, url, Database, or None (default). if app is not None: self.init_app(app) def init_app(self, app): self._app = app if self._db is None: if 'DATABASE' in app.config: initial_db = app.config['DATABASE'] elif 'DATABASE_URL' in app.config: initial_db = app.config['DATABASE_URL'] else: raise ValueError('Missing required configuration data for ' 'database: DATABASE or DATABASE_URL.') else: initial_db = self._db self._load_database(app, initial_db) self._register_handlers(app) def _load_database(self, app, config_value): if isinstance(config_value, Database): database = config_value elif isinstance(config_value, dict): database = self._load_from_config_dict(dict(config_value)) else: # Assume a database connection URL. database = db_url_connect(config_value) if isinstance(self.database, Proxy): self.database.initialize(database) else: self.database = database def _load_from_config_dict(self, config_dict): try: name = config_dict.pop('name') engine = config_dict.pop('engine') except KeyError: raise RuntimeError('DATABASE configuration must specify a ' '`name` and `engine`.') if '.' in engine: path, class_name = engine.rsplit('.', 1) else: path, class_name = 'peewee', engine try: __import__(path) module = sys.modules[path] database_class = getattr(module, class_name) assert issubclass(database_class, Database) except ImportError: raise RuntimeError('Unable to import %s' % engine) except AttributeError: raise RuntimeError('Database engine not found %s' % engine) except AssertionError: raise RuntimeError('Database engine not a subclass of ' 'peewee.Database: %s' % engine) return database_class(name, **config_dict) def _register_handlers(self, app): app.before_request(self.connect_db) app.teardown_request(self.close_db) def get_model_class(self): if self.database is None: raise RuntimeError('Database must be initialized.') class BaseModel(Model): class Meta: database = self.database return BaseModel @property def Model(self): if self._app is None: database = getattr(self, 'database', None) if database is None: self.database = Proxy() if not hasattr(self, '_model_class'): self._model_class = self.get_model_class() return self._model_class def connect_db(self): self.database.connect() def close_db(self, exc): if not self.database.is_closed(): self.database.close() peewee-2.10.2/playhouse/gfk.py000066400000000000000000000136231316645060400162120ustar00rootroot00000000000000""" Provide a "Generic ForeignKey", similar to Django. A "GFK" is composed of two columns: an object ID and an object type identifier. The object types are collected in a global registry (all_models), so all you need to do is subclass ``gfk.Model`` and your model will be added to the registry. 
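Under the hood, the object type is stored as the related model's db_table, and
lookups are resolved at read time against the registry (see get_model below).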
Example: class Tag(Model): tag = CharField() object_type = CharField(null=True) object_id = IntegerField(null=True) object = GFKField('object_type', 'object_id') class Blog(Model): tags = ReverseGFK(Tag, 'object_type', 'object_id') class Photo(Model): tags = ReverseGFK(Tag, 'object_type', 'object_id') tag.object -> a blog or photo blog.tags -> select query of tags for ``blog`` instance Blog.tags -> select query of all tags for Blog instances """ from peewee import * from peewee import BaseModel as _BaseModel from peewee import Model as _Model from peewee import SelectQuery from peewee import UpdateQuery from peewee import with_metaclass all_models = set() table_cache = {} class BaseModel(_BaseModel): def __new__(cls, name, bases, attrs): cls = super(BaseModel, cls).__new__(cls, name, bases, attrs) if name not in ('_metaclass_helper_', 'Model'): all_models.add(cls) return cls class Model(with_metaclass(BaseModel, _Model)): pass def get_model(tbl_name): if tbl_name not in table_cache: for model in all_models: if model._meta.db_table == tbl_name: table_cache[tbl_name] = model break return table_cache.get(tbl_name) class BoundGFKField(object): __slots__ = ('model_class', 'gfk_field') def __init__(self, model_class, gfk_field): self.model_class = model_class self.gfk_field = gfk_field @property def unique(self): indexes = self.model_class._meta.indexes fields = set((self.gfk_field.model_type_field, self.gfk_field.model_id_field)) for (indexed_columns, is_unique) in indexes: if not fields - set(indexed_columns): return True return False @property def primary_key(self): pk = self.model_class._meta.primary_key if isinstance(pk, CompositeKey): fields = set((self.gfk_field.model_type_field, self.gfk_field.model_id_field)) if not fields - set(pk.field_names): return True return False def __eq__(self, other): meta = self.model_class._meta type_field = meta.fields[self.gfk_field.model_type_field] id_field = meta.fields[self.gfk_field.model_id_field] return ( (type_field == other._meta.db_table) & (id_field == other._get_pk_value())) def __ne__(self, other): other_cls = type(other) type_field = other._meta.fields[self.gfk_field.model_type_field] id_field = other._meta.fields[self.gfk_field.model_id_field] return ( (type_field == other._meta.db_table) & (id_field != other._get_pk_value())) class GFKField(object): def __init__(self, model_type_field='object_type', model_id_field='object_id'): self.model_type_field = model_type_field self.model_id_field = model_id_field self.att_name = '.'.join((self.model_type_field, self.model_id_field)) def get_obj(self, instance): data = instance._data if data.get(self.model_type_field) and data.get(self.model_id_field): tbl_name = data[self.model_type_field] model_class = get_model(tbl_name) if not model_class: raise AttributeError('Model for table "%s" not found in GFK ' 'lookup.' 
% tbl_name) query = model_class.select().where( model_class._meta.primary_key == data[self.model_id_field]) return query.get() def __get__(self, instance, instance_type=None): if instance: if self.att_name not in instance._obj_cache: rel_obj = self.get_obj(instance) if rel_obj: instance._obj_cache[self.att_name] = rel_obj return instance._obj_cache.get(self.att_name) return BoundGFKField(instance_type, self) def __set__(self, instance, value): instance._obj_cache[self.att_name] = value instance._data[self.model_type_field] = value._meta.db_table instance._data[self.model_id_field] = value._get_pk_value() class ReverseGFK(object): def __init__(self, model, model_type_field='object_type', model_id_field='object_id'): self.model_class = model self.model_type_field = model._meta.fields[model_type_field] self.model_id_field = model._meta.fields[model_id_field] def __get__(self, instance, instance_type=None): if instance: return self.model_class.select().where( (self.model_type_field == instance._meta.db_table) & (self.model_id_field == instance._get_pk_value()) ) else: return self.model_class.select().where( self.model_type_field == instance_type._meta.db_table ) def __set__(self, instance, value): mtv = instance._meta.db_table miv = instance._get_pk_value() if (isinstance(value, SelectQuery) and value.model_class == self.model_class): UpdateQuery(self.model_class, { self.model_type_field: mtv, self.model_id_field: miv, }).where(value._where).execute() elif all(map(lambda i: isinstance(i, self.model_class), value)): for obj in value: setattr(obj, self.model_type_field.name, mtv) setattr(obj, self.model_id_field.name, miv) obj.save() else: raise ValueError('ReverseGFK field unable to handle "%s"' % value) peewee-2.10.2/playhouse/hybrid.py000066400000000000000000000027011316645060400167170ustar00rootroot00000000000000# Hybrid methods/attributes, based on similar functionality in SQLAlchemy: # http://docs.sqlalchemy.org/en/improve_toc/orm/extensions/hybrid.html class hybrid_method(object): def __init__(self, func, expr=None): self.func = func self.expr = expr or func def __get__(self, instance, instance_type): if instance is None: return self.expr.__get__(instance_type, instance_type.__class__) return self.func.__get__(instance, instance_type) def expression(self, expr): self.expr = expr return self class hybrid_property(object): def __init__(self, fget, fset=None, fdel=None, expr=None): self.fget = fget self.fset = fset self.fdel = fdel self.expr = expr or fget def __get__(self, instance, instance_type): if instance is None: return self.expr(instance_type) return self.fget(instance) def __set__(self, instance, value): if self.fset is None: raise AttributeError('Cannot set attribute.') self.fset(instance, value) def __delete__(self, instance): if self.fdel is None: raise AttributeError('Cannot delete attribute.') self.fdel(instance) def setter(self, fset): self.fset = fset return self def deleter(self, fdel): self.fdel = fdel return self def expression(self, expr): self.expr = expr return self peewee-2.10.2/playhouse/kv.py000066400000000000000000000113071316645060400160600ustar00rootroot00000000000000import operator import pickle try: import simplejson as json except ImportError: import json from peewee import * from peewee import Node from playhouse.fields import PickledField try: from playhouse.apsw_ext import APSWDatabase def KeyValueDatabase(db_name, **kwargs): return APSWDatabase(db_name, **kwargs) except ImportError: def KeyValueDatabase(db_name, **kwargs): return SqliteDatabase(db_name, 
check_same_thread=False, **kwargs) Sentinel = type('Sentinel', (object,), {}) key_value_db = KeyValueDatabase(':memory:', threadlocals=False) class JSONField(TextField): def db_value(self, value): return json.dumps(value) def python_value(self, value): if value is not None: return json.loads(value) class KeyStore(object): """ Rich dictionary with support for storing a wide variety of data types. :param peewee.Field value_type: Field type to use for values. :param boolean ordered: Whether keys should be returned in sorted order. :param peewee.Model model: Model class to use for Keys/Values. """ def __init__(self, value_field, ordered=False, database=None): self._value_field = value_field self._ordered = ordered self._database = database or key_value_db self._compiler = self._database.compiler() self.model = self.create_model() self.key = self.model.key self.value = self.model.value self._database.create_table(self.model, True) self._native_upsert = isinstance(self._database, SqliteDatabase) def create_model(self): class KVModel(Model): key = CharField(max_length=255, primary_key=True) value = self._value_field class Meta: database = self._database return KVModel def query(self, *select): query = self.model.select(*select).tuples() if self._ordered: query = query.order_by(self.key) return query def convert_node(self, node): if not isinstance(node, Node): return (self.key == node), True return node, False def __contains__(self, key): node, _ = self.convert_node(key) return self.model.select().where(node).exists() def __len__(self): return self.model.select().count() def __getitem__(self, node): converted, is_single = self.convert_node(node) result = self.query(self.value).where(converted) item_getter = operator.itemgetter(0) result = [item_getter(val) for val in result] if len(result) == 0 and is_single: raise KeyError(node) elif is_single: return result[0] return result def _upsert(self, key, value): self.model.insert(**{ self.key.name: key, self.value.name: value}).upsert().execute() def __setitem__(self, node, value): if isinstance(node, Node): update = {self.value.name: value} self.model.update(**update).where(node).execute() elif self._native_upsert: self._upsert(node, value) else: try: self.model.create(key=node, value=value) except: self._database.rollback() (self.model .update(**{self.value.name: value}) .where(self.key == node) .execute()) def __delitem__(self, node): converted, _ = self.convert_node(node) self.model.delete().where(converted).execute() def __iter__(self): return self.query().execute() def keys(self): return map(operator.itemgetter(0), self.query(self.key)) def values(self): return map(operator.itemgetter(0), self.query(self.value)) def items(self): return iter(self) def get(self, k, default=None): try: return self[k] except KeyError: return default def pop(self, k, default=Sentinel): with self._database.transaction(): node, is_single = self.convert_node(k) try: res = self[k] except KeyError: if default is Sentinel: raise return default del(self[node]) return res def clear(self): self.model.delete().execute() class PickledKeyStore(KeyStore): def __init__(self, ordered=False, database=None): super(PickledKeyStore, self).__init__( PickledField(), ordered, database) class JSONKeyStore(KeyStore): def __init__(self, ordered=False, database=None): field = JSONField(null=True) super(JSONKeyStore, self).__init__(field, ordered, database) peewee-2.10.2/playhouse/migrate.py000066400000000000000000000573631316645060400171040ustar00rootroot00000000000000""" Lightweight schema 
migrations. NOTE: Currently tested with SQLite and Postgresql. MySQL may be missing some features. Example Usage ------------- Instantiate a migrator: # Postgres example: my_db = PostgresqlDatabase(...) migrator = PostgresqlMigrator(my_db) # SQLite example: my_db = SqliteDatabase('my_database.db') migrator = SqliteMigrator(my_db) Then you will use the `migrate` function to run various `Operation`s which are generated by the migrator: migrate( migrator.add_column('some_table', 'column_name', CharField(default='')) ) Migrations are not run inside a transaction, so if you wish the migration to run in a transaction you will need to wrap the call to `migrate` in a transaction block, e.g.: with my_db.transaction(): migrate(...) Supported Operations -------------------- Add new field(s) to an existing model: # Create your field instances. For non-null fields you must specify a # default value. pubdate_field = DateTimeField(null=True) comment_field = TextField(default='') # Run the migration, specifying the database table, field name and field. migrate( migrator.add_column('comment_tbl', 'pub_date', pubdate_field), migrator.add_column('comment_tbl', 'comment', comment_field), ) Renaming a field: # Specify the table, original name of the column, and its new name. migrate( migrator.rename_column('story', 'pub_date', 'publish_date'), migrator.rename_column('story', 'mod_date', 'modified_date'), ) Dropping a field: migrate( migrator.drop_column('story', 'some_old_field'), ) Making a field nullable or not nullable: # Note that when making a field not null that field must not have any # NULL values present. migrate( # Make `pub_date` allow NULL values. migrator.drop_not_null('story', 'pub_date'), # Prevent `modified_date` from containing NULL values. migrator.add_not_null('story', 'modified_date'), ) Renaming a table: migrate( migrator.rename_table('story', 'stories_tbl'), ) Adding an index: # Specify the table, column names, and whether the index should be # UNIQUE or not. migrate( # Create an index on the `pub_date` column. migrator.add_index('story', ('pub_date',), False), # Create a multi-column index on the `pub_date` and `status` fields. migrator.add_index('story', ('pub_date', 'status'), False), # Create a unique index on the category and title fields. migrator.add_index('story', ('category_id', 'title'), True), ) Dropping an index: # Specify the index name. 
migrate(migrator.drop_index('story', 'story_pub_date_status')) """ from collections import namedtuple import functools import re from peewee import * from peewee import CommaClause from peewee import EnclosedClause from peewee import Entity from peewee import Expression from peewee import Node from peewee import OP class Operation(object): """Encapsulate a single schema altering operation.""" def __init__(self, migrator, method, *args, **kwargs): self.migrator = migrator self.method = method self.args = args self.kwargs = kwargs def _parse_node(self, node): compiler = self.migrator.database.compiler() return compiler.parse_node(node) def execute(self, node): sql, params = self._parse_node(node) self.migrator.database.execute_sql(sql, params) def _handle_result(self, result): if isinstance(result, Node): self.execute(result) elif isinstance(result, Operation): result.run() elif isinstance(result, (list, tuple)): for item in result: self._handle_result(item) def run(self): kwargs = self.kwargs.copy() kwargs['generate'] = True self._handle_result( getattr(self.migrator, self.method)(*self.args, **kwargs)) def operation(fn): @functools.wraps(fn) def inner(self, *args, **kwargs): generate = kwargs.pop('generate', False) if generate: return fn(self, *args, **kwargs) return Operation(self, fn.__name__, *args, **kwargs) return inner class SchemaMigrator(object): explicit_create_foreign_key = False explicit_delete_foreign_key = False def __init__(self, database): self.database = database @classmethod def from_database(cls, database): if isinstance(database, PostgresqlDatabase): return PostgresqlMigrator(database) elif isinstance(database, MySQLDatabase): return MySQLMigrator(database) else: return SqliteMigrator(database) @operation def apply_default(self, table, column_name, field): default = field.default if callable(default): default = default() return Clause( SQL('UPDATE'), Entity(table), SQL('SET'), Expression( Entity(column_name), OP.EQ, Param(field.db_value(default)), flat=True)) @operation def alter_add_column(self, table, column_name, field): # Make field null at first. field_null, field.null = field.null, True field.name = field.db_column = column_name field_clause = self.database.compiler().field_definition(field) field.null = field_null parts = [ SQL('ALTER TABLE'), Entity(table), SQL('ADD COLUMN'), field_clause] if isinstance(field, ForeignKeyField): parts.extend(self.get_inline_fk_sql(field)) return Clause(*parts) def get_inline_fk_sql(self, field): return [ SQL('REFERENCES'), Entity(field.rel_model._meta.db_table), EnclosedClause(Entity(field.to_field.db_column)) ] @operation def add_foreign_key_constraint(self, table, column_name, field): raise NotImplementedError @operation def add_column(self, table, column_name, field): # Adding a column is complicated by the fact that if there are rows # present and the field is non-null, then we need to first add the # column as a nullable field, then set the value, then add a not null # constraint. if not field.null and field.default is None: raise ValueError('%s is not null but has no default' % column_name) is_foreign_key = isinstance(field, ForeignKeyField) # Foreign key fields must explicitly specify a `to_field`. if is_foreign_key and not field.to_field: raise ValueError('Foreign keys must specify a `to_field`.') operations = [self.alter_add_column(table, column_name, field)] # In the event the field is *not* nullable, update with the default # value and set not null. 
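# Roughly: ALTER TABLE t ADD COLUMN c ... (nullable); then
# UPDATE t SET c = <default>; then ALTER TABLE t ALTER COLUMN c SET NOT NULL.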
if not field.null: operations.extend([ self.apply_default(table, column_name, field), self.add_not_null(table, column_name)]) if is_foreign_key and self.explicit_create_foreign_key: operations.append( self.add_foreign_key_constraint( table, column_name, field.rel_model._meta.db_table, field.to_field.db_column)) if field.index or field.unique: operations.append( self.add_index(table, (column_name,), field.unique)) return operations @operation def drop_foreign_key_constraint(self, table, column_name): raise NotImplementedError @operation def drop_column(self, table, column_name, cascade=True): nodes = [ SQL('ALTER TABLE'), Entity(table), SQL('DROP COLUMN'), Entity(column_name)] if cascade: nodes.append(SQL('CASCADE')) drop_column_node = Clause(*nodes) fk_columns = [ foreign_key.column for foreign_key in self.database.get_foreign_keys(table)] if column_name in fk_columns and self.explicit_delete_foreign_key: return [ self.drop_foreign_key_constraint(table, column_name), drop_column_node] else: return drop_column_node @operation def rename_column(self, table, old_name, new_name): return Clause( SQL('ALTER TABLE'), Entity(table), SQL('RENAME COLUMN'), Entity(old_name), SQL('TO'), Entity(new_name)) def _alter_column(self, table, column): return [ SQL('ALTER TABLE'), Entity(table), SQL('ALTER COLUMN'), Entity(column)] @operation def add_not_null(self, table, column): nodes = self._alter_column(table, column) nodes.append(SQL('SET NOT NULL')) return Clause(*nodes) @operation def drop_not_null(self, table, column): nodes = self._alter_column(table, column) nodes.append(SQL('DROP NOT NULL')) return Clause(*nodes) @operation def rename_table(self, old_name, new_name): return Clause( SQL('ALTER TABLE'), Entity(old_name), SQL('RENAME TO'), Entity(new_name)) @operation def add_index(self, table, columns, unique=False): compiler = self.database.compiler() statement = 'CREATE UNIQUE INDEX' if unique else 'CREATE INDEX' return Clause( SQL(statement), Entity(compiler.index_name(table, columns)), SQL('ON'), Entity(table), EnclosedClause(*[Entity(column) for column in columns])) @operation def drop_index(self, table, index_name): return Clause( SQL('DROP INDEX'), Entity(index_name)) class PostgresqlMigrator(SchemaMigrator): def _primary_key_columns(self, tbl): query = """ SELECT pg_attribute.attname FROM pg_index, pg_class, pg_attribute WHERE pg_class.oid = '%s'::regclass AND indrelid = pg_class.oid AND pg_attribute.attrelid = pg_class.oid AND pg_attribute.attnum = any(pg_index.indkey) AND indisprimary; """ cursor = self.database.execute_sql(query % tbl) return [row[0] for row in cursor.fetchall()] @operation def rename_table(self, old_name, new_name): pk_names = self._primary_key_columns(old_name) ParentClass = super(PostgresqlMigrator, self) operations = [ ParentClass.rename_table(old_name, new_name, generate=True)] if len(pk_names) == 1: # Check for existence of primary key sequence. 
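# Serial primary keys get an implicit sequence named <table>_<column>_seq,
# and renaming the table does not rename the sequence, so rename it too.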
seq_name = '%s_%s_seq' % (old_name, pk_names[0]) query = """ SELECT 1 FROM information_schema.sequences WHERE LOWER(sequence_name) = LOWER(%s) """ cursor = self.database.execute_sql(query, (seq_name,)) if bool(cursor.fetchone()): new_seq_name = '%s_%s_seq' % (new_name, pk_names[0]) operations.append(ParentClass.rename_table( seq_name, new_seq_name, generate=True)) return operations _column_attributes = ('name', 'definition', 'null', 'pk', 'default', 'extra') class MySQLColumn(namedtuple('_Column', _column_attributes)): @property def is_pk(self): return self.pk == 'PRI' @property def is_unique(self): return self.pk == 'UNI' @property def is_null(self): return self.null == 'YES' def sql(self, column_name=None, is_null=None): if is_null is None: is_null = self.is_null if column_name is None: column_name = self.name parts = [ Entity(column_name), SQL(self.definition)] if self.is_unique: parts.append(SQL('UNIQUE')) if is_null: parts.append(SQL('NULL')) else: parts.append(SQL('NOT NULL')) if self.is_pk: parts.append(SQL('PRIMARY KEY')) if self.extra: parts.append(SQL(self.extra)) return Clause(*parts) class MySQLMigrator(SchemaMigrator): explicit_create_foreign_key = True explicit_delete_foreign_key = True @operation def rename_table(self, old_name, new_name): return Clause( SQL('RENAME TABLE'), Entity(old_name), SQL('TO'), Entity(new_name)) def _get_column_definition(self, table, column_name): cursor = self.database.execute_sql('DESCRIBE %s;' % table) rows = cursor.fetchall() for row in rows: column = MySQLColumn(*row) if column.name == column_name: return column return False @operation def add_foreign_key_constraint(self, table, column_name, rel, rel_column): # TODO: refactor, this duplicates QueryCompiler._create_foreign_key constraint = 'fk_%s_%s_refs_%s' % (table, column_name, rel) return Clause( SQL('ALTER TABLE'), Entity(table), SQL('ADD CONSTRAINT'), Entity(constraint), SQL('FOREIGN KEY'), EnclosedClause(Entity(column_name)), SQL('REFERENCES'), Entity(rel), EnclosedClause(Entity(rel_column))) def get_foreign_key_constraint(self, table, column_name): cursor = self.database.execute_sql( ('SELECT constraint_name ' 'FROM information_schema.key_column_usage WHERE ' 'table_schema = DATABASE() AND ' 'table_name = %s AND ' 'column_name = %s AND ' 'referenced_table_name IS NOT NULL AND ' 'referenced_column_name IS NOT NULL;'), (table, column_name)) result = cursor.fetchone() if not result: raise AttributeError( 'Unable to find foreign key constraint for ' '"%s" on table "%s".' 
% (table, column_name)) return result[0] @operation def drop_foreign_key_constraint(self, table, column_name): return Clause( SQL('ALTER TABLE'), Entity(table), SQL('DROP FOREIGN KEY'), Entity(self.get_foreign_key_constraint(table, column_name))) def get_inline_fk_sql(self, field): return [] @operation def add_not_null(self, table, column): column = self._get_column_definition(table, column) return Clause( SQL('ALTER TABLE'), Entity(table), SQL('MODIFY'), column.sql(is_null=False)) @operation def drop_not_null(self, table, column): column = self._get_column_definition(table, column) if column.is_pk: raise ValueError('Primary keys can not be null') return Clause( SQL('ALTER TABLE'), Entity(table), SQL('MODIFY'), column.sql(is_null=True)) @operation def rename_column(self, table, old_name, new_name): fk_objects = dict( (fk.column, fk) for fk in self.database.get_foreign_keys(table)) is_foreign_key = old_name in fk_objects column = self._get_column_definition(table, old_name) rename_clause = Clause( SQL('ALTER TABLE'), Entity(table), SQL('CHANGE'), Entity(old_name), column.sql(column_name=new_name)) if is_foreign_key: fk_metadata = fk_objects[old_name] return [ self.drop_foreign_key_constraint(table, old_name), rename_clause, self.add_foreign_key_constraint( table, new_name, fk_metadata.dest_table, fk_metadata.dest_column), ] else: return rename_clause @operation def drop_index(self, table, index_name): return Clause( SQL('DROP INDEX'), Entity(index_name), SQL('ON'), Entity(table)) class SqliteMigrator(SchemaMigrator): """ SQLite supports a subset of ALTER TABLE queries, view the docs for the full details http://sqlite.org/lang_altertable.html """ column_re = re.compile('(.+?)\((.+)\)') column_split_re = re.compile(r'(?:[^,(]|\([^)]*\))+') column_name_re = re.compile('["`\']?([\w]+)') fk_re = re.compile('FOREIGN KEY\s+\("?([\w]+)"?\)\s+', re.I) def _get_column_names(self, table): res = self.database.execute_sql('select * from "%s" limit 1' % table) return [item[0] for item in res.description] def _get_create_table(self, table): res = self.database.execute_sql( ('select name, sql from sqlite_master ' 'where type=? and LOWER(name)=?'), ['table', table.lower()]) return res.fetchone() @operation def _update_column(self, table, column_to_update, fn): columns = set(column.name.lower() for column in self.database.get_columns(table)) if column_to_update.lower() not in columns: raise ValueError('Column "%s" does not exist on "%s"' % (column_to_update, table)) # Get the SQL used to create the given table. table, create_table = self._get_create_table(table) # Get the indexes and SQL to re-create indexes. indexes = self.database.get_indexes(table) # Find any foreign keys we may need to remove. self.database.get_foreign_keys(table) # Make sure the create_table does not contain any newlines or tabs, # allowing the regex to work correctly. create_table = re.sub(r'\s+', ' ', create_table) # Parse out the `CREATE TABLE` and column list portions of the query. raw_create, raw_columns = self.column_re.search(create_table).groups() # Clean up the individual column definitions. 
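# column_split_re only splits on top-level commas, so types with embedded
# commas, e.g. DECIMAL(10, 2), survive as a single column definition.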
split_columns = self.column_split_re.findall(raw_columns) column_defs = [col.strip() for col in split_columns] new_column_defs = [] new_column_names = [] original_column_names = [] for column_def in column_defs: column_name, = self.column_name_re.match(column_def).groups() if column_name == column_to_update: new_column_def = fn(column_name, column_def) if new_column_def: new_column_defs.append(new_column_def) original_column_names.append(column_name) column_name, = self.column_name_re.match( new_column_def).groups() new_column_names.append(column_name) else: new_column_defs.append(column_def) if not column_name.lower().startswith(('foreign', 'primary')): new_column_names.append(column_name) original_column_names.append(column_name) # Create a mapping of original columns to new columns. original_to_new = dict(zip(original_column_names, new_column_names)) new_column = original_to_new.get(column_to_update) fk_filter_fn = lambda column_def: column_def if not new_column: # Remove any foreign keys associated with this column. fk_filter_fn = lambda column_def: None elif new_column != column_to_update: # Update any foreign keys for this column. fk_filter_fn = lambda column_def: self.fk_re.sub( 'FOREIGN KEY ("%s") ' % new_column, column_def) cleaned_columns = [] for column_def in new_column_defs: match = self.fk_re.match(column_def) if match is not None and match.groups()[0] == column_to_update: column_def = fk_filter_fn(column_def) if column_def: cleaned_columns.append(column_def) # Update the name of the new CREATE TABLE query. temp_table = table + '__tmp__' rgx = re.compile('("?)%s("?)' % table, re.I) create = rgx.sub( '\\1%s\\2' % temp_table, raw_create) # Create the new table. columns = ', '.join(cleaned_columns) queries = [ Clause(SQL('DROP TABLE IF EXISTS'), Entity(temp_table)), SQL('%s (%s)' % (create.strip(), columns))] # Populate new table. populate_table = Clause( SQL('INSERT INTO'), Entity(temp_table), EnclosedClause(*[Entity(col) for col in new_column_names]), SQL('SELECT'), CommaClause(*[Entity(col) for col in original_column_names]), SQL('FROM'), Entity(table)) queries.append(populate_table) # Drop existing table and rename temp table. queries.append(Clause( SQL('DROP TABLE'), Entity(table))) queries.append(self.rename_table(temp_table, table)) # Re-create user-defined indexes. User-defined indexes will have a # non-empty SQL attribute. for index in filter(lambda idx: idx.sql, indexes): if column_to_update not in index.columns: queries.append(SQL(index.sql)) elif new_column: sql = self._fix_index(index.sql, column_to_update, new_column) if sql is not None: queries.append(SQL(sql)) return queries def _fix_index(self, sql, column_to_update, new_column): # Split on the name of the column to update. If it splits into two # pieces, then there's no ambiguity and we can simply replace the # old with the new. parts = sql.split(column_to_update) if len(parts) == 2: return sql.replace(column_to_update, new_column) # Find the list of columns in the index expression. lhs, rhs = sql.rsplit('(', 1) # Apply the same "split in two" logic to the column list portion of # the query. if len(rhs.split(column_to_update)) == 2: return '%s(%s' % (lhs, rhs.replace(column_to_update, new_column)) # Strip off the trailing parentheses and go through each column. parts = rhs.rsplit(')', 1)[0].split(',') columns = [part.strip('"`[]\' ') for part in parts] # `columns` looks something like: ['status', 'timestamp" DESC'] # https://www.sqlite.org/lang_keywords.html # Strip out any junk after the column name. 
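# e.g. when renaming 'timestamp', an entry like 'timestamp" DESC' has only
# the leading 'timestamp' swapped for the new name; the rest is preserved.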
clean = [] for column in columns: if re.match('%s(?:[\'"`\]]?\s|$)' % column_to_update, column): column = new_column + column[len(column_to_update):] clean.append(column) return '%s(%s)' % (lhs, ', '.join('"%s"' % c for c in clean)) @operation def drop_column(self, table, column_name, cascade=True): return self._update_column(table, column_name, lambda a, b: None) @operation def rename_column(self, table, old_name, new_name): def _rename(column_name, column_def): return column_def.replace(column_name, new_name) return self._update_column(table, old_name, _rename) @operation def add_not_null(self, table, column): def _add_not_null(column_name, column_def): return column_def + ' NOT NULL' return self._update_column(table, column, _add_not_null) @operation def drop_not_null(self, table, column): def _drop_not_null(column_name, column_def): return column_def.replace('NOT NULL', '') return self._update_column(table, column, _drop_not_null) def migrate(*operations, **kwargs): for operation in operations: operation.run() peewee-2.10.2/playhouse/pool.py000066400000000000000000000235021316645060400164110ustar00rootroot00000000000000""" Lightweight connection pooling for peewee. In a multi-threaded application, up to `max_connections` will be opened. Each thread (or, if using gevent, greenlet) will have its own connection. In a single-threaded application, only one connection will be created. It will be continually recycled until either it exceeds the stale timeout or is closed explicitly (using `.manual_close()`). By default, all your application needs to do is ensure that connections are closed when you are finished with them, and they will be returned to the pool. For web applications, this typically means that at the beginning of a request, you will open a connection, and when you return a response, you will close the connection. Simple Postgres pool example code: # Use the special postgresql extensions. from playhouse.pool import PooledPostgresqlExtDatabase db = PooledPostgresqlExtDatabase( 'my_app', max_connections=32, stale_timeout=300, # 5 minutes. user='postgres') class BaseModel(Model): class Meta: database = db That's it! In some situations you may want to manage your connections more explicitly. Since peewee stores the active connection in a threadlocal, this typically would mean that there could only ever be one connection open per thread. For most applications this is desirable, but if you would like to manually manage multiple connections you can create an *ExecutionContext*. Execution contexts allow finer-grained control over managing multiple connections to the database. When an execution context is initialized (either as a context manager or as a decorated function), a separate connection will be used for the duration of the wrapped block. You can also choose whether to wrap the block in a transaction. Execution context examples (using above `db` instance): with db.execution_context() as ctx: # A new connection will be opened or pulled from the pool of available # connections. Additionally, a transaction will be started. user = User.create(username='charlie') # When the block ends, the transaction will be committed and the connection # will be returned to the pool. @db.execution_context(with_transaction=False) def do_something(foo, bar): # When this function is called, a separate connection is made and will # be closed when the function returns.
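Note: when the pool is exhausted (`max_connections` connections checked out),
connecting raises MaxConnectionsExceeded. If a `timeout` is given, the pool
instead retries for up to `timeout` seconds before raising, e.g.:

    db = PooledPostgresqlExtDatabase(
        'my_app',
        max_connections=8,
        timeout=10,  # Wait up to 10s for a connection to be freed.
        user='postgres')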
""" import heapq import logging import threading import time try: from Queue import Queue except ImportError: from queue import Queue try: from psycopg2 import extensions as pg_extensions except ImportError: pg_extensions = None from peewee import MySQLDatabase from peewee import PostgresqlDatabase from peewee import SqliteDatabase logger = logging.getLogger('peewee.pool') def make_int(val): if val is not None and not isinstance(val, (int, float)): return int(val) return val class MaxConnectionsExceeded(ValueError): pass class PooledDatabase(object): def __init__(self, database, max_connections=20, stale_timeout=None, timeout=None, **kwargs): self.max_connections = make_int(max_connections) self.stale_timeout = make_int(stale_timeout) self.timeout = make_int(timeout) if self.timeout == 0: self.timeout = float('inf') self._closed = set() self._connections = [] self._in_use = {} self.conn_key = id if self.timeout: self._event = threading.Event() self._ready_queue = Queue() super(PooledDatabase, self).__init__(database, **kwargs) def init(self, database, max_connections=None, stale_timeout=None, timeout=None, **connect_kwargs): super(PooledDatabase, self).init(database, **connect_kwargs) if max_connections is not None: self.max_connections = make_int(max_connections) if stale_timeout is not None: self.stale_timeout = make_int(stale_timeout) if timeout is not None: self.timeout = make_int(timeout) if self.timeout == 0: self.timeout = float('inf') def connect(self): if self.timeout: start = time.time() while start + self.timeout > time.time(): try: super(PooledDatabase, self).connect() except MaxConnectionsExceeded: time.sleep(0.1) else: return raise MaxConnectionsExceeded('Max connections exceeded, timed out ' 'attempting to connect.') else: super(PooledDatabase, self).connect() def _connect(self, *args, **kwargs): while True: try: # Remove the oldest connection from the heap. ts, conn = heapq.heappop(self._connections) key = self.conn_key(conn) except IndexError: ts = conn = None logger.debug('No connection available in pool.') break else: if self._is_closed(key, conn): # This connecton was closed, but since it was not stale # it got added back to the queue of available conns. We # then closed it and marked it as explicitly closed, so # it's safe to throw it away now. # (Because Database.close() calls Database._close()). logger.debug('Connection %s was closed.', key) ts = conn = None self._closed.discard(key) elif self.stale_timeout and self._is_stale(ts): # If we are attempting to check out a stale connection, # then close it. We don't need to mark it in the "closed" # set, because it is not in the list of available conns # anymore. logger.debug('Connection %s was stale, closing.', key) self._close(conn, True) self._closed.discard(key) ts = conn = None else: break if conn is None: if self.max_connections and ( len(self._in_use) >= self.max_connections): raise MaxConnectionsExceeded('Exceeded maximum connections.') conn = super(PooledDatabase, self)._connect(*args, **kwargs) ts = time.time() key = self.conn_key(conn) logger.debug('Created new connection %s.', key) self._in_use[key] = ts return conn def _is_stale(self, timestamp): # Called on check-out and check-in to ensure the connection has # not outlived the stale timeout. return (time.time() - timestamp) > self.stale_timeout def _is_closed(self, key, conn): return key in self._closed def _can_reuse(self, conn): # Called on check-in to make sure the connection can be re-used. 
return True def _close(self, conn, close_conn=False): key = self.conn_key(conn) if close_conn: self._closed.add(key) super(PooledDatabase, self)._close(conn) elif key in self._in_use: ts = self._in_use[key] del self._in_use[key] if self.stale_timeout and self._is_stale(ts): logger.debug('Closing stale connection %s.', key) super(PooledDatabase, self)._close(conn) elif self._can_reuse(conn): logger.debug('Returning %s to pool.', key) heapq.heappush(self._connections, (ts, conn)) else: logger.debug('Closed %s.', key) def manual_close(self): """ Close the underlying connection without returning it to the pool. """ conn = self.get_conn() self.close() if not self._is_closed(self.conn_key(conn), conn): self._close(conn, close_conn=True) def close_all(self): """ Close all connections managed by the pool. """ for _, conn in self._connections: self._close(conn, close_conn=True) class PooledMySQLDatabase(PooledDatabase, MySQLDatabase): def _is_closed(self, key, conn): is_closed = super(PooledMySQLDatabase, self)._is_closed(key, conn) if not is_closed: try: conn.ping(False) except: is_closed = True return is_closed class _PooledPostgresqlDatabase(PooledDatabase): def _is_closed(self, key, conn): closed = super(_PooledPostgresqlDatabase, self)._is_closed(key, conn) if not closed: closed = bool(conn.closed) return closed def _can_reuse(self, conn): txn_status = conn.get_transaction_status() # Do not return connection in an error state, as subsequent queries # will all fail. if txn_status == pg_extensions.TRANSACTION_STATUS_INERROR: conn.reset() return True class PooledPostgresqlDatabase(_PooledPostgresqlDatabase, PostgresqlDatabase): pass try: from playhouse.postgres_ext import PostgresqlExtDatabase class PooledPostgresqlExtDatabase(_PooledPostgresqlDatabase, PostgresqlExtDatabase): pass except ImportError: pass class _PooledSqliteDatabase(PooledDatabase): def _is_closed(self, key, conn): closed = super(_PooledSqliteDatabase, self)._is_closed(key, conn) if not closed: try: conn.total_changes except: return True return closed class PooledSqliteDatabase(_PooledSqliteDatabase, SqliteDatabase): pass try: from playhouse.sqlite_ext import SqliteExtDatabase class PooledSqliteExtDatabase(_PooledSqliteDatabase, SqliteExtDatabase): pass except ImportError: pass peewee-2.10.2/playhouse/postgres_ext.py000066400000000000000000000326071316645060400201740ustar00rootroot00000000000000""" Collection of postgres-specific extensions, currently including: * Support for hstore, a key/value type storage """ import uuid from peewee import * from peewee import Expression from peewee import logger from peewee import Node from peewee import OP from peewee import Param from peewee import Passthrough from peewee import returns_clone from peewee import QueryCompiler from peewee import SelectQuery from peewee import UUIDField # For backwards-compatibility. 
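# Prefer psycopg2cffi (e.g. on PyPy) when available -- compat.register()
# installs it under the name `psycopg2`, which is imported below.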
try: from psycopg2cffi import compat compat.register() except ImportError: pass from psycopg2.extensions import adapt from psycopg2.extensions import AsIs from psycopg2.extensions import register_adapter from psycopg2.extras import register_hstore try: from psycopg2.extras import Json except: Json = None @Node.extend(clone=False) def cast(self, as_type): return Expression(self, OP.CAST, SQL(as_type)) class _LookupNode(Node): def __init__(self, node, parts): self.node = node self.parts = parts super(_LookupNode, self).__init__() def clone_base(self): return type(self)(self.node, list(self.parts)) def cast(self, as_type): return Expression(Clause(self, parens=True), OP.CAST, SQL(as_type)) class _JsonLookupBase(_LookupNode): def __init__(self, node, parts, as_json=False): super(_JsonLookupBase, self).__init__(node, parts) self._as_json = as_json def clone_base(self): return type(self)(self.node, list(self.parts), self._as_json) @returns_clone def as_json(self, as_json=True): self._as_json = as_json def contains(self, other): clone = self.as_json(True) if isinstance(other, (list, dict)): return Expression(clone, OP.JSONB_CONTAINS, Json(other)) return Expression(clone, OP.JSONB_EXISTS, other) def contains_any(self, *keys): return Expression( self.as_json(True), OP.JSONB_CONTAINS_ANY_KEY, Passthrough(list(keys))) def contains_all(self, *keys): return Expression( self.as_json(True), OP.JSONB_CONTAINS_ALL_KEYS, Passthrough(list(keys))) class JsonLookup(_JsonLookupBase): _node_type = 'json_lookup' def __getitem__(self, value): return JsonLookup(self.node, self.parts + [value], self._as_json) class JsonPath(_JsonLookupBase): _node_type = 'json_path' class ObjectSlice(_LookupNode): _node_type = 'object_slice' @classmethod def create(cls, node, value): if isinstance(value, slice): parts = [value.start or 0, value.stop or 0] elif isinstance(value, int): parts = [value] else: parts = map(int, value.split(':')) return cls(node, parts) def __getitem__(self, value): return ObjectSlice.create(self, value) class _Array(object): def __init__(self, field, items): self.field = field self.items = items def adapt_array(arr): conn = arr.field.model_class._meta.database.get_conn() items = adapt(arr.items) items.prepare(conn) return AsIs('%s::%s%s' % ( items, arr.field.get_column_type(), '[]' * arr.field.dimensions)) register_adapter(_Array, adapt_array) class IndexedFieldMixin(object): default_index_type = 'GiST' def __init__(self, index_type=None, *args, **kwargs): kwargs.setdefault('index', True) # By default, use an index. 
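        # An explicit index_type (e.g. 'GIN') takes precedence; otherwise
        # the class-level default_index_type is used when index=True.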
super(IndexedFieldMixin, self).__init__(*args, **kwargs) if self.index: self.index_type = index_type or self.default_index_type else: self.index_type = None class ArrayField(IndexedFieldMixin, Field): default_index_type = 'GIN' def __init__(self, field_class=IntegerField, dimensions=1, *args, **kwargs): self.__field = field_class(*args, **kwargs) self.dimensions = dimensions self.db_field = self.__field.get_db_field() super(ArrayField, self).__init__(*args, **kwargs) def __ddl_column__(self, column_type): sql = self.__field.__ddl_column__(column_type) sql.value += '[]' * self.dimensions return sql def db_value(self, value): if value is None: return if not isinstance(value, (list, _Array)): value = list(value) return _Array(self, value) def __getitem__(self, value): return ObjectSlice.create(self, value) def contains(self, *items): return Expression(self, OP.ACONTAINS, Param(items)) def contains_any(self, *items): return Expression(self, OP.ACONTAINS_ANY, Param(items)) class DateTimeTZField(DateTimeField): db_field = 'timestamptz' class IntervalField(Field): db_field = 'interval' class HStoreField(IndexedFieldMixin, Field): db_field = 'hash' def __getitem__(self, key): return Expression(self, OP.HKEY, Param(key)) def keys(self): return fn.akeys(self) def values(self): return fn.avals(self) def items(self): return fn.hstore_to_matrix(self) def slice(self, *args): return fn.slice(self, Passthrough(list(args))) def exists(self, key): return fn.exist(self, key) def defined(self, key): return fn.defined(self, key) def update(self, **data): return Expression(self, OP.HUPDATE, data) def delete(self, *keys): return fn.delete(self, Passthrough(list(keys))) def contains(self, value): if isinstance(value, dict): return Expression(self, OP.HCONTAINS_DICT, Passthrough(value)) elif isinstance(value, (list, tuple)): return Expression(self, OP.HCONTAINS_KEYS, Passthrough(value)) return Expression(self, OP.HCONTAINS_KEY, value) def contains_any(self, *keys): return Expression(self, OP.HCONTAINS_ANY_KEY, Passthrough(list(keys))) class JSONField(Field): db_field = 'json' def __init__(self, dumps=None, *args, **kwargs): if Json is None: raise Exception('Your version of psycopg2 does not support JSON.') self.dumps = dumps super(JSONField, self).__init__(*args, **kwargs) def db_value(self, value): if value is None: return value if not isinstance(value, Json): return Json(value, dumps=self.dumps) return value def __getitem__(self, value): return JsonLookup(self, [value]) def path(self, *keys): return JsonPath(self, keys) class BinaryJSONField(IndexedFieldMixin, JSONField): db_field = 'jsonb' default_index_type = 'GIN' def contains(self, other): if isinstance(other, (list, dict)): return Expression(self, OP.JSONB_CONTAINS, Json(other)) return Expression(self, OP.JSONB_EXISTS, Passthrough(other)) def contained_by(self, other): return Expression(self, OP.JSONB_CONTAINED_BY, Json(other)) def contains_any(self, *items): return Expression( self, OP.JSONB_CONTAINS_ANY_KEY, Passthrough(list(items)),) def contains_all(self, *items): return Expression( self, OP.JSONB_CONTAINS_ALL_KEYS, Passthrough(list(items))) class TSVectorField(IndexedFieldMixin, TextField): db_field = 'tsvector' default_index_type = 'GIN' def match(self, query, language=None): params = (language, query) if language is not None else (query,) return Expression(self, OP.TS_MATCH, fn.to_tsquery(*params)) def Match(field, query, language=None): params = (language, query) if language is not None else (query,) return Expression( fn.to_tsvector(field), 
OP.TS_MATCH, fn.to_tsquery(*params)) OP.update( HKEY='key', HUPDATE='H@>', HCONTAINS_DICT='H?&', HCONTAINS_KEYS='H?', HCONTAINS_KEY='H?|', HCONTAINS_ANY_KEY='H||', ACONTAINS='A@>', ACONTAINS_ANY='A||', TS_MATCH='T@@', JSONB_CONTAINS='JB@>', JSONB_CONTAINED_BY='JB<@', JSONB_CONTAINS_ANY_KEY='JB?|', JSONB_CONTAINS_ALL_KEYS='JB?&', JSONB_EXISTS='JB?', CAST='::', ) class PostgresqlExtCompiler(QueryCompiler): def _create_index(self, model_class, fields, unique=False): clause = super(PostgresqlExtCompiler, self)._create_index( model_class, fields, unique) # Allow fields to specify a type of index. HStore and Array fields # may want to use GiST indexes, for example. index_type = None for field in fields: if isinstance(field, IndexedFieldMixin): index_type = field.index_type if index_type: clause.nodes.insert(-1, SQL('USING %s' % index_type)) return clause def _parse_object_slice(self, node, alias_map, conv): sql, params = self.parse_node(node.node, alias_map, conv) # Postgresql uses 1-based indexes. parts = [str(part + 1) for part in node.parts] sql = '%s[%s]' % (sql, ':'.join(parts)) return sql, params def _parse_json_lookup(self, node, alias_map, conv): sql, params = self.parse_node(node.node, alias_map, conv) lookups = [sql] for part in node.parts: part_sql, part_params = self.parse_node( part, alias_map, conv) lookups.append(part_sql) params.extend(part_params) if node._as_json: sql = '->'.join(lookups) else: # The last lookup should be converted to text. head, tail = lookups[:-1], lookups[-1] sql = '->>'.join(('->'.join(head), tail)) return sql, params def _parse_json_path(self, node, alias_map, conv): sql, params = self.parse_node(node.node, alias_map, conv) if node._as_json: operand = '#>' else: operand = '#>>' params.append('{%s}' % ','.join(map(str, node.parts))) return operand.join((sql, self.interpolation)), params def get_parse_map(self): parse_map = super(PostgresqlExtCompiler, self).get_parse_map() parse_map.update( object_slice=self._parse_object_slice, json_lookup=self._parse_json_lookup, json_path=self._parse_json_path) return parse_map class PostgresqlExtDatabase(PostgresqlDatabase): compiler_class = PostgresqlExtCompiler def __init__(self, *args, **kwargs): self.server_side_cursors = kwargs.pop('server_side_cursors', False) self.register_hstore = kwargs.pop('register_hstore', True) super(PostgresqlExtDatabase, self).__init__(*args, **kwargs) def get_cursor(self, name=None): if name: return self.get_conn().cursor(name=name) return self.get_conn().cursor() def execute_sql(self, sql, params=None, require_commit=True, named_cursor=False): logger.debug((sql, params)) use_named_cursor = (named_cursor or ( self.server_side_cursors and sql.lower().startswith('select'))) with self.exception_wrapper: if use_named_cursor: cursor = self.get_cursor(name=str(uuid.uuid1())) require_commit = False else: cursor = self.get_cursor() try: cursor.execute(sql, params or ()) except Exception as exc: if self.get_autocommit() and self.autorollback: self.rollback() raise else: if require_commit and self.get_autocommit(): self.commit() return cursor def _connect(self, database, **kwargs): conn = super(PostgresqlExtDatabase, self)._connect(database, **kwargs) if self.register_hstore: register_hstore(conn, globally=True) return conn class ServerSideSelectQuery(SelectQuery): @classmethod def clone_from_query(cls, query): clone = ServerSideSelectQuery(query.model_class) return query._clone_attributes(clone) def _execute(self): sql, params = self.sql() return self.database.execute_sql( sql, params, 
require_commit=False, named_cursor=True) PostgresqlExtDatabase.register_fields({ 'datetime_tz': 'timestamp with time zone', 'hash': 'hstore', 'json': 'json', 'jsonb': 'jsonb', 'tsvector': 'tsvector', }) PostgresqlExtDatabase.register_ops({ OP.HCONTAINS_DICT: '@>', OP.HCONTAINS_KEYS: '?&', OP.HCONTAINS_KEY: '?', OP.HCONTAINS_ANY_KEY: '?|', OP.HKEY: '->', OP.HUPDATE: '||', OP.ACONTAINS: '@>', OP.ACONTAINS_ANY: '&&', OP.TS_MATCH: '@@', OP.JSONB_CONTAINS: '@>', OP.JSONB_CONTAINED_BY: '<@', OP.JSONB_CONTAINS_ANY_KEY: '?|', OP.JSONB_CONTAINS_ALL_KEYS: '?&', OP.JSONB_EXISTS: '?', OP.CAST: '::', }) def ServerSide(select_query): # Flag query for execution using server-side cursors. clone = ServerSideSelectQuery.clone_from_query(select_query) with clone.database.transaction(): # Execute the query. query_result = clone.execute() # Patch QueryResultWrapper onto original query. select_query._qr = query_result # Expose generator for iterating over query. for obj in query_result.iterator(): yield obj def LateralJoin(lhs, rhs, join_type='LEFT', condition=True): return Clause( lhs, SQL('%s JOIN LATERAL' % join_type), rhs, SQL('ON %s', condition)) peewee-2.10.2/playhouse/pskel000077500000000000000000000047621316645060400161410ustar00rootroot00000000000000#!/usr/bin/env python from collections import namedtuple import optparse template = """ #!/usr/bin/env python import logging from peewee import * from peewee import create_model_tables %(extra_import)s db = %(engine)s('%(database)s') class BaseModel(Model): class Meta: database = db %(models)s def main(): db.create_tables([%(model_names)s], True) %(logging)s if __name__ == '__main__': main() """.strip() model_template = """ class %(model_name)s(BaseModel): pass """ logging_template = """ logger = logging.getLogger('peewee') logger.setLevel(logging.DEBUG) logger.addHandler(logging.StreamHandler()) """ class Engine(namedtuple('_Engine', ('engine', 'imports'))): def __new__(cls, engine, imports=None): return super(Engine, cls).__new__(cls, engine, imports) def get_import(self): if self.imports: return 'from %s import *' % self.imports return '' engine_mapping = { 'postgres': Engine('PostgresqlDatabase'), 'postgres_ext': Engine('PostgresqlExtDatabase', 'playhouse.postgres_ext'), 'sqlite': Engine('SqliteDatabase'), 'sqlite_ext': Engine('SqliteExtDatabase', 'playhouse.sqlite_ext'), 'mysql': Engine('MySQLDatabase'), 'apsw': Engine('APSWDatabase', 'playhouse.apsw_ext'), 'bdb': Engine('BerkeleyDatabase', 'playhouse.berkeleydb'), } def render_models(models, engine, database, logging=False): rendered_models = [] class_names = [] for model in models: class_name = model.strip().title() class_names.append(class_name) rendered_models.append(model_template % {'model_name': class_name}) parameters = { 'database': database, 'engine': engine.engine, 'extra_import': engine.get_import(), 'logging': logging_template if logging else '', 'models': '\n'.join(rendered_models + ['']), 'model_names': ', '.join(class_names), } return template % parameters if __name__ == '__main__': parser = optparse.OptionParser( usage='Usage: %prog [options] model1 model2...') ao = parser.add_option ao('-l', '--logging', dest='logging', action='store_true') ao('-e', '--engine', dest='engine', choices=sorted(engine_mapping.keys()), default='sqlite') ao('-d', '--database', dest='database', default=':memory:') options, models = parser.parse_args() print(render_models( models, engine=engine_mapping[options.engine], database=options.database, logging=options.logging)) 
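
# Example invocation (a sketch -- "user" and "tweet" are hypothetical model
# names; they are title-cased into User and Tweet classes):
#
#     $ pskel -e sqlite -d blog.db user tweet > blog_models.py
#
# Running the generated module then creates the tables in blog.db.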
peewee-2.10.2/playhouse/read_slave.py000066400000000000000000000026511316645060400175470ustar00rootroot00000000000000""" Support for using a dedicated read-slave. The read database is specified as a Model.Meta option, and will be used for SELECT statements: master = PostgresqlDatabase('master') read_slave = PostgresqlDatabase('read_slave') class BaseModel(ReadSlaveModel): class Meta: database = master read_slaves = [read_slave] # This database will be used for SELECTs. # Now define your models as you would normally. class User(BaseModel): username = CharField() # To force a SELECT on the master database, you can instantiate the SelectQuery # by hand: master_select = SelectQuery(User).where(...) """ from peewee import * class ReadSlaveModel(Model): @classmethod def _get_read_database(cls): if not getattr(cls._meta, 'read_slaves', None): return cls._meta.database current_idx = getattr(cls, '_read_slave_idx', -1) cls._read_slave_idx = (current_idx + 1) % len(cls._meta.read_slaves) return cls._meta.read_slaves[cls._read_slave_idx] @classmethod def select(cls, *args, **kwargs): query = super(ReadSlaveModel, cls).select(*args, **kwargs) query.database = cls._get_read_database() return query @classmethod def raw(cls, *args, **kwargs): query = super(ReadSlaveModel, cls).raw(*args, **kwargs) if query._sql.lower().startswith('select'): query.database = cls._get_read_database() return query peewee-2.10.2/playhouse/reflection.py000066400000000000000000000535261316645060400176030ustar00rootroot00000000000000try: from collections import OrderedDict except ImportError: OrderedDict = dict from collections import namedtuple import re from peewee import * try: from MySQLdb.constants import FIELD_TYPE except ImportError: try: from pymysql.constants import FIELD_TYPE except ImportError: FIELD_TYPE = None try: from playhouse import postgres_ext except ImportError: postgres_ext = None RESERVED_WORDS = set([ 'and', 'as', 'assert', 'break', 'class', 'continue', 'def', 'del', 'elif', 'else', 'except', 'exec', 'finally', 'for', 'from', 'global', 'if', 'import', 'in', 'is', 'lambda', 'not', 'or', 'pass', 'print', 'raise', 'return', 'try', 'while', 'with', 'yield', ]) class UnknownField(object): pass class Column(object): """ Store metadata about a database column. """ primary_key_types = (IntegerField, PrimaryKeyField) def __init__(self, name, field_class, raw_column_type, nullable, primary_key=False, db_column=None, index=False, unique=False): self.name = name self.field_class = field_class self.raw_column_type = raw_column_type self.nullable = nullable self.primary_key = primary_key self.db_column = db_column self.index = index self.unique = unique # Foreign key metadata. self.rel_model = None self.related_name = None self.to_field = None def __repr__(self): attrs = [ 'field_class', 'raw_column_type', 'nullable', 'primary_key', 'db_column'] keyword_args = ', '.join( '%s=%s' % (attr, getattr(self, attr)) for attr in attrs) return 'Column(%s, %s)' % (self.name, keyword_args) def get_field_parameters(self): params = {} # Set up default attributes. if self.nullable: params['null'] = True if self.field_class is ForeignKeyField or self.name != self.db_column: params['db_column'] = "'%s'" % self.db_column if self.primary_key and self.field_class is not PrimaryKeyField: params['primary_key'] = True # Handle ForeignKeyField-specific attributes. 
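        # For example, a "user_id" column referencing "user"."id" will add
        # rel_model (and possibly to_field/related_name) parameters below.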
if self.is_foreign_key(): params['rel_model'] = self.rel_model if self.to_field: params['to_field'] = "'%s'" % self.to_field if self.related_name: params['related_name'] = "'%s'" % self.related_name # Handle indexes on column. if not self.is_primary_key(): if self.unique: params['unique'] = 'True' elif self.index and not self.is_foreign_key(): params['index'] = 'True' return params def is_primary_key(self): return self.field_class is PrimaryKeyField or self.primary_key def is_foreign_key(self): return self.field_class is ForeignKeyField def is_self_referential_fk(self): return (self.field_class is ForeignKeyField and self.rel_model == "'self'") def set_foreign_key(self, foreign_key, model_names, dest=None, related_name=None): self.foreign_key = foreign_key self.field_class = ForeignKeyField if foreign_key.dest_table == foreign_key.table: self.rel_model = "'self'" else: self.rel_model = model_names[foreign_key.dest_table] self.to_field = dest and dest.name or None self.related_name = related_name or None def get_field(self): # Generate the field definition for this column. field_params = self.get_field_parameters() param_str = ', '.join('%s=%s' % (k, v) for k, v in sorted(field_params.items())) field = '%s = %s(%s)' % ( self.name, self.field_class.__name__, param_str) if self.field_class is UnknownField: field = '%s # %s' % (field, self.raw_column_type) return field class Metadata(object): column_map = {} extension_import = '' def __init__(self, database): self.database = database self.requires_extension = False def execute(self, sql, *params): return self.database.execute_sql(sql, params) def get_columns(self, table, schema=None): metadata = OrderedDict( (metadata.name, metadata) for metadata in self.database.get_columns(table, schema)) # Look up the actual column type for each column. column_types = self.get_column_types(table, schema) # Look up the primary keys. pk_names = self.get_primary_keys(table, schema) if len(pk_names) == 1: pk = pk_names[0] if column_types[pk] is IntegerField: column_types[pk] = PrimaryKeyField columns = OrderedDict() for name, column_data in metadata.items(): columns[name] = Column( name, field_class=column_types[name], raw_column_type=column_data.data_type, nullable=column_data.null, primary_key=column_data.primary_key, db_column=name) return columns def get_column_types(self, table, schema=None): raise NotImplementedError def get_foreign_keys(self, table, schema=None): return self.database.get_foreign_keys(table, schema) def get_primary_keys(self, table, schema=None): return self.database.get_primary_keys(table, schema) def get_indexes(self, table, schema=None): return self.database.get_indexes(table, schema) class PostgresqlMetadata(Metadata): column_map = { 16: BooleanField, 17: BlobField, 20: BigIntegerField, 21: IntegerField, 23: IntegerField, 25: TextField, 700: FloatField, 701: FloatField, 1042: CharField, # blank-padded CHAR 1043: CharField, 1082: DateField, 1114: DateTimeField, 1184: DateTimeField, 1083: TimeField, 1266: TimeField, 1700: DecimalField, 2950: TextField, # UUID } extension_import = 'from playhouse.postgres_ext import *' def __init__(self, database): super(PostgresqlMetadata, self).__init__(database) if postgres_ext is not None: # Attempt to add types like HStore and JSON. 
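            # Extension types (hstore, json, tsvector) have no fixed OIDs,
            # so discover them from the pg_type catalog at runtime.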
cursor = self.execute('select oid, typname from pg_type;') results = cursor.fetchall() for oid, typname in results: if 'json' in typname: self.column_map[oid] = postgres_ext.JSONField elif 'hstore' in typname: self.column_map[oid] = postgres_ext.HStoreField elif 'tsvector' in typname: self.column_map[oid] = postgres_ext.TSVectorField def get_column_types(self, table, schema): column_types = {} extension_types = set(( postgres_ext.JSONField, postgres_ext.TSVectorField, postgres_ext.HStoreField)) if postgres_ext is not None else set() # Look up the actual column type for each column. identifier = '"%s"."%s"' % (schema, table) cursor = self.execute('SELECT * FROM %s LIMIT 1' % identifier) # Store column metadata in dictionary keyed by column name. for column_description in cursor.description: column_types[column_description.name] = self.column_map.get( column_description.type_code, UnknownField) if column_types[column_description.name] in extension_types: self.requires_extension = True return column_types def get_columns(self, table, schema=None): schema = schema or 'public' return super(PostgresqlMetadata, self).get_columns(table, schema) def get_foreign_keys(self, table, schema=None): schema = schema or 'public' return super(PostgresqlMetadata, self).get_foreign_keys(table, schema) def get_primary_keys(self, table, schema=None): schema = schema or 'public' return super(PostgresqlMetadata, self).get_primary_keys(table, schema) def get_indexes(self, table, schema=None): schema = schema or 'public' return super(PostgresqlMetadata, self).get_indexes(table, schema) class MySQLMetadata(Metadata): if FIELD_TYPE is None: column_map = {} else: column_map = { FIELD_TYPE.BLOB: TextField, FIELD_TYPE.CHAR: CharField, FIELD_TYPE.DATE: DateField, FIELD_TYPE.DATETIME: DateTimeField, FIELD_TYPE.DECIMAL: DecimalField, FIELD_TYPE.DOUBLE: FloatField, FIELD_TYPE.FLOAT: FloatField, FIELD_TYPE.INT24: IntegerField, FIELD_TYPE.LONG_BLOB: TextField, FIELD_TYPE.LONG: IntegerField, FIELD_TYPE.LONGLONG: BigIntegerField, FIELD_TYPE.MEDIUM_BLOB: TextField, FIELD_TYPE.NEWDECIMAL: DecimalField, FIELD_TYPE.SHORT: IntegerField, FIELD_TYPE.STRING: CharField, FIELD_TYPE.TIMESTAMP: DateTimeField, FIELD_TYPE.TIME: TimeField, FIELD_TYPE.TINY_BLOB: TextField, FIELD_TYPE.TINY: IntegerField, FIELD_TYPE.VAR_STRING: CharField, } def __init__(self, database, **kwargs): if 'password' in kwargs: kwargs['passwd'] = kwargs.pop('password') super(MySQLMetadata, self).__init__(database, **kwargs) def get_column_types(self, table, schema=None): column_types = {} # Look up the actual column type for each column. cursor = self.execute('SELECT * FROM `%s` LIMIT 1' % table) # Store column metadata in dictionary keyed by column name. for column_description in cursor.description: name, type_code = column_description[:2] column_types[name] = self.column_map.get(type_code, UnknownField) return column_types class SqliteMetadata(Metadata): column_map = { 'bigint': BigIntegerField, 'blob': BlobField, 'bool': BooleanField, 'boolean': BooleanField, 'char': CharField, 'date': DateField, 'datetime': DateTimeField, 'decimal': DecimalField, 'integer': IntegerField, 'integer unsigned': IntegerField, 'int': IntegerField, 'long': BigIntegerField, 'real': FloatField, 'smallinteger': IntegerField, 'smallint': IntegerField, 'smallint unsigned': IntegerField, 'text': TextField, 'time': TimeField, } begin = '(?:["\[\(]+)?' end = '(?:["\]\)]+)?' re_foreign_key = ( '(?:FOREIGN KEY\s*)?' '{begin}(.+?){end}\s+(?:.+\s+)?' 
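        # Roughly: [FOREIGN KEY] "src_column" ... REFERENCES "dest_table"
        # ("dest_column"), tolerating various quoting styles.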
        'references\s+{begin}(.+?){end}'
        '\s*\(["|\[]?(.+?)["|\]]?\)').format(begin=begin, end=end)
    re_varchar = r'^\s*(?:var)?char\s*\(\s*(\d+)\s*\)\s*$'

    def _map_col(self, column_type):
        raw_column_type = column_type.lower()
        if raw_column_type in self.column_map:
            field_class = self.column_map[raw_column_type]
        elif re.search(self.re_varchar, raw_column_type):
            field_class = CharField
        else:
            column_type = re.sub('\(.+\)', '', raw_column_type)
            field_class = self.column_map.get(column_type, UnknownField)
        return field_class

    def get_column_types(self, table, schema=None):
        column_types = {}
        columns = self.database.get_columns(table)
        for column in columns:
            column_types[column.name] = self._map_col(column.data_type)
        return column_types

_DatabaseMetadata = namedtuple('_DatabaseMetadata', (
    'columns',
    'primary_keys',
    'foreign_keys',
    'model_names',
    'indexes'))

class DatabaseMetadata(_DatabaseMetadata):
    def multi_column_indexes(self, table):
        accum = []
        for index in self.indexes[table]:
            if len(index.columns) > 1:
                field_names = [self.columns[table][column].name
                               for column in index.columns
                               if column in self.columns[table]]
                accum.append((field_names, index.unique))
        return accum

    def column_indexes(self, table):
        accum = {}
        for index in self.indexes[table]:
            if len(index.columns) == 1:
                accum[index.columns[0]] = index.unique
        return accum

class Introspector(object):
    pk_classes = [PrimaryKeyField, IntegerField]

    def __init__(self, metadata, schema=None):
        self.metadata = metadata
        self.schema = schema

    def __repr__(self):
        return '<Introspector: %s>' % self.metadata.database

    @classmethod
    def from_database(cls, database, schema=None):
        if isinstance(database, PostgresqlDatabase):
            metadata = PostgresqlMetadata(database)
        elif isinstance(database, MySQLDatabase):
            metadata = MySQLMetadata(database)
        else:
            metadata = SqliteMetadata(database)
        return cls(metadata, schema=schema)

    def get_database_class(self):
        return type(self.metadata.database)

    def get_database_name(self):
        return self.metadata.database.database

    def get_database_kwargs(self):
        return self.metadata.database.connect_kwargs

    def get_additional_imports(self):
        if self.metadata.requires_extension:
            return '\n' + self.metadata.extension_import
        return ''

    def make_model_name(self, table):
        model = re.sub('[^\w]+', '', table)
        model_name = ''.join(sub.title() for sub in model.split('_'))
        if not model_name[0].isalpha():
            model_name = 'T' + model_name
        return model_name

    def make_column_name(self, column):
        column = re.sub('_id$', '', column.lower().strip()) or column.lower()
        column = re.sub('[^\w]+', '_', column)
        if column in RESERVED_WORDS:
            column += '_'
        if len(column) and column[0].isdigit():
            column = '_' + column
        return column

    def introspect(self, table_names=None, literal_column_names=False):
        # Retrieve all the tables in the database.
        if self.schema:
            tables = self.metadata.database.get_tables(schema=self.schema)
        else:
            tables = self.metadata.database.get_tables()
        if table_names is not None:
            tables = [table for table in tables if table in table_names]

        # Store a mapping of table name -> dictionary of columns.
        columns = {}

        # Store a mapping of table name -> set of primary key columns.
        primary_keys = {}

        # Store a mapping of table -> foreign keys.
        foreign_keys = {}

        # Store a mapping of table name -> model name.
        model_names = {}

        # Store a mapping of table name -> indexes.
        indexes = {}

        # Gather the columns for each table.
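        # (This is the first of two passes; the second pass, below, resolves
        # foreign-key references once all tables' columns are known.)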
        for table in tables:
            table_indexes = self.metadata.get_indexes(table, self.schema)
            table_columns = self.metadata.get_columns(table, self.schema)
            try:
                foreign_keys[table] = self.metadata.get_foreign_keys(
                    table, self.schema)
            except ValueError as exc:
                # No `err()` helper is defined in this module, so report the
                # problem directly and continue without foreign-key metadata
                # for this table.
                print('Unable to introspect foreign keys for %s: %s' %
                      (table, exc))
                foreign_keys[table] = []
            model_names[table] = self.make_model_name(table)

            lower_col_names = set(
                column_name.lower() for column_name in table_columns)

            for col_name, column in table_columns.items():
                if literal_column_names:
                    # Simply try to make a valid Python identifier.
                    new_name = re.sub('[^\w]+', '_', col_name)
                else:
                    # Snake-ify the name, stripping "_id" suffixes as well.
                    new_name = self.make_column_name(col_name)

                # If we have two columns, "parent" and "parent_id", ensure
                # that we don't introduce naming conflicts.
                lower_name = col_name.lower()
                if lower_name.endswith('_id') and new_name in lower_col_names:
                    new_name = col_name.lower()

                column.name = new_name

            for index in table_indexes:
                if len(index.columns) == 1:
                    column = index.columns[0]
                    if column in table_columns:
                        table_columns[column].unique = index.unique
                        table_columns[column].index = True

            primary_keys[table] = self.metadata.get_primary_keys(
                table, self.schema)
            columns[table] = table_columns
            indexes[table] = table_indexes

        # Gather all instances where we might have a `related_name` conflict,
        # either due to multiple FKs on a table pointing to the same table,
        # or a related_name that would conflict with an existing field.
        related_names = {}
        sort_fn = lambda foreign_key: foreign_key.column
        for table in tables:
            models_referenced = set()
            for foreign_key in sorted(foreign_keys[table], key=sort_fn):
                try:
                    column = columns[table][foreign_key.column]
                except KeyError:
                    continue
                dest_table = foreign_key.dest_table
                if dest_table in models_referenced:
                    related_names[column] = '%s_%s_set' % (
                        dest_table, column.name)
                else:
                    models_referenced.add(dest_table)

        # On the second pass convert all foreign keys.
        for table in tables:
            for foreign_key in foreign_keys[table]:
                src = columns[foreign_key.table][foreign_key.column]
                try:
                    dest = columns[foreign_key.dest_table][
                        foreign_key.dest_column]
                except KeyError:
                    dest = None
                src.set_foreign_key(
                    foreign_key=foreign_key,
                    model_names=model_names,
                    dest=dest,
                    related_name=related_names.get(src))

        return DatabaseMetadata(
            columns,
            primary_keys,
            foreign_keys,
            model_names,
            indexes)

    def generate_models(self, skip_invalid=False, table_names=None,
                        literal_column_names=False):
        database = self.introspect(table_names=table_names,
                                   literal_column_names=literal_column_names)
        models = {}

        class BaseModel(Model):
            class Meta:
                database = self.metadata.database

        def _create_model(table, models):
            for foreign_key in database.foreign_keys[table]:
                dest = foreign_key.dest_table
                if dest not in models and dest != table:
                    _create_model(dest, models)

            primary_keys = []
            columns = database.columns[table]
            for db_column, column in columns.items():
                if column.primary_key:
                    primary_keys.append(column.name)

            multi_column_indexes = database.multi_column_indexes(table)
            column_indexes = database.column_indexes(table)

            class Meta:
                indexes = multi_column_indexes

            # Fix models with multi-column primary keys.
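            # (A table with no explicit primary key is treated as having a
            # composite key over all of its columns.)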
composite_key = False if len(primary_keys) == 0: primary_keys = columns.keys() if len(primary_keys) > 1: Meta.primary_key = CompositeKey(*[ field.name for col, field in columns.items() if col in primary_keys]) composite_key = True attrs = {'Meta': Meta} for db_column, column in columns.items(): FieldClass = column.field_class if FieldClass is UnknownField: FieldClass = BareField params = { 'db_column': db_column, 'null': column.nullable} if column.primary_key and composite_key: if FieldClass is PrimaryKeyField: FieldClass = IntegerField params['primary_key'] = False elif column.primary_key and FieldClass is not PrimaryKeyField: params['primary_key'] = True if column.is_foreign_key(): if column.is_self_referential_fk(): params['rel_model'] = 'self' else: dest_table = column.foreign_key.dest_table params['rel_model'] = models[dest_table] if column.to_field: params['to_field'] = column.to_field # Generate a unique related name. params['related_name'] = '%s_%s_rel' % (table, db_column) if db_column in column_indexes and not column.is_primary_key(): if column_indexes[db_column]: params['unique'] = True elif not column.is_foreign_key(): params['index'] = True attrs[column.name] = FieldClass(**params) try: models[table] = type(str(table), (BaseModel,), attrs) except ValueError: if not skip_invalid: raise # Actually generate Model classes. for table, model in sorted(database.model_names.items()): if table not in models: _create_model(table, models) return models def introspect(database, schema=None): introspector = Introspector.from_database(database, schema=schema) return introspector.introspect() peewee-2.10.2/playhouse/shortcuts.py000066400000000000000000000160431316645060400175000ustar00rootroot00000000000000import sys from peewee import * from peewee import Node if sys.version_info[0] == 3: from collections import Callable callable = lambda c: isinstance(c, Callable) def case(predicate, expression_tuples, default=None): """ CASE statement builder. Example CASE statements: SELECT foo, CASE WHEN foo = 1 THEN "one" WHEN foo = 2 THEN "two" ELSE "?" END -- will be in column named "case" in postgres -- FROM bar; -- equivalent to above -- SELECT foo, CASE foo WHEN 1 THEN "one" WHEN 2 THEN "two" ELSE "?" END Corresponding peewee: # No predicate, use expressions. Bar.select(Bar.foo, case(None, ( (Bar.foo == 1, "one"), (Bar.foo == 2, "two")), "?")) # Predicate, will test for equality. Bar.select(Bar.foo, case(Bar.foo, ( (1, "one"), (2, "two")), "?")) """ clauses = [SQL('CASE')] simple_case = predicate is not None if simple_case: clauses.append(predicate) for expr, value in expression_tuples: # If this is a simple case, each tuple will contain (value, value) pair # since the DB will be performing an equality check automatically. # Otherwise, we will have (expression, value) pairs. clauses.extend((SQL('WHEN'), expr, SQL('THEN'), value)) if default is not None: clauses.extend((SQL('ELSE'), default)) clauses.append(SQL('END')) return Clause(*clauses) def cast(node, as_type): return fn.CAST(Clause(node, SQL('AS %s' % as_type))) def _clone_set(s): if s: return set(s) return set() def model_to_dict(model, recurse=True, backrefs=False, only=None, exclude=None, seen=None, extra_attrs=None, fields_from_query=None, max_depth=None): """ Convert a model instance (and any related objects) to a dictionary. :param bool recurse: Whether foreign-keys should be recursed. :param bool backrefs: Whether lists of related objects should be recursed. 
:param only: A list (or set) of field instances indicating which fields should be included. :param exclude: A list (or set) of field instances that should be excluded from the dictionary. :param list extra_attrs: Names of model instance attributes or methods that should be included. :param SelectQuery fields_from_query: Query that was source of model. Take fields explicitly selected by the query and serialize them. :param int max_depth: Maximum depth to recurse, value <= 0 means no max. """ max_depth = -1 if max_depth is None else max_depth if max_depth == 0: recurse = False only = _clone_set(only) extra_attrs = _clone_set(extra_attrs) if fields_from_query is not None: for item in fields_from_query._select: if isinstance(item, Field): only.add(item) elif isinstance(item, Node) and item._alias: extra_attrs.add(item._alias) data = {} exclude = _clone_set(exclude) seen = _clone_set(seen) exclude |= seen model_class = type(model) for field in model._meta.declared_fields: if field in exclude or (only and (field not in only)): continue field_data = model._data.get(field.name) if isinstance(field, ForeignKeyField) and recurse: if field_data: seen.add(field) rel_obj = getattr(model, field.name) field_data = model_to_dict( rel_obj, recurse=recurse, backrefs=backrefs, only=only, exclude=exclude, seen=seen, max_depth=max_depth - 1) else: field_data = None data[field.name] = field_data if extra_attrs: for attr_name in extra_attrs: attr = getattr(model, attr_name) if callable(attr): data[attr_name] = attr() else: data[attr_name] = attr if backrefs and recurse: for related_name, foreign_key in model._meta.reverse_rel.items(): descriptor = getattr(model_class, related_name) if descriptor in exclude or foreign_key in exclude: continue if only and (descriptor not in only) and (foreign_key not in only): continue accum = [] exclude.add(foreign_key) related_query = getattr( model, related_name + '_prefetch', getattr(model, related_name)) for rel_obj in related_query: accum.append(model_to_dict( rel_obj, recurse=recurse, backrefs=backrefs, only=only, exclude=exclude, max_depth=max_depth - 1)) data[related_name] = accum return data def dict_to_model(model_class, data, ignore_unknown=False): instance = model_class() meta = model_class._meta for key, value in data.items(): if key in meta.fields: field = meta.fields[key] is_backref = False elif key in model_class._meta.reverse_rel: field = meta.reverse_rel[key] is_backref = True elif ignore_unknown: setattr(instance, key, value) continue else: raise AttributeError('Unrecognized attribute "%s" for model ' 'class %s.' 
                                 % (key, model_class))

        is_foreign_key = isinstance(field, ForeignKeyField)

        if not is_backref and is_foreign_key and isinstance(value, dict):
            setattr(
                instance,
                field.name,
                dict_to_model(field.rel_model, value, ignore_unknown))
        elif is_backref and isinstance(value, (list, tuple)):
            instances = [
                dict_to_model(
                    field.model_class,
                    row_data,
                    ignore_unknown)
                for row_data in value]
            for rel_instance in instances:
                setattr(rel_instance, field.name, instance)
            setattr(instance, field.related_name, instances)
        else:
            setattr(instance, field.name, value)

    return instance

class RetryOperationalError(object):
    def execute_sql(self, sql, params=None, require_commit=True):
        try:
            cursor = super(RetryOperationalError, self).execute_sql(
                sql, params, require_commit)
        except OperationalError:
            if not self.is_closed():
                self.close()
            with self.exception_wrapper:
                cursor = self.get_cursor()
                cursor.execute(sql, params or ())
                if require_commit and self.get_autocommit():
                    self.commit()
        return cursor
peewee-2.10.2/playhouse/signals.py000066400000000000000000000043111316645060400170750ustar00rootroot00000000000000"""
Provide django-style hooks for model events.
"""
from peewee import Model as _Model

class Signal(object):
    def __init__(self):
        self._flush()

    def connect(self, receiver, name=None, sender=None):
        name = name or receiver.__name__
        if name not in self._receivers:
            self._receivers[name] = (receiver, sender)
            self._receiver_list.append(name)
        else:
            raise ValueError('receiver named %s already connected' % name)

    def disconnect(self, receiver=None, name=None):
        if receiver:
            name = receiver.__name__
        if name:
            del self._receivers[name]
            self._receiver_list.remove(name)
        else:
            raise ValueError('a receiver or a name must be provided')

    def __call__(self, name=None, sender=None):
        def decorator(fn):
            self.connect(fn, name, sender)
            return fn
        return decorator

    def send(self, instance, *args, **kwargs):
        sender = type(instance)
        responses = []
        for name in self._receiver_list:
            r, s = self._receivers[name]
            if s is None or isinstance(instance, s):
                responses.append((r, r(sender, instance, *args, **kwargs)))
        return responses

    def _flush(self):
        self._receivers = {}
        self._receiver_list = []

pre_save = Signal()
post_save = Signal()
pre_delete = Signal()
post_delete = Signal()
pre_init = Signal()
post_init = Signal()

class Model(_Model):
    def __init__(self, *args, **kwargs):
        super(Model, self).__init__(*args, **kwargs)
        pre_init.send(self)

    def prepared(self):
        super(Model, self).prepared()
        post_init.send(self)

    def save(self, *args, **kwargs):
        pk_value = self._get_pk_value()
        created = kwargs.get('force_insert', False) or not bool(pk_value)
        pre_save.send(self, created=created)
        ret = super(Model, self).save(*args, **kwargs)
        post_save.send(self, created=created)
        return ret

    def delete_instance(self, *args, **kwargs):
        pre_delete.send(self)
        ret = super(Model, self).delete_instance(*args, **kwargs)
        post_delete.send(self)
        return ret
peewee-2.10.2/playhouse/sqlcipher_ext.py000066400000000000000000000101761316645060400203150ustar00rootroot00000000000000"""
Peewee integration with pysqlcipher.

Project page: https://github.com/leapcode/pysqlcipher/

**WARNING!!! EXPERIMENTAL!!!**

* Although this extension's code is short, it has not been properly
  peer-reviewed yet and may have introduced vulnerabilities.
* The code contains minimum values for `passphrase` length and `kdf_iter`,
  as well as a default value for the latter.
  **Do not** regard these numbers as advice. Consult the docs at
  http://sqlcipher.net/sqlcipher-api/ and security experts.
Also note that this code relies on pysqlcipher and sqlcipher, and the code
there might have vulnerabilities as well, but since these are widely used
crypto modules, we can expect "short zero days" there.

Example usage:

    from peewee.playground.ciphersql_ext import SqlCipherDatabase
    db = SqlCipherDatabase('/path/to/my.db', passphrase="don'tuseme4real",
                           kdf_iter=1000000)

* `passphrase`: should be "long enough".
  Note that *length beats vocabulary* (the difference is exponential), and
  even a lowercase-only passphrase like easytorememberyethardforotherstoguess
  packs more noise than 8 random printable characters and *can* be memorized.
* `kdf_iter`: should be "as much as the weakest target machine can afford".

When opening an existing database, passphrase and kdf_iter should be
identical to the ones used when creating it. If they're wrong, an exception
will only be raised **when you access the database**.

If you need to ask for an interactive passphrase, here's example code you
can put after the `db = ...` line:

    try:  # Just access the database so that it checks the encryption.
        db.get_tables()
    # We're looking for a DatabaseError with a specific error message.
    except peewee.DatabaseError as e:
        # Check whether the message *means* "passphrase is wrong"
        if e.args[0] == 'file is encrypted or is not a database':
            raise Exception('Developer should prompt user for passphrase '
                            'again.')
        else:
            # A different DatabaseError. Raise it.
            raise e

See a more elaborate example with this code at
https://gist.github.com/thedod/11048875
"""
import datetime
import decimal

from peewee import *
from playhouse.sqlite_ext import SqliteExtDatabase

try:
    from pysqlcipher import dbapi2 as sqlcipher
except ImportError:
    try:
        from pysqlcipher3 import dbapi2 as sqlcipher
    except ImportError:
        raise ImportError('Sqlcipher python bindings not found.')

sqlcipher.register_adapter(decimal.Decimal, str)
sqlcipher.register_adapter(datetime.date, str)
sqlcipher.register_adapter(datetime.time, str)

class _SqlCipherDatabase(object):
    def _connect(self, database, **kwargs):
        passphrase = kwargs.pop('passphrase', '')
        kdf_iter = kwargs.pop('kdf_iter', 64000)

        if len(passphrase) < 8:
            raise ImproperlyConfigured(
                'SqlCipherDatabase passphrase should be at least eight '
                'characters long.')

        if kdf_iter and kdf_iter < 10000:
            raise ImproperlyConfigured(
                'SqlCipherDatabase kdf_iter should be at least 10000.')

        conn = sqlcipher.connect(database, **kwargs)
        self._add_conn_hooks(conn)
        conn.execute(
            'PRAGMA key=\'{0}\''.format(passphrase.replace("'", "''")))
        conn.execute('PRAGMA kdf_iter={0:d}'.format(kdf_iter))
        return conn

class SqlCipherDatabase(_SqlCipherDatabase, SqliteDatabase):
    pass

class SqlCipherExtDatabase(_SqlCipherDatabase, SqliteExtDatabase):
    def __init__(self, *args, **kwargs):
        kwargs['c_extensions'] = False
        super(SqlCipherExtDatabase, self).__init__(*args, **kwargs)

    def _connect(self, *args, **kwargs):
        conn = super(SqlCipherExtDatabase, self)._connect(*args, **kwargs)
        self._load_aggregates(conn)
        self._load_collations(conn)
        self._load_functions(conn)
        if self._row_factory:
            conn.row_factory = self._row_factory
        if self._extensions:
            conn.enable_load_extension(True)
            for extension in self._extensions:
                conn.load_extension(extension)
        return conn
peewee-2.10.2/playhouse/sqlite_ext.py000066400000000000000000001065461316645060400176300ustar00rootroot00000000000000"""
Sqlite3 extensions
==================

* Define custom aggregates, collations and functions
* Basic support for virtual tables
* Basic support for FTS3/4
* Specify isolation level in transactions

Example usage of
the Full-text search: class Document(FTSModel): title = TextField() # type affinities are ignored in FTS content = TextField() Document.create_table(tokenize='porter') # use the porter stemmer # populate the documents using normal operations. for doc in documents: Document.create(title=doc['title'], content=doc['content']) # use the "match" operation for FTS queries. matching_docs = Document.select().where(match(Document.title, 'some query')) # to sort by best match, use the custom "rank" function. best_docs = (Document .select(Document, Document.rank('score')) .where(match(Document.title, 'some query')) .order_by(SQL('score'))) # or use the shortcut method. best_docs = Document.match('some phrase') """ import glob import inspect import math import os import re import struct import sys try: import simplejson as json except ImportError: import json from peewee import * from peewee import EnclosedClause from peewee import Entity from peewee import Expression from peewee import Node from peewee import OP from peewee import SqliteQueryCompiler from peewee import _AutoPrimaryKeyField from peewee import sqlite3 # Import the best SQLite version. from peewee import transaction from peewee import _sqlite_date_part from peewee import _sqlite_date_trunc from peewee import _sqlite_regexp try: from playhouse import _sqlite_ext as _c_ext except ImportError: _c_ext = None if sys.version_info[0] == 3: basestring = str FTS_MATCHINFO_FORMAT = 'pcnalx' FTS_MATCHINFO_FORMAT_SIMPLE = 'pcx' if sqlite3 is not None: FTS_VER = sqlite3.sqlite_version_info[:3] >= (3, 7, 4) and 'FTS4' or 'FTS3' else: FTS_VER = 'FTS3' FTS5_MIN_VERSION = (3, 9, 0) class RowIDField(_AutoPrimaryKeyField): """ Field used to access hidden primary key on FTS5 or any other SQLite table that does not have a separately-defined primary key. """ _column_name = 'rowid' class DocIDField(_AutoPrimaryKeyField): """Field used to access hidden primary key on FTS3/4 tables.""" _column_name = 'docid' class PrimaryKeyAutoIncrementField(PrimaryKeyField): """ SQLite by default uses MAX(primary key) + 1 to set the ID on a new row. Using the `AUTOINCREMENT` field, the IDs will increase monotonically even if rows are deleted. Use this if you need to guarantee IDs are not re-used in the event of deletion. 
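
    Example (a sketch; assumes an existing SqliteDatabase ``db``):

        class Event(Model):
            id = PrimaryKeyAutoIncrementField()

            class Meta:
                database = db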
""" def __ddl__(self, column_type): ddl = super(PrimaryKeyAutoIncrementField, self).__ddl__(column_type) return ddl + [SQL('AUTOINCREMENT')] class JSONField(TextField): def python_value(self, value): if value is not None: try: return json.loads(value) except (TypeError, ValueError): return value def db_value(self, value): if value is not None: return json.dumps(value) def clean_path(self, path): if path.startswith('[') or not path: return '$%s' % path return '$.%s' % path def length(self, path=None): if path: return fn.json_array_length(self, self.clean_path(path)) return fn.json_array_length(self) def extract(self, path): return fn.json_extract(self, self.clean_path(path)) def _value_for_insertion(self, value): if isinstance(value, (list, tuple, dict)): return fn.json(json.dumps(value)) return value def _insert_like(self, fn, pairs): npairs = len(pairs) if npairs % 2 != 0: raise ValueError('Mismatched path and value parameters.') accum = [] for i in range(0, npairs, 2): accum.append(self.clean_path(pairs[i])) accum.append(self._value_for_insertion(pairs[i + 1])) return fn(self, *accum) def insert(self, *pairs): return self._insert_like(fn.json_insert, pairs) def replace(self, *pairs): return self._insert_like(fn.json_replace, pairs) def set(self, *pairs): return self._insert_like(fn.json_set, pairs) def remove(self, *paths): return fn.json_remove(self, *[self.clean_path(path) for path in paths]) def json_type(self, path=None): if path: return fn.json_type(self, self.clean_path(path)) return fn.json_type(self) def children(self, path=None): """ Schema of `json_each` and `json_tree`: key, value, type TEXT (object, array, string, etc), atom (value for primitive/scalar types, NULL for array and object) id INTEGER (unique identifier for element) parent INTEGER (unique identifier of parent element or NULL) fullkey TEXT (full path describing element) path TEXT (path to the container of the current element) json JSON hidden (1st input parameter to function) root TEXT hidden (2nd input parameter, path at which to start) """ if path: return fn.json_each(self, self.clean_path(path)) return fn.json_each(self) def tree(self, path=None): if path: return fn.json_tree(self, self.clean_path(path)) return fn.json_tree(self) class SearchField(BareField): """ Field class to be used with full-text search extension. Since the FTS extensions do not support any field types besides `TEXT`, and furthermore do not support secondary indexes, using this field will prevent you from mistakenly creating the wrong kind of field on your FTS table. """ def __init__(self, unindexed=False, db_column=None, coerce=None, **_): kwargs = {'null': True, 'db_column': db_column, 'coerce': coerce} self._unindexed = unindexed if unindexed: kwargs['constraints'] = [SQL('UNINDEXED')] super(SearchField, self).__init__(**kwargs) def clone_base(self, **kwargs): clone = super(SearchField, self).clone_base(**kwargs) clone._unindexed = self._unindexed return clone class _VirtualFieldMixin(object): """ Field mixin to support virtual table attributes that may not correspond to actual columns in the database. """ def add_to_class(self, model_class, name): super(_VirtualFieldMixin, self).add_to_class(model_class, name) model_class._meta.remove_field(name) # Virtual field types that can be used to reference specially-created fields # on virtual tables. These fields are exposed as attributes on the model class, # but are not included in any `CREATE TABLE` statements or by default when # performing an `INSERT` or `UPDATE` query. 
class VirtualField(_VirtualFieldMixin, BareField): pass class VirtualIntegerField(_VirtualFieldMixin, IntegerField): pass class VirtualCharField(_VirtualFieldMixin, CharField): pass class VirtualFloatField(_VirtualFieldMixin, FloatField): pass class VirtualModel(Model): class Meta: virtual_table = True extension_module = None extension_options = {} @classmethod def clean_options(cls, **options): # Called by the QueryCompiler when generating the virtual table's # options clauses. return options @classmethod def create_table(cls, fail_silently=False, **options): # Modified to support **options, which are passed back to the # query compiler. if fail_silently and cls.table_exists(): return cls._meta.database.create_table(cls, options=options) cls._create_indexes() class BaseFTSModel(VirtualModel): @classmethod def clean_options(cls, **options): tokenize = options.get('tokenize') content = options.get('content') if tokenize: # Tokenizers need to be in quoted string. options['tokenize'] = '"%s"' % tokenize if isinstance(content, basestring) and content == '': # Special-case content-less full-text search tables. options['content'] = "''" return options class FTSModel(BaseFTSModel): """ VirtualModel class for creating tables that use either the FTS3 or FTS4 search extensions. Peewee automatically determines which version of the FTS extension is supported and will use FTS4 if possible. Note: because FTS5 is significantly different from FTS3 and FTS4, there is a separate model class for FTS5 virtual tables. """ # FTS3/4 does not support declared primary keys, but we can use the # implicit docid. docid = DocIDField() class Meta: extension_module = FTS_VER @classmethod def validate_model(cls): if cls._meta.primary_key.name != 'docid': raise ImproperlyConfigured( 'FTSModel classes must use the default `docid` primary key.') @classmethod def _fts_cmd(cls, cmd): tbl = cls._meta.db_table res = cls._meta.database.execute_sql( "INSERT INTO %s(%s) VALUES('%s');" % (tbl, tbl, cmd)) return res.fetchone() @classmethod def optimize(cls): return cls._fts_cmd('optimize') @classmethod def rebuild(cls): return cls._fts_cmd('rebuild') @classmethod def integrity_check(cls): return cls._fts_cmd('integrity-check') @classmethod def merge(cls, blocks=200, segments=8): return cls._fts_cmd('merge=%s,%s' % (blocks, segments)) @classmethod def automerge(cls, state=True): return cls._fts_cmd('automerge=%s' % (state and '1' or '0')) @classmethod def match(cls, term): """ Generate a `MATCH` expression appropriate for searching this table. 
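
        Example (a sketch; "Document" stands in for your FTSModel subclass):

            query = Document.select().where(Document.match('some phrase'))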
""" return match(cls.as_entity(), term) @classmethod def rank(cls, *weights): return fn.fts_rank(fn.matchinfo( cls.as_entity(), FTS_MATCHINFO_FORMAT_SIMPLE), *weights) @classmethod def bm25(cls, *weights): match_info = fn.matchinfo(cls.as_entity(), FTS_MATCHINFO_FORMAT) return fn.fts_bm25(match_info, *weights) @classmethod def lucene(cls, *weights): match_info = fn.matchinfo(cls.as_entity(), FTS_MATCHINFO_FORMAT) return fn.fts_lucene(match_info, *weights) @classmethod def _search(cls, term, weights, with_score, score_alias, score_fn, explicit_ordering): if not weights: rank = score_fn() elif isinstance(weights, dict): weight_args = [] for field in cls._meta.declared_fields: weight_args.append( weights.get(field, weights.get(field.name, 1.0))) rank = score_fn(*weight_args) else: rank = score_fn(*weights) selection = () order_by = rank if with_score: selection = (cls, rank.alias(score_alias)) if with_score and not explicit_ordering: order_by = SQL(score_alias) return (cls .select(*selection) .where(cls.match(term)) .order_by(order_by)) @classmethod def search(cls, term, weights=None, with_score=False, score_alias='score', explicit_ordering=False): """Full-text search using selected `term`.""" return cls._search( term, weights, with_score, score_alias, cls.rank, explicit_ordering) @classmethod def search_bm25(cls, term, weights=None, with_score=False, score_alias='score', explicit_ordering=False): """Full-text search for selected `term` using BM25 algorithm.""" return cls._search( term, weights, with_score, score_alias, cls.bm25, explicit_ordering) @classmethod def search_lucene(cls, term, weights=None, with_score=False, score_alias='score', explicit_ordering=False): """Full-text search for selected `term` using BM25 algorithm.""" return cls._search( term, weights, with_score, score_alias, cls.lucene, explicit_ordering) _alphabet = 'abcdefghijklmnopqrstuvwxyz' _alphanum = set([ '\t', ' ', ',', '"', chr(26), # Substitution control character. '(', ')', '{', '}', '*', ':', '_', '+', ]) | set('0123456789') | set(_alphabet) | set(_alphabet.upper()) _invalid_ascii = set([chr(p) for p in range(128) if chr(p) not in _alphanum]) _quote_re = re.compile('(?:[^\s"]|"(?:\\.|[^"])*")+') class FTS5Model(BaseFTSModel): """ Requires SQLite >= 3.9.0. Table options: content: table name of external content, or empty string for "contentless" content_rowid: column name of external content primary key prefix: integer(s). Ex: '2' or '2 3 4' tokenize: porter, unicode61, ascii. Ex: 'porter unicode61' The unicode tokenizer supports the following parameters: * remove_diacritics (1 or 0, default is 1) * tokenchars (string of characters, e.g. '-_' * separators (string of characters) Parameters are passed as alternating parameter name and value, so: {'tokenize': "unicode61 remove_diacritics 0 tokenchars '-_'"} Content-less tables: If you don't need the full-text content in it's original form, you can specify a content-less table. Searches and auxiliary functions will work as usual, but the only values returned when SELECT-ing can be rowid. Also content-less tables do not support UPDATE or DELETE. External content tables: You can set up triggers to sync these, e.g. -- Create a table. And an external content fts5 table to index it. CREATE TABLE tbl(a INTEGER PRIMARY KEY, b); CREATE VIRTUAL TABLE ft USING fts5(b, content='tbl', content_rowid='a'); -- Triggers to keep the FTS index up to date. 
CREATE TRIGGER tbl_ai AFTER INSERT ON tbl BEGIN INSERT INTO ft(rowid, b) VALUES (new.a, new.b); END; CREATE TRIGGER tbl_ad AFTER DELETE ON tbl BEGIN INSERT INTO ft(fts_idx, rowid, b) VALUES('delete', old.a, old.b); END; CREATE TRIGGER tbl_au AFTER UPDATE ON tbl BEGIN INSERT INTO ft(fts_idx, rowid, b) VALUES('delete', old.a, old.b); INSERT INTO ft(rowid, b) VALUES (new.a, new.b); END; Built-in auxiliary functions: * bm25(tbl[, weight_0, ... weight_n]) * highlight(tbl, col_idx, prefix, suffix) * snippet(tbl, col_idx, prefix, suffix, ?, max_tokens) """ # FTS5 does not support declared primary keys, but we can use the # implicit rowid. rowid = RowIDField() class Meta: extension_module = 'fts5' _error_messages = { 'field_type': ('Besides the implicit `rowid` column, all columns must ' 'be instances of SearchField'), 'index': 'Secondary indexes are not supported for FTS5 models', 'pk': 'FTS5 models must use the default `rowid` primary key', } @classmethod def validate_model(cls): # Perform FTS5-specific validation and options post-processing. if cls._meta.primary_key.name != 'rowid': raise ImproperlyConfigured(cls._error_messages['pk']) for field in cls._meta.fields.values(): if not isinstance(field, (SearchField, RowIDField)): raise ImproperlyConfigured(cls._error_messages['field_type']) if cls._meta.indexes: raise ImproperlyConfigured(cls._error_messages['index']) @classmethod def fts5_installed(cls): if sqlite3.sqlite_version_info[:3] < FTS5_MIN_VERSION: return False # Test in-memory DB to determine if the FTS5 extension is installed. tmp_db = sqlite3.connect(':memory:') try: tmp_db.execute('CREATE VIRTUAL TABLE fts5test USING fts5 (data);') except: try: sqlite3.enable_load_extension(True) sqlite3.load_extension('fts5') except: return False else: cls._meta.database.load_extension('fts5') finally: tmp_db.close() return True @staticmethod def validate_query(query): """ Simple helper function to indicate whether a search query is a valid FTS5 query. Note: this simply looks at the characters being used, and is not guaranteed to catch all problematic queries. """ tokens = _quote_re.findall(query) for token in tokens: if token.startswith('"') and token.endswith('"'): continue if set(token) & _invalid_ascii: return False return True @staticmethod def clean_query(query, replace=chr(26)): """ Clean a query of invalid tokens. """ accum = [] any_invalid = False tokens = _quote_re.findall(query) for token in tokens: if token.startswith('"') and token.endswith('"'): accum.append(token) continue token_set = set(token) invalid_for_token = token_set & _invalid_ascii if invalid_for_token: any_invalid = True for c in invalid_for_token: token = token.replace(c, replace) accum.append(token) if any_invalid: return ' '.join(accum) return query @classmethod def match(cls, term): """ Generate a `MATCH` expression appropriate for searching this table. 
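
        Example (a sketch; "Note" stands in for your FTS5Model subclass):

            query = Note.select().where(Note.match('some phrase'))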
""" return match(cls.as_entity(), term) @classmethod def rank(cls, *args): if args: return cls.bm25(*args) else: return SQL('rank') @classmethod def bm25(cls, *weights): return fn.bm25(cls.as_entity(), *weights) @classmethod def search(cls, term, weights=None, with_score=False, score_alias='score', explicit_ordering=False): """Full-text search using selected `term`.""" return cls.search_bm25( FTS5Model.clean_query(term), weights, with_score, score_alias, explicit_ordering) @classmethod def search_bm25(cls, term, weights=None, with_score=False, score_alias='score', explicit_ordering=False): """Full-text search using selected `term`.""" if not weights: rank = SQL('rank') elif isinstance(weights, dict): weight_args = [] for field in cls._meta.declared_fields: weight_args.append( weights.get(field, weights.get(field.name, 1.0))) rank = fn.bm25(cls.as_entity(), *weight_args) else: rank = fn.bm25(cls.as_entity(), *weights) selection = () order_by = rank if with_score: selection = (cls, rank.alias(score_alias)) if with_score and not explicit_ordering: order_by = SQL(score_alias) return (cls .select(*selection) .where(cls.match(FTS5Model.clean_query(term))) .order_by(order_by)) @classmethod def _fts_cmd(cls, cmd, **extra_params): tbl = cls.as_entity() columns = [tbl] values = [cmd] for key, value in extra_params.items(): columns.append(Entity(key)) values.append(value) inner_clause = EnclosedClause(tbl) clause = Clause( SQL('INSERT INTO'), cls.as_entity(), EnclosedClause(*columns), SQL('VALUES'), EnclosedClause(*values)) return cls._meta.database.execute(clause) @classmethod def automerge(cls, level): if not (0 <= level <= 16): raise ValueError('level must be between 0 and 16') return cls._fts_cmd('automerge', rank=level) @classmethod def merge(cls, npages): return cls._fts_cmd('merge', rank=npages) @classmethod def set_pgsz(cls, pgsz): return cls._fts_cmd('pgsz', rank=pgsz) @classmethod def set_rank(cls, rank_expression): return cls._fts_cmd('rank', rank=rank_expression) @classmethod def delete_all(cls): return cls._fts_cmd('delete-all') @classmethod def VocabModel(cls, table_type='row', table_name=None): if table_type not in ('row', 'col'): raise ValueError('table_type must be either "row" or "col".') attr = '_vocab_model_%s' % table_type if not hasattr(cls, attr): class Meta: database = cls._meta.database db_table = table_name or cls._meta.db_table + '_v' extension_module = fn.fts5vocab( cls.as_entity(), SQL(table_type)) attrs = { 'term': BareField(), 'doc': IntegerField(), 'cnt': IntegerField(), 'rowid': RowIDField(), 'Meta': Meta, } if table_type == 'col': attrs['col'] = BareField() class_name = '%sVocab' % cls.__name__ setattr(cls, attr, type(class_name, (VirtualModel,), attrs)) return getattr(cls, attr) def ClosureTable(model_class, foreign_key=None, referencing_class=None, referencing_key=None): """Model factory for the transitive closure extension.""" if referencing_class is None: referencing_class = model_class if foreign_key is None: for field_obj in model_class._meta.rel.values(): if field_obj.rel_model is model_class: foreign_key = field_obj break else: raise ValueError('Unable to find self-referential foreign key.') source_key = model_class._meta.primary_key if referencing_key is None: referencing_key = source_key class BaseClosureTable(VirtualModel): depth = VirtualIntegerField() id = VirtualIntegerField() idcolumn = VirtualCharField() parentcolumn = VirtualCharField() root = VirtualIntegerField() tablename = VirtualCharField() class Meta: extension_module = 'transitive_closure' 
@classmethod def descendants(cls, node, depth=None, include_node=False): query = (model_class .select(model_class, cls.depth.alias('depth')) .join(cls, on=(source_key == cls.id)) .where(cls.root == node) .naive()) if depth is not None: query = query.where(cls.depth == depth) elif not include_node: query = query.where(cls.depth > 0) return query @classmethod def ancestors(cls, node, depth=None, include_node=False): query = (model_class .select(model_class, cls.depth.alias('depth')) .join(cls, on=(source_key == cls.root)) .where(cls.id == node) .naive()) if depth: query = query.where(cls.depth == depth) elif not include_node: query = query.where(cls.depth > 0) return query @classmethod def siblings(cls, node, include_node=False): if referencing_class is model_class: # self-join fk_value = node._data.get(foreign_key.name) query = model_class.select().where(foreign_key == fk_value) else: # siblings as given in reference_class siblings = (referencing_class .select(referencing_key) .join(cls, on=(foreign_key == cls.root)) .where((cls.id == node) & (cls.depth == 1))) # the according models query = (model_class .select() .where(source_key << siblings) .naive()) if not include_node: query = query.where(source_key != node) return query class Meta: database = referencing_class._meta.database extension_options = { 'tablename': referencing_class._meta.db_table, 'idcolumn': referencing_key.db_column, 'parentcolumn': foreign_key.db_column} primary_key = False name = '%sClosure' % model_class.__name__ return type(name, (BaseClosureTable,), {'Meta': Meta}) class SqliteExtQueryCompiler(SqliteQueryCompiler): """ Subclass of QueryCompiler that can be used to construct virtual tables. """ def _create_table(self, model_class, safe=False, options=None): clause = super(SqliteExtQueryCompiler, self)._create_table( model_class, safe=safe) if issubclass(model_class, VirtualModel): statement = 'CREATE VIRTUAL TABLE' # If we are using a special extension, need to insert that after # the table name node. extension = model_class._meta.extension_module if isinstance(extension, Node): # If the `extension_module` attribute is a `Node` subclass, # then we assume the VirtualModel will be responsible for # defining not only the extension, but also the columns. parts = clause.nodes[:2] + [SQL('USING'), extension] clause = Clause(*parts) else: # The extension name is a simple string. clause.nodes.insert(2, SQL('USING %s' % extension)) else: statement = 'CREATE TABLE' if safe: statement += ' IF NOT EXISTS' clause.nodes[0] = SQL(statement) # Overwrite the statement. table_options = self.clean_options(model_class, clause, options) if table_options: columns_constraints = clause.nodes[-1] for k, v in sorted(table_options.items()): if isinstance(v, Field): # Special hack here for FTS5. We want to include the # fully-qualified column entity in most cases, but for # FTS5 we only want the string column name. v = v.as_entity(extension != 'fts5') elif inspect.isclass(v) and issubclass(v, Model): # The option points to a table name. v = v.as_entity() elif isinstance(v, (list, tuple)): # Lists will be quoted and joined by commas. 
v = SQL("'%s'" % ','.join(map(str, v))) elif not isinstance(v, Node): v = SQL(v) option = Clause(SQL(k), v) option.glue = '=' columns_constraints.nodes.append(option) if getattr(model_class._meta, 'without_rowid', None): clause.nodes.append(SQL('WITHOUT ROWID')) return clause def clean_options(self, model_class, clause, extra_options): model_options = getattr(model_class._meta, 'extension_options', None) if model_options: options = model_class.clean_options(**model_options) else: options = {} if extra_options: options.update(model_class.clean_options(**extra_options)) return options def create_table(self, model_class, safe=False, options=None): return self.parse_node(self._create_table(model_class, safe, options)) @Node.extend(clone=False) def disqualify(self): # In the where clause, prevent the given node/expression from constraining # an index. return Clause('+', self, glue='') class SqliteExtDatabase(SqliteDatabase): """ Database class which provides additional Sqlite-specific functionality: * Register custom aggregates, collations and functions * Specify a row factory * Advanced transactions (specify isolation level) """ compiler_class = SqliteExtQueryCompiler def __init__(self, database, c_extensions=True, *args, **kwargs): super(SqliteExtDatabase, self).__init__(database, *args, **kwargs) self._aggregates = {} self._collations = {} self._functions = {} self._extensions = set([]) self._row_factory = None if _c_ext and c_extensions: self._using_c_extensions = True self.register_function(_c_ext.peewee_date_part, 'date_part', 2) self.register_function(_c_ext.peewee_date_trunc, 'date_trunc', 2) self.register_function(_c_ext.peewee_regexp, 'regexp', -1) self.register_function(_c_ext.peewee_rank, 'fts_rank', -1) self.register_function(_c_ext.peewee_lucene, 'fts_lucene', -1) self.register_function(_c_ext.peewee_bm25, 'fts_bm25', -1) self.register_function(_c_ext.peewee_murmurhash, 'murmurhash', 1) else: self._using_c_extensions = False self.register_function(_sqlite_date_part, 'date_part', 2) self.register_function(_sqlite_date_trunc, 'date_trunc', 2) self.register_function(_sqlite_regexp, 'regexp', -1) self.register_function(rank, 'fts_rank', -1) self.register_function(bm25, 'fts_bm25', -1) @property def using_c_extensions(self): return self._using_c_extensions def _add_conn_hooks(self, conn): self._set_pragmas(conn) self._load_aggregates(conn) self._load_collations(conn) self._load_functions(conn) if self._row_factory: conn.row_factory = self._row_factory if self._extensions: self._load_extensions(conn) def _load_aggregates(self, conn): for name, (klass, num_params) in self._aggregates.items(): conn.create_aggregate(name, num_params, klass) def _load_collations(self, conn): for name, fn in self._collations.items(): conn.create_collation(name, fn) def _load_functions(self, conn): for name, (fn, num_params) in self._functions.items(): conn.create_function(name, num_params, fn) def _load_extensions(self, conn): conn.enable_load_extension(True) for extension in self._extensions: conn.load_extension(extension) def register_aggregate(self, klass, name=None, num_params=-1): self._aggregates[name or klass.__name__.lower()] = (klass, num_params) if not self.is_closed(): self._load_aggregates(self.get_conn()) def aggregate(self, name=None, num_params=-1): def decorator(klass): self.register_aggregate(klass, name, num_params) return klass return decorator def register_collation(self, fn, name=None): name = name or fn.__name__ def _collation(*args): expressions = args + (SQL('collate %s' % name),) 
            return Clause(*expressions)
        fn.collation = _collation
        self._collations[name] = fn
        if not self.is_closed():
            self._load_collations(self.get_conn())

    def collation(self, name=None):
        def decorator(fn):
            self.register_collation(fn, name)
            return fn
        return decorator

    def register_function(self, fn, name=None, num_params=-1):
        self._functions[name or fn.__name__] = (fn, num_params)
        if not self.is_closed():
            self._load_functions(self.get_conn())

    def func(self, name=None, num_params=-1):
        def decorator(fn):
            self.register_function(fn, name, num_params)
            return fn
        return decorator

    def load_extension(self, extension):
        self._extensions.add(extension)
        if not self.is_closed():
            conn = self.get_conn()
            conn.enable_load_extension(True)
            conn.load_extension(extension)

    def unregister_aggregate(self, name):
        del self._aggregates[name]

    def unregister_collation(self, name):
        del self._collations[name]

    def unregister_function(self, name):
        del self._functions[name]

    def unload_extension(self, extension):
        self._extensions.remove(extension)

    def row_factory(self, fn):
        self._row_factory = fn

    def create_table(self, model_class, safe=False, options=None):
        sql, params = self.compiler().create_table(model_class, safe, options)
        return self.execute_sql(sql, params)

    def create_index(self, model_class, field_name, unique=False):
        if issubclass(model_class, FTSModel):
            return
        return super(SqliteExtDatabase, self).create_index(
            model_class, field_name, unique)


OP.MATCH = 'match'
SqliteExtDatabase.register_ops({
    OP.MATCH: 'MATCH',
})


def match(lhs, rhs):
    return Expression(lhs, OP.MATCH, rhs)


def _parse_match_info(buf):
    # See http://sqlite.org/fts3.html#matchinfo
    bufsize = len(buf)  # Length in bytes.
    return [struct.unpack('@I', buf[i:i+4])[0] for i in range(0, bufsize, 4)]


# Ranking implementation, which parses matchinfo.
def rank(raw_match_info, *raw_weights):
    # Handle match_info called w/default args 'pcx' - based on the example
    # rank function http://sqlite.org/fts3.html#appendix_a
    match_info = _parse_match_info(raw_match_info)
    score = 0.0

    p, c = match_info[:2]
    if not raw_weights:
        weights = [1] * c
    else:
        weights = [0] * c
        for i, weight in enumerate(raw_weights):
            weights[i] = weight

    for phrase_num in range(p):
        phrase_info_idx = 2 + (phrase_num * c * 3)
        for col_num in range(c):
            weight = weights[col_num]
            if not weight:
                continue

            col_idx = phrase_info_idx + (col_num * 3)
            # x1: hits for this phrase/column in the current row.
            # x2: hits for this phrase/column across all rows.
            x1, x2 = match_info[col_idx:col_idx + 2]
            if x1 > 0:
                score += weight * (float(x1) / x2)

    return -score


# Okapi BM25 ranking implementation (FTS4 only).
def bm25(raw_match_info, *args):
    """
    Usage:

        # Format string *must* be pcnalx
        # Second parameter to bm25 specifies the index of the column, on
        # the table being queried.
        bm25(matchinfo(document_tbl, 'pcnalx'), 1) AS rank
    """
    match_info = _parse_match_info(raw_match_info)
    K = 1.2
    B = 0.75
    score = 0.0

    P_O, C_O, N_O, A_O = range(4)
    term_count = match_info[P_O]
    col_count = match_info[C_O]
    total_docs = match_info[N_O]
    L_O = A_O + col_count
    X_O = L_O + col_count

    if not args:
        weights = [1] * col_count
    else:
        weights = [0] * col_count
        for i, weight in enumerate(args):
            weights[i] = weight

    for i in range(term_count):
        for j in range(col_count):
            weight = weights[j]
            if weight == 0:
                continue

            avg_length = float(match_info[A_O + j])
            doc_length = float(match_info[L_O + j])
            if avg_length == 0:
                D = 0
            else:
                D = 1 - B + (B * (doc_length / avg_length))

            # The 'x' section of the matchinfo blob contains one triple of
            # integers per phrase/column combination, in phrase-major order
            # (phrase 0 col 0, phrase 0 col 1, ...).
            x = X_O + (3 * (j + (i * col_count)))
            term_frequency = float(match_info[x])
            docs_with_term = float(match_info[x + 2])

            idf = max(
                math.log(
                    (total_docs - docs_with_term + 0.5) /
                    (docs_with_term + 0.5)),
                0)
            denom = term_frequency + (K * D)
            if denom == 0:
                rhs = 0
            else:
                rhs = (term_frequency * (K + 1)) / denom

            score += (idf * rhs) * weight

    return -score
peewee-2.10.2/playhouse/sqlite_udf.py000066400000000000000000000324101316645060400175750ustar00rootroot00000000000000import datetime
import hashlib
import heapq
import math
import os
import random
import re
import sys
import threading
import zlib
try:
    from collections import Counter
except ImportError:
    Counter = None
try:
    from urlparse import urlparse
except ImportError:
    from urllib.parse import urlparse
try:
    from vtfunc import TableFunction
except ImportError:
    TableFunction = None

from peewee import binary_construct
from peewee import unicode_type

try:
    from playhouse._speedups import format_date_time_sqlite
except ImportError:
    from peewee import format_date_time
    from peewee import SQLITE_DATETIME_FORMATS
    def format_date_time_sqlite(date_value):
        return format_date_time(date_value, SQLITE_DATETIME_FORMATS)

try:
    from playhouse import _sqlite_udf as cython_udf
except ImportError:
    cython_udf = None


# Group udf by function.
CONTROL_FLOW = 'control_flow'
DATE = 'date'
FILE = 'file'
HELPER = 'helpers'
MATH = 'math'
STRING = 'string'

AGGREGATE_COLLECTION = {}
TABLE_FUNCTION_COLLECTION = {}
UDF_COLLECTION = {}


class synchronized_dict(dict):
    def __init__(self, *args, **kwargs):
        super(synchronized_dict, self).__init__(*args, **kwargs)
        self._lock = threading.Lock()

    def __getitem__(self, key):
        with self._lock:
            return super(synchronized_dict, self).__getitem__(key)

    def __setitem__(self, key, value):
        with self._lock:
            return super(synchronized_dict, self).__setitem__(key, value)

    def __delitem__(self, key):
        with self._lock:
            return super(synchronized_dict, self).__delitem__(key)


STATE = synchronized_dict()
SETTINGS = synchronized_dict()

# Class and function decorators.
def aggregate(*groups):
    def decorator(klass):
        for group in groups:
            AGGREGATE_COLLECTION.setdefault(group, [])
            AGGREGATE_COLLECTION[group].append(klass)
        return klass
    return decorator

def table_function(*groups):
    def decorator(klass):
        for group in groups:
            TABLE_FUNCTION_COLLECTION.setdefault(group, [])
            TABLE_FUNCTION_COLLECTION[group].append(klass)
        return klass
    return decorator

def udf(*groups):
    def decorator(fn):
        for group in groups:
            UDF_COLLECTION.setdefault(group, [])
            UDF_COLLECTION[group].append(fn)
        return fn
    return decorator

# Register aggregates / functions with connection.
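# Hedged usage sketch for the grouping decorators above: `reverse_string`
# and the 'custom' group name are illustrative placeholders, not part of
# this module. Functions registered this way are picked up by the
# register_*_groups() helpers that follow.
#
#     @udf('custom')
#     def reverse_string(s):
#         return s[::-1] if s is not None else None
#
#     import sqlite3
#     conn = sqlite3.connect(':memory:')
#     register_udf_groups(conn, 'custom')  # or register_all(conn)
#     print(conn.execute("SELECT reverse_string('peewee')").fetchone())
#     # -> ('eeweep',)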
def register_aggregate_groups(conn, *groups): seen = set() for group in groups: klasses = AGGREGATE_COLLECTION[group] for klass in klasses: name = getattr(klass, 'name', klass.__name__) if name not in seen: seen.add(name) conn.create_aggregate(name, -1, klass) def register_table_function_groups(conn, *groups): seen = set() for group in groups: klasses = TABLE_FUNCTION_COLLECTION[group] for klass in klasses: if klass.name not in seen: seen.add(klass.name) klass.register(conn) def register_udf_groups(conn, *groups): seen = set() for group in groups: functions = UDF_COLLECTION[group] for function in functions: name = function.__name__ if name not in seen: seen.add(name) conn.create_function(name, -1, function) def register_all(conn): register_aggregate_groups(conn, *AGGREGATE_COLLECTION) register_table_function_groups(conn, *TABLE_FUNCTION_COLLECTION) register_udf_groups(conn, *UDF_COLLECTION) # Begin actual user-defined functions and aggregates. # Scalar functions. @udf(CONTROL_FLOW) def if_then_else(cond, truthy, falsey=None): if cond: return truthy return falsey @udf(DATE) def strip_tz(date_str): date_str = date_str.replace('T', ' ') tz_idx1 = date_str.find('+') if tz_idx1 != -1: return date_str[:tz_idx1] tz_idx2 = date_str.find('-') if tz_idx2 > 13: return date_str[:tz_idx2] return date_str @udf(DATE) def human_delta(nseconds, glue=', '): parts = ( (86400 * 365, 'year'), (86400 * 30, 'month'), (86400 * 7, 'week'), (86400, 'day'), (3600, 'hour'), (60, 'minute'), (1, 'second'), ) accum = [] for offset, name in parts: val, nseconds = divmod(nseconds, offset) if val: suffix = val != 1 and 's' or '' accum.append('%s %s%s' % (val, name, suffix)) if not accum: return '0 seconds' return glue.join(accum) @udf(FILE) def file_ext(filename): try: res = os.path.splitext(filename) except ValueError: return None return res[1] @udf(FILE) def file_read(filename): try: with open(filename) as fh: return fh.read() except: pass if sys.version_info[0] == 2: @udf(HELPER) def gzip(data, compression=9): return binary_construct(zlib.compress(data, compression)) @udf(HELPER) def gunzip(data): return zlib.decompress(data) else: @udf(HELPER) def gzip(data, compression=9): return zlib.compress(binary_construct(data), compression) @udf(HELPER) def gunzip(data): return zlib.decompress(data).decode('utf-8') @udf(HELPER) def hostname(url): parse_result = urlparse(url) if parse_result: return parse_result.netloc @udf(HELPER) def toggle(key, on=None): key = key.lower() if on is not None: STATE[key] = on else: STATE[key] = on = not STATE.get(key) return on @udf(HELPER) def setting(key, *args): if not args: return SETTINGS.get(key) elif len(args) == 1: SETTINGS[key] = args[0] else: return False @udf(HELPER) def clear_settings(): SETTINGS.clear() @udf(HELPER) def clear_toggles(): STATE.clear() @udf(MATH) def randomrange(start, end=None, step=None): if end is None: start, end = 0, start elif step is None: step = 1 return random.randrange(start, end, step) @udf(MATH) def gauss_distribution(mean, sigma): try: return random.gauss(mean, sigma) except ValueError: return None @udf(MATH) def sqrt(n): try: return math.sqrt(n) except ValueError: return None @udf(MATH) def tonumber(s): try: return int(s) except ValueError: try: return float(s) except: return None @udf(STRING) def substr_count(haystack, needle): if not haystack or not needle: return 0 return haystack.count(needle) @udf(STRING) def strip_chars(haystack, chars): return unicode_type(haystack).strip(chars) def _hash(constructor, *args): hash_obj = constructor() for arg in 
args:
        hash_obj.update(arg)
    return hash_obj.hexdigest()

@udf(STRING)
def md5(*vals):
    # Feed every argument into the digest.
    return _hash(hashlib.md5, *vals)

@udf(STRING)
def sha1(*vals):
    return _hash(hashlib.sha1, *vals)

@udf(STRING)
def sha256(*vals):
    return _hash(hashlib.sha256, *vals)

@udf(STRING)
def sha512(*vals):
    return _hash(hashlib.sha512, *vals)

@udf(STRING)
def adler32(s):
    return zlib.adler32(s)

@udf(STRING)
def crc32(s):
    return zlib.crc32(s)


# Aggregates.
class _heap_agg(object):
    def __init__(self):
        self.heap = []
        self.ct = 0

    def process(self, value):
        return value

    def step(self, value):
        self.ct += 1
        heapq.heappush(self.heap, self.process(value))

class _datetime_heap_agg(_heap_agg):
    def process(self, value):
        # Despite the name, this parses the incoming date string into a
        # datetime so that the heap orders values chronologically.
        return format_date_time_sqlite(value)

if sys.version_info[:2] == (2, 6):
    def total_seconds(td):
        return (td.seconds +
                (td.days * 86400) +
                (td.microseconds / (10.**6)))
else:
    total_seconds = lambda td: td.total_seconds()

@aggregate(DATE)
class mintdiff(_datetime_heap_agg):
    def finalize(self):
        dtp = min_diff = None
        while self.heap:
            if min_diff is None:
                if dtp is None:
                    dtp = heapq.heappop(self.heap)
                    continue
            dt = heapq.heappop(self.heap)
            diff = dt - dtp
            if min_diff is None or min_diff > diff:
                min_diff = diff
            dtp = dt
        if min_diff is not None:
            return total_seconds(min_diff)

@aggregate(DATE)
class avgtdiff(_datetime_heap_agg):
    def finalize(self):
        if self.ct < 1:
            return
        elif self.ct == 1:
            return 0

        total = ct = 0
        dtp = None
        while self.heap:
            if total == 0:
                if dtp is None:
                    dtp = heapq.heappop(self.heap)
                    continue
            dt = heapq.heappop(self.heap)
            diff = dt - dtp
            ct += 1
            total += total_seconds(diff)
            dtp = dt

        return float(total) / ct

@aggregate(DATE)
class duration(object):
    def __init__(self):
        self._min = self._max = None

    def step(self, value):
        dt = format_date_time_sqlite(value)
        if self._min is None or dt < self._min:
            self._min = dt
        if self._max is None or dt > self._max:
            self._max = dt

    def finalize(self):
        if self._min and self._max:
            td = (self._max - self._min)
            return total_seconds(td)
        return None

@aggregate(MATH)
class mode(object):
    if Counter:
        def __init__(self):
            self.items = Counter()

        def step(self, *args):
            self.items.update(args)

        def finalize(self):
            if self.items:
                return self.items.most_common(1)[0][0]
    else:
        def __init__(self):
            self.items = []

        def step(self, item):
            self.items.append(item)

        def finalize(self):
            if self.items:
                return max(set(self.items), key=self.items.count)

@aggregate(MATH)
class minrange(_heap_agg):
    def finalize(self):
        if self.ct == 0:
            return
        elif self.ct == 1:
            return 0

        prev = min_diff = None
        while self.heap:
            if min_diff is None:
                if prev is None:
                    prev = heapq.heappop(self.heap)
                    continue
            curr = heapq.heappop(self.heap)
            diff = curr - prev
            if min_diff is None or min_diff > diff:
                min_diff = diff
            prev = curr
        return min_diff

@aggregate(MATH)
class avgrange(_heap_agg):
    def finalize(self):
        if self.ct == 0:
            return
        elif self.ct == 1:
            return 0

        total = ct = 0
        prev = None
        while self.heap:
            if total == 0:
                if prev is None:
                    prev = heapq.heappop(self.heap)
                    continue
            curr = heapq.heappop(self.heap)
            diff = curr - prev
            ct += 1
            total += diff
            prev = curr
        return float(total) / ct

@aggregate(MATH)
class _range(object):
    name = 'range'

    def __init__(self):
        self._min = self._max = None

    def step(self, value):
        if self._min is None or value < self._min:
            self._min = value
        if self._max is None or value > self._max:
            self._max = value

    def finalize(self):
        if self._min is not None and self._max is not None:
            return self._max - self._min
        return None

if cython_udf is not None:
    damerau_levenshtein_dist = udf(STRING)(
        cython_udf.damerau_levenshtein_dist)
    levenshtein_dist = udf(STRING)(cython_udf.levenshtein_dist)
    str_dist = udf(STRING)(cython_udf.str_dist)
    median = aggregate(MATH)(cython_udf.median)


if TableFunction is not None:
    @table_function(STRING)
    class RegexSearch(TableFunction):
        params = ['regex', 'search_string']
        columns = ['match']
        name = 'regex_search'

        def initialize(self, regex=None, search_string=None):
            self._iter = re.finditer(regex, search_string)

        def iterate(self, idx):
            # Raises StopIteration when the underlying iterator is exhausted.
            return (next(self._iter).group(0),)

    @table_function(DATE)
    class DateSeries(TableFunction):
        params = ['start', 'stop', 'step_seconds']
        columns = ['date']
        name = 'date_series'

        def initialize(self, start, stop, step_seconds=86400):
            self.start = format_date_time_sqlite(start)
            self.stop = format_date_time_sqlite(stop)
            step_seconds = int(step_seconds)
            self.step_seconds = datetime.timedelta(seconds=step_seconds)

            if (self.start.hour == 0 and
                    self.start.minute == 0 and
                    self.start.second == 0 and
                    step_seconds >= 86400):
                self.format = '%Y-%m-%d'
            elif (self.start.year == 1900 and
                    self.start.month == 1 and
                    self.start.day == 1 and
                    self.stop.year == 1900 and
                    self.stop.month == 1 and
                    self.stop.day == 1 and
                    step_seconds < 86400):
                self.format = '%H:%M:%S'
            else:
                self.format = '%Y-%m-%d %H:%M:%S'

        def iterate(self, idx):
            if self.start > self.stop:
                raise StopIteration
            current = self.start
            self.start += self.step_seconds
            return (current.strftime(self.format),)
peewee-2.10.2/playhouse/sqliteq.py000066400000000000000000000243251316645060400171260ustar00rootroot00000000000000import logging
import weakref
from threading import local as thread_local
from threading import Event
from threading import Thread
try:
    from Queue import Queue
except ImportError:
    from queue import Queue

try:
    import gevent
    from gevent import Greenlet as GThread
    from gevent.event import Event as GEvent
    from gevent.local import local as greenlet_local
    from gevent.queue import Queue as GQueue
except ImportError:
    GThread = GQueue = GEvent = None

from playhouse.sqlite_ext import SqliteExtDatabase


logger = logging.getLogger('peewee.sqliteq')


class ResultTimeout(Exception):
    pass

class WriterPaused(Exception):
    pass

class ShutdownException(Exception):
    pass


class AsyncCursor(object):
    __slots__ = ('sql', 'params', 'commit', 'timeout',
                 '_event', '_cursor', '_exc', '_idx', '_rows', '_ready')

    def __init__(self, event, sql, params, commit, timeout):
        self._event = event
        self.sql = sql
        self.params = params
        self.commit = commit
        self.timeout = timeout
        self._cursor = self._exc = self._idx = self._rows = None
        self._ready = False

    def set_result(self, cursor, exc=None):
        self._cursor = cursor
        self._exc = exc
        self._idx = 0
        self._rows = cursor.fetchall() if exc is None else []
        self._event.set()
        return self

    def _wait(self, timeout=None):
        timeout = timeout if timeout is not None else self.timeout
        if not self._event.wait(timeout=timeout) and timeout:
            raise ResultTimeout('results not ready, timed out.')
        if self._exc is not None:
            raise self._exc
        self._ready = True

    def __iter__(self):
        if not self._ready:
            self._wait()
        if self._exc is not None:
            raise self._exc
        return self

    def next(self):
        if not self._ready:
            self._wait()
        try:
            obj = self._rows[self._idx]
        except IndexError:
            raise StopIteration
        else:
            self._idx += 1
            return obj
    __next__ = next

    @property
    def lastrowid(self):
        if not self._ready:
            self._wait()
        return self._cursor.lastrowid

    @property
    def rowcount(self):
        if not self._ready:
            self._wait()
        return self._cursor.rowcount

    @property
    def description(self):
        return self._cursor.description

    def close(self):
        self._cursor.close()

    def fetchall(self):
        return list(self)  #
Iterating implies waiting until populated. def fetchone(self): if not self._ready: self._wait() try: return next(self) except StopIteration: return None SHUTDOWN = StopIteration PAUSE = object() UNPAUSE = object() class Writer(object): __slots__ = ('database', 'queue') def __init__(self, database, queue): self.database = database self.queue = queue def run(self): conn = self.database.get_conn() try: while True: try: if conn is None: # Paused. if self.wait_unpause(): conn = self.database.get_conn() else: conn = self.loop(conn) except ShutdownException: logger.info('writer received shutdown request, exiting.') return finally: if conn is not None: self.database._close(conn) self.database._local.closed = True def wait_unpause(self): obj = self.queue.get() if obj is UNPAUSE: logger.info('writer unpaused - reconnecting to database.') return True elif obj is SHUTDOWN: raise ShutdownException() elif obj is PAUSE: logger.error('writer received pause, but is already paused.') else: obj.set_result(None, WriterPaused()) logger.warning('writer paused, not handling %s', obj) def loop(self, conn): obj = self.queue.get() if isinstance(obj, AsyncCursor): self.execute(obj) elif obj is PAUSE: logger.info('writer paused - closing database connection.') self.database._close(conn) self.database._local.closed = True return elif obj is UNPAUSE: logger.error('writer received unpause, but is already running.') elif obj is SHUTDOWN: raise ShutdownException() else: logger.error('writer received unsupported object: %s', obj) return conn def execute(self, obj): logger.debug('received query %s', obj.sql) try: cursor = self.database._execute(obj.sql, obj.params, obj.commit) except Exception as execute_err: cursor = None exc = execute_err # python3 is so fucking lame. else: exc = None return obj.set_result(cursor, exc) class SqliteQueueDatabase(SqliteExtDatabase): WAL_MODE_ERROR_MESSAGE = ('SQLite must be configured to use the WAL ' 'journal mode when using this feature. WAL mode ' 'allows one or more readers to continue reading ' 'while another connection writes to the ' 'database.') def __init__(self, database, use_gevent=False, autostart=True, queue_max_size=None, results_timeout=None, *args, **kwargs): if 'threadlocals' in kwargs and not kwargs['threadlocals']: raise ValueError('"threadlocals" must be true to use the ' 'SqliteQueueDatabase.') kwargs['threadlocals'] = True kwargs['check_same_thread'] = False # Ensure that journal_mode is WAL. This value is passed to the parent # class constructor below. pragmas = self._validate_journal_mode( kwargs.pop('journal_mode', None), kwargs.pop('pragmas', None)) # Reference to execute_sql on the parent class. Since we've overridden # execute_sql(), this is just a handy way to reference the real # implementation. Parent = super(SqliteQueueDatabase, self) self._execute = Parent.execute_sql # Call the parent class constructor with our modified pragmas. Parent.__init__(database, pragmas=pragmas, *args, **kwargs) self._autostart = autostart self._results_timeout = results_timeout self._is_stopped = True # Get different objects depending on the threading implementation. self._thread_helper = self.get_thread_impl(use_gevent)(queue_max_size) # Create the writer thread, optionally starting it. 
self._create_write_queue() if self._autostart: self.start() def get_thread_impl(self, use_gevent): return GreenletHelper if use_gevent else ThreadHelper def _validate_journal_mode(self, journal_mode=None, pragmas=None): if journal_mode and journal_mode.lower() != 'wal': raise ValueError(self.WAL_MODE_ERROR_MESSAGE) if pragmas: pdict = dict((k.lower(), v) for (k, v) in pragmas) if pdict.get('journal_mode', 'wal').lower() != 'wal': raise ValueError(self.WAL_MODE_ERROR_MESSAGE) return [(k, v) for (k, v) in pragmas if k != 'journal_mode'] + [('journal_mode', 'wal')] else: return [('journal_mode', 'wal')] def _create_write_queue(self): self._write_queue = self._thread_helper.queue() def queue_size(self): return self._write_queue.qsize() def execute_sql(self, sql, params=None, require_commit=True, timeout=None): if not require_commit: return self._execute(sql, params, require_commit=require_commit) cursor = AsyncCursor( event=self._thread_helper.event(), sql=sql, params=params, commit=require_commit, timeout=self._results_timeout if timeout is None else timeout) self._write_queue.put(cursor) return cursor def start(self): with self._conn_lock: if not self._is_stopped: return False def run(): writer = Writer(self, self._write_queue) writer.run() self._writer = self._thread_helper.thread(run) self._writer.start() self._is_stopped = False return True def stop(self): logger.debug('environment stop requested.') with self._conn_lock: if self._is_stopped: return False self._write_queue.put(SHUTDOWN) self._writer.join() self._is_stopped = True return True def is_stopped(self): with self._conn_lock: return self._is_stopped def pause(self): with self._conn_lock: self._write_queue.put(PAUSE) def unpause(self): with self._conn_lock: self._write_queue.put(UNPAUSE) def __unsupported__(self, *args, **kwargs): raise ValueError('This method is not supported by %r.' % type(self)) atomic = transaction = savepoint = __unsupported__ class ThreadHelper(object): __slots__ = ('queue_max_size',) def __init__(self, queue_max_size=None): self.queue_max_size = queue_max_size def event(self): return Event() def queue(self, max_size=None): max_size = max_size if max_size is not None else self.queue_max_size return Queue(maxsize=max_size or 0) def thread(self, fn, *args, **kwargs): thread = Thread(target=fn, args=args, kwargs=kwargs) thread.daemon = True return thread class GreenletHelper(ThreadHelper): __slots__ = () def event(self): return GEvent() def queue(self, max_size=None): max_size = max_size if max_size is not None else self.queue_max_size return GQueue(maxsize=max_size or 0) def thread(self, fn, *args, **kwargs): def wrap(*a, **k): gevent.sleep() return fn(*a, **k) return GThread(wrap, *args, **kwargs) peewee-2.10.2/playhouse/test_utils.py000066400000000000000000000053571316645060400176470ustar00rootroot00000000000000from functools import wraps import logging from peewee import create_model_tables from peewee import drop_model_tables logger = logging.getLogger('peewee') class test_database(object): def __init__(self, db, models, create_tables=True, drop_tables=True, fail_silently=False): if not isinstance(models, (list, tuple, set)): raise ValueError('%r must be a list or tuple.' 
% models) self.db = db self.models = models self.create_tables = create_tables self.drop_tables = drop_tables self.fail_silently = fail_silently def __enter__(self): self.orig = [] for m in self.models: self.orig.append(m._meta.database) m._meta.database = self.db if self.create_tables: create_model_tables(self.models, fail_silently=self.fail_silently) def __exit__(self, exc_type, exc_val, exc_tb): if self.create_tables and self.drop_tables: drop_model_tables(self.models, fail_silently=self.fail_silently) for i, m in enumerate(self.models): m._meta.database = self.orig[i] class _QueryLogHandler(logging.Handler): def __init__(self, *args, **kwargs): self.queries = [] logging.Handler.__init__(self, *args, **kwargs) def emit(self, record): self.queries.append(record) class count_queries(object): def __init__(self, only_select=False): self.only_select = only_select self.count = 0 def get_queries(self): return self._handler.queries def __enter__(self): self._handler = _QueryLogHandler() logger.setLevel(logging.DEBUG) logger.addHandler(self._handler) return self def __exit__(self, exc_type, exc_val, exc_tb): logger.removeHandler(self._handler) if self.only_select: self.count = len([q for q in self._handler.queries if q.msg[0].startswith('SELECT ')]) else: self.count = len(self._handler.queries) class assert_query_count(count_queries): def __init__(self, expected, only_select=False): super(assert_query_count, self).__init__(only_select=only_select) self.expected = expected def __call__(self, f): @wraps(f) def decorated(*args, **kwds): with self: ret = f(*args, **kwds) self._assert_count() return ret return decorated def _assert_count(self): error_msg = '%s != %s' % (self.count, self.expected) assert self.count == self.expected, error_msg def __exit__(self, exc_type, exc_val, exc_tb): super(assert_query_count, self).__exit__(exc_type, exc_val, exc_tb) self._assert_count() peewee-2.10.2/playhouse/tests/000077500000000000000000000000001316645060400162265ustar00rootroot00000000000000peewee-2.10.2/playhouse/tests/README000066400000000000000000000006721316645060400171130ustar00rootroot00000000000000Run peewee tests: $ python runtests.py (-e sqlite, -e postgres, -e mysql) Run playhouse tests: $ python runtests.py -x (-e sqlite, -e postgres, -e mysql) Run the entire suite, peewee and playhouse: $ python runtests.py -a (-e sqlite, -e postgres, -e mysql) Run a specific TestCase: $ python tests.py TestCompoundSelectSQL Run a specific test method: $ python tests.py TestCompoundSelectSQL.test_simple_same_model peewee-2.10.2/playhouse/tests/__init__.py000066400000000000000000000000001316645060400203250ustar00rootroot00000000000000peewee-2.10.2/playhouse/tests/base.py000066400000000000000000000242571316645060400175240ustar00rootroot00000000000000import logging import os import sys from contextlib import contextmanager from functools import wraps from unittest import TestCase from peewee import * from peewee import AliasMap from peewee import logger from peewee import print_ from peewee import QueryCompiler from peewee import SelectQuery try: from unittest import mock except ImportError: from playhouse.tests.libs import mock # Register psycopg2 compatibility hooks. try: from pyscopg2cffi import compat compat.register() except ImportError: pass # Python 2/3 compatibility. 
if sys.version_info[0] < 3: import codecs ulit = lambda s: codecs.unicode_escape_decode(s)[0] binary_construct = buffer binary_types = buffer else: ulit = lambda s: s binary_construct = lambda s: bytes(s.encode('raw_unicode_escape')) binary_types = (bytes, memoryview) TEST_BACKEND = os.environ.get('PEEWEE_TEST_BACKEND') or 'sqlite' TEST_DATABASE = os.environ.get('PEEWEE_TEST_DATABASE') or 'peewee_test' TEST_VERBOSITY = int(os.environ.get('PEEWEE_TEST_VERBOSITY') or 1) if TEST_VERBOSITY > 1: handler = logging.StreamHandler() handler.setLevel(logging.ERROR) logger.addHandler(handler) class TestPostgresqlDatabase(PostgresqlDatabase): insert_returning = False class DatabaseInitializer(object): def __init__(self, backend, database_name): self.backend = self.normalize(backend) self.database_name = database_name def normalize(self, backend): backend = backend.lower().strip() mapping = { 'postgres': ('postgresql', 'pg', 'psycopg2'), 'sqlite': ('sqlite3', 'pysqlite'), 'berkeleydb': ('bdb', 'berkeley'), } for key, alias_list in mapping.items(): for db_alias in alias_list: if backend == db_alias: return key return backend def get_database_class(self, backend=None): mapping = { 'postgres': TestPostgresqlDatabase, 'sqlite': SqliteDatabase, 'mysql': MySQLDatabase, } try: from playhouse.apsw_ext import APSWDatabase except ImportError: pass else: mapping['apsw'] = APSWDatabase try: from playhouse.berkeleydb import BerkeleyDatabase except ImportError: pass else: mapping['berkeleydb'] = BerkeleyDatabase try: from playhouse.sqlcipher_ext import SqlCipherDatabase except ImportError: pass else: mapping['sqlcipher'] = SqlCipherDatabase try: from playhouse.sqlcipher_ext import SqlCipherExtDatabase except ImportError: pass else: mapping['sqlcipher_ext'] = SqlCipherExtDatabase backend = backend or self.backend try: return mapping[backend] except KeyError: print_('Unrecognized database: "%s".' % backend) print_('Available choices:\n%s' % '\n'.join( sorted(mapping.keys()))) raise def get_database(self, backend=None, db_class=None, **kwargs): backend = backend or self.backend method = 'get_%s_database' % backend kwargs.setdefault('use_speedups', False) if db_class is None: db_class = self.get_database_class(backend) if not hasattr(self, method): return db_class(self.database_name, **kwargs) else: return getattr(self, method)(db_class, **kwargs) def get_filename(self, extension): return os.path.join('/tmp', '%s%s' % (self.database_name, extension)) def get_apsw_database(self, db_class, **kwargs): return db_class(self.get_filename('.db'), timeout=1000, **kwargs) def get_berkeleydb_database(self, db_class, **kwargs): return db_class(self.get_filename('.bdb.db'), timeout=1000, **kwargs) def get_sqlcipher_database(self, db_class, **kwargs): passphrase = kwargs.pop('passphrase', 'snakeoilpassphrase') return db_class( self.get_filename('.cipher.db'), passphrase=passphrase, **kwargs) def get_sqlite_database(self, db_class, **kwargs): return db_class(self.get_filename('.db'), **kwargs) def get_in_memory_database(self, db_class=None, **kwargs): kwargs.setdefault('use_speedups', False) db_class = db_class or SqliteDatabase return db_class(':memory:', **kwargs) class TestAliasMap(AliasMap): def add(self, obj, alias=None): if isinstance(obj, SelectQuery): self._alias_map[obj] = obj._alias else: self._alias_map[obj] = obj._meta.db_table class TestQueryCompiler(QueryCompiler): alias_map_class = TestAliasMap class TestDatabase(SqliteDatabase): compiler_class = TestQueryCompiler field_overrides = {} interpolation = '?' 
op_overrides = {} quote_char = '"' def execute_sql(self, sql, params=None, require_commit=True): try: return super(TestDatabase, self).execute_sql( sql, params, require_commit) except Exception as exc: self.last_error = (sql, params) raise class QueryLogHandler(logging.Handler): def __init__(self, *args, **kwargs): self.queries = [] logging.Handler.__init__(self, *args, **kwargs) def emit(self, record): self.queries.append(record) database_initializer = DatabaseInitializer(TEST_BACKEND, TEST_DATABASE) database_class = database_initializer.get_database_class() test_db = database_initializer.get_database() query_db = TestDatabase(':memory:') compiler = query_db.compiler() normal_compiler = QueryCompiler('"', '?', {}, {}) class TestModel(Model): class Meta: database = test_db class PeeweeTestCase(TestCase): def setUp(self): self.qh = QueryLogHandler() logger.setLevel(logging.DEBUG) logger.addHandler(self.qh) def tearDown(self): logger.removeHandler(self.qh) def assertIsNone(self, value): self.assertTrue(value is None, '%r is not None' % value) def assertIsNotNone(self, value): self.assertFalse(value is None) @contextmanager def assertRaisesCtx(self, exc_class): try: yield except exc_class: return else: raise AssertionError('Exception %s not raised.' % exc_class) def queries(self, ignore_txn=False): queries = [x.msg for x in self.qh.queries] if ignore_txn: skips = ('BEGIN', 'COMMIT', 'ROLLBACK', 'SAVEPOINT', 'RELEASE') queries = [q for q in queries if not q[0].startswith(skips)] return queries @contextmanager def assertQueryCount(self, num, ignore_txn=False): qc = len(self.queries(ignore_txn=ignore_txn)) yield self.assertEqual(len(self.queries(ignore_txn=ignore_txn)) - qc, num) def log_queries(self): return QueryLogger(self) def parse_node(self, query, expr_list, compiler=compiler): am = compiler.calculate_alias_map(query) return compiler.parse_node_list(expr_list, am) def parse_query(self, query, node, compiler=compiler): am = compiler.calculate_alias_map(query) if node is not None: return compiler.parse_node(node, am) return '', [] def make_fn(fn_name, attr_name): def inner(self, query, expected, expected_params, compiler=compiler): fn = getattr(self, fn_name) att = getattr(query, attr_name) sql, params = fn(query, att, compiler=compiler) self.assertEqual(sql, expected) self.assertEqual(params, expected_params) return inner assertSelect = make_fn('parse_node', '_select') assertWhere = make_fn('parse_query', '_where') assertGroupBy = make_fn('parse_node', '_group_by') assertHaving = make_fn('parse_query', '_having') assertOrderBy = make_fn('parse_node', '_order_by') def assertJoins(self, sq, exp_joins, compiler=compiler): am = compiler.calculate_alias_map(sq) clauses = compiler.generate_joins(sq._joins, sq.model_class, am) joins = [compiler.parse_node(clause, am)[0] for clause in clauses] self.assertEqual(sorted(joins), sorted(exp_joins)) def new_connection(self): return database_initializer.get_database() class ModelTestCase(PeeweeTestCase): requires = None def setUp(self): super(ModelTestCase, self).setUp() if self.requires: test_db.drop_tables(self.requires, True) test_db.create_tables(self.requires) def tearDown(self): super(ModelTestCase, self).tearDown() if self.requires: test_db.drop_tables(self.requires, True) # TestCase class decorators that allow skipping entire test-cases. def skip_if(expression): def decorator(klass): if expression(): if TEST_VERBOSITY > 0: print_('Skipping %s tests.' 
% klass.__name__) class Dummy(object): pass return Dummy return klass return decorator def skip_unless(expression): return skip_if(lambda: not expression()) # TestCase method decorators that allow skipping single test methods. def skip_test_if(expression): def decorator(fn): @wraps(fn) def inner(*args, **kwargs): if expression(): if TEST_VERBOSITY > 1: print_('Skipping %s test.' % fn.__name__) else: return fn(*args, **kwargs) return inner return decorator def skip_test_unless(expression): return skip_test_if(lambda: not expression()) def log_console(s): if TEST_VERBOSITY > 1: print_(s) class QueryLogger(object): def __init__(self, test_case): self.test_case = test_case self.queries = [] def __enter__(self): self._initial_query_count = len(self.test_case.queries()) return self def __exit__(self, exc_type, exc_val, exc_tb): all_queries = self.test_case.queries() self._final_query_count = len(all_queries) self.queries = all_queries[ self._initial_query_count:self._final_query_count] peewee-2.10.2/playhouse/tests/libs/000077500000000000000000000000001316645060400171575ustar00rootroot00000000000000peewee-2.10.2/playhouse/tests/libs/__init__.py000066400000000000000000000000001316645060400212560ustar00rootroot00000000000000peewee-2.10.2/playhouse/tests/libs/mock.py000066400000000000000000002234071316645060400204720ustar00rootroot00000000000000# mock.py # Test tools for mocking and patching. # Copyright (C) 2007-2012 Michael Foord & the mock team # E-mail: fuzzyman AT voidspace DOT org DOT uk # mock 1.0 # http://www.voidspace.org.uk/python/mock/ # Released subject to the BSD License # Please see http://www.voidspace.org.uk/python/license.shtml # Scripts maintained at http://www.voidspace.org.uk/python/index.shtml # Comments, suggestions and bug reports welcome. 
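# Hedged usage sketch for this vendored copy of mock (bundled for the test
# suite): a minimal example of a mock with a canned return value, and of
# patching a real function. `os.getcwd` is just a convenient target:
#
#     from playhouse.tests.libs.mock import MagicMock, patch
#
#     m = MagicMock(return_value=42)
#     assert m('x', key='y') == 42
#     m.assert_called_once_with('x', key='y')
#
#     import os
#     with patch('os.getcwd', return_value='/tmp') as mocked:
#         assert os.getcwd() == '/tmp'
#     mocked.assert_called_once_with()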
__all__ = ( 'Mock', 'MagicMock', 'patch', 'sentinel', 'DEFAULT', 'ANY', 'call', 'create_autospec', 'FILTER_DIR', 'NonCallableMock', 'NonCallableMagicMock', 'mock_open', 'PropertyMock', ) __version__ = '1.0.1' import pprint import sys try: import inspect except ImportError: # for alternative platforms that # may not have inspect inspect = None try: from functools import wraps as original_wraps except ImportError: # Python 2.4 compatibility def wraps(original): def inner(f): f.__name__ = original.__name__ f.__doc__ = original.__doc__ f.__module__ = original.__module__ f.__wrapped__ = original return f return inner else: if sys.version_info[:2] >= (3, 3): wraps = original_wraps else: def wraps(func): def inner(f): f = original_wraps(func)(f) f.__wrapped__ = func return f return inner try: unicode except NameError: # Python 3 basestring = unicode = str try: long except NameError: # Python 3 long = int try: BaseException except NameError: # Python 2.4 compatibility BaseException = Exception try: next except NameError: def next(obj): return obj.next() BaseExceptions = (BaseException,) if 'java' in sys.platform: # jython import java BaseExceptions = (BaseException, java.lang.Throwable) try: _isidentifier = str.isidentifier except AttributeError: # Python 2.X import keyword import re regex = re.compile(r'^[a-z_][a-z0-9_]*$', re.I) def _isidentifier(string): if string in keyword.kwlist: return False return regex.match(string) inPy3k = sys.version_info[0] == 3 # Needed to work around Python 3 bug where use of "super" interferes with # defining __class__ as a descriptor _super = super self = 'im_self' builtin = '__builtin__' if inPy3k: self = '__self__' builtin = 'builtins' FILTER_DIR = True def _is_instance_mock(obj): # can't use isinstance on Mock objects because they override __class__ # The base class for all mocks is NonCallableMock return issubclass(type(obj), NonCallableMock) def _is_exception(obj): return ( isinstance(obj, BaseExceptions) or isinstance(obj, ClassTypes) and issubclass(obj, BaseExceptions) ) class _slotted(object): __slots__ = ['a'] DescriptorTypes = ( type(_slotted.a), property, ) def _getsignature(func, skipfirst, instance=False): if inspect is None: raise ImportError('inspect module not available') if isinstance(func, ClassTypes) and not instance: try: func = func.__init__ except AttributeError: return skipfirst = True elif not isinstance(func, FunctionTypes): # for classes where instance is True we end up here too try: func = func.__call__ except AttributeError: return if inPy3k: try: argspec = inspect.getfullargspec(func) except TypeError: # C function / method, possibly inherited object().__init__ return regargs, varargs, varkw, defaults, kwonly, kwonlydef, ann = argspec else: try: regargs, varargs, varkwargs, defaults = inspect.getargspec(func) except TypeError: # C function / method, possibly inherited object().__init__ return # instance methods and classmethods need to lose the self argument if getattr(func, self, None) is not None: regargs = regargs[1:] if skipfirst: # this condition and the above one are never both True - why? 
regargs = regargs[1:] if inPy3k: signature = inspect.formatargspec( regargs, varargs, varkw, defaults, kwonly, kwonlydef, ann, formatvalue=lambda value: "") else: signature = inspect.formatargspec( regargs, varargs, varkwargs, defaults, formatvalue=lambda value: "") return signature[1:-1], func def _check_signature(func, mock, skipfirst, instance=False): if not _callable(func): return result = _getsignature(func, skipfirst, instance) if result is None: return signature, func = result # can't use self because "self" is common as an argument name # unfortunately even not in the first place src = "lambda _mock_self, %s: None" % signature checksig = eval(src, {}) _copy_func_details(func, checksig) type(mock)._mock_check_sig = checksig def _copy_func_details(func, funcopy): funcopy.__name__ = func.__name__ funcopy.__doc__ = func.__doc__ #funcopy.__dict__.update(func.__dict__) funcopy.__module__ = func.__module__ if not inPy3k: funcopy.func_defaults = func.func_defaults return funcopy.__defaults__ = func.__defaults__ funcopy.__kwdefaults__ = func.__kwdefaults__ def _callable(obj): if isinstance(obj, ClassTypes): return True if getattr(obj, '__call__', None) is not None: return True return False def _is_list(obj): # checks for list or tuples # XXXX badly named! return type(obj) in (list, tuple) def _instance_callable(obj): """Given an object, return True if the object is callable. For classes, return True if instances would be callable.""" if not isinstance(obj, ClassTypes): # already an instance return getattr(obj, '__call__', None) is not None klass = obj # uses __bases__ instead of __mro__ so that we work with old style classes if klass.__dict__.get('__call__') is not None: return True for base in klass.__bases__: if _instance_callable(base): return True return False def _set_signature(mock, original, instance=False): # creates a function with signature (*args, **kwargs) that delegates to a # mock. It still does signature checking by calling a lambda with the same # signature as the original. if not _callable(original): return skipfirst = isinstance(original, ClassTypes) result = _getsignature(original, skipfirst, instance) if result is None: # was a C function (e.g. 
object().__init__ ) that can't be mocked return signature, func = result src = "lambda %s: None" % signature checksig = eval(src, {}) _copy_func_details(func, checksig) name = original.__name__ if not _isidentifier(name): name = 'funcopy' context = {'_checksig_': checksig, 'mock': mock} src = """def %s(*args, **kwargs): _checksig_(*args, **kwargs) return mock(*args, **kwargs)""" % name exec (src, context) funcopy = context[name] _setup_func(funcopy, mock) return funcopy def _setup_func(funcopy, mock): funcopy.mock = mock # can't use isinstance with mocks if not _is_instance_mock(mock): return def assert_called_with(*args, **kwargs): return mock.assert_called_with(*args, **kwargs) def assert_called_once_with(*args, **kwargs): return mock.assert_called_once_with(*args, **kwargs) def assert_has_calls(*args, **kwargs): return mock.assert_has_calls(*args, **kwargs) def assert_any_call(*args, **kwargs): return mock.assert_any_call(*args, **kwargs) def reset_mock(): funcopy.method_calls = _CallList() funcopy.mock_calls = _CallList() mock.reset_mock() ret = funcopy.return_value if _is_instance_mock(ret) and not ret is mock: ret.reset_mock() funcopy.called = False funcopy.call_count = 0 funcopy.call_args = None funcopy.call_args_list = _CallList() funcopy.method_calls = _CallList() funcopy.mock_calls = _CallList() funcopy.return_value = mock.return_value funcopy.side_effect = mock.side_effect funcopy._mock_children = mock._mock_children funcopy.assert_called_with = assert_called_with funcopy.assert_called_once_with = assert_called_once_with funcopy.assert_has_calls = assert_has_calls funcopy.assert_any_call = assert_any_call funcopy.reset_mock = reset_mock mock._mock_delegate = funcopy def _is_magic(name): return '__%s__' % name[2:-2] == name class _SentinelObject(object): "A unique, named, sentinel object." 
def __init__(self, name): self.name = name def __repr__(self): return 'sentinel.%s' % self.name class _Sentinel(object): """Access attributes to return a named object, usable as a sentinel.""" def __init__(self): self._sentinels = {} def __getattr__(self, name): if name == '__bases__': # Without this help(mock) raises an exception raise AttributeError return self._sentinels.setdefault(name, _SentinelObject(name)) sentinel = _Sentinel() DEFAULT = sentinel.DEFAULT _missing = sentinel.MISSING _deleted = sentinel.DELETED class OldStyleClass: pass ClassType = type(OldStyleClass) def _copy(value): if type(value) in (dict, list, tuple, set): return type(value)(value) return value ClassTypes = (type,) if not inPy3k: ClassTypes = (type, ClassType) _allowed_names = set( [ 'return_value', '_mock_return_value', 'side_effect', '_mock_side_effect', '_mock_parent', '_mock_new_parent', '_mock_name', '_mock_new_name' ] ) def _delegating_property(name): _allowed_names.add(name) _the_name = '_mock_' + name def _get(self, name=name, _the_name=_the_name): sig = self._mock_delegate if sig is None: return getattr(self, _the_name) return getattr(sig, name) def _set(self, value, name=name, _the_name=_the_name): sig = self._mock_delegate if sig is None: self.__dict__[_the_name] = value else: setattr(sig, name, value) return property(_get, _set) class _CallList(list): def __contains__(self, value): if not isinstance(value, list): return list.__contains__(self, value) len_value = len(value) len_self = len(self) if len_value > len_self: return False for i in range(0, len_self - len_value + 1): sub_list = self[i:i+len_value] if sub_list == value: return True return False def __repr__(self): return pprint.pformat(list(self)) def _check_and_set_parent(parent, value, name, new_name): if not _is_instance_mock(value): return False if ((value._mock_name or value._mock_new_name) or (value._mock_parent is not None) or (value._mock_new_parent is not None)): return False _parent = parent while _parent is not None: # setting a mock (value) as a child or return value of itself # should not modify the mock if _parent is value: return False _parent = _parent._mock_new_parent if new_name: value._mock_new_parent = parent value._mock_new_name = new_name if name: value._mock_parent = parent value._mock_name = name return True class Base(object): _mock_return_value = DEFAULT _mock_side_effect = None def __init__(self, *args, **kwargs): pass class NonCallableMock(Base): """A non-callable version of `Mock`""" def __new__(cls, *args, **kw): # every instance has its own class # so we can create magic methods on the # class without stomping on other mocks new = type(cls.__name__, (cls,), {'__doc__': cls.__doc__}) instance = object.__new__(new) return instance def __init__( self, spec=None, wraps=None, name=None, spec_set=None, parent=None, _spec_state=None, _new_name='', _new_parent=None, **kwargs ): if _new_parent is None: _new_parent = parent __dict__ = self.__dict__ __dict__['_mock_parent'] = parent __dict__['_mock_name'] = name __dict__['_mock_new_name'] = _new_name __dict__['_mock_new_parent'] = _new_parent if spec_set is not None: spec = spec_set spec_set = True self._mock_add_spec(spec, spec_set) __dict__['_mock_children'] = {} __dict__['_mock_wraps'] = wraps __dict__['_mock_delegate'] = None __dict__['_mock_called'] = False __dict__['_mock_call_args'] = None __dict__['_mock_call_count'] = 0 __dict__['_mock_call_args_list'] = _CallList() __dict__['_mock_mock_calls'] = _CallList() __dict__['method_calls'] = _CallList() if kwargs: 
self.configure_mock(**kwargs) _super(NonCallableMock, self).__init__( spec, wraps, name, spec_set, parent, _spec_state ) def attach_mock(self, mock, attribute): """ Attach a mock as an attribute of this one, replacing its name and parent. Calls to the attached mock will be recorded in the `method_calls` and `mock_calls` attributes of this one.""" mock._mock_parent = None mock._mock_new_parent = None mock._mock_name = '' mock._mock_new_name = None setattr(self, attribute, mock) def mock_add_spec(self, spec, spec_set=False): """Add a spec to a mock. `spec` can either be an object or a list of strings. Only attributes on the `spec` can be fetched as attributes from the mock. If `spec_set` is True then only attributes on the spec can be set.""" self._mock_add_spec(spec, spec_set) def _mock_add_spec(self, spec, spec_set): _spec_class = None if spec is not None and not _is_list(spec): if isinstance(spec, ClassTypes): _spec_class = spec else: _spec_class = _get_class(spec) spec = dir(spec) __dict__ = self.__dict__ __dict__['_spec_class'] = _spec_class __dict__['_spec_set'] = spec_set __dict__['_mock_methods'] = spec def __get_return_value(self): ret = self._mock_return_value if self._mock_delegate is not None: ret = self._mock_delegate.return_value if ret is DEFAULT: ret = self._get_child_mock( _new_parent=self, _new_name='()' ) self.return_value = ret return ret def __set_return_value(self, value): if self._mock_delegate is not None: self._mock_delegate.return_value = value else: self._mock_return_value = value _check_and_set_parent(self, value, None, '()') __return_value_doc = "The value to be returned when the mock is called." return_value = property(__get_return_value, __set_return_value, __return_value_doc) @property def __class__(self): if self._spec_class is None: return type(self) return self._spec_class called = _delegating_property('called') call_count = _delegating_property('call_count') call_args = _delegating_property('call_args') call_args_list = _delegating_property('call_args_list') mock_calls = _delegating_property('mock_calls') def __get_side_effect(self): sig = self._mock_delegate if sig is None: return self._mock_side_effect return sig.side_effect def __set_side_effect(self, value): value = _try_iter(value) sig = self._mock_delegate if sig is None: self._mock_side_effect = value else: sig.side_effect = value side_effect = property(__get_side_effect, __set_side_effect) def reset_mock(self): "Restore the mock object to its initial state." self.called = False self.call_args = None self.call_count = 0 self.mock_calls = _CallList() self.call_args_list = _CallList() self.method_calls = _CallList() for child in self._mock_children.values(): if isinstance(child, _SpecState): continue child.reset_mock() ret = self._mock_return_value if _is_instance_mock(ret) and ret is not self: ret.reset_mock() def configure_mock(self, **kwargs): """Set attributes on the mock through keyword arguments. 
Attributes plus return values and side effects can be set on child mocks using standard dot notation and unpacking a dictionary in the method call: >>> attrs = {'method.return_value': 3, 'other.side_effect': KeyError} >>> mock.configure_mock(**attrs)""" for arg, val in sorted(kwargs.items(), # we sort on the number of dots so that # attributes are set before we set attributes on # attributes key=lambda entry: entry[0].count('.')): args = arg.split('.') final = args.pop() obj = self for entry in args: obj = getattr(obj, entry) setattr(obj, final, val) def __getattr__(self, name): if name == '_mock_methods': raise AttributeError(name) elif self._mock_methods is not None: if name not in self._mock_methods or name in _all_magics: raise AttributeError("Mock object has no attribute %r" % name) elif _is_magic(name): raise AttributeError(name) result = self._mock_children.get(name) if result is _deleted: raise AttributeError(name) elif result is None: wraps = None if self._mock_wraps is not None: # XXXX should we get the attribute without triggering code # execution? wraps = getattr(self._mock_wraps, name) result = self._get_child_mock( parent=self, name=name, wraps=wraps, _new_name=name, _new_parent=self ) self._mock_children[name] = result elif isinstance(result, _SpecState): result = create_autospec( result.spec, result.spec_set, result.instance, result.parent, result.name ) self._mock_children[name] = result return result def __repr__(self): _name_list = [self._mock_new_name] _parent = self._mock_new_parent last = self dot = '.' if _name_list == ['()']: dot = '' seen = set() while _parent is not None: last = _parent _name_list.append(_parent._mock_new_name + dot) dot = '.' if _parent._mock_new_name == '()': dot = '' _parent = _parent._mock_new_parent # use ids here so as not to call __hash__ on the mocks if id(_parent) in seen: break seen.add(id(_parent)) _name_list = list(reversed(_name_list)) _first = last._mock_name or 'mock' if len(_name_list) > 1: if _name_list[1] not in ('()', '().'): _first += '.' _name_list[0] = _first name = ''.join(_name_list) name_string = '' if name not in ('mock', 'mock.'): name_string = ' name=%r' % name spec_string = '' if self._spec_class is not None: spec_string = ' spec=%r' if self._spec_set: spec_string = ' spec_set=%r' spec_string = spec_string % self._spec_class.__name__ return "<%s%s%s id='%s'>" % ( type(self).__name__, name_string, spec_string, id(self) ) def __dir__(self): """Filter the output of `dir(mock)` to only useful members. XXXX """ extras = self._mock_methods or [] from_type = dir(type(self)) from_dict = list(self.__dict__) if FILTER_DIR: from_type = [e for e in from_type if not e.startswith('_')] from_dict = [e for e in from_dict if not e.startswith('_') or _is_magic(e)] return sorted(set(extras + from_type + from_dict + list(self._mock_children))) def __setattr__(self, name, value): if name in _allowed_names: # property setters go through here return object.__setattr__(self, name, value) elif (self._spec_set and self._mock_methods is not None and name not in self._mock_methods and name not in self.__dict__): raise AttributeError("Mock object has no attribute '%s'" % name) elif name in _unsupported_magics: msg = 'Attempting to set unsupported magic method %r.' 
% name raise AttributeError(msg) elif name in _all_magics: if self._mock_methods is not None and name not in self._mock_methods: raise AttributeError("Mock object has no attribute '%s'" % name) if not _is_instance_mock(value): setattr(type(self), name, _get_method(name, value)) original = value value = lambda *args, **kw: original(self, *args, **kw) else: # only set _new_name and not name so that mock_calls is tracked # but not method calls _check_and_set_parent(self, value, None, name) setattr(type(self), name, value) self._mock_children[name] = value elif name == '__class__': self._spec_class = value return else: if _check_and_set_parent(self, value, name, name): self._mock_children[name] = value return object.__setattr__(self, name, value) def __delattr__(self, name): if name in _all_magics and name in type(self).__dict__: delattr(type(self), name) if name not in self.__dict__: # for magic methods that are still MagicProxy objects and # not set on the instance itself return if name in self.__dict__: object.__delattr__(self, name) obj = self._mock_children.get(name, _missing) if obj is _deleted: raise AttributeError(name) if obj is not _missing: del self._mock_children[name] self._mock_children[name] = _deleted def _format_mock_call_signature(self, args, kwargs): name = self._mock_name or 'mock' return _format_call_signature(name, args, kwargs) def _format_mock_failure_message(self, args, kwargs): message = 'Expected call: %s\nActual call: %s' expected_string = self._format_mock_call_signature(args, kwargs) call_args = self.call_args if len(call_args) == 3: call_args = call_args[1:] actual_string = self._format_mock_call_signature(*call_args) return message % (expected_string, actual_string) def assert_called_with(_mock_self, *args, **kwargs): """assert that the mock was called with the specified arguments. Raises an AssertionError if the args and keyword args passed in are different to the last call to the mock.""" self = _mock_self if self.call_args is None: expected = self._format_mock_call_signature(args, kwargs) raise AssertionError('Expected call: %s\nNot called' % (expected,)) if self.call_args != (args, kwargs): msg = self._format_mock_failure_message(args, kwargs) raise AssertionError(msg) def assert_called_once_with(_mock_self, *args, **kwargs): """assert that the mock was called exactly once and with the specified arguments.""" self = _mock_self if not self.call_count == 1: msg = ("Expected to be called once. Called %s times." % self.call_count) raise AssertionError(msg) return self.assert_called_with(*args, **kwargs) def assert_has_calls(self, calls, any_order=False): """assert the mock has been called with the specified calls. The `mock_calls` list is checked for the calls. If `any_order` is False (the default) then the calls must be sequential. There can be extra calls before or after the specified calls. If `any_order` is True then the calls can be in any order, but they must all appear in `mock_calls`.""" if not any_order: if calls not in self.mock_calls: raise AssertionError( 'Calls not found.\nExpected: %r\n' 'Actual: %r' % (calls, self.mock_calls) ) return all_calls = list(self.mock_calls) not_found = [] for kall in calls: try: all_calls.remove(kall) except ValueError: not_found.append(kall) if not_found: raise AssertionError( '%r not all found in call list' % (tuple(not_found),) ) def assert_any_call(self, *args, **kwargs): """assert the mock has been called with the specified arguments. 
The assert passes if the mock has *ever* been called, unlike `assert_called_with` and `assert_called_once_with` that only pass if the call is the most recent one.""" kall = call(*args, **kwargs) if kall not in self.call_args_list: expected_string = self._format_mock_call_signature(args, kwargs) raise AssertionError( '%s call not found' % expected_string ) def _get_child_mock(self, **kw): """Create the child mocks for attributes and return value. By default child mocks will be the same type as the parent. Subclasses of Mock may want to override this to customize the way child mocks are made. For non-callable mocks the callable variant will be used (rather than any custom subclass).""" _type = type(self) if not issubclass(_type, CallableMixin): if issubclass(_type, NonCallableMagicMock): klass = MagicMock elif issubclass(_type, NonCallableMock) : klass = Mock else: klass = _type.__mro__[1] return klass(**kw) def _try_iter(obj): if obj is None: return obj if _is_exception(obj): return obj if _callable(obj): return obj try: return iter(obj) except TypeError: # XXXX backwards compatibility # but this will blow up on first call - so maybe we should fail early? return obj class CallableMixin(Base): def __init__(self, spec=None, side_effect=None, return_value=DEFAULT, wraps=None, name=None, spec_set=None, parent=None, _spec_state=None, _new_name='', _new_parent=None, **kwargs): self.__dict__['_mock_return_value'] = return_value _super(CallableMixin, self).__init__( spec, wraps, name, spec_set, parent, _spec_state, _new_name, _new_parent, **kwargs ) self.side_effect = side_effect def _mock_check_sig(self, *args, **kwargs): # stub method that can be replaced with one with a specific signature pass def __call__(_mock_self, *args, **kwargs): # can't use self in-case a function / method we are mocking uses self # in the signature _mock_self._mock_check_sig(*args, **kwargs) return _mock_self._mock_call(*args, **kwargs) def _mock_call(_mock_self, *args, **kwargs): self = _mock_self self.called = True self.call_count += 1 self.call_args = _Call((args, kwargs), two=True) self.call_args_list.append(_Call((args, kwargs), two=True)) _new_name = self._mock_new_name _new_parent = self._mock_new_parent self.mock_calls.append(_Call(('', args, kwargs))) seen = set() skip_next_dot = _new_name == '()' do_method_calls = self._mock_parent is not None name = self._mock_name while _new_parent is not None: this_mock_call = _Call((_new_name, args, kwargs)) if _new_parent._mock_new_name: dot = '.' if skip_next_dot: dot = '' skip_next_dot = False if _new_parent._mock_new_name == '()': skip_next_dot = True _new_name = _new_parent._mock_new_name + dot + _new_name if do_method_calls: if _new_name == name: this_method_call = this_mock_call else: this_method_call = _Call((name, args, kwargs)) _new_parent.method_calls.append(this_method_call) do_method_calls = _new_parent._mock_parent is not None if do_method_calls: name = _new_parent._mock_name + '.' 
+ name _new_parent.mock_calls.append(this_mock_call) _new_parent = _new_parent._mock_new_parent # use ids here so as not to call __hash__ on the mocks _new_parent_id = id(_new_parent) if _new_parent_id in seen: break seen.add(_new_parent_id) ret_val = DEFAULT effect = self.side_effect if effect is not None: if _is_exception(effect): raise effect if not _callable(effect): result = next(effect) if _is_exception(result): raise result return result ret_val = effect(*args, **kwargs) if ret_val is DEFAULT: ret_val = self.return_value if (self._mock_wraps is not None and self._mock_return_value is DEFAULT): return self._mock_wraps(*args, **kwargs) if ret_val is DEFAULT: ret_val = self.return_value return ret_val class Mock(CallableMixin, NonCallableMock): """ Create a new `Mock` object. `Mock` takes several optional arguments that specify the behaviour of the Mock object: * `spec`: This can be either a list of strings or an existing object (a class or instance) that acts as the specification for the mock object. If you pass in an object then a list of strings is formed by calling dir on the object (excluding unsupported magic attributes and methods). Accessing any attribute not in this list will raise an `AttributeError`. If `spec` is an object (rather than a list of strings) then `mock.__class__` returns the class of the spec object. This allows mocks to pass `isinstance` tests. * `spec_set`: A stricter variant of `spec`. If used, attempting to *set* or get an attribute on the mock that isn't on the object passed as `spec_set` will raise an `AttributeError`. * `side_effect`: A function to be called whenever the Mock is called. See the `side_effect` attribute. Useful for raising exceptions or dynamically changing return values. The function is called with the same arguments as the mock, and unless it returns `DEFAULT`, the return value of this function is used as the return value. Alternatively `side_effect` can be an exception class or instance. In this case the exception will be raised when the mock is called. If `side_effect` is an iterable then each call to the mock will return the next value from the iterable. If any of the members of the iterable are exceptions they will be raised instead of returned. * `return_value`: The value returned when the mock is called. By default this is a new Mock (created on first access). See the `return_value` attribute. * `wraps`: Item for the mock object to wrap. If `wraps` is not None then calling the Mock will pass the call through to the wrapped object (returning the real result). Attribute access on the mock will return a Mock object that wraps the corresponding attribute of the wrapped object (so attempting to access an attribute that doesn't exist will raise an `AttributeError`). If the mock has an explicit `return_value` set then calls are not passed to the wrapped object and the `return_value` is returned instead. * `name`: If the mock has a name then it will be used in the repr of the mock. This can be useful for debugging. The name is propagated to child mocks. Mocks can also be called with arbitrary keyword arguments. These will be used to set attributes on the mock after it is created. 
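    For example (an illustrative sketch; the names here are arbitrary):

        >>> m = Mock(name='greeter', return_value=3)
        >>> m(1, key='value')
        3
        >>> m.assert_called_once_with(1, key='value')
        >>> m.child.method()  # attribute access creates child mocks lazily
        <Mock name='greeter.child.method()' id='...'>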
""" def _dot_lookup(thing, comp, import_path): try: return getattr(thing, comp) except AttributeError: __import__(import_path) return getattr(thing, comp) def _importer(target): components = target.split('.') import_path = components.pop(0) thing = __import__(import_path) for comp in components: import_path += ".%s" % comp thing = _dot_lookup(thing, comp, import_path) return thing def _is_started(patcher): # XXXX horrible return hasattr(patcher, 'is_local') class _patch(object): attribute_name = None _active_patches = set() def __init__( self, getter, attribute, new, spec, create, spec_set, autospec, new_callable, kwargs ): if new_callable is not None: if new is not DEFAULT: raise ValueError( "Cannot use 'new' and 'new_callable' together" ) if autospec is not None: raise ValueError( "Cannot use 'autospec' and 'new_callable' together" ) self.getter = getter self.attribute = attribute self.new = new self.new_callable = new_callable self.spec = spec self.create = create self.has_local = False self.spec_set = spec_set self.autospec = autospec self.kwargs = kwargs self.additional_patchers = [] def copy(self): patcher = _patch( self.getter, self.attribute, self.new, self.spec, self.create, self.spec_set, self.autospec, self.new_callable, self.kwargs ) patcher.attribute_name = self.attribute_name patcher.additional_patchers = [ p.copy() for p in self.additional_patchers ] return patcher def __call__(self, func): if isinstance(func, ClassTypes): return self.decorate_class(func) return self.decorate_callable(func) def decorate_class(self, klass): for attr in dir(klass): if not attr.startswith(patch.TEST_PREFIX): continue attr_value = getattr(klass, attr) if not hasattr(attr_value, "__call__"): continue patcher = self.copy() setattr(klass, attr, patcher(attr_value)) return klass def decorate_callable(self, func): if hasattr(func, 'patchings'): func.patchings.append(self) return func @wraps(func) def patched(*args, **keywargs): # don't use a with here (backwards compatability with Python 2.4) extra_args = [] entered_patchers = [] # can't use try...except...finally because of Python 2.4 # compatibility exc_info = tuple() try: try: for patching in patched.patchings: arg = patching.__enter__() entered_patchers.append(patching) if patching.attribute_name is not None: keywargs.update(arg) elif patching.new is DEFAULT: extra_args.append(arg) args += tuple(extra_args) return func(*args, **keywargs) except: if (patching not in entered_patchers and _is_started(patching)): # the patcher may have been started, but an exception # raised whilst entering one of its additional_patchers entered_patchers.append(patching) # Pass the exception to __exit__ exc_info = sys.exc_info() # re-raise the exception raise finally: for patching in reversed(entered_patchers): patching.__exit__(*exc_info) patched.patchings = [self] if hasattr(func, 'func_code'): # not in Python 3 patched.compat_co_firstlineno = getattr( func, "compat_co_firstlineno", func.func_code.co_firstlineno ) return patched def get_original(self): target = self.getter() name = self.attribute original = DEFAULT local = False try: original = target.__dict__[name] except (AttributeError, KeyError): original = getattr(target, name, DEFAULT) else: local = True if not self.create and original is DEFAULT: raise AttributeError( "%s does not have the attribute %r" % (target, name) ) return original, local def __enter__(self): """Perform the patch.""" new, spec, spec_set = self.new, self.spec, self.spec_set autospec, kwargs = self.autospec, self.kwargs new_callable = 
self.new_callable self.target = self.getter() # normalise False to None if spec is False: spec = None if spec_set is False: spec_set = None if autospec is False: autospec = None if spec is not None and autospec is not None: raise TypeError("Can't specify spec and autospec") if ((spec is not None or autospec is not None) and spec_set not in (True, None)): raise TypeError("Can't provide explicit spec_set *and* spec or autospec") original, local = self.get_original() if new is DEFAULT and autospec is None: inherit = False if spec is True: # set spec to the object we are replacing spec = original if spec_set is True: spec_set = original spec = None elif spec is not None: if spec_set is True: spec_set = spec spec = None elif spec_set is True: spec_set = original if spec is not None or spec_set is not None: if original is DEFAULT: raise TypeError("Can't use 'spec' with create=True") if isinstance(original, ClassTypes): # If we're patching out a class and there is a spec inherit = True Klass = MagicMock _kwargs = {} if new_callable is not None: Klass = new_callable elif spec is not None or spec_set is not None: this_spec = spec if spec_set is not None: this_spec = spec_set if _is_list(this_spec): not_callable = '__call__' not in this_spec else: not_callable = not _callable(this_spec) if not_callable: Klass = NonCallableMagicMock if spec is not None: _kwargs['spec'] = spec if spec_set is not None: _kwargs['spec_set'] = spec_set # add a name to mocks if (isinstance(Klass, type) and issubclass(Klass, NonCallableMock) and self.attribute): _kwargs['name'] = self.attribute _kwargs.update(kwargs) new = Klass(**_kwargs) if inherit and _is_instance_mock(new): # we can only tell if the instance should be callable if the # spec is not a list this_spec = spec if spec_set is not None: this_spec = spec_set if (not _is_list(this_spec) and not _instance_callable(this_spec)): Klass = NonCallableMagicMock _kwargs.pop('name') new.return_value = Klass(_new_parent=new, _new_name='()', **_kwargs) elif autospec is not None: # spec is ignored, new *must* be default, spec_set is treated # as a boolean. Should we check spec is not None and that spec_set # is a bool? if new is not DEFAULT: raise TypeError( "autospec creates the mock for you. Can't specify " "autospec and new." 
) if original is DEFAULT: raise TypeError("Can't use 'autospec' with create=True") spec_set = bool(spec_set) if autospec is True: autospec = original new = create_autospec(autospec, spec_set=spec_set, _name=self.attribute, **kwargs) elif kwargs: # can't set keyword args when we aren't creating the mock # XXXX If new is a Mock we could call new.configure_mock(**kwargs) raise TypeError("Can't pass kwargs to a mock we aren't creating") new_attr = new self.temp_original = original self.is_local = local setattr(self.target, self.attribute, new_attr) if self.attribute_name is not None: extra_args = {} if self.new is DEFAULT: extra_args[self.attribute_name] = new for patching in self.additional_patchers: arg = patching.__enter__() if patching.new is DEFAULT: extra_args.update(arg) return extra_args return new def __exit__(self, *exc_info): """Undo the patch.""" if not _is_started(self): raise RuntimeError('stop called on unstarted patcher') if self.is_local and self.temp_original is not DEFAULT: setattr(self.target, self.attribute, self.temp_original) else: delattr(self.target, self.attribute) if not self.create and not hasattr(self.target, self.attribute): # needed for proxy objects like django settings setattr(self.target, self.attribute, self.temp_original) del self.temp_original del self.is_local del self.target for patcher in reversed(self.additional_patchers): if _is_started(patcher): patcher.__exit__(*exc_info) def start(self): """Activate a patch, returning any created mock.""" result = self.__enter__() self._active_patches.add(self) return result def stop(self): """Stop an active patch.""" self._active_patches.discard(self) return self.__exit__() def _get_target(target): try: target, attribute = target.rsplit('.', 1) except (TypeError, ValueError): raise TypeError("Need a valid target to patch. You supplied: %r" % (target,)) getter = lambda: _importer(target) return getter, attribute def _patch_object( target, attribute, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs ): """ patch.object(target, attribute, new=DEFAULT, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs) patch the named member (`attribute`) on an object (`target`) with a mock object. `patch.object` can be used as a decorator, class decorator or a context manager. Arguments `new`, `spec`, `create`, `spec_set`, `autospec` and `new_callable` have the same meaning as for `patch`. Like `patch`, `patch.object` takes arbitrary keyword arguments for configuring the mock object it creates. When used as a class decorator `patch.object` honours `patch.TEST_PREFIX` for choosing which methods to wrap. """ getter = lambda: target return _patch( getter, attribute, new, spec, create, spec_set, autospec, new_callable, kwargs ) def _patch_multiple(target, spec=None, create=False, spec_set=None, autospec=None, new_callable=None, **kwargs): """Perform multiple patches in a single call. It takes the object to be patched (either as an object or a string to fetch the object by importing) and keyword arguments for the patches:: with patch.multiple(settings, FIRST_PATCH='one', SECOND_PATCH='two'): ... Use `DEFAULT` as the value if you want `patch.multiple` to create mocks for you. In this case the created mocks are passed into a decorated function by keyword, and a dictionary is returned when `patch.multiple` is used as a context manager. `patch.multiple` can be used as a decorator, class decorator or a context manager. 
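    For example (an illustrative sketch; `package.module` is a placeholder
    target with existing attributes `thing` and `other`)::

        @patch.multiple('package.module', thing=DEFAULT, other=DEFAULT)
        def test_function(thing, other):
            assert isinstance(thing, MagicMock)
            assert isinstance(other, MagicMock)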
    The arguments `spec`, `spec_set`, `create`, `autospec` and
    `new_callable` have the same meaning as for `patch`. These arguments
    will be applied to *all* patches done by `patch.multiple`.

    When used as a class decorator `patch.multiple` honours
    `patch.TEST_PREFIX` for choosing which methods to wrap.
    """
    if type(target) in (unicode, str):
        getter = lambda: _importer(target)
    else:
        getter = lambda: target

    if not kwargs:
        raise ValueError(
            'Must supply at least one keyword argument with patch.multiple'
        )
    # need to wrap in a list for python 3, where items is a view
    items = list(kwargs.items())
    attribute, new = items[0]
    patcher = _patch(
        getter, attribute, new, spec, create, spec_set,
        autospec, new_callable, {}
    )
    patcher.attribute_name = attribute
    for attribute, new in items[1:]:
        this_patcher = _patch(
            getter, attribute, new, spec, create, spec_set,
            autospec, new_callable, {}
        )
        this_patcher.attribute_name = attribute
        patcher.additional_patchers.append(this_patcher)
    return patcher


def patch(
    target, new=DEFAULT, spec=None, create=False,
    spec_set=None, autospec=None, new_callable=None, **kwargs
):
    """
    `patch` acts as a function decorator, class decorator or a context
    manager. Inside the body of the function or with statement, the `target`
    is patched with a `new` object. When the function/with statement exits
    the patch is undone.

    If `new` is omitted, then the target is replaced with a `MagicMock`. If
    `patch` is used as a decorator and `new` is omitted, the created mock is
    passed in as an extra argument to the decorated function. If `patch` is
    used as a context manager the created mock is returned by the context
    manager.

    `target` should be a string in the form `'package.module.ClassName'`. The
    `target` is imported and the specified object replaced with the `new`
    object, so the `target` must be importable from the environment you are
    calling `patch` from. The target is imported when the decorated function
    is executed, not at decoration time.

    The `spec` and `spec_set` keyword arguments are passed to the `MagicMock`
    if patch is creating one for you.

    In addition you can pass `spec=True` or `spec_set=True`, which causes
    patch to pass in the object being mocked as the spec/spec_set object.

    `new_callable` allows you to specify a different class, or callable object,
    that will be called to create the `new` object. By default `MagicMock` is
    used.

    A more powerful form of `spec` is `autospec`. If you set `autospec=True`
    then the mock will be created with a spec from the object being replaced.
    All attributes of the mock will also have the spec of the corresponding
    attribute of the object being replaced. Methods and functions being mocked
    will have their arguments checked and will raise a `TypeError` if they are
    called with the wrong signature. For mocks replacing a class, their return
    value (the 'instance') will have the same spec as the class.

    Instead of `autospec=True` you can pass `autospec=some_object` to use an
    arbitrary object as the spec instead of the one being replaced.

    By default `patch` will fail to replace attributes that don't exist. If
    you pass in `create=True`, and the attribute doesn't exist, patch will
    create the attribute for you when the patched function is called, and
    delete it again afterwards. This is useful for writing tests against
    attributes that your production code creates at runtime. It is off by
    default because it can be dangerous. With it switched on you can write
    passing tests against APIs that don't actually exist!

    Patch can be used as a `TestCase` class decorator.
It works by decorating each test method in the class. This reduces the boilerplate code when your test methods share a common patchings set. `patch` finds tests by looking for method names that start with `patch.TEST_PREFIX`. By default this is `test`, which matches the way `unittest` finds tests. You can specify an alternative prefix by setting `patch.TEST_PREFIX`. Patch can be used as a context manager, with the with statement. Here the patching applies to the indented block after the with statement. If you use "as" then the patched object will be bound to the name after the "as"; very useful if `patch` is creating a mock object for you. `patch` takes arbitrary keyword arguments. These will be passed to the `Mock` (or `new_callable`) on construction. `patch.dict(...)`, `patch.multiple(...)` and `patch.object(...)` are available for alternate use-cases. """ getter, attribute = _get_target(target) return _patch( getter, attribute, new, spec, create, spec_set, autospec, new_callable, kwargs ) class _patch_dict(object): """ Patch a dictionary, or dictionary like object, and restore the dictionary to its original state after the test. `in_dict` can be a dictionary or a mapping like container. If it is a mapping then it must at least support getting, setting and deleting items plus iterating over keys. `in_dict` can also be a string specifying the name of the dictionary, which will then be fetched by importing it. `values` can be a dictionary of values to set in the dictionary. `values` can also be an iterable of `(key, value)` pairs. If `clear` is True then the dictionary will be cleared before the new values are set. `patch.dict` can also be called with arbitrary keyword arguments to set values in the dictionary:: with patch.dict('sys.modules', mymodule=Mock(), other_module=Mock()): ... `patch.dict` can be used as a context manager, decorator or class decorator. When used as a class decorator `patch.dict` honours `patch.TEST_PREFIX` for choosing which methods to wrap. """ def __init__(self, in_dict, values=(), clear=False, **kwargs): if isinstance(in_dict, basestring): in_dict = _importer(in_dict) self.in_dict = in_dict # support any argument supported by dict(...) 
        # constructor
        self.values = dict(values)
        self.values.update(kwargs)
        self.clear = clear
        self._original = None

    def __call__(self, f):
        if isinstance(f, ClassTypes):
            return self.decorate_class(f)
        @wraps(f)
        def _inner(*args, **kw):
            self._patch_dict()
            try:
                return f(*args, **kw)
            finally:
                self._unpatch_dict()
        return _inner

    def decorate_class(self, klass):
        for attr in dir(klass):
            attr_value = getattr(klass, attr)
            if (attr.startswith(patch.TEST_PREFIX) and
                    hasattr(attr_value, "__call__")):
                decorator = _patch_dict(self.in_dict, self.values, self.clear)
                decorated = decorator(attr_value)
                setattr(klass, attr, decorated)
        return klass

    def __enter__(self):
        """Patch the dict."""
        self._patch_dict()

    def _patch_dict(self):
        values = self.values
        in_dict = self.in_dict
        clear = self.clear

        try:
            original = in_dict.copy()
        except AttributeError:
            # dict like object with no copy method
            # must support iteration over keys
            original = {}
            for key in in_dict:
                original[key] = in_dict[key]
        self._original = original

        if clear:
            _clear_dict(in_dict)

        try:
            in_dict.update(values)
        except AttributeError:
            # dict like object with no update method
            for key in values:
                in_dict[key] = values[key]

    def _unpatch_dict(self):
        in_dict = self.in_dict
        original = self._original

        _clear_dict(in_dict)

        try:
            in_dict.update(original)
        except AttributeError:
            for key in original:
                in_dict[key] = original[key]

    def __exit__(self, *args):
        """Unpatch the dict."""
        self._unpatch_dict()
        return False

    start = __enter__
    stop = __exit__


def _clear_dict(in_dict):
    try:
        in_dict.clear()
    except AttributeError:
        keys = list(in_dict)
        for key in keys:
            del in_dict[key]


def _patch_stopall():
    """Stop all active patches."""
    for patch in list(_patch._active_patches):
        patch.stop()


patch.object = _patch_object
patch.dict = _patch_dict
patch.multiple = _patch_multiple
patch.stopall = _patch_stopall
patch.TEST_PREFIX = 'test'

magic_methods = (
    "lt le gt ge eq ne "
    "getitem setitem delitem "
    "len contains iter "
    "hash str sizeof "
    "enter exit "
    "divmod neg pos abs invert "
    "complex int float index "
    "trunc floor ceil "
)

numerics = "add sub mul div floordiv mod lshift rshift and xor or pow "
inplace = ' '.join('i%s' % n for n in numerics.split())
right = ' '.join('r%s' % n for n in numerics.split())
extra = ''
if inPy3k:
    extra = 'bool next '
else:
    extra = 'unicode long nonzero oct hex truediv rtruediv '

# not including __prepare__, __instancecheck__, __subclasscheck__
# (as they are metaclass methods)
# __del__ is not supported at all as it causes problems if it exists
_non_defaults = set('__%s__' % method for method in [
    'cmp', 'getslice', 'setslice', 'coerce', 'subclasses',
    'format', 'get', 'set', 'delete', 'reversed',
    'missing', 'reduce', 'reduce_ex', 'getinitargs',
    'getnewargs', 'getstate', 'setstate', 'getformat',
    'setformat', 'repr', 'dir'
])


def _get_method(name, func):
    "Turns a callable object (like a mock) into a real function"
    def method(self, *args, **kw):
        return func(self, *args, **kw)
    method.__name__ = name
    return method


_magics = set(
    '__%s__' % method for method in
    ' '.join([magic_methods, numerics, inplace, right, extra]).split()
)

_all_magics = _magics | _non_defaults

_unsupported_magics = set([
    '__getattr__', '__setattr__',
    '__init__', '__new__', '__prepare__',
    '__instancecheck__', '__subclasscheck__',
    '__del__'
])

_calculate_return_value = {
    '__hash__': lambda self: object.__hash__(self),
    '__str__': lambda self: object.__str__(self),
    '__sizeof__': lambda self: object.__sizeof__(self),
    '__unicode__': lambda self: unicode(object.__str__(self)),
}

_return_values = {
    '__lt__':
NotImplemented, '__gt__': NotImplemented, '__le__': NotImplemented, '__ge__': NotImplemented, '__int__': 1, '__contains__': False, '__len__': 0, '__exit__': False, '__complex__': 1j, '__float__': 1.0, '__bool__': True, '__nonzero__': True, '__oct__': '1', '__hex__': '0x1', '__long__': long(1), '__index__': 1, } def _get_eq(self): def __eq__(other): ret_val = self.__eq__._mock_return_value if ret_val is not DEFAULT: return ret_val return self is other return __eq__ def _get_ne(self): def __ne__(other): if self.__ne__._mock_return_value is not DEFAULT: return DEFAULT return self is not other return __ne__ def _get_iter(self): def __iter__(): ret_val = self.__iter__._mock_return_value if ret_val is DEFAULT: return iter([]) # if ret_val was already an iterator, then calling iter on it should # return the iterator unchanged return iter(ret_val) return __iter__ _side_effect_methods = { '__eq__': _get_eq, '__ne__': _get_ne, '__iter__': _get_iter, } def _set_return_value(mock, method, name): fixed = _return_values.get(name, DEFAULT) if fixed is not DEFAULT: method.return_value = fixed return return_calulator = _calculate_return_value.get(name) if return_calulator is not None: try: return_value = return_calulator(mock) except AttributeError: # XXXX why do we return AttributeError here? # set it as a side_effect instead? return_value = AttributeError(name) method.return_value = return_value return side_effector = _side_effect_methods.get(name) if side_effector is not None: method.side_effect = side_effector(mock) class MagicMixin(object): def __init__(self, *args, **kw): _super(MagicMixin, self).__init__(*args, **kw) self._mock_set_magics() def _mock_set_magics(self): these_magics = _magics if self._mock_methods is not None: these_magics = _magics.intersection(self._mock_methods) remove_magics = set() remove_magics = _magics - these_magics for entry in remove_magics: if entry in type(self).__dict__: # remove unneeded magic methods delattr(self, entry) # don't overwrite existing attributes if called a second time these_magics = these_magics - set(type(self).__dict__) _type = type(self) for entry in these_magics: setattr(_type, entry, MagicProxy(entry, self)) class NonCallableMagicMock(MagicMixin, NonCallableMock): """A version of `MagicMock` that isn't callable.""" def mock_add_spec(self, spec, spec_set=False): """Add a spec to a mock. `spec` can either be an object or a list of strings. Only attributes on the `spec` can be fetched as attributes from the mock. If `spec_set` is True then only attributes on the spec can be set.""" self._mock_add_spec(spec, spec_set) self._mock_set_magics() class MagicMock(MagicMixin, Mock): """ MagicMock is a subclass of Mock with default implementations of most of the magic methods. You can use MagicMock without having to configure the magic methods yourself. If you use the `spec` or `spec_set` arguments then *only* magic methods that exist in the spec will be created. Attributes and the return value of a `MagicMock` will also be `MagicMocks`. """ def mock_add_spec(self, spec, spec_set=False): """Add a spec to a mock. `spec` can either be an object or a list of strings. Only attributes on the `spec` can be fetched as attributes from the mock. 
        If `spec_set` is True then only attributes on the spec can be set."""
        self._mock_add_spec(spec, spec_set)
        self._mock_set_magics()


class MagicProxy(object):
    def __init__(self, name, parent):
        self.name = name
        self.parent = parent

    def __call__(self, *args, **kwargs):
        m = self.create_mock()
        return m(*args, **kwargs)

    def create_mock(self):
        entry = self.name
        parent = self.parent
        m = parent._get_child_mock(name=entry, _new_name=entry,
                                   _new_parent=parent)
        setattr(parent, entry, m)
        _set_return_value(parent, m, entry)
        return m

    def __get__(self, obj, _type=None):
        return self.create_mock()


class _ANY(object):
    "A helper object that compares equal to everything."

    def __eq__(self, other):
        return True

    def __ne__(self, other):
        return False

    def __repr__(self):
        return '<ANY>'


ANY = _ANY()


def _format_call_signature(name, args, kwargs):
    message = '%s(%%s)' % name
    formatted_args = ''
    args_string = ', '.join([repr(arg) for arg in args])
    kwargs_string = ', '.join([
        '%s=%r' % (key, value) for key, value in kwargs.items()
    ])
    if args_string:
        formatted_args = args_string
    if kwargs_string:
        if formatted_args:
            formatted_args += ', '
        formatted_args += kwargs_string

    return message % formatted_args


class _Call(tuple):
    """
    A tuple for holding the results of a call to a mock, either in the form
    `(args, kwargs)` or `(name, args, kwargs)`.

    If args or kwargs are empty then a call tuple will compare equal to
    a tuple without those values. This makes comparisons less verbose::

        _Call(('name', (), {})) == ('name',)
        _Call(('name', (1,), {})) == ('name', (1,))
        _Call(((), {'a': 'b'})) == ({'a': 'b'},)

    The `_Call` object provides a useful shortcut for comparing with call::

        _Call(((1, 2), {'a': 3})) == call(1, 2, a=3)
        _Call(('foo', (1, 2), {'a': 3})) == call.foo(1, 2, a=3)

    If the _Call has no name then it will match any name.
    """
    def __new__(cls, value=(), name=None, parent=None, two=False,
                from_kall=True):
        name = ''
        args = ()
        kwargs = {}
        _len = len(value)
        if _len == 3:
            name, args, kwargs = value
        elif _len == 2:
            first, second = value
            if isinstance(first, basestring):
                name = first
                if isinstance(second, tuple):
                    args = second
                else:
                    kwargs = second
            else:
                args, kwargs = first, second
        elif _len == 1:
            value, = value
            if isinstance(value, basestring):
                name = value
            elif isinstance(value, tuple):
                args = value
            else:
                kwargs = value

        if two:
            return tuple.__new__(cls, (args, kwargs))

        return tuple.__new__(cls, (name, args, kwargs))

    def __init__(self, value=(), name=None, parent=None, two=False,
                 from_kall=True):
        self.name = name
        self.parent = parent
        self.from_kall = from_kall

    def __eq__(self, other):
        if other is ANY:
            return True
        try:
            len_other = len(other)
        except TypeError:
            return False

        self_name = ''
        if len(self) == 2:
            self_args, self_kwargs = self
        else:
            self_name, self_args, self_kwargs = self

        other_name = ''
        if len_other == 0:
            other_args, other_kwargs = (), {}
        elif len_other == 3:
            other_name, other_args, other_kwargs = other
        elif len_other == 1:
            value, = other
            if isinstance(value, tuple):
                other_args = value
                other_kwargs = {}
            elif isinstance(value, basestring):
                other_name = value
                other_args, other_kwargs = (), {}
            else:
                other_args = ()
                other_kwargs = value
        else:
            # len 2
            # could be (name, args) or (name, kwargs) or (args, kwargs)
            first, second = other
            if isinstance(first, basestring):
                other_name = first
                if isinstance(second, tuple):
                    other_args, other_kwargs = second, {}
                else:
                    other_args, other_kwargs = (), second
            else:
                other_args, other_kwargs = first, second

        if self_name and other_name != self_name:
            return False

        # this order is important for ANY to work!
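        # (with `other` on the left, an ANY inside the expected call object is
        # the left-hand operand of each element comparison, so ANY.__eq__ runs
        # first and matches whatever was actually passed)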
return (other_args, other_kwargs) == (self_args, self_kwargs) def __ne__(self, other): return not self.__eq__(other) def __call__(self, *args, **kwargs): if self.name is None: return _Call(('', args, kwargs), name='()') name = self.name + '()' return _Call((self.name, args, kwargs), name=name, parent=self) def __getattr__(self, attr): if self.name is None: return _Call(name=attr, from_kall=False) name = '%s.%s' % (self.name, attr) return _Call(name=name, parent=self, from_kall=False) def __repr__(self): if not self.from_kall: name = self.name or 'call' if name.startswith('()'): name = 'call%s' % name return name if len(self) == 2: name = 'call' args, kwargs = self else: name, args, kwargs = self if not name: name = 'call' elif not name.startswith('()'): name = 'call.%s' % name else: name = 'call%s' % name return _format_call_signature(name, args, kwargs) def call_list(self): """For a call object that represents multiple calls, `call_list` returns a list of all the intermediate calls as well as the final call.""" vals = [] thing = self while thing is not None: if thing.from_kall: vals.append(thing) thing = thing.parent return _CallList(reversed(vals)) call = _Call(from_kall=False) def create_autospec(spec, spec_set=False, instance=False, _parent=None, _name=None, **kwargs): """Create a mock object using another object as a spec. Attributes on the mock will use the corresponding attribute on the `spec` object as their spec. Functions or methods being mocked will have their arguments checked to check that they are called with the correct signature. If `spec_set` is True then attempting to set attributes that don't exist on the spec object will raise an `AttributeError`. If a class is used as a spec then the return value of the mock (the instance of the class) will have the same spec. You can use a class as the spec for an instance object by passing `instance=True`. The returned mock will only be callable if instances of the mock are callable. 
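    For example (an illustrative sketch; `add` is a stand-in function):

        >>> def add(a, b):
        ...     return a + b
        >>> mock_add = create_autospec(add, return_value=3)
        >>> mock_add(1, 2)
        3

    Calling `mock_add` with the wrong number of arguments raises a
    `TypeError`, because the signature of the spec'd function is enforced.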
`create_autospec` also takes arbitrary keyword arguments that are passed to the constructor of the created mock.""" if _is_list(spec): # can't pass a list instance to the mock constructor as it will be # interpreted as a list of strings spec = type(spec) is_type = isinstance(spec, ClassTypes) _kwargs = {'spec': spec} if spec_set: _kwargs = {'spec_set': spec} elif spec is None: # None we mock with a normal mock without a spec _kwargs = {} _kwargs.update(kwargs) Klass = MagicMock if type(spec) in DescriptorTypes: # descriptors don't have a spec # because we don't know what type they return _kwargs = {} elif not _callable(spec): Klass = NonCallableMagicMock elif is_type and instance and not _instance_callable(spec): Klass = NonCallableMagicMock _new_name = _name if _parent is None: # for a top level object no _new_name should be set _new_name = '' mock = Klass(parent=_parent, _new_parent=_parent, _new_name=_new_name, name=_name, **_kwargs) if isinstance(spec, FunctionTypes): # should only happen at the top level because we don't # recurse for functions mock = _set_signature(mock, spec) else: _check_signature(spec, mock, is_type, instance) if _parent is not None and not instance: _parent._mock_children[_name] = mock if is_type and not instance and 'return_value' not in kwargs: mock.return_value = create_autospec(spec, spec_set, instance=True, _name='()', _parent=mock) for entry in dir(spec): if _is_magic(entry): # MagicMock already does the useful magic methods for us continue if isinstance(spec, FunctionTypes) and entry in FunctionAttributes: # allow a mock to actually be a function continue # XXXX do we need a better way of getting attributes without # triggering code execution (?) Probably not - we need the actual # object to mock it so we would rather trigger a property than mock # the property descriptor. Likewise we want to mock out dynamically # provided attributes. # XXXX what about attributes that raise exceptions other than # AttributeError on being fetched? # we could be resilient against it, or catch and propagate the # exception when the attribute is fetched from the mock try: original = getattr(spec, entry) except AttributeError: continue kwargs = {'spec': original} if spec_set: kwargs = {'spec_set': original} if not isinstance(original, FunctionTypes): new = _SpecState(original, spec_set, mock, entry, instance) mock._mock_children[entry] = new else: parent = mock if isinstance(spec, FunctionTypes): parent = mock.mock new = MagicMock(parent=parent, name=entry, _new_name=entry, _new_parent=parent, **kwargs) mock._mock_children[entry] = new skipfirst = _must_skip(spec, entry, is_type) _check_signature(original, new, skipfirst=skipfirst) # so functions created with _set_signature become instance attributes, # *plus* their underlying mock exists in _mock_children of the parent # mock. Adding to _mock_children may be unnecessary where we are also # setting as an instance attribute? 
if isinstance(new, FunctionTypes): setattr(mock, entry, new) return mock def _must_skip(spec, entry, is_type): if not isinstance(spec, ClassTypes): if entry in getattr(spec, '__dict__', {}): # instance attribute - shouldn't skip return False spec = spec.__class__ if not hasattr(spec, '__mro__'): # old style class: can't have descriptors anyway return is_type for klass in spec.__mro__: result = klass.__dict__.get(entry, DEFAULT) if result is DEFAULT: continue if isinstance(result, (staticmethod, classmethod)): return False return is_type # shouldn't get here unless function is a dynamically provided attribute # XXXX untested behaviour return is_type def _get_class(obj): try: return obj.__class__ except AttributeError: # in Python 2, _sre.SRE_Pattern objects have no __class__ return type(obj) class _SpecState(object): def __init__(self, spec, spec_set=False, parent=None, name=None, ids=None, instance=False): self.spec = spec self.ids = ids self.spec_set = spec_set self.parent = parent self.instance = instance self.name = name FunctionTypes = ( # python function type(create_autospec), # instance method type(ANY.__eq__), # unbound method type(_ANY.__eq__), ) FunctionAttributes = set([ 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name', ]) file_spec = None def mock_open(mock=None, read_data=''): """ A helper function to create a mock to replace the use of `open`. It works for `open` called directly or used as a context manager. The `mock` argument is the mock object to configure. If `None` (the default) then a `MagicMock` will be created for you, with the API limited to methods or attributes available on standard file handles. `read_data` is a string for the `read` method of the file handle to return. This is an empty string by default. """ global file_spec if file_spec is None: # set on first use if inPy3k: import _io file_spec = list(set(dir(_io.TextIOWrapper)).union(set(dir(_io.BytesIO)))) else: file_spec = file if mock is None: mock = MagicMock(name='open', spec=open) handle = MagicMock(spec=file_spec) handle.write.return_value = None handle.__enter__.return_value = handle handle.read.return_value = read_data mock.return_value = handle return mock class PropertyMock(Mock): """ A mock intended to be used as a property, or other descriptor, on a class. `PropertyMock` provides `__get__` and `__set__` methods so you can specify a return value when it is fetched. Fetching a `PropertyMock` instance from an object calls the mock, with no args. Setting it calls the mock with the value being set. 
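    For example (an illustrative sketch; `Thing` is a stand-in class):

        >>> class Thing(object):
        ...     pass
        >>> thing = Thing()
        >>> prop = PropertyMock(return_value='fish')
        >>> type(thing).name = prop
        >>> thing.name
        'fish'
        >>> thing.name = 'whale'
        >>> prop.assert_called_with('whale')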
""" def _get_child_mock(self, **kwargs): return MagicMock(**kwargs) def __get__(self, obj, obj_type): return self() def __set__(self, obj, val): self(val) peewee-2.10.2/playhouse/tests/models.py000066400000000000000000000243061316645060400200700ustar00rootroot00000000000000import datetime import sys from peewee import * from playhouse.tests.base import TestModel from playhouse.tests.base import test_db if sys.version_info[0] == 3: long = int class User(TestModel): username = CharField() class Meta: db_table = 'users' def prepared(self): self.foo = self.username @classmethod def create_users(cls, n): for i in range(n): cls.create(username='u%d' % (i + 1)) class Blog(TestModel): user = ForeignKeyField(User) title = CharField(max_length=25) content = TextField(default='') pub_date = DateTimeField(null=True) pk = PrimaryKeyField() def __unicode__(self): return '%s: %s' % (self.user.username, self.title) def prepared(self): self.foo = self.title class Comment(TestModel): blog = ForeignKeyField(Blog, related_name='comments') comment = CharField() class Relationship(TestModel): from_user = ForeignKeyField(User, related_name='relationships') to_user = ForeignKeyField(User, related_name='related_to') class NullModel(TestModel): char_field = CharField(null=True) text_field = TextField(null=True) datetime_field = DateTimeField(null=True) int_field = IntegerField(null=True) float_field = FloatField(null=True) decimal_field1 = DecimalField(null=True) decimal_field2 = DecimalField(decimal_places=2, null=True) double_field = DoubleField(null=True) bigint_field = BigIntegerField(null=True) date_field = DateField(null=True) time_field = TimeField(null=True) boolean_field = BooleanField(null=True) fixed_char_field = FixedCharField(null=True) ts_field = TimestampField(null=True, default=None, resolution=1000000) ts_field2 = TimestampField(null=True, default=None, resolution=1000, utc=True) class TimestampModel(TestModel): local_us = TimestampField(null=True, default=None, resolution=1000000) utc_ms = TimestampField(null=True, default=None, resolution=1000, utc=True) local = TimestampField(null=True) class UniqueModel(TestModel): name = CharField(unique=True) class UniqueMultiField(TestModel): name = CharField(unique=True) field_a = CharField(default='') field_b = IntegerField(default=0) class OrderedModel(TestModel): title = CharField() created = DateTimeField(default=datetime.datetime.now) class Meta: order_by = ('-created',) class Category(TestModel): parent = ForeignKeyField('self', related_name='children', null=True) name = CharField() class UserCategory(TestModel): user = ForeignKeyField(User) category = ForeignKeyField(Category) class NonIntModel(TestModel): pk = CharField(primary_key=True) data = CharField() class NonIntRelModel(TestModel): non_int_model = ForeignKeyField(NonIntModel, related_name='nr') class DBUser(TestModel): user_id = PrimaryKeyField(db_column='db_user_id') username = CharField(db_column='db_username') class DBBlog(TestModel): blog_id = PrimaryKeyField(db_column='db_blog_id') title = CharField(db_column='db_title') user = ForeignKeyField(DBUser, db_column='db_user') class SeqModelA(TestModel): id = IntegerField(primary_key=True, sequence='just_testing_seq') num = IntegerField() class SeqModelB(TestModel): id = IntegerField(primary_key=True, sequence='just_testing_seq') other_num = IntegerField() class MultiIndexModel(TestModel): f1 = CharField() f2 = CharField() f3 = CharField() class Meta: indexes = ( (('f1', 'f2'), True), (('f2', 'f3'), False), ) class BlogTwo(Blog): title = 
TextField() extra_field = CharField() class Parent(TestModel): data = CharField() class Child(TestModel): parent = ForeignKeyField(Parent) data = CharField(default='') class Orphan(TestModel): parent = ForeignKeyField(Parent, null=True) data = CharField(default='') class ChildPet(TestModel): child = ForeignKeyField(Child) data = CharField(default='') class OrphanPet(TestModel): orphan = ForeignKeyField(Orphan) data = CharField(default='') class ChildNullableData(TestModel): child = ForeignKeyField(Child, null=True) data = CharField() class CSVField(TextField): def db_value(self, value): if value: return ','.join(value) return value or '' def python_value(self, value): return value.split(',') if value else [] class CSVRow(TestModel): data = CSVField() class BlobModel(TestModel): data = BlobField() class Job(TestModel): name = CharField() class JobExecutionRecord(TestModel): job = ForeignKeyField(Job, primary_key=True) status = CharField() class JERRelated(TestModel): jer = ForeignKeyField(JobExecutionRecord) class TestModelA(TestModel): field = CharField(primary_key=True) data = CharField() class TestModelB(TestModel): field = CharField(primary_key=True) data = CharField() class TestModelC(TestModel): field = CharField(primary_key=True) data = CharField() class Post(TestModel): title = CharField() class Tag(TestModel): tag = CharField() class TagPostThrough(TestModel): tag = ForeignKeyField(Tag, related_name='posts') post = ForeignKeyField(Post, related_name='tags') class Meta: primary_key = CompositeKey('tag', 'post') class TagPostThroughAlt(TestModel): tag = ForeignKeyField(Tag, related_name='posts_alt') post = ForeignKeyField(Post, related_name='tags_alt') class Manufacturer(TestModel): name = CharField() class CompositeKeyModel(TestModel): f1 = CharField() f2 = IntegerField() f3 = FloatField() class Meta: primary_key = CompositeKey('f1', 'f2') class UserThing(TestModel): thing = CharField() user = ForeignKeyField(User, related_name='things') class Meta: primary_key = CompositeKey('thing', 'user') class Component(TestModel): name = CharField() manufacturer = ForeignKeyField(Manufacturer, null=True) class Computer(TestModel): hard_drive = ForeignKeyField(Component, related_name='c1') memory = ForeignKeyField(Component, related_name='c2') processor = ForeignKeyField(Component, related_name='c3') class CheckModel(TestModel): value = IntegerField(constraints=[Check('value > 0')]) # Deferred foreign keys. 
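# Language and Snippet reference one another, so the foreign key from
# Language to Snippet is declared against a DeferredRelation placeholder
# and resolved afterwards via SnippetDeferred.set_model(Snippet).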
SnippetDeferred = DeferredRelation() class Language(TestModel): name = CharField() selected_snippet = ForeignKeyField(SnippetDeferred, null=True) class Snippet(TestModel): code = TextField() language = ForeignKeyField(Language, related_name='snippets') SnippetDeferred.set_model(Snippet) class _UpperField(CharField): def python_value(self, value): return value.upper() if value else value class UpperUser(TestModel): username = _UpperField() class Meta: db_table = User._meta.db_table class Package(TestModel): barcode = CharField(unique=True) class PackageItem(TestModel): title = CharField() package = ForeignKeyField( Package, related_name='items', to_field=Package.barcode) class PGSchema(TestModel): data = CharField() class Meta: schema = 'huey' class UpperCharField(CharField): def coerce(self, value): value = super(UpperCharField, self).coerce(value) if value: value = value.upper() return value class UpperModel(TestModel): data = UpperCharField() class CommentCategory(TestModel): category = ForeignKeyField(Category) comment = ForeignKeyField(Comment) sort_order = IntegerField(default=0) class Meta: primary_key = CompositeKey('comment', 'category') class BlogData(TestModel): blog = ForeignKeyField(Blog) class ServerDefaultModel(TestModel): name = CharField(constraints=[SQL("DEFAULT 'foo'")]) timestamp = DateTimeField(constraints=[ SQL('DEFAULT CURRENT_TIMESTAMP')]) class SpecialComment(TestModel): user = ForeignKeyField(User, related_name='special_comments') blog = ForeignKeyField(Blog, null=True, related_name='special_comments') name = CharField() class EmptyModel(TestModel): pass class NoPKModel(TestModel): data = TextField() class Meta: primary_key = False class TestingID(TestModel): uniq = UUIDField() class UUIDData(TestModel): id = UUIDField(primary_key=True) data = CharField() class UUIDRelatedModel(TestModel): data = ForeignKeyField(UUIDData, null=True, related_name='related_models') value = IntegerField(default=0) class UInt32Field(Field): db_field = 'int' def db_value(self, value): return long(value - (1 << 31)) def python_value(self, value): return long(value + (1 << 31)) class UIntModel(TestModel): data = UInt32Field() class UIntRelModel(TestModel): uint_model = ForeignKeyField(UIntModel, to_field='data') class Note(TestModel): user = ForeignKeyField(User, related_name='notes') text = TextField() class Flag(TestModel): label = TextField() class NoteFlag(TestModel): note = ForeignKeyField(Note, related_name='flags') flag = ForeignKeyField(Flag, related_name='notes') class NoteFlagNullable(TestModel): note = ForeignKeyField(Note, null=True, related_name='nullable_flags') flag = ForeignKeyField(Flag, null=True, related_name='nullable_notes') MODELS = [ User, Blog, Comment, Relationship, NullModel, TimestampModel, UniqueModel, OrderedModel, Category, UserCategory, NonIntModel, NonIntRelModel, DBUser, DBBlog, SeqModelA, SeqModelB, MultiIndexModel, BlogTwo, Parent, Child, Orphan, ChildPet, OrphanPet, BlobModel, Job, JobExecutionRecord, JERRelated, TestModelA, TestModelB, TestModelC, Tag, Post, TagPostThrough, TagPostThroughAlt, Language, Snippet, Manufacturer, CompositeKeyModel, UserThing, Component, Computer, CheckModel, Package, PackageItem, PGSchema, UpperModel, CommentCategory, BlogData, ServerDefaultModel, SpecialComment, EmptyModel, NoPKModel, TestingID, UUIDData, UUIDRelatedModel, UIntModel, UIntRelModel, Note, Flag, NoteFlag, ] peewee-2.10.2/playhouse/tests/test_apis.py000066400000000000000000000022221316645060400205710ustar00rootroot00000000000000from peewee import Node from 
peewee import * from playhouse.tests.base import PeeweeTestCase class TestNodeAPI(PeeweeTestCase): def test_extend(self): @Node.extend() def add(self, lhs, rhs): return lhs + rhs n = Node() self.assertEqual(n.add(4, 2), 6) delattr(Node, 'add') self.assertRaises(AttributeError, lambda: n.add(2, 4)) def test_clone(self): @Node.extend(clone=True) def hack(self, alias): self._negated = True self._alias = alias n = Node() c = n.hack('magic!') self.assertFalse(n._negated) self.assertEqual(n._alias, None) self.assertTrue(c._negated) self.assertEqual(c._alias, 'magic!') class TestModel(Model): data = CharField() hacked = TestModel.data.hack('nugget') self.assertFalse(TestModel.data._negated) self.assertEqual(TestModel.data._alias, None) self.assertTrue(hacked._negated) self.assertEqual(hacked._alias, 'nugget') delattr(Node, 'hack') self.assertRaises(AttributeError, lambda: TestModel.data.hack()) peewee-2.10.2/playhouse/tests/test_apsw.py000066400000000000000000000077201316645060400206170ustar00rootroot00000000000000import apsw import datetime from playhouse.apsw_ext import * from playhouse.tests.base import ModelTestCase db = APSWDatabase(':memory:') class BaseModel(Model): class Meta: database = db class User(BaseModel): username = CharField() class Message(BaseModel): user = ForeignKeyField(User) message = TextField() pub_date = DateTimeField() published = BooleanField() class APSWTestCase(ModelTestCase): requires = [Message, User] def test_db_register_functions(self): result = db.execute_sql('SELECT date_part(?, ?)', ( 'day', '2015-01-02 03:04:05')).fetchone()[0] self.assertEqual(result, 2) result = db.execute_sql('SELECT date_trunc(?, ?)', ( 'day', '2015-01-02 03:04:05')).fetchone()[0] self.assertEqual(result, '2015-01-02') def test_db_pragmas(self): test_db = APSWDatabase(':memory:', pragmas=( ('cache_size', '1337'), )) test_db.connect() cs = test_db.execute_sql('PRAGMA cache_size;').fetchone()[0] self.assertEqual(cs, 1337) def test_select_insert(self): users = ('u1', 'u2', 'u3') for user in users: User.create(username=user) self.assertEqual([x.username for x in User.select()], ['u1', 'u2', 'u3']) self.assertEqual([x.username for x in User.select().filter(username='x')], []) self.assertEqual([x.username for x in User.select().filter(username__in=['u1', 'u3'])], ['u1', 'u3']) dt = datetime.datetime(2012, 1, 1, 11, 11, 11) Message.create(user=User.get(username='u1'), message='herps', pub_date=dt, published=True) Message.create(user=User.get(username='u2'), message='derps', pub_date=dt, published=False) m1 = Message.get(message='herps') self.assertEqual(m1.user.username, 'u1') self.assertEqual(m1.pub_date, dt) self.assertEqual(m1.published, True) m2 = Message.get(message='derps') self.assertEqual(m2.user.username, 'u2') self.assertEqual(m2.pub_date, dt) self.assertEqual(m2.published, False) def test_update_delete(self): u1 = User.create(username='u1') u2 = User.create(username='u2') u1.username = 'u1-modified' u1.save() self.assertEqual(User.select().count(), 2) self.assertEqual(User.get(username='u1-modified').id, u1.id) u1.delete_instance() self.assertEqual(User.select().count(), 1) def test_transaction_handling(self): dt = datetime.datetime(2012, 1, 1, 11, 11, 11) def do_ctx_mgr_error(): with db.transaction(): User.create(username='u1') raise ValueError self.assertRaises(ValueError, do_ctx_mgr_error) self.assertEqual(User.select().count(), 0) def do_ctx_mgr_success(): with db.transaction(): u = User.create(username='test') Message.create(message='testing', user=u, pub_date=dt, published=1) 
do_ctx_mgr_success() self.assertEqual(User.select().count(), 1) self.assertEqual(Message.select().count(), 1) @db.commit_on_success def create_error(): u = User.create(username='test') Message.create(message='testing', user=u, pub_date=dt, published=1) raise ValueError self.assertRaises(ValueError, create_error) self.assertEqual(User.select().count(), 1) @db.commit_on_success def create_success(): u = User.create(username='test') Message.create(message='testing', user=u, pub_date=dt, published=1) create_success() self.assertEqual(User.select().count(), 2) self.assertEqual(Message.select().count(), 2) def test_exists_regression(self): User.create(username='u1') self.assertTrue(User.select().where(User.username == 'u1').exists()) self.assertFalse(User.select().where(User.username == 'ux').exists()) peewee-2.10.2/playhouse/tests/test_berkeleydb.py000066400000000000000000000074251316645060400217570ustar00rootroot00000000000000import os import shutil from peewee import IntegrityError from playhouse.berkeleydb import * from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import skip_unless database = database_initializer.get_database('berkeleydb') class BaseModel(Model): class Meta: database = database class Person(BaseModel): name = CharField(unique=True) class Message(BaseModel): person = ForeignKeyField(Person, related_name='messages') body = TextField() @skip_unless(BerkeleyDatabase.check_pysqlite) class TestBerkeleyDatabase(ModelTestCase): requires = [Person, Message] def setUp(self): self.remove_db_files() super(TestBerkeleyDatabase, self).setUp() def tearDown(self): super(TestBerkeleyDatabase, self).tearDown() if not database.is_closed(): database.close() def remove_db_files(self): filename = database.database if os.path.exists(filename): os.unlink(filename) if os.path.exists(filename + '-journal'): shutil.rmtree(filename + '-journal') def test_storage_retrieval(self): pc = Person.create(name='charlie') ph = Person.create(name='huey') for i in range(3): Message.create(person=pc, body='message-%s' % i) self.assertEqual(Message.select().count(), 3) self.assertEqual(Person.select().count(), 2) self.assertEqual( [msg.body for msg in pc.messages.order_by(Message.body)], ['message-0', 'message-1', 'message-2']) self.assertEqual(list(ph.messages), []) def test_transaction(self): with database.transaction(): Person.create(name='charlie') self.assertEqual(Person.select().count(), 1) @database.commit_on_success def rollback(): Person.create(name='charlie') self.assertRaises(IntegrityError, rollback) self.assertEqual(Person.select().count(), 1) def _test_pragmas(self, db): class PragmaTest(Model): data = TextField() class Meta: database = db sql = lambda q: db.execute_sql(q).fetchone()[0] with db.execution_context() as ctx: PragmaTest.create_table() # Use another connection to check the pragma values. with db.execution_context() as ctx: conn = db.get_conn() cache = sql('PRAGMA cache_size;') page = sql('PRAGMA page_size;') mvcc = sql('PRAGMA multiversion;') self.assertEqual(cache, 1000) self.assertEqual(page, 2048) self.assertEqual(mvcc, 1) # Now, use two connections. This tests the weird behavior of the # BTree cache. 
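        # Both connections belong to the same database object, so the second
        # connection should observe the same pragma values configured above.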
conn = db.get_conn() self.assertEqual(sql('PRAGMA multiversion;'), 1) with db.execution_context(): conn2 = db.get_conn() self.assertTrue(id(conn) != id(conn2)) self.assertEqual(sql('PRAGMA cache_size;'), 1000) self.assertEqual(sql('PRAGMA multiversion;'), 1) self.assertEqual(sql('PRAGMA page_size;'), 2048) def test_pragmas(self): database.close() self.remove_db_files() db = BerkeleyDatabase( database.database, cache_size=1000, page_size=2048, multiversion=True) try: self._test_pragmas(db) finally: if not db.is_closed(): db.close() def test_udf(self): @database.func() def title(s): return s.title() with database.execution_context(): res = database.execute_sql('select title(?)', ('whats up',)) self.assertEqual(res.fetchone(), ('Whats Up',)) peewee-2.10.2/playhouse/tests/test_compound_queries.py000066400000000000000000000404621316645060400232260ustar00rootroot00000000000000import itertools import operator import sys if sys.version_info[0] != 3: from functools import reduce from functools import wraps from peewee import * from playhouse.tests.base import compiler from playhouse.tests.base import database_initializer from playhouse.tests.base import log_console from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_test_if from playhouse.tests.base import skip_unless from playhouse.tests.base import test_db from playhouse.tests.models import * compound_db = database_initializer.get_in_memory_database() class CompoundBase(Model): class Meta: database = compound_db class Alpha(CompoundBase): alpha = IntegerField() class Beta(CompoundBase): beta = IntegerField() other = IntegerField(default=0) class Gamma(CompoundBase): gamma = IntegerField() other = IntegerField(default=1) class TestCompoundSelectSQL(PeeweeTestCase): def setUp(self): super(TestCompoundSelectSQL, self).setUp() compound_db.compound_select_parentheses = False # Restore default. 
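        # Reusable sub-queries that the SQL-generation tests below combine
        # using the | (UNION) operator.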
self.a1 = Alpha.select(Alpha.alpha).where(Alpha.alpha < 2) self.a2 = Alpha.select(Alpha.alpha).where(Alpha.alpha > 5) self.b1 = Beta.select(Beta.beta).where(Beta.beta < 3) self.b2 = Beta.select(Beta.beta).where(Beta.beta > 4) def test_simple_sql(self): lhs = Alpha.select(Alpha.alpha) rhs = Beta.select(Beta.beta) sql, params = (lhs | rhs).sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 UNION ' 'SELECT "t2"."beta" FROM "beta" AS t2')) sql, params = ( Alpha.select(Alpha.alpha) | Beta.select(Beta.beta) | Gamma.select(Gamma.gamma)).sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 UNION ' 'SELECT "t2"."beta" FROM "beta" AS t2 UNION ' 'SELECT "t3"."gamma" FROM "gamma" AS t3')) sql, params = ( Alpha.select(Alpha.alpha) | (Beta.select(Beta.beta) | Gamma.select(Gamma.gamma))).sql() self.assertEqual(sql, ( 'SELECT "t3"."alpha" FROM "alpha" AS t3 UNION ' 'SELECT "t1"."beta" FROM "beta" AS t1 UNION ' 'SELECT "t2"."gamma" FROM "gamma" AS t2')) def test_simple_same_model(self): queries = [Alpha.select(Alpha.alpha) for i in range(3)] lhs = queries[0] | queries[1] compound = lhs | queries[2] sql, params = compound.sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 UNION ' 'SELECT "t2"."alpha" FROM "alpha" AS t2 UNION ' 'SELECT "t3"."alpha" FROM "alpha" AS t3')) lhs = queries[0] compound = lhs | (queries[1] | queries[2]) sql, params = compound.sql() self.assertEqual(sql, ( 'SELECT "t3"."alpha" FROM "alpha" AS t3 UNION ' 'SELECT "t1"."alpha" FROM "alpha" AS t1 UNION ' 'SELECT "t2"."alpha" FROM "alpha" AS t2')) def test_where_clauses(self): sql, params = (self.a1 | self.a2).sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'UNION ' 'SELECT "t2"."alpha" FROM "alpha" AS t2 WHERE ("t2"."alpha" > ?)')) self.assertEqual(params, [2, 5]) sql, params = (self.a1 | self.b1).sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'UNION ' 'SELECT "t2"."beta" FROM "beta" AS t2 WHERE ("t2"."beta" < ?)')) self.assertEqual(params, [2, 3]) sql, params = (self.a1 | self.b1 | self.a2 | self.b2).sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'UNION ' 'SELECT "t2"."beta" FROM "beta" AS t2 WHERE ("t2"."beta" < ?) ' 'UNION ' 'SELECT "t4"."alpha" FROM "alpha" AS t4 WHERE ("t4"."alpha" > ?) ' 'UNION ' 'SELECT "t3"."beta" FROM "beta" AS t3 WHERE ("t3"."beta" > ?)')) self.assertEqual(params, [2, 3, 5, 4]) def test_outer_limit(self): sql, params = (self.a1 | self.a2).limit(3).sql() self.assertEqual(sql, ( 'SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'UNION ' 'SELECT "t2"."alpha" FROM "alpha" AS t2 WHERE ("t2"."alpha" > ?) ' 'LIMIT 3')) def test_union_in_from(self): compound = (self.a1 | self.a2).alias('cq') sql, params = Alpha.select(compound.c.alpha).from_(compound).sql() self.assertEqual(sql, ( 'SELECT "cq"."alpha" FROM (' 'SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'UNION ' 'SELECT "t2"."alpha" FROM "alpha" AS t2 WHERE ("t2"."alpha" > ?)' ') AS cq')) compound = (self.a1 | self.b1 | self.b2).alias('cq') sql, params = Alpha.select(SQL('1')).from_(compound).sql() self.assertEqual(sql, ( 'SELECT 1 FROM (' 'SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'UNION ' 'SELECT "t2"."beta" FROM "beta" AS t2 WHERE ("t2"."beta" < ?) 
' 'UNION ' 'SELECT "t3"."beta" FROM "beta" AS t3 WHERE ("t3"."beta" > ?)' ') AS cq')) self.assertEqual(params, [2, 3, 4]) def test_parentheses(self): compound_db.compound_select_parentheses = True sql, params = (self.a1 | self.a2).sql() self.assertEqual(sql, ( '(SELECT "t1"."alpha" FROM "alpha" AS t1 ' 'WHERE ("t1"."alpha" < ?)) ' 'UNION ' '(SELECT "t2"."alpha" FROM "alpha" AS t2 ' 'WHERE ("t2"."alpha" > ?))')) self.assertEqual(params, [2, 5]) def test_multiple_with_parentheses(self): compound_db.compound_select_parentheses = True queries = [Alpha.select(Alpha.alpha) for i in range(3)] lhs = queries[0] | queries[1] compound = lhs | queries[2] sql, params = compound.sql() self.assertEqual(sql, ( '(SELECT "t1"."alpha" FROM "alpha" AS t1) UNION ' '(SELECT "t2"."alpha" FROM "alpha" AS t2) UNION ' '(SELECT "t3"."alpha" FROM "alpha" AS t3)')) lhs = queries[0] compound = lhs | (queries[1] | queries[2]) sql, params = compound.sql() self.assertEqual(sql, ( '(SELECT "t3"."alpha" FROM "alpha" AS t3) UNION ' '(SELECT "t1"."alpha" FROM "alpha" AS t1) UNION ' '(SELECT "t2"."alpha" FROM "alpha" AS t2)')) def test_inner_limit(self): compound_db.compound_select_parentheses = True a1 = Alpha.select(Alpha.alpha).where(Alpha.alpha < 2).limit(2) a2 = Alpha.select(Alpha.alpha).where(Alpha.alpha > 5).limit(4) sql, params = (a1 | a2).limit(3).sql() self.assertEqual(sql, ( '(SELECT "t1"."alpha" FROM "alpha" AS t1 WHERE ("t1"."alpha" < ?) ' 'LIMIT 2) ' 'UNION ' '(SELECT "t2"."alpha" FROM "alpha" AS t2 WHERE ("t2"."alpha" > ?) ' 'LIMIT 4) ' 'LIMIT 3')) def test_union_subquery(self): union = (Alpha.select(Alpha.alpha) | Beta.select(Beta.beta)) query = Alpha.select().where(Alpha.alpha << union) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."alpha" ' 'FROM "alpha" AS t1 WHERE ("t1"."alpha" IN (' 'SELECT "t1"."alpha" FROM "alpha" AS t1 ' 'UNION ' 'SELECT "t2"."beta" FROM "beta" AS t2))')) class TestCompoundSelectQueries(ModelTestCase): requires = [User, UniqueModel, OrderedModel, Blog] # User -> username, UniqueModel -> name, OrderedModel -> title test_values = { User.username: ['a', 'b', 'c', 'd'], OrderedModel.title: ['a', 'c', 'e'], UniqueModel.name: ['b', 'd', 'e'], } def setUp(self): super(TestCompoundSelectQueries, self).setUp() for field, values in self.test_values.items(): for value in values: field.model_class.create(**{field.name: value}) def requires_op(op): def decorator(fn): @wraps(fn) def inner(self): if op in test_db.compound_operations: return fn(self) else: log_console('"%s" not supported, skipping %s' % (op, fn.__name__)) return inner return decorator def assertValues(self, query, expected): self.assertEqual(sorted(query.tuples()), [(x,) for x in sorted(expected)]) def assertPermutations(self, op, expected): fields = { User: User.username, UniqueModel: UniqueModel.name, OrderedModel: OrderedModel.title, } for key in itertools.permutations(fields.keys(), 2): if key in expected: left, right = key query = op(left.select(fields[left]).order_by(), right.select(fields[right]).order_by()) # Ensure the sorted tuples returned from the query are equal # to the sorted values we expected for this combination. 
                self.assertValues(query, expected[key])

    @requires_op('UNION')
    def test_union(self):
        all_letters = ['a', 'b', 'c', 'd', 'e']
        self.assertPermutations(operator.or_, {
            (User, UniqueModel): all_letters,
            (User, OrderedModel): all_letters,
            (UniqueModel, User): all_letters,
            (UniqueModel, OrderedModel): all_letters,
            (OrderedModel, User): all_letters,
            (OrderedModel, UniqueModel): all_letters,
        })

    @requires_op('UNION ALL')
    def test_union_all(self):
        all_letters = ['a', 'b', 'c', 'd', 'e']
        users = User.select(User.username)
        uniques = UniqueModel.select(UniqueModel.name)
        query = users.union_all(uniques)
        results = [row[0] for row in query.tuples()]
        self.assertEqual(sorted(results), ['a', 'b', 'b', 'c', 'd', 'd', 'e'])

    @requires_op('UNION')
    def test_union_from(self):
        uq = (User
              .select(User.username.alias('name'))
              .where(User.username << ['a', 'b', 'd']))
        oq = (OrderedModel
              .select(OrderedModel.title.alias('name'))
              .where(OrderedModel.title << ['a', 'b'])
              .order_by())
        iq = (UniqueModel
              .select(UniqueModel.name.alias('name'))
              .where(UniqueModel.name << ['c', 'd']))
        union_q = (uq | oq | iq).alias('union_q')

        query = (User
                 .select(union_q.c.name)
                 .from_(union_q)
                 .order_by(union_q.c.name.desc()))
        self.assertEqual([row[0] for row in query.tuples()], ['d', 'b', 'a'])

    @requires_op('UNION')
    def test_union_count(self):
        a = User.select().where(User.username == 'a')
        c_and_d = User.select().where(User.username << ['c', 'd'])
        self.assertEqual(a.count(), 1)
        self.assertEqual(c_and_d.count(), 2)

        union = a | c_and_d
        self.assertEqual(union.wrapped_count(), 3)

        overlapping = User.select() | c_and_d
        self.assertEqual(overlapping.wrapped_count(), 4)

    @requires_op('INTERSECT')
    def test_intersect(self):
        self.assertPermutations(operator.and_, {
            (User, UniqueModel): ['b', 'd'],
            (User, OrderedModel): ['a', 'c'],
            (UniqueModel, User): ['b', 'd'],
            (UniqueModel, OrderedModel): ['e'],
            (OrderedModel, User): ['a', 'c'],
            (OrderedModel, UniqueModel): ['e'],
        })

    @requires_op('EXCEPT')
    def test_except(self):
        self.assertPermutations(operator.sub, {
            (User, UniqueModel): ['a', 'c'],
            (User, OrderedModel): ['b', 'd'],
            (UniqueModel, User): ['e'],
            (UniqueModel, OrderedModel): ['b', 'd'],
            (OrderedModel, User): ['e'],
            (OrderedModel, UniqueModel): ['a', 'c'],
        })

    @requires_op('INTERSECT')
    @requires_op('EXCEPT')
    def test_symmetric_difference(self):
        self.assertPermutations(operator.xor, {
            (User, UniqueModel): ['a', 'c', 'e'],
            (User, OrderedModel): ['b', 'd', 'e'],
            (UniqueModel, User): ['a', 'c', 'e'],
            (UniqueModel, OrderedModel): ['a', 'b', 'c', 'd'],
            (OrderedModel, User): ['b', 'd', 'e'],
            (OrderedModel, UniqueModel): ['a', 'b', 'c', 'd'],
        })

    def test_model_instances(self):
        union = (User.select(User.username) |
                 UniqueModel.select(UniqueModel.name))
        query = union.order_by(SQL('username').desc()).limit(3)
        self.assertEqual([user.username for user in query], ['e', 'd', 'c'])

    @requires_op('UNION')
    @requires_op('INTERSECT')
    def test_complex(self):
        left = User.select(User.username).where(User.username << ['a', 'b'])
        right = UniqueModel.select(UniqueModel.name).where(
            UniqueModel.name << ['b', 'd', 'e'])

        query = (left & right).order_by(SQL('1'))
        self.assertEqual(list(query.dicts()), [{'username': 'b'}])

        query = (left | right).order_by(SQL('1'))
        self.assertEqual(list(query.dicts()), [
            {'username': 'a'},
            {'username': 'b'},
            {'username': 'd'},
            {'username': 'e'}])

    @requires_op('UNION')
    @skip_test_if(lambda: isinstance(test_db, MySQLDatabase))
    # MySQL needs parens, but doesn't like them here.
def test_union_subquery(self): union = (User.select(User.username).where(User.username == 'a') | UniqueModel.select(UniqueModel.name)) query = (User .select(User.username) .where(User.username << union) .order_by(User.username.desc())) self.assertEqual(list(query.dicts()), [ {'username': 'd'}, {'username': 'b'}, {'username': 'a'}]) @requires_op('UNION') def test_result_wrapper(self): users = User.select().order_by(User.username) for user in users: for msg in ['foo', 'bar', 'baz']: Blog.create(title='%s-%s' % (user.username, msg), user=user) with self.assertQueryCount(1): q1 = (Blog .select(Blog, User) .join(User) .where(Blog.title.contains('foo'))) q2 = (Blog .select(Blog, User) .join(User) .where(Blog.title.contains('baz'))) cq = (q1 | q2).order_by(SQL('username, title')) results = [(b.user.username, b.title) for b in cq] self.assertEqual(results, [ ('a', 'a-baz'), ('a', 'a-foo'), ('b', 'b-baz'), ('b', 'b-foo'), ('c', 'c-baz'), ('c', 'c-foo'), ('d', 'd-baz'), ('d', 'd-foo'), ]) @requires_op('UNION') def test_union_with_count(self): lhs = User.select().where(User.username << ['a', 'b']) rhs = User.select().where(User.username << ['d', 'x']) cq = (lhs | rhs) self.assertEqual(cq.count(), 3) @skip_unless(lambda: isinstance(test_db, PostgresqlDatabase)) class TestCompoundWithOrderLimit(ModelTestCase): requires = [User] def setUp(self): super(TestCompoundWithOrderLimit, self).setUp() for username in ['a', 'b', 'c', 'd', 'e', 'f']: User.create(username=username) def test_union_with_order_limit(self): lhs = (User .select(User.username) .where(User.username << ['a', 'b', 'c'])) rhs = (User .select(User.username) .where(User.username << ['d', 'e', 'f'])) cq = (lhs.order_by(User.username.desc()).limit(2) | rhs.order_by(User.username.desc()).limit(2)) results = [user.username for user in cq] self.assertEqual(sorted(results), ['b', 'c', 'e', 'f']) cq = cq.order_by(cq.c.username.desc()).limit(3) results = [user.username for user in cq] self.assertEqual(results, ['f', 'e', 'c']) peewee-2.10.2/playhouse/tests/test_csv_utils.py000066400000000000000000000211351316645060400216540ustar00rootroot00000000000000import csv import datetime from contextlib import contextmanager from datetime import date try: from StringIO import StringIO except ImportError: from io import StringIO from textwrap import dedent from peewee import * from playhouse.csv_utils import * from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase class TestRowConverter(RowConverter): @contextmanager def get_reader(self, csv_data, **reader_kwargs): reader = csv.reader(StringIO(csv_data), **reader_kwargs) yield reader class TestBooleanRowConverter(RowConverter): @convert_field(BooleanField, default=False) def is_boolean(self, value): return value in ('TRUE', 'FALSE') def get_checks(self): checks = super(TestBooleanRowConverter, self).get_checks() checks.insert(-1, self.is_boolean) return checks class TestLoader(Loader): @contextmanager def get_reader(self, csv_data, **reader_kwargs): reader = csv.reader(StringIO(csv_data), **reader_kwargs) yield reader def get_converter(self): return self.converter or TestRowConverter( self.database, has_header=self.has_header, sample_size=self.sample_size) db = database_initializer.get_in_memory_database() class BaseModel(Model): class Meta: database = db class User(BaseModel): username = CharField() class Note(BaseModel): user = ForeignKeyField(User) content = TextField() timestamp = 
DateTimeField(default=datetime.datetime.now) is_published = BooleanField(default=True) class TestCustomConverter(PeeweeTestCase): def setUp(self): super(TestCustomConverter, self).setUp() self.db = database_initializer.get_in_memory_database() def tearDown(self): if not self.db.is_closed(): self.db.close() super(TestCustomConverter, self).tearDown() def test_custom_converter(self): csv_data = StringIO('\r\n'.join(( 'username,enabled,last_login', 'charlie,TRUE,2015-01-02 00:00:00', 'huey,FALSE,2015-02-03 00:00:00', 'zaizee,,2015-03-04 00:00:00', ))) converter = TestBooleanRowConverter(self.db) ModelClass = load_csv(self.db, csv_data, converter=converter) self.assertEqual(sorted(ModelClass._meta.fields.keys()), [ '_auto_pk', 'enabled', 'last_login', 'username']) self.assertTrue(isinstance(ModelClass.enabled, BooleanField)) self.assertTrue(isinstance(ModelClass.last_login, DateTimeField)) self.assertTrue(isinstance(ModelClass.username, BareField)) class TestCSVConversion(PeeweeTestCase): header = 'id,name,dob,salary,is_admin' simple = '10,"F1 L1",1983-01-01,10000,t' float_sal = '20,"F2 L2",1983-01-02,20000.5,f' only_name = ',"F3 L3",,,' mismatch = 'foo,F4 L4,dob,sal,x' def setUp(self): super(TestCSVConversion, self).setUp() db.execute_sql('drop table if exists csv_test;') def build_csv(self, *lines): return '\r\n'.join(lines) def load(self, *lines, **loader_kwargs): csv = self.build_csv(*lines) loader_kwargs['file_or_name'] = csv loader_kwargs.setdefault('db_table', 'csv_test') loader_kwargs.setdefault('db_or_model', db) return TestLoader(**loader_kwargs).load() def assertData(self, ModelClass, expected): name_field = ModelClass._meta.sorted_fields[1] query = ModelClass.select().order_by(name_field).tuples() self.assertEqual([row for row in query], expected) def test_defaults(self): ModelClass = self.load( self.header, self.simple, self.float_sal, self.only_name) self.assertData(ModelClass, [ (10, 'F1 L1', date(1983, 1, 1), 10000., 't'), (20, 'F2 L2', date(1983, 1, 2), 20000.5, 'f'), (21, 'F3 L3', None, 0., ''), ]) def test_no_header(self): ModelClass = self.load( self.simple, self.float_sal, field_names=['f1', 'f2', 'f3', 'f4', 'f5'], has_header=False) self.assertEqual(ModelClass._meta.sorted_field_names, [ '_auto_pk', 'f1', 'f2', 'f3', 'f4', 'f5']) self.assertData(ModelClass, [ (1, 10, 'F1 L1', date(1983, 1, 1), 10000., 't'), (2, 20, 'F2 L2', date(1983, 1, 2), 20000.5, 'f')]) def test_no_header_no_fieldnames(self): ModelClass = self.load( self.simple, self.float_sal, has_header=False) self.assertEqual(ModelClass._meta.sorted_field_names, [ '_auto_pk', 'field_0', 'field_1', 'field_2', 'field_3', 'field_4']) def test_mismatch_types(self): ModelClass = self.load( self.header, self.simple, self.mismatch) self.assertData(ModelClass, [ ('10', 'F1 L1', '1983-01-01', '10000', 't'), ('foo', 'F4 L4', 'dob', 'sal', 'x')]) def test_fields(self): fields = [ PrimaryKeyField(), CharField(), DateField(), FloatField(), CharField()] ModelClass = self.load( self.header, self.simple, self.float_sal, fields=fields) self.assertEqual( list(map(type, fields)), list(map(type, ModelClass._meta.sorted_fields))) self.assertData(ModelClass, [ (10, 'F1 L1', date(1983, 1, 1), 10000., 't'), (20, 'F2 L2', date(1983, 1, 2), 20000.5, 'f')]) class TestCSVDump(ModelTestCase): requires = [Note, User] def setUp(self): super(TestCSVDump, self).setUp() self.users = [] for i in range(3): user = User.create(username='user-%s' % i) for j in range(i * 3): Note.create( user=user, content='note-%s-%s' % (i, j), 
timestamp=datetime.datetime(2014, 1 + i, 1 + j), is_published=j % 2 == 0) self.users.append(user) def assertCSV(self, query, csv_lines, **kwargs): buf = StringIO() kwargs['close_file'] = False # Do not close the StringIO object. final_buf = dump_csv(query, buf, **kwargs) self.assertEqual(final_buf.getvalue().splitlines(), csv_lines) def test_dump_simple(self): expected = [ 'id,username', '%s,user-0' % self.users[0].id, '%s,user-1' % self.users[1].id, '%s,user-2' % self.users[2].id] self.assertCSV(User.select().order_by(User.id), expected) self.assertCSV( User.select().order_by(User.id), expected[1:], include_header=False) user_0_id = self.users[0].id self.users[0].username = '"herps", derp' self.users[0].save() query = User.select().where(User.id == user_0_id) self.assertCSV(query, [ 'id,username', '%s,"""herps"", derp"' % user_0_id]) def test_dump_functions(self): query = (User .select(User.username, fn.COUNT(Note.id)) .join(Note, JOIN.LEFT_OUTER) .group_by(User.username) .order_by(User.id)) expected = [ 'username,COUNT', 'user-0,0', 'user-1,3', 'user-2,6'] self.assertCSV(query, expected) query = query.select( User.username.alias('name'), fn.COUNT(Note.id).alias('num_notes')) expected[0] = 'name,num_notes' self.assertCSV(query, expected) def test_dump_field_types(self): query = (Note .select( User.username, Note.content, Note.timestamp, Note.is_published) .join(User) .order_by(Note.id)) expected = [ 'username,content,timestamp,is_published', 'user-1,note-1-0,2014-02-01 00:00:00,True', 'user-1,note-1-1,2014-02-02 00:00:00,False', 'user-1,note-1-2,2014-02-03 00:00:00,True', 'user-2,note-2-0,2014-03-01 00:00:00,True', 'user-2,note-2-1,2014-03-02 00:00:00,False', 'user-2,note-2-2,2014-03-03 00:00:00,True', 'user-2,note-2-3,2014-03-04 00:00:00,False', 'user-2,note-2-4,2014-03-05 00:00:00,True', 'user-2,note-2-5,2014-03-06 00:00:00,False'] self.assertCSV(query, expected) peewee-2.10.2/playhouse/tests/test_database.py000066400000000000000000000305621316645060400214110ustar00rootroot00000000000000# encoding=utf-8 import sys import threading try: from Queue import Queue except ImportError: from queue import Queue from peewee import OperationalError from peewee import SqliteDatabase from playhouse.tests.base import compiler from playhouse.tests.base import database_class from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import query_db from playhouse.tests.base import skip_unless from playhouse.tests.base import test_db from playhouse.tests.base import ulit from playhouse.tests.models import * class TestMultiThreadedQueries(ModelTestCase): requires = [User] threads = 4 def setUp(self): self._orig_db = test_db kwargs = {} try: # Some engines need the extra kwargs. kwargs.update(test_db.connect_kwargs) except: pass if isinstance(test_db, SqliteDatabase): # Put a very large timeout in place to avoid `database is locked` # when using SQLite (default is 5). 
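            # The timeout is expressed in seconds and is passed through to
            # the underlying sqlite3 driver as its busy timeout.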
kwargs['timeout'] = 30 User._meta.database = self.new_connection() super(TestMultiThreadedQueries, self).setUp() def tearDown(self): User._meta.database = self._orig_db super(TestMultiThreadedQueries, self).tearDown() def test_multiple_writers(self): def create_user_thread(low, hi): for i in range(low, hi): User.create(username='u%d' % i) User._meta.database.close() threads = [] for i in range(self.threads): threads.append(threading.Thread(target=create_user_thread, args=(i*10, i * 10 + 10))) [t.start() for t in threads] [t.join() for t in threads] self.assertEqual(User.select().count(), self.threads * 10) def test_multiple_readers(self): data_queue = Queue() def reader_thread(q, num): for i in range(num): data_queue.put(User.select().count()) threads = [] for i in range(self.threads): threads.append(threading.Thread(target=reader_thread, args=(data_queue, 20))) [t.start() for t in threads] [t.join() for t in threads] self.assertEqual(data_queue.qsize(), self.threads * 20) class TestDeferredDatabase(PeeweeTestCase): def test_deferred_database(self): deferred_db = SqliteDatabase(None) self.assertTrue(deferred_db.deferred) class DeferredModel(Model): class Meta: database = deferred_db self.assertRaises(Exception, deferred_db.connect) sq = DeferredModel.select() self.assertRaises(Exception, sq.execute) deferred_db.init(':memory:') self.assertFalse(deferred_db.deferred) # connecting works conn = deferred_db.connect() DeferredModel.create_table() sq = DeferredModel.select() self.assertEqual(list(sq), []) deferred_db.init(None) self.assertTrue(deferred_db.deferred) class TestSQLAll(PeeweeTestCase): def setUp(self): super(TestSQLAll, self).setUp() fake_db = SqliteDatabase(':memory:') UniqueModel._meta.database = fake_db SeqModelA._meta.database = fake_db MultiIndexModel._meta.database = fake_db def tearDown(self): super(TestSQLAll, self).tearDown() UniqueModel._meta.database = test_db SeqModelA._meta.database = test_db MultiIndexModel._meta.database = test_db def test_sqlall(self): sql = UniqueModel.sqlall() self.assertEqual(sql, [ ('CREATE TABLE "uniquemodel" ("id" INTEGER NOT NULL PRIMARY KEY, ' '"name" VARCHAR(255) NOT NULL)'), 'CREATE UNIQUE INDEX "uniquemodel_name" ON "uniquemodel" ("name")', ]) sql = MultiIndexModel.sqlall() self.assertEqual(sql, [ ('CREATE TABLE "multiindexmodel" ("id" INTEGER NOT NULL PRIMARY ' 'KEY, "f1" VARCHAR(255) NOT NULL, "f2" VARCHAR(255) NOT NULL, ' '"f3" VARCHAR(255) NOT NULL)'), ('CREATE UNIQUE INDEX "multiindexmodel_f1_f2" ON "multiindexmodel"' ' ("f1", "f2")'), ('CREATE INDEX "multiindexmodel_f2_f3" ON "multiindexmodel" ' '("f2", "f3")'), ]) sql = SeqModelA.sqlall() self.assertEqual(sql, [ ('CREATE TABLE "seqmodela" ("id" INTEGER NOT NULL PRIMARY KEY ' 'DEFAULT NEXTVAL(\'just_testing_seq\'), "num" INTEGER NOT NULL)'), ]) class TestLongIndexName(PeeweeTestCase): def test_long_index(self): class LongIndexModel(TestModel): a123456789012345678901234567890 = CharField() b123456789012345678901234567890 = CharField() c123456789012345678901234567890 = CharField() fields = LongIndexModel._meta.sorted_fields[1:] self.assertEqual(len(fields), 3) sql, params = compiler.create_index(LongIndexModel, fields, False) self.assertEqual(sql, ( 'CREATE INDEX "longindexmodel_85c2f7db" ' 'ON "longindexmodel" (' '"a123456789012345678901234567890", ' '"b123456789012345678901234567890", ' '"c123456789012345678901234567890")' )) class TestDroppingIndex(ModelTestCase): def test_drop_index(self): db = database_initializer.get_in_memory_database() class IndexedModel(Model): idx = 
CharField(index=True)
            uniq = CharField(unique=True)
            f1 = IntegerField()
            f2 = IntegerField()

            class Meta:
                database = db
                indexes = (
                    (('f1', 'f2'), True),
                    (('idx', 'uniq'), False),
                )

        IndexedModel.create_table()
        indexes = db.get_indexes(IndexedModel._meta.db_table)
        self.assertEqual(sorted(idx.name for idx in indexes), [
            'indexedmodel_f1_f2',
            'indexedmodel_idx',
            'indexedmodel_idx_uniq',
            'indexedmodel_uniq'])

        with self.log_queries() as query_log:
            IndexedModel._drop_indexes()

        self.assertEqual(sorted(query_log.queries), sorted([
            ('DROP INDEX "%s"' % idx.name, []) for idx in indexes]))
        self.assertEqual(db.get_indexes(IndexedModel._meta.db_table), [])


class TestConnectionState(PeeweeTestCase):
    def test_connection_state(self):
        conn = test_db.get_conn()
        self.assertFalse(test_db.is_closed())
        test_db.close()
        self.assertTrue(test_db.is_closed())
        conn = test_db.get_conn()
        self.assertFalse(test_db.is_closed())

    def test_sql_error(self):
        bad_sql = 'select asdf from -1;'
        self.assertRaises(Exception, query_db.execute_sql, bad_sql)
        self.assertEqual(query_db.last_error, (bad_sql, None))


@skip_unless(lambda: test_db.drop_cascade)
class TestDropTableCascade(ModelTestCase):
    requires = [User, Blog]

    def test_drop_cascade(self):
        u1 = User.create(username='u1')
        b1 = Blog.create(user=u1, title='b1')
        User.drop_table(cascade=True)
        self.assertFalse(User.table_exists())

        # The constraint is dropped, so we can create a blog for a
        # non-existent user.
        Blog.create(user=-1, title='b2')


@skip_unless(lambda: test_db.sequences)
class TestDatabaseSequences(ModelTestCase):
    requires = [SeqModelA, SeqModelB]

    def test_sequence_shared(self):
        a1 = SeqModelA.create(num=1)
        a2 = SeqModelA.create(num=2)
        b1 = SeqModelB.create(other_num=101)
        b2 = SeqModelB.create(other_num=102)
        a3 = SeqModelA.create(num=3)

        self.assertEqual(a1.id, a2.id - 1)
        self.assertEqual(a2.id, b1.id - 1)
        self.assertEqual(b1.id, b2.id - 1)
        self.assertEqual(b2.id, a3.id - 1)


@skip_unless(lambda: issubclass(database_class, PostgresqlDatabase))
class TestUnicodeConversion(ModelTestCase):
    requires = [User]

    def setUp(self):
        super(TestUnicodeConversion, self).setUp()

        # Create a user object with UTF-8 encoded username.
        ustr = ulit('Ísland')
        self.user = User.create(username=ustr)

    def tearDown(self):
        super(TestUnicodeConversion, self).tearDown()
        test_db.register_unicode = True
        test_db.close()

    def reset_encoding(self, encoding):
        test_db.close()
        conn = test_db.get_conn()
        conn.set_client_encoding(encoding)

    def test_unicode_conversion(self):
        # Per psycopg2's documentation, in Python 2, strings are returned as
        # 8-bit str objects encoded in the client encoding. In Python 3,
        # the strings are automatically decoded in the connection encoding.

        # Turn off unicode conversion on a per-connection basis.
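        # With conversion disabled, Python 2 receives raw LATIN1-encoded
        # bytes, which will not compare equal to the original unicode string.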
test_db.register_unicode = False self.reset_encoding('LATIN1') u = User.get(User.id == self.user.id) if sys.version_info[0] < 3: self.assertFalse(u.username == self.user.username) else: self.assertTrue(u.username == self.user.username) test_db.register_unicode = True self.reset_encoding('LATIN1') u = User.get(User.id == self.user.id) self.assertEqual(u.username, self.user.username) @skip_unless(lambda: issubclass(database_class, PostgresqlDatabase)) class TestPostgresqlSchema(ModelTestCase): requires = [PGSchema] def setUp(self): test_db.execute_sql('CREATE SCHEMA huey;') super(TestPostgresqlSchema,self).setUp() def tearDown(self): super(TestPostgresqlSchema,self).tearDown() test_db.execute_sql('DROP SCHEMA huey;') def test_pg_schema(self): pgs = PGSchema.create(data='huey') pgs_db = PGSchema.get(PGSchema.data == 'huey') self.assertEqual(pgs.id, pgs_db.id) @skip_unless(lambda: isinstance(test_db, SqliteDatabase)) class TestOuterLoopInnerCommit(ModelTestCase): requires = [User, Blog] def tearDown(self): test_db.set_autocommit(True) super(TestOuterLoopInnerCommit, self).tearDown() def test_outer_loop_inner_commit(self): # By default we are in autocommit mode (isolation_level=None). self.assertEqual(test_db.get_conn().isolation_level, None) for username in ['u1', 'u2', 'u3']: User.create(username=username) for user in User.select(): Blog.create(user=user, title='b-%s' % user.username) # These statements are auto-committed. new_db = self.new_connection() count = new_db.execute_sql('select count(*) from blog;').fetchone() self.assertEqual(count[0], 3) self.assertEqual(Blog.select().count(), 3) blog_titles = [b.title for b in Blog.select().order_by(Blog.title)] self.assertEqual(blog_titles, ['b-u1', 'b-u2', 'b-u3']) self.assertEqual(Blog.delete().execute(), 3) # If we disable autocommit, we need to explicitly call begin(). test_db.set_autocommit(False) test_db.begin() for user in User.select(): Blog.create(user=user, title='b-%s' % user.username) # These statements have not been committed. new_db = self.new_connection() count = new_db.execute_sql('select count(*) from blog;').fetchone() self.assertEqual(count[0], 0) self.assertEqual(Blog.select().count(), 3) blog_titles = [b.title for b in Blog.select().order_by(Blog.title)] self.assertEqual(blog_titles, ['b-u1', 'b-u2', 'b-u3']) test_db.commit() count = new_db.execute_sql('select count(*) from blog;').fetchone() self.assertEqual(count[0], 3) class TestConnectionInitialization(PeeweeTestCase): def test_initialize_connection(self): state = {'initialized': 0} class TestDatabase(SqliteDatabase): def initialize_connection(self, conn): state['initialized'] += 1 # Ensure we can execute a query at this point. self.execute_sql('pragma stats;').fetchone() db = TestDatabase(':memory:') self.assertFalse(state['initialized']) conn = db.get_conn() self.assertEqual(state['initialized'], 1) # Since a conn is already open, this will return the existing conn. 
conn = db.get_conn() self.assertEqual(state['initialized'], 1) db.close() db.connect() self.assertEqual(state['initialized'], 2) peewee-2.10.2/playhouse/tests/test_dataset.py000066400000000000000000000335141316645060400212720ustar00rootroot00000000000000import csv import datetime import json import operator import os try: from StringIO import StringIO except ImportError: from io import StringIO from peewee import * from playhouse.dataset import DataSet from playhouse.dataset import Table from playhouse.tests.base import database_initializer from playhouse.tests.base import PeeweeTestCase db = database_initializer.get_database('sqlite') class BaseModel(Model): class Meta: database = db class User(BaseModel): username = CharField(primary_key=True) class Note(BaseModel): user = ForeignKeyField(User) content = TextField() timestamp = DateTimeField() class Category(BaseModel): name = CharField() parent = ForeignKeyField('self', null=True) class TestDataSet(PeeweeTestCase): names = ['charlie', 'huey', 'peewee', 'mickey', 'zaizee'] def setUp(self): if os.path.exists(db.database): os.unlink(db.database) db.connect() db.create_tables([User, Note, Category]) self.dataset = DataSet('sqlite:///%s' % db.database) def tearDown(self): self.dataset.close() db.close() def create_users(self, n=2): user = self.dataset['user'] for i in range(min(n, len(self.names))): user.insert(username=self.names[i]) def test_column_preservation(self): ds = DataSet('sqlite:///:memory:') books = ds['books'] books.insert(book_id='BOOK1') books.insert(bookId='BOOK2') data = [(row['book_id'] or '', row['bookId'] or '') for row in books] self.assertEqual(sorted(data), [ ('', 'BOOK2'), ('BOOK1', '')]) def test_case_insensitive(self): db.execute_sql('CREATE TABLE "SomeTable" (data TEXT);') tables = sorted(self.dataset.tables) self.assertEqual(tables, ['SomeTable', 'category', 'note', 'user']) table = self.dataset['HueyMickey'] self.assertEqual(table.model_class._meta.db_table, 'HueyMickey') tables = sorted(self.dataset.tables) self.assertEqual( tables, ['HueyMickey', 'SomeTable', 'category', 'note', 'user']) # Subsequent lookup succeeds. 
self.dataset['HueyMickey'] def test_introspect(self): tables = sorted(self.dataset.tables) self.assertEqual(tables, ['category', 'note', 'user']) user = self.dataset['user'] columns = sorted(user.columns) self.assertEqual(columns, ['username']) note = self.dataset['note'] columns = sorted(note.columns) self.assertEqual(columns, ['content', 'id', 'timestamp', 'user_id']) category = self.dataset['category'] columns = sorted(category.columns) self.assertEqual(columns, ['id', 'name', 'parent_id']) def test_update_cache(self): self.assertEqual(sorted(self.dataset.tables), ['category', 'note', 'user']) db.execute_sql('create table "foo" (id INTEGER, data TEXT)') Foo = self.dataset['foo'] self.assertEqual(sorted(Foo.columns), ['data', 'id']) self.assertTrue('foo' in self.dataset._models) def assertQuery(self, query, expected, sort_key='id'): key = operator.itemgetter(sort_key) self.assertEqual( sorted(list(query), key=key), sorted(expected, key=key)) def test_insert(self): self.create_users() user = self.dataset['user'] expected = [ {'username': 'charlie'}, {'username': 'huey'}] self.assertQuery(user.all(), expected, 'username') user.insert(username='mickey', age=5) expected = [ {'username': 'charlie', 'age': None}, {'username': 'huey', 'age': None}, {'username': 'mickey', 'age': 5}] self.assertQuery(user.all(), expected, 'username') query = user.find(username='charlie') expected = [{'username': 'charlie', 'age': None}] self.assertQuery(query, expected, 'username') self.assertEqual( user.find_one(username='mickey'), {'username': 'mickey', 'age': 5}) self.assertTrue(user.find_one(username='xx') is None) def test_update(self): self.create_users() user = self.dataset['user'] self.assertEqual(user.update(favorite_color='green'), 2) expected = [ {'username': 'charlie', 'favorite_color': 'green'}, {'username': 'huey', 'favorite_color': 'green'}] self.assertQuery(user.all(), expected, 'username') res = user.update( favorite_color='blue', username='huey', columns=['username']) self.assertEqual(res, 1) expected[1]['favorite_color'] = 'blue' self.assertQuery(user.all(), expected, 'username') def test_delete(self): self.create_users() user = self.dataset['user'] self.assertEqual(user.delete(username='huey'), 1) self.assertEqual(list(user.all()), [{'username': 'charlie'}]) def test_find(self): self.create_users(5) user = self.dataset['user'] def assertUsernames(query, expected): self.assertEqual( sorted(row['username'] for row in query), sorted(expected)) assertUsernames(user.all(), self.names) assertUsernames(user.find(), self.names) assertUsernames(user.find(username='charlie'), ['charlie']) assertUsernames(user.find(username='missing'), []) user.update(favorite_color='green') for username in ['zaizee', 'huey']: user.update( favorite_color='blue', username=username, columns=['username']) assertUsernames( user.find(favorite_color='green'), ['charlie', 'mickey', 'peewee']) assertUsernames( user.find(favorite_color='blue'), ['zaizee', 'huey']) assertUsernames( user.find(favorite_color='green', username='peewee'), ['peewee']) self.assertEqual( user.find_one(username='charlie'), {'username': 'charlie', 'favorite_color': 'green'}) def test_magic_methods(self): self.create_users(5) user = self.dataset['user'] # __len__() self.assertEqual(len(user), 5) # __iter__() users = sorted([u for u in user], key=operator.itemgetter('username')) self.assertEqual(users[0], {'username': 'charlie'}) self.assertEqual(users[-1], {'username': 'zaizee'}) # __contains__() self.assertTrue('user' in self.dataset) 
self.assertFalse('missing' in self.dataset) def test_foreign_keys(self): user = self.dataset['user'] user.insert(username='charlie') note = self.dataset['note'] for i in range(1, 4): note.insert( content='note %s' % i, timestamp=datetime.date(2014, 1, i), user_id='charlie') notes = sorted(note.all(), key=operator.itemgetter('id')) self.assertEqual(notes[0], { 'content': 'note 1', 'id': 1, 'timestamp': datetime.datetime(2014, 1, 1), 'user_id': 'charlie'}) self.assertEqual(notes[-1], { 'content': 'note 3', 'id': 3, 'timestamp': datetime.datetime(2014, 1, 3), 'user_id': 'charlie'}) user.insert(username='mickey') note.update(user_id='mickey', id=3, columns=['id']) self.assertEqual(note.find(user_id='charlie').count(), 2) self.assertEqual(note.find(user_id='mickey').count(), 1) category = self.dataset['category'] category.insert(name='c1') c1 = category.find_one(name='c1') self.assertEqual(c1, {'id': 1, 'name': 'c1', 'parent_id': None}) category.insert(name='c2', parent_id=1) c2 = category.find_one(parent_id=1) self.assertEqual(c2, {'id': 2, 'name': 'c2', 'parent_id': 1}) self.assertEqual(category.delete(parent_id=1), 1) self.assertEqual(category.all(), [c1]) def test_transactions(self): user = self.dataset['user'] with self.dataset.transaction() as txn: user.insert(username='u1') with self.dataset.transaction() as txn2: user.insert(username='u2') txn2.rollback() with self.dataset.transaction() as txn3: user.insert(username='u3') with self.dataset.transaction() as txn4: user.insert(username='u4') txn3.rollback() with self.dataset.transaction() as txn5: user.insert(username='u5') with self.dataset.transaction() as txn6: with self.dataset.transaction() as txn7: user.insert(username='u6') txn7.rollback() user.insert(username='u7') user.insert(username='u8') self.assertQuery(user.all(), [ {'username': 'u1'}, {'username': 'u5'}, {'username': 'u7'}, {'username': 'u8'}, ], 'username') def test_export(self): self.create_users() user = self.dataset['user'] buf = StringIO() self.dataset.freeze(user.all(), 'json', file_obj=buf) self.assertEqual(buf.getvalue(), ( '[{"username": "charlie"}, {"username": "huey"}]')) buf = StringIO() self.dataset.freeze(user.all(), 'csv', file_obj=buf) self.assertEqual(buf.getvalue().splitlines(), [ 'username', 'charlie', 'huey']) def test_table_column_creation(self): table = self.dataset['people'] table.insert(name='charlie') self.assertEqual(table.columns, ['id', 'name']) self.assertEqual(list(table.all()), [{'id': 1, 'name': 'charlie'}]) def test_import_json(self): table = self.dataset['people'] table.insert(name='charlie') data = [ {'name': 'zaizee', 'foo': 1}, {'name': 'huey'}, {'name': 'mickey', 'foo': 2}, {'bar': None}] buf = StringIO() json.dump(data, buf) buf.seek(0) # All rows but the last will be inserted. count = self.dataset.thaw('people', 'json', file_obj=buf, strict=True) self.assertEqual(count, 3) names = [row['name'] for row in self.dataset['people'].all()] self.assertEqual( set(names), set(['charlie', 'huey', 'mickey', 'zaizee'])) # The columns have not changed. self.assertEqual(table.columns, ['id', 'name']) # No rows are inserted because no column overlap between `user` and the # provided data. buf.seek(0) count = self.dataset.thaw('user', 'json', file_obj=buf, strict=True) self.assertEqual(count, 0) # Create a new table and load all data into it. table = self.dataset['more_people'] # All rows and columns will be inserted. 
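        # Without strict=True, thaw() adds any previously-unseen columns to
        # the target table rather than discarding them.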
buf.seek(0) count = self.dataset.thaw('more_people', 'json', file_obj=buf) self.assertEqual(count, 4) self.assertEqual( set(table.columns), set(['id', 'name', 'bar', 'foo'])) self.assertEqual(sorted(table.all(), key=lambda row: row['id']), [ {'id': 1, 'name': 'zaizee', 'foo': 1, 'bar': None}, {'id': 2, 'name': 'huey', 'foo': None, 'bar': None}, {'id': 3, 'name': 'mickey', 'foo': 2, 'bar': None}, {'id': 4, 'name': None, 'foo': None, 'bar': None}, ]) def test_import_csv(self): table = self.dataset['people'] table.insert(name='charlie') data = [ ('zaizee', 1, None), ('huey', 2, 'foo'), ('mickey', 3, 'baze')] buf = StringIO() writer = csv.writer(buf) writer.writerow(['name', 'foo', 'bar']) writer.writerows(data) buf.seek(0) count = self.dataset.thaw('people', 'csv', file_obj=buf, strict=True) self.assertEqual(count, 3) names = [row['name'] for row in self.dataset['people'].all()] self.assertEqual( set(names), set(['charlie', 'huey', 'mickey', 'zaizee'])) # The columns have not changed. self.assertEqual(table.columns, ['id', 'name']) # No rows are inserted because no column overlap between `user` and the # provided data. buf.seek(0) count = self.dataset.thaw('user', 'csv', file_obj=buf, strict=True) self.assertEqual(count, 0) # Create a new table and load all data into it. table = self.dataset['more_people'] # All rows and columns will be inserted. buf.seek(0) count = self.dataset.thaw('more_people', 'csv', file_obj=buf) self.assertEqual(count, 3) self.assertEqual( set(table.columns), set(['id', 'name', 'bar', 'foo'])) self.assertEqual(sorted(table.all(), key=lambda row: row['id']), [ {'id': 1, 'name': 'zaizee', 'foo': '1', 'bar': ''}, {'id': 2, 'name': 'huey', 'foo': '2', 'bar': 'foo'}, {'id': 3, 'name': 'mickey', 'foo': '3', 'bar': 'baze'}, ]) def test_table_thaw(self): table = self.dataset['people'] data = json.dumps([{'name': 'charlie'}, {'name': 'huey', 'color': 'white'}]) self.assertEqual(table.thaw(file_obj=StringIO(data), format='json'), 2) self.assertEqual(list(table.all()), [ {'id': 1, 'name': 'charlie', 'color': None}, {'id': 2, 'name': 'huey', 'color': 'white'}, ]) def test_creating_tables(self): new_table = self.dataset['new_table'] new_table.insert(data='foo') ref2 = self.dataset['new_table'] self.assertEqual(list(ref2.all()), [{'id': 1, 'data': 'foo'}]) peewee-2.10.2/playhouse/tests/test_db_url.py000066400000000000000000000041251316645060400211100ustar00rootroot00000000000000from peewee import * from playhouse.db_url import connect, parse from playhouse.sqlite_ext import SqliteExtDatabase from playhouse.tests.base import PeeweeTestCase class TestDBURL(PeeweeTestCase): def test_db_url_parse(self): cfg = parse('mysql://usr:pwd@hst:123/db') self.assertEqual(cfg['user'], 'usr') self.assertEqual(cfg['passwd'], 'pwd') self.assertEqual(cfg['host'], 'hst') self.assertEqual(cfg['database'], 'db') self.assertEqual(cfg['port'], 123) cfg = parse('postgresql://usr:pwd@hst/db') self.assertEqual(cfg['password'], 'pwd') cfg = parse('mysql+pool://usr:pwd@hst:123/db' '?max_connections=42&stale_timeout=8001.2&zai=&baz=3.4.5' '&boolz=false') self.assertEqual(cfg['user'], 'usr') self.assertEqual(cfg['password'], 'pwd') self.assertEqual(cfg['host'], 'hst') self.assertEqual(cfg['database'], 'db') self.assertEqual(cfg['port'], 123) self.assertEqual(cfg['max_connections'], 42) self.assertEqual(cfg['stale_timeout'], 8001.2) self.assertEqual(cfg['zai'], '') self.assertEqual(cfg['baz'], '3.4.5') self.assertEqual(cfg['boolz'], False) def test_db_url(self): db = connect('sqlite:///:memory:') 
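        # The plain "sqlite" scheme maps to the stock SqliteDatabase, while
        # "sqliteext" (tested below) maps to playhouse's SqliteExtDatabase.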
self.assertTrue(isinstance(db, SqliteDatabase)) self.assertEqual(db.database, ':memory:') db = connect('sqlite:///:memory:', journal_mode='MEMORY') self.assertTrue(('journal_mode', 'MEMORY') in db._pragmas) db = connect('sqliteext:///foo/bar.db') self.assertTrue(isinstance(db, SqliteExtDatabase)) self.assertEqual(db.database, 'foo/bar.db') db = connect('sqlite:////this/is/absolute.path') self.assertEqual(db.database, '/this/is/absolute.path') db = connect('sqlite://') self.assertTrue(isinstance(db, SqliteDatabase)) self.assertEqual(db.database, ':memory:') def test_bad_scheme(self): def _test_scheme(): connect('missing:///') self.assertRaises(RuntimeError, _test_scheme) peewee-2.10.2/playhouse/tests/test_djpeewee.py000066400000000000000000000204651316645060400214360ustar00rootroot00000000000000from datetime import timedelta from peewee import * from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if try: import django except ImportError: django = None if django is not None: from django.conf import settings if not settings.configured: settings.configure( DATABASES={ 'default': { 'engine': 'django.db.backends.sqlite3', 'name': ':memory:'}}, INSTALLED_APPS=['playhouse.tests.test_djpeewee'], SITE_ID=1, ) try: from django import setup except ImportError: pass else: setup() from django.db import models from playhouse.djpeewee import translate # Django model definitions. class Simple(models.Model): char_field = models.CharField(max_length=1) int_field = models.IntegerField() class User(models.Model): username = models.CharField(max_length=255) class Meta: db_table = 'user_tbl' class Post(models.Model): author = models.ForeignKey(User, related_name='posts') content = models.TextField() class Comment(models.Model): post = models.ForeignKey(Post, related_name='comments') commenter = models.ForeignKey(User, related_name='comments') comment = models.TextField() class Tag(models.Model): tag = models.CharField() posts = models.ManyToManyField(Post) class Event(models.Model): start_time = models.DateTimeField() end_time = models.DateTimeField() title = models.CharField() class Meta: db_table = 'events_tbl' class Category(models.Model): parent = models.ForeignKey('self', null=True) class A(models.Model): a_field = models.IntegerField() b = models.ForeignKey('B', null=True, related_name='as') class B(models.Model): a = models.ForeignKey(A, related_name='bs') class C(models.Model): b = models.ForeignKey(B, related_name='cs') class Parent(models.Model): pass class Child(Parent): pass @skip_if(lambda: django is None) class TestDjPeewee(PeeweeTestCase): def assertFields(self, model, expected): self.assertEqual(len(model._meta.fields), len(expected)) zipped = zip(model._meta.sorted_fields, expected) for (model_field, (name, field_type)) in zipped: self.assertEqual(model_field.name, name) self.assertTrue(type(model_field) is field_type) def test_simple(self): P = translate(Simple) self.assertEqual(list(P.keys()), ['Simple']) self.assertFields(P['Simple'], [ ('id', PrimaryKeyField), ('char_field', CharField), ('int_field', IntegerField), ]) def test_graph(self): P = translate(User, Tag, Comment) self.assertEqual(sorted(P.keys()), [ 'Comment', 'Post', 'Tag', 'Tag_posts', 'User']) # Test the models that were found. 
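        # translate() follows foreign keys and many-to-many relations, which
        # is why the implicit through table (Tag_posts) appears above.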
user = P['User'] self.assertFields(user, [ ('id', PrimaryKeyField), ('username', CharField)]) self.assertEqual(user.posts.rel_model, P['Post']) self.assertEqual(user.comments.rel_model, P['Comment']) post = P['Post'] self.assertFields(post, [ ('id', PrimaryKeyField), ('author', ForeignKeyField), ('content', TextField)]) self.assertEqual(post.comments.rel_model, P['Comment']) comment = P['Comment'] self.assertFields(comment, [ ('id', PrimaryKeyField), ('post', ForeignKeyField), ('commenter', ForeignKeyField), ('comment', TextField)]) tag = P['Tag'] self.assertFields(tag, [ ('id', PrimaryKeyField), ('tag', CharField)]) thru = P['Tag_posts'] self.assertFields(thru, [ ('id', PrimaryKeyField), ('tag', ForeignKeyField), ('post', ForeignKeyField)]) def test_fk_query(self): trans = translate(User, Post, Comment, Tag) U = trans['User'] P = trans['Post'] C = trans['Comment'] query = (U.select() .join(P) .join(C) .where(C.comment == 'test')) sql, params = query.sql() self.assertEqual( sql, 'SELECT "t1"."id", "t1"."username" FROM "user_tbl" AS t1 ' 'INNER JOIN "test_djpeewee_post" AS t2 ' 'ON ("t1"."id" = "t2"."author_id") ' 'INNER JOIN "test_djpeewee_comment" AS t3 ' 'ON ("t2"."id" = "t3"."post_id") WHERE ("t3"."comment" = %s)') self.assertEqual(params, ['test']) def test_m2m_query(self): trans = translate(Post, Tag) P = trans['Post'] U = trans['User'] T = trans['Tag'] TP = trans['Tag_posts'] query = (P.select() .join(TP) .join(T) .where(T.tag == 'test')) sql, params = query.sql() self.assertEqual( sql, 'SELECT "t1"."id", "t1"."author_id", "t1"."content" ' 'FROM "test_djpeewee_post" AS t1 ' 'INNER JOIN "test_djpeewee_tag_posts" AS t2 ' 'ON ("t1"."id" = "t2"."post_id") ' 'INNER JOIN "test_djpeewee_tag" AS t3 ' 'ON ("t2"."tag_id" = "t3"."id") WHERE ("t3"."tag" = %s)') self.assertEqual(params, ['test']) def test_docs_example(self): # The docs don't lie. 
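        # Mirrors the djpeewee documentation example: find events lasting
        # longer than one hour by comparing the two datetime columns.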
PEvent = translate(Event)['Event'] hour = timedelta(hours=1) query = (PEvent .select() .where( (PEvent.end_time - PEvent.start_time) > hour)) sql, params = query.sql() self.assertEqual( sql, 'SELECT "t1"."id", "t1"."start_time", "t1"."end_time", "t1"."title" ' 'FROM "events_tbl" AS t1 ' 'WHERE (("t1"."end_time" - "t1"."start_time") > %s)') self.assertEqual(params, [hour]) def test_self_referential(self): trans = translate(Category) self.assertFields(trans['Category'], [ ('id', PrimaryKeyField), ('parent', IntegerField)]) def test_cycle(self): trans = translate(A) self.assertFields(trans['A'], [ ('id', PrimaryKeyField), ('a_field', IntegerField), ('b', ForeignKeyField)]) self.assertFields(trans['B'], [ ('id', PrimaryKeyField), ('a', IntegerField)]) trans = translate(B) self.assertFields(trans['A'], [ ('id', PrimaryKeyField), ('a_field', IntegerField), ('b', IntegerField)]) self.assertFields(trans['B'], [ ('id', PrimaryKeyField), ('a', ForeignKeyField)]) def test_max_depth(self): trans = translate(C, max_depth=1) self.assertFields(trans['C'], [ ('id', PrimaryKeyField), ('b', ForeignKeyField)]) self.assertFields(trans['B'], [ ('id', PrimaryKeyField), ('a', IntegerField)]) def test_exclude(self): trans = translate(Comment, exclude=(User,)) self.assertFields(trans['Post'], [ ('id', PrimaryKeyField), ('author', IntegerField), ('content', TextField)]) self.assertEqual( trans['Post'].comments.rel_model, trans['Comment']) self.assertFields(trans['Comment'], [ ('id', PrimaryKeyField), ('post', ForeignKeyField), ('commenter', IntegerField), ('comment', TextField)]) def test_backrefs(self): trans = translate(User, backrefs=True) self.assertEqual(sorted(trans.keys()), [ 'Comment', 'Post', 'User']) def test_inheritance(self): trans = translate(Parent) self.assertEqual(list(trans.keys()), ['Parent']) self.assertFields(trans['Parent'], [ ('id', PrimaryKeyField),]) trans = translate(Child) self.assertEqual(sorted(trans.keys()), ['Child', 'Parent']) self.assertFields(trans['Child'], [ ('id', PrimaryKeyField), ('parent_ptr', ForeignKeyField)]) peewee-2.10.2/playhouse/tests/test_extra_fields.py000066400000000000000000000075261316645060400223220ustar00rootroot00000000000000import random import sys from peewee import * from playhouse.fields import * try: from playhouse.fields import PasswordField except ImportError: PasswordField = None from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import skip_if from playhouse.tests.base import ulit from playhouse.tests.base import TestModel PY2 = sys.version_info[0] == 2 db = database_initializer.get_in_memory_database() class BaseModel(Model): class Meta: database = db class CompressedModel(BaseModel): data = CompressedField() class PickledModel(BaseModel): data = PickledField() def convert_to_str(binary_data): if PY2: return str(binary_data) else: if isinstance(binary_data, str): return bytes(binary_data, 'utf-8') return binary_data class TestCompressedField(ModelTestCase): requires = [CompressedModel] def get_raw(self, cm): curs = db.execute_sql('SELECT data FROM %s WHERE id = %s;' % (CompressedModel._meta.db_table, cm.id)) return convert_to_str(curs.fetchone()[0]) def test_compressed_field(self): a_kb = 'a' * 1024 b_kb = 'b' * 1024 c_kb = 'c' * 1024 d_kb = 'd' * 1024 four_kb = ''.join((a_kb, b_kb, c_kb, d_kb)) data = four_kb * 16 # 64kb of data. 
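        # Highly repetitive input compresses extremely well; the assertion
        # below expects the stored blob to be under 1% of the original size.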
cm = CompressedModel.create(data=data) cm_db = CompressedModel.get(CompressedModel.id == cm.id) self.assertEqual(cm_db.data, data) db_data = self.get_raw(cm) compressed = len(db_data) / float(len(data)) self.assertTrue(compressed < .01) def test_compress_random_data(self): data = ''.join( chr(random.randint(ord('A'), ord('z'))) for i in range(1024)) cm = CompressedModel.create(data=data) cm_db = CompressedModel.get(CompressedModel.id == cm.id) self.assertEqual(cm_db.data, data) @skip_if(lambda: PasswordField is None) class TestPasswordFields(ModelTestCase): def setUp(self): class PasswordModel(TestModel): username = TextField() password = PasswordField(iterations=4) self.PasswordModel = PasswordModel self.requires = [PasswordModel] super(TestPasswordFields, self).setUp() def test_valid_password(self): test_pwd = 'Hello!:)' tm = self.PasswordModel.create(username='User', password=test_pwd) tm_db = self.PasswordModel.get(self.PasswordModel.id == tm.id) self.assertTrue(tm_db.password.check_password(test_pwd),'Correct password did not match') def test_invalid_password(self): test_pwd = 'Hello!:)' tm = self.PasswordModel.create(username='User', password=test_pwd) tm_db = self.PasswordModel.get(self.PasswordModel.id == tm.id) self.assertFalse(tm_db.password.check_password('a'+test_pwd),'Incorrect password did match') def test_unicode(self): test_pwd = ulit('H\u00c3l\u00c5o!:)') tm = self.PasswordModel.create(username='User', password=test_pwd) tm_db = self.PasswordModel.get(self.PasswordModel.id == tm.id) self.assertTrue(tm_db.password.check_password(test_pwd),'Correct unicode password did not match') class TestPickledField(ModelTestCase): requires = [PickledModel] def test_pickled_field(self): test_1 = {'foo': [0, 1, '2']} test_2 = ['bar', ('nuggie', 'baze')] p1 = PickledModel.create(data=test_1) p2 = PickledModel.create(data=test_2) p1_db = PickledModel.get(PickledModel.id == p1.id) self.assertEqual(p1_db.data, test_1) p2_db = PickledModel.get(PickledModel.id == p2.id) self.assertEqual(p2_db.data, test_2) p1_db_g = PickledModel.get(PickledModel.data == test_1) self.assertEqual(p1_db_g.id, p1_db.id) peewee-2.10.2/playhouse/tests/test_fields.py000066400000000000000000001036661316645060400211210ustar00rootroot00000000000000import calendar import datetime import decimal import sys import time import uuid from peewee import MySQLDatabase from peewee import Param from peewee import Proxy from peewee import SqliteDatabase from peewee import binary_construct from peewee import sqlite3 from playhouse.tests.base import binary_construct from playhouse.tests.base import binary_types from playhouse.tests.base import database_class from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_test_if from playhouse.tests.base import skip_test_unless from playhouse.tests.base import skip_if from playhouse.tests.base import skip_unless from playhouse.tests.base import test_db from playhouse.tests.models import * class TestFieldTypes(ModelTestCase): requires = [NullModel, BlobModel] _dt = datetime.datetime _d = datetime.date _t = datetime.time field_data = { 'char_field': ('c1', 'c2', 'c3'), 'date_field': ( _d(2010, 1, 1), _d(2010, 1, 2), _d(2010, 1, 3)), 'datetime_field': ( _dt(2010, 1, 1, 0, 0), _dt(2010, 1, 2, 0, 0), _dt(2010, 1, 3, 0, 0)), 'decimal_field1': ('1.0', '2.0', '3.0'), 'fixed_char_field': ('fc1', 'fc2', 'fc3'), 'float_field': (1.0, 2.0, 3.0), 'int_field': (1, 2, 3), 'text_field': ('t1', 't2', 't3'), 'time_field': ( _t(1, 0), 
_t(2, 0), _t(3, 0)), 'ts_field': ( _dt(2010, 1, 1, 0, 0), _dt(2010, 1, 2, 0, 0), _dt(2010, 1, 3, 0, 0)), 'ts_field2': ( _dt(2010, 1, 1, 13, 37, 1, 123456), _dt(2010, 1, 2, 13, 37, 1, 123456), _dt(2010, 1, 3, 13, 37, 1, 123456)), } value_table = list(zip(*[(k,) + v for k, v in field_data.items()])) def setUp(self): super(TestFieldTypes, self).setUp() header, values = self.value_table[0], self.value_table[1:] for row in values: nm = NullModel() for i, col in enumerate(row): setattr(nm, header[i], col) nm.save() def assertNM(self, q, exp): query = NullModel.select().where(q).order_by(NullModel.id) self.assertEqual([nm.char_field for nm in query], exp) def test_null_query(self): NullModel.delete().execute() nm1 = NullModel.create(char_field='nm1') nm2 = NullModel.create(char_field='nm2', int_field=1) nm3 = NullModel.create(char_field='nm3', int_field=2, float_field=3.0) q = ~(NullModel.int_field >> None) self.assertNM(q, ['nm2', 'nm3']) def test_field_types(self): for field, values in self.field_data.items(): field_obj = getattr(NullModel, field) self.assertNM(field_obj < values[2], ['c1', 'c2']) self.assertNM(field_obj <= values[1], ['c1', 'c2']) self.assertNM(field_obj > values[0], ['c2', 'c3']) self.assertNM(field_obj >= values[1], ['c2', 'c3']) self.assertNM(field_obj == values[1], ['c2']) self.assertNM(field_obj != values[1], ['c1', 'c3']) self.assertNM(field_obj << [values[0], values[2]], ['c1', 'c3']) self.assertNM(field_obj << [values[1]], ['c2']) def test_charfield(self): NM = NullModel nm = NM.create(char_field=4) nm_db = NM.get(NM.id==nm.id) self.assertEqual(nm_db.char_field, '4') nm_alpha = NM.create(char_field='Alpha') nm_bravo = NM.create(char_field='Bravo') if isinstance(test_db, SqliteDatabase): # Sqlite's sql-dialect uses "*" as case-sensitive lookup wildcard, # and pysqlcipher is simply a wrapper around sqlite's engine. 
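# (In other words: peewee's "%" operator is the case-sensitive match, which
# SqliteDatabase implements via GLOB and its "*" wildcard, while the "**"
# operator is the case-insensitive ILIKE-style match that keeps LIKE's "%"
# wildcard -- exactly what the two queries below demonstrate.)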
like_wildcard = '*' else: like_wildcard = '%' like_str = '%sA%s' % (like_wildcard, like_wildcard) ilike_str = '%A%' case_sens = NM.select(NM.char_field).where(NM.char_field % like_str) self.assertEqual([x[0] for x in case_sens.tuples()], ['Alpha']) case_insens = NM.select(NM.char_field).where(NM.char_field ** ilike_str) self.assertEqual([x[0] for x in case_insens.tuples()], ['Alpha', 'Bravo']) def test_fixed_charfield(self): NM = NullModel nm = NM.create(fixed_char_field=4) nm_db = NM.get(NM.id == nm.id) self.assertEqual(nm_db.fixed_char_field, '4') fc_vals = [obj.fixed_char_field for obj in NM.select().order_by(NM.id)] self.assertEqual(fc_vals, ['fc1', 'fc2', 'fc3', '4']) def test_intfield(self): nm = NullModel.create(int_field='4') nm_db = NullModel.get(NullModel.id == nm.id) self.assertEqual(nm_db.int_field, 4) def test_floatfield(self): nm = NullModel.create(float_field='4.2') nm_db = NullModel.get(NullModel.id == nm.id) self.assertEqual(nm_db.float_field, 4.2) def test_decimalfield(self): D = decimal.Decimal nm = NullModel() nm.decimal_field1 = D("3.14159265358979323") nm.decimal_field2 = D("100.33") nm.save() nm_from_db = NullModel.get(NullModel.id == nm.id) # SQLite does not enforce decimal precision constraints, so skip this check: #self.assertEqual(nm_from_db.decimal_field1, decimal.Decimal("3.14159")) self.assertEqual(nm_from_db.decimal_field2, D("100.33")) class TestDecimalModel(TestModel): df1 = DecimalField(decimal_places=2, auto_round=True) df2 = DecimalField(decimal_places=2, auto_round=True, rounding=decimal.ROUND_UP) f1 = TestDecimalModel.df1.db_value f2 = TestDecimalModel.df2.db_value self.assertEqual(f1(D('1.2345')), D('1.23')) self.assertEqual(f2(D('1.2345')), D('1.24')) def test_boolfield(self): NullModel.delete().execute() nmt = NullModel.create(boolean_field=True, char_field='t') nmf = NullModel.create(boolean_field=False, char_field='f') nmn = NullModel.create(boolean_field=None, char_field='n') self.assertNM(NullModel.boolean_field == True, ['t']) self.assertNM(NullModel.boolean_field == False, ['f']) self.assertNM(NullModel.boolean_field >> None, ['n']) def _time_to_delta(self, t): micro = t.microsecond / 1000000.
return datetime.timedelta( seconds=(3600 * t.hour) + (60 * t.minute) + t.second + micro) def test_date_and_time_fields(self): dt1 = datetime.datetime(2011, 1, 2, 11, 12, 13, 54321) dt2 = datetime.datetime(2011, 1, 2, 11, 12, 13) d1 = datetime.date(2011, 1, 3) t1 = datetime.time(11, 12, 13, 54321) t2 = datetime.time(11, 12, 13) if isinstance(test_db, MySQLDatabase): dt1 = dt1.replace(microsecond=0) dt2 = dt2.replace(microsecond=0) t1 = t1.replace(microsecond=0) nm1 = NullModel.create(datetime_field=dt1, date_field=d1, time_field=t1) nm2 = NullModel.create(datetime_field=dt2, time_field=t2) nmf1 = NullModel.get(NullModel.id==nm1.id) self.assertEqual(nmf1.date_field, d1) self.assertEqual(nmf1.datetime_field, dt1) self.assertEqual(nmf1.time_field, t1) nmf2 = NullModel.get(NullModel.id==nm2.id) self.assertEqual(nmf2.datetime_field, dt2) self.assertEqual(nmf2.time_field, t2) def test_time_field_python_value(self): tf = NullModel.time_field def T(*a): return datetime.time(*a) tests = ( ('01:23:45', T(1, 23, 45)), ('01:23', T(1, 23, 0)), (T(13, 14, 0), T(13, 14, 0)), (datetime.datetime(2015, 1, 1, 0, 59, 0), T(0, 59)), ('', ''), (None, None), (T(0, 0), T(0, 0)), (datetime.timedelta(seconds=(4 * 60 * 60) + (20 * 60)), T(4, 20)), (datetime.timedelta(seconds=0), T(0, 0)), ) for val, expected in tests: self.assertEqual(tf.python_value(val), expected) def test_date_as_string(self): nm1 = NullModel.create(date_field='2014-01-02') nm1_db = NullModel.get(NullModel.id == nm1.id) self.assertEqual(nm1_db.date_field, datetime.date(2014, 1, 2)) def test_various_formats(self): class FormatModel(Model): dtf = DateTimeField() df = DateField() tf = TimeField() dtf = FormatModel._meta.fields['dtf'] df = FormatModel._meta.fields['df'] tf = FormatModel._meta.fields['tf'] d = datetime.datetime self.assertEqual(dtf.python_value('2012-01-01 11:11:11.123456'), d( 2012, 1, 1, 11, 11, 11, 123456 )) self.assertEqual(dtf.python_value('2012-01-01 11:11:11'), d( 2012, 1, 1, 11, 11, 11 )) self.assertEqual(dtf.python_value('2012-01-01'), d( 2012, 1, 1, )) self.assertEqual(dtf.python_value('2012 01 01'), '2012 01 01') d = datetime.date self.assertEqual(df.python_value('2012-01-01 11:11:11.123456'), d( 2012, 1, 1, )) self.assertEqual(df.python_value('2012-01-01 11:11:11'), d( 2012, 1, 1, )) self.assertEqual(df.python_value('2012-01-01'), d( 2012, 1, 1, )) self.assertEqual(df.python_value('2012 01 01'), '2012 01 01') t = datetime.time self.assertEqual(tf.python_value('2012-01-01 11:11:11.123456'), t( 11, 11, 11, 123456 )) self.assertEqual(tf.python_value('2012-01-01 11:11:11'), t( 11, 11, 11 )) self.assertEqual(tf.python_value('11:11:11.123456'), t( 11, 11, 11, 123456 )) self.assertEqual(tf.python_value('11:11:11'), t( 11, 11, 11 )) self.assertEqual(tf.python_value('11:11'), t( 11, 11, )) self.assertEqual(tf.python_value('11:11 AM'), '11:11 AM') class CustomFormatsModel(Model): dtf = DateTimeField(formats=['%b %d, %Y %I:%M:%S %p']) df = DateField(formats=['%b %d, %Y']) tf = TimeField(formats=['%I:%M %p']) dtf = CustomFormatsModel._meta.fields['dtf'] df = CustomFormatsModel._meta.fields['df'] tf = CustomFormatsModel._meta.fields['tf'] d = datetime.datetime self.assertEqual(dtf.python_value('2012-01-01 11:11:11.123456'), '2012-01-01 11:11:11.123456') self.assertEqual(dtf.python_value('Jan 1, 2012 11:11:11 PM'), d( 2012, 1, 1, 23, 11, 11, )) d = datetime.date self.assertEqual(df.python_value('2012-01-01'), '2012-01-01') self.assertEqual(df.python_value('Jan 1, 2012'), d( 2012, 1, 1, )) t = datetime.time 
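# With only the custom formats registered, strings in other layouts no longer
# parse and fall through unchanged, mirroring the default-format cases above: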
self.assertEqual(tf.python_value('11:11:11'), '11:11:11') self.assertEqual(tf.python_value('11:11 PM'), t( 23, 11 )) @skip_test_if(lambda: isinstance(test_db, MySQLDatabase)) def test_blob_and_binary_field(self): byte_count = 256 data = ''.join(chr(i) for i in range(256)) blob = BlobModel.create(data=data) # pull from db and check binary data res = BlobModel.get(BlobModel.id == blob.id) self.assertTrue(isinstance(res.data, binary_types)) self.assertEqual(len(res.data), byte_count) db_data = res.data binary_data = binary_construct(data) if db_data != binary_data and sys.version_info[:3] >= (3, 3, 3): db_data = db_data.tobytes() self.assertEqual(db_data, binary_data) # try querying the blob field binary_data = res.data # use the string representation res = BlobModel.get(BlobModel.data == data) self.assertEqual(res.id, blob.id) # use the binary representation res = BlobModel.get(BlobModel.data == binary_data) self.assertEqual(res.id, blob.id) def test_between(self): field = NullModel.int_field self.assertNM(field.between(1, 2), ['c1', 'c2']) self.assertNM(field.between(2, 3), ['c2', 'c3']) self.assertNM(field.between(5, 300), []) def test_in_(self): self.assertNM(NullModel.int_field.in_([1, 3]), ['c1', 'c3']) self.assertNM(NullModel.int_field.in_([2, 5]), ['c2']) def test_contains(self): self.assertNM(NullModel.char_field.contains('c2'), ['c2']) self.assertNM(NullModel.char_field.contains('c'), ['c1', 'c2', 'c3']) self.assertNM(NullModel.char_field.contains('1'), ['c1']) def test_startswith(self): NullModel.create(char_field='ch1') self.assertNM(NullModel.char_field.startswith('c'), ['c1', 'c2', 'c3', 'ch1']) self.assertNM(NullModel.char_field.startswith('ch'), ['ch1']) self.assertNM(NullModel.char_field.startswith('a'), []) def test_endswith(self): NullModel.create(char_field='ch1') self.assertNM(NullModel.char_field.endswith('1'), ['c1', 'ch1']) self.assertNM(NullModel.char_field.endswith('4'), []) def test_regexp(self): values = [ 'abcdefg', 'abcd', 'defg', 'gij', 'xx', ] for value in values: NullModel.create(char_field=value) def assertValues(regexp, *expected): query = NullModel.select().where( NullModel.char_field.regexp(regexp)).order_by(NullModel.id) values = [nm.char_field for nm in query] self.assertEqual(values, list(expected)) assertValues('^ab', 'abcdefg', 'abcd') assertValues('d', 'abcdefg', 'abcd', 'defg') assertValues('efg$', 'abcdefg', 'defg') assertValues('a.+d', 'abcdefg', 'abcd') @skip_test_if(lambda: database_class is MySQLDatabase) def test_concat(self): NullModel.create(char_field='foo') NullModel.create(char_field='bar') values = (NullModel .select( NullModel.char_field.concat('-nuggets').alias('nugs')) .order_by(NullModel.id) .dicts()) self.assertEqual(list(values), [ {'nugs': 'c1-nuggets'}, {'nugs': 'c2-nuggets'}, {'nugs': 'c3-nuggets'}, {'nugs': 'foo-nuggets'}, {'nugs': 'bar-nuggets'}]) def test_field_aliasing(self): username = User.username user_fk = Blog.user blog_pk = Blog.pk for i in range(2): username = username.clone() user_fk = user_fk.clone() blog_pk = blog_pk.clone() self.assertEqual(username.name, 'username') self.assertEqual(username.model_class, User) self.assertEqual(user_fk.name, 'user') self.assertEqual(user_fk.model_class, Blog) self.assertEqual(user_fk.rel_model, User) self.assertEqual(blog_pk.name, 'pk') self.assertEqual(blog_pk.model_class, Blog) self.assertTrue(blog_pk.primary_key) class TestTimestampField(ModelTestCase): requires = [TimestampModel] def test_timestamp_field(self): dt = datetime.datetime(2016, 1, 2, 11, 12, 13, 654321) d_dt = 
datetime.datetime(2016, 1, 3) d = d_dt.date() t1 = TimestampModel.create(local_us=dt, utc_ms=dt, local=dt) t2 = TimestampModel.create(local_us=d, utc_ms=d, local=d) t1_db = TimestampModel.get(TimestampModel.local_us == dt) self.assertEqual(t1_db.id, t1.id) self.assertEqual(t1_db.local_us, dt) self.assertEqual(t1_db.utc_ms, dt.replace(microsecond=654000)) self.assertEqual(t1_db.local, dt.replace(microsecond=0).replace(second=14)) t2_db = TimestampModel.get(TimestampModel.utc_ms == d) self.assertEqual(t2_db.id, t2.id) self.assertEqual(t2_db.local_us, d_dt) self.assertEqual(t2_db.utc_ms, d_dt) self.assertEqual(t2_db.local, d_dt) dt += datetime.timedelta(days=1, seconds=3600) dt_us = dt.microsecond / 1000000. ts = time.mktime(dt.timetuple()) + dt_us utc_ts = calendar.timegm(dt.utctimetuple()) + dt_us t3 = TimestampModel.create(local_us=ts, utc_ms=utc_ts, local=ts) t3_db = TimestampModel.get(TimestampModel.local == ts) self.assertEqual(t3_db.id, t3.id) expected = datetime.datetime(2016, 1, 3, 12, 12, 13) self.assertEqual(t3_db.local_us, expected.replace(microsecond=654321)) self.assertEqual(t3_db.utc_ms, expected.replace(microsecond=654000)) self.assertEqual(t3_db.local, expected.replace(second=14)) class TestBinaryTypeFromDatabase(PeeweeTestCase): @skip_test_if(lambda: sys.version_info[0] == 3) def test_binary_type_info(self): db_proxy = Proxy() class A(Model): blob_field = BlobField() class Meta: database = db_proxy self.assertTrue(A.blob_field._constructor is binary_construct) db = SqliteDatabase(':memory:') db_proxy.initialize(db) self.assertTrue(A.blob_field._constructor is sqlite3.Binary) class TestDateTimeExtract(ModelTestCase): requires = [NullModel] test_datetimes = [ datetime.datetime(2001, 1, 2, 3, 4, 5), datetime.datetime(2002, 2, 3, 4, 5, 6), # overlap on year and hour with previous datetime.datetime(2002, 3, 4, 4, 6, 7), ] datetime_parts = ['year', 'month', 'day', 'hour', 'minute', 'second'] date_parts = datetime_parts[:3] time_parts = datetime_parts[3:] def setUp(self): super(TestDateTimeExtract, self).setUp() self.nms = [] for dt in self.test_datetimes: self.nms.append(NullModel.create( datetime_field=dt, date_field=dt.date(), time_field=dt.time())) def assertDates(self, sq, expected): sq = sq.tuples().order_by(NullModel.id) self.assertEqual(list(sq), [(e,) for e in expected]) def assertPKs(self, sq, idxs): sq = sq.tuples().order_by(NullModel.id) self.assertEqual(list(sq), [(self.nms[i].id,) for i in idxs]) def test_extract_datetime(self): self.test_extract_date(NullModel.datetime_field) self.test_extract_time(NullModel.datetime_field) def test_extract_date(self, f=None): if f is None: f = NullModel.date_field self.assertDates(NullModel.select(f.year), [2001, 2002, 2002]) self.assertDates(NullModel.select(f.month), [1, 2, 3]) self.assertDates(NullModel.select(f.day), [2, 3, 4]) def test_extract_time(self, f=None): if f is None: f = NullModel.time_field self.assertDates(NullModel.select(f.hour), [3, 4, 4]) self.assertDates(NullModel.select(f.minute), [4, 5, 6]) self.assertDates(NullModel.select(f.second), [5, 6, 7]) def test_extract_datetime_where(self): f = NullModel.datetime_field self.test_extract_date_where(f) self.test_extract_time_where(f) sq = NullModel.select(NullModel.id) self.assertPKs(sq.where((f.year == 2002) & (f.month == 2)), [1]) self.assertPKs(sq.where((f.year == 2002) & (f.hour == 4)), [1, 2]) self.assertPKs(sq.where((f.year == 2002) & (f.minute == 5)), [1]) def test_extract_date_where(self, f=None): if f is None: f = NullModel.date_field sq = 
NullModel.select(NullModel.id) self.assertPKs(sq.where(f.year == 2001), [0]) self.assertPKs(sq.where(f.year == 2002), [1, 2]) self.assertPKs(sq.where(f.year == 2003), []) self.assertPKs(sq.where(f.month == 1), [0]) self.assertPKs(sq.where(f.month > 1), [1, 2]) self.assertPKs(sq.where(f.month == 4), []) self.assertPKs(sq.where(f.day == 2), [0]) self.assertPKs(sq.where(f.day > 2), [1, 2]) self.assertPKs(sq.where(f.day == 5), []) def test_extract_time_where(self, f=None): if f is None: f = NullModel.time_field sq = NullModel.select(NullModel.id) self.assertPKs(sq.where(f.hour == 3), [0]) self.assertPKs(sq.where(f.hour == 4), [1, 2]) self.assertPKs(sq.where(f.hour == 5), []) self.assertPKs(sq.where(f.minute == 4), [0]) self.assertPKs(sq.where(f.minute > 4), [1, 2]) self.assertPKs(sq.where(f.minute == 7), []) self.assertPKs(sq.where(f.second == 5), [0]) self.assertPKs(sq.where(f.second > 5), [1, 2]) self.assertPKs(sq.where(f.second == 8), []) class TestUniqueColumnConstraint(ModelTestCase): requires = [UniqueModel, MultiIndexModel] def test_unique(self): uniq1 = UniqueModel.create(name='a') uniq2 = UniqueModel.create(name='b') self.assertRaises(Exception, UniqueModel.create, name='a') test_db.rollback() def test_multi_index(self): mi1 = MultiIndexModel.create(f1='a', f2='a', f3='a') mi2 = MultiIndexModel.create(f1='b', f2='b', f3='b') self.assertRaises(Exception, MultiIndexModel.create, f1='a', f2='a', f3='b') test_db.rollback() self.assertRaises(Exception, MultiIndexModel.create, f1='b', f2='b', f3='a') test_db.rollback() mi3 = MultiIndexModel.create(f1='a', f2='b', f3='b') class TestNonIntegerPrimaryKey(ModelTestCase): requires = [NonIntModel, NonIntRelModel] def test_non_int_pk(self): ni1 = NonIntModel.create(pk='a1', data='ni1') self.assertEqual(ni1.pk, 'a1') ni2 = NonIntModel(pk='a2', data='ni2') ni2.save(force_insert=True) self.assertEqual(ni2.pk, 'a2') ni2.save() self.assertEqual(ni2.pk, 'a2') self.assertEqual(NonIntModel.select().count(), 2) ni1_db = NonIntModel.get(NonIntModel.pk=='a1') self.assertEqual(ni1_db.data, ni1.data) self.assertEqual([(x.pk, x.data) for x in NonIntModel.select().order_by(NonIntModel.pk)], [ ('a1', 'ni1'), ('a2', 'ni2'), ]) def test_non_int_fk(self): ni1 = NonIntModel.create(pk='a1', data='ni1') ni2 = NonIntModel.create(pk='a2', data='ni2') rni11 = NonIntRelModel(non_int_model=ni1) rni12 = NonIntRelModel(non_int_model=ni1) rni11.save() rni12.save() self.assertEqual([r.id for r in ni1.nr.order_by(NonIntRelModel.id)], [rni11.id, rni12.id]) self.assertEqual([r.id for r in ni2.nr.order_by(NonIntRelModel.id)], []) rni21 = NonIntRelModel.create(non_int_model=ni2) self.assertEqual([r.id for r in ni2.nr.order_by(NonIntRelModel.id)], [rni21.id]) sq = NonIntRelModel.select().join(NonIntModel).where(NonIntModel.data == 'ni2') self.assertEqual([r.id for r in sq], [rni21.id]) class TestPrimaryKeyIsForeignKey(ModelTestCase): requires = [Job, JobExecutionRecord, JERRelated] def test_primary_foreign_key(self): # we have one job, unexecuted, and therefore no executed jobs job = Job.create(name='Job One') executed_jobs = Job.select().join(JobExecutionRecord) self.assertEqual([], list(executed_jobs)) # after execution, we must have one executed job exec_record = JobExecutionRecord.create(job=job, status='success') executed_jobs = Job.select().join(JobExecutionRecord) self.assertEqual([job], list(executed_jobs)) # we must not be able to create another execution record for the job self.assertRaises(Exception, JobExecutionRecord.create, job=job, status='success') test_db.rollback() 
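# A rough sketch of the model shapes this test assumes -- the real
# definitions live in playhouse/tests/models.py and may differ in detail:
#
#     class Job(TestModel):
#         name = CharField()
#
#     class JobExecutionRecord(TestModel):
#         job = ForeignKeyField(Job, primary_key=True)
#         status = CharField()
#
# Because the foreign key doubles as the primary key, a second execution
# record for the same job violates the primary-key constraint, which is
# what the assertRaises above exercises.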
def test_pk_fk_relations(self): j1 = Job.create(name='j1') j2 = Job.create(name='j2') jer1 = JobExecutionRecord.create(job=j1, status='1') jer2 = JobExecutionRecord.create(job=j2, status='2') jerr1 = JERRelated.create(jer=jer1) jerr2 = JERRelated.create(jer=jer2) jerr_j1 = [x for x in jer1.jerrelated_set] self.assertEqual(jerr_j1, [jerr1]) jerr_j2 = [x for x in jer2.jerrelated_set] self.assertEqual(jerr_j2, [jerr2]) jerr1_db = JERRelated.get(JERRelated.jer == j1) self.assertEqual(jerr1_db, jerr1) class TestFieldDatabaseColumn(ModelTestCase): requires = [DBUser, DBBlog] def test_select(self): sq = DBUser.select().where(DBUser.username == 'u1') self.assertSelect(sq, '"dbuser"."db_user_id", "dbuser"."db_username"', []) self.assertWhere(sq, '("dbuser"."db_username" = ?)', ['u1']) sq = DBUser.select(DBUser.user_id).join(DBBlog).where(DBBlog.title == 'b1') self.assertSelect(sq, '"dbuser"."db_user_id"', []) self.assertJoins(sq, ['INNER JOIN "dbblog" AS dbblog ON ("dbuser"."db_user_id" = "dbblog"."db_user")']) self.assertWhere(sq, '("dbblog"."db_title" = ?)', ['b1']) def test_db_column(self): u1 = DBUser.create(username='u1') u2 = DBUser.create(username='u2') u2_db = DBUser.get(DBUser.user_id==u2._get_pk_value()) self.assertEqual(u2_db.username, 'u2') b1 = DBBlog.create(user=u1, title='b1') b2 = DBBlog.create(user=u2, title='b2') b2_db = DBBlog.get(DBBlog.blog_id==b2._get_pk_value()) self.assertEqual(b2_db.user.user_id, u2.user_id) self.assertEqual(b2_db.title, 'b2') self.assertEqual([b.title for b in u2.dbblog_set], ['b2']) class _SqliteDateTestHelper(PeeweeTestCase): datetimes = [ datetime.datetime(2000, 1, 2, 3, 4, 5), datetime.datetime(2000, 2, 3, 4, 5, 6), ] def create_date_model(self, date_fn): dp_db = SqliteDatabase(':memory:') class SqDp(Model): datetime_field = DateTimeField() date_field = DateField() time_field = TimeField() null_datetime_field = DateTimeField(null=True) class Meta: database = dp_db @classmethod def date_query(cls, field, part): return (SqDp .select(date_fn(field, part)) .tuples() .order_by(SqDp.id)) SqDp.create_table() for d in self.datetimes: SqDp.create(datetime_field=d, date_field=d.date(), time_field=d.time()) return SqDp class TestSQLiteDatePart(_SqliteDateTestHelper): def test_sqlite_date_part(self): date_fn = lambda field, part: fn.date_part(part, field) SqDp = self.create_date_model(date_fn) for part in ('year', 'month', 'day', 'hour', 'minute', 'second'): for i, dp in enumerate(SqDp.date_query(SqDp.datetime_field, part)): self.assertEqual(dp[0], getattr(self.datetimes[i], part)) for part in ('year', 'month', 'day'): for i, dp in enumerate(SqDp.date_query(SqDp.date_field, part)): self.assertEqual(dp[0], getattr(self.datetimes[i], part)) for part in ('hour', 'minute', 'second'): for i, dp in enumerate(SqDp.date_query(SqDp.time_field, part)): self.assertEqual(dp[0], getattr(self.datetimes[i], part)) # ensure that the where clause works query = SqDp.select().where(fn.date_part('year', SqDp.datetime_field) == 2000) self.assertEqual(query.count(), 2) query = SqDp.select().where(fn.date_part('month', SqDp.datetime_field) == 1) self.assertEqual(query.count(), 1) query = SqDp.select().where(fn.date_part('month', SqDp.datetime_field) == 3) self.assertEqual(query.count(), 0) null_sqdp = SqDp.create( datetime_field=datetime.datetime.now(), date_field=datetime.date.today(), time_field=datetime.time(0, 0), null_datetime_field=datetime.datetime(2014, 1, 1)) query = SqDp.select().where( fn.date_part('year', SqDp.null_datetime_field) == 2014) self.assertEqual(query.count(), 1) 
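# Only null_sqdp has a non-NULL null_datetime_field, so extracting the year
# and filtering on 2014 should match exactly that one row: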
self.assertEqual(list(query), [null_sqdp]) class TestSQLiteDateTrunc(_SqliteDateTestHelper): def test_sqlite_date_trunc(self): date_fn = lambda field, part: fn.date_trunc(part, field) SqDp = self.create_date_model(date_fn) def assertQuery(field, part, expected): values = SqDp.date_query(field, part) self.assertEqual([r[0] for r in values], expected) assertQuery(SqDp.datetime_field, 'year', ['2000', '2000']) assertQuery(SqDp.datetime_field, 'month', ['2000-01', '2000-02']) assertQuery(SqDp.datetime_field, 'day', ['2000-01-02', '2000-02-03']) assertQuery(SqDp.datetime_field, 'hour', [ '2000-01-02 03', '2000-02-03 04']) assertQuery(SqDp.datetime_field, 'minute', [ '2000-01-02 03:04', '2000-02-03 04:05']) assertQuery(SqDp.datetime_field, 'second', [ '2000-01-02 03:04:05', '2000-02-03 04:05:06']) null_sqdp = SqDp.create( datetime_field=datetime.datetime.now(), date_field=datetime.date.today(), time_field=datetime.time(0, 0), null_datetime_field=datetime.datetime(2014, 1, 1)) assertQuery(SqDp.null_datetime_field, 'year', [None, None, '2014']) class TestCheckConstraints(ModelTestCase): requires = [CheckModel] def test_check_constraint(self): CheckModel.create(value=1) if isinstance(test_db, MySQLDatabase): # MySQL silently ignores all check constraints. CheckModel.create(value=0) else: with test_db.transaction() as txn: self.assertRaises(IntegrityError, CheckModel.create, value=0) txn.rollback() @skip_if(lambda: isinstance(test_db, MySQLDatabase)) class TestServerDefaults(ModelTestCase): requires = [ServerDefaultModel] def test_server_default(self): sd = ServerDefaultModel.create(name='baz') sd_db = ServerDefaultModel.get(ServerDefaultModel.id == sd.id) self.assertEqual(sd_db.name, 'baz') self.assertIsNotNone(sd_db.timestamp) sd2 = ServerDefaultModel.create( timestamp=datetime.datetime(2015, 1, 2, 3, 4)) sd2_db = ServerDefaultModel.get(ServerDefaultModel.id == sd2.id) self.assertEqual(sd2_db.name, 'foo') self.assertEqual(sd2_db.timestamp, datetime.datetime(2015, 1, 2, 3, 4)) class TestUUIDField(ModelTestCase): requires = [ TestingID, UUIDData, UUIDRelatedModel, ] def test_uuid(self): uuid_str = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11' uuid_obj = uuid.UUID(uuid_str) t1 = TestingID.create(uniq=uuid_obj) t1_db = TestingID.get(TestingID.uniq == uuid_str) self.assertEqual(t1, t1_db) t2 = TestingID.get(TestingID.uniq == uuid_obj) self.assertEqual(t1, t2) def test_uuid_casting(self): uuid_obj = uuid.UUID('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11') uuid_str = uuid_obj.hex uuid_str_short = uuid_str.replace("-", "") t1 = TestingID.create(uniq=uuid_obj) t1_db = TestingID.get(TestingID.uniq == uuid_str) self.assertEqual(t1_db.uniq, uuid_obj) t1_db = TestingID.get(TestingID.uniq == uuid_str_short) self.assertEqual(t1_db.uniq, uuid_obj) t1 = TestingID.create(uniq=uuid_str) t1_db = TestingID.get(TestingID.uniq == uuid_str) self.assertEqual(t1_db.uniq, uuid_obj) t1_db = TestingID.get(TestingID.uniq == uuid_str_short) self.assertEqual(t1_db.uniq, uuid_obj) t1 = TestingID.create(uniq=uuid_str_short) t1_db = TestingID.get(TestingID.uniq == uuid_str) self.assertEqual(t1_db.uniq, uuid_obj) t1_db = TestingID.get(TestingID.uniq == uuid_str_short) self.assertEqual(t1_db.uniq, uuid_obj) def test_uuid_foreign_keys(self): data_a = UUIDData.create(id=uuid.uuid4(), data='a') data_b = UUIDData.create(id=uuid.uuid4(), data='b') rel_a1 = UUIDRelatedModel.create(data=data_a, value=1) rel_a2 = UUIDRelatedModel.create(data=data_a, value=2) rel_none = UUIDRelatedModel.create(data=None, value=3) db_a = UUIDData.get(UUIDData.id == 
data_a.id) self.assertEqual(db_a.id, data_a.id) self.assertEqual(db_a.data, 'a') values = [rm.value for rm in db_a.related_models.order_by(UUIDRelatedModel.id)] self.assertEqual(values, [1, 2]) rnone = UUIDRelatedModel.get(UUIDRelatedModel.data >> None) self.assertEqual(rnone.value, 3) ra = (UUIDRelatedModel .select() .where(UUIDRelatedModel.data == data_a) .order_by(UUIDRelatedModel.value.desc())) self.assertEqual([r.value for r in ra], [2, 1]) def test_prefetch_regression(self): a = UUIDData.create(id=uuid.uuid4(), data='a') b = UUIDData.create(id=uuid.uuid4(), data='b') for i in range(5): for u in [a, b]: UUIDRelatedModel.create(data=u, value=i) with self.assertQueryCount(2): query = prefetch( UUIDData.select().order_by(UUIDData.data), UUIDRelatedModel.select().where(UUIDRelatedModel.value < 3)) accum = [] for item in query: accum.append((item.data, [ rel.value for rel in item.related_models_prefetch])) self.assertEqual(accum, [ ('a', [0, 1, 2]), ('b', [0, 1, 2]), ]) @skip_unless(lambda: isinstance(test_db, SqliteDatabase)) class TestForeignKeyConversion(ModelTestCase): requires = [UIntModel, UIntRelModel] def test_fk_conversion(self): u1 = UIntModel.create(data=1337) u2 = UIntModel.create(data=(1 << 31) + 1000) u1_db = UIntModel.get(UIntModel.data == 1337) self.assertEqual(u1_db.id, u1.id) u2_db = UIntModel.get(UIntModel.data == (1 << 31) + 1000) self.assertEqual(u2_db.id, u2.id) ur1 = UIntRelModel.create(uint_model=u1) ur2 = UIntRelModel.create(uint_model=u2) self.assertEqual(ur1.uint_model_id, 1337) self.assertEqual(ur2.uint_model_id, (1 << 31) + 1000) ur1_db = UIntRelModel.get(UIntRelModel.id == ur1.id) ur2_db = UIntRelModel.get(UIntRelModel.id == ur2.id) self.assertEqual(ur1_db.uint_model.id, u1.id) self.assertEqual(ur2_db.uint_model.id, u2.id) peewee-2.10.2/playhouse/tests/test_flask_utils.py000066400000000000000000000123711316645060400221630ustar00rootroot00000000000000import unittest from flask import Flask from peewee import * from playhouse.flask_utils import FlaskDB from playhouse.flask_utils import PaginatedQuery from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import test_db from playhouse.tests.models import * class TestPaginationHelpers(ModelTestCase): requires = [User] def setUp(self): super(TestPaginationHelpers, self).setUp() for i in range(10): User.create(username='u%02d' % i) self.app = Flask(__name__) def test_paginated_query(self): query = User.select().order_by(User.username) paginated_query = PaginatedQuery(query, 4) with self.app.test_request_context('/?page=2'): self.assertEqual(paginated_query.get_page(), 2) self.assertEqual(paginated_query.get_page_count(), 3) users = paginated_query.get_object_list() self.assertEqual( [user.username for user in users], ['u04', 'u05', 'u06', 'u07']) with self.app.test_request_context('/'): self.assertEqual(paginated_query.get_page(), 1) for value in ['1', '0', '-1', 'xxx']: with self.app.test_request_context('/?page=%s' % value): self.assertEqual(paginated_query.get_page(), 1) def test_bounds_checking(self): paginated_query = PaginatedQuery(User, 3, 'p', False) with self.app.test_request_context('/?p=5'): results = paginated_query.get_object_list() self.assertEqual(list(results), []) paginated_query = PaginatedQuery(User, 3, 'p', True) with self.app.test_request_context('/?p=2'): self.assertEqual(len(list(paginated_query.get_object_list())), 3) with self.app.test_request_context('/?p=4'): self.assertEqual(len(list(paginated_query.get_object_list())), 1) 
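# With check_bounds=True, requesting a page past the last one raises rather
# than returning an empty list (contrast the unchecked query above):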
with self.app.test_request_context('/?p=5'): self.assertRaises(Exception, paginated_query.get_object_list) class TestFlaskDB(PeeweeTestCase): def tearDown(self): super(TestFlaskDB, self).tearDown() if not test_db.is_closed(): test_db.close() test_db.connect() def test_database(self): app = Flask(__name__) app.config.update({ 'DATABASE': { 'name': ':memory:', 'engine': 'peewee.SqliteDatabase'}}) database = FlaskDB(app) Model = database.Model self.assertTrue(isinstance(Model._meta.database, SqliteDatabase)) self.assertEqual(Model._meta.database.database, ':memory:') # Multiple calls reference the same object. self.assertTrue(database.Model is Model) def test_database_url(self): app = Flask(__name__) app.config['DATABASE'] = 'sqlite:///nugget.db' database = FlaskDB(app) Model = database.Model self.assertTrue(isinstance(Model._meta.database, SqliteDatabase)) self.assertEqual(Model._meta.database.database, 'nugget.db') # If a value is specified, it trumps config value. database = FlaskDB(app, 'sqlite:///nuglets.db') Model = database.Model self.assertEqual(Model._meta.database.database, 'nuglets.db') def test_database_instance(self): app = Flask(__name__) db = SqliteDatabase(':memory:') flask_db = FlaskDB(app, db) Model = flask_db.Model self.assertEqual(Model._meta.database, db) def test_database_instance_config(self): app = Flask(__name__) app.config['DATABASE'] = db = SqliteDatabase(':memory:') flask_db = FlaskDB(app) Model = flask_db.Model self.assertEqual(Model._meta.database, db) def test_deferred_database(self): app = Flask(__name__) app.config.update({ 'DATABASE': { 'name': ':memory:', 'engine': 'peewee.SqliteDatabase'}}) # Defer initialization of the database. database = FlaskDB() # Ensure we can access the Model attribute. Model = database.Model model_db = Model._meta.database # Because the database is not initialized, the models will point # to an uninitialized Proxy object. self.assertTrue(isinstance(model_db, Proxy)) self.assertRaises(AttributeError, lambda: model_db.database) class User(database.Model): username = CharField(unique=True) # Initialize the database with our Flask app. database.init_app(app) # Ensure the `Model` property points to the same object as it # did before. PostInitModel = database.Model self.assertTrue(Model is PostInitModel) # Ensure that the proxy is initialized. self.assertEqual(model_db.database, ':memory:') # Ensure we can use our database. 
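# (The proxy now forwards to the real SqliteDatabase, so DDL and queries
# against the deferred models work as usual.)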
User.create_table() for username in ['charlie', 'huey', 'zaizee']: User.create(username=username) self.assertEqual(User.select().count(), 3) users = User.select().order_by(User.username) self.assertEqual( [user.username for user in users], ['charlie', 'huey', 'zaizee']) self.assertEqual(User._meta.database, database.database) peewee-2.10.2/playhouse/tests/test_gfk.py000066400000000000000000000115701316645060400204120ustar00rootroot00000000000000from peewee import * from playhouse.gfk import * from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase db = database_initializer.get_in_memory_database() class BaseModel(Model): class Meta: database = db def add_tag(self, tag): t = Tag(tag=tag) t.object = self t.save() return t class Tag(BaseModel): tag = CharField() object_type = CharField(null=True) object_id = IntegerField(null=True) object = GFKField() class Meta: indexes = ( (('tag', 'object_type', 'object_id'), True), ) order_by = ('tag',) class Appetizer(BaseModel): name = CharField() tags = ReverseGFK(Tag) class Entree(BaseModel): name = CharField() tags = ReverseGFK(Tag) class Dessert(BaseModel): name = CharField() tags = ReverseGFK(Tag) class GFKTestCase(ModelTestCase): requires = [Tag, Appetizer, Entree, Dessert] data = { Appetizer: ( ('wings', ('fried', 'spicy')), ('mozzarella sticks', ('fried', 'sweet')), ('potstickers', ('fried',)), ('edamame', ('salty',)), ), Entree: ( ('phad thai', ('spicy',)), ('fried chicken', ('fried', 'salty')), ('tacos', ('fried', 'spicy')), ), Dessert: ( ('sundae', ('sweet',)), ('churro', ('fried', 'sweet')), ) } def create(self): for model, foods in self.data.items(): for name, tags in foods: inst = model.create(name=name) for tag in tags: inst.add_tag(tag) def test_creation(self): t = Tag.create(tag='a tag') t.object = t t.save() t_db = Tag.get(Tag.id == t.id) self.assertEqual(t_db.object_id, t_db._get_pk_value()) self.assertEqual(t_db.object_type, 'tag') self.assertEqual(t_db.object, t_db) def test_querying(self): self.create() tacos = Entree.get(Entree.name == 'tacos') tags = Tag.select().where(Tag.object == tacos).order_by(Tag.tag) self.assertEqual([tag.tag for tag in tags], ['fried', 'spicy']) def _test_get_create(self, method): a = Appetizer.create(name='walrus mix') tag, created = method(tag='walrus-food', object=a) self.assertTrue(created) self.assertEqual(tag.object, a) tag_db = Tag.get(Tag.id == tag.id) self.assertEqual(tag_db.object, a) tag, created = method(tag='walrus-food', object=a) self.assertFalse(created) self.assertEqual(Tag.select().count(), 1) self.assertEqual(tag, tag_db) tag2, created = method(tag='walrus-treats', object=a) self.assertTrue(created) tag2_db = Tag.get(Tag.id == tag2.id) self.assertEqual(tag2_db.tag, 'walrus-treats') self.assertEqual(tag2_db.object, a) b = Appetizer.create(name='walrus-meal') tag3, created = method(tag='walrus-treats', object=b) self.assertTrue(created) tag3_db = Tag.get(Tag.id == tag3.id) self.assertEqual(tag3_db.tag, 'walrus-treats') self.assertEqual(tag3_db.object, b) def test_get_or_create(self): self._test_get_create(Tag.get_or_create) def test_gfk_api(self): self.create() # test instance api for model, foods in self.data.items(): for food, tags in foods: inst = model.get(model.name == food) self.assertEqual([t.tag for t in inst.tags], list(tags)) # test class api and ``object`` api apps_tags = [(t.tag, t.object.name) for t in Appetizer.tags.order_by(Tag.id)] data_tags = [] for food, tags in self.data[Appetizer]: for t in tags: data_tags.append((t, food)) 
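# The class-level ReverseGFK query should yield exactly the (tag, appetizer)
# pairs seeded in self.data, in insertion (Tag.id) order: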
self.assertEqual(apps_tags, data_tags) def test_missing(self): t = Tag.create(tag='sour') self.assertEqual(t.object, None) t.object_type = 'appetizer' t.object_id = 1 # accessing the descriptor will raise a DoesNotExist self.assertRaises(Appetizer.DoesNotExist, getattr, t, 'object') t.object_type = 'unknown' t.object_id = 1 self.assertRaises(AttributeError, getattr, t, 'object') def test_set_reverse(self): # assign query e = Entree.create(name='phad thai') s = Tag.create(tag='spicy') p = Tag.create(tag='peanuts') t = Tag.create(tag='thai') b = Tag.create(tag='beverage') e.tags = Tag.select().where(Tag.tag != 'beverage') self.assertEqual([t.tag for t in e.tags], ['peanuts', 'spicy', 'thai']) e = Entree.create(name='panang curry') c = Tag.create(tag='coconut') e.tags = [p, t, c, s] self.assertEqual([t.tag for t in e.tags], ['coconut', 'peanuts', 'spicy', 'thai']) peewee-2.10.2/playhouse/tests/test_helpers.py000066400000000000000000000110241316645060400212770ustar00rootroot00000000000000from peewee import * from peewee import sort_models_topologically from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import test_db class TestHelperMethods(PeeweeTestCase): def test_assert_query_count(self): def execute_queries(n): for i in range(n): test_db.execute_sql('select 1;') with self.assertQueryCount(0): pass with self.assertQueryCount(1): execute_queries(1) with self.assertQueryCount(2): execute_queries(2) def fails_low(): with self.assertQueryCount(2): execute_queries(1) def fails_high(): with self.assertQueryCount(1): execute_queries(2) self.assertRaises(AssertionError, fails_low) self.assertRaises(AssertionError, fails_high) class TestTopologicalSorting(PeeweeTestCase): def test_topological_sort_fundamentals(self): FKF = ForeignKeyField # we will be topo-sorting the following models class A(Model): pass class B(Model): a = FKF(A) # must follow A class C(Model): a, b = FKF(A), FKF(B) # must follow A and B class D(Model): c = FKF(C) # must follow A and B and C class E(Model): e = FKF('self') # but excluding this model, which is a child of E class Excluded(Model): e = FKF(E) # property 1: output ordering must not depend upon input order repeatable_ordering = None for input_ordering in permutations([A, B, C, D, E]): output_ordering = sort_models_topologically(input_ordering) repeatable_ordering = repeatable_ordering or output_ordering self.assertEqual(repeatable_ordering, output_ordering) # property 2: output ordering must have same models as input self.assertEqual(len(output_ordering), 5) self.assertFalse(Excluded in output_ordering) # property 3: parents must precede children def assert_precedes(X, Y): lhs, rhs = map(output_ordering.index, [X, Y]) self.assertTrue(lhs < rhs) assert_precedes(A, B) assert_precedes(B, C) # if true, C follows A by transitivity assert_precedes(C, D) # if true, D follows A and B by transitivity # property 4: independent model hierarchies must be in name order assert_precedes(A, E) class TestDeclaredDependencies(PeeweeTestCase): def test_declared_dependencies(self): class A(Model): pass class B(Model): a = ForeignKeyField(A) b = ForeignKeyField('self') class NA(Model): class Meta: depends_on = (A, B) class C(Model): b = ForeignKeyField(B) c = ForeignKeyField('self') class Meta: depends_on = (NA,) class D1(Model): na = ForeignKeyField(NA) class Meta: depends_on = (A, C) class D2(Model): class Meta: depends_on = (NA, D1, C, B) models = [A, B, C, D1, D2] ordered = list(models) for pmodels in permutations(models): ordering = 
sort_models_topologically(pmodels) self.assertEqual(ordering, ordered) def test_declared_dependencies_simple(self): class A(Model): pass class B(Model): class Meta: depends_on = (A,) class C(Model): b = ForeignKeyField(B) # Implicit dependency. class D(Model): class Meta: depends_on = (C,) models = [A, B, C, D] ordered = list(models) for pmodels in permutations(models): ordering = sort_models_topologically(pmodels) self.assertEqual(ordering, ordered) def test_declared_dependencies_2(self): class C(Model): pass class B(Model): c = ForeignKeyField(C) class A(Model): class Meta: depends_on = B, c = ForeignKeyField(C) models = [ C, B, A ] ordered = list(models) for pmodels in permutations(models): ordering = sort_models_topologically(pmodels) self.assertEqual(ordering, ordered) def permutations(xs): if not xs: yield [] else: for y, ys in selections(xs): for pys in permutations(ys): yield [y] + pys def selections(xs): for i in range(len(xs)): yield (xs[i], xs[:i] + xs[i + 1:]) peewee-2.10.2/playhouse/tests/test_hybrid.py000066400000000000000000000075641316645060400211340ustar00rootroot00000000000000from peewee import * from playhouse.hybrid import hybrid_method from playhouse.hybrid import hybrid_property from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase db = database_initializer.get_in_memory_database() class BaseModel(Model): class Meta: database = db class Interval(BaseModel): start = IntegerField() end = IntegerField() @hybrid_property def length(self): return self.end - self.start @hybrid_method def contains(self, point): return (self.start <= point) & (point < self.end) @hybrid_property def radius(self): return int(abs(self.length) / 2) @radius.expression def radius(cls): return fn.abs(cls.length) / 2 class Person(BaseModel): first = CharField() last = CharField() @hybrid_property def full_name(self): return self.first + ' ' + self.last class TestHybrid(ModelTestCase): requires = [Interval, Person] def setUp(self): super(TestHybrid, self).setUp() intervals = ( (1, 5), (2, 6), (3, 5), (2, 5)) for start, end in intervals: Interval.create(start=start, end=end) def test_hybrid_property(self): query = Interval.select().where(Interval.length == 4) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."start", "t1"."end" ' 'FROM "interval" AS t1 ' 'WHERE (("t1"."end" - "t1"."start") = ?)')) self.assertEqual(params, [4]) results = sorted( (interval.start, interval.end) for interval in query) self.assertEqual(results, [(1, 5), (2, 6)]) lengths = [4, 4, 2, 3] query = Interval.select().order_by(Interval.id) actuals = [interval.length for interval in query] self.assertEqual(actuals, lengths) def test_hybrid_method(self): query = Interval.select().where(Interval.contains(2)) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."start", "t1"."end" ' 'FROM "interval" AS t1 ' 'WHERE (("t1"."start" <= ?) AND ("t1"."end" > ?))')) self.assertEqual(params, [2, 2]) results = sorted( (interval.start, interval.end) for interval in query) self.assertEqual(results, [(1, 5), (2, 5), (2, 6)]) contains = [True, True, False, True] query = Interval.select().order_by(Interval.id) actuals = [interval.contains(2) for interval in query] self.assertEqual(contains, actuals) def test_separate_expr(self): query = Interval.select().where(Interval.radius == 2) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."start", "t1"."end" ' 'FROM "interval" AS t1 ' 'WHERE ((abs("t1"."end" - "t1"."start") / ?) 
= ?)')) self.assertEqual(params, [2, 2]) results = sorted( (interval.start, interval.end) for interval in query) self.assertEqual(results, [(1, 5), (2, 6)]) radii = [2, 2, 1, 1] query = Interval.select().order_by(Interval.id) actuals = [interval.radius for interval in query] self.assertEqual(actuals, radii) def test_string_fields(self): huey = Person.create(first='huey', last='cat') zaizee = Person.create(first='zaizee', last='kitty') self.assertEqual(huey.full_name, 'huey cat') self.assertEqual(zaizee.full_name, 'zaizee kitty') query = Person.select().where(Person.full_name == 'zaizee kitty') zaizee_db = query.get() self.assertEqual(zaizee_db, zaizee) query = Person.select().where(Person.full_name.startswith('huey c')) huey_db = query.get() self.assertEqual(huey_db, huey) peewee-2.10.2/playhouse/tests/test_introspection.py000066400000000000000000000102671316645060400225450ustar00rootroot00000000000000from playhouse.tests.base import database_class from playhouse.tests.base import ModelTestCase from playhouse.tests.base import test_db from playhouse.tests.models import * class TestMetadataIntrospection(ModelTestCase): requires = [ User, Blog, Comment, CompositeKeyModel, MultiIndexModel, UniqueModel, Category] def setUp(self): super(TestMetadataIntrospection, self).setUp() self.pk_index = database_class is not SqliteDatabase def test_get_tables(self): tables = test_db.get_tables() for model in self.requires: self.assertTrue(model._meta.db_table in tables) UniqueModel.drop_table() self.assertFalse(UniqueModel._meta.db_table in test_db.get_tables()) def test_get_indexes(self): indexes = test_db.get_indexes(UniqueModel._meta.db_table) num_indexes = self.pk_index and 2 or 1 self.assertEqual(len(indexes), num_indexes) idx, = [idx for idx in indexes if idx.name == 'uniquemodel_name'] self.assertEqual(idx.columns, ['name']) self.assertTrue(idx.unique) indexes = dict( (idx.name, idx) for idx in test_db.get_indexes(MultiIndexModel._meta.db_table)) num_indexes = self.pk_index and 3 or 2 self.assertEqual(len(indexes), num_indexes) idx_f1f2 = indexes['multiindexmodel_f1_f2'] self.assertEqual(sorted(idx_f1f2.columns), ['f1', 'f2']) self.assertTrue(idx_f1f2.unique) idx_f2f3 = indexes['multiindexmodel_f2_f3'] self.assertEqual(sorted(idx_f2f3.columns), ['f2', 'f3']) self.assertFalse(idx_f2f3.unique) self.assertEqual(idx_f2f3.table, 'multiindexmodel') # SQLite *will* create an index here, so we will always have one. 
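# (On the other backends the composite primary key is itself reported as a
# unique index, so the expected count is one either way.)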
indexes = test_db.get_indexes(CompositeKeyModel._meta.db_table) self.assertEqual(len(indexes), 1) self.assertEqual(sorted(indexes[0].columns), ['f1', 'f2']) self.assertTrue(indexes[0].unique) def test_get_columns(self): def get_columns(model): return dict( (column.name, column) for column in test_db.get_columns(model._meta.db_table)) def assertColumns(model, col_names, nullable, pks): columns = get_columns(model) self.assertEqual(sorted(columns), col_names) for column, metadata in columns.items(): self.assertEqual(metadata.null, column in nullable) self.assertEqual(metadata.table, model._meta.db_table) self.assertEqual(metadata.primary_key, column in pks) assertColumns(User, ['id', 'username'], [], ['id']) assertColumns( Blog, ['content', 'pk', 'pub_date', 'title', 'user_id'], ['pub_date'], ['pk']) assertColumns(UniqueModel, ['id', 'name'], [], ['id']) assertColumns(MultiIndexModel, ['f1', 'f2', 'f3', 'id'], [], ['id']) assertColumns( CompositeKeyModel, ['f1', 'f2', 'f3'], [], ['f1', 'f2']) assertColumns( Category, ['id', 'name', 'parent_id'], ['parent_id'], ['id']) def test_get_primary_keys(self): def assertPKs(model_class, expected): self.assertEqual( test_db.get_primary_keys(model_class._meta.db_table), expected) assertPKs(User, ['id']) assertPKs(Blog, ['pk']) assertPKs(MultiIndexModel, ['id']) assertPKs(CompositeKeyModel, ['f1', 'f2']) assertPKs(UniqueModel, ['id']) assertPKs(Category, ['id']) def test_get_foreign_keys(self): def assertFKs(model_class, expected): foreign_keys = test_db.get_foreign_keys(model_class._meta.db_table) self.assertEqual(len(foreign_keys), len(expected)) self.assertEqual( [(fk.column, fk.dest_table, fk.dest_column) for fk in foreign_keys], expected) assertFKs(Category, [('parent_id', 'category', 'id')]) assertFKs(User, []) assertFKs(Blog, [('user_id', 'users', 'id')]) assertFKs(Comment, [('blog_id', 'blog', 'pk')]) peewee-2.10.2/playhouse/tests/test_keys.py000066400000000000000000000447311316645060400206230ustar00rootroot00000000000000from peewee import DeferredRelation from peewee import Model from peewee import SqliteDatabase from playhouse.tests.base import compiler from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if from playhouse.tests.base import skip_test_if from playhouse.tests.base import test_db from playhouse.tests.models import * class TestForeignKeyToNonPrimaryKey(ModelTestCase): requires = [Package, PackageItem] def setUp(self): super(TestForeignKeyToNonPrimaryKey, self).setUp() for barcode in ['101', '102']: Package.create(barcode=barcode) for i in range(2): PackageItem.create( package=barcode, title='%s-%s' % (barcode, i)) def test_fk_resolution(self): pi = PackageItem.get(PackageItem.title == '101-0') self.assertEqual(pi._data['package'], '101') self.assertEqual(pi.package, Package.get(Package.barcode == '101')) def test_select_generation(self): p = Package.get(Package.barcode == '101') self.assertEqual( [item.title for item in p.items.order_by(PackageItem.title)], ['101-0', '101-1']) class TestMultipleForeignKey(ModelTestCase): requires = [Manufacturer, Component, Computer] test_values = [ ['3TB', '16GB', 'i7'], ['128GB', '1GB', 'ARM'], ] def setUp(self): super(TestMultipleForeignKey, self).setUp() intel = Manufacturer.create(name='Intel') amd = Manufacturer.create(name='AMD') kingston = Manufacturer.create(name='Kingston') for hard_drive, memory, processor in self.test_values: c = Computer.create( 
hard_drive=Component.create(name=hard_drive), memory=Component.create(name=memory, manufacturer=kingston), processor=Component.create(name=processor, manufacturer=intel)) # The 2nd computer has an AMD processor. c.processor.manufacturer = amd c.processor.save() def test_multi_join(self): HDD = Component.alias() HDDMf = Manufacturer.alias() Memory = Component.alias() MemoryMf = Manufacturer.alias() Processor = Component.alias() ProcessorMf = Manufacturer.alias() query = (Computer .select( Computer, HDD, Memory, Processor, HDDMf, MemoryMf, ProcessorMf) .join(HDD, on=( Computer.hard_drive == HDD.id).alias('hard_drive')) .join( HDDMf, JOIN.LEFT_OUTER, on=(HDD.manufacturer == HDDMf.id)) .switch(Computer) .join(Memory, on=( Computer.memory == Memory.id).alias('memory')) .join( MemoryMf, JOIN.LEFT_OUTER, on=(Memory.manufacturer == MemoryMf.id)) .switch(Computer) .join(Processor, on=( Computer.processor == Processor.id).alias('processor')) .join( ProcessorMf, JOIN.LEFT_OUTER, on=(Processor.manufacturer == ProcessorMf.id)) .order_by(Computer.id)) with self.assertQueryCount(1): vals = [] manufacturers = [] for computer in query: components = [ computer.hard_drive, computer.memory, computer.processor] vals.append([component.name for component in components]) for component in components: if component.manufacturer: manufacturers.append(component.manufacturer.name) else: manufacturers.append(None) self.assertEqual(vals, self.test_values) self.assertEqual(manufacturers, [ None, 'Kingston', 'Intel', None, 'Kingston', 'AMD', ]) class TestMultipleForeignKeysJoining(ModelTestCase): requires = [User, Relationship] def test_multiple_fks(self): a = User.create(username='a') b = User.create(username='b') c = User.create(username='c') self.assertEqual(list(a.relationships), []) self.assertEqual(list(a.related_to), []) r_ab = Relationship.create(from_user=a, to_user=b) self.assertEqual(list(a.relationships), [r_ab]) self.assertEqual(list(a.related_to), []) self.assertEqual(list(b.relationships), []) self.assertEqual(list(b.related_to), [r_ab]) r_bc = Relationship.create(from_user=b, to_user=c) following = User.select().join( Relationship, on=Relationship.to_user ).where(Relationship.from_user == a) self.assertEqual(list(following), [b]) followers = User.select().join( Relationship, on=Relationship.from_user ).where(Relationship.to_user == a.id) self.assertEqual(list(followers), []) following = User.select().join( Relationship, on=Relationship.to_user ).where(Relationship.from_user == b.id) self.assertEqual(list(following), [c]) followers = User.select().join( Relationship, on=Relationship.from_user ).where(Relationship.to_user == b.id) self.assertEqual(list(followers), [a]) following = User.select().join( Relationship, on=Relationship.to_user ).where(Relationship.from_user == c.id) self.assertEqual(list(following), []) followers = User.select().join( Relationship, on=Relationship.from_user ).where(Relationship.to_user == c.id) self.assertEqual(list(followers), [b]) class TestCompositePrimaryKey(ModelTestCase): requires = [Tag, Post, TagPostThrough, CompositeKeyModel, User, UserThing] def setUp(self): super(TestCompositePrimaryKey, self).setUp() tags = [Tag.create(tag='t%d' % i) for i in range(1, 4)] posts = [Post.create(title='p%d' % i) for i in range(1, 4)] p12 = Post.create(title='p12') for t, p in zip(tags, posts): TagPostThrough.create(tag=t, post=p) TagPostThrough.create(tag=tags[0], post=p12) TagPostThrough.create(tag=tags[1], post=p12) def test_create_table_query(self): query, params = 
compiler.create_table(TagPostThrough) self.assertEqual( query, 'CREATE TABLE "tagpostthrough" ' '("tag_id" INTEGER NOT NULL, ' '"post_id" INTEGER NOT NULL, ' 'PRIMARY KEY ("tag_id", "post_id"), ' 'FOREIGN KEY ("tag_id") REFERENCES "tag" ("id"), ' 'FOREIGN KEY ("post_id") REFERENCES "post" ("id")' ')') def test_get_set_id(self): tpt = (TagPostThrough .select() .join(Tag) .switch(TagPostThrough) .join(Post) .order_by(Tag.tag, Post.title)).get() # Sanity check. self.assertEqual(tpt.tag.tag, 't1') self.assertEqual(tpt.post.title, 'p1') tag = Tag.select().where(Tag.tag == 't1').get() post = Post.select().where(Post.title == 'p1').get() self.assertEqual(tpt._get_pk_value(), (tag, post)) # set_id is a no-op. tpt._set_pk_value(None) self.assertEqual(tpt._get_pk_value(), (tag, post)) def test_querying(self): posts = (Post.select() .join(TagPostThrough) .join(Tag) .where(Tag.tag == 't1') .order_by(Post.title)) self.assertEqual([p.title for p in posts], ['p1', 'p12']) tags = (Tag.select() .join(TagPostThrough) .join(Post) .where(Post.title == 'p12') .order_by(Tag.tag)) self.assertEqual([t.tag for t in tags], ['t1', 't2']) def test_composite_key_model(self): CKM = CompositeKeyModel values = [ ('a', 1, 1.0), ('a', 2, 2.0), ('b', 1, 1.0), ('b', 2, 2.0)] c1, c2, c3, c4 = [ CKM.create(f1=f1, f2=f2, f3=f3) for f1, f2, f3 in values] # Update a single row, giving it a new value for `f3`. CKM.update(f3=3.0).where((CKM.f1 == 'a') & (CKM.f2 == 2)).execute() c = CKM.get((CKM.f1 == 'a') & (CKM.f2 == 2)) self.assertEqual(c.f3, 3.0) # Update the `f3` value and call `save()`, triggering an update. c3.f3 = 4.0 c3.save() c = CKM.get((CKM.f1 == 'b') & (CKM.f2 == 1)) self.assertEqual(c.f3, 4.0) # Only 1 row updated. query = CKM.select().where(CKM.f3 == 4.0) self.assertEqual(query.wrapped_count(), 1) # Unfortunately this does not work since the original value of the # PK is lost (and hence cannot be used to update). 
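# A workaround sketch (not part of this test): move the row with an explicit
# update keyed on the old composite values, e.g.
#     CKM.update(f1='c').where((CKM.f1 == 'b') & (CKM.f2 == 2)).execute()
# or delete the instance and re-create it under the new key.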
c4.f1 = 'c' c4.save() self.assertRaises( CKM.DoesNotExist, CKM.get, (CKM.f1 == 'c') & (CKM.f2 == 2)) def test_count_composite_key(self): CKM = CompositeKeyModel values = [ ('a', 1, 1.0), ('a', 2, 2.0), ('b', 1, 1.0), ('b', 2, 1.0)] for f1, f2, f3 in values: CKM.create(f1=f1, f2=f2, f3=f3) self.assertEqual(CKM.select().wrapped_count(), 4) self.assertEqual(CKM.select().count(), 4) self.assertTrue(CKM.select().where( (CKM.f1 == 'a') & (CKM.f2 == 1)).exists()) self.assertFalse(CKM.select().where( (CKM.f1 == 'a') & (CKM.f2 == 3)).exists()) def test_delete_instance(self): u1, u2 = [User.create(username='u%s' % i) for i in range(2)] ut1 = UserThing.create(thing='t1', user=u1) ut2 = UserThing.create(thing='t2', user=u1) ut3 = UserThing.create(thing='t1', user=u2) ut4 = UserThing.create(thing='t3', user=u2) res = ut1.delete_instance() self.assertEqual(res, 1) self.assertEqual( [x.thing for x in UserThing.select().order_by(UserThing.thing)], ['t1', 't2', 't3']) def test_composite_key_inheritance(self): db = database_initializer.get_in_memory_database() class Person(TestModel): first = TextField() last = TextField() class Meta: database = db primary_key = CompositeKey('first', 'last') class Employee(Person): title = TextField() self.assertTrue(Employee._meta.composite_key) primary_key = Employee._meta.primary_key self.assertTrue(isinstance(primary_key, CompositeKey)) self.assertEqual(primary_key.field_names, ('first', 'last')) ddl, _ = compiler.create_table(Employee) self.assertEqual(ddl, ( 'CREATE TABLE "employee" ("first" TEXT NOT NULL, ' '"last" TEXT NOT NULL, "title" TEXT NOT NULL, ' 'PRIMARY KEY ("first", "last"))')) class TestForeignKeyNonPrimaryKeyCreateTable(PeeweeTestCase): def test_create_table(self): class A(TestModel): cf = CharField(max_length=100, unique=True) df = DecimalField( max_digits=4, decimal_places=2, auto_round=True, unique=True) class CF(TestModel): a = ForeignKeyField(A, to_field='cf') class DF(TestModel): a = ForeignKeyField(A, to_field='df') cf_create, _ = compiler.create_table(CF) self.assertEqual( cf_create, 'CREATE TABLE "cf" (' '"id" INTEGER NOT NULL PRIMARY KEY, ' '"a_id" VARCHAR(100) NOT NULL, ' 'FOREIGN KEY ("a_id") REFERENCES "a" ("cf"))') df_create, _ = compiler.create_table(DF) self.assertEqual( df_create, 'CREATE TABLE "df" (' '"id" INTEGER NOT NULL PRIMARY KEY, ' '"a_id" DECIMAL(4, 2) NOT NULL, ' 'FOREIGN KEY ("a_id") REFERENCES "a" ("df"))') class TestDeferredForeignKey(ModelTestCase): #requires = [Language, Snippet] def setUp(self): super(TestDeferredForeignKey, self).setUp() Snippet.drop_table(True) Language.drop_table(True) Language.create_table() Snippet.create_table() def tearDown(self): super(TestDeferredForeignKey, self).tearDown() Snippet.drop_table(True) Language.drop_table(True) def test_field_definitions(self): self.assertEqual(Snippet._meta.fields['language'].rel_model, Language) self.assertEqual(Language._meta.fields['selected_snippet'].rel_model, Snippet) def test_deferred_relation_resolution(self): orig = len(DeferredRelation._unresolved) class CircularRef1(Model): circ_ref2 = ForeignKeyField( DeferredRelation('circularref2'), null=True) self.assertEqual(len(DeferredRelation._unresolved), orig + 1) class CircularRef2(Model): circ_ref1 = ForeignKeyField(CircularRef1, null=True) self.assertEqual(CircularRef1.circ_ref2.rel_model, CircularRef2) self.assertEqual(CircularRef2.circ_ref1.rel_model, CircularRef1) self.assertEqual(len(DeferredRelation._unresolved), orig) def test_create_table_query(self): query, params = compiler.create_table(Snippet) 
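# Once the deferred relation is resolved, the foreign key compiles like an
# ordinary one -- an integer column plus a REFERENCES clause: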
self.assertEqual( query, 'CREATE TABLE "snippet" ' '("id" INTEGER NOT NULL PRIMARY KEY, ' '"code" TEXT NOT NULL, ' '"language_id" INTEGER NOT NULL, ' 'FOREIGN KEY ("language_id") REFERENCES "language" ("id")' ')') query, params = compiler.create_table(Language) self.assertEqual( query, 'CREATE TABLE "language" ' '("id" INTEGER NOT NULL PRIMARY KEY, ' '"name" VARCHAR(255) NOT NULL, ' '"selected_snippet_id" INTEGER)') def test_storage_retrieval(self): python = Language.create(name='python') javascript = Language.create(name='javascript') p1 = Snippet.create(code="print 'Hello world'", language=python) p2 = Snippet.create(code="print 'Goodbye world'", language=python) j1 = Snippet.create(code="alert('Hello world')", language=javascript) self.assertEqual(Snippet.get(Snippet.id == p1.id).language, python) self.assertEqual(Snippet.get(Snippet.id == j1.id).language, javascript) python.selected_snippet = p2 python.save() self.assertEqual( Language.get(Language.id == python.id).selected_snippet, p2) self.assertEqual( Language.get(Language.id == javascript.id).selected_snippet, None) def test_multiple_refs(self): person_ref = DeferredRelation() class Relationship(TestModel): person_from = ForeignKeyField(person_ref, related_name='f1') person_to = ForeignKeyField(person_ref, related_name='f2') class SomethingElse(TestModel): person = ForeignKeyField(person_ref) class Person(TestModel): name = CharField() person_ref.set_model(Person) p1 = Person(id=1, name='p1') p2 = Person(id=2, name='p2') p3 = Person(id=3, name='p3') r = Relationship(person_from=p1, person_to=p2) s = SomethingElse(person=p3) self.assertEqual(r.person_from.name, 'p1') self.assertEqual(r.person_to.name, 'p2') self.assertEqual(s.person.name, 'p3') class TestSQLiteDeferredForeignKey(PeeweeTestCase): def test_doc_example(self): db = database_initializer.get_in_memory_database() TweetDeferred = DeferredRelation() class Base(Model): class Meta: database = db class User(Base): username = CharField() favorite_tweet = ForeignKeyField(TweetDeferred, null=True) class Tweet(Base): user = ForeignKeyField(User) message = TextField() TweetDeferred.set_model(Tweet) with db.transaction(): User.create_table() Tweet.create_table() # SQLite does not support alter + add constraint. self.assertRaises( OperationalError, lambda: db.create_foreign_key(User, User.favorite_tweet)) class TestForeignKeyConstraints(ModelTestCase): requires = [User, Blog] def setUp(self): self.set_foreign_key_pragma(True) super(TestForeignKeyConstraints, self).setUp() def tearDown(self): self.set_foreign_key_pragma(False) super(TestForeignKeyConstraints, self).tearDown() def set_foreign_key_pragma(self, is_enabled): if not isinstance(test_db, SqliteDatabase): return state = 'on' if is_enabled else 'off' test_db.execute_sql('PRAGMA foreign_keys = %s' % state) def test_constraint_exists(self): # IntegrityError is raised when we specify a non-existent user_id. max_id = User.select(fn.Max(User.id)).scalar() or 0 def will_fail(): with test_db.transaction() as txn: Blog.create(user=max_id + 1, title='testing') self.assertRaises(IntegrityError, will_fail) @skip_test_if(lambda: isinstance(test_db, SqliteDatabase)) def test_constraint_creation(self): class FKC_a(TestModel): name = CharField() fkc_deferred = DeferredRelation() class FKC_b(TestModel): fkc_a = ForeignKeyField(fkc_deferred) fkc_deferred.set_model(FKC_a) with test_db.transaction() as txn: FKC_b.drop_table(True) FKC_a.drop_table(True) FKC_a.create_table() FKC_b.create_table() # Foreign key constraint is not enforced. 
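# (FKC_b's table was created without the constraint, so a dangling reference
# such as fkc_a=-1000 is accepted until create_foreign_key() adds it below.)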
fb = FKC_b.create(fkc_a=-1000) fb.delete_instance() # Add constraint. test_db.create_foreign_key(FKC_b, FKC_b.fkc_a) def _trigger_exc(): with test_db.savepoint() as s1: fb = FKC_b.create(fkc_a=-1000) self.assertRaises(IntegrityError, _trigger_exc) fa = FKC_a.create(name='fa') fb = FKC_b.create(fkc_a=fa) txn.rollback() peewee-2.10.2/playhouse/tests/test_kv.py000066400000000000000000000121341316645060400202600ustar00rootroot00000000000000import threading from peewee import * from playhouse.kv import JSONKeyStore from playhouse.kv import KeyStore from playhouse.kv import PickledKeyStore from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if class TestKeyStore(PeeweeTestCase): def setUp(self): super(TestKeyStore, self).setUp() self.kv = KeyStore(CharField()) self.ordered_kv = KeyStore(CharField(), ordered=True) self.pickled_kv = PickledKeyStore(ordered=True) self.json_kv = JSONKeyStore(ordered=True) self.kv.clear() self.json_kv.clear() def test_json(self): self.json_kv['foo'] = 'bar' self.json_kv['baze'] = {'baze': [1, 2, 3]} self.json_kv['nugget'] = None self.assertEqual(self.json_kv['foo'], 'bar') self.assertEqual(self.json_kv['baze'], {'baze': [1, 2, 3]}) self.assertIsNone(self.json_kv['nugget']) self.assertRaises(KeyError, lambda: self.json_kv['missing']) results = self.json_kv[self.json_kv.key << ['baze', 'bar', 'nugget']] self.assertEqual(results, [ {'baze': [1, 2, 3]}, None, ]) def test_storage(self): self.kv['a'] = 'A' self.kv['b'] = 1 self.assertEqual(self.kv['a'], 'A') self.assertEqual(self.kv['b'], '1') self.assertRaises(KeyError, self.kv.__getitem__, 'c') del(self.kv['a']) self.assertRaises(KeyError, self.kv.__getitem__, 'a') self.kv['a'] = 'A' self.kv['c'] = 'C' self.assertEqual(self.kv[self.kv.key << ('a', 'c')], ['A', 'C']) self.kv[self.kv.key << ('a', 'c')] = 'X' self.assertEqual(self.kv['a'], 'X') self.assertEqual(self.kv['b'], '1') self.assertEqual(self.kv['c'], 'X') key = self.kv.key results = self.kv[key << ('a', 'b')] self.assertEqual(results, ['X', '1']) del(self.kv[self.kv.key << ('a', 'c')]) self.assertRaises(KeyError, self.kv.__getitem__, 'a') self.assertRaises(KeyError, self.kv.__getitem__, 'c') self.assertEqual(self.kv['b'], '1') self.pickled_kv['a'] = 'A' self.pickled_kv['b'] = 1.1 self.assertEqual(self.pickled_kv['a'], 'A') self.assertEqual(self.pickled_kv['b'], 1.1) def test_container_properties(self): self.kv['x'] = 'X' self.kv['y'] = 'Y' self.assertEqual(len(self.kv), 2) self.assertTrue('x' in self.kv) self.assertFalse('a' in self.kv) def test_dict_methods(self): for kv in (self.ordered_kv, self.pickled_kv): kv['a'] = 'A' kv['c'] = 'C' kv['b'] = 'B' self.assertEqual(list(kv.keys()), ['a', 'b', 'c']) self.assertEqual(list(kv.values()), ['A', 'B', 'C']) self.assertEqual(list(kv.items()), [ ('a', 'A'), ('b', 'B'), ('c', 'C'), ]) def test_iteration(self): for kv in (self.ordered_kv, self.pickled_kv): kv['a'] = 'A' kv['c'] = 'C' kv['b'] = 'B' items = list(kv) self.assertEqual(items, [ ('a', 'A'), ('b', 'B'), ('c', 'C'), ]) def test_shared_mem(self): self.kv['a'] = 'xxx' self.assertEqual(self.ordered_kv['a'], 'xxx') def set_k(): kv_t = KeyStore(CharField()) kv_t['b'] = 'yyy' t = threading.Thread(target=set_k) t.start() t.join() self.assertEqual(self.kv['b'], 'yyy') def test_get(self): self.kv['a'] = 'A' self.kv['b'] = 'B' self.assertEqual(self.kv.get('a'), 'A') self.assertEqual(self.kv.get('x'), None) self.assertEqual(self.kv.get('x', 'y'), 'y') self.assertEqual( list(self.kv.get(self.kv.key << ('a', 'b'))), ['A', 'B']) 
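# Note that expression-style gets for missing keys yield an empty result
# set rather than raising KeyError the way item access does: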
self.assertEqual( list(self.kv.get(self.kv.key << ('x', 'y'))), []) def test_pop(self): self.ordered_kv['a'] = 'A' self.ordered_kv['b'] = 'B' self.ordered_kv['c'] = 'C' self.assertEqual(self.ordered_kv.pop('a'), 'A') self.assertEqual(list(self.ordered_kv.keys()), ['b', 'c']) self.assertRaises(KeyError, self.ordered_kv.pop, 'x') self.assertEqual(self.ordered_kv.pop('x', 'y'), 'y') self.assertEqual( list(self.ordered_kv.pop(self.ordered_kv.key << ['b', 'c'])), ['B', 'C']) self.assertEqual(list(self.ordered_kv.keys()), []) try: import psycopg2 except ImportError: psycopg2 = None @skip_if(lambda: psycopg2 is None) class TestPostgresqlKeyStore(PeeweeTestCase): def setUp(self): self.db = PostgresqlDatabase('peewee_test') self.kv = KeyStore(CharField(), ordered=True, database=self.db) self.kv.clear() def tearDown(self): self.db.close() def test_non_native_upsert(self): self.kv['a'] = 'A' self.kv['b'] = 'B' self.assertEqual(self.kv['a'], 'A') self.kv['a'] = 'C' self.assertEqual(self.kv['a'], 'C') peewee-2.10.2/playhouse/tests/test_manytomany.py000066400000000000000000000322701316645060400220370ustar00rootroot00000000000000from peewee import * from playhouse.fields import DeferredThroughModel from playhouse.fields import ManyToManyField from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase db = database_initializer.get_in_memory_database() class BaseModel(Model): class Meta: database = db class User(BaseModel): username = CharField(unique=True) class Note(BaseModel): text = TextField() users = ManyToManyField(User) NoteUserThrough = Note.users.get_through_model() AltThroughDeferred = DeferredThroughModel() class AltNote(BaseModel): text = TextField() users = ManyToManyField(User, through_model=AltThroughDeferred) class AltThroughModel(BaseModel): user = ForeignKeyField(User, related_name='_xx_rel') note = ForeignKeyField(AltNote, related_name='_xx_rel') class Meta: primary_key = CompositeKey('user', 'note') AltThroughDeferred.set_model(AltThroughModel) class TestManyToManyField(ModelTestCase): requires = [User, Note, NoteUserThrough, AltThroughModel, AltNote] user_to_note = { 'charlie': [1, 2], 'huey': [2, 3], 'mickey': [3, 4], 'zaizee': [4, 5]} def setUp(self): super(TestManyToManyField, self).setUp() usernames = ['charlie', 'huey', 'mickey', 'zaizee'] n_notes = 5 for username in usernames: User.create(username=username) for i in range(n_notes): Note.create(text='note-%s' % (i + 1)) def test_through_model(self): self.assertEqual(len(NoteUserThrough._meta.fields), 3) fields = NoteUserThrough._meta.fields self.assertEqual(sorted(fields), ['id', 'note', 'user']) note_field = fields['note'] self.assertEqual(note_field.rel_model, Note) self.assertFalse(note_field.null) user_field = fields['user'] self.assertEqual(user_field.rel_model, User) self.assertFalse(user_field.null) def _create_relationship(self): for username, notes in self.user_to_note.items(): user = User.get(User.username == username) for note in notes: NoteUserThrough.create( note=Note.get(Note.text == 'note-%s' % note), user=user) def assertNotes(self, query, expected): notes = [note.text for note in query] self.assertEqual( sorted(notes), ['note-%s' % i for i in sorted(expected)]) def assertUsers(self, query, expected): usernames = [user.username for user in query] self.assertEqual(sorted(usernames), sorted(expected)) def test_descriptor_query(self): self._create_relationship() charlie, huey, mickey, zaizee = User.select().order_by(User.username) with self.assertQueryCount(1): 
self.assertNotes(charlie.notes, [1, 2]) with self.assertQueryCount(1): self.assertNotes(zaizee.notes, [4, 5]) u = User.create(username='beanie') self.assertNotes(u.notes, []) n1, n2, n3, n4, n5 = Note.select().order_by(Note.text) with self.assertQueryCount(1): self.assertUsers(n1.users, ['charlie']) with self.assertQueryCount(1): self.assertUsers(n2.users, ['charlie', 'huey']) with self.assertQueryCount(1): self.assertUsers(n5.users, ['zaizee']) n6 = Note.create(text='note-6') self.assertUsers(n6.users, []) def test_descriptor_filtering(self): self._create_relationship() charlie, huey, mickey, zaizee = User.select().order_by(User.username) with self.assertQueryCount(1): notes = charlie.notes.order_by(Note.text.desc()) self.assertNotes(notes, [2, 1]) with self.assertQueryCount(1): notes = huey.notes.where(Note.text != 'note-3') self.assertNotes(notes, [2]) def test_set_values(self): charlie = User.get(User.username == 'charlie') huey = User.get(User.username == 'huey') n1, n2, n3, n4, n5 = Note.select().order_by(Note.text) with self.assertQueryCount(2): charlie.notes = n1 self.assertNotes(charlie.notes, [1]) self.assertUsers(n1.users, ['charlie']) charlie.notes = [n2, n3] self.assertNotes(charlie.notes, [2, 3]) self.assertUsers(n1.users, []) self.assertUsers(n2.users, ['charlie']) self.assertUsers(n3.users, ['charlie']) with self.assertQueryCount(2): huey.notes = Note.select().where(~(Note.text.endswith('4'))) self.assertNotes(huey.notes, [1, 2, 3, 5]) def test_add(self): charlie = User.get(User.username == 'charlie') huey = User.get(User.username == 'huey') n1, n2, n3, n4, n5 = Note.select().order_by(Note.text) charlie.notes.add([n1, n2]) self.assertNotes(charlie.notes, [1, 2]) self.assertUsers(n1.users, ['charlie']) self.assertUsers(n2.users, ['charlie']) others = [n3, n4, n5] for note in others: self.assertUsers(note.users, []) with self.assertQueryCount(1): huey.notes.add(Note.select().where( fn.substr(Note.text, 6, 1) << ['1', '3', '5'])) self.assertNotes(huey.notes, [1, 3, 5]) self.assertUsers(n1.users, ['charlie', 'huey']) self.assertUsers(n2.users, ['charlie']) self.assertUsers(n3.users, ['huey']) self.assertUsers(n4.users, []) self.assertUsers(n5.users, ['huey']) with self.assertQueryCount(1): charlie.notes.add(n4) self.assertNotes(charlie.notes, [1, 2, 4]) with self.assertQueryCount(2): n3.users.add( User.select().where(User.username != 'charlie'), clear_existing=True) self.assertUsers(n3.users, ['huey', 'mickey', 'zaizee']) def test_add_by_ids(self): charlie = User.get(User.username == 'charlie') n1, n2, n3 = Note.select().order_by(Note.text).limit(3) charlie.notes.add([n1.id, n2.id]) self.assertNotes(charlie.notes, [1, 2]) self.assertUsers(n1.users, ['charlie']) self.assertUsers(n2.users, ['charlie']) self.assertUsers(n3.users, []) def test_unique(self): n1 = Note.get(Note.text == 'note-1') charlie = User.get(User.username == 'charlie') def add_user(note, user): with self.assertQueryCount(1): note.users.add(user) add_user(n1, charlie) self.assertRaises(IntegrityError, add_user, n1, charlie) add_user(n1, User.get(User.username == 'zaizee')) self.assertUsers(n1.users, ['charlie', 'zaizee']) def test_remove(self): self._create_relationship() charlie, huey, mickey, zaizee = User.select().order_by(User.username) n1, n2, n3, n4, n5 = Note.select().order_by(Note.text) with self.assertQueryCount(1): charlie.notes.remove([n1, n2, n3]) self.assertNotes(charlie.notes, []) self.assertNotes(huey.notes, [2, 3]) with self.assertQueryCount(1): huey.notes.remove(Note.select().where( Note.text <<
['note-2', 'note-4', 'note-5'])) self.assertNotes(huey.notes, [3]) self.assertNotes(mickey.notes, [3, 4]) self.assertNotes(zaizee.notes, [4, 5]) with self.assertQueryCount(1): n4.users.remove([charlie, mickey]) self.assertUsers(n4.users, ['zaizee']) with self.assertQueryCount(1): n5.users.remove(User.select()) self.assertUsers(n5.users, []) def test_remove_by_id(self): self._create_relationship() charlie, huey, mickey, zaizee = User.select().order_by(User.username) n1, n2, n3, n4, n5 = Note.select().order_by(Note.text) charlie.notes.add([n3, n4]) with self.assertQueryCount(1): charlie.notes.remove([n1.id, n3.id]) self.assertNotes(charlie.notes, [2, 4]) self.assertNotes(huey.notes, [2, 3]) def test_clear(self): charlie = User.get(User.username == 'charlie') huey = User.get(User.username == 'huey') charlie.notes = Note.select() huey.notes = Note.select() self.assertEqual(charlie.notes.count(), 5) self.assertEqual(huey.notes.count(), 5) charlie.notes.clear() self.assertEqual(charlie.notes.count(), 0) self.assertEqual(huey.notes.count(), 5) n1 = Note.get(Note.text == 'note-1') n2 = Note.get(Note.text == 'note-2') n1.users = User.select() n2.users = User.select() self.assertEqual(n1.users.count(), 4) self.assertEqual(n2.users.count(), 4) n1.users.clear() self.assertEqual(n1.users.count(), 0) self.assertEqual(n2.users.count(), 4) def test_manual_through(self): charlie, huey, mickey, zaizee = User.select().order_by(User.username) alt_notes = [] for i in range(5): alt_notes.append(AltNote.create(text='note-%s' % (i + 1))) self.assertNotes(charlie.altnotes, []) for alt_note in alt_notes: self.assertUsers(alt_note.users, []) n1, n2, n3, n4, n5 = alt_notes # Test adding relationships by setting the descriptor. charlie.altnotes = [n1, n2] with self.assertQueryCount(2): huey.altnotes = AltNote.select().where( fn.substr(AltNote.text, 6, 1) << ['1', '3', '5']) mickey.altnotes.add([n1, n4]) with self.assertQueryCount(2): zaizee.altnotes = AltNote.select() # Test that the notes were added correctly. with self.assertQueryCount(1): self.assertNotes(charlie.altnotes, [1, 2]) with self.assertQueryCount(1): self.assertNotes(huey.altnotes, [1, 3, 5]) with self.assertQueryCount(1): self.assertNotes(mickey.altnotes, [1, 4]) with self.assertQueryCount(1): self.assertNotes(zaizee.altnotes, [1, 2, 3, 4, 5]) # Test removing notes. with self.assertQueryCount(1): charlie.altnotes.remove(n1) self.assertNotes(charlie.altnotes, [2]) with self.assertQueryCount(1): huey.altnotes.remove([n1, n2, n3]) self.assertNotes(huey.altnotes, [5]) with self.assertQueryCount(1): zaizee.altnotes.remove( AltNote.select().where( fn.substr(AltNote.text, 6, 1) << ['1', '2', '4'])) self.assertNotes(zaizee.altnotes, [3, 5]) # Test the backside of the relationship. 
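# The many-to-many accessor is symmetrical: assigning User rows to
# "note.users" writes the same through-table rows that "user.altnotes"
# reads, so either side may be used to query or mutate the relation.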
n1.users = User.select().where(User.username != 'charlie') with self.assertQueryCount(1): self.assertUsers(n1.users, ['huey', 'mickey', 'zaizee']) with self.assertQueryCount(1): self.assertUsers(n2.users, ['charlie']) with self.assertQueryCount(1): self.assertUsers(n3.users, ['zaizee']) with self.assertQueryCount(1): self.assertUsers(n4.users, ['mickey']) with self.assertQueryCount(1): self.assertUsers(n5.users, ['huey', 'zaizee']) with self.assertQueryCount(1): n1.users.remove(User.select()) with self.assertQueryCount(1): n5.users.remove([charlie, huey]) with self.assertQueryCount(1): self.assertUsers(n1.users, []) with self.assertQueryCount(1): self.assertUsers(n5.users, ['zaizee']) class Person(BaseModel): name = CharField() class Soul(BaseModel): person = ForeignKeyField(Person, primary_key=True) class SoulList(BaseModel): name = CharField() souls = ManyToManyField(Soul, related_name='lists') SoulListThrough = SoulList.souls.get_through_model() class TestForeignKeyPrimaryKeyManyToMany(ModelTestCase): requires = [Person, Soul, SoulList, SoulListThrough] test_data = ( ('huey', ('cats', 'evil')), ('zaizee', ('cats', 'good')), ('mickey', ('dogs', 'good')), ('zombie', ()), ) def setUp(self): super(TestForeignKeyPrimaryKeyManyToMany, self).setUp() name2list = {} for name, lists in self.test_data: p = Person.create(name=name) s = Soul.create(person=p) for l in lists: if l not in name2list: name2list[l] = SoulList.create(name=l) name2list[l].souls.add(s) def soul_for(self, name): return Soul.select().join(Person).where(Person.name == name).get() def assertLists(self, l1, l2): self.assertEqual(sorted(list(l1)), sorted(list(l2))) def test_pk_is_fk(self): list2names = {} for name, lists in self.test_data: soul = self.soul_for(name) self.assertLists([l.name for l in soul.lists], lists) for l in lists: list2names.setdefault(l, []) list2names[l].append(name) for list_name, names in list2names.items(): soul_list = SoulList.get(SoulList.name == list_name) self.assertLists([s.person.name for s in soul_list.souls], names) def test_empty(self): sl = SoulList.create(name='empty') self.assertEqual(list(sl.souls), []) peewee-2.10.2/playhouse/tests/test_migrate.py000066400000000000000000000563231316645060400213000ustar00rootroot00000000000000import datetime import os from functools import partial from peewee import * from peewee import print_ from playhouse.migrate import * from playhouse.test_utils import count_queries from playhouse.tests.base import database_initializer from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if try: from psycopg2cffi import compat compat.register() except ImportError: pass try: import psycopg2 except ImportError: psycopg2 = None try: import MySQLdb as mysql except ImportError: try: import pymysql as mysql except ImportError: mysql = None if mysql: mysql_db = database_initializer.get_database('mysql') else: mysql_db = None if psycopg2: pg_db = database_initializer.get_database('postgres') else: pg_db = None sqlite_db = SqliteDatabase(':memory:') class Tag(Model): tag = CharField() class Person(Model): first_name = CharField() last_name = CharField() dob = DateField(null=True) class User(Model): id = CharField(primary_key=True, max_length=20) password = CharField(default='secret') class Meta: db_table = 'users' class Page(Model): name = CharField(max_length=100, unique=True, null=True) user = ForeignKeyField(User, null=True, related_name='pages') class Session(Model): user = ForeignKeyField(User, unique=True, related_name='sessions') updated_at 
= DateField(null=True) class IndexModel(Model): first_name = CharField() last_name = CharField() data = IntegerField(unique=True) class Meta: database = sqlite_db indexes = ( (('first_name', 'last_name'), True), ) MODELS = [ Person, Tag, User, Page, Session ] class BaseMigrationTestCase(object): database = None migrator_class = None # Each database behaves slightly differently. _exception_add_not_null = True _person_data = [ ('Charlie', 'Leifer', None), ('Huey', 'Kitty', datetime.date(2011, 5, 1)), ('Mickey', 'Dog', datetime.date(2008, 6, 1)), ] def setUp(self): super(BaseMigrationTestCase, self).setUp() for model_class in MODELS: model_class._meta.database = self.database self.database.drop_tables(MODELS, True) self.database.create_tables(MODELS) self.migrator = self.migrator_class(self.database) if 'newpages' in User._meta.reverse_rel: del User._meta.reverse_rel['newpages'] delattr(User, 'newpages') def tearDown(self): super(BaseMigrationTestCase, self).tearDown() for model_class in MODELS: model_class._meta.database = self.database self.database.drop_tables(MODELS, True) def test_add_column(self): # Create some fields with a variety of NULL / default values. df = DateTimeField(null=True) df_def = DateTimeField(default=datetime.datetime(2012, 1, 1)) cf = CharField(max_length=200, default='') bf = BooleanField(default=True) ff = FloatField(default=0) # Create two rows in the Tag table to test the handling of adding # non-null fields. t1 = Tag.create(tag='t1') t2 = Tag.create(tag='t2') # Convenience function for generating `add_column` migrations. add_column = partial(self.migrator.add_column, 'tag') # Run the migration. migrate( add_column('pub_date', df), add_column('modified_date', df_def), add_column('comment', cf), add_column('is_public', bf), add_column('popularity', ff)) # Create a new tag model to represent the fields we added. class NewTag(Model): tag = CharField() pub_date = df modified_date = df_def comment = cf is_public = bf popularity = ff class Meta: database = self.database db_table = Tag._meta.db_table query = (NewTag .select( NewTag.id, NewTag.tag, NewTag.pub_date, NewTag.modified_date, NewTag.comment, NewTag.is_public, NewTag.popularity) .order_by(NewTag.tag.asc())) # Verify the resulting rows are correct. 
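# Rows that existed before the migration receive NULL for the nullable
# column and the declared default for each of the other added columns,
# which is exactly what the tuples below encode for t1 and t2.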
self.assertEqual(list(query.tuples()), [ (t1.id, 't1', None, datetime.datetime(2012, 1, 1), '', True, 0.0), (t2.id, 't2', None, datetime.datetime(2012, 1, 1), '', True, 0.0), ]) def _create_people(self): for first, last, dob in self._person_data: Person.create(first_name=first, last_name=last, dob=dob) def get_column_names(self, tbl): cursor = self.database.execute_sql('select * from %s limit 1' % tbl) return set([col[0] for col in cursor.description]) def test_drop_column(self): self._create_people() migrate( self.migrator.drop_column('person', 'last_name'), self.migrator.drop_column('person', 'dob')) column_names = self.get_column_names('person') self.assertEqual(column_names, set(['id', 'first_name'])) User.create(id='charlie', password='12345') User.create(id='huey', password='meow') migrate(self.migrator.drop_column('users', 'password')) column_names = self.get_column_names('users') self.assertEqual(column_names, set(['id'])) data = [row for row in User.select(User.id).order_by(User.id).tuples()] self.assertEqual(data, [ ('charlie',), ('huey',),]) def test_rename_column(self): self._create_people() migrate( self.migrator.rename_column('person', 'first_name', 'first'), self.migrator.rename_column('person', 'last_name', 'last')) column_names = self.get_column_names('person') self.assertEqual(column_names, set(['id', 'first', 'last', 'dob'])) class NewPerson(Model): first = CharField() last = CharField() dob = DateField() class Meta: database = self.database db_table = Person._meta.db_table query = (NewPerson .select( NewPerson.first, NewPerson.last, NewPerson.dob) .order_by(NewPerson.first)) self.assertEqual(list(query.tuples()), self._person_data) def test_rename_gh380(self): u1 = User.create(id='charlie') u2 = User.create(id='huey') p1 = Page.create(name='p1-1', user=u1) p2 = Page.create(name='p2-1', user=u1) p3 = Page.create(name='p3-2', user=u2) migrate(self.migrator.rename_column('page', 'name', 'title')) column_names = self.get_column_names('page') self.assertEqual(column_names, set(['id', 'title', 'user_id'])) class NewPage(Model): title = CharField(max_length=100, unique=True, null=True) user = ForeignKeyField(User, null=True, related_name='newpages') class Meta: database = self.database db_table = Page._meta.db_table query = (NewPage .select( NewPage.title, NewPage.user) .order_by(NewPage.title)) self.assertEqual( [(np.title, np.user.id) for np in query], [('p1-1', 'charlie'), ('p2-1', 'charlie'), ('p3-2', 'huey')]) def test_add_not_null(self): self._create_people() def addNotNull(): with self.database.transaction(): migrate(self.migrator.add_not_null('person', 'dob')) # We cannot make the `dob` field not null because there is currently # a null value there. if self._exception_add_not_null: self.assertRaises(IntegrityError, addNotNull) (Person .update(dob=datetime.date(2000, 1, 2)) .where(Person.dob >> None) .execute()) # Now we can make the column not null. addNotNull() # And attempting to insert a null value results in an integrity error. 
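# Which exception class is raised for the NOT NULL violation differs by
# backend, hence the tuple of accepted exception types below.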
with self.database.transaction(): with self.assertRaisesCtx((IntegrityError, OperationalError)): Person.create( first_name='Kirby', last_name='Snazebrauer', dob=None) def test_drop_not_null(self): self._create_people() migrate( self.migrator.drop_not_null('person', 'first_name'), self.migrator.drop_not_null('person', 'last_name')) p = Person.create(first_name=None, last_name=None) query = (Person .select() .where( (Person.first_name >> None) & (Person.last_name >> None))) self.assertEqual(query.count(), 1) def test_modify_not_null_foreign_key(self): user = User.create(id='charlie') Page.create(name='null user') Page.create(name='charlie', user=user) def addNotNull(): with self.database.transaction(): migrate(self.migrator.add_not_null('page', 'user_id')) if self._exception_add_not_null: self.assertRaises(IntegrityError, addNotNull) Page.update(user=user).where(Page.user.is_null()).execute() addNotNull() # And attempting to insert a null value results in an integrity error. with self.database.transaction(): with self.assertRaisesCtx((OperationalError, IntegrityError)): Page.create( name='fails', user=None) # Now we will drop it. with self.database.transaction(): migrate(self.migrator.drop_not_null('page', 'user_id')) self.assertEqual(Page.select().where(Page.user.is_null()).count(), 0) Page.create(name='succeeds', user=None) self.assertEqual(Page.select().where(Page.user.is_null()).count(), 1) def test_rename_table(self): t1 = Tag.create(tag='t1') t2 = Tag.create(tag='t2') # Move the tag data into a new model/table. class Tag_asdf(Tag): pass self.assertEqual(Tag_asdf._meta.db_table, 'tag_asdf') # Drop the new table just to be safe. Tag_asdf.drop_table(True) # Rename the tag table. migrate(self.migrator.rename_table('tag', 'tag_asdf')) # Verify the data was moved. query = (Tag_asdf .select() .order_by(Tag_asdf.tag)) self.assertEqual([t.tag for t in query], ['t1', 't2']) # Verify the old table is gone. with self.database.transaction(): self.assertRaises( DatabaseError, Tag.create, tag='t3') def test_add_index(self): # Create a unique index on first and last names. columns = ('first_name', 'last_name') migrate(self.migrator.add_index('person', columns, True)) Person.create(first_name='first', last_name='last') with self.database.transaction(): self.assertRaises( IntegrityError, Person.create, first_name='first', last_name='last') def test_add_unique_column(self): uf = CharField(default='', unique=True) # Run the migration. migrate(self.migrator.add_column('tag', 'unique_field', uf)) # Create a new tag model to represent the fields we added. class NewTag(Model): tag = CharField() unique_field = uf class Meta: database = self.database db_table = Tag._meta.db_table NewTag.create(tag='t1', unique_field='u1') NewTag.create(tag='t2', unique_field='u2') with self.database.atomic(): self.assertRaises(IntegrityError, NewTag.create, tag='t3', unique_field='u1') def test_drop_index(self): # Create a unique index. self.test_add_index() # Now drop the unique index. 
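# The unique index created in test_add_index was not explicitly named,
# so it carries peewee's generated "<table>_<col1>_<col2>" name. A
# minimal sketch of the naming behavior, reusing this test's migrator:
#
#     migrate(self.migrator.add_index('person',
#                                     ('first_name', 'last_name'), True))
#     # -> index is created as "person_first_name_last_name"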
migrate( self.migrator.drop_index('person', 'person_first_name_last_name')) Person.create(first_name='first', last_name='last') query = (Person .select() .where( (Person.first_name == 'first') & (Person.last_name == 'last'))) self.assertEqual(query.count(), 2) def test_add_and_remove(self): operations = [] field = CharField(default='foo') for i in range(10): operations.append(self.migrator.add_column('tag', 'foo', field)) operations.append(self.migrator.drop_column('tag', 'foo')) migrate(*operations) col_names = self.get_column_names('tag') self.assertEqual(col_names, set(['id', 'tag'])) def test_multiple_operations(self): self.database.execute_sql('drop table if exists person_baze;') self.database.execute_sql('drop table if exists person_nugg;') self._create_people() field_n = CharField(null=True) field_d = CharField(default='test') operations = [ self.migrator.add_column('person', 'field_null', field_n), self.migrator.drop_column('person', 'first_name'), self.migrator.add_column('person', 'field_default', field_d), self.migrator.rename_table('person', 'person_baze'), self.migrator.rename_table('person_baze', 'person_nugg'), self.migrator.rename_column('person_nugg', 'last_name', 'last'), self.migrator.add_index('person_nugg', ('last',), True), ] migrate(*operations) class PersonNugg(Model): field_null = field_n field_default = field_d last = CharField() dob = DateField(null=True) class Meta: database = self.database db_table = 'person_nugg' people = (PersonNugg .select( PersonNugg.field_null, PersonNugg.field_default, PersonNugg.last, PersonNugg.dob) .order_by(PersonNugg.last) .tuples()) expected = [ (None, 'test', 'Dog', datetime.date(2008, 6, 1)), (None, 'test', 'Kitty', datetime.date(2011, 5, 1)), (None, 'test', 'Leifer', None), ] self.assertEqual(list(people), expected) with self.database.transaction(): self.assertRaises( IntegrityError, PersonNugg.create, last='Leifer', field_default='bazer') def test_add_foreign_key(self): if hasattr(Person, 'newtag_set'): delattr(Person, 'newtag_set') del Person._meta.reverse_rel['newtag_set'] # Ensure no foreign keys are present at the beginning of the test. 
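# Adding a ForeignKeyField via add_column() should create the column
# and its FOREIGN KEY constraint together; a condensed sketch of the
# flow exercised below (same names as this test):
#
#     field = ForeignKeyField(Person, null=True, to_field=Person.id)
#     migrate(self.migrator.add_column('tag', 'person_id', field))
#     assert self.database.get_foreign_keys('tag')  # constraint exists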
self.assertEqual(self.database.get_foreign_keys('tag'), []) field = ForeignKeyField(Person, null=True, to_field=Person.id) migrate(self.migrator.add_column('tag', 'person_id', field)) class NewTag(Tag): person = field class Meta: db_table = 'tag' p = Person.create(first_name='First', last_name='Last') t1 = NewTag.create(tag='t1', person=p) t2 = NewTag.create(tag='t2') t1_db = NewTag.get(NewTag.tag == 't1') self.assertEqual(t1_db.person, p) t2_db = NewTag.get(NewTag.tag == 't2') self.assertEqual(t2_db.person, None) foreign_keys = self.database.get_foreign_keys('tag') self.assertEqual(len(foreign_keys), 1) foreign_key = foreign_keys[0] self.assertEqual(foreign_key.column, 'person_id') self.assertEqual(foreign_key.dest_column, 'id') self.assertEqual(foreign_key.dest_table, 'person') def test_drop_foreign_key(self): migrate(self.migrator.drop_column('page', 'user_id')) columns = self.database.get_columns('page') self.assertEqual( sorted(column.name for column in columns), ['id', 'name']) self.assertEqual(self.database.get_foreign_keys('page'), []) def test_rename_foreign_key(self): migrate(self.migrator.rename_column('page', 'user_id', 'huey_id')) columns = self.database.get_columns('page') self.assertEqual( sorted(column.name for column in columns), ['huey_id', 'id', 'name']) foreign_keys = self.database.get_foreign_keys('page') self.assertEqual(len(foreign_keys), 1) foreign_key = foreign_keys[0] self.assertEqual(foreign_key.column, 'huey_id') self.assertEqual(foreign_key.dest_column, 'id') self.assertEqual(foreign_key.dest_table, 'users') def test_rename_unique_foreign_key(self): migrate(self.migrator.rename_column('session', 'user_id', 'huey_id')) columns = self.database.get_columns('session') self.assertEqual( sorted(column.name for column in columns), ['huey_id', 'id', 'updated_at']) foreign_keys = self.database.get_foreign_keys('session') self.assertEqual(len(foreign_keys), 1) foreign_key = foreign_keys[0] self.assertEqual(foreign_key.column, 'huey_id') self.assertEqual(foreign_key.dest_column, 'id') self.assertEqual(foreign_key.dest_table, 'users') class SqliteMigrationTestCase(BaseMigrationTestCase, PeeweeTestCase): database = sqlite_db migrator_class = SqliteMigrator def setUp(self): super(SqliteMigrationTestCase, self).setUp() IndexModel.drop_table(True) IndexModel.create_table() def test_valid_column_required(self): self.assertRaises( ValueError, migrate, self.migrator.drop_column('page', 'column_does_not_exist')) self.assertRaises( ValueError, migrate, self.migrator.rename_column('page', 'xx', 'yy')) def test_table_case_insensitive(self): migrate(self.migrator.drop_column('PaGe', 'name')) column_names = self.get_column_names('page') self.assertEqual(column_names, set(['id', 'user_id'])) testing_field = CharField(default='xx') migrate(self.migrator.add_column('pAGE', 'testing', testing_field)) column_names = self.get_column_names('page') self.assertEqual(column_names, set(['id', 'user_id', 'testing'])) migrate(self.migrator.drop_column('indeXmOdel', 'first_name')) indexes = self.migrator.database.get_indexes('indexmodel') self.assertEqual(len(indexes), 1) self.assertEqual(indexes[0].name, 'indexmodel_data') def test_add_column_indexed_table(self): # Ensure that columns can be added to tables that have indexes. 
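# Adding a column must leave the table's existing indexes intact; the
# assertions below verify both the new column list and that the unique
# indexes declared on IndexModel survive the migration.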
field = CharField(default='') migrate(self.migrator.add_column('indexmodel', 'foo', field)) db = self.migrator.database columns = db.get_columns('indexmodel') self.assertEqual(sorted(column.name for column in columns), ['data', 'first_name', 'foo', 'id', 'last_name']) indexes = db.get_indexes('indexmodel') self.assertEqual( sorted((index.name, index.columns) for index in indexes), [('indexmodel_data', ['data']), ('indexmodel_first_name_last_name', ['first_name', 'last_name'])]) def test_rename_column_to_table_name(self): db = self.migrator.database columns = lambda: sorted(col.name for col in db.get_columns('page')) indexes = lambda: sorted((idx.name, idx.columns) for idx in db.get_indexes('page')) orig_columns = columns() orig_indexes = indexes() # Rename "page"."name" to "page"."page". migrate(self.migrator.rename_column('page', 'name', 'page')) # Ensure that the index on "name" is preserved, and that the index on # the user_id foreign key is also preserved. self.assertEqual(columns(), ['id', 'page', 'user_id']) self.assertEqual(indexes(), [ ('page_name', ['page']), ('page_user_id', ['user_id'])]) # Revert the operation and verify migrate(self.migrator.rename_column('page', 'page', 'name')) self.assertEqual(columns(), orig_columns) self.assertEqual(indexes(), orig_indexes) def test_index_preservation(self): with count_queries() as qc: migrate(self.migrator.rename_column( 'indexmodel', 'first_name', 'first')) queries = [log.msg for log in qc.get_queries()] self.assertEqual(queries, [ # Get all the columns. ('PRAGMA table_info("indexmodel")', None), # Get the table definition. ('select name, sql from sqlite_master ' 'where type=? and LOWER(name)=?', ['table', 'indexmodel']), # Get the indexes and indexed columns for the table. ('SELECT name, sql FROM sqlite_master ' 'WHERE tbl_name = ? AND type = ? ORDER BY name', ('indexmodel', 'index')), ('PRAGMA index_list("indexmodel")', None), ('PRAGMA index_info("indexmodel_data")', None), ('PRAGMA index_info("indexmodel_first_name_last_name")', None), # Get foreign keys. ('PRAGMA foreign_key_list("indexmodel")', None), # Drop any temporary table, if it exists. ('DROP TABLE IF EXISTS "indexmodel__tmp__"', []), # Create a temporary table with the renamed column. ('CREATE TABLE "indexmodel__tmp__" (' '"id" INTEGER NOT NULL PRIMARY KEY, ' '"first" VARCHAR(255) NOT NULL, ' '"last_name" VARCHAR(255) NOT NULL, ' '"data" INTEGER NOT NULL)', []), # Copy data from original table into temporary table. ('INSERT INTO "indexmodel__tmp__" ' '("id", "first", "last_name", "data") ' 'SELECT "id", "first_name", "last_name", "data" ' 'FROM "indexmodel"', []), # Drop the original table. ('DROP TABLE "indexmodel"', []), # Rename the temporary table, replacing the original. ('ALTER TABLE "indexmodel__tmp__" RENAME TO "indexmodel"', []), # Re-create the indexes. ('CREATE UNIQUE INDEX "indexmodel_data" ' 'ON "indexmodel" ("data")', []), ('CREATE UNIQUE INDEX "indexmodel_first_name_last_name" ' 'ON "indexmodel" ("first", "last_name")', []) ]) @skip_if(lambda: psycopg2 is None) class PostgresqlMigrationTestCase(BaseMigrationTestCase, PeeweeTestCase): database = pg_db migrator_class = PostgresqlMigrator @skip_if(lambda: mysql is None) class MySQLMigrationTestCase(BaseMigrationTestCase, PeeweeTestCase): database = mysql_db migrator_class = MySQLMigrator # MySQL does not raise an exception when adding a not null constraint # to a column that contains NULL values. 
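# (In its default, non-strict modes MySQL instead coerces the existing
# NULLs to the column type's implicit default, so the shared test cases
# skip the IntegrityError expectation.)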
_exception_add_not_null = False def test_modify_not_null_foreign_key(self): pass peewee-2.10.2/playhouse/tests/test_models.py000066400000000000000000002314001316645060400211220ustar00rootroot00000000000000# encoding=utf-8 import sys from functools import partial from peewee import * from peewee import ModelOptions from peewee import sqlite3 from playhouse.tests.base import compiler from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import normal_compiler from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if from playhouse.tests.base import skip_unless from playhouse.tests.base import test_db from playhouse.tests.base import ulit from playhouse.tests.models import * in_memory_db = database_initializer.get_in_memory_database() supports_tuples = sqlite3.sqlite_version_info >= (3, 15, 0) class GCModel(Model): name = CharField(unique=True) key = CharField() value = CharField() number = IntegerField(default=0) class Meta: database = in_memory_db indexes = ( (('key', 'value'), True), ) def incrementer(): d = {'value': 0} def increment(): d['value'] += 1 return d['value'] return increment class DefaultsModel(Model): field = IntegerField(default=incrementer()) control = IntegerField(default=1) class Meta: database = in_memory_db class TestQueryingModels(ModelTestCase): requires = [User, Blog] def setUp(self): super(TestQueryingModels, self).setUp() self._orig_db_insert_many = test_db.insert_many def tearDown(self): super(TestQueryingModels, self).tearDown() test_db.insert_many = self._orig_db_insert_many def create_users_blogs(self, n=10, nb=5): for i in range(n): u = User.create(username='u%d' % i) for j in range(nb): b = Blog.create(title='b-%d-%d' % (i, j), content=str(j), user=u) def test_select(self): self.create_users_blogs() users = User.select().where(User.username << ['u0', 'u5']).order_by(User.username) self.assertEqual([u.username for u in users], ['u0', 'u5']) blogs = Blog.select().join(User).where( (User.username << ['u0', 'u3']) & (Blog.content == '4') ).order_by(Blog.title) self.assertEqual([b.title for b in blogs], ['b-0-4', 'b-3-4']) users = User.select().paginate(2, 3) self.assertEqual([u.username for u in users], ['u3', 'u4', 'u5']) def test_select_all(self): self.create_users_blogs(2, 2) all_cols = SQL('*') query = Blog.select(all_cols) blogs = [blog for blog in query.order_by(Blog.pk)] self.assertEqual( [b.title for b in blogs], ['b-0-0', 'b-0-1', 'b-1-0', 'b-1-1']) self.assertEqual( [b.user.username for b in blogs], ['u0', 'u0', 'u1', 'u1']) def test_select_subquery(self): # 10 users, 5 blogs each self.create_users_blogs(5, 3) # delete user 2's 2nd blog Blog.delete().where(Blog.title == 'b-2-2').execute() subquery = Blog.select(fn.Count(Blog.pk)).where(Blog.user == User.id).group_by(Blog.user) users = User.select(User, subquery.alias('ct')).order_by(R('ct'), User.id) self.assertEqual([(x.username, x.ct) for x in users], [ ('u2', 2), ('u0', 3), ('u1', 3), ('u3', 3), ('u4', 3), ]) def test_select_with_bind_to(self): self.create_users_blogs(1, 1) blog = Blog.select( Blog, User, (User.username == 'u0').alias('is_u0').bind_to(User), (User.username == 'u1').alias('is_u1').bind_to(User) ).join(User).get() self.assertTrue(blog.user.is_u0) self.assertFalse(blog.user.is_u1) def test_scalar(self): User.create_users(5) users = User.select(fn.Count(User.id)).scalar() self.assertEqual(users, 5) users = User.select(fn.Count(User.id)).where(User.username << ['u1', 'u2']) 
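# scalar() returns the first column of the first result row; passing
# True (the as_tuple flag) returns the entire first row as a tuple.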
self.assertEqual(users.scalar(), 2) self.assertEqual(users.scalar(True), (2,)) users = User.select(fn.Count(User.id)).where(User.username == 'not-here') self.assertEqual(users.scalar(), 0) self.assertEqual(users.scalar(True), (0,)) users = User.select(fn.Count(User.id), fn.Count(User.username)) self.assertEqual(users.scalar(), 5) self.assertEqual(users.scalar(True), (5, 5)) User.create(username='u1') User.create(username='u2') User.create(username='u3') User.create(username='u99') users = User.select(fn.Count(fn.Distinct(User.username))).scalar() self.assertEqual(users, 6) def test_noop_query(self): query = User.noop() with self.assertQueryCount(1) as qc: result = [row for row in query] self.assertEqual(result, []) def test_update(self): User.create_users(5) uq = User.update(username='u-edited').where(User.username << ['u1', 'u2', 'u3']) self.assertEqual([u.username for u in User.select().order_by(User.id)], ['u1', 'u2', 'u3', 'u4', 'u5']) uq.execute() self.assertEqual([u.username for u in User.select().order_by(User.id)], ['u-edited', 'u-edited', 'u-edited', 'u4', 'u5']) self.assertRaises(KeyError, User.update, doesnotexist='invalid') def test_update_subquery(self): User.create_users(3) u1, u2, u3 = [user for user in User.select().order_by(User.id)] for i in range(4): Blog.create(title='b%s' % i, user=u1) for i in range(2): Blog.create(title='b%s' % i, user=u3) subquery = Blog.select(fn.COUNT(Blog.pk)).where(Blog.user == User.id) query = User.update(username=subquery) sql, params = normal_compiler.generate_update(query) self.assertEqual(sql, ( 'UPDATE "users" SET "username" = (' 'SELECT COUNT("t2"."pk") FROM "blog" AS t2 ' 'WHERE ("t2"."user_id" = "users"."id"))')) self.assertEqual(query.execute(), 3) usernames = [u.username for u in User.select().order_by(User.id)] self.assertEqual(usernames, ['4', '0', '2']) def test_insert(self): iq = User.insert(username='u1') self.assertEqual(User.select().count(), 0) uid = iq.execute() self.assertTrue(uid > 0) self.assertEqual(User.select().count(), 1) u = User.get(User.id==uid) self.assertEqual(u.username, 'u1') self.assertRaises(KeyError, lambda: User.insert(doesnotexist='invalid')) def test_insert_from(self): u0, u1, u2 = [User.create(username='U%s' % i) for i in range(3)] subquery = (User .select(fn.LOWER(User.username)) .where(User.username << ['U0', 'U2'])) iq = User.insert_from([User.username], subquery) sql, params = normal_compiler.generate_insert(iq) self.assertEqual(sql, ( 'INSERT INTO "users" ("username") ' 'SELECT LOWER("t2"."username") FROM "users" AS t2 ' 'WHERE ("t2"."username" IN (?, ?))')) self.assertEqual(params, ['U0', 'U2']) iq.execute() usernames = sorted([u.username for u in User.select()]) self.assertEqual(usernames, ['U0', 'U1', 'U2', 'u0', 'u2']) def test_insert_many(self): qc = len(self.queries()) iq = User.insert_many([ {'username': 'u1'}, {'username': 'u2'}, {'username': 'u3'}, {'username': 'u4'}]) self.assertTrue(iq.execute()) qc2 = len(self.queries()) if test_db.insert_many: self.assertEqual(qc2 - qc, 1) else: self.assertEqual(qc2 - qc, 4) self.assertEqual(User.select().count(), 4) sq = User.select(User.username).order_by(User.username) self.assertEqual([u.username for u in sq], ['u1', 'u2', 'u3', 'u4']) iq = User.insert_many([{'username': 'u5'}]) self.assertTrue(iq.execute()) self.assertEqual(User.select().count(), 5) iq = User.insert_many([ {User.username: 'u6'}, {User.username: 'u7'}, {'username': 'u8'}]).execute() sq = User.select(User.username).order_by(User.username) self.assertEqual([u.username for u in sq], 
['u1', 'u2', 'u3', 'u4', 'u5', 'u6', 'u7', 'u8']) def test_insert_many_fallback(self): # Simulate database not supporting multiple insert (older versions of # sqlite). test_db.insert_many = False with self.assertQueryCount(4): iq = User.insert_many([ {'username': 'u1'}, {'username': 'u2'}, {'username': 'u3'}, {'username': 'u4'}]) self.assertTrue(iq.execute()) self.assertEqual(User.select().count(), 4) def test_insert_many_validates_fields_by_default(self): self.assertTrue(User.insert_many([])._validate_fields) def test_insert_many_without_field_validation(self): self.assertFalse(User.insert_many([], validate_fields=False)._validate_fields) def test_delete(self): User.create_users(5) dq = User.delete().where(User.username << ['u1', 'u2', 'u3']) self.assertEqual(User.select().count(), 5) nr = dq.execute() self.assertEqual(nr, 3) self.assertEqual([u.username for u in User.select()], ['u4', 'u5']) def test_raw(self): User.create_users(3) interpolation = test_db.interpolation with self.assertQueryCount(1): query = 'select * from users where username IN (%s, %s)' % ( interpolation, interpolation) rq = User.raw(query, 'u1', 'u3') self.assertEqual([u.username for u in rq], ['u1', 'u3']) # iterate again self.assertEqual([u.username for u in rq], ['u1', 'u3']) query = ('select id, username, %s as secret ' 'from users where username = %s') rq = User.raw( query % (interpolation, interpolation), 'sh', 'u2') self.assertEqual([u.secret for u in rq], ['sh']) self.assertEqual([u.username for u in rq], ['u2']) rq = User.raw('select count(id) from users') self.assertEqual(rq.scalar(), 3) rq = User.raw('select username from users').tuples() self.assertEqual([r for r in rq], [ ('u1',), ('u2',), ('u3',), ]) def test_limits_offsets(self): for i in range(10): User.create(username='u%d' % i) sq = User.select().order_by(User.id) offset_no_lim = sq.offset(3) self.assertEqual( [u.username for u in offset_no_lim], ['u%d' % i for i in range(3, 10)] ) offset_with_lim = sq.offset(5).limit(3) self.assertEqual( [u.username for u in offset_with_lim], ['u%d' % i for i in range(5, 8)] ) def test_raw_fn(self): self.create_users_blogs(3, 2) # 3 users, 2 blogs each. query = User.raw('select count(1) as ct from blog group by user_id') results = [x.ct for x in query] self.assertEqual(results, [2, 2, 2]) def test_model_iter(self): self.create_users_blogs(3, 2) usernames = [user.username for user in User] self.assertEqual(sorted(usernames), ['u0', 'u1', 'u2']) blogs = list(Blog) self.assertEqual(len(blogs), 6) class TestInsertEmptyModel(ModelTestCase): requires = [EmptyModel, NoPKModel] def test_insert_empty(self): query = EmptyModel.insert() sql, params = compiler.generate_insert(query) if isinstance(test_db, MySQLDatabase): self.assertEqual(sql, ( 'INSERT INTO "emptymodel" ("emptymodel"."id") ' 'VALUES (DEFAULT)')) else: self.assertEqual(sql, 'INSERT INTO "emptymodel" DEFAULT VALUES') self.assertEqual(params, []) # Verify the query works. pk = query.execute() em = EmptyModel.get(EmptyModel.id == pk) # Verify we can also use `create()`. 
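# create() exercises the same empty-INSERT path, so a model having no
# fields besides its primary key still produces a valid auto-id row.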
em2 = EmptyModel.create() self.assertEqual(EmptyModel.select().count(), 2) def test_no_pk(self): obj = NoPKModel.create(data='1') self.assertEqual(NoPKModel.select(fn.COUNT('1')).scalar(), 1) res = (NoPKModel .update(data='1-e') .where(NoPKModel.data == '1') .execute()) self.assertEqual(res, 1) self.assertEqual(NoPKModel.select(fn.COUNT('1')).scalar(), 1) NoPKModel(data='2').save() NoPKModel(data='3').save() self.assertEqual( [obj.data for obj in NoPKModel.select().order_by(NoPKModel.data)], ['1-e', '2', '3']) class TestModelAPIs(ModelTestCase): requires = [User, Blog, Category, UserCategory, UniqueMultiField, NonIntModel] def setUp(self): super(TestModelAPIs, self).setUp() GCModel.drop_table(True) GCModel.create_table() def test_related_name(self): u1 = User.create(username='u1') u2 = User.create(username='u2') b11 = Blog.create(user=u1, title='b11') b12 = Blog.create(user=u1, title='b12') b2 = Blog.create(user=u2, title='b2') self.assertEqual( [b.title for b in u1.blog_set.order_by(Blog.title)], ['b11', 'b12']) self.assertEqual( [b.title for b in u2.blog_set.order_by(Blog.title)], ['b2']) def test_related_name_collision(self): class Foo(TestModel): f1 = CharField() def make_klass(): class FooRel(TestModel): foo = ForeignKeyField(Foo, related_name='f1') self.assertRaises(AttributeError, make_klass) def test_callable_related_name(self): class Foo(TestModel): pass def rel_name(field): return '%s_%s_ref' % (field.model_class._meta.name, field.name) class Bar(TestModel): fk1 = ForeignKeyField(Foo, related_name=rel_name) fk2 = ForeignKeyField(Foo, related_name=rel_name) class Baz(Bar): pass self.assertTrue(Foo.bar_fk1_ref.rel_model is Bar) self.assertTrue(Foo.bar_fk2_ref.rel_model is Bar) self.assertTrue(Foo.baz_fk1_ref.rel_model is Baz) self.assertTrue(Foo.baz_fk2_ref.rel_model is Baz) self.assertFalse(hasattr(Foo, 'bar_set')) self.assertFalse(hasattr(Foo, 'baz_set')) def test_fk_exceptions(self): c1 = Category.create(name='c1') c2 = Category.create(parent=c1, name='c2') self.assertEqual(c1.parent, None) self.assertEqual(c2.parent, c1) c2_db = Category.get(Category.id == c2.id) self.assertEqual(c2_db.parent, c1) u = User.create(username='u1') b = Blog.create(user=u, title='b') b2 = Blog(title='b2') self.assertEqual(b.user, u) self.assertRaises(User.DoesNotExist, getattr, b2, 'user') def test_fk_cache_invalidated(self): u1 = User.create(username='u1') u2 = User.create(username='u2') b = Blog.create(user=u1, title='b') blog = Blog.get(Blog.pk == b) with self.assertQueryCount(1): self.assertEqual(blog.user.id, u1.id) blog.user = u2.id with self.assertQueryCount(1): self.assertEqual(blog.user.id, u2.id) # No additional query. 
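# Re-assigning the identical id leaves the cached related instance in
# place, so the access below is served without touching the database.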
blog.user = u2.id with self.assertQueryCount(0): self.assertEqual(blog.user.id, u2.id) def test_fk_ints(self): c1 = Category.create(name='c1') c2 = Category.create(name='c2', parent=c1.id) c2_db = Category.get(Category.id == c2.id) self.assertEqual(c2_db.parent, c1) def test_fk_object_id(self): c1 = Category.create(name='c1') c2 = Category.create(name='c2') c2.parent_id = c1.id c2.save() self.assertEqual(c2.parent, c1) c2_db = Category.get(Category.name == 'c2') self.assertEqual(c2_db.parent, c1) def test_fk_caching(self): c1 = Category.create(name='c1') c2 = Category.create(name='c2', parent=c1) c2_db = Category.get(Category.id == c2.id) with self.assertQueryCount(1): parent = c2_db.parent self.assertEqual(parent, c1) parent = c2_db.parent def test_related_id(self): u1 = User.create(username='u1') u2 = User.create(username='u2') for u in [u1, u2]: for j in range(2): Blog.create(user=u, title='%s-%s' % (u.username, j)) with self.assertQueryCount(1): query = Blog.select().order_by(Blog.pk) user_ids = [blog.user_id for blog in query] self.assertEqual(user_ids, [u1.id, u1.id, u2.id, u2.id]) p1 = Category.create(name='p1') p2 = Category.create(name='p2') c1 = Category.create(name='c1', parent=p1) c2 = Category.create(name='c2', parent=p2) with self.assertQueryCount(1): query = Category.select().order_by(Category.id) self.assertEqual( [cat.parent_id for cat in query], [None, None, p1.id, p2.id]) def test_fk_object_id(self): u = User.create(username='u') b = Blog.create(user_id=u.id, title='b1') self.assertEqual(b._data['user'], u.id) self.assertFalse('user' in b._obj_cache) with self.assertQueryCount(1): u_db = b.user self.assertEqual(u_db.id, u.id) b_db = Blog.get(Blog.pk == b.pk) with self.assertQueryCount(0): self.assertEqual(b_db.user_id, u.id) u2 = User.create(username='u2') Blog.create(user=u, title='b1x') Blog.create(user=u2, title='b2') q = Blog.select().where(Blog.user_id == u2.id) self.assertEqual(q.count(), 1) self.assertEqual(q.get().title, 'b2') q = Blog.select(Blog.pk, Blog.user_id).where(Blog.user_id == u.id) self.assertEqual(q.count(), 2) result = q.order_by(Blog.pk).first() self.assertEqual(result.user_id, u.id) with self.assertQueryCount(1): self.assertEqual(result.user.id, u.id) def test_object_id_descriptor_naming(self): class Person(Model): pass class Foo(Model): me = ForeignKeyField(Person, db_column='me', related_name='foo1') another = ForeignKeyField(Person, db_column='_whatever_', related_name='foo2') another2 = ForeignKeyField(Person, db_column='person_id', related_name='foo3') plain = ForeignKeyField(Person, related_name='foo4') self.assertTrue(Foo.me is Foo.me_id) self.assertTrue(Foo.another is Foo._whatever_) self.assertTrue(Foo.another2 is Foo.person_id) self.assertTrue(Foo.plain is Foo.plain_id) self.assertRaises(AttributeError, lambda: Foo.another_id) self.assertRaises(AttributeError, lambda: Foo.another2_id) def test_category_select_related_alias(self): g1 = Category.create(name='g1') g2 = Category.create(name='g2') p1 = Category.create(name='p1', parent=g1) p2 = Category.create(name='p2', parent=g2) c1 = Category.create(name='c1', parent=p1) c11 = Category.create(name='c11', parent=p1) c2 = Category.create(name='c2', parent=p2) with self.assertQueryCount(1): Grandparent = Category.alias() Parent = Category.alias() sq = (Category .select(Category, Parent, Grandparent) .join(Parent, on=(Category.parent == Parent.id)) .join(Grandparent, on=(Parent.parent == Grandparent.id)) .where(Grandparent.name == 'g1') .order_by(Category.name)) self.assertEqual( [(c.name, 
c.parent.name, c.parent.parent.name) for c in sq], [('c1', 'p1', 'g1'), ('c11', 'p1', 'g1')]) def test_creation(self): User.create_users(10) self.assertEqual(User.select().count(), 10) def test_saving(self): self.assertEqual(User.select().count(), 0) u = User(username='u1') self.assertEqual(u.save(), 1) u.username = 'u2' self.assertEqual(u.save(), 1) self.assertEqual(User.select().count(), 1) self.assertEqual(u.delete_instance(), 1) self.assertEqual(u.save(), 0) def test_save_fk(self): blog = Blog(title='b1', content='') blog.user = User(username='u1') blog.user.save() with self.assertQueryCount(1): blog.save() with self.assertQueryCount(1): blog_db = (Blog .select(Blog, User) .join(User) .where(Blog.pk == blog.pk) .get()) self.assertEqual(blog_db.user.username, 'u1') def test_modify_model_cause_it_dirty(self): u = User(username='u1') u.save() self.assertFalse(u.is_dirty()) u.username = 'u2' self.assertTrue(u.is_dirty()) self.assertEqual(u.dirty_fields, [User.username]) u.save() self.assertFalse(u.is_dirty()) b = Blog.create(user=u, title='b1') self.assertFalse(b.is_dirty()) b.user = u self.assertTrue(b.is_dirty()) self.assertEqual(b.dirty_fields, [Blog.user]) def test_dirty_from_query(self): u1 = User.create(username='u1') b1 = Blog.create(title='b1', user=u1) b2 = Blog.create(title='b2', user=u1) u_db = User.get() self.assertFalse(u_db.is_dirty()) b_with_u = (Blog .select(Blog, User) .join(User) .where(Blog.title == 'b2') .get()) self.assertFalse(b_with_u.is_dirty()) self.assertFalse(b_with_u.user.is_dirty()) u_with_blogs = (User .select(User, Blog) .join(Blog) .order_by(Blog.title) .aggregate_rows())[0] self.assertFalse(u_with_blogs.is_dirty()) for blog in u_with_blogs.blog_set: self.assertFalse(blog.is_dirty()) b_with_users = (Blog .select(Blog, User) .join(User) .order_by(Blog.title) .aggregate_rows()) b1, b2 = b_with_users self.assertFalse(b1.is_dirty()) self.assertFalse(b1.user.is_dirty()) self.assertFalse(b2.is_dirty()) self.assertFalse(b2.user.is_dirty()) def test_save_only(self): u = User.create(username='u') b = Blog.create(user=u, title='b1', content='ct') b.title = 'b1-edit' b.content = 'ct-edit' b.save(only=[Blog.title]) b_db = Blog.get(Blog.pk == b.pk) self.assertEqual(b_db.title, 'b1-edit') self.assertEqual(b_db.content, 'ct') b = Blog(user=u, title='b2', content='foo') b.save(only=[Blog.user, Blog.title]) b_db = Blog.get(Blog.pk == b.pk) self.assertEqual(b_db.title, 'b2') self.assertEqual(b_db.content, '') def test_save_only_dirty_fields(self): u = User.create(username='u1') b = Blog.create(title='b1', user=u, content='huey') b_db = Blog.get(Blog.pk == b.pk) b.title = 'baby huey' b.save(only=b.dirty_fields) b_db.content = 'mickey-nugget' b_db.save(only=b_db.dirty_fields) saved = Blog.get(Blog.pk == b.pk) self.assertEqual(saved.title, 'baby huey') self.assertEqual(saved.content, 'mickey-nugget') def test_save_dirty_auto(self): User._meta.only_save_dirty = True Blog._meta.only_save_dirty = True try: with self.log_queries() as query_logger: u = User.create(username='u1') b = Blog.create(title='b1', user=u) # The default value for the blog content will be saved as well. 
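# (Even with only_save_dirty enabled, fields populated from defaults
# count as dirty on the initial insert, which is why the empty string
# for Blog.content appears in the INSERT parameters asserted below.)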
self.assertEqual( [params for _, params in query_logger.queries], [['u1'], [u.id, 'b1', '']]) with self.assertQueryCount(0): self.assertTrue(u.save() is False) self.assertTrue(b.save() is False) u.username = 'u1-edited' b.title = 'b1-edited' with self.assertQueryCount(1): with self.log_queries() as query_logger: self.assertEqual(u.save(), 1) sql, params = query_logger.queries[0] self.assertTrue(sql.startswith('UPDATE')) self.assertEqual(params, ['u1-edited', u.id]) with self.assertQueryCount(1): with self.log_queries() as query_logger: self.assertEqual(b.save(), 1) sql, params = query_logger.queries[0] self.assertTrue(sql.startswith('UPDATE')) self.assertEqual(params, ['b1-edited', b.pk]) finally: User._meta.only_save_dirty = False Blog._meta.only_save_dirty = False def test_zero_id(self): if isinstance(test_db, MySQLDatabase): # Need to explicitly tell MySQL it's OK to use zero. test_db.execute_sql("SET SESSION sql_mode='NO_AUTO_VALUE_ON_ZERO'") query = 'insert into users (id, username) values (%s, %s)' % ( test_db.interpolation, test_db.interpolation) test_db.execute_sql(query, (0, 'foo')) Blog.insert(title='foo2', user=0).execute() u = User.get(User.id == 0) b = Blog.get(Blog.user == u) self.assertTrue(u == u) self.assertTrue(u == b.user) def test_saving_via_create_gh111(self): u = User.create(username='u') b = Blog.create(title='foo', user=u) last_sql, _ = self.queries()[-1] self.assertFalse('pub_date' in last_sql) self.assertEqual(b.pub_date, None) b2 = Blog(title='foo2', user=u) b2.save() last_sql, _ = self.queries()[-1] self.assertFalse('pub_date' in last_sql) self.assertEqual(b2.pub_date, None) def test_reading(self): u1 = User.create(username='u1') u2 = User.create(username='u2') self.assertEqual(u1, User.get(username='u1')) self.assertEqual(u2, User.get(username='u2')) self.assertFalse(u1 == u2) self.assertEqual(u1, User.get(User.username == 'u1')) self.assertEqual(u2, User.get(User.username == 'u2')) def test_get_exception(self): exc = None try: User.get(User.id == 0) except Exception as raised_exc: exc = raised_exc else: assert False self.assertEqual(exc.__module__, 'playhouse.tests.models') self.assertEqual( str(type(exc)), "<class 'playhouse.tests.models.UserDoesNotExist'>") if sys.version_info[0] < 3: self.assertTrue(exc.message.startswith('Instance matching query')) self.assertTrue(exc.message.endswith('PARAMS: [0]')) def test_get_or_create(self): u1, created = User.get_or_create(username='u1') self.assertTrue(created) u1_x, created = User.get_or_create(username='u1') self.assertFalse(created) self.assertEqual(u1.id, u1_x.id) self.assertEqual(User.select().count(), 1) def test_get_or_create_extended(self): gc1, created = GCModel.get_or_create( name='huey', key='k1', value='v1', defaults={'number': 3}) self.assertTrue(created) self.assertEqual(gc1.name, 'huey') self.assertEqual(gc1.key, 'k1') self.assertEqual(gc1.value, 'v1') self.assertEqual(gc1.number, 3) gc1_db, created = GCModel.get_or_create( name='huey', defaults={'key': 'k2', 'value': 'v2'}) self.assertFalse(created) self.assertEqual(gc1_db.id, gc1.id) self.assertEqual(gc1_db.key, 'k1') def integrity_error(): gc2, created = GCModel.get_or_create( name='huey', key='kx', value='vx') self.assertRaises(IntegrityError, integrity_error) gc2, created = GCModel.get_or_create( name__ilike='%nugget%', defaults={ 'name': 'foo-nugget', 'key': 'k2', 'value': 'v2'}) self.assertTrue(created) self.assertEqual(gc2.name, 'foo-nugget') gc2_db, created = GCModel.get_or_create( name__ilike='%nugg%', defaults={'name': 'xx'}) self.assertFalse(created) self.assertEqual(gc2_db.id, gc2.id)
self.assertEqual(GCModel.select().count(), 2) def test_peek(self): users = User.create_users(3) with self.assertQueryCount(1): sq = User.select().order_by(User.username) # call it once u1 = sq.peek() self.assertEqual(u1.username, 'u1') # check the result cache self.assertEqual(len(sq._qr._result_cache), 1) # call it again and we get the same result, but not an # extra query self.assertEqual(sq.peek().username, 'u1') with self.assertQueryCount(0): # no limit is applied. usernames = [u.username for u in sq] self.assertEqual(usernames, ['u1', 'u2', 'u3']) def test_first(self): users = User.create_users(3) with self.assertQueryCount(1): sq = User.select().order_by(User.username) # call it once first = sq.first() self.assertEqual(first.username, 'u1') # check the result cache self.assertEqual(len(sq._qr._result_cache), 1) # call it again and we get the same result, but not an # extra query self.assertEqual(sq.first().username, 'u1') with self.assertQueryCount(0): # also note that a limit has been applied. all_results = [obj for obj in sq] self.assertEqual(all_results, [first]) usernames = [u.username for u in sq] self.assertEqual(usernames, ['u1']) with self.assertQueryCount(0): # call first() after iterating self.assertEqual(sq.first().username, 'u1') usernames = [u.username for u in sq] self.assertEqual(usernames, ['u1']) # call it with an empty result sq = User.select().where(User.username == 'not-here') self.assertEqual(sq.first(), None) def test_deleting(self): u1 = User.create(username='u1') u2 = User.create(username='u2') self.assertEqual(User.select().count(), 2) u1.delete_instance() self.assertEqual(User.select().count(), 1) self.assertEqual(u2, User.get(User.username=='u2')) def test_counting(self): u1 = User.create(username='u1') u2 = User.create(username='u2') for u in [u1, u2]: for i in range(5): Blog.create(title='b-%s-%s' % (u.username, i), user=u) uc = User.select().where(User.username == 'u1').join(Blog).count() self.assertEqual(uc, 5) uc = User.select().where(User.username == 'u1').join(Blog).distinct().count() self.assertEqual(uc, 1) self.assertEqual(Blog.select().limit(4).offset(3).count(), 4) self.assertEqual(Blog.select().limit(4).offset(3).count(True), 10) # Calling `distinct()` will result in a call to wrapped_count(). uc = User.select().join(Blog).distinct().count() self.assertEqual(uc, 2) # Test with clear limit = True. self.assertEqual(User.select().limit(1).count(clear_limit=True), 2) self.assertEqual( User.select().limit(1).wrapped_count(clear_limit=True), 2) # Test with clear limit = False. 
        self.assertEqual(User.select().limit(1).count(clear_limit=False), 1)
        self.assertEqual(
            User.select().limit(1).wrapped_count(clear_limit=False), 1)

    def test_ordering(self):
        u1 = User.create(username='u1')
        u2 = User.create(username='u2')
        u3 = User.create(username='u2')
        users = User.select().order_by(User.username.desc(), User.id.desc())
        self.assertEqual(
            [u._get_pk_value() for u in users],
            [u3.id, u2.id, u1.id])

    def test_count_transaction(self):
        for i in range(10):
            User.create(username='u%d' % i)

        with test_db.transaction():
            for user in User.select():
                for i in range(20):
                    Blog.create(user=user, title='b-%d-%d' % (user.id, i))

        count = Blog.select().count()
        self.assertEqual(count, 200)

    def test_exists(self):
        u1 = User.create(username='u1')
        self.assertTrue(User.select().where(User.username == 'u1').exists())
        self.assertFalse(User.select().where(User.username == 'u2').exists())

    def test_unicode(self):
        # create a unicode literal
        ustr = ulit('Lýðveldið Ísland')
        u = User.create(username=ustr)

        # query using the unicode literal
        u_db = User.get(User.username == ustr)

        # the db returns a unicode literal
        self.assertEqual(u_db.username, ustr)

        # delete the user
        self.assertEqual(u.delete_instance(), 1)

        # convert the unicode to a utf8 string
        utf8_str = ustr.encode('utf-8')

        # create using the utf8 string
        u2 = User.create(username=utf8_str)

        # query using unicode literal
        u2_db = User.get(User.username == ustr)

        # we get unicode back
        self.assertEqual(u2_db.username, ustr)

    def test_unicode_issue202(self):
        ustr = ulit('M\u00f6rk')
        user = User.create(username=ustr)
        self.assertEqual(user.username, ustr)

    def test_on_conflict(self):
        gc = GCModel.create(name='g1', key='k1', value='v1')
        query = GCModel.insert(
            name='g1',
            key='k2',
            value='v2')
        self.assertRaises(IntegrityError, query.execute)

        # Ensure that we can ignore errors.
        res = query.on_conflict('IGNORE').execute()
        self.assertEqual(res, gc.id)
        self.assertEqual(GCModel.select().count(), 1)

        # Error ignored, no changes.
        gc_db = GCModel.get()
        self.assertEqual(gc_db.name, 'g1')
        self.assertEqual(gc_db.key, 'k1')
        self.assertEqual(gc_db.value, 'v1')

        # Replace the old, conflicting row with the new data.
        res = query.on_conflict('REPLACE').execute()
        self.assertNotEqual(res, gc.id)
        self.assertEqual(GCModel.select().count(), 1)

        gc_db = GCModel.get()
        self.assertEqual(gc_db.name, 'g1')
        self.assertEqual(gc_db.key, 'k2')
        self.assertEqual(gc_db.value, 'v2')

        # Replaces can also occur when violating multi-column indexes.
        query = GCModel.insert(
            name='g2',
            key='k2',
            value='v2').on_conflict('REPLACE')
        res = query.execute()
        self.assertNotEqual(res, gc_db.id)
        self.assertEqual(GCModel.select().count(), 1)

        gc_db = GCModel.get()
        self.assertEqual(gc_db.name, 'g2')
        self.assertEqual(gc_db.key, 'k2')
        self.assertEqual(gc_db.value, 'v2')

    def test_on_conflict_many(self):
        if not SqliteDatabase.insert_many:
            return

        for i in range(5):
            key = 'gc%s' % i
            GCModel.create(name=key, key=key, value=key)

        insert = [
            {'name': key, 'key': 'x-%s' % key, 'value': key}
            for key in ['gc%s' % i for i in range(10)]]
        res = GCModel.insert_many(insert).on_conflict('IGNORE').execute()
        self.assertEqual(GCModel.select().count(), 10)

        gcs = list(GCModel.select().order_by(GCModel.id))
        first_five, last_five = gcs[:5], gcs[5:]

        # The first five should all be "gcI", the last five will have
        # "x-gcI" for their keys.
self.assertEqual( [gc.key for gc in first_five], ['gc0', 'gc1', 'gc2', 'gc3', 'gc4']) self.assertEqual( [gc.key for gc in last_five], ['x-gc5', 'x-gc6', 'x-gc7', 'x-gc8', 'x-gc9']) def test_meta_get_field_index(self): index = Blog._meta.get_field_index(Blog.content) self.assertEqual(index, 3) def test_meta_remove_field(self): class _Model(Model): title = CharField(max_length=25) content = TextField(default='') _Model._meta.remove_field('content') self.assertTrue('content' not in _Model._meta.fields) self.assertTrue('content' not in _Model._meta.sorted_field_names) self.assertEqual([f.name for f in _Model._meta.sorted_fields], ['id', 'title']) def test_meta_rel_for_model(self): class User(Model): pass class Category(Model): parent = ForeignKeyField('self') class Tweet(Model): user = ForeignKeyField(User) class Relationship(Model): from_user = ForeignKeyField(User, related_name='r1') to_user = ForeignKeyField(User, related_name='r2') UM = User._meta CM = Category._meta TM = Tweet._meta RM = Relationship._meta # Simple refs work. self.assertIsNone(UM.rel_for_model(Tweet)) self.assertEqual(UM.rel_for_model(Tweet, multi=True), []) self.assertEqual(UM.reverse_rel_for_model(Tweet), Tweet.user) self.assertEqual(UM.reverse_rel_for_model(Tweet, multi=True), [Tweet.user]) # Multi fks. self.assertEqual(RM.rel_for_model(User), Relationship.from_user) self.assertEqual(RM.rel_for_model(User, multi=True), [Relationship.from_user, Relationship.to_user]) self.assertEqual(UM.reverse_rel_for_model(Relationship), Relationship.from_user) self.assertEqual(UM.reverse_rel_for_model(Relationship, multi=True), [Relationship.from_user, Relationship.to_user]) # Self-refs work. self.assertEqual(CM.rel_for_model(Category), Category.parent) self.assertEqual(CM.reverse_rel_for_model(Category), Category.parent) # Field aliases work. 
UA = User.alias() self.assertEqual(TM.rel_for_model(UA), Tweet.user) class TestAggregatesWithModels(ModelTestCase): requires = [OrderedModel, User, Blog] def create_ordered_models(self): return [ OrderedModel.create( title=i, created=datetime.datetime(2013, 1, i + 1)) for i in range(3)] def create_user_blogs(self): users = [] ct = 0 for i in range(2): user = User.create(username='u-%d' % i) for j in range(2): ct += 1 Blog.create( user=user, title='b-%d-%d' % (i, j), pub_date=datetime.datetime(2013, 1, ct)) users.append(user) return users def test_annotate_int(self): users = self.create_user_blogs() annotated = User.select().annotate(Blog, fn.Count(Blog.pk).alias('ct')) for i, user in enumerate(annotated): self.assertEqual(user.ct, 2) self.assertEqual(user.username, 'u-%d' % i) def test_annotate_datetime(self): users = self.create_user_blogs() annotated = (User .select() .annotate(Blog, fn.Max(Blog.pub_date).alias('max_pub'))) user_0, user_1 = annotated self.assertEqual(user_0.max_pub, datetime.datetime(2013, 1, 2)) self.assertEqual(user_1.max_pub, datetime.datetime(2013, 1, 4)) def test_aggregate_int(self): models = self.create_ordered_models() max_id = OrderedModel.select().aggregate(fn.Max(OrderedModel.id)) self.assertEqual(max_id, models[-1].id) def test_aggregate_datetime(self): models = self.create_ordered_models() max_created = (OrderedModel .select() .aggregate(fn.Max(OrderedModel.created))) self.assertEqual(max_created, models[-1].created) class TestMultiTableFromClause(ModelTestCase): requires = [Blog, Comment, User] def setUp(self): super(TestMultiTableFromClause, self).setUp() for u in range(2): user = User.create(username='u%s' % u) for i in range(3): b = Blog.create(user=user, title='b%s-%s' % (u, i)) for j in range(i): Comment.create(blog=b, comment='c%s-%s' % (i, j)) def test_from_multi_table(self): q = (Blog .select(Blog, User) .from_(Blog, User) .where( (Blog.user == User.id) & (User.username == 'u0')) .order_by(Blog.pk) .naive()) with self.assertQueryCount(1): blogs = [b.title for b in q] self.assertEqual(blogs, ['b0-0', 'b0-1', 'b0-2']) usernames = [b.username for b in q] self.assertEqual(usernames, ['u0', 'u0', 'u0']) def test_subselect(self): inner = User.select(User.username) self.assertEqual( [u.username for u in inner.order_by(User.username)], ['u0', 'u1']) # Have to manually specify the alias as "t1" because the outer query # will expect that. 
outer = (User .select(User.username) .from_(inner.alias('t1'))) sql, params = compiler.generate_select(outer) self.assertEqual(sql, ( 'SELECT "users"."username" FROM ' '(SELECT "users"."username" FROM "users" AS users) AS t1')) self.assertEqual( [u.username for u in outer.order_by(User.username)], ['u0', 'u1']) def test_subselect_with_column(self): inner = User.select(User.username.alias('name')).alias('t1') outer = (User .select(inner.c.name) .from_(inner)) sql, params = compiler.generate_select(outer) self.assertEqual(sql, ( 'SELECT "t1"."name" FROM ' '(SELECT "users"."username" AS name FROM "users" AS users) AS t1')) query = outer.order_by(inner.c.name.desc()) self.assertEqual([u[0] for u in query.tuples()], ['u1', 'u0']) def test_subselect_with_join(self): inner = User.select(User.id, User.username).alias('q1') outer = (Blog .select(inner.c.id, inner.c.username) .from_(inner) .join(Comment, on=(inner.c.id == Comment.id))) sql, params = compiler.generate_select(outer) self.assertEqual(sql, ( 'SELECT "q1"."id", "q1"."username" FROM (' 'SELECT "users"."id", "users"."username" FROM "users" AS users) AS q1 ' 'INNER JOIN "comment" AS comment ON ("q1"."id" = "comment"."id")')) def test_join_on_query(self): u0 = User.get(User.username == 'u0') u1 = User.get(User.username == 'u1') inner = User.select().alias('j1') outer = (Blog .select(Blog.title, Blog.user) .join(inner, on=(Blog.user == inner.c.id)) .order_by(Blog.pk)) res = [row for row in outer.tuples()] self.assertEqual(res, [ ('b0-0', u0.id), ('b0-1', u0.id), ('b0-2', u0.id), ('b1-0', u1.id), ('b1-1', u1.id), ('b1-2', u1.id), ]) class TestDeleteRecursive(ModelTestCase): requires = [ Parent, Child, ChildNullableData, ChildPet, Orphan, OrphanPet, Package, PackageItem] def setUp(self): super(TestDeleteRecursive, self).setUp() self.p1 = p1 = Parent.create(data='p1') self.p2 = p2 = Parent.create(data='p2') c11 = Child.create(parent=p1) c12 = Child.create(parent=p1) c21 = Child.create(parent=p2) c22 = Child.create(parent=p2) o11 = Orphan.create(parent=p1) o12 = Orphan.create(parent=p1) o21 = Orphan.create(parent=p2) o22 = Orphan.create(parent=p2) for child in [c11, c12, c21, c22]: ChildPet.create(child=child) for orphan in [o11, o12, o21, o22]: OrphanPet.create(orphan=orphan) for i, child in enumerate([c11, c12]): for j in range(2): ChildNullableData.create( child=child, data='%s-%s' % (i, j)) def test_recursive_delete_parent_sql(self): with self.log_queries() as query_logger: with self.assertQueryCount(5): self.p1.delete_instance(recursive=True, delete_nullable=False) queries = query_logger.queries update_cnd = ('UPDATE `childnullabledata` ' 'SET `child_id` = %% ' 'WHERE (' '`childnullabledata`.`child_id` IN (' 'SELECT `t2`.`id` FROM `child` AS t2 WHERE (' '`t2`.`parent_id` = %%)))') delete_cp = ('DELETE FROM `childpet` WHERE (' '`child_id` IN (' 'SELECT `t1`.`id` FROM `child` AS t1 WHERE (' '`t1`.`parent_id` = %%)))') delete_c = 'DELETE FROM `child` WHERE (`parent_id` = %%)' update_o = ('UPDATE `orphan` SET `parent_id` = %% WHERE (' '`orphan`.`parent_id` = %%)') delete_p = 'DELETE FROM `parent` WHERE (`id` = %%)' sql_params = [ (update_cnd, [None, self.p1.id]), (delete_cp, [self.p1.id]), (delete_c, [self.p1.id]), (update_o, [None, self.p1.id]), (delete_p, [self.p1.id]), ] self.assertQueriesEqual(queries, sql_params) def test_recursive_delete_child_queries(self): c2 = self.p1.child_set.order_by(Child.id.desc()).get() with self.log_queries() as query_logger: with self.assertQueryCount(3): c2.delete_instance(recursive=True, delete_nullable=False) 
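        # For context on the assertions that follow: with recursive=True,
        # dependent rows whose foreign key is NOT NULL are deleted, while
        # rows with a nullable foreign key (ChildNullableData here) are
        # updated to set the column to NULL instead -- they are only
        # deleted when delete_nullable=True is passed.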
queries = query_logger.queries update_cnd = ('UPDATE `childnullabledata` SET `child_id` = %% WHERE (' '`childnullabledata`.`child_id` = %%)') delete_cp = 'DELETE FROM `childpet` WHERE (`child_id` = %%)' delete_c = 'DELETE FROM `child` WHERE (`id` = %%)' sql_params = [ (update_cnd, [None, c2.id]), (delete_cp, [c2.id]), (delete_c, [c2.id]), ] self.assertQueriesEqual(queries, sql_params) def assertQueriesEqual(self, queries, expected): queries.sort() expected.sort() for i in range(len(queries)): sql, params = queries[i] expected_sql, expected_params = expected[i] expected_sql = (expected_sql .replace('`', test_db.quote_char) .replace('%%', test_db.interpolation)) self.assertEqual(sql, expected_sql) self.assertEqual(params, expected_params) def test_recursive_update(self): self.p1.delete_instance(recursive=True) counts = ( #query,fk,p1,p2,tot (Child.select(), Child.parent, 0, 2, 2), (Orphan.select(), Orphan.parent, 0, 2, 4), (ChildPet.select().join(Child), Child.parent, 0, 2, 2), (OrphanPet.select().join(Orphan), Orphan.parent, 0, 2, 4), ) for query, fk, p1_ct, p2_ct, tot in counts: self.assertEqual(query.where(fk == self.p1).count(), p1_ct) self.assertEqual(query.where(fk == self.p2).count(), p2_ct) self.assertEqual(query.count(), tot) def test_recursive_delete(self): self.p1.delete_instance(recursive=True, delete_nullable=True) counts = ( #query,fk,p1,p2,tot (Child.select(), Child.parent, 0, 2, 2), (Orphan.select(), Orphan.parent, 0, 2, 2), (ChildPet.select().join(Child), Child.parent, 0, 2, 2), (OrphanPet.select().join(Orphan), Orphan.parent, 0, 2, 2), ) for query, fk, p1_ct, p2_ct, tot in counts: self.assertEqual(query.where(fk == self.p1).count(), p1_ct) self.assertEqual(query.where(fk == self.p2).count(), p2_ct) self.assertEqual(query.count(), tot) def test_recursive_non_pk_fk(self): for i in range(3): Package.create(barcode=str(i)) for j in range(4): PackageItem.create(package=str(i), title='%s-%s' % (i, j)) self.assertEqual(Package.select().count(), 3) self.assertEqual(PackageItem.select().count(), 12) Package.get(Package.barcode == '1').delete_instance(recursive=True) self.assertEqual(Package.select().count(), 2) self.assertEqual(PackageItem.select().count(), 8) items = (PackageItem .select(PackageItem.title) .order_by(PackageItem.id) .tuples()) self.assertEqual([i[0] for i in items], [ '0-0', '0-1', '0-2', '0-3', '2-0', '2-1', '2-2', '2-3', ]) @skip_if(lambda: isinstance(test_db, MySQLDatabase)) class TestTruncate(ModelTestCase): requires = [User] def test_truncate(self): for i in range(3): User.create(username='u%s' % i) User.truncate_table(restart_identity=True) self.assertEqual(User.select().count(), 0) u = User.create(username='ux') self.assertEqual(u.id, 1) class TestManyToMany(ModelTestCase): requires = [User, Category, UserCategory] def setUp(self): super(TestManyToMany, self).setUp() users = ['u1', 'u2', 'u3'] categories = ['c1', 'c2', 'c3', 'c12', 'c23'] user_to_cat = { 'u1': ['c1', 'c12'], 'u2': ['c2', 'c12', 'c23'], } for u in users: User.create(username=u) for c in categories: Category.create(name=c) for user, categories in user_to_cat.items(): user = User.get(User.username == user) for category in categories: UserCategory.create( user=user, category=Category.get(Category.name == category)) def test_m2m(self): def aU(q, exp): self.assertEqual([u.username for u in q.order_by(User.username)], exp) def aC(q, exp): self.assertEqual([c.name for c in q.order_by(Category.name)], exp) users = User.select().join(UserCategory).join(Category).where(Category.name == 'c1') aU(users, 
['u1']) users = User.select().join(UserCategory).join(Category).where(Category.name == 'c3') aU(users, []) cats = Category.select().join(UserCategory).join(User).where(User.username == 'u1') aC(cats, ['c1', 'c12']) cats = Category.select().join(UserCategory).join(User).where(User.username == 'u2') aC(cats, ['c12', 'c2', 'c23']) cats = Category.select().join(UserCategory).join(User).where(User.username == 'u3') aC(cats, []) cats = Category.select().join(UserCategory).join(User).where( Category.name << ['c1', 'c2', 'c3'] ) aC(cats, ['c1', 'c2']) cats = Category.select().join(UserCategory, JOIN.LEFT_OUTER).join(User, JOIN.LEFT_OUTER).where( Category.name << ['c1', 'c2', 'c3'] ) aC(cats, ['c1', 'c2', 'c3']) def test_many_to_many_prefetch(self): categories = Category.select().order_by(Category.name) user_categories = UserCategory.select().order_by(UserCategory.id) users = User.select().order_by(User.username) results = {} result_list = [] with self.assertQueryCount(3): query = prefetch(categories, user_categories, users) for category in query: results.setdefault(category.name, set()) result_list.append(category.name) for user_category in category.usercategory_set_prefetch: results[category.name].add(user_category.user.username) result_list.append(user_category.user.username) self.assertEqual(results, { 'c1': set(['u1']), 'c12': set(['u1', 'u2']), 'c2': set(['u2']), 'c23': set(['u2']), 'c3': set(), }) self.assertEqual( sorted(result_list), ['c1', 'c12', 'c2', 'c23', 'c3', 'u1', 'u1', 'u2', 'u2', 'u2']) class TestCustomModelOptionsBase(PeeweeTestCase): def test_custom_model_options_base(self): db = SqliteDatabase(None) class DatabaseDescriptor(object): def __init__(self, db): self._db = db def __get__(self, instance_type, instance): if instance is not None: return self._db return self def __set__(self, instance, value): pass class TestModelOptions(ModelOptions): database = DatabaseDescriptor(db) class BaseModel(Model): class Meta: model_options_base = TestModelOptions class TestModel(BaseModel): pass class TestChildModel(TestModel): pass self.assertEqual(id(TestModel._meta.database), id(db)) self.assertEqual(id(TestChildModel._meta.database), id(db)) class TestModelOptionInheritance(PeeweeTestCase): def test_db_table(self): self.assertEqual(User._meta.db_table, 'users') class Foo(TestModel): pass self.assertEqual(Foo._meta.db_table, 'foo') class Foo2(TestModel): pass self.assertEqual(Foo2._meta.db_table, 'foo2') class Foo_3(TestModel): pass self.assertEqual(Foo_3._meta.db_table, 'foo_3') def test_custom_options(self): class A(Model): class Meta: a = 'a' class B1(A): class Meta: b = 1 class B2(A): class Meta: b = 2 self.assertEqual(A._meta.a, 'a') self.assertEqual(B1._meta.a, 'a') self.assertEqual(B2._meta.a, 'a') self.assertEqual(B1._meta.b, 1) self.assertEqual(B2._meta.b, 2) def test_option_inheritance(self): x_test_db = SqliteDatabase('testing.db') child2_db = SqliteDatabase('child2.db') class FakeUser(Model): pass class ParentModel(Model): title = CharField() user = ForeignKeyField(FakeUser) class Meta: database = x_test_db class ChildModel(ParentModel): pass class ChildModel2(ParentModel): special_field = CharField() class Meta: database = child2_db class GrandChildModel(ChildModel): pass class GrandChildModel2(ChildModel2): special_field = TextField() self.assertEqual(ParentModel._meta.database.database, 'testing.db') self.assertEqual(ParentModel._meta.model_class, ParentModel) self.assertEqual(ChildModel._meta.database.database, 'testing.db') self.assertEqual(ChildModel._meta.model_class, 
ChildModel) self.assertEqual(sorted(ChildModel._meta.fields.keys()), [ 'id', 'title', 'user' ]) self.assertEqual(ChildModel2._meta.database.database, 'child2.db') self.assertEqual(ChildModel2._meta.model_class, ChildModel2) self.assertEqual(sorted(ChildModel2._meta.fields.keys()), [ 'id', 'special_field', 'title', 'user' ]) self.assertEqual(GrandChildModel._meta.database.database, 'testing.db') self.assertEqual(GrandChildModel._meta.model_class, GrandChildModel) self.assertEqual(sorted(GrandChildModel._meta.fields.keys()), [ 'id', 'title', 'user' ]) self.assertEqual(GrandChildModel2._meta.database.database, 'child2.db') self.assertEqual(GrandChildModel2._meta.model_class, GrandChildModel2) self.assertEqual(sorted(GrandChildModel2._meta.fields.keys()), [ 'id', 'special_field', 'title', 'user' ]) self.assertTrue(isinstance(GrandChildModel2._meta.fields['special_field'], TextField)) def test_order_by_inheritance(self): class Base(TestModel): created = DateTimeField() class Meta: order_by = ('-created',) class Foo(Base): data = CharField() class Bar(Base): val = IntegerField() class Meta: order_by = ('-val',) foo_order_by = Foo._meta.order_by[0] self.assertTrue(isinstance(foo_order_by, Field)) self.assertTrue(foo_order_by.model_class is Foo) self.assertEqual(foo_order_by.name, 'created') bar_order_by = Bar._meta.order_by[0] self.assertTrue(isinstance(bar_order_by, Field)) self.assertTrue(bar_order_by.model_class is Bar) self.assertEqual(bar_order_by.name, 'val') def test_table_name_function(self): class Base(TestModel): class Meta: def db_table_func(model): return model.__name__.lower() + 's' class User(Base): pass class SuperUser(User): class Meta: db_table = 'nugget' class MegaUser(SuperUser): class Meta: def db_table_func(model): return 'mega' class Bear(Base): pass self.assertEqual(User._meta.db_table, 'users') self.assertEqual(Bear._meta.db_table, 'bears') self.assertEqual(SuperUser._meta.db_table, 'nugget') self.assertEqual(MegaUser._meta.db_table, 'mega') class TestModelInheritance(ModelTestCase): requires = [Blog, BlogTwo, User] def test_model_inheritance_attrs(self): self.assertEqual(Blog._meta.sorted_field_names, ['pk', 'user', 'title', 'content', 'pub_date']) self.assertEqual(BlogTwo._meta.sorted_field_names, ['pk', 'user', 'content', 'pub_date', 'title', 'extra_field']) self.assertEqual(Blog._meta.primary_key.name, 'pk') self.assertEqual(BlogTwo._meta.primary_key.name, 'pk') self.assertEqual(Blog.user.related_name, 'blog_set') self.assertEqual(BlogTwo.user.related_name, 'blogtwo_set') self.assertEqual(User.blog_set.rel_model, Blog) self.assertEqual(User.blogtwo_set.rel_model, BlogTwo) self.assertFalse(BlogTwo._meta.db_table == Blog._meta.db_table) def test_model_inheritance_flow(self): u = User.create(username='u') b = Blog.create(title='b', user=u) b2 = BlogTwo.create(title='b2', extra_field='foo', user=u) self.assertEqual(list(u.blog_set), [b]) self.assertEqual(list(u.blogtwo_set), [b2]) self.assertEqual(Blog.select().count(), 1) self.assertEqual(BlogTwo.select().count(), 1) b_from_db = Blog.get(Blog.pk==b.pk) b2_from_db = BlogTwo.get(BlogTwo.pk==b2.pk) self.assertEqual(b_from_db.user, u) self.assertEqual(b2_from_db.user, u) self.assertEqual(b2_from_db.extra_field, 'foo') def test_inheritance_primary_keys(self): self.assertFalse(hasattr(Model, 'id')) class M1(Model): pass self.assertTrue(hasattr(M1, 'id')) class M2(Model): key = CharField(primary_key=True) self.assertFalse(hasattr(M2, 'id')) class M3(Model): id = TextField() key = IntegerField(primary_key=True) 
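            # Declaring an explicit primary key suppresses the implicit
            # auto-incrementing "id" primary key; on M3, "id" is just an
            # ordinary TextField while "key" serves as the primary key,
            # as the assertions below verify.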
self.assertTrue(hasattr(M3, 'id')) self.assertFalse(M3.id.primary_key) class C1(M1): pass self.assertTrue(hasattr(C1, 'id')) self.assertTrue(C1.id.model_class is C1) class C2(M2): pass self.assertFalse(hasattr(C2, 'id')) self.assertTrue(C2.key.primary_key) self.assertTrue(C2.key.model_class is C2) class C3(M3): pass self.assertTrue(hasattr(C3, 'id')) self.assertFalse(C3.id.primary_key) self.assertTrue(C3.id.model_class is C3) class TestAliasBehavior(ModelTestCase): requires = [UpperModel] def test_alias_with_coerce(self): UpperModel.create(data='test') um = UpperModel.get() self.assertEqual(um.data, 'TEST') Alias = UpperModel.alias() normal = (UpperModel.data == 'foo') aliased = (Alias.data == 'foo') _, normal_p = compiler.parse_node(normal) _, aliased_p = compiler.parse_node(aliased) self.assertEqual(normal_p, ['FOO']) self.assertEqual(aliased_p, ['FOO']) expected = ( 'SELECT "uppermodel"."id", "uppermodel"."data" ' 'FROM "uppermodel" AS uppermodel ' 'WHERE ("uppermodel"."data" = ?)') query = UpperModel.select().where(UpperModel.data == 'foo') sql, params = compiler.generate_select(query) self.assertEqual(sql, expected) self.assertEqual(params, ['FOO']) query = Alias.select().where(Alias.data == 'foo') sql, params = compiler.generate_select(query) self.assertEqual(sql, expected) self.assertEqual(params, ['FOO']) @skip_unless(lambda: isinstance(test_db, PostgresqlDatabase)) class TestInsertReturningModelAPI(PeeweeTestCase): def setUp(self): super(TestInsertReturningModelAPI, self).setUp() self.db = database_initializer.get_database( 'postgres', PostgresqlDatabase) class BaseModel(TestModel): class Meta: database = self.db self.BaseModel = BaseModel self.models = [] def tearDown(self): if self.models: self.db.drop_tables(self.models, True) super(TestInsertReturningModelAPI, self).tearDown() def test_insert_returning(self): class User(self.BaseModel): username = CharField() class Meta: db_table = 'users' self.models.append(User) User.create_table() query = User.insert(username='charlie') sql, params = query.sql() self.assertEqual(sql, ( 'INSERT INTO "users" ("username") VALUES (%s) RETURNING "id"')) self.assertEqual(params, ['charlie']) result = query.execute() charlie = User.get(User.username == 'charlie') self.assertEqual(result, charlie.id) result2 = User.insert(username='huey').execute() self.assertTrue(result2 > result) huey = User.get(User.username == 'huey') self.assertEqual(result2, huey.id) mickey = User.create(username='mickey') self.assertEqual(mickey.id, huey.id + 1) mickey.save() self.assertEqual(User.select().count(), 3) def test_non_int_pk(self): class User(self.BaseModel): username = CharField(primary_key=True) data = IntegerField() class Meta: db_table = 'users' self.models.append(User) User.create_table() query = User.insert(username='charlie', data=1337) sql, params = query.sql() self.assertEqual(sql, ( 'INSERT INTO "users" ("username", "data") ' 'VALUES (%s, %s) RETURNING "username"')) self.assertEqual(params, ['charlie', 1337]) self.assertEqual(query.execute(), 'charlie') charlie = User.get(User.data == 1337) self.assertEqual(charlie.username, 'charlie') huey = User.create(username='huey', data=1024) self.assertEqual(huey.username, 'huey') self.assertEqual(huey.data, 1024) huey_db = User.get(User.data == 1024) self.assertEqual(huey_db.username, 'huey') huey_db.save() self.assertEqual(huey_db.username, 'huey') self.assertEqual(User.select().count(), 2) def test_composite_key(self): class Person(self.BaseModel): first = CharField() last = CharField() data = IntegerField() 
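            # CompositeKey promotes existing columns to a multi-column
            # primary key rather than adding a new column; with Postgres
            # insert-returning, the INSERT then returns every column of
            # the key, as the generated SQL below shows.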
class Meta: primary_key = CompositeKey('first', 'last') self.models.append(Person) Person.create_table() query = Person.insert(first='huey', last='leifer', data=3) sql, params = query.sql() self.assertEqual(sql, ( 'INSERT INTO "person" ("first", "last", "data") ' 'VALUES (%s, %s, %s) RETURNING "first", "last"')) self.assertEqual(params, ['huey', 'leifer', 3]) res = query.execute() self.assertEqual(res, ['huey', 'leifer']) huey = Person.get(Person.data == 3) self.assertEqual(huey.first, 'huey') self.assertEqual(huey.last, 'leifer') zaizee = Person.create(first='zaizee', last='owen', data=2) self.assertEqual(zaizee.first, 'zaizee') self.assertEqual(zaizee.last, 'owen') z_db = Person.get(Person.data == 2) self.assertEqual(z_db.first, 'zaizee') self.assertEqual(z_db.last, 'owen') z_db.save() self.assertEqual(Person.select().count(), 2) def test_insert_many(self): class User(self.BaseModel): username = CharField() class Meta: db_table = 'users' self.models.append(User) User.create_table() usernames = ['charlie', 'huey', 'zaizee'] data = [{'username': username} for username in usernames] query = User.insert_many(data) sql, params = query.sql() self.assertEqual(sql, ( 'INSERT INTO "users" ("username") ' 'VALUES (%s), (%s), (%s)')) self.assertEqual(params, usernames) res = query.execute() self.assertTrue(res is True) self.assertEqual(User.select().count(), 3) z = User.select().order_by(-User.username).get() self.assertEqual(z.username, 'zaizee') usernames = ['foo', 'bar', 'baz'] data = [{'username': username} for username in usernames] query = User.insert_many(data).return_id_list() sql, params = query.sql() self.assertEqual(sql, ( 'INSERT INTO "users" ("username") ' 'VALUES (%s), (%s), (%s) RETURNING "id"')) self.assertEqual(params, usernames) res = list(query.execute()) self.assertEqual(len(res), 3) foo = User.get(User.username == 'foo') bar = User.get(User.username == 'bar') baz = User.get(User.username == 'baz') self.assertEqual(res, [foo.id, bar.id, baz.id]) @skip_unless(lambda: isinstance(test_db, PostgresqlDatabase)) class TestReturningClause(ModelTestCase): requires = [User] def test_update_returning(self): User.create_users(3) u1, u2, u3 = [user for user in User.select().order_by(User.id)] uq = User.update(username='uII').where(User.id == u2.id) res = uq.execute() self.assertEqual(res, 1) # Number of rows modified. uq = uq.returning(User.username) users = [user for user in uq.execute()] self.assertEqual(len(users), 1) user, = users self.assertEqual(user.username, 'uII') self.assertIsNone(user.id) # Was not explicitly selected. uq = (User .update(username='huey') .where(User.username != 'uII') .returning(User)) users = [user for user in uq.execute()] self.assertEqual(len(users), 2) self.assertTrue(all([user.username == 'huey' for user in users])) self.assertTrue(all([user.id is not None for user in users])) uq = uq.dicts().returning(User.username) user_data = [data for data in uq.execute()] self.assertEqual( user_data, [{'username': 'huey'}, {'username': 'huey'}]) def test_delete_returning(self): User.create_users(10) dq = User.delete().where(User.username << ['u9', 'u10']) res = dq.execute() self.assertEqual(res, 2) # Number of rows modified. dq = (User .delete() .where(User.username << ['u7', 'u8']) .returning(User.username)) users = [user for user in dq.execute()] self.assertEqual(len(users), 2) usernames = sorted([user.username for user in users]) self.assertEqual(usernames, ['u7', 'u8']) ids = [user.id for user in users] self.assertEqual(ids, [None, None]) # Was not selected. 
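        # The general pattern being exercised (Postgres only): attaching
        # returning(...) to a write query makes execute() yield rows
        # populated with the returned columns, e.g. a sketch:
        #
        #     for user in (User
        #                  .delete()
        #                  .where(User.username == 'u1')
        #                  .returning(User)
        #                  .execute()):
        #         ...
        #
        # Columns not named in returning(...) come back as None.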
dq = (User .delete() .where(User.username == 'u1') .returning(User)) users = [user for user in dq.execute()] self.assertEqual(len(users), 1) user, = users self.assertEqual(user.username, 'u1') self.assertIsNotNone(user.id) def test_insert_returning(self): iq = User.insert(username='zaizee').returning(User) users = [user for user in iq.execute()] self.assertEqual(len(users), 1) user, = users self.assertEqual(user.username, 'zaizee') self.assertIsNotNone(user.id) iq = (User .insert_many([ {'username': 'charlie'}, {'username': 'huey'}, {'username': 'connor'}, {'username': 'leslie'}, {'username': 'mickey'}]) .returning(User)) users = sorted([user for user in iq.tuples().execute()]) usernames = [username for _, username in users] self.assertEqual(usernames, [ 'charlie', 'huey', 'connor', 'leslie', 'mickey', ]) id_charlie = users[0][0] id_mickey = users[-1][0] self.assertEqual(id_mickey - id_charlie, 4) class TestModelHash(PeeweeTestCase): def test_hash(self): class MyUser(User): pass d = {} u1 = User(id=1) u2 = User(id=2) u3 = User(id=3) m1 = MyUser(id=1) m2 = MyUser(id=2) m3 = MyUser(id=3) d[u1] = 'u1' d[u2] = 'u2' d[m1] = 'm1' d[m2] = 'm2' self.assertTrue(u1 in d) self.assertTrue(u2 in d) self.assertFalse(u3 in d) self.assertTrue(m1 in d) self.assertTrue(m2 in d) self.assertFalse(m3 in d) self.assertEqual(d[u1], 'u1') self.assertEqual(d[u2], 'u2') self.assertEqual(d[m1], 'm1') self.assertEqual(d[m2], 'm2') un = User() mn = MyUser() d[un] = 'un' d[mn] = 'mn' self.assertTrue(un in d) # Hash implementation. self.assertTrue(mn in d) self.assertEqual(d[un], 'un') self.assertEqual(d[mn], 'mn') class TestDeleteNullableForeignKeys(ModelTestCase): requires = [User, Note, Flag, NoteFlagNullable] def test_delete(self): u = User.create(username='u') n = Note.create(user=u, text='n') f = Flag.create(label='f') nf1 = NoteFlagNullable.create(note=n, flag=f) nf2 = NoteFlagNullable.create(note=n, flag=None) nf3 = NoteFlagNullable.create(note=None, flag=f) nf4 = NoteFlagNullable.create(note=None, flag=None) self.assertEqual(nf1.delete_instance(), 1) self.assertEqual(nf2.delete_instance(), 1) self.assertEqual(nf3.delete_instance(), 1) self.assertEqual(nf4.delete_instance(), 1) class TestJoinNullableForeignKey(ModelTestCase): requires = [Parent, Orphan, Child] def setUp(self): super(TestJoinNullableForeignKey, self).setUp() p1 = Parent.create(data='p1') p2 = Parent.create(data='p2') for i in range(1, 3): Child.create(parent=p1, data='child%s-p1' % i) Child.create(parent=p2, data='child%s-p2' % i) Orphan.create(parent=p1, data='orphan%s-p1' % i) Orphan.create(data='orphan1-noparent') Orphan.create(data='orphan2-noparent') def test_no_empty_instances(self): with self.assertQueryCount(1): query = (Orphan .select(Orphan, Parent) .join(Parent, JOIN.LEFT_OUTER) .order_by(Orphan.id)) res = [(orphan.data, orphan.parent is None) for orphan in query] self.assertEqual(res, [ ('orphan1-p1', False), ('orphan2-p1', False), ('orphan1-noparent', True), ('orphan2-noparent', True), ]) def test_unselected_fk_pk(self): with self.assertQueryCount(1): query = (Orphan .select(Orphan.data, Parent.data) .join(Parent, JOIN.LEFT_OUTER) .order_by(Orphan.id)) res = [(orphan.data, orphan.parent is None) for orphan in query] self.assertEqual(res, [ ('orphan1-p1', False), ('orphan2-p1', False), ('orphan1-noparent', False), ('orphan2-noparent', False), ]) def test_non_null_fk_unselected_fk(self): with self.assertQueryCount(1): query = (Child .select(Child.data, Parent.data) .join(Parent, JOIN.LEFT_OUTER) .order_by(Child.id)) res = [(child.data, 
child.parent is None) for child in query] self.assertEqual(res, [ ('child1-p1', False), ('child1-p2', False), ('child2-p1', False), ('child2-p2', False), ]) res = [child.parent.data for child in query] self.assertEqual(res, ['p1', 'p2', 'p1', 'p2']) res = [(child._data['parent'], child.parent.id) for child in query] self.assertEqual(res, [ (None, None), (None, None), (None, None), (None, None), ]) class TestDefaultDirtyBehavior(PeeweeTestCase): def setUp(self): super(TestDefaultDirtyBehavior, self).setUp() DefaultsModel.drop_table(True) DefaultsModel.create_table() def test_default_dirty(self): DM = DefaultsModel DM._meta.only_save_dirty = True dm = DM() dm.save() self.assertEqual(dm.field, 1) self.assertEqual(dm.control, 1) dm_db = DM.get((DM.field == 1) & (DM.control == 1)) self.assertEqual(dm_db.field, 1) self.assertEqual(dm_db.control, 1) # No changes. self.assertFalse(dm_db.save()) dm2 = DM.create() self.assertEqual(dm2.field, 3) # One extra when fetched from DB. self.assertEqual(dm2.control, 1) dm._meta.only_save_dirty = False dm3 = DM() self.assertEqual(dm3.field, 4) self.assertEqual(dm3.control, 1) dm3.save() dm3_db = DM.get(DM.id == dm3.id) self.assertEqual(dm3_db.field, 4) class TestFunctionCoerceRegression(PeeweeTestCase): def test_function_coerce(self): class M1(Model): data = IntegerField() class Meta: database = in_memory_db class M2(Model): id = IntegerField() class Meta: database = in_memory_db in_memory_db.create_tables([M1, M2]) for i in range(3): M1.create(data=i) M2.create(id=i + 1) qm1 = M1.select(fn.GROUP_CONCAT(M1.data).coerce(False).alias('data')) qm2 = M2.select(fn.GROUP_CONCAT(M2.id).coerce(False).alias('ids')) m1 = qm1.get() self.assertEqual(m1.data, '0,1,2') m2 = qm2.get() self.assertEqual(m2.ids, '1,2,3') @skip_unless( lambda: (isinstance(test_db, PostgresqlDatabase) or (isinstance(test_db, SqliteDatabase) and supports_tuples))) class TestTupleComparison(ModelTestCase): requires = [User] def test_tuples(self): ua = User.create(username='user-a') ub = User.create(username='user-b') uc = User.create(username='user-c') query = User.select().where( Tuple(User.username, User.id) == ('user-b', ub.id)) self.assertEqual(query.count(), 1) obj = query.get() self.assertEqual(obj, ub) class TestModelObjectIDSpecification(PeeweeTestCase): def test_specify_object_id_name(self): class User(Model): pass class T0(Model): user = ForeignKeyField(User) class T1(Model): user = ForeignKeyField(User, db_column='uid') class T2(Model): user = ForeignKeyField(User, object_id_name='uid') class T3(Model): user = ForeignKeyField(User, db_column='x', object_id_name='uid') class T4(Model): foo = ForeignKeyField(User, db_column='user') class T5(Model): foo = ForeignKeyField(User, object_id_name='uid') user = User(id=1337) self.assertEqual(T0(user=user).user_id, 1337) self.assertEqual(T1(user=user).uid, 1337) self.assertEqual(T2(user=user).uid, 1337) self.assertEqual(T3(user=user).uid, 1337) self.assertEqual(T4(foo=user).user, 1337) self.assertEqual(T5(foo=user).uid, 1337) def conflicts_with_name(): class TE(Model): user = ForeignKeyField(User, object_id_name='user') self.assertRaises(ValueError, conflicts_with_name) peewee-2.10.2/playhouse/tests/test_pool.py000066400000000000000000000333561316645060400206220ustar00rootroot00000000000000import heapq import psycopg2 # Trigger import error if not installed. 
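# A minimal usage sketch for the pooled databases exercised below; the
# connection parameters are illustrative only and not part of this suite:
#
#     from playhouse.pool import PooledPostgresqlDatabase
#
#     db = PooledPostgresqlDatabase(
#         'my_app',
#         max_connections=32,  # connect() raises ValueError when exhausted
#         stale_timeout=300)   # seconds before an idle conn is recycled
#
# Calling db.close() returns the connection to the pool; stale connections
# are discarded at checkout time rather than being handed back out.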
import threading
import time

from peewee import *
from peewee import savepoint
from peewee import transaction
from playhouse.pool import *
from playhouse.tests.base import database_initializer
from playhouse.tests.base import PeeweeTestCase


class FakeTransaction(transaction):
    def _add_history(self, message):
        self.db.transaction_history.append(
            '%s%s' % (message, self._conn))

    def __enter__(self):
        self._conn = self.db.get_conn()
        self._add_history('O')

    def commit(self, begin=True):
        self._add_history('C')

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._add_history('X')


class FakeDatabase(SqliteDatabase):
    def __init__(self, *args, **kwargs):
        self.counter = 0
        self.closed_counter = 0
        self.transaction_history = []
        super(FakeDatabase, self).__init__(*args, **kwargs)

    def _connect(self, *args, **kwargs):
        """
        Return increasing integers instead of actual database connections.
        """
        self.counter += 1
        return self.counter

    def _close(self, conn):
        self.closed_counter += 1

    def transaction(self):
        return FakeTransaction(self)


class TestDB(PooledDatabase, FakeDatabase):
    def __init__(self, *args, **kwargs):
        super(TestDB, self).__init__(*args, **kwargs)
        self.conn_key = lambda conn: conn


pooled_db = database_initializer.get_database(
    'postgres',
    db_class=PooledPostgresqlDatabase)
normal_db = database_initializer.get_database('postgres')


class Number(Model):
    value = IntegerField()

    class Meta:
        database = pooled_db


class TestPooledDatabase(PeeweeTestCase):
    def setUp(self):
        super(TestPooledDatabase, self).setUp()
        self.db = TestDB('testing')

    def test_connection_pool(self):
        # Ensure that a connection is created and accessible.
        self.assertEqual(self.db.get_conn(), 1)
        self.assertEqual(self.db.get_conn(), 1)

        # Ensure that closing and reopening will return the same connection.
        self.db.close()
        self.db.connect()
        self.assertEqual(self.db.get_conn(), 1)

    def test_concurrent_connections(self):
        db = TestDB('testing')
        signal = threading.Event()

        def open_conn():
            db.connect()
            signal.wait()
            db.close()

        # Simulate 5 concurrent connections.
        threads = [threading.Thread(target=open_conn) for i in range(5)]
        for thread in threads:
            thread.start()

        # Wait for all connections to be opened.
        while db.counter < 5:
            time.sleep(.01)

        # Signal threads to close connections and join threads.
        signal.set()
        [t.join() for t in threads]

        self.assertEqual(db.counter, 5)
        self.assertEqual(
            sorted([conn for _, conn in db._connections]),
            [1, 2, 3, 4, 5])
        # All 5 are ready to be re-used.
        self.assertEqual(db._in_use, {})

    def test_max_conns(self):
        for i in range(self.db.max_connections):
            self.db._local.closed = True
            self.db.connect()
            self.assertEqual(self.db.get_conn(), i + 1)
        self.db._local.closed = True
        self.assertRaises(ValueError, self.db.connect)

    def test_stale_timeout(self):
        # Create a test database with a very short stale timeout.
        db = TestDB('testing', stale_timeout=.01)
        self.assertEqual(db.get_conn(), 1)
        self.assertTrue(1 in db._in_use)

        # Sleep long enough for the connection to be considered stale.
        time.sleep(.01)

        # When we close, since the conn is stale it won't be returned to
        # the pool.
        db.close()
        self.assertEqual(db._in_use, {})
        self.assertEqual(db._connections, [])
        self.assertEqual(db._closed, set())

        # A new connection will be returned.
        self.assertEqual(db.get_conn(), 2)

    def test_stale_on_checkout(self):
        # Create a test database with a very short stale timeout.
        db = TestDB('testing', stale_timeout=.01)
        self.assertEqual(db.get_conn(), 1)
        self.assertTrue(1 in db._in_use)

        # When we close, the conn is not yet stale, so it is returned to
        # the pool.
db.close() # Sleep long enough for the connection to be considered stale. time.sleep(.01) self.assertEqual(db._in_use, {}) self.assertEqual(len(db._connections), 1) # A new connection will be returned, as the original one is stale. # The stale connection (1) will be removed and not placed in the # "closed" set. self.assertEqual(db.get_conn(), 2) self.assertEqual(db._closed, set()) def test_manual_close(self): conn = self.db.get_conn() self.assertEqual(conn, 1) self.db.manual_close() # When we manually close a connection that's not yet stale, we add it # back to the queue (because close() calls _close()), then close it # for real, and mark it with a tombstone. The next time it's checked # out, it will simply be removed and skipped over. self.assertEqual(self.db._closed, set([1])) self.assertEqual(len(self.db._connections), 1) self.assertEqual(self.db._in_use, {}) conn = self.db.get_conn() self.assertEqual(conn, 2) self.assertEqual(self.db._closed, set()) self.assertEqual(len(self.db._connections), 0) self.assertEqual(list(self.db._in_use.keys()), [2]) self.db.close() conn = self.db.get_conn() self.assertEqual(conn, 2) def test_stale_timeout_cascade(self): now = time.time() db = TestDB('testing', stale_timeout=10) conns = [ (now - 20, 1), (now - 15, 2), (now - 5, 3), (now, 4), ] for ts_conn in conns: heapq.heappush(db._connections, ts_conn) self.assertEqual(db.get_conn(), 3) self.assertEqual(db._in_use, {3: now - 5}) self.assertEqual(db._connections, [(now, 4)]) def test_connect_cascade(self): now = time.time() db = TestDB('testing', stale_timeout=10) conns = [ (now - 15, 1), # Skipped due to being stale. (now - 5, 2), # In the 'closed' set. (now - 3, 3), (now, 4), # In the 'closed' set. ] db._closed.add(2) db._closed.add(4) db.counter = 4 # The next connection we create will have id=5. for ts_conn in conns: heapq.heappush(db._connections, ts_conn) # Conn 3 is not stale or closed, so we will get it. self.assertEqual(db.get_conn(), 3) self.assertEqual(db._in_use, {3: now - 3}) self.assertEqual(db._connections, [(now, 4)]) # Since conn 4 is closed, we will open a new conn. db._local.closed = True # Pretend we're in a different thread. 
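        # For context: available connections live on a min-heap of
        # (timestamp, conn) pairs, so the oldest connection is always
        # examined first; stale entries and tombstoned (manually-closed)
        # entries are popped and skipped during this cascade.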
db.connect() self.assertEqual(db.get_conn(), 5) self.assertEqual(sorted(db._in_use.keys()), [3, 5]) self.assertEqual(db._connections, []) def test_execution_context(self): self.assertEqual(self.db.get_conn(), 1) with self.db.execution_context(): self.assertEqual(self.db.get_conn(), 2) self.assertEqual(self.db.transaction_history, ['O2']) self.assertEqual(self.db.get_conn(), 1) self.assertEqual(self.db.transaction_history, ['O2', 'C2', 'X2']) with self.db.execution_context(with_transaction=False): self.assertEqual(self.db.get_conn(), 2) self.assertEqual(self.db.transaction_history, ['O2', 'C2', 'X2']) self.assertEqual(self.db.get_conn(), 1) self.assertEqual(self.db.transaction_history, ['O2', 'C2', 'X2']) self.assertEqual(len(self.db._connections), 1) self.assertEqual(len(self.db._in_use), 1) def test_execution_context_nested(self): def assertInUse(n): self.assertEqual(len(self.db._in_use), n) def assertFree(n): self.assertEqual(len(self.db._connections), n) def assertHistory(history): self.assertEqual(self.db.transaction_history, history) @self.db.execution_context() def subroutine(): pass self.assertEqual(self.db.get_conn(), 1) assertFree(0) assertInUse(1) with self.db.execution_context(False): self.assertEqual(self.db.get_conn(), 2) assertFree(0) assertInUse(2) assertHistory([]) with self.db.execution_context(): self.assertEqual(self.db.get_conn(), 3) assertFree(0) assertInUse(3) assertHistory(['O3']) subroutine() assertFree(1) assertInUse(3) assertHistory(['O3', 'O4', 'C4', 'X4']) assertFree(2) assertInUse(2) assertHistory(['O3', 'O4', 'C4', 'X4', 'C3', 'X3']) # Since conn 3 has been returned to the pool, the subroutine # will use conn3 this time. subroutine() assertFree(2) assertInUse(2) assertHistory( ['O3', 'O4', 'C4', 'X4', 'C3', 'X3', 'O3', 'C3', 'X3']) self.assertEqual(self.db.get_conn(), 1) assertFree(3) assertInUse(1) assertHistory(['O3', 'O4', 'C4', 'X4', 'C3', 'X3', 'O3', 'C3', 'X3']) def test_execution_context_threads(self): signal = threading.Event() def create_context(): with self.db.execution_context(): signal.wait() # Simulate 5 concurrent connections. threads = [threading.Thread(target=create_context) for i in range(5)] for thread in threads: thread.start() # Wait for all connections to be opened. while len(self.db.transaction_history) < 5: time.sleep(.01) # Signal threads to close connections and join threads. 
signal.set() [t.join() for t in threads] self.assertEqual(self.db.counter, 5) self.assertEqual(len(self.db._connections), 5) self.assertEqual(len(self.db._in_use), 0) self.assertEqual( self.db.transaction_history[:5], ['O1', 'O2', 'O3', 'O4', 'O5']) rest = sorted(self.db.transaction_history[5:]) self.assertEqual( rest, ['C1', 'C2', 'C3', 'C4', 'C5', 'X1', 'X2', 'X3', 'X4', 'X5']) def test_execution_context_mixed_thread(self): sig_sub = threading.Event() sig_ctx = threading.Event() sig_in_sub = threading.Event() sig_in_ctx = threading.Event() self.assertEqual(self.db.get_conn(), 1) @self.db.execution_context() def subroutine(): sig_in_sub.set() sig_sub.wait() def target(): with self.db.execution_context(): subroutine() sig_in_ctx.set() sig_ctx.wait() t = threading.Thread(target=target) t.start() sig_in_sub.wait() self.assertEqual(len(self.db._in_use), 3) self.assertEqual(len(self.db._connections), 0) self.assertEqual(self.db.transaction_history, ['O2', 'O3']) sig_sub.set() sig_in_ctx.wait() self.assertEqual(len(self.db._in_use), 2) self.assertEqual(len(self.db._connections), 1) self.assertEqual( self.db.transaction_history, ['O2', 'O3', 'C3', 'X3']) sig_ctx.set() t.join() self.assertEqual(len(self.db._in_use), 1) self.assertEqual(len(self.db._connections), 2) self.assertEqual( self.db.transaction_history, ['O2', 'O3', 'C3', 'X3', 'C2', 'X2']) class TestConnectionPool(PeeweeTestCase): def setUp(self): super(TestConnectionPool, self).setUp() # Use an un-pooled database to drop/create the table. if Number._meta.db_table in normal_db.get_tables(): normal_db.drop_table(Number) normal_db.create_table(Number) def test_reuse_connection(self): for i in range(5): Number.create(value=i) conn_id = id(pooled_db.get_conn()) pooled_db.close() for i in range(5, 10): Number.create(value=i) self.assertEqual(id(pooled_db.get_conn()), conn_id) self.assertEqual( [x.value for x in Number.select().order_by(Number.id)], list(range(10))) def test_execution_context(self): with pooled_db.execution_context(): Number.create(value=1) with pooled_db.atomic() as sp: self.assertTrue(isinstance(sp, savepoint)) Number.create(value=2) sp.rollback() with pooled_db.atomic() as sp: self.assertTrue(isinstance(sp, savepoint)) Number.create(value=3) with pooled_db.execution_context(with_transaction=False): with pooled_db.atomic() as txn: self.assertTrue(isinstance(txn, transaction)) Number.create(value=4) # Executed in autocommit mode. Number.create(value=5) with pooled_db.execution_context(): numbers = [ number.value for number in Number.select().order_by(Number.value)] self.assertEqual(numbers, [1, 3, 4, 5]) def test_bad_connection(self): pooled_db.connect() try: pooled_db.execute_sql('select 1/0') except Exception as exc: pass pooled_db.close() pooled_db.connect() # Re-connect. pooled_db.execute_sql('select 1') # Can execute queries. 
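        # The point of this test: a query error (the division by zero
        # above) must not poison the pooled connection -- after a
        # close()/connect() cycle the same pool is usable again.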
pooled_db.close() peewee-2.10.2/playhouse/tests/test_postgres.py000066400000000000000000001072531316645060400215150ustar00rootroot00000000000000#coding:utf-8 import datetime import json import os import sys import uuid import psycopg2 try: from psycopg2.extras import Json except ImportError: Json = None from peewee import prefetch from playhouse.postgres_ext import * from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if from playhouse.tests.models import TestingID as _TestingID class TestPostgresqlExtDatabase(PostgresqlExtDatabase): insert_returning = False PYPY = 'PyPy' in sys.version test_db = database_initializer.get_database( 'postgres', db_class=TestPostgresqlExtDatabase) test_ss_db = database_initializer.get_database( 'postgres', db_class=TestPostgresqlExtDatabase, server_side_cursors=True, user='postgres') class BaseModel(Model): class Meta: database = test_db class Testing(BaseModel): name = CharField() data = HStoreField() class Meta: order_by = ('name',) try: class TestingJson(BaseModel): data = JSONField() class TestingJsonNull(BaseModel): data = JSONField(null=True) except: TestingJson = None try: class BJson(BaseModel): data = BinaryJSONField() except: BJson = None class TZModel(BaseModel): dt = DateTimeTZField() class ArrayModel(BaseModel): tags = ArrayField(CharField) ints = ArrayField(IntegerField, dimensions=2) class FakeIndexedField(IndexedFieldMixin, CharField): index_type = 'FAKE' class TestIndexModel(BaseModel): array_index = ArrayField(CharField) array_noindex= ArrayField(IntegerField, index=False) fake_index = FakeIndexedField() fake_index_with_type = FakeIndexedField(index_type='MAGIC') fake_noindex = FakeIndexedField(index=False) class SSCursorModel(Model): data = CharField() class Meta: database = test_ss_db class NormalModel(BaseModel): data = CharField() class FTSModel(BaseModel): title = CharField() data = TextField() fts_data = TSVectorField() class User(BaseModel): username = CharField(unique=True) class Meta: db_table = 'users_x' class Post(BaseModel): user = ForeignKeyField(User) content = TextField() timestamp = DateTimeField(default=datetime.datetime.now) class TestingID(_TestingID): class Meta: database = test_db class Event(BaseModel): name = CharField() duration = IntervalField() MODELS = [ Testing, TestingID, ArrayModel, FTSModel, NormalModel, ] class BasePostgresqlExtTestCase(ModelTestCase): requires = MODELS class TestCast(ModelTestCase): requires = [User] def create_users(self, *usernames): for username in usernames: User.create(username=username) def test_cast_int(self): self.create_users('100', '001', '101') username_i = User.username.cast('integer') query = (User .select(User.username, username_i.alias('username_i')) .order_by(User.username)) data = [(user.username, user.username_i) for user in query] self.assertEqual(data, [ ('001', 1), ('100', 100), ('101', 101), ]) def test_cast_float(self): self.create_users('00.01', '100', '1.2345') query = (User .select(User.username.cast('float').alias('u_f')) .order_by(SQL('u_f'))) self.assertEqual([user.u_f for user in query], [.01, 1.2345, 100.]) class TestTZField(BasePostgresqlExtTestCase): def test_tz_field(self): TZModel.drop_table(True) TZModel.create_table() test_db.execute_sql('set time zone "us/central";') dt = datetime.datetime.now() tz = TZModel.create(dt=dt) self.assertTrue(tz.dt.tzinfo is None) tz = TZModel.get(TZModel.id == tz.id) 
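        # Round-trip through a "timestamp with time zone" column: the
        # naive datetime stored above comes back timezone-aware once it
        # has been read from Postgres.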
self.assertFalse(tz.dt.tzinfo is None) class TestHStoreField(BasePostgresqlExtTestCase): def setUp(self): super(TestHStoreField, self).setUp() self.t1 = None self.t2 = None def create(self): self.t1 = Testing.create(name='t1', data={'k1': 'v1', 'k2': 'v2'}) self.t2 = Testing.create(name='t2', data={'k2': 'v2', 'k3': 'v3'}) def test_hstore_storage(self): self.create() self.assertEqual(Testing.get(name='t1').data, {'k1': 'v1', 'k2': 'v2'}) self.assertEqual(Testing.get(name='t2').data, {'k2': 'v2', 'k3': 'v3'}) self.t1.data = {'k4': 'v4'} self.t1.save() self.assertEqual(Testing.get(name='t1').data, {'k4': 'v4'}) t = Testing.create(name='t3', data={}) self.assertEqual(Testing.get(name='t3').data, {}) def test_hstore_selecting(self): self.create() sq = Testing.select(Testing.name, Testing.data.keys().alias('keys')) self.assertEqual([(x.name, sorted(x.keys)) for x in sq], [ ('t1', ['k1', 'k2']), ('t2', ['k2', 'k3']) ]) sq = Testing.select(Testing.name, Testing.data.values().alias('vals')) self.assertEqual([(x.name, sorted(x.vals)) for x in sq], [ ('t1', ['v1', 'v2']), ('t2', ['v2', 'v3']) ]) sq = Testing.select(Testing.name, Testing.data.items().alias('mtx')) self.assertEqual([(x.name, sorted(x.mtx)) for x in sq], [ ('t1', [['k1', 'v1'], ['k2', 'v2']]), ('t2', [['k2', 'v2'], ['k3', 'v3']]), ]) sq = Testing.select(Testing.name, Testing.data.slice('k2', 'k3').alias('kz')) self.assertEqual([(x.name, x.kz) for x in sq], [ ('t1', {'k2': 'v2'}), ('t2', {'k2': 'v2', 'k3': 'v3'}), ]) sq = Testing.select(Testing.name, Testing.data.slice('k4').alias('kz')) self.assertEqual([(x.name, x.kz) for x in sq], [ ('t1', {}), ('t2', {}), ]) sq = Testing.select(Testing.name, Testing.data.exists('k3').alias('ke')) self.assertEqual([(x.name, x.ke) for x in sq], [ ('t1', False), ('t2', True), ]) sq = Testing.select(Testing.name, Testing.data.defined('k3').alias('ke')) self.assertEqual([(x.name, x.ke) for x in sq], [ ('t1', False), ('t2', True), ]) sq = Testing.select(Testing.name, Testing.data['k1'].alias('k1')) self.assertEqual([(x.name, x.k1) for x in sq], [ ('t1', 'v1'), ('t2', None), ]) sq = Testing.select(Testing.name).where(Testing.data['k1'] == 'v1') self.assertEqual([x.name for x in sq], ['t1']) def test_hstore_filtering(self): self.create() sq = Testing.select().where(Testing.data == {'k1': 'v1', 'k2': 'v2'}) self.assertEqual([x.name for x in sq], ['t1']) sq = Testing.select().where(Testing.data == {'k2': 'v2'}) self.assertEqual([x.name for x in sq], []) # test single key sq = Testing.select().where(Testing.data.contains('k3')) self.assertEqual([x.name for x in sq], ['t2']) # test list of keys sq = Testing.select().where(Testing.data.contains(['k2', 'k3'])) self.assertEqual([x.name for x in sq], ['t2']) sq = Testing.select().where(Testing.data.contains(['k2'])) self.assertEqual([x.name for x in sq], ['t1', 't2']) # test dict sq = Testing.select().where(Testing.data.contains({'k2': 'v2', 'k3': 'v3'})) self.assertEqual([x.name for x in sq], ['t2']) sq = Testing.select().where(Testing.data.contains({'k2': 'v2'})) self.assertEqual([x.name for x in sq], ['t1', 't2']) sq = Testing.select().where(Testing.data.contains({'k2': 'v3'})) self.assertEqual([x.name for x in sq], []) # test contains any. 
        sq = Testing.select().where(Testing.data.contains_any('k3', 'kx'))
        self.assertEqual([x.name for x in sq], ['t2'])

        sq = Testing.select().where(Testing.data.contains_any('k2', 'x', 'k3'))
        self.assertEqual([x.name for x in sq], ['t1', 't2'])

        sq = Testing.select().where(Testing.data.contains_any('x', 'kx', 'y'))
        self.assertEqual([x.name for x in sq], [])

    def test_hstore_filter_functions(self):
        self.create()

        sq = Testing.select().where(Testing.data.exists('k2') == True)
        self.assertEqual([x.name for x in sq], ['t1', 't2'])

        sq = Testing.select().where(Testing.data.exists('k3') == True)
        self.assertEqual([x.name for x in sq], ['t2'])

        sq = Testing.select().where(Testing.data.defined('k2') == True)
        self.assertEqual([x.name for x in sq], ['t1', 't2'])

        sq = Testing.select().where(Testing.data.defined('k3') == True)
        self.assertEqual([x.name for x in sq], ['t2'])

    def test_hstore_update_functions(self):
        self.create()

        rc = Testing.update(data=Testing.data.update(k4='v4')).where(
            Testing.name == 't1'
        ).execute()
        self.assertEqual(rc, 1)
        self.assertEqual(Testing.get(name='t1').data,
                         {'k1': 'v1', 'k2': 'v2', 'k4': 'v4'})

        rc = Testing.update(data=Testing.data.update(k5='v5', k6='v6')).where(
            Testing.name == 't2'
        ).execute()
        self.assertEqual(rc, 1)
        self.assertEqual(Testing.get(name='t2').data,
                         {'k2': 'v2', 'k3': 'v3', 'k5': 'v5', 'k6': 'v6'})

        rc = Testing.update(data=Testing.data.update(k2='vxxx')).execute()
        self.assertEqual(rc, 2)
        self.assertEqual([x.data for x in Testing.select()], [
            {'k1': 'v1', 'k2': 'vxxx', 'k4': 'v4'},
            {'k2': 'vxxx', 'k3': 'v3', 'k5': 'v5', 'k6': 'v6'}])

        rc = Testing.update(data=Testing.data.delete('k4')).where(
            Testing.name == 't1'
        ).execute()
        self.assertEqual(rc, 1)
        self.assertEqual(Testing.get(name='t1').data,
                         {'k1': 'v1', 'k2': 'vxxx'})

        rc = Testing.update(data=Testing.data.delete('k5')).execute()
        self.assertEqual(rc, 2)
        self.assertEqual([x.data for x in Testing.select()], [
            {'k1': 'v1', 'k2': 'vxxx'},
            {'k2': 'vxxx', 'k3': 'v3', 'k6': 'v6'}])

        rc = Testing.update(data=Testing.data.delete('k1', 'k2')).execute()
        self.assertEqual(rc, 2)
        self.assertEqual([x.data for x in Testing.select()], [
            {},
            {'k3': 'v3', 'k6': 'v6'}])


class TestArrayField(BasePostgresqlExtTestCase):
    def _create_am(self):
        return ArrayModel.create(
            tags=['alpha', 'beta', 'gamma', 'delta'],
            ints=[[1, 2], [3, 4], [5, 6]])

    def test_joining_on_array_index(self):
        values = [
            ['foo', 'bar'],
            ['foo', 'nugget'],
            ['baze', 'nugget']]
        for tags in values:
            ArrayModel.create(tags=tags, ints=[])

        for value in ['nugget', 'herp', 'foo']:
            NormalModel.create(data=value)

        query = (ArrayModel
                 .select()
                 .join(
                     NormalModel,
                     on=(NormalModel.data == ArrayModel.tags[1]))
                 .order_by(ArrayModel.id))
        results = [am.tags for am in query]
        self.assertEqual(results, [
            ['foo', 'nugget'],
            ['baze', 'nugget']])

    def test_array_storage_retrieval(self):
        am = self._create_am()
        am_db = ArrayModel.get(ArrayModel.id == am.id)
        self.assertEqual(am_db.tags, ['alpha', 'beta', 'gamma', 'delta'])
        self.assertEqual(am_db.ints, [[1, 2], [3, 4], [5, 6]])

    def test_array_iterables(self):
        am = ArrayModel.create(tags=('foo', 'bar'), ints=[])
        am_db = ArrayModel.get(ArrayModel.id == am.id)
        self.assertEqual(am_db.tags, ['foo', 'bar'])

    def test_array_search(self):
        def assertAM(where, *instances):
            query = (ArrayModel
                     .select()
                     .where(where)
                     .order_by(ArrayModel.id))
            self.assertEqual([x.id for x in query], [x.id for x in instances])

        am = self._create_am()
        am2 = ArrayModel.create(tags=['alpha', 'beta'], ints=[[1, 1]])
        am3 = ArrayModel.create(tags=['delta'], ints=[[3, 4]])
        am4 = ArrayModel.create(tags=['中文'], ints=[[3, 4]])
        am5 = ArrayModel.create(tags=['中文', '汉语'], ints=[[3, 4]])

        assertAM((Param('beta') == fn.Any(ArrayModel.tags)), am, am2)
        assertAM((Param('delta') == fn.Any(ArrayModel.tags)), am, am3)
        assertAM((Param('omega') == fn.Any(ArrayModel.tags)))

        # Check the contains operator.
        assertAM(SQL("tags @> ARRAY['beta']::varchar[]"), am, am2)

        # Use the nicer API.
        assertAM(ArrayModel.tags.contains('beta'), am, am2)
        assertAM(ArrayModel.tags.contains('omega', 'delta'))
        assertAM(ArrayModel.tags.contains('汉语'), am5)
        assertAM(ArrayModel.tags.contains('alpha', 'delta'), am)

        # Check for any.
        assertAM(ArrayModel.tags.contains_any('beta'), am, am2)
        assertAM(ArrayModel.tags.contains_any('中文'), am4, am5)
        assertAM(ArrayModel.tags.contains_any('omega', 'delta'), am, am3)
        assertAM(ArrayModel.tags.contains_any('alpha', 'delta'), am, am2, am3)

    def test_array_index_slice(self):
        self._create_am()
        res = (ArrayModel
               .select(ArrayModel.tags[1].alias('arrtags'))
               .dicts()
               .get())
        self.assertEqual(res['arrtags'], 'beta')

        res = (ArrayModel
               .select(ArrayModel.tags[2:4].alias('foo'))
               .dicts()
               .get())
        self.assertEqual(res['foo'], ['gamma', 'delta'])

        res = (ArrayModel
               .select(ArrayModel.ints[1][1].alias('ints'))
               .dicts()
               .get())
        self.assertEqual(res['ints'], 4)

        res = (ArrayModel
               .select(ArrayModel.ints[1:2][0].alias('ints'))
               .dicts()
               .get())
        self.assertEqual(res['ints'], [[3], [5]])


class TestTSVectorField(BasePostgresqlExtTestCase):
    messages = [
        'A faith is a necessity to a man. Woe to him who believes in '
        'nothing.',
        'All who call on God in true faith, earnestly from the heart, will '
        'certainly be heard, and will receive what they have asked and '
        'desired.',
        'Be faithful in small things because it is in them that your '
        'strength lies.',
        'Faith consists in believing when it is beyond the power of reason '
        'to believe.',
        'Faith has to do with things that are not seen and hope with things '
        'that are not at hand.',
    ]

    def setUp(self):
        super(TestTSVectorField, self).setUp()
        for idx, msg in enumerate(self.messages):
            FTSModel.create(
                title=str(idx),
                data=msg,
                fts_data=fn.to_tsvector(msg))

    def assertMessages(self, expr, expected):
        query = FTSModel.select().where(expr).order_by(FTSModel.id)
        titles = [row.title for row in query]
        self.assertEqual(list(map(int, titles)), expected)

    def test_sql(self):
        query = FTSModel.select().where(Match(FTSModel.data, 'foo bar'))
        self.assertEqual(query.sql(), (
            'SELECT "t1"."id", "t1"."title", "t1"."data", "t1"."fts_data" '
            'FROM "ftsmodel" AS t1 '
            'WHERE (to_tsvector("t1"."data") @@ to_tsquery(%s))',
            ['foo bar']))

    def test_match_function(self):
        self.assertMessages(Match(FTSModel.data, 'heart'), [1])
        self.assertMessages(Match(FTSModel.data, 'god'), [1])
        self.assertMessages(Match(FTSModel.data, 'faith'), [0, 1, 2, 3, 4])
        self.assertMessages(Match(FTSModel.data, 'thing'), [2, 4])
        self.assertMessages(Match(FTSModel.data, 'faith & things'), [2, 4])
        self.assertMessages(Match(FTSModel.data, 'god | things'), [1, 2, 4])
        self.assertMessages(Match(FTSModel.data, 'god & things'), [])

    def test_tsvector_field(self):
        self.assertMessages(FTSModel.fts_data.match('heart'), [1])
        self.assertMessages(FTSModel.fts_data.match('god'), [1])
        self.assertMessages(FTSModel.fts_data.match('faith'), [0, 1, 2, 3, 4])
        self.assertMessages(FTSModel.fts_data.match('thing'), [2, 4])
        self.assertMessages(FTSModel.fts_data.match('faith & things'), [2, 4])
        self.assertMessages(FTSModel.fts_data.match('god | things'), [1, 2, 4])
        self.assertMessages(FTSModel.fts_data.match('god & things'), [])
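
# An illustrative sketch (not exercised by the test suite) contrasting the
# two search styles verified above. Names mirror the FTSModel fixture and
# the Match() helper used by this module.
def _tsvector_search_example(term):
    # Match() vectorizes the text column at query time with to_tsvector(),
    # so it needs no tsvector column but re-parses each row:
    query_time = FTSModel.select().where(Match(FTSModel.data, term))
    # A TSVectorField holds a pre-computed vector, so .match() can take
    # advantage of an index on fts_data instead:
    precomputed = FTSModel.select().where(FTSModel.fts_data.match(term))
    return query_time, precomputed
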
class SSCursorTestCase(PeeweeTestCase):
    counter = 0

    def setUp(self):
        super(SSCursorTestCase, self).setUp()
        self.close_conn()  # Close open connection.
        SSCursorModel.drop_table(True)
        NormalModel.drop_table(True)
        SSCursorModel.create_table()
        NormalModel.create_table()
        self.counter = 0
        for i in range(3):
            self.create()
        if PYPY:
            self.ExceptionClass = psycopg2.OperationalError
        else:
            self.ExceptionClass = psycopg2.ProgrammingError

    def create(self):
        self.counter += 1
        SSCursorModel.create(data=self.counter)
        NormalModel.create(data=self.counter)

    def close_conn(self):
        if not test_ss_db.is_closed():
            test_ss_db.close()

    def assertList(self, iterable):
        self.assertEqual(
            [x.data for x in iterable],
            [str(i) for i in range(1, self.counter + 1)])

    def test_model_interaction(self):
        query = SSCursorModel.select().order_by(SSCursorModel.data)
        self.assertList(query)

        query2 = query.clone()
        qr = query2.execute()
        self.assertList(qr)

        # The cursor is named and is still "alive" because we can still try
        # to fetch results.
        self.assertTrue(qr.cursor.name is not None)
        self.assertEqual(qr.cursor.fetchone(), None)

        # Execute the query in a transaction.
        with test_ss_db.transaction():
            query3 = query.clone()
            qr2 = query3.execute()

            # Different named cursor
            self.assertFalse(qr2.cursor.name == qr.cursor.name)
            self.assertList(qr2)

        # After the transaction we cannot fetch a result because the cursor
        # is dead.
        self.assertRaises(self.ExceptionClass, qr2.cursor.fetchone)

        # Try using the helper.
        query4 = query.clone()
        self.assertList(ServerSide(query4))

        # Named cursor is dead.
        self.assertRaises(self.ExceptionClass, query4._qr.cursor.fetchone)

    def test_serverside_normal_model(self):
        query = NormalModel.select().order_by(NormalModel.data)
        self.assertList(query)

        # The cursor is closed.
        self.assertTrue(query._qr.cursor.closed)

        clone = query.clone()
        self.assertList(ServerSide(clone))

        # Named cursor is dead.
        self.assertRaises(self.ExceptionClass, clone._qr.cursor.fetchone)

        # Ensure where clause is preserved.
        query = query.where(NormalModel.data == '2')
        data = [x.data for x in ServerSide(query)]
        self.assertEqual(data, ['2'])

        # The cursor is open.
        self.assertFalse(query._qr.cursor.closed)

    def test_ss_cursor(self):
        tbl = SSCursorModel._meta.db_table
        name = str(uuid.uuid1())

        # Get a named cursor and execute a select query.
        cursor = test_ss_db.get_cursor(name=name)
        cursor.execute('select data from %s order by id' % tbl)

        # Ensure the cursor attributes are as we expect.
        self.assertEqual(cursor.description, None)
        self.assertEqual(cursor.name, name)
        self.assertFalse(cursor.withhold)  # Close cursor after commit.

        # Cursor works and populates description after fetching one row.
        self.assertEqual(cursor.fetchone(), ('1',))
        self.assertEqual(cursor.description[0].name, 'data')

        # Explicitly close the cursor.
        test_ss_db.commit()
        self.assertRaises(self.ExceptionClass, cursor.fetchone)

        # This would not work if the named cursor were still holding a ref
        # to the table.
        test_ss_db.execute_sql('truncate table %s;' % tbl)
        test_ss_db.commit()


class BaseJsonFieldTestCase(object):
    ModelClass = None  # Subclasses must define this.
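    # BaseJsonFieldTestCase is a mixin: it is paired with ModelTestCase
    # further down, once for the json type and once for jsonb, with
    # ModelClass pointing at the corresponding model. A hypothetical new
    # variant (names invented for illustration) would look like:
    #
    #     class TestOtherJsonField(BaseJsonFieldTestCase, ModelTestCase):
    #         ModelClass = OtherJsonModel
    #         requires = [OtherJsonModel, NormalModel]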
    def test_json_field(self):
        data = {'k1': ['a1', 'a2'], 'k2': {'k3': 'v3'}}
        j = self.ModelClass.create(data=data)
        j_db = self.ModelClass.get(j._pk_expr())
        self.assertEqual(j_db.data, data)

    def test_joining_on_json_key(self):
        JsonModel = self.ModelClass
        values = [
            {'foo': 'bar', 'baze': {'nugget': 'alpha'}},
            {'foo': 'bar', 'baze': {'nugget': 'beta'}},
            {'herp': 'derp', 'baze': {'nugget': 'epsilon'}},
            {'herp': 'derp', 'bar': {'nuggie': 'alpha'}},
        ]
        for data in values:
            JsonModel.create(data=data)

        for value in ['alpha', 'beta', 'gamma', 'delta']:
            NormalModel.create(data=value)

        query = (JsonModel
                 .select()
                 .join(NormalModel, on=(
                     NormalModel.data == JsonModel.data['baze']['nugget']))
                 .order_by(JsonModel.id))
        results = [jm.data for jm in query]
        self.assertEqual(results, [
            {'foo': 'bar', 'baze': {'nugget': 'alpha'}},
            {'foo': 'bar', 'baze': {'nugget': 'beta'}},
        ])

    def test_json_lookup_methods(self):
        data = {
            'gp1': {
                'p1': {'c1': 'foo'},
                'p2': {'c2': 'bar'},
            },
            'gp2': {}}
        j = self.ModelClass.create(data=data)

        def assertLookup(lookup, expected):
            query = (self.ModelClass
                     .select(lookup)
                     .where(j._pk_expr())
                     .dicts())
            self.assertEqual(query.get(), expected)

        expr = self.ModelClass.data['gp1']['p1'].alias('pdata')
        assertLookup(expr, {'pdata': '{"c1": "foo"}'})
        assertLookup(expr.as_json(), {'pdata': {'c1': 'foo'}})

        expr = self.ModelClass.data['gp1']['p1']['c1'].alias('cdata')
        assertLookup(expr, {'cdata': 'foo'})
        assertLookup(expr.as_json(), {'cdata': 'foo'})

        j.data = [
            {'i1': ['foo', 'bar', 'baze']},
            ['nugget', 'mickey']]
        j.save()

        expr = self.ModelClass.data[0]['i1'].alias('idata')
        assertLookup(expr, {'idata': '["foo", "bar", "baze"]'})
        assertLookup(expr.as_json(), {'idata': ['foo', 'bar', 'baze']})

        expr = self.ModelClass.data[1][1].alias('ldata')
        assertLookup(expr, {'ldata': 'mickey'})
        assertLookup(expr.as_json(), {'ldata': 'mickey'})

    def test_json_cast(self):
        self.ModelClass.create(data={'foo': {'bar': 3}})
        self.ModelClass.create(data={'foo': {'bar': 5}})

        query = self.ModelClass.select(
            self.ModelClass.data['foo']['bar'].cast('float') * 1.5
        ).order_by(self.ModelClass.id).tuples()
        results = query[:]
        self.assertEqual(results, [
            (4.5,),
            (7.5,)])

    def test_json_path(self):
        data = {
            'foo': {
                'baz': {
                    'bar': ['i1', 'i2', 'i3'],
                    'baze': ['j1', 'j2'],
                }}}
        j = self.ModelClass.create(data=data)

        def assertPath(path, expected):
            query = (self.ModelClass
                     .select(path)
                     .where(j._pk_expr())
                     .dicts())
            self.assertEqual(query.get(), expected)

        expr = self.ModelClass.data.path('foo', 'baz', 'bar').alias('p')
        assertPath(expr, {'p': '["i1", "i2", "i3"]'})
        assertPath(expr.as_json(), {'p': ['i1', 'i2', 'i3']})

        expr = self.ModelClass.data.path('foo', 'baz', 'baze', 1).alias('p')
        assertPath(expr, {'p': 'j2'})
        assertPath(expr.as_json(), {'p': 'j2'})

    def test_json_field_sql(self):
        j = (self.ModelClass
             .select()
             .where(self.ModelClass.data == {'foo': 'bar'}))
        sql, params = j.sql()
        self.assertEqual(sql, (
            'SELECT "t1"."id", "t1"."data" '
            'FROM "%s" AS t1 WHERE ("t1"."data" = %%s)')
            % self.ModelClass._meta.db_table)
        self.assertEqual(params[0].adapted, {'foo': 'bar'})

        j = (self.ModelClass
             .select()
             .where(self.ModelClass.data['foo'] == 'bar'))
        sql, params = j.sql()
        self.assertEqual(sql, (
            'SELECT "t1"."id", "t1"."data" '
            'FROM "%s" AS t1 WHERE ("t1"."data"->>%%s = %%s)')
            % self.ModelClass._meta.db_table)
        self.assertEqual(params, ['foo', 'bar'])

    def assertItems(self, where, *items):
        query = (self.ModelClass
                 .select()
                 .where(where)
                 .order_by(self.ModelClass.id))
        self.assertEqual(
            [item.id for item in query],
            [item.id for item in items])
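    # Item access on a json column compiles to the ->/->> operators (see
    # test_json_field_sql above), so the lookups below that reference a
    # missing key simply match zero rows rather than raising an error.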
    def test_lookup(self):
        t1 = self.ModelClass.create(data={'k1': 'v1', 'k2': {'k3': 'v3'}})
        t2 = self.ModelClass.create(data={'k1': 'x1', 'k2': {'k3': 'x3'}})
        t3 = self.ModelClass.create(data={'k1': 'v1', 'j2': {'j3': 'v3'}})
        self.assertItems((self.ModelClass.data['k2']['k3'] == 'v3'), t1)
        self.assertItems((self.ModelClass.data['k1'] == 'v1'), t1, t3)

        # Valid key, no matching value.
        self.assertItems((self.ModelClass.data['k2'] == 'v1'))

        # Non-existent key.
        self.assertItems((self.ModelClass.data['not-here'] == 'v1'))

        # Non-existent nested key.
        self.assertItems((self.ModelClass.data['not-here']['xxx'] == 'v1'))

        self.assertItems((self.ModelClass.data['k2']['xxx'] == 'v1'))


def json_ok():
    if TestingJson is None:
        return False
    return pg93()


def pg93():
    conn = test_db.get_conn()
    return conn.server_version >= 90300


@skip_if(lambda: not json_ok())
class TestJsonField(BaseJsonFieldTestCase, ModelTestCase):
    ModelClass = TestingJson
    requires = [TestingJson, NormalModel, TestingJsonNull]

    def test_json_null(self):
        tjn = TestingJsonNull.create(data=None)
        tj = TestingJsonNull.create(data={'k1': 'v1'})

        results = TestingJsonNull.select().order_by(TestingJsonNull.id)
        self.assertEqual(
            [tj_db.data for tj_db in results],
            [None, {'k1': 'v1'}])

        query = TestingJsonNull.select().where(
            TestingJsonNull.data.is_null(True))
        self.assertEqual(query.get(), tjn)


def jsonb_ok():
    if BJson is None:
        return False
    conn = test_db.get_conn()
    return conn.server_version >= 90400


@skip_if(lambda: not jsonb_ok())
class TestBinaryJsonField(BaseJsonFieldTestCase, ModelTestCase):
    ModelClass = BJson
    requires = [BJson, NormalModel]

    def _create_test_data(self):
        data = [
            {'k1': 'v1', 'k2': 'v2', 'k3': {'k4': ['i1', 'i2'], 'k5': {}}},
            ['a1', 'a2', {'a3': 'a4'}],
            {'a1': 'x1', 'a2': 'x2', 'k4': ['i1', 'i2']},
            list(range(10)),
            list(range(5, 15)),
            ['k4', 'k1']]

        self._bjson_objects = []
        for json_value in data:
            self._bjson_objects.append(BJson.create(data=json_value))

    def assertObjects(self, expr, *indexes):
        query = (BJson
                 .select()
                 .where(expr)
                 .order_by(BJson.id))
        self.assertEqual(
            [bjson.data for bjson in query],
            [self._bjson_objects[index].data for index in indexes])

    def test_contained_by(self):
        self._create_test_data()

        item1 = ['a1', 'a2', {'a3': 'a4'}, 'a5']
        self.assertObjects(BJson.data.contained_by(item1), 1)

        item2 = {'a1': 'x1', 'a2': 'x2', 'k4': ['i0', 'i1', 'i2'], 'x': 'y'}
        self.assertObjects(BJson.data.contained_by(item2), 2)

    def test_equality(self):
        data = {'k1': ['a1', 'a2'], 'k2': {'k3': 'v3'}}
        j = BJson.create(data=data)
        j_db = BJson.get(BJson.data == data)
        self.assertEqual(j.id, j_db.id)

    def test_subscript_contains(self):
        self._create_test_data()

        # 'k3' is mapped to another dictionary {'k4': [...]}. Therefore,
        # 'k3' is said to contain 'k4', but *not* ['k4'] or ['k4', 'k5'].
        self.assertObjects(BJson.data['k3'].contains('k4'), 0)
        self.assertObjects(BJson.data['k3'].contains(['k4']))
        self.assertObjects(BJson.data['k3'].contains(['k4', 'k5']))

        # We can check for the keys this way, though.
        self.assertObjects(BJson.data['k3'].contains_all('k4', 'k5'), 0)
        self.assertObjects(BJson.data['k3'].contains_any('k4', 'kx'), 0)

        # However, in test object index=2, 'k4' can be said to contain
        # both 'i1' and ['i1'].
        self.assertObjects(BJson.data['k4'].contains('i1'), 2)
        self.assertObjects(BJson.data['k4'].contains(['i1']), 2)

        # Interestingly, we can also specify the list of contained values
        # out-of-order.
        self.assertObjects(BJson.data['k4'].contains(['i2', 'i1']), 2)

        # We can test whether an object contains another JSON object fragment.
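        # For reference, jsonb @> containment treats a contained array as
        # a subset test (order-insensitive) and a contained object as a
        # partial match on keys and values -- the behavior the assertions
        # below depend on.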
        self.assertObjects(BJson.data['k3'].contains({'k4': ['i1']}), 0)
        self.assertObjects(BJson.data['k3'].contains({'k4': ['i1', 'i2']}), 0)

        # Check multiple levels of nesting / containment.
        self.assertObjects(BJson.data['k3']['k4'].contains('i2'), 0)
        self.assertObjects(BJson.data['k3']['k4'].contains_all('i1', 'i2'), 0)
        self.assertObjects(BJson.data['k3']['k4'].contains_all('i0', 'i2'))
        self.assertObjects(BJson.data['k4'].contains_all('i1', 'i2'), 2)

        # Check array indexes.
        self.assertObjects(BJson.data[2].contains('a3'), 1)
        self.assertObjects(BJson.data[0].contains('a1'), 1)
        self.assertObjects(BJson.data[0].contains('k1'))

    def test_contains(self):
        self._create_test_data()

        # Test for keys. 'k4' is both an object key and an array element.
        self.assertObjects(BJson.data.contains('k4'), 2, 5)
        self.assertObjects(BJson.data.contains('a1'), 1, 2)
        self.assertObjects(BJson.data.contains('k3'), 0)

        # We can test for multiple top-level keys/indexes.
        self.assertObjects(BJson.data.contains_all('a1', 'a2'), 1, 2)

        # If we test for both with .contains(), though, it is treated as
        # an object match.
        self.assertObjects(BJson.data.contains(['a1', 'a2']), 1)

        # Check numbers.
        self.assertObjects(BJson.data.contains([2, 5, 6, 7, 8]), 3)
        self.assertObjects(BJson.data.contains([5, 6, 7, 8, 9]), 3, 4)

        # We can check for partial objects.
        self.assertObjects(BJson.data.contains({'a1': 'x1'}), 2)
        self.assertObjects(BJson.data.contains({'k3': {'k4': []}}), 0)
        self.assertObjects(BJson.data.contains([{'a3': 'a4'}]), 1)

        # Check for simple keys.
        self.assertObjects(BJson.data.contains('a1'), 1, 2)
        self.assertObjects(BJson.data.contains('k3'), 0)

        # Contains any.
        self.assertObjects(BJson.data.contains_any('a1', 'k1'), 0, 1, 2, 5)
        self.assertObjects(
            BJson.data.contains_any('k4', 'xx', 'yy', '2'), 2, 5)
        self.assertObjects(BJson.data.contains_any('i1', 'i2', 'a3'))

        # Contains all.
        self.assertObjects(BJson.data.contains_all('k1', 'k2', 'k3'), 0)
        self.assertObjects(BJson.data.contains_all('k1', 'k2', 'k3', 'k4'))

    def test_integer_index_weirdness(self):
        self._create_test_data()

        def fails():
            with test_db.transaction():
                results = list(BJson.select().where(
                    BJson.data.contains_any(2, 8, 12)))

        self.assertRaises(ProgrammingError, fails)

    def test_selecting(self):
        self._create_test_data()
        query = (BJson
                 .select(BJson.data['k3']['k4'].as_json().alias('k3k4'))
                 .order_by(BJson.id))
        k3k4_data = [obj.k3k4 for obj in query]
        self.assertEqual(k3k4_data, [
            ['i1', 'i2'],
            None,
            None,
            None,
            None,
            None])

        query = (BJson
                 .select(
                     BJson.data[0].as_json(),
                     BJson.data[2].as_json())
                 .order_by(BJson.id)
                 .tuples())
        results = list(query)
        self.assertEqual(results, [
            (None, None),
            ('a1', {'a3': 'a4'}),
            (None, None),
            (0, 2),
            (5, 7),
            ('k4', None),
        ])


@skip_if(lambda: not pg93())
class TestLateralJoin(ModelTestCase):
    requires = [User, Post]

    def setUp(self):
        super(TestLateralJoin, self).setUp()
        for username in ['charlie', 'zaizee', 'huey']:
            user = User.create(username=username)
            for i in range(10):
                Post.create(user=user, content='%s-%s' % (username, i))

    def test_lateral_join(self):
        user_query = (User
                      .select(User.id, User.username)
                      .order_by(User.username)
                      .alias('uq'))

        PostAlias = Post.alias()
        post_query = (PostAlias
                      .select(PostAlias.content, PostAlias.timestamp)
                      .where(PostAlias.user == user_query.c.id)
                      .order_by(PostAlias.timestamp.desc())
                      .limit(3)
                      .alias('pq'))

        # Now we join the outer and inner queries using the LEFT LATERAL
        # JOIN. The join predicate is *ON TRUE*, since we're effectively
        # joining in the post subquery's WHERE clause.
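        # Roughly the SQL shape being assembled here (a sketch, not the
        # exact compiled output):
        #
        #   SELECT * FROM (SELECT ... FROM users ...) AS uq
        #   LEFT JOIN LATERAL (SELECT ... FROM post
        #                      WHERE (user_id = uq.id)
        #                      ORDER BY timestamp DESC LIMIT 3) AS pq
        #   ON true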
join_clause = LateralJoin(user_query, post_query) # Finally, we'll wrap these up and SELECT from the result. query = (Post .select(SQL('*')) .from_(join_clause)) self.assertEqual([post.content for post in query], [ 'charlie-9', 'charlie-8', 'charlie-7', 'huey-9', 'huey-8', 'huey-7', 'zaizee-9', 'zaizee-8', 'zaizee-7']) class TestIndexedField(PeeweeTestCase): def test_indexed_field_ddl(self): compiler = test_db.compiler() create_sql, _ = compiler.create_table(TestIndexModel) self.assertEqual(create_sql, ( 'CREATE TABLE "testindexmodel" (' '"id" SERIAL NOT NULL PRIMARY KEY, ' '"array_index" VARCHAR(255)[] NOT NULL, ' '"array_noindex" INTEGER[] NOT NULL, ' '"fake_index" VARCHAR(255) NOT NULL, ' '"fake_index_with_type" VARCHAR(255) NOT NULL, ' '"fake_noindex" VARCHAR(255) NOT NULL)')) all_sql = TestIndexModel.sqlall() tbl, array_idx, fake_idx, fake_idx_type = all_sql self.assertEqual(tbl, create_sql) self.assertEqual(array_idx, ( 'CREATE INDEX "testindexmodel_array_index" ON "testindexmodel" ' 'USING GIN ("array_index")')) self.assertEqual(fake_idx, ( 'CREATE INDEX "testindexmodel_fake_index" ON "testindexmodel" ' 'USING GiST ("fake_index")')) self.assertEqual(fake_idx_type, ( 'CREATE INDEX "testindexmodel_fake_index_with_type" ' 'ON "testindexmodel" ' 'USING MAGIC ("fake_index_with_type")')) class TestIntervalField(ModelTestCase): requires = [Event] def test_interval_field(self): e1 = Event.create(name='hour', duration=datetime.timedelta(hours=1)) e2 = Event.create(name='mix', duration=datetime.timedelta( days=1, hours=2, minutes=3, seconds=4)) events = [(e.name, e.duration) for e in Event.select().order_by(Event.duration)] self.assertEqual(events, [ ('hour', datetime.timedelta(hours=1)), ('mix', datetime.timedelta(days=1, hours=2, minutes=3, seconds=4)) ]) if __name__ == '__main__': import unittest unittest.main(argv=sys.argv) peewee-2.10.2/playhouse/tests/test_pwiz.py000066400000000000000000000123261316645060400206340ustar00rootroot00000000000000import datetime import os try: from StringIO import StringIO except ImportError: from io import StringIO import textwrap import sys from peewee import * from pwiz import * from playhouse.tests.base import database_initializer from playhouse.tests.base import mock from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if db = database_initializer.get_database('sqlite') class BaseModel(Model): class Meta: database = db class User(BaseModel): username = CharField(primary_key=True) id = IntegerField(default=0) class Note(BaseModel): user = ForeignKeyField(User) text = TextField(index=True) data = IntegerField(default=0) misc = IntegerField(default=0) class Meta: indexes = ( (('user', 'text'), True), (('user', 'data', 'misc'), False), ) class Category(BaseModel): name = CharField(unique=True) parent = ForeignKeyField('self', null=True) class OddColumnNames(BaseModel): spaces = CharField(db_column='s p aces') symbols = CharField(db_column='w/-nug!') class capture_output(object): def __enter__(self): self._stdout = sys.stdout sys.stdout = self._buffer = StringIO() return self def __exit__(self, *args): self.data = self._buffer.getvalue() sys.stdout = self._stdout EXPECTED = """ from peewee import * database = SqliteDatabase('/tmp/peewee_test.db', **{}) class UnknownField(object): def __init__(self, *_, **__): pass class BaseModel(Model): class Meta: database = database class Category(BaseModel): name = CharField(unique=True) parent = ForeignKeyField(db_column='parent_id', null=True, rel_model='self', to_field='id') class 
Meta: db_table = 'category' class User(BaseModel): id = IntegerField() username = CharField(primary_key=True) class Meta: db_table = 'user' class Note(BaseModel): data = IntegerField() misc = IntegerField() text = TextField(index=True) user = ForeignKeyField(db_column='user_id', rel_model=User, to_field='username') class Meta: db_table = 'note' indexes = ( (('user', 'data', 'misc'), False), (('user', 'text'), True), ) """.strip() EXPECTED_ORDERED = """ from peewee import * database = SqliteDatabase('/tmp/peewee_test.db', **{}) class UnknownField(object): def __init__(self, *_, **__): pass class BaseModel(Model): class Meta: database = database class User(BaseModel): username = CharField(primary_key=True) id = IntegerField() class Meta: db_table = 'user' class Note(BaseModel): user = ForeignKeyField(db_column='user_id', rel_model=User, to_field='username') text = TextField(index=True) data = IntegerField() misc = IntegerField() class Meta: db_table = 'note' indexes = ( (('user', 'data', 'misc'), False), (('user', 'text'), True), ) """.strip() class BasePwizTestCase(PeeweeTestCase): models = [] def setUp(self): super(BasePwizTestCase, self).setUp() if os.path.exists(db.database): os.unlink(db.database) db.connect() db.create_tables(self.models) self.introspector = Introspector.from_database(db) def tearDown(self): super(BasePwizTestCase, self).tearDown() db.drop_tables(self.models) db.close() class TestPwiz(BasePwizTestCase): models = [User, Note, Category] def test_print_models(self): with capture_output() as output: print_models(self.introspector) self.assertEqual(output.data.strip(), EXPECTED) def test_print_header(self): cmdline = '-i -e sqlite %s' % db.database with capture_output() as output: with mock.patch('pwiz.datetime.datetime') as mock_datetime: now = mock_datetime.now.return_value now.strftime.return_value = 'February 03, 2015 15:30PM' print_header(cmdline, self.introspector) self.assertEqual(output.data.strip(), ( '# Code generated by:\n' '# python -m pwiz %s\n' '# Date: February 03, 2015 15:30PM\n' '# Database: %s\n' '# Peewee version: %s') % (cmdline, db.database, peewee_version)) @skip_if(lambda: sys.version_info[:2] < (2, 7)) class TestPwizOrdered(BasePwizTestCase): models = [User, Note] def test_ordered_columns(self): with capture_output() as output: print_models(self.introspector, preserve_order=True) self.assertEqual(output.data.strip(), EXPECTED_ORDERED) class TestPwizInvalidColumns(BasePwizTestCase): models = [OddColumnNames] def test_invalid_columns(self): with capture_output() as output: print_models(self.introspector) result = output.data.strip() expected = textwrap.dedent(""" class Oddcolumnnames(BaseModel): s_p_aces = CharField(db_column='s p aces') w_nug_ = CharField(db_column='w/-nug!') class Meta: db_table = 'oddcolumnnames'""").strip() actual = result[-len(expected):] self.assertEqual(actual, expected) peewee-2.10.2/playhouse/tests/test_pysqlite_ext.py000066400000000000000000000061501316645060400223730ustar00rootroot00000000000000from peewee import * from playhouse.pysqlite_ext import Database from playhouse.tests.base import ModelTestCase db = Database(':memory:') class User(Model): username = CharField() class Meta: database = db class TestPysqliteDatabase(ModelTestCase): requires = [ User, ] def tearDown(self): super(TestPysqliteDatabase, self).tearDown() db.on_commit(None) db.on_rollback(None) db.on_update(None) def test_commit_hook(self): state = {} @db.on_commit def on_commit(): state.setdefault('commits', 0) state['commits'] += 1 user = 
User.create(username='u1') self.assertEqual(state['commits'], 1) user.username = 'u1-e' user.save() self.assertEqual(state['commits'], 2) with db.atomic(): User.create(username='u2') User.create(username='u3') User.create(username='u4') self.assertEqual(state['commits'], 2) self.assertEqual(state['commits'], 3) with db.atomic() as txn: User.create(username='u5') txn.rollback() self.assertEqual(state['commits'], 3) self.assertEqual(User.select().count(), 4) def test_rollback_hook(self): state = {} @db.on_rollback def on_rollback(): state.setdefault('rollbacks', 0) state['rollbacks'] += 1 user = User.create(username='u1') self.assertEqual(state, {'rollbacks': 1}) with db.atomic() as txn: User.create(username='u2') txn.rollback() self.assertEqual(state['rollbacks'], 2) self.assertEqual(state['rollbacks'], 2) def test_update_hook(self): state = [] @db.on_update def on_update(query, db, table, rowid): state.append((query, db, table, rowid)) u = User.create(username='u1') u.username = 'u2' u.save() self.assertEqual(state, [ ('INSERT', 'main', 'user', 1), ('UPDATE', 'main', 'user', 1), ]) with db.atomic(): User.create(username='u3') User.create(username='u4') u.delete_instance() self.assertEqual(state, [ ('INSERT', 'main', 'user', 1), ('UPDATE', 'main', 'user', 1), ('INSERT', 'main', 'user', 2), ('INSERT', 'main', 'user', 3), ('DELETE', 'main', 'user', 1), ]) self.assertEqual(len(state), 5) def test_udf(self): @db.func() def backwards(s): return s[::-1] @db.func() def titled(s): return s.title() query = db.execute_sql('SELECT titled(backwards(?));', ('hello',)) result, = query.fetchone() self.assertEqual(result, 'Olleh') def test_properties(self): mem_used, mem_high = db.memory_used self.assertTrue(mem_high >= mem_used) self.assertFalse(mem_high == 0) conn = db.connection self.assertTrue(conn.cache_used is not None) peewee-2.10.2/playhouse/tests/test_queries.py000066400000000000000000002360741316645060400213300ustar00rootroot00000000000000from peewee import DeleteQuery from peewee import InsertQuery from peewee import prefetch_add_subquery from peewee import RawQuery from peewee import strip_parens from peewee import SelectQuery from peewee import UpdateQuery from playhouse.tests.base import compiler from playhouse.tests.base import ModelTestCase from playhouse.tests.base import normal_compiler from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if from playhouse.tests.base import test_db from playhouse.tests.base import TestDatabase from playhouse.tests.base import TestModel from playhouse.tests.models import * class TestSelectQuery(PeeweeTestCase): def test_selection(self): sq = SelectQuery(User) self.assertSelect(sq, '"users"."id", "users"."username"', []) sq = SelectQuery(Blog, Blog.pk, Blog.title, Blog.user, User.username).join(User) self.assertSelect(sq, '"blog"."pk", "blog"."title", "blog"."user_id", "users"."username"', []) sq = SelectQuery(User, fn.Lower(fn.Substr(User.username, 0, 1)).alias('lu'), fn.Count(Blog.pk)).join(Blog) self.assertSelect(sq, 'Lower(Substr("users"."username", ?, ?)) AS lu, Count("blog"."pk")', [0, 1]) sq = SelectQuery(User, User.username, fn.Count(Blog.select().where(Blog.user == User.id))) self.assertSelect(sq, '"users"."username", Count(SELECT "blog"."pk" FROM "blog" AS blog WHERE ("blog"."user_id" = "users"."id"))', []) sq = SelectQuery(Package, Package, fn.Count(PackageItem.id)).join(PackageItem) self.assertSelect(sq, '"package"."id", "package"."barcode", Count("packageitem"."id")', []) def test_select_distinct(self): sq = 
SelectQuery(User).distinct() self.assertEqual( compiler.generate_select(sq), ('SELECT DISTINCT "users"."id", "users"."username" ' 'FROM "users" AS users', [])) sq = sq.distinct(False) self.assertEqual( compiler.generate_select(sq), ('SELECT "users"."id", "users"."username" FROM "users" AS users', [])) sq = SelectQuery(User).distinct([User.username]) self.assertEqual( compiler.generate_select(sq), ('SELECT DISTINCT ON ("users"."username") "users"."id", ' '"users"."username" ' 'FROM "users" AS users', [])) sq = SelectQuery(Blog).distinct([Blog.user, Blog.title]) self.assertEqual( compiler.generate_select(sq), ('SELECT DISTINCT ON ("blog"."user_id", "blog"."title") ' '"blog"."pk", "blog"."user_id", "blog"."title", "blog"."content",' ' "blog"."pub_date" ' 'FROM "blog" AS blog', [])) sq = SelectQuery(Blog, Blog.user, Blog.title).distinct( [Blog.user, Blog.title]) self.assertEqual( compiler.generate_select(sq), ('SELECT DISTINCT ON ("blog"."user_id", "blog"."title") ' '"blog"."user_id", "blog"."title" ' 'FROM "blog" AS blog', [])) def test_reselect(self): sq = SelectQuery(User, User.username) self.assertSelect(sq, '"users"."username"', []) sq2 = sq.select() self.assertSelect(sq2, '"users"."id", "users"."username"', []) self.assertTrue(id(sq) != id(sq2)) sq3 = sq2.select(User.id) self.assertSelect(sq3, '"users"."id"', []) self.assertTrue(id(sq2) != id(sq3)) def test_select_subquery(self): subquery = SelectQuery(Child, fn.Count(Child.id)).where(Child.parent == Parent.id).group_by(Child.parent) sq = SelectQuery(Parent, Parent, subquery.alias('count')) sql = compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "parent"."id", "parent"."data", ' + \ '(SELECT Count("child"."id") FROM "child" AS child ' + \ 'WHERE ("child"."parent_id" = "parent"."id") GROUP BY "child"."parent_id") ' + \ 'AS count FROM "parent" AS parent', [] )) def test_select_subquery_ordering(self): sq = Comment.select().join(Blog).where(Blog.pk == 1) sq1 = Comment.select().where( (Comment.id << sq) | (Comment.comment == '*') ) sq2 = Comment.select().where( (Comment.comment == '*') | (Comment.id << sq) ) sql1, params1 = normal_compiler.generate_select(sq1) self.assertEqual(sql1, ( 'SELECT "t1"."id", "t1"."blog_id", "t1"."comment" FROM "comment" AS t1 ' 'WHERE (("t1"."id" IN (' 'SELECT "t2"."id" FROM "comment" AS t2 ' 'INNER JOIN "blog" AS t3 ON ("t2"."blog_id" = "t3"."pk") ' 'WHERE ("t3"."pk" = ?))) OR ("t1"."comment" = ?))')) self.assertEqual(params1, [1, '*']) sql2, params2 = normal_compiler.generate_select(sq2) self.assertEqual(sql2, ( 'SELECT "t1"."id", "t1"."blog_id", "t1"."comment" FROM "comment" AS t1 ' 'WHERE (("t1"."comment" = ?) OR ("t1"."id" IN (' 'SELECT "t2"."id" FROM "comment" AS t2 ' 'INNER JOIN "blog" AS t3 ON ("t2"."blog_id" = "t3"."pk") ' 'WHERE ("t3"."pk" = ?))))')) self.assertEqual(params2, ['*', 1]) def test_multiple_subquery(self): sq2 = Comment.select().where(Comment.comment == '2').join(Blog) sq1 = Comment.select().where( (Comment.comment == '1') & (Comment.id << sq2) ).join(Blog) sq = Comment.select().where( Comment.id << sq1 ) sql, params = normal_compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."blog_id", "t1"."comment" ' 'FROM "comment" AS t1 ' 'WHERE ("t1"."id" IN (' 'SELECT "t2"."id" FROM "comment" AS t2 ' 'INNER JOIN "blog" AS t3 ON ("t2"."blog_id" = "t3"."pk") ' 'WHERE (("t2"."comment" = ?) 
AND ("t2"."id" IN (' 'SELECT "t4"."id" FROM "comment" AS t4 ' 'INNER JOIN "blog" AS t5 ON ("t4"."blog_id" = "t5"."pk") ' 'WHERE ("t4"."comment" = ?)' ')))))')) self.assertEqual(params, ['1', '2']) def test_composite_subselect(self): class Person(Model): first = CharField() last = CharField() class Meta: primary_key = CompositeKey('first', 'last') sql, params = compiler.generate_select(Person.select()) self.assertEqual(sql, ('SELECT "person"."first", "person"."last" ' 'FROM "person" AS person')) self.assertEqual(params, []) cond = Person.select() == ('huey', 'cat') class Note(Model): pass query = Note.select().where(cond) sql, params = compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "note"."id" FROM "note" AS note ' 'WHERE ((' 'SELECT "person"."first", "person"."last" ' 'FROM "person" AS person) = (?, ?))')) self.assertEqual(params, ['huey', 'cat']) def test_select_cloning(self): ct = fn.Count(Blog.pk) sq = SelectQuery(User, User, User.id.alias('extra_id'), ct.alias('blog_ct')).join( Blog, JOIN.LEFT_OUTER).group_by(User).order_by(ct.desc()) sql = compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "users"."id", "users"."username", "users"."id" AS extra_id, Count("blog"."pk") AS blog_ct ' + \ 'FROM "users" AS users LEFT OUTER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id") ' + \ 'GROUP BY "users"."id", "users"."username" ' + \ 'ORDER BY Count("blog"."pk") DESC', [] )) self.assertEqual(User.id._alias, None) def test_joins(self): sq = SelectQuery(User).join(Blog) self.assertJoins(sq, ['INNER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id")']) sq = SelectQuery(Blog).join(User, JOIN.LEFT_OUTER) self.assertJoins(sq, ['LEFT OUTER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")']) sq = SelectQuery(User).join(Relationship) self.assertJoins(sq, ['INNER JOIN "relationship" AS relationship ON ("users"."id" = "relationship"."from_user_id")']) sq = SelectQuery(User).join(Relationship, on=Relationship.to_user) self.assertJoins(sq, ['INNER JOIN "relationship" AS relationship ON ("users"."id" = "relationship"."to_user_id")']) sq = SelectQuery(User).join(Relationship, JOIN.LEFT_OUTER, Relationship.to_user) self.assertJoins(sq, ['LEFT OUTER JOIN "relationship" AS relationship ON ("users"."id" = "relationship"."to_user_id")']) sq = SelectQuery(Package).join(PackageItem) self.assertJoins(sq, ['INNER JOIN "packageitem" AS packageitem ON ("package"."barcode" = "packageitem"."package_id")']) sq = SelectQuery(PackageItem).join(Package) self.assertJoins(sq, ['INNER JOIN "package" AS package ON ("packageitem"."package_id" = "package"."barcode")']) sq = (SelectQuery(TestModelA) .join(TestModelB, on=(TestModelA.data == TestModelB.data)) .join(TestModelC, on=(TestModelC.field == TestModelB.field))) self.assertJoins(sq, [ 'INNER JOIN "testmodelb" AS testmodelb ON ("testmodela"."data" = "testmodelb"."data")', 'INNER JOIN "testmodelc" AS testmodelc ON ("testmodelc"."field" = "testmodelb"."field")', ]) inner = SelectQuery(User).alias('j1') sq = SelectQuery(Blog).join(inner, on=(Blog.user == inner.c.id)) join = ('INNER JOIN (' 'SELECT "users"."id" FROM "users" AS users) AS j1 ' 'ON ("blog"."user_id" = "j1"."id")') self.assertJoins(sq, [join]) inner_2 = SelectQuery(Comment).alias('j2') sq = sq.join(inner_2, on=(Blog.pk == inner_2.c.blog_id)) join_2 = ('INNER JOIN (' 'SELECT "comment"."id" FROM "comment" AS comment) AS j2 ' 'ON ("blog"."pk" = "j2"."blog_id")') self.assertJoins(sq, [join, join_2]) sq = sq.join(Comment) self.assertJoins(sq, [ join, join_2, 'INNER JOIN 
"comment" AS comment ON ("blog"."pk" = "comment"."blog_id")']) def test_join_self_referential(self): sq = SelectQuery(Category).join(Category) self.assertJoins(sq, ['INNER JOIN "category" AS category ON ("category"."parent_id" = "category"."id")']) def test_join_self_referential_alias(self): Parent = Category.alias() sq = SelectQuery(Category, Category, Parent).join(Parent, on=(Category.parent == Parent.id)).where( Parent.name == 'parent name' ).order_by(Parent.name) self.assertSelect(sq, '"t1"."id", "t1"."parent_id", "t1"."name", "t2"."id", "t2"."parent_id", "t2"."name"', [], normal_compiler) self.assertJoins(sq, [ 'INNER JOIN "category" AS t2 ON ("t1"."parent_id" = "t2"."id")', ], normal_compiler) self.assertWhere(sq, '("t2"."name" = ?)', ['parent name'], normal_compiler) self.assertOrderBy(sq, '"t2"."name"', [], normal_compiler) Grandparent = Category.alias() sq = SelectQuery(Category, Category, Parent, Grandparent).join( Parent, on=(Category.parent == Parent.id) ).join( Grandparent, on=(Parent.parent == Grandparent.id) ).where(Grandparent.name == 'g1') self.assertSelect(sq, '"t1"."id", "t1"."parent_id", "t1"."name", "t2"."id", "t2"."parent_id", "t2"."name", "t3"."id", "t3"."parent_id", "t3"."name"', [], normal_compiler) self.assertJoins(sq, [ 'INNER JOIN "category" AS t2 ON ("t1"."parent_id" = "t2"."id")', 'INNER JOIN "category" AS t3 ON ("t2"."parent_id" = "t3"."id")', ], normal_compiler) self.assertWhere(sq, '("t3"."name" = ?)', ['g1'], normal_compiler) def test_join_both_sides(self): sq = SelectQuery(Blog).join(Comment).switch(Blog).join(User) self.assertJoins(sq, [ 'INNER JOIN "comment" AS comment ON ("blog"."pk" = "comment"."blog_id")', 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', ]) sq = SelectQuery(Blog).join(User).switch(Blog).join(Comment) self.assertJoins(sq, [ 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', 'INNER JOIN "comment" AS comment ON ("blog"."pk" = "comment"."blog_id")', ]) def test_join_switching(self): class Artist(TestModel): pass class Track(TestModel): artist = ForeignKeyField(Artist) class Release(TestModel): artist = ForeignKeyField(Artist) class ReleaseTrack(TestModel): track = ForeignKeyField(Track) release = ForeignKeyField(Release) class Genre(TestModel): pass class TrackGenre(TestModel): genre = ForeignKeyField(Genre) track = ForeignKeyField(Track) multiple_first = Track.select().join(ReleaseTrack).join(Release).switch(Track).join(Artist).switch(Track).join(TrackGenre).join(Genre) self.assertSelect(multiple_first, '"track"."id", "track"."artist_id"', []) self.assertJoins(multiple_first, [ 'INNER JOIN "artist" AS artist ON ("track"."artist_id" = "artist"."id")', 'INNER JOIN "genre" AS genre ON ("trackgenre"."genre_id" = "genre"."id")', 'INNER JOIN "release" AS release ON ("releasetrack"."release_id" = "release"."id")', 'INNER JOIN "releasetrack" AS releasetrack ON ("track"."id" = "releasetrack"."track_id")', 'INNER JOIN "trackgenre" AS trackgenre ON ("track"."id" = "trackgenre"."track_id")', ]) single_first = Track.select().join(Artist).switch(Track).join(ReleaseTrack).join(Release).switch(Track).join(TrackGenre).join(Genre) self.assertSelect(single_first, '"track"."id", "track"."artist_id"', []) self.assertJoins(single_first, [ 'INNER JOIN "artist" AS artist ON ("track"."artist_id" = "artist"."id")', 'INNER JOIN "genre" AS genre ON ("trackgenre"."genre_id" = "genre"."id")', 'INNER JOIN "release" AS release ON ("releasetrack"."release_id" = "release"."id")', 'INNER JOIN "releasetrack" AS releasetrack ON 
("track"."id" = "releasetrack"."track_id")', 'INNER JOIN "trackgenre" AS trackgenre ON ("track"."id" = "trackgenre"."track_id")', ]) def test_joining_expr(self): class A(TestModel): uniq_a = CharField(primary_key=True) class B(TestModel): uniq_ab = CharField(primary_key=True) uniq_b = CharField() class C(TestModel): uniq_bc = CharField(primary_key=True) sq = A.select(A, B, C).join( B, on=(A.uniq_a == B.uniq_ab) ).join( C, on=(B.uniq_b == C.uniq_bc) ) self.assertSelect(sq, '"a"."uniq_a", "b"."uniq_ab", "b"."uniq_b", "c"."uniq_bc"', []) self.assertJoins(sq, [ 'INNER JOIN "b" AS b ON ("a"."uniq_a" = "b"."uniq_ab")', 'INNER JOIN "c" AS c ON ("b"."uniq_b" = "c"."uniq_bc")', ]) def test_join_other_node_types(self): cond = fn.Magic(User.id, Blog.user).alias('magic') sq = User.select().join(Blog, on=cond) self.assertJoins(sq, [ 'INNER JOIN "blog" AS blog ON ' 'Magic("users"."id", "blog"."user_id")']) sq = User.select().join(Blog, on=Blog.user.as_entity(True)) self.assertJoins(sq, [ 'INNER JOIN "blog" AS blog ON ' '"blog"."user_id"']) def test_where(self): sq = SelectQuery(User).where(User.id < 5) self.assertWhere(sq, '("users"."id" < ?)', [5]) sq = SelectQuery(Blog).where(Blog.user << sq) self.assertWhere(sq, '("blog"."user_id" IN (SELECT "users"."id" FROM "users" AS users WHERE ("users"."id" < ?)))', [5]) p = SelectQuery(Package).where(Package.id == 2) sq = SelectQuery(PackageItem).where(PackageItem.package << p) self.assertWhere(sq, '("packageitem"."package_id" IN (SELECT "package"."barcode" FROM "package" AS package WHERE ("package"."id" = ?)))', [2]) def test_orwhere(self): sq = SelectQuery(User).orwhere(User.id < 5) self.assertWhere(sq, '("users"."id" < ?)', [5]) sq = sq.orwhere(User.id > 10) self.assertWhere(sq, '(("users"."id" < ?) OR ("users"."id" > ?))', [5, 10]) def test_fix_null(self): sq = SelectQuery(Blog).where(Blog.user == None) self.assertWhere(sq, '("blog"."user_id" IS ?)', [None]) sq = SelectQuery(Blog).where(Blog.user != None) self.assertWhere(sq, '("blog"."user_id" IS NOT ?)', [None]) sq = SelectQuery(Blog).where(~(Blog.user == None)) self.assertWhere(sq, 'NOT ("blog"."user_id" IS ?)', [None]) def test_is_null(self): sq = SelectQuery(Blog).where(Blog.user.is_null()) self.assertWhere(sq, '("blog"."user_id" IS ?)', [None]) sq = SelectQuery(Blog).where(Blog.user.is_null(False)) self.assertWhere(sq, '("blog"."user_id" IS NOT ?)', [None]) sq = SelectQuery(Blog).where(~(Blog.user.is_null())) self.assertWhere(sq, 'NOT ("blog"."user_id" IS ?)', [None]) sq = SelectQuery(Blog).where(~(Blog.user.is_null(False))) self.assertWhere(sq, 'NOT ("blog"."user_id" IS NOT ?)', [None]) def test_where_coercion(self): sq = SelectQuery(User).where(User.id < '5') self.assertWhere(sq, '("users"."id" < ?)', [5]) sq = SelectQuery(User).where(User.id < (User.id - '5')) self.assertWhere(sq, '("users"."id" < ("users"."id" - ?))', [5]) def test_where_lists(self): sq = SelectQuery(User).where(User.username << ['u1', 'u2']) self.assertWhere(sq, '("users"."username" IN (?, ?))', ['u1', 'u2']) sq = SelectQuery(User).where(User.username.in_(('u1', 'u2'))) self.assertWhere(sq, '("users"."username" IN (?, ?))', ['u1', 'u2']) sq = SelectQuery(User).where(User.username.not_in(['u1', 'u2'])) self.assertWhere(sq, '("users"."username" NOT IN (?, ?))', ['u1', 'u2']) sq = SelectQuery(User).where((User.username << ['u1', 'u2']) | (User.username << ['u3', 'u4'])) self.assertWhere(sq, '(("users"."username" IN (?, ?)) OR ("users"."username" IN (?, ?)))', ['u1', 'u2', 'u3', 'u4']) def test_where_in_empty(self): sq = 
SelectQuery(User).where(User.username << []) self.assertWhere(sq, '(0 = 1)', []) sq = SelectQuery(User).where(User.username << ()) self.assertWhere(sq, '(0 = 1)', []) # NOT IN is not affected. sq = SelectQuery(User).where(User.username.not_in([])) self.assertWhere(sq, '("users"."username" NOT IN ())', []) # But ~ (x IN y) is. sq = SelectQuery(User).where(~(User.username << ())) self.assertWhere(sq, 'NOT (0 = 1)', []) def test_where_sets(self): def where_sql(expr, query=None): if query is None: query = User.select() query = query.where(expr) return self.parse_query(query, query._where) sql, params = where_sql(User.username << set(['u1', 'u2'])) self.assertEqual(sql, '("users"."username" IN (?, ?))') self.assertTrue(isinstance(params, list)) self.assertEqual(sorted(params), ['u1', 'u2']) sql, params = where_sql(User.username.in_(set(['u1', 'u2']))) self.assertEqual(sql, '("users"."username" IN (?, ?))') self.assertEqual(sorted(params), ['u1', 'u2']) def test_where_joins(self): sq = SelectQuery(User).where( ((User.id == 1) | (User.id == 2)) & ((Blog.pk == 3) | (Blog.pk == 4)) ).where(User.id == 5).join(Blog) self.assertWhere(sq, '(((("users"."id" = ?) OR ("users"."id" = ?)) AND (("blog"."pk" = ?) OR ("blog"."pk" = ?))) AND ("users"."id" = ?))', [1, 2, 3, 4, 5]) def test_where_join_non_pk_fk(self): sq = (SelectQuery(Package) .join(PackageItem) .where(PackageItem.title == 'p1')) self.assertWhere(sq, '("packageitem"."title" = ?)', ['p1']) sq = (SelectQuery(PackageItem) .join(Package) .where(Package.barcode == 'b1')) self.assertWhere(sq, '("package"."barcode" = ?)', ['b1']) def test_where_functions(self): sq = SelectQuery(User).where(fn.Lower(fn.Substr(User.username, 0, 1)) == 'a') self.assertWhere(sq, '(Lower(Substr("users"."username", ?, ?)) = ?)', [0, 1, 'a']) def test_where_conversion(self): sq = SelectQuery(CSVRow).where(CSVRow.data == Param(['foo', 'bar'])) self.assertWhere(sq, '("csvrow"."data" = ?)', ['foo,bar']) sq = SelectQuery(CSVRow).where( CSVRow.data == fn.FOO(Param(['foo', 'bar']))) self.assertWhere(sq, '("csvrow"."data" = FOO(?))', ['foo,bar']) sq = SelectQuery(CSVRow).where( CSVRow.data == fn.FOO(Param(['foo', 'bar'])).coerce(False)) self.assertWhere(sq, '("csvrow"."data" = FOO(?))', [['foo', 'bar']]) def test_where_clauses(self): sq = SelectQuery(Blog).where( Blog.pub_date < (fn.NOW() - SQL('INTERVAL 1 HOUR'))) self.assertWhere(sq, '("blog"."pub_date" < (NOW() - INTERVAL 1 HOUR))', []) def test_where_r(self): sq = SelectQuery(Blog).where(Blog.pub_date < R('NOW() - INTERVAL 1 HOUR')) self.assertWhere(sq, '("blog"."pub_date" < NOW() - INTERVAL 1 HOUR)', []) sq = SelectQuery(Blog).where(Blog.pub_date < (fn.Now() - R('INTERVAL 1 HOUR'))) self.assertWhere(sq, '("blog"."pub_date" < (Now() - INTERVAL 1 HOUR))', []) def test_where_subqueries(self): sq = SelectQuery(User).where(User.id << User.select().where(User.username=='u1')) self.assertWhere(sq, '("users"."id" IN (SELECT "users"."id" FROM "users" AS users WHERE ("users"."username" = ?)))', ['u1']) sq = SelectQuery(User).where(User.username << User.select(User.username).where(User.username=='u1')) self.assertWhere(sq, '("users"."username" IN (SELECT "users"."username" FROM "users" AS users WHERE ("users"."username" = ?)))', ['u1']) sq = SelectQuery(Blog).where((Blog.pk == 3) | (Blog.user << User.select().where(User.username << ['u1', 'u2']))) self.assertWhere(sq, '(("blog"."pk" = ?) 
OR ("blog"."user_id" IN (SELECT "users"."id" FROM "users" AS users WHERE ("users"."username" IN (?, ?)))))', [3, 'u1', 'u2']) def test_where_fk(self): sq = SelectQuery(Blog).where(Blog.user == User(id=100)) self.assertWhere(sq, '("blog"."user_id" = ?)', [100]) sq = SelectQuery(Blog).where(Blog.user << [User(id=100), User(id=101)]) self.assertWhere(sq, '("blog"."user_id" IN (?, ?))', [100, 101]) sq = SelectQuery(PackageItem).where(PackageItem.package == Package(barcode='b1')) self.assertWhere(sq, '("packageitem"."package_id" = ?)', ['b1']) def test_where_negation(self): sq = SelectQuery(Blog).where(~(Blog.title == 'foo')) self.assertWhere(sq, 'NOT ("blog"."title" = ?)', ['foo']) sq = SelectQuery(Blog).where(~((Blog.title == 'foo') | (Blog.title == 'bar'))) self.assertWhere(sq, 'NOT (("blog"."title" = ?) OR ("blog"."title" = ?))', ['foo', 'bar']) sq = SelectQuery(Blog).where(~((Blog.title == 'foo') & (Blog.title == 'bar')) & (Blog.title == 'baz')) self.assertWhere(sq, '(NOT (("blog"."title" = ?) AND ("blog"."title" = ?)) AND ("blog"."title" = ?))', ['foo', 'bar', 'baz']) sq = SelectQuery(Blog).where(~((Blog.title == 'foo') & (Blog.title == 'bar')) & ((Blog.title == 'baz') & (Blog.title == 'fizz'))) self.assertWhere(sq, '(NOT (("blog"."title" = ?) AND ("blog"."title" = ?)) AND (("blog"."title" = ?) AND ("blog"."title" = ?)))', ['foo', 'bar', 'baz', 'fizz']) def test_where_negation_single_clause(self): sq = SelectQuery(Blog).where(~Blog.title) self.assertWhere(sq, 'NOT "blog"."title"', []) sq = sq.where(Blog.pk > 1) self.assertWhere(sq, '(NOT "blog"."title" AND ("blog"."pk" > ?))', [1]) def test_where_chaining_collapsing(self): sq = SelectQuery(User).where(User.id == 1).where(User.id == 2).where(User.id == 3) self.assertWhere(sq, '((("users"."id" = ?) AND ("users"."id" = ?)) AND ("users"."id" = ?))', [1, 2, 3]) sq = SelectQuery(User).where((User.id == 1) & (User.id == 2)).where(User.id == 3) self.assertWhere(sq, '((("users"."id" = ?) AND ("users"."id" = ?)) AND ("users"."id" = ?))', [1, 2, 3]) sq = SelectQuery(User).where((User.id == 1) | (User.id == 2)).where(User.id == 3) self.assertWhere(sq, '((("users"."id" = ?) OR ("users"."id" = ?)) AND ("users"."id" = ?))', [1, 2, 3]) sq = SelectQuery(User).where(User.id == 1).where((User.id == 2) & (User.id == 3)) self.assertWhere(sq, '(("users"."id" = ?) AND (("users"."id" = ?) AND ("users"."id" = ?)))', [1, 2, 3]) sq = SelectQuery(User).where(User.id == 1).where((User.id == 2) | (User.id == 3)) self.assertWhere(sq, '(("users"."id" = ?) AND (("users"."id" = ?) OR ("users"."id" = ?)))', [1, 2, 3]) sq = SelectQuery(User).where(~(User.id == 1)).where(User.id == 2).where(~(User.id == 3)) self.assertWhere(sq, '((NOT ("users"."id" = ?) AND ("users"."id" = ?)) AND NOT ("users"."id" = ?))', [1, 2, 3]) def test_tuples(self): sq = User.select().where(Tuple(User.id, User.username) == (1, 'hello')) self.assertWhere(sq, '(("users"."id", "users"."username") = (?, ?))', [1, 'hello']) def test_grouping(self): sq = SelectQuery(User).group_by(User.id) self.assertGroupBy(sq, '"users"."id"', []) sq = SelectQuery(User).group_by(User) self.assertGroupBy(sq, '"users"."id", "users"."username"', []) def test_having(self): sq = SelectQuery(User, fn.Count(Blog.pk)).join(Blog).group_by(User).having( fn.Count(Blog.pk) > 2 ) self.assertHaving(sq, '(Count("blog"."pk") > ?)', [2]) sq = SelectQuery(User, fn.Count(Blog.pk)).join(Blog).group_by(User).having( (fn.Count(Blog.pk) > 10) | (fn.Count(Blog.pk) < 2) ) self.assertHaving(sq, '((Count("blog"."pk") > ?) 
OR (Count("blog"."pk") < ?))', [10, 2]) def test_ordering(self): sq = SelectQuery(User).join(Blog).order_by(Blog.title) self.assertOrderBy(sq, '"blog"."title"', []) sq = SelectQuery(User).join(Blog).order_by(Blog.title.asc()) self.assertOrderBy(sq, '"blog"."title" ASC', []) sq = SelectQuery(User).join(Blog).order_by(Blog.title.desc()) self.assertOrderBy(sq, '"blog"."title" DESC', []) sq = SelectQuery(User).join(Blog).order_by(User.username.desc(), Blog.title.asc()) self.assertOrderBy(sq, '"users"."username" DESC, "blog"."title" ASC', []) base_sq = SelectQuery(User, User.username, fn.Count(Blog.pk).alias('count')).join(Blog).group_by(User.username) sq = base_sq.order_by(fn.Count(Blog.pk).desc()) self.assertOrderBy(sq, 'Count("blog"."pk") DESC', []) sq = base_sq.order_by(R('count')) self.assertOrderBy(sq, 'count', []) sq = OrderedModel.select() self.assertOrderBy(sq, '"orderedmodel"."created" DESC', []) sq = OrderedModel.select().order_by(OrderedModel.id.asc()) self.assertOrderBy(sq, '"orderedmodel"."id" ASC', []) sq = User.select().order_by(User.id * 5) self.assertOrderBy(sq, '("users"."id" * ?)', [5]) sql = compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "users"."id", "users"."username" ' 'FROM "users" AS users ORDER BY ("users"."id" * ?)', [5])) def test_ordering_extend(self): sq = User.select().order_by(User.username, extend=True) self.assertEqual([f.name for f in sq._order_by], ['username']) sq = sq.order_by(User.id.desc(), extend=True) self.assertEqual([f.name for f in sq._order_by], ['username', 'id']) sq = sq.order_by(extend=True) self.assertEqual([f.name for f in sq._order_by], ['username', 'id']) sq = sq.order_by() self.assertTrue(sq._order_by is None) sq = sq.order_by(extend=True) self.assertTrue(sq._order_by is None) self.assertRaises(ValueError, lambda: sq.order_by(foo=True)) def test_ordering_sugar(self): sq = User.select().order_by(-User.username) self.assertOrderBy(sq, '"users"."username" DESC', []) sq = User.select().order_by(+User.username) self.assertOrderBy(sq, '"users"."username" ASC', []) sq = User.select().join(Blog).order_by( +User.username, -Blog.title) self.assertOrderBy( sq, '"users"."username" ASC, "blog"."title" DESC', []) def test_from_subquery(self): # e.g. annotate the number of blogs per user, then annotate the number # of users with that number of blogs. 
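        # A sketch of the aggregate-of-aggregates being assembled (the
        # exact SQL is asserted below): the inner query emits one blog_ct
        # per user, and the outer query groups on blog_ct and counts how
        # many users share each value.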
inner = (Blog .select(fn.COUNT(Blog.pk).alias('blog_ct')) .group_by(Blog.user)) blog_ct = SQL('blog_ct') outer = (Blog .select(blog_ct, fn.COUNT(blog_ct).alias('blog_ct_n')) .from_(inner) .group_by(blog_ct)) sql, params = compiler.generate_select(outer) self.assertEqual(sql, ( 'SELECT blog_ct, COUNT(blog_ct) AS blog_ct_n ' 'FROM (' 'SELECT COUNT("blog"."pk") AS blog_ct FROM "blog" AS blog ' 'GROUP BY "blog"."user_id") ' 'GROUP BY blog_ct')) def test_from_multiple(self): q = (User .select() .from_(User, Blog) .where(Blog.user == User.id)) sql, params = compiler.generate_select(q) self.assertEqual(sql, ( 'SELECT "users"."id", "users"."username" ' 'FROM "users" AS users, "blog" AS blog ' 'WHERE ("blog"."user_id" = "users"."id")')) q = (User .select() .from_(User, Blog, Comment) .where( (Blog.user == User.id) & (Comment.blog == Blog.pk))) sql, params = compiler.generate_select(q) self.assertEqual(sql, ( 'SELECT "users"."id", "users"."username" ' 'FROM "users" AS users, "blog" AS blog, "comment" AS comment ' 'WHERE (("blog"."user_id" = "users"."id") AND ' '("comment"."blog_id" = "blog"."pk"))')) def test_paginate(self): sq = SelectQuery(User).paginate(1, 20) self.assertEqual(sq._limit, 20) self.assertEqual(sq._offset, 0) sq = SelectQuery(User).paginate(3, 30) self.assertEqual(sq._limit, 30) self.assertEqual(sq._offset, 60) def test_limit(self): orig = User._meta.database.limit_max User._meta.database.limit_max = -1 try: sq = SelectQuery(User, User.id).limit(10).offset(5) sql, params = compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "users"."id" FROM "users" AS users LIMIT 10 OFFSET 5')) sq = SelectQuery(User, User.id).offset(5) sql, params = compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "users"."id" FROM "users" AS users LIMIT -1 OFFSET 5')) sq = SelectQuery(User, User.id).limit(0).offset(0) sql, params = compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "users"."id" FROM "users" AS users LIMIT 0 OFFSET 0')) finally: User._meta.database.limit_max = orig def test_prefetch_subquery(self): sq = SelectQuery(User).where(User.username == 'foo') sq2 = SelectQuery(Blog).where(Blog.title == 'bar') sq3 = SelectQuery(Comment).where(Comment.comment == 'baz') fixed = prefetch_add_subquery(sq, (sq2, sq3)) fixed_sql = [ ('SELECT "t1"."id", "t1"."username" FROM "users" AS t1 WHERE ("t1"."username" = ?)', ['foo']), ('SELECT "t1"."pk", "t1"."user_id", "t1"."title", "t1"."content", "t1"."pub_date" FROM "blog" AS t1 WHERE (("t1"."title" = ?) AND ("t1"."user_id" IN (SELECT "t2"."id" FROM "users" AS t2 WHERE ("t2"."username" = ?))))', ['bar', 'foo']), ('SELECT "t1"."id", "t1"."blog_id", "t1"."comment" FROM "comment" AS t1 WHERE (("t1"."comment" = ?) AND ("t1"."blog_id" IN (SELECT "t2"."pk" FROM "blog" AS t2 WHERE (("t2"."title" = ?) 
AND ("t2"."user_id" IN (SELECT "t3"."id" FROM "users" AS t3 WHERE ("t3"."username" = ?)))))))', ['baz', 'bar', 'foo']), ] for prefetch_result, expected in zip(fixed, fixed_sql): self.assertEqual( normal_compiler.generate_select(prefetch_result.query), expected) fixed = prefetch_add_subquery(sq, (Blog,)) fixed_sql = [ ('SELECT "t1"."id", "t1"."username" FROM "users" AS t1 WHERE ("t1"."username" = ?)', ['foo']), ('SELECT "t1"."pk", "t1"."user_id", "t1"."title", "t1"."content", "t1"."pub_date" FROM "blog" AS t1 WHERE ("t1"."user_id" IN (SELECT "t2"."id" FROM "users" AS t2 WHERE ("t2"."username" = ?)))', ['foo']), ] for prefetch_result, expected in zip(fixed, fixed_sql): self.assertEqual( normal_compiler.generate_select(prefetch_result.query), expected) def test_prefetch_non_pk_fk(self): sq = SelectQuery(Package).where(Package.barcode % 'b%') sq2 = SelectQuery(PackageItem).where(PackageItem.title % 'n%') fixed = prefetch_add_subquery(sq, (sq2,)) fixed_sq = ( 'SELECT "t1"."id", "t1"."barcode" FROM "package" AS t1 ' 'WHERE ("t1"."barcode" LIKE ?)', ['b%']) fixed_sq2 = ( 'SELECT "t1"."id", "t1"."title", "t1"."package_id" ' 'FROM "packageitem" AS t1 ' 'WHERE (' '("t1"."title" LIKE ?) AND ' '("t1"."package_id" IN (' 'SELECT "t2"."barcode" FROM "package" AS t2 ' 'WHERE ("t2"."barcode" LIKE ?))))', ['n%', 'b%']) fixed_sql = [fixed_sq, fixed_sq2] for prefetch_result, expected in zip(fixed, fixed_sql): self.assertEqual( normal_compiler.generate_select(prefetch_result.query), expected) def test_prefetch_subquery_same_depth(self): sq = Parent.select() sq2 = Child.select() sq3 = Orphan.select() sq4 = ChildPet.select() sq5 = OrphanPet.select() fixed = prefetch_add_subquery(sq, (sq2, sq3, sq4, sq5)) fixed_sql = [ ('SELECT "t1"."id", "t1"."data" FROM "parent" AS t1', []), ('SELECT "t1"."id", "t1"."parent_id", "t1"."data" FROM "child" AS t1 WHERE ("t1"."parent_id" IN (SELECT "t2"."id" FROM "parent" AS t2))', []), ('SELECT "t1"."id", "t1"."parent_id", "t1"."data" FROM "orphan" AS t1 WHERE ("t1"."parent_id" IN (SELECT "t2"."id" FROM "parent" AS t2))', []), ('SELECT "t1"."id", "t1"."child_id", "t1"."data" FROM "childpet" AS t1 WHERE ("t1"."child_id" IN (SELECT "t2"."id" FROM "child" AS t2 WHERE ("t2"."parent_id" IN (SELECT "t3"."id" FROM "parent" AS t3))))', []), ('SELECT "t1"."id", "t1"."orphan_id", "t1"."data" FROM "orphanpet" AS t1 WHERE ("t1"."orphan_id" IN (SELECT "t2"."id" FROM "orphan" AS t2 WHERE ("t2"."parent_id" IN (SELECT "t3"."id" FROM "parent" AS t3))))', []), ] for prefetch_result, expected in zip(fixed, fixed_sql): self.assertEqual( normal_compiler.generate_select(prefetch_result.query), expected) def test_outer_inner_alias(self): expected = ('SELECT "t1"."id", "t1"."username", ' '(SELECT Sum("t2"."id") FROM "users" AS t2 ' 'WHERE ("t2"."id" = "t1"."id")) AS xxx FROM "users" AS t1') UA = User.alias() inner = SelectQuery(UA, fn.Sum(UA.id)).where(UA.id == User.id) query = User.select(User, inner.alias('xxx')) sql, _ = normal_compiler.generate_select(query) self.assertEqual(sql, expected) # Ensure that ModelAlias.select() does the right thing. 
inner = UA.select(fn.Sum(UA.id)).where(UA.id == User.id) query = User.select(User, inner.alias('xxx')) sql, _ = normal_compiler.generate_select(query) self.assertEqual(sql, expected) def test_parentheses_cleaning(self): query = (User .select( User.username, fn.Count( Blog .select(Blog.pk) .where(Blog.user == User.id)).alias('blog_ct'))) sql, params = normal_compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "t1"."username", ' 'Count(' 'SELECT "t2"."pk" FROM "blog" AS t2 ' 'WHERE ("t2"."user_id" = "t1"."id")) AS blog_ct FROM "users" AS t1')) query = (User .select(User.username) .where(fn.Exists(fn.Exists(User.select(User.id))))) sql, params = normal_compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "t1"."username" FROM "users" AS t1 ' 'WHERE Exists(Exists(' 'SELECT "t2"."id" FROM "users" AS t2))')) def test_division(self): query = User.select(User.id / 2) self.assertSelect(query, '("users"."id" / ?)', [2]) def test_select_from_alias(self): UA = User.alias() query = UA.select().where(UA.username == 'charlie') sql, params = normal_compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."username" ' 'FROM "users" AS t1 ' 'WHERE ("t1"."username" = ?)')) self.assertEqual(params, ['charlie']) q2 = query.join(User, on=(User.id == UA.id)).where(User.id == 2) sql, params = normal_compiler.generate_select(q2) self.assertEqual(sql, ( 'SELECT "t1"."id", "t1"."username" ' 'FROM "users" AS t1 ' 'INNER JOIN "users" AS t2 ' 'ON ("t2"."id" = "t1"."id") ' 'WHERE (("t1"."username" = ?) AND ("t2"."id" = ?))')) self.assertEqual(params, ['charlie', 2]) class TestUpdateQuery(PeeweeTestCase): def setUp(self): super(TestUpdateQuery, self).setUp() self._orig_returning_clause = test_db.returning_clause def tearDown(self): super(TestUpdateQuery, self).tearDown() test_db.returning_clause = self._orig_returning_clause def test_update(self): uq = UpdateQuery(User, {User.username: 'updated'}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "username" = ?', ['updated'])) uq = UpdateQuery(Blog, {Blog.user: User(id=100, username='foo')}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "blog" SET "user_id" = ?', [100])) uq = UpdateQuery(User, {User.id: User.id + 5}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "id" = ("users"."id" + ?)', [5])) uq = UpdateQuery(User, {User.id: 5 * (3 + User.id)}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "id" = (? * (? 
+ "users"."id"))', [5, 3])) # set username to the maximum id of all users -- silly, yes, but lets see what happens uq = UpdateQuery(User, {User.username: User.select(fn.Max(User.id).alias('maxid'))}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "username" = (SELECT Max("users"."id") AS maxid ' 'FROM "users" AS users)', [])) uq = UpdateQuery(Blog, {Blog.title: 'foo', Blog.content: 'bar'}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "blog" SET "title" = ?, "content" = ?', ['foo', 'bar'])) pub_date = datetime.datetime(2014, 1, 2, 3, 4) uq = UpdateQuery(Blog, { Blog.title: 'foo', Blog.pub_date: pub_date, Blog.user: User(id=15), Blog.content: 'bar'}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "blog" SET ' '"user_id" = ?, "title" = ?, "content" = ?, "pub_date" = ?', [15, 'foo', 'bar', pub_date])) def test_via_model(self): uq = User.update(username='updated') self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "username" = ?', ['updated'])) uq = User.update({User.username: 'updated'}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "username" = ?', ['updated'])) uq = Blog.update({Blog.user: User(id=100)}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "blog" SET "user_id" = ?', [100])) uq = User.update({User.id: User.id + 5}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "users" SET "id" = ("users"."id" + ?)', [5])) def test_on_conflict(self): uq = UpdateQuery(User, { User.username: 'charlie'}).on_conflict('IGNORE') self.assertEqual(compiler.generate_update(uq), ( 'UPDATE OR IGNORE "users" SET "username" = ?', ['charlie'])) def test_update_special(self): uq = UpdateQuery(CSVRow, {CSVRow.data: ['foo', 'bar', 'baz']}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "csvrow" SET "data" = ?', ['foo,bar,baz'])) uq = UpdateQuery(CSVRow, {CSVRow.data: []}) self.assertEqual(compiler.generate_update(uq), ( 'UPDATE "csvrow" SET "data" = ?', [''])) def test_where(self): uq = UpdateQuery(User, {User.username: 'updated'}).where(User.id == 2) self.assertWhere(uq, '("users"."id" = ?)', [2]) uq = (UpdateQuery(User, {User.username: 'updated'}) .where(User.id == 2) .where(User.username == 'old')) self.assertWhere(uq, '(("users"."id" = ?) AND ("users"."username" = ?))', [2, 'old']) def test_returning_clause(self): uq = UpdateQuery(User, {User.username: 'baze'}).where(User.id > 2) test_db.returning_clause = False self.assertRaises(ValueError, lambda: uq.returning(User.username)) test_db.returning_clause = True uq_returning = uq.returning(User.username) self.assertFalse(id(uq_returning) == id(uq)) self.assertIsNone(uq._returning) sql, params = normal_compiler.generate_update(uq_returning) self.assertEqual(sql, ( 'UPDATE "users" SET "username" = ? ' 'WHERE ("users"."id" > ?) ' 'RETURNING "users"."username"')) self.assertEqual(params, ['baze', 2]) uq2 = uq_returning.returning(User, SQL('1')) sql, params = normal_compiler.generate_update(uq2) self.assertEqual(sql, ( 'UPDATE "users" SET "username" = ? ' 'WHERE ("users"."id" > ?) 
' 'RETURNING "users"."id", "users"."username", 1')) self.assertEqual(params, ['baze', 2]) uq_no_return = uq2.returning(None) sql, _ = normal_compiler.generate_update(uq_no_return) self.assertFalse('RETURNING' in sql) class TestInsertQuery(PeeweeTestCase): def setUp(self): super(TestInsertQuery, self).setUp() self._orig_returning_clause = test_db.returning_clause def tearDown(self): super(TestInsertQuery, self).tearDown() test_db.returning_clause = self._orig_returning_clause def test_insert(self): iq = InsertQuery(User, {User.username: 'inserted'}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "users" ("username") VALUES (?)', ['inserted'])) iq = InsertQuery(User, {'username': 'inserted'}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "users" ("username") VALUES (?)', ['inserted'])) pub_date = datetime.datetime(2014, 1, 2, 3, 4) iq = InsertQuery(Blog, { Blog.title: 'foo', Blog.content: 'bar', Blog.pub_date: pub_date, Blog.user: User(id=10)}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "blog" ("user_id", "title", "content", "pub_date") ' 'VALUES (?, ?, ?, ?)', [10, 'foo', 'bar', pub_date])) subquery = Blog.select(Blog.title) iq = InsertQuery(User, fields=[User.username], query=subquery) sql, params = normal_compiler.generate_insert(iq) self.assertEqual(sql, ( 'INSERT INTO "users" ("username") ' 'SELECT "t2"."title" FROM "blog" AS t2')) subquery = Blog.select(Blog.pk, Blog.title) iq = InsertQuery(User, query=subquery) sql, params = normal_compiler.generate_insert(iq) self.assertEqual(sql, ( 'INSERT INTO "users" ' 'SELECT "t2"."pk", "t2"."title" FROM "blog" AS t2')) def test_insert_default_vals(self): class DM(TestModel): name = CharField(default='peewee') value = IntegerField(default=1, null=True) other = FloatField() iq = InsertQuery(DM) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("name", "value") VALUES (?, ?)', ['peewee', 1])) iq = InsertQuery(DM, {'name': 'herman'}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("name", "value") VALUES (?, ?)', ['herman', 1])) iq = InsertQuery(DM, {'value': None}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("name", "value") VALUES (?, ?)', ['peewee', None])) iq = InsertQuery(DM, {DM.name: 'huey', 'other': 2.0}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("name", "value", "other") VALUES (?, ?, ?)', ['huey', 1, 2.0])) def test_insert_default_callable(self): def default_fn(): return -1 class DM(TestModel): name = CharField() value = IntegerField(default=default_fn) iq = InsertQuery(DM, {DM.name: 'u1'}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("name", "value") VALUES (?, ?)', ['u1', -1])) iq = InsertQuery(DM, {'name': 'u2', 'value': 1}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("name", "value") VALUES (?, ?)', ['u2', 1])) def test_insert_many(self): iq = InsertQuery(User, rows=[ {'username': 'u1'}, {User.username: 'u2'}, {'username': 'u3'}, ]) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "users" ("username") VALUES (?), (?), (?)', ['u1', 'u2', 'u3'])) iq = InsertQuery(User, rows=[{'username': 'u1'}]) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "users" ("username") VALUES (?)', ['u1'])) iq = InsertQuery(User, rows=[]) if isinstance(test_db, MySQLDatabase): self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "users" ("users"."id") VALUES (DEFAULT)', [])) else: self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO 
"users" DEFAULT VALUES', [])) def test_insert_many_defaults(self): class DefaultGenerator(object): def __init__(self): self.i = 0 def __call__(self): self.i += 1 return self.i default_gen = DefaultGenerator() class DM(TestModel): cd = IntegerField(default=default_gen) pd = IntegerField(default=-1) name = CharField() iq = InsertQuery(DM, rows=[{'name': 'u1'}, {'name': 'u2'}]) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("cd", "pd", "name") VALUES ' '(?, ?, ?), (?, ?, ?)', [1, -1, 'u1', 2, -1, 'u2'])) iq = InsertQuery(DM, rows=[ {DM.name: 'u3', DM.cd: 99}, {DM.name: 'u4', DM.pd: -2}, {DM.name: 'u5'}]) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "dm" ("cd", "pd", "name") VALUES ' '(?, ?, ?), (?, ?, ?), (?, ?, ?)', [99, -1, 'u3', 3, -2, 'u4', 4, -1, 'u5'])) def test_insert_many_gen(self): def row_generator(): for i in range(3): yield {'username': 'u%s' % i} iq = InsertQuery(User, rows=row_generator()) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "users" ("username") VALUES (?), (?), (?)', ['u0', 'u1', 'u2'])) def test_insert_special(self): iq = InsertQuery(CSVRow, {CSVRow.data: ['foo', 'bar', 'baz']}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "csvrow" ("data") VALUES (?)', ['foo,bar,baz'])) iq = InsertQuery(CSVRow, {CSVRow.data: []}) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "csvrow" ("data") VALUES (?)', [''])) iq = InsertQuery(CSVRow, rows=[ {CSVRow.data: ['foo', 'bar', 'baz']}, {CSVRow.data: ['a', 'b']}, {CSVRow.data: ['b']}, {CSVRow.data: []}]) self.assertEqual(compiler.generate_insert(iq), ( 'INSERT INTO "csvrow" ("data") VALUES (?), (?), (?), (?)', ['foo,bar,baz', 'a,b', 'b', ''])) def test_empty_insert(self): class EmptyModel(TestModel): pass iq = InsertQuery(EmptyModel, {}) sql, params = compiler.generate_insert(iq) if isinstance(test_db, MySQLDatabase): self.assertEqual(sql, ( 'INSERT INTO "emptymodel" ("emptymodel"."id") ' 'VALUES (DEFAULT)')) else: self.assertEqual(sql, 'INSERT INTO "emptymodel" DEFAULT VALUES') def test_upsert(self): class TestUser(User): class Meta: database = SqliteDatabase(':memory:') sql, params = TestUser.insert(username='charlie').upsert().sql() self.assertEqual(sql, ( 'INSERT OR REPLACE INTO "testuser" ("username") VALUES (?)')) self.assertEqual(params, ['charlie']) def test_on_conflict(self): class TestUser(User): class Meta: database = SqliteDatabase(':memory:') sql, params = TestUser.insert(username='huey').on_conflict('IGNORE').sql() self.assertEqual(sql, ( 'INSERT OR IGNORE INTO "testuser" ("username") VALUES (?)')) self.assertEqual(params, ['huey']) def test_upsert_mysql(self): class TestUser(User): class Meta: database = MySQLDatabase('peewee_test') query = TestUser.insert(username='zaizee', id=3).upsert() sql, params = query.sql() self.assertEqual(sql, ( 'REPLACE INTO `testuser` (`id`, `username`) VALUES (%s, %s)')) self.assertEqual(params, [3, 'zaizee']) def test_returning(self): iq = User.insert(username='huey') test_db.returning_clause = False self.assertRaises(ValueError, lambda: iq.returning(User.id)) test_db.returning_clause = True iq_returning = iq.returning(User.id) self.assertFalse(id(iq_returning) == id(iq)) self.assertIsNone(iq._returning) sql, params = normal_compiler.generate_insert(iq_returning) self.assertEqual(sql, ( 'INSERT INTO "users" ("username") VALUES (?) 
' 'RETURNING "users"."id"')) self.assertEqual(params, ['huey']) iq2 = iq_returning.returning(User, SQL('1')) sql, params = normal_compiler.generate_insert(iq2) self.assertEqual(sql, ( 'INSERT INTO "users" ("username") VALUES (?) ' 'RETURNING "users"."id", "users"."username", 1')) self.assertEqual(params, ['huey']) iq_no_return = iq2.returning(None) sql, _ = normal_compiler.generate_insert(iq_no_return) self.assertFalse('RETURNING' in sql) class TestInsertReturning(PeeweeTestCase): def setUp(self): super(TestInsertReturning, self).setUp() class TestReturningDatabase(TestDatabase): insert_returning = True db = TestReturningDatabase(':memory:') self.rc = db.compiler() class BaseModel(TestModel): class Meta: database = db self.BaseModel = BaseModel def assertInsertSQL(self, insert_query, sql, params=None): qsql, qparams = self.rc.generate_insert(insert_query) self.assertEqual(qsql, sql) self.assertEqual(qparams, params or []) def test_insert_returning(self): class User(self.BaseModel): username = CharField() self.assertInsertSQL( User.insert(username='charlie'), 'INSERT INTO "user" ("username") VALUES (?) RETURNING "id"', ['charlie']) def test_insert_non_int_pk(self): class User(self.BaseModel): username = CharField(primary_key=True) data = TextField(default='') self.assertInsertSQL( User.insert(username='charlie'), ('INSERT INTO "user" ("username", "data") ' 'VALUES (?, ?) RETURNING "username"'), ['charlie', '']) def test_insert_composite_key(self): class Person(self.BaseModel): first = CharField() last = CharField() dob = DateField() email = CharField() class Meta: primary_key = CompositeKey('first', 'last', 'dob') self.assertInsertSQL( Person.insert( first='huey', last='leifer', dob='05/01/2011', email='huey@kitties.cat'), ('INSERT INTO "person" ' '("first", "last", "dob", "email") ' 'VALUES (?, ?, ?, ?) ' 'RETURNING "first", "last", "dob"'), ['huey', 'leifer', '05/01/2011', 'huey@kitties.cat']) def test_insert_many(self): class User(self.BaseModel): username = CharField() data = [{'username': 'user-%s' % i} for i in range(3)] # Bulk inserts do not ask for returned primary keys. self.assertInsertSQL( User.insert_many(data), 'INSERT INTO "user" ("username") VALUES (?), (?), (?)', ['user-0', 'user-1', 'user-2']) class TestDeleteQuery(PeeweeTestCase): def setUp(self): super(TestDeleteQuery, self).setUp() self._orig_returning_clause = test_db.returning_clause def tearDown(self): super(TestDeleteQuery, self).tearDown() test_db.returning_clause = self._orig_returning_clause def test_returning(self): dq = DeleteQuery(User).where(User.id > 2) test_db.returning_clause = False self.assertRaises(ValueError, lambda: dq.returning(User.username)) test_db.returning_clause = True dq_returning = dq.returning(User.username) self.assertFalse(id(dq_returning) == id(dq)) self.assertIsNone(dq._returning) sql, params = normal_compiler.generate_delete(dq_returning) self.assertEqual(sql, ( 'DELETE FROM "users" ' 'WHERE ("id" > ?) ' 'RETURNING "username"')) self.assertEqual(params, [2]) dq2 = dq_returning.returning(User, SQL('1')) sql, params = normal_compiler.generate_delete(dq2) self.assertEqual(sql, ( 'DELETE FROM "users" WHERE ("id" > ?) 
' 'RETURNING "id", "username", 1')) self.assertEqual(params, [2]) dq_no_return = dq2.returning(None) sql, _ = normal_compiler.generate_delete(dq_no_return) self.assertFalse('RETURNING' in sql) def test_where(self): dq = DeleteQuery(User).where(User.id == 2) self.assertWhere(dq, '("users"."id" = ?)', [2]) dq = (DeleteQuery(User) .where(User.id == 2) .where(User.username == 'old')) self.assertWhere(dq, '(("users"."id" = ?) AND ("users"."username" = ?))', [2, 'old']) class TestRawQuery(PeeweeTestCase): def test_raw(self): q = 'SELECT * FROM "users" WHERE id=?' rq = RawQuery(User, q, 100) self.assertEqual(rq.sql(), (q, [100])) class TestSchema(PeeweeTestCase): def test_schema(self): class WithSchema(TestModel): data = CharField() class Meta: schema = 'huey' query = WithSchema.select().where(WithSchema.data == 'mickey') sql, params = compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "withschema"."id", "withschema"."data" FROM ' '"huey"."withschema" AS withschema ' 'WHERE ("withschema"."data" = ?)')) class TestDjangoFilters(PeeweeTestCase): # test things like filter, annotate, aggregate def test_filter(self): sq = User.filter(username='u1') self.assertJoins(sq, []) self.assertWhere(sq, '("users"."username" = ?)', ['u1']) sq = Blog.filter(user__username='u1') self.assertJoins(sq, ['INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")']) self.assertWhere(sq, '("users"."username" = ?)', ['u1']) sq = Blog.filter(user__username__in=['u1', 'u2'], comments__comment='hurp') self.assertJoins(sq, [ 'INNER JOIN "comment" AS comment ON ("blog"."pk" = "comment"."blog_id")', 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', ]) self.assertWhere(sq, '(("comment"."comment" = ?) AND ("users"."username" IN (?, ?)))', ['hurp', 'u1', 'u2']) sq = Blog.filter(user__username__in=['u1', 'u2']).filter(comments__comment='hurp') self.assertJoins(sq, [ 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', 'INNER JOIN "comment" AS comment ON ("blog"."pk" = "comment"."blog_id")', ]) self.assertWhere(sq, '(("users"."username" IN (?, ?)) AND ("comment"."comment" = ?))', ['u1', 'u2', 'hurp']) def test_filter_dq(self): sq = User.filter(~DQ(username='u1')) self.assertWhere(sq, 'NOT ("users"."username" = ?)', ['u1']) sq = User.filter(DQ(username='u1') | DQ(username='u2')) self.assertJoins(sq, []) self.assertWhere(sq, '(("users"."username" = ?) OR ("users"."username" = ?))', ['u1', 'u2']) sq = Comment.filter(DQ(blog__user__username='u1') | DQ(blog__title='b1'), DQ(comment='c1')) self.assertJoins(sq, [ 'INNER JOIN "blog" AS blog ON ("comment"."blog_id" = "blog"."pk")', 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', ]) self.assertWhere(sq, '((("users"."username" = ?) OR ("blog"."title" = ?)) AND ("comment"."comment" = ?))', ['u1', 'b1', 'c1']) sq = Blog.filter(DQ(user__username='u1') | DQ(comments__comment='c1')) self.assertJoins(sq, [ 'INNER JOIN "comment" AS comment ON ("blog"."pk" = "comment"."blog_id")', 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', ]) self.assertWhere(sq, '(("users"."username" = ?) OR ("comment"."comment" = ?))', ['u1', 'c1']) sq = Blog.filter(~DQ(user__username='u1') | DQ(user__username='b2')) self.assertJoins(sq, [ 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', ]) self.assertWhere(sq, '(NOT ("users"."username" = ?) 
OR ("users"."username" = ?))', ['u1', 'b2']) sq = Blog.filter(~( DQ(user__username='u1') | ~DQ(title='b1', pk=3))) self.assertJoins(sq, [ 'INNER JOIN "users" AS users ON ("blog"."user_id" = "users"."id")', ]) self.assertWhere(sq, 'NOT (("users"."username" = ?) OR NOT (("blog"."pk" = ?) AND ("blog"."title" = ?)))', ['u1', 3, 'b1']) def test_annotate(self): sq = User.select().annotate(Blog) self.assertSelect(sq, '"users"."id", "users"."username", Count("blog"."pk") AS count', []) self.assertJoins(sq, ['INNER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id")']) self.assertWhere(sq, '', []) self.assertGroupBy(sq, '"users"."id", "users"."username"', []) sq = User.select(User.username).annotate(Blog, fn.Sum(Blog.pk).alias('sum')).where(User.username == 'foo') self.assertSelect(sq, '"users"."username", Sum("blog"."pk") AS sum', []) self.assertJoins(sq, ['INNER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id")']) self.assertWhere(sq, '("users"."username" = ?)', ['foo']) self.assertGroupBy(sq, '"users"."username"', []) sq = User.select(User.username).annotate(Blog).annotate(Blog, fn.Max(Blog.pk).alias('mx')) self.assertSelect(sq, '"users"."username", Count("blog"."pk") AS count, Max("blog"."pk") AS mx', []) self.assertJoins(sq, ['INNER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id")']) self.assertWhere(sq, '', []) self.assertGroupBy(sq, '"users"."username"', []) sq = User.select().annotate(Blog).order_by(R('count DESC')) self.assertSelect(sq, '"users"."id", "users"."username", Count("blog"."pk") AS count', []) self.assertOrderBy(sq, 'count DESC', []) sq = User.select().join(Blog, JOIN.LEFT_OUTER).switch(User).annotate(Blog) self.assertSelect(sq, '"users"."id", "users"."username", Count("blog"."pk") AS count', []) self.assertJoins(sq, ['LEFT OUTER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id")']) self.assertWhere(sq, '', []) self.assertGroupBy(sq, '"users"."id", "users"."username"', []) sq = User.select().join(Blog, JOIN.LEFT_OUTER).annotate(Blog) self.assertSelect(sq, '"users"."id", "users"."username", Count("blog"."pk") AS count', []) self.assertJoins(sq, ['LEFT OUTER JOIN "blog" AS blog ON ("users"."id" = "blog"."user_id")']) self.assertWhere(sq, '', []) self.assertGroupBy(sq, '"users"."id", "users"."username"', []) def test_aggregate(self): sq = User.select().where(User.id < 10)._aggregate() self.assertSelect(sq, 'Count(*)', []) self.assertWhere(sq, '("users"."id" < ?)', [10]) sq = User.select()._aggregate(fn.Sum(User.id).alias('baz')) self.assertSelect(sq, 'Sum("users"."id") AS baz', []) class TestQueryCompiler(PeeweeTestCase): def test_clause(self): expr = fn.extract(Clause('year', R('FROM'), Blog.pub_date)) sql, params = compiler.parse_node(expr) self.assertEqual(sql, 'extract(? 
FROM "pub_date")') self.assertEqual(params, ['year']) def test_custom_alias(self): class Person(TestModel): name = CharField() class Meta: table_alias = 'person_tbl' class Pet(TestModel): name = CharField() owner = ForeignKeyField(Person) class Meta: table_alias = 'pet_tbl' sq = Person.select().where(Person.name == 'peewee') sql = normal_compiler.generate_select(sq) self.assertEqual( sql[0], 'SELECT "person_tbl"."id", "person_tbl"."name" FROM "person" AS ' 'person_tbl WHERE ("person_tbl"."name" = ?)') sq = Pet.select(Pet, Person.name).join(Person) sql = normal_compiler.generate_select(sq) self.assertEqual( sql[0], 'SELECT "pet_tbl"."id", "pet_tbl"."name", "pet_tbl"."owner_id", ' '"person_tbl"."name" ' 'FROM "pet" AS pet_tbl ' 'INNER JOIN "person" AS person_tbl ' 'ON ("pet_tbl"."owner_id" = "person_tbl"."id")') def test_alias_map(self): class A(TestModel): a = CharField() class Meta: table_alias = 'a_tbl' class B(TestModel): b = CharField() a_link = ForeignKeyField(A) class C(TestModel): c = CharField() b_link = ForeignKeyField(B) class D(TestModel): d = CharField() c_link = ForeignKeyField(C) class Meta: table_alias = 'd_tbl' sq = (D .select(D.d, C.c) .join(C) .where(C.b_link << ( B.select(B.id).join(A).where(A.a == 'a')))) sql, params = normal_compiler.generate_select(sq) self.assertEqual(sql, ( 'SELECT "d_tbl"."d", "t2"."c" ' 'FROM "d" AS d_tbl ' 'INNER JOIN "c" AS t2 ON ("d_tbl"."c_link_id" = "t2"."id") ' 'WHERE ("t2"."b_link_id" IN (' 'SELECT "t3"."id" FROM "b" AS t3 ' 'INNER JOIN "a" AS a_tbl ON ("t3"."a_link_id" = "a_tbl"."id") ' 'WHERE ("a_tbl"."a" = ?)))')) def test_fn_no_coerce(self): class A(TestModel): i = IntegerField() d = DateTimeField() query = A.select(A.id).where(A.d == '2013-01-02') sql, params = compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "a"."id" FROM "a" AS a WHERE ("a"."d" = ?)')) self.assertEqual(params, ['2013-01-02']) query = A.select(A.id).where(A.i == fn.Foo('test')) self.assertRaises(ValueError, query.sql) query = A.select(A.id).where(A.i == fn.Foo('test').coerce(False)) sql, params = compiler.generate_select(query) self.assertEqual(sql, ( 'SELECT "a"."id" FROM "a" AS a WHERE ("a"."i" = Foo(?))')) self.assertEqual(params, ['test']) def test_strip_parentheses(self): tests = ( ('x = 1', 'x = 1'), ('(x = 1)', 'x = 1'), ('(((((x = 1)))))', 'x = 1'), ('(((((x = (1))))))', 'x = (1)'), ('(((((x) = 1))))', '(x) = 1'), ('(x = (y = 2))', 'x = (y = 2)'), ('(((x = 1)', '((x = 1'), ('(x = 1)))', '(x = 1)))'), ('x = 1))', 'x = 1))'), ('((x = 1', '((x = 1'), ('(((()))', '('), ('((())))', ')'), ('', ''), ('(((())))', ''), ('((x), ((x) y))', '(x), ((x) y)'), ('(F(x) x), F(x)', '(F(x) x), F(x)'), ('((F(x) x) x), (F(x) F(x))', '((F(x) x) x), (F(x) F(x))'), ('(((F(x) x) x), (F(x) F(x)))', '((F(x) x) x), (F(x) F(x))'), ('((((F(x) x) x), (F(x) F(x))))', '((F(x) x) x), (F(x) F(x))'), ) for s, expected in tests: self.assertEqual(strip_parens(s), expected) def test_parens_in_queries(self): query = User.select( fn.MAX( fn.IFNULL(1, 10) * 151, fn.IFNULL(None, 10))) self.assertSelect( query, 'MAX((IFNULL(?, ?) 
* ?), IFNULL(?, ?))', [1, 10, 151, None, 10]) class TestValidation(PeeweeTestCase): def test_foreign_key_validation(self): def declare_bad(val): class Bad(TestModel): name = ForeignKeyField(val) vals_to_try = [ ForeignKeyField(User), 'Self', object, object()] for val in vals_to_try: self.assertRaises(TypeError, declare_bad, val) def test_backref_conflicts(self): class Note(TestModel): pass def declare_bad(related_name=None, backrefs=True): class Backref(Model): note = ForeignKeyField(Note, related_name=related_name) class Meta: validate_backrefs = backrefs # First call succeeds since related_name is not taken, second will # fail with AttributeError. declare_bad() self.assertRaises(AttributeError, declare_bad) # We can specify a new related_name and it will be accepted. declare_bad(related_name='valid_backref_name') # We can also silence any validation errors. declare_bad(backrefs=False) class TestProxy(PeeweeTestCase): def test_proxy(self): class A(object): def foo(self): return 'foo' a = Proxy() def raise_error(): a.foo() self.assertRaises(AttributeError, raise_error) a.initialize(A()) self.assertEqual(a.foo(), 'foo') def test_proxy_database(self): database_proxy = Proxy() class DummyModel(TestModel): test_field = CharField() class Meta: database = database_proxy # Un-initialized will raise an AttributeError. self.assertRaises(AttributeError, DummyModel.create_table) # Initialize the object. database_proxy.initialize(SqliteDatabase(':memory:')) # Do some queries, verify it is working. DummyModel.create_table() DummyModel.create(test_field='foo') self.assertEqual(DummyModel.get().test_field, 'foo') DummyModel.drop_table() def test_proxy_callbacks(self): p = Proxy() state = {} def cb1(obj): state['cb1'] = obj p.attach_callback(cb1) @p.attach_callback def cb2(obj): state['cb2'] = 'called' self.assertEqual(state, {}) p.initialize('test') self.assertEqual(state, { 'cb1': 'test', 'cb2': 'called', }) @skip_if(lambda: not test_db.window_functions) class TestWindowFunctions(ModelTestCase): """Use int_field & float_field to test window queries.""" requires = [NullModel] data = ( # int / float -- we'll use int for grouping. 
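# Five rows in three partitions: int_field=1 -> (10, 20), int_field=2 ->
# (1, 3), int_field=3 -> (100,). The per-partition AVG and rank values
# asserted below are computed from this fixture.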
(1, 10), (1, 20), (2, 1), (2, 3), (3, 100), ) def setUp(self): super(TestWindowFunctions, self).setUp() for int_v, float_v in self.data: NullModel.create(int_field=int_v, float_field=float_v) def test_frame(self): query = (NullModel .select( NullModel.float_field, fn.AVG(NullModel.float_field).over( partition_by=[NullModel.int_field], start=Window.preceding(), end=Window.following(2)))) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."float_field", AVG("t1"."float_field") ' 'OVER (PARTITION BY "t1"."int_field" RANGE BETWEEN ' 'UNBOUNDED PRECEDING AND 2 FOLLOWING) FROM "nullmodel" AS t1')) self.assertEqual(params, []) query = (NullModel .select( NullModel.float_field, fn.AVG(NullModel.float_field).over( partition_by=[NullModel.int_field], start=SQL('CURRENT ROW'), end=Window.following()))) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."float_field", AVG("t1"."float_field") ' 'OVER (PARTITION BY "t1"."int_field" RANGE BETWEEN ' 'CURRENT ROW AND UNBOUNDED FOLLOWING) FROM "nullmodel" AS t1')) self.assertEqual(params, []) def test_partition_unordered(self): query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.Avg(NullModel.float_field).over( partition_by=[NullModel.int_field])) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, 15.0), (1, 20.0, 15.0), (2, 1.0, 2.0), (2, 3.0, 2.0), (3, 100.0, 100.0), ]) def test_named_window(self): window = Window(partition_by=[NullModel.int_field]) query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.Avg(NullModel.float_field).over(window)) .window(window) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, 15.0), (1, 20.0, 15.0), (2, 1.0, 2.0), (2, 3.0, 2.0), (3, 100.0, 100.0), ]) window = Window( partition_by=[NullModel.int_field], order_by=[NullModel.float_field.desc()]) query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.rank().over(window=window)) .window(window) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, 2), (1, 20.0, 1), (2, 1.0, 2), (2, 3.0, 1), (3, 100.0, 1), ]) def test_multi_window(self): w1 = Window(partition_by=[NullModel.int_field]).alias('w1') w2 = Window(order_by=[NullModel.int_field]).alias('w2') query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.Avg(NullModel.float_field).over(window=w1), fn.Rank().over(window=w2)) .window(w1, w2) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, 15.0, 1), (1, 20.0, 15.0, 1), (2, 1.0, 2.0, 3), (2, 3.0, 2.0, 3), (3, 100.0, 100.0, 5), ]) def test_ordered_unpartitioned(self): query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.rank().over( order_by=[NullModel.float_field])) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, 3), (1, 20.0, 4), (2, 1.0, 1), (2, 3.0, 2), (3, 100.0, 5), ]) def test_ordered_partitioned(self): query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.rank().over( partition_by=[NullModel.int_field], order_by=[NullModel.float_field.desc()])) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, 2), (1, 20.0, 1), (2, 1.0, 2), (2, 3.0, 1), (3, 100.0, 1), ]) def test_empty_over(self): query = (NullModel .select( NullModel.int_field, NullModel.float_field, fn.lag(NullModel.int_field, 1).over()) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (1, 10.0, None), (1, 20.0, 1), (2, 1.0, 1), (2, 3.0, 2), (3, 100.0, 2), ]) def test_docs_example(self): 
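# Mirrors the documentation example: COUNT() OVER (PARTITION BY
# date_trunc('day', ...)) counts the rows sharing a calendar day. Note that
# date_trunc is not available on every backend (it is a Postgres function),
# so this test assumes a test_db that provides it.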
NullModel.delete().execute() # Clear out the table. curr_dt = datetime.datetime(2014, 1, 1) one_day = datetime.timedelta(days=1) for i in range(3): for j in range(i + 1): NullModel.create(int_field=i, datetime_field=curr_dt) curr_dt += one_day query = (NullModel .select( NullModel.int_field, NullModel.datetime_field, fn.Count(NullModel.id).over( partition_by=[fn.date_trunc( 'day', NullModel.datetime_field)])) .order_by(NullModel.id)) self.assertEqual(list(query.tuples()), [ (0, datetime.datetime(2014, 1, 1), 1), (1, datetime.datetime(2014, 1, 2), 2), (1, datetime.datetime(2014, 1, 2), 2), (2, datetime.datetime(2014, 1, 3), 3), (2, datetime.datetime(2014, 1, 3), 3), (2, datetime.datetime(2014, 1, 3), 3), ]) @skip_if(lambda: not test_db.distinct_on) class TestDistinctOn(ModelTestCase): requires = [User, Blog] def test_distinct_on(self): for i in range(1, 4): u = User.create(username='u%s' % i) for j in range(i): Blog.create(user=u, title='b-%s-%s' % (i, j)) query = (Blog .select(User.username, Blog.title) .join(User) .order_by(User.username, Blog.title) .distinct([User.username]) .tuples()) self.assertEqual(list(query), [ ('u1', 'b-1-0'), ('u2', 'b-2-0'), ('u3', 'b-3-0')]) query = (Blog .select( fn.Distinct(User.username), User.username, Blog.title) .join(User) .order_by(Blog.title) .tuples()) self.assertEqual(list(query), [ ('u1', 'u1', 'b-1-0'), ('u2', 'u2', 'b-2-0'), ('u2', 'u2', 'b-2-1'), ('u3', 'u3', 'b-3-0'), ('u3', 'u3', 'b-3-1'), ('u3', 'u3', 'b-3-2'), ]) @skip_if(lambda: not test_db.for_update) class TestForUpdate(ModelTestCase): requires = [User] def tearDown(self): test_db.set_autocommit(True) def test_for_update(self): u1 = User.create(username='u1') u2 = User.create(username='u2') u3 = User.create(username='u3') test_db.set_autocommit(False) # select a user for update users = User.select().where(User.username == 'u1').for_update() updated = User.update(username='u1_edited').where(User.username == 'u1').execute() self.assertEqual(updated, 1) # open up a new connection to the database new_db = self.new_connection() # select the username, it will not register as being updated res = new_db.execute_sql('select username from users where id = %s;' % u1.id) username = res.fetchone()[0] self.assertEqual(username, 'u1') # committing will cause the lock to be released test_db.commit() # now we get the update res = new_db.execute_sql('select username from users where id = %s;' % u1.id) username = res.fetchone()[0] self.assertEqual(username, 'u1_edited') @skip_if(lambda: not test_db.for_update_nowait) class TestForUpdateNoWait(ModelTestCase): requires = [User] def tearDown(self): test_db.set_autocommit(True) def test_for_update_exc(self): u1 = User.create(username='u1') test_db.set_autocommit(False) user = (User .select() .where(User.username == 'u1') .for_update(nowait=True) .execute()) # Open up a second conn. new_db = self.new_connection() class User2(User): class Meta: database = new_db db_table = User._meta.db_table # Select the username -- it will raise an error. 
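# The first connection still holds the FOR UPDATE row lock in its open
# transaction; with NOWAIT, the second connection raises OperationalError
# immediately rather than blocking until commit/rollback.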
def try_lock(): user2 = (User2 .select() .where(User2.username == 'u1') .for_update(nowait=True) .execute()) self.assertRaises(OperationalError, try_lock) test_db.rollback() peewee-2.10.2/playhouse/tests/test_query_results.py000066400000000000000000002114261316645060400225730ustar00rootroot00000000000000import itertools import sys from peewee import ModelQueryResultWrapper from peewee import NaiveQueryResultWrapper from playhouse.tests.base import ModelTestCase from playhouse.tests.base import skip_test_if from playhouse.tests.base import test_db from playhouse.tests.models import * class TestQueryResultWrapper(ModelTestCase): requires = [User, Blog, Comment] def test_iteration(self): User.create_users(10) with self.assertQueryCount(1): sq = User.select() qr = sq.execute() first_five = [] for i, u in enumerate(qr): first_five.append(u.username) if i == 4: break self.assertEqual(first_five, ['u1', 'u2', 'u3', 'u4', 'u5']) names = lambda it: [obj.username for obj in it] self.assertEqual(names(sq[5:]), ['u6', 'u7', 'u8', 'u9', 'u10']) self.assertEqual(names(sq[2:5]), ['u3', 'u4', 'u5']) another_iter = names(qr) self.assertEqual(another_iter, ['u%d' % i for i in range(1, 11)]) another_iter = names(qr) self.assertEqual(another_iter, ['u%d' % i for i in range(1, 11)]) def test_count(self): User.create_users(5) with self.assertQueryCount(1): query = User.select() qr = query.execute() self.assertEqual(qr.count, 5) # Calling again does not incur another query. self.assertEqual(qr.count, 5) with self.assertQueryCount(1): query = query.where(User.username != 'u1') qr = query.execute() self.assertEqual(qr.count, 4) # Calling again does not incur another query. self.assertEqual(qr.count, 4) def test_len(self): User.create_users(5) with self.assertQueryCount(1): query = User.select() self.assertEqual(len(query), 5) qr = query.execute() self.assertEqual(len(qr), 5) with self.assertQueryCount(1): query = query.where(User.username != 'u1') qr = query.execute() self.assertEqual(len(qr), 4) self.assertEqual(len(query), 4) def test_nested_iteration(self): User.create_users(4) with self.assertQueryCount(1): sq = User.select() outer = [] inner = [] for i_user in sq: outer.append(i_user.username) for o_user in sq: inner.append(o_user.username) self.assertEqual(outer, ['u1', 'u2', 'u3', 'u4']) self.assertEqual(inner, ['u1', 'u2', 'u3', 'u4'] * 4) def test_iteration_protocol(self): User.create_users(3) with self.assertQueryCount(1): query = User.select().order_by(User.id) qr = query.execute() for _ in range(2): for user in qr: pass i = iter(qr) for obj in i: pass self.assertRaises(StopIteration, next, i) self.assertEqual([u.username for u in qr], ['u1', 'u2', 'u3']) self.assertEqual(query[0].username, 'u1') self.assertEqual(query[2].username, 'u3') self.assertRaises(StopIteration, next, i) def test_iterator(self): User.create_users(10) with self.assertQueryCount(1): qr = User.select().order_by(User.id).execute() usernames = [u.username for u in qr.iterator()] self.assertEqual(usernames, ['u%d' % i for i in range(1, 11)]) self.assertTrue(qr._populated) self.assertEqual(qr._result_cache, []) with self.assertQueryCount(0): again = [u.username for u in qr] self.assertEqual(again, []) with self.assertQueryCount(1): qr = User.select().where(User.username == 'xxx').execute() usernames = [u.username for u in qr.iterator()] self.assertEqual(usernames, []) def test_iterator_query_method(self): User.create_users(10) with self.assertQueryCount(1): qr = User.select().order_by(User.id) usernames = [u.username for u in 
qr.iterator()] self.assertEqual(usernames, ['u%d' % i for i in range(1, 11)]) with self.assertQueryCount(0): again = [u.username for u in qr] self.assertEqual(again, []) def test_iterator_extended(self): User.create_users(10) for i in range(1, 4): for j in range(i): Blog.create( title='blog-%s-%s' % (i, j), user=User.get(User.username == 'u%s' % i)) qr = (User .select( User.username, fn.Count(Blog.pk).alias('ct')) .join(Blog) .where(User.username << ['u1', 'u2', 'u3']) .group_by(User) .order_by(User.id) .naive()) accum = [] with self.assertQueryCount(1): for user in qr.iterator(): accum.append((user.username, user.ct)) self.assertEqual(accum, [ ('u1', 1), ('u2', 2), ('u3', 3)]) qr = (User .select(fn.Count(User.id).alias('ct')) .group_by(User.username << ['u1', 'u2', 'u3']) .order_by(fn.Count(User.id).desc())) accum = [] with self.assertQueryCount(1): for ct, in qr.tuples().iterator(): accum.append(ct) self.assertEqual(accum, [7, 3]) def test_fill_cache(self): def assertUsernames(qr, n): self.assertEqual([u.username for u in qr._result_cache], ['u%d' % i for i in range(1, n+1)]) User.create_users(20) with self.assertQueryCount(1): qr = User.select().execute() qr.fill_cache(5) self.assertFalse(qr._populated) assertUsernames(qr, 5) # a subsequent call will not "over-fill" qr.fill_cache(5) self.assertFalse(qr._populated) assertUsernames(qr, 5) # ask for one more and ye shall receive qr.fill_cache(6) self.assertFalse(qr._populated) assertUsernames(qr, 6) qr.fill_cache(21) self.assertTrue(qr._populated) assertUsernames(qr, 20) self.assertRaises(StopIteration, next, qr) def test_select_related(self): u1 = User.create(username='u1') u2 = User.create(username='u2') b1 = Blog.create(user=u1, title='b1') b2 = Blog.create(user=u2, title='b2') c11 = Comment.create(blog=b1, comment='c11') c12 = Comment.create(blog=b1, comment='c12') c21 = Comment.create(blog=b2, comment='c21') c22 = Comment.create(blog=b2, comment='c22') # missing comment.blog_id comments = (Comment .select(Comment.id, Comment.comment, Blog.pk, Blog.title) .join(Blog) .where(Blog.title == 'b1') .order_by(Comment.id)) with self.assertQueryCount(1): self.assertEqual([c.blog.title for c in comments], ['b1', 'b1']) # missing blog.pk comments = (Comment .select(Comment.id, Comment.comment, Comment.blog, Blog.title) .join(Blog) .where(Blog.title == 'b2') .order_by(Comment.id)) with self.assertQueryCount(1): self.assertEqual([c.blog.title for c in comments], ['b2', 'b2']) # both but going up 2 levels comments = (Comment .select(Comment, Blog, User) .join(Blog) .join(User) .where(User.username == 'u1') .order_by(Comment.id)) with self.assertQueryCount(1): self.assertEqual([c.comment for c in comments], ['c11', 'c12']) self.assertEqual([c.blog.title for c in comments], ['b1', 'b1']) self.assertEqual([c.blog.user.username for c in comments], ['u1', 'u1']) self.assertTrue(isinstance(comments._qr, ModelQueryResultWrapper)) comments = (Comment .select() .join(Blog) .join(User) .where(User.username == 'u1') .order_by(Comment.id)) with self.assertQueryCount(5): self.assertEqual([c.blog.user.username for c in comments], ['u1', 'u1']) self.assertTrue(isinstance(comments._qr, NaiveQueryResultWrapper)) # Go up two levels and use aliases for the joined instances. 
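# Aliasing the join predicate (e.g. .alias('bx')) attaches each joined
# instance under that attribute name, so the chain is c.bx.ux rather than
# the default c.blog.user.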
comments = (Comment .select(Comment, Blog, User) .join(Blog, on=(Comment.blog == Blog.pk).alias('bx')) .join(User, on=(Blog.user == User.id).alias('ux')) .where(User.username == 'u1') .order_by(Comment.id)) with self.assertQueryCount(1): self.assertEqual([c.comment for c in comments], ['c11', 'c12']) self.assertEqual([c.bx.title for c in comments], ['b1', 'b1']) self.assertEqual([c.bx.ux.username for c in comments], ['u1', 'u1']) def test_naive(self): u1 = User.create(username='u1') u2 = User.create(username='u2') b1 = Blog.create(user=u1, title='b1') b2 = Blog.create(user=u2, title='b2') users = User.select().naive() self.assertEqual([u.username for u in users], ['u1', 'u2']) self.assertTrue(isinstance(users._qr, NaiveQueryResultWrapper)) users = User.select(User, Blog).join(Blog).naive() self.assertEqual([u.username for u in users], ['u1', 'u2']) self.assertEqual([u.title for u in users], ['b1', 'b2']) query = Blog.select(Blog, User).join(User).order_by(Blog.title).naive() self.assertEqual(query.get().user, User.get(User.username == 'u1')) def test_tuples_dicts(self): u1 = User.create(username='u1') u2 = User.create(username='u2') b1 = Blog.create(user=u1, title='b1') b2 = Blog.create(user=u2, title='b2') users = User.select().tuples().order_by(User.id) self.assertEqual([r for r in users], [ (u1.id, 'u1'), (u2.id, 'u2'), ]) users = User.select().dicts() self.assertEqual([r for r in users], [ {'id': u1.id, 'username': 'u1'}, {'id': u2.id, 'username': 'u2'}, ]) users = User.select(User, Blog).join(Blog).order_by(User.id).tuples() self.assertEqual([r for r in users], [ (u1.id, 'u1', b1.pk, u1.id, 'b1', '', None), (u2.id, 'u2', b2.pk, u2.id, 'b2', '', None), ]) users = User.select(User, Blog).join(Blog).order_by(User.id).dicts() self.assertEqual([r for r in users], [ {'id': u1.id, 'username': 'u1', 'pk': b1.pk, 'user': u1.id, 'title': 'b1', 'content': '', 'pub_date': None}, {'id': u2.id, 'username': 'u2', 'pk': b2.pk, 'user': u2.id, 'title': 'b2', 'content': '', 'pub_date': None}, ]) users = User.select().order_by(User.id).namedtuples() self.assertEqual([(r.id, r.username) for r in users], [(u1.id, 'u1'), (u2.id, 'u2')]) users = (User .select( User.id, User.username, fn.UPPER(User.username).alias('USERNAME'), (User.id + 2).alias('xid')) .order_by(User.id) .namedtuples()) self.assertEqual( [(r.id, r.username, r.USERNAME, r.xid) for r in users], [(u1.id, 'u1', 'U1', u1.id + 2), (u2.id, 'u2', 'U2', u2.id + 2)]) def test_slicing_dicing(self): def assertUsernames(users, nums): self.assertEqual([u.username for u in users], ['u%d' % i for i in nums]) User.create_users(10) with self.assertQueryCount(1): uq = User.select().order_by(User.id) for i in range(2): res = uq[0] self.assertEqual(res.username, 'u1') with self.assertQueryCount(0): for i in range(2): res = uq[1] self.assertEqual(res.username, 'u2') with self.assertQueryCount(0): for i in range(2): res = uq[-1] self.assertEqual(res.username, 'u10') with self.assertQueryCount(0): for i in range(2): res = uq[:3] assertUsernames(res, [1, 2, 3]) with self.assertQueryCount(0): for i in range(2): res = uq[2:5] assertUsernames(res, [3, 4, 5]) with self.assertQueryCount(0): for i in range(2): res = uq[5:] assertUsernames(res, [6, 7, 8, 9, 10]) with self.assertQueryCount(0): for i in range(2): res = uq[-3:] assertUsernames(res, [8, 9, 10]) with self.assertQueryCount(0): for i in range(2): res = uq[-5:-3] assertUsernames(res, [6, 7]) with self.assertQueryCount(0): for i in range(2): res = uq[:-3] assertUsernames(res, list(range(1, 8))) with 
self.assertQueryCount(0): for i in range(2): res = uq[4:-4] assertUsernames(res, [5, 6]) with self.assertQueryCount(0): for i in range(2): res = uq[-6:6] assertUsernames(res, [5, 6]) self.assertRaises(IndexError, uq.__getitem__, 10) with self.assertQueryCount(0): res = uq[10:] self.assertEqual(res, []) uq = uq.clone() with self.assertQueryCount(1): for _ in range(2): res = uq[-1] self.assertEqual(res.username, 'u10') def test_indexing_fill_cache(self): def assertUser(query_or_qr, idx): self.assertEqual(query_or_qr[idx].username, 'u%d' % (idx + 1)) User.create_users(10) uq = User.select().order_by(User.id) with self.assertQueryCount(1): # Ensure we can grab the first 5 users in 1 query. for i in range(5): assertUser(uq, i) # Iterate in reverse and ensure it only costs 1 query. uq = User.select().order_by(User.id) with self.assertQueryCount(1): for i in reversed(range(10)): assertUser(uq, i) # Execute the query and get a reference to the result wrapper. query = User.select().order_by(User.id) query.execute() qr = query._qr # Getting the first user will populate the result cache with 1 obj. assertUser(query, 0) self.assertEqual(len(qr._result_cache), 1) # Getting the last user will fill the cache. assertUser(query, 9) self.assertEqual(len(qr._result_cache), 10) def test_prepared(self): for i in range(2): u = User.create(username='u%d' % i) for j in range(2): Blog.create(title='b%d-%d' % (i, j), user=u, content='') for u in User.select(): # check prepared was called self.assertEqual(u.foo, u.username) for b in Blog.select(Blog, User).join(User): # prepared is called for select-related instances self.assertEqual(b.foo, b.title) self.assertEqual(b.user.foo, b.user.username) def test_aliasing_values(self): User.create_users(2) q = User.select(User.username.alias('xx')).order_by(User.username) results = [row for row in q.dicts()] self.assertEqual(results, [ {'xx': 'u1'}, {'xx': 'u2'}]) results = [user.xx for user in q] self.assertEqual(results, ['u1', 'u2']) # Force ModelQueryResultWrapper. q = (User .select(User.username.alias('xx'), Blog.pk) .join(Blog, JOIN.LEFT_OUTER) .order_by(User.username)) results = [user.xx for user in q] self.assertEqual(results, ['u1', 'u2']) # Use Model and Field aliases. UA = User.alias() q = (User .select( User.username.alias('x'), UA.username.alias('y')) .join(UA, on=(User.id == UA.id).alias('z')) .order_by(User.username)) results = [(user.x, user.z.y) for user in q] self.assertEqual(results, [('u1', 'u1'), ('u2', 'u2')]) q = q.naive() results = [(user.x, user.y) for user in q] self.assertEqual(results, [('u1', 'u1'), ('u2', 'u2')]) uq = User.select(User.id, User.username).alias('u2') q = (User .select( User.username.alias('x'), uq.c.username.alias('y')) .join(uq, on=(User.id == uq.c.id)) .order_by(User.username)) results = [(user.x, user.y) for user in q] self.assertEqual(results, [('u1', 'u1'), ('u2', 'u2')]) class TestJoinedInstanceConstruction(ModelTestCase): requires = [Blog, User, Relationship] def setUp(self): super(TestJoinedInstanceConstruction, self).setUp() u1 = User.create(username='u1') u2 = User.create(username='u2') Blog.create(user=u1, title='b1') Blog.create(user=u2, title='b2') def test_fk_missing_pk(self): # Not enough information.
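# Only Blog.title and User.username are selected, so the joined User is
# materialized without its primary key -- blog.user.id and blog.user_id
# are expected to be None below.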
with self.assertQueryCount(1): q = (Blog .select(Blog.title, User.username) .join(User) .order_by(Blog.title, User.username)) results = [] for blog in q: results.append((blog.title, blog.user.username)) self.assertIsNone(blog.user.id) self.assertIsNone(blog.user_id) self.assertEqual(results, [('b1', 'u1'), ('b2', 'u2')]) def test_fk_with_pk(self): with self.assertQueryCount(1): q = (Blog .select(Blog.title, User.username, User.id) .join(User) .order_by(Blog.title, User.username)) results = [] for blog in q: results.append((blog.title, blog.user.username)) self.assertIsNotNone(blog.user.id) self.assertIsNotNone(blog.user_id) self.assertEqual(results, [('b1', 'u1'), ('b2', 'u2')]) def test_backref_missing_pk(self): with self.assertQueryCount(1): q = (User .select(User.username, Blog.title) .join(Blog) .order_by(User.username, Blog.title)) results = [] for user in q: results.append((user.username, user.blog.title)) self.assertIsNone(user.id) self.assertIsNone(user.blog.pk) self.assertIsNone(user.blog.user_id) self.assertEqual(results, [('u1', 'b1'), ('u2', 'b2')]) def test_fk_join_expr(self): with self.assertQueryCount(1): q = (User .select(User.username, Blog.title) .join(Blog, on=(User.id == Blog.user).alias('bx')) .order_by(User.username)) results = [] for user in q: results.append((user.username, user.bx.title)) self.assertEqual(results, [('u1', 'b1'), ('u2', 'b2')]) with self.assertQueryCount(1): q = (Blog .select(Blog.title, User.username) .join(User, on=(Blog.user == User.id).alias('ux')) .order_by(Blog.title)) results = [] for blog in q: results.append((blog.title, blog.ux.username)) self.assertEqual(results, [('b1', 'u1'), ('b2', 'u2')]) def test_aliases(self): B = Blog.alias() U = User.alias() with self.assertQueryCount(1): q = (U.select(U.username, B.title) .join(B, on=(U.id == B.user)) .order_by(U.username)) results = [] for user in q: results.append((user.username, user.blog.title)) self.assertEqual(results, [('u1', 'b1'), ('u2', 'b2')]) with self.assertQueryCount(1): q = (B.select(B.title, U.username) .join(U, on=(B.user == U.id)) .order_by(B.title)) results = [] for blog in q: results.append((blog.title, blog.user.username)) self.assertEqual(results, [('b1', 'u1'), ('b2', 'u2')]) # No explicit join condition. with self.assertQueryCount(1): q = (B.select(B.title, U.username) .join(U, on=B.user) .order_by(B.title)) results = [(blog.title, blog.user.username) for blog in q] self.assertEqual(results, [('b1', 'u1'), ('b2', 'u2')]) # No explicit condition, backref. 
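# Passing on=B.user (a bare foreign-key field) lets peewee infer the join
# condition; after creating a second blog for u2 below, the backref join
# should yield one row per (user, blog) pair.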
Blog.create(user=User.get(User.username == 'u2'), title='b2-2') with self.assertQueryCount(1): q = (U.select(U.username, B.title) .join(B, on=B.user) .order_by(U.username, B.title)) results = [(user.username, user.blog.title) for user in q] self.assertEqual( results, [('u1', 'b1'), ('u2', 'b2'), ('u2', 'b2-2')]) def test_subqueries(self): uq = User.select() bq = Blog.select(Blog.title, Blog.user).alias('bq') with self.assertQueryCount(1): q = (User .select(User, bq.c.title.bind_to(Blog)) .join(bq, on=(User.id == bq.c.user_id).alias('blog')) .order_by(User.username)) results = [] for user in q: results.append((user.username, user.blog.title)) self.assertEqual(results, [('u1', 'b1'), ('u2', 'b2')]) def test_multiple_joins(self): Blog.delete().execute() User.delete().execute() users = [User.create(username='u%s' % i) for i in range(4)] for from_user, to_user in itertools.combinations(users, 2): Relationship.create(from_user=from_user, to_user=to_user) with self.assertQueryCount(1): ToUser = User.alias() q = (Relationship .select(Relationship, User, ToUser) .join(User, on=Relationship.from_user) .switch(Relationship) .join(ToUser, on=Relationship.to_user) .order_by(User.username, ToUser.username)) results = [(r.from_user.username, r.to_user.username) for r in q] self.assertEqual(results, [ ('u0', 'u1'), ('u0', 'u2'), ('u0', 'u3'), ('u1', 'u2'), ('u1', 'u3'), ('u2', 'u3'), ]) with self.assertQueryCount(1): ToUser = User.alias() q = (Relationship .select(Relationship, User, ToUser) .join(User, on=(Relationship.from_user == User.id)) .switch(Relationship) .join(ToUser, on=(Relationship.to_user == ToUser.id).alias('to_user')) .order_by(User.username, ToUser.username)) results = [(r.from_user.username, r.to_user.username) for r in q] self.assertEqual(results, [ ('u0', 'u1'), ('u0', 'u2'), ('u0', 'u3'), ('u1', 'u2'), ('u1', 'u3'), ('u2', 'u3'), ]) class TestQueryResultTypeConversion(ModelTestCase): requires = [User] def setUp(self): super(TestQueryResultTypeConversion, self).setUp() for i in range(3): User.create(username='u%d' % i) def assertNames(self, query, expected, attr='username'): id_field = query.model_class.id self.assertEqual( [getattr(item, attr) for item in query.order_by(id_field)], expected) def test_simple_select(self): query = UpperUser.select() self.assertNames(query, ['U0', 'U1', 'U2']) query = User.select() self.assertNames(query, ['u0', 'u1', 'u2']) def test_with_alias(self): # Even when aliased to a different attr, the column is coerced. query = UpperUser.select(UpperUser.username.alias('foo')) self.assertNames(query, ['U0', 'U1', 'U2'], 'foo') def test_scalar(self): max_username = (UpperUser .select(fn.Max(UpperUser.username)) .scalar(convert=True)) self.assertEqual(max_username, 'U2') max_username = (UpperUser .select(fn.Max(UpperUser.username)) .scalar()) self.assertEqual(max_username, 'u2') def test_function(self): substr = fn.SubStr(UpperUser.username, 1, 3) # Being the first parameter of the function, it meets the special-case # criteria. 
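# That is, the SubStr() result is run through username's python-side
# conversion (uppercasing, per the UpperUser model) unless .coerce(False)
# is applied.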
query = UpperUser.select(substr.alias('foo')) self.assertNames(query, ['U0', 'U1', 'U2'], 'foo') query = UpperUser.select(substr.coerce(False).alias('foo')) self.assertNames(query, ['u0', 'u1', 'u2'], 'foo') query = UpperUser.select(substr.coerce(False).alias('username')) self.assertNames(query, ['u0', 'u1', 'u2']) query = UpperUser.select(fn.Lower(UpperUser.username).alias('username')) self.assertNames(query, ['U0', 'U1', 'U2']) query = UpperUser.select( fn.Lower(UpperUser.username).alias('username').coerce(False)) self.assertNames(query, ['u0', 'u1', 'u2']) # Since it is aliased to an existing column, we will use that column's # coerce. query = UpperUser.select( fn.SubStr(fn.Lower(UpperUser.username), 1, 3).alias('username')) self.assertNames(query, ['U0', 'U1', 'U2']) query = UpperUser.select( fn.SubStr(fn.Lower(UpperUser.username), 1, 3).alias('foo')) self.assertNames(query, ['u0', 'u1', 'u2'], 'foo') class TestModelQueryResultWrapper(ModelTestCase): requires = [TestModelA, TestModelB, TestModelC, User, Blog] data = ( (TestModelA, ( ('pk1', 'a1'), ('pk2', 'a2'), ('pk3', 'a3'))), (TestModelB, ( ('pk1', 'b1'), ('pk2', 'b2'), ('pk3', 'b3'))), (TestModelC, ( ('pk1', 'c1'), ('pk2', 'c2'))), ) def setUp(self): super(TestModelQueryResultWrapper, self).setUp() for model_class, model_data in self.data: for pk, data in model_data: model_class.create(field=pk, data=data) def test_join_expr(self): def get_query(join_type=JOIN.INNER): sq = (TestModelA .select(TestModelA, TestModelB, TestModelC) .join( TestModelB, on=(TestModelA.field == TestModelB.field).alias('rel_b')) .join( TestModelC, join_type=join_type, on=(TestModelB.field == TestModelC.field)) .order_by(TestModelA.field)) return sq sq = get_query() self.assertEqual(sq.count(), 2) with self.assertQueryCount(1): results = list(sq) expected = (('b1', 'c1'), ('b2', 'c2')) for i, (b_data, c_data) in enumerate(expected): self.assertEqual(results[i].rel_b.data, b_data) self.assertEqual(results[i].rel_b.field.data, c_data) sq = get_query(JOIN.LEFT_OUTER) self.assertEqual(sq.count(), 3) with self.assertQueryCount(1): results = list(sq) expected = (('b1', 'c1'), ('b2', 'c2'), ('b3', None)) for i, (b_data, c_data) in enumerate(expected): self.assertEqual(results[i].rel_b.data, b_data) self.assertEqual(results[i].rel_b.field.data, c_data) def test_backward_join(self): u1 = User.create(username='u1') u2 = User.create(username='u2') for user in (u1, u2): Blog.create(title='b-%s' % user.username, user=user) # Create an additional blog for user 2. 
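# With two blogs for u2, the INNER JOIN below produces one row per
# (user, blog) pair, so 'u2' appears twice in the results.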
Blog.create(title='b-u2-2', user=u2) res = (User .select(User.username, Blog.title) .join(Blog) .order_by(User.username.asc(), Blog.title.asc())) self.assertEqual([(u.username, u.blog.title) for u in res], [ ('u1', 'b-u1'), ('u2', 'b-u2'), ('u2', 'b-u2-2')]) def test_joins_with_aliases(self): u1 = User.create(username='u1') u2 = User.create(username='u2') b1_1 = Blog.create(user=u1, title='b1-1') b1_2 = Blog.create(user=u1, title='b1-2') b2_1 = Blog.create(user=u2, title='b2-1') UserAlias = User.alias() BlogAlias = Blog.alias() def assertExpectedQuery(query, is_user_query): accum = [] with self.assertQueryCount(1): if is_user_query: for user in query: accum.append((user.username, user.blog.title)) else: for blog in query: accum.append((blog.user.username, blog.title)) self.assertEqual(accum, [ ('u1', 'b1-1'), ('u1', 'b1-2'), ('u2', 'b2-1'), ]) combinations = [ (User, BlogAlias, User.id == BlogAlias.user, True), (User, BlogAlias, BlogAlias.user == User.id, True), (User, Blog, User.id == Blog.user, True), (User, Blog, Blog.user == User.id, True), (User, Blog, None, True), (Blog, UserAlias, UserAlias.id == Blog.user, False), (Blog, UserAlias, Blog.user == UserAlias.id, False), (Blog, User, User.id == Blog.user, False), (Blog, User, Blog.user == User.id, False), (Blog, User, None, False), ] for Src, JoinModel, predicate, is_user_query in combinations: query = (Src .select(Src, JoinModel) .join(JoinModel, on=predicate) .order_by(SQL('1, 2'))) assertExpectedQuery(query, is_user_query) class TestModelQueryResultForeignKeys(ModelTestCase): requires = [Parent, Child] def test_foreign_key_assignment(self): parent = Parent.create(data='p1') child = Child.create(parent=parent, data='c1') ParentAlias = Parent.alias() query = Child.select(Child, ParentAlias) ljoin = (ParentAlias.id == Child.parent) rjoin = (Child.parent == ParentAlias.id) lhs_alias = query.join(ParentAlias, on=ljoin) rhs_alias = query.join(ParentAlias, on=rjoin) self.assertJoins(lhs_alias, [ 'INNER JOIN "parent" AS parent ' 'ON ("parent"."id" = "child"."parent_id")']) self.assertJoins(rhs_alias, [ 'INNER JOIN "parent" AS parent ' 'ON ("child"."parent_id" = "parent"."id")']) with self.assertQueryCount(1): lchild = lhs_alias.get() self.assertEqual(lchild.id, child.id) self.assertEqual(lchild.parent.id, parent.id) with self.assertQueryCount(1): rchild = rhs_alias.get() self.assertEqual(rchild.id, child.id) self.assertEqual(rchild.parent.id, parent.id) class TestSelectRelatedForeignKeyToNonPrimaryKey(ModelTestCase): requires = [Package, PackageItem] def test_select_related(self): p1 = Package.create(barcode='101') p2 = Package.create(barcode='102') pi11 = PackageItem.create(title='p11', package='101') pi12 = PackageItem.create(title='p12', package='101') pi21 = PackageItem.create(title='p21', package='102') pi22 = PackageItem.create(title='p22', package='102') # missing PackageItem.package_id. 
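# PackageItem.package is a foreign key to the non-primary-key
# Package.barcode column, so the joined Package instance can be populated
# from the selected barcode even though package_id itself is not selected.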
with self.assertQueryCount(1): items = (PackageItem .select( PackageItem.id, PackageItem.title, Package.barcode) .join(Package) .where(Package.barcode == '101') .order_by(PackageItem.id)) self.assertEqual( [i.package.barcode for i in items], ['101', '101']) with self.assertQueryCount(1): items = (PackageItem .select( PackageItem.id, PackageItem.title, PackageItem.package, Package.id) .join(Package) .where(Package.barcode == '101') .order_by(PackageItem.id)) self.assertEqual([i.package.id for i in items], [p1.id, p1.id]) class BaseTestPrefetch(ModelTestCase): requires = [ User, Blog, Comment, Parent, Child, Orphan, ChildPet, OrphanPet, Category, Post, Tag, TagPostThrough, TagPostThroughAlt, Category, UserCategory, Relationship, SpecialComment, ] user_data = [ ('u1', (('b1', ('b1-c1', 'b1-c2')), ('b2', ('b2-c1',)))), ('u2', ()), ('u3', (('b3', ('b3-c1', 'b3-c2')), ('b4', ()))), ('u4', (('b5', ('b5-c1', 'b5-c2')), ('b6', ('b6-c1',)))), ] parent_data = [ ('p1', ( # children ( ('c1', ('c1-p1', 'c1-p2')), ('c2', ('c2-p1',)), ('c3', ('c3-p1',)), ('c4', ()), ), # orphans ( ('o1', ('o1-p1', 'o1-p2')), ('o2', ('o2-p1',)), ('o3', ('o3-p1',)), ('o4', ()), ), )), ('p2', ((), ())), ('p3', ( # children ( ('c6', ()), ('c7', ('c7-p1',)), ), # orphans ( ('o6', ('o6-p1', 'o6-p2')), ('o7', ('o7-p1',)), ), )), ] category_tree = [ ['root', ['p1', 'p2']], ['p1', ['p1-1', 'p1-2']], ['p2', ['p2-1', 'p2-2']], ['p1-1', []], ['p1-2', []], ['p2-1', []], ['p2-2', []], ] def setUp(self): super(BaseTestPrefetch, self).setUp() for parent, (children, orphans) in self.parent_data: p = Parent.create(data=parent) for child_pets in children: child, pets = child_pets c = Child.create(parent=p, data=child) for pet in pets: ChildPet.create(child=c, data=pet) for orphan_pets in orphans: orphan, pets = orphan_pets o = Orphan.create(parent=p, data=orphan) for pet in pets: OrphanPet.create(orphan=o, data=pet) for user, blog_comments in self.user_data: u = User.create(username=user) for blog, comments in blog_comments: b = Blog.create(user=u, title=blog, content='') for c in comments: Comment.create(blog=b, comment=c) def _build_category_tree(self): def cc(name, parent=None): return Category.create(name=name, parent=parent) root = cc('root') p1 = cc('p1', root) p2 = cc('p2', root) for p in (p1, p2): for i in range(2): cc('%s-%s' % (p.name, i + 1), p) class TestPrefetch(BaseTestPrefetch): def test_prefetch_simple(self): sq = User.select().where(User.username != 'u3') sq2 = Blog.select().where(Blog.title != 'b2') sq3 = Comment.select() with self.assertQueryCount(3): prefetch_sq = prefetch(sq, sq2, sq3) results = [] for user in prefetch_sq: results.append(user.username) for blog in user.blog_set_prefetch: results.append(blog.title) for comment in blog.comments_prefetch: results.append(comment.comment) self.assertEqual(results, [ 'u1', 'b1', 'b1-c1', 'b1-c2', 'u2', 'u4', 'b5', 'b5-c1', 'b5-c2', 'b6', 'b6-c1', ]) with self.assertQueryCount(0): results = [] for user in prefetch_sq: for blog in user.blog_set_prefetch: results.append(blog.user.username) for comment in blog.comments_prefetch: results.append(comment.blog.title) self.assertEqual(results, [ 'u1', 'b1', 'b1', 'u4', 'b5', 'b5', 'u4', 'b6', ]) def test_prefetch_reverse(self): sq = User.select() sq2 = Blog.select().where(Blog.title != 'b2').order_by(Blog.pk) with self.assertQueryCount(2): prefetch_sq = prefetch(sq2, sq) results = [] for blog in prefetch_sq: results.append(blog.title) results.append(blog.user.username) self.assertEqual(results, [ 'b1', 'u1', 'b3', 'u3', 'b4', 'u3', 
'b5', 'u4', 'b6', 'u4']) def test_prefetch_up_and_down(self): blogs = Blog.select(Blog, User).join(User).order_by(Blog.title) comments = Comment.select().order_by(Comment.comment.desc()) with self.assertQueryCount(2): query = prefetch(blogs, comments) results = [] for blog in query: results.append(( blog.user.username, blog.title, [comment.comment for comment in blog.comments_prefetch])) self.assertEqual(results, [ ('u1', 'b1', ['b1-c2', 'b1-c1']), ('u1', 'b2', ['b2-c1']), ('u3', 'b3', ['b3-c2', 'b3-c1']), ('u3', 'b4', []), ('u4', 'b5', ['b5-c2', 'b5-c1']), ('u4', 'b6', ['b6-c1']), ]) def test_prefetch_multi_depth(self): sq = Parent.select() sq2 = Child.select() sq3 = Orphan.select() sq4 = ChildPet.select() sq5 = OrphanPet.select() with self.assertQueryCount(5): prefetch_sq = prefetch(sq, sq2, sq3, sq4, sq5) results = [] for parent in prefetch_sq: results.append(parent.data) for child in parent.child_set_prefetch: results.append(child.data) for pet in child.childpet_set_prefetch: results.append(pet.data) for orphan in parent.orphan_set_prefetch: results.append(orphan.data) for pet in orphan.orphanpet_set_prefetch: results.append(pet.data) self.assertEqual(results, [ 'p1', 'c1', 'c1-p1', 'c1-p2', 'c2', 'c2-p1', 'c3', 'c3-p1', 'c4', 'o1', 'o1-p1', 'o1-p2', 'o2', 'o2-p1', 'o3', 'o3-p1', 'o4', 'p2', 'p3', 'c6', 'c7', 'c7-p1', 'o6', 'o6-p1', 'o6-p2', 'o7', 'o7-p1', ]) def test_prefetch_no_aggregate(self): with self.assertQueryCount(1): query = (User .select(User, Blog) .join(Blog, JOIN.LEFT_OUTER) .order_by(User.username, Blog.title)) results = [] for user in query: results.append(( user.username, user.blog.title)) self.assertEqual(results, [ ('u1', 'b1'), ('u1', 'b2'), ('u2', None), ('u3', 'b3'), ('u3', 'b4'), ('u4', 'b5'), ('u4', 'b6'), ]) def test_prefetch_group_by(self): users = (User .select(User, fn.Max(fn.Length(Blog.content)).alias('max_content_len')) .join(Blog, JOIN_LEFT_OUTER) .group_by(User) .order_by(User.id)) blogs = Blog.select() comments = Comment.select() with self.assertQueryCount(3): result = prefetch(users, blogs, comments) self.assertEqual(len(result), 4) def test_prefetch_self_join(self): self._build_category_tree() Child = Category.alias() with self.assertQueryCount(2): query = prefetch(Category.select().order_by(Category.id), Child) names_and_children = [ [parent.name, [child.name for child in parent.children_prefetch]] for parent in query] self.assertEqual(names_and_children, self.category_tree) def test_prefetch_specific_model(self): # User -> Blog # -> SpecialComment (fk to user and blog) Comment.delete().execute() Blog.delete().execute() User.delete().execute() u1 = User.create(username='u1') u2 = User.create(username='u2') for i in range(1, 3): for user in (u1, u2): b = Blog.create(user=user, title='%s-b%s' % (user.username, i)) SpecialComment.create( user=user, blog=b, name='%s-c%s' % (user.username, i)) u3 = User.create(username='u3') SpecialComment.create(user=u3, name='u3-c1') u4 = User.create(username='u4') Blog.create(user=u4, title='u4-b1') u5 = User.create(username='u5') with self.assertQueryCount(3): user_pf = prefetch( User.select(), Blog, (SpecialComment, User)) results = [] for user in user_pf: results.append(( user.username, [b.title for b in user.blog_set_prefetch], [c.name for c in user.special_comments_prefetch])) self.assertEqual(results, [ ('u1', ['u1-b1', 'u1-b2'], ['u1-c1', 'u1-c2']), ('u2', ['u2-b1', 'u2-b2'], ['u2-c1', 'u2-c2']), ('u3', [], ['u3-c1']), ('u4', ['u4-b1'], []), ('u5', [], []), ]) class TestPrefetchMultipleFKs(ModelTestCase): 
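# Relationship declares two foreign keys to User; distinct related_names
# keep the reverse accessors (and their *_prefetch counterparts) apart.
# Roughly (a sketch; the real model lives in the shared test fixtures):
#
#     class Relationship(BaseModel):
#         from_user = ForeignKeyField(User, related_name='relationships')
#         to_user = ForeignKeyField(User, related_name='related_to')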
requires = [ User, Blog, Relationship, ] def create_users(self): names = ['charlie', 'huey', 'zaizee'] return [User.create(username=username) for username in names] def create_relationships(self, charlie, huey, zaizee): r1 = Relationship.create(from_user=charlie, to_user=huey) r2 = Relationship.create(from_user=charlie, to_user=zaizee) r3 = Relationship.create(from_user=huey, to_user=charlie) r4 = Relationship.create(from_user=zaizee, to_user=charlie) return r1, r2, r3, r4 def test_multiple_fks(self): charlie, huey, zaizee = self.create_users() r1, r2, r3, r4 = self.create_relationships(charlie, huey, zaizee) def assertRelationships(attr, values): for relationship, value in zip(attr, values): self.assertEqual(relationship._data, value) with self.assertQueryCount(2): users = User.select().order_by(User.id) relationships = Relationship.select() query = prefetch(users, relationships) results = [row for row in query] self.assertEqual(len(results), 3) cp, hp, zp = results assertRelationships(cp.relationships_prefetch, [ {'id': r1.id, 'from_user': charlie.id, 'to_user': huey.id}, {'id': r2.id, 'from_user': charlie.id, 'to_user': zaizee.id}]) assertRelationships(cp.related_to_prefetch, [ {'id': r3.id, 'from_user': huey.id, 'to_user': charlie.id}, {'id': r4.id, 'from_user': zaizee.id, 'to_user': charlie.id}]) assertRelationships(hp.relationships_prefetch, [ {'id': r3.id, 'from_user': huey.id, 'to_user': charlie.id}]) assertRelationships(hp.related_to_prefetch, [ {'id': r1.id, 'from_user': charlie.id, 'to_user': huey.id}]) assertRelationships(zp.relationships_prefetch, [ {'id': r4.id, 'from_user': zaizee.id, 'to_user': charlie.id}]) assertRelationships(zp.related_to_prefetch, [ {'id': r2.id, 'from_user': charlie.id, 'to_user': zaizee.id}]) def test_prefetch_multiple_fk_reverse(self): charlie, huey, zaizee = self.create_users() r1, r2, r3, r4 = self.create_relationships(charlie, huey, zaizee) with self.assertQueryCount(2): relationships = Relationship.select().order_by(Relationship.id) users = User.select() query = prefetch(relationships, users) results = [row for row in query] self.assertEqual(len(results), 4) expected = ( ('charlie', 'huey'), ('charlie', 'zaizee'), ('huey', 'charlie'), ('zaizee', 'charlie')) for (from_user, to_user), relationship in zip(expected, results): self.assertEqual(relationship.from_user.username, from_user) self.assertEqual(relationship.to_user.username, to_user) class TestPrefetchThroughM2M(ModelTestCase): requires = [User, Note, Flag, NoteFlag] test_data = [ ('charlie', [ ('rewrite peewee', ['todo']), ('rice desktop', ['done']), ('test peewee', ['todo', 'urgent']), ('write window-manager', [])]), ('huey', [ ('bite mickey', []), ('scratch furniture', ['todo', 'urgent']), ('vomit on carpet', ['done'])]), ('zaizee', []), ] def setUp(self): super(TestPrefetchThroughM2M, self).setUp() with test_db.atomic(): for username, note_data in self.test_data: user = User.create(username=username) for note, flags in note_data: self.create_note(user, note, *flags) def create_note(self, user, text, *flags): note = Note.create(user=user, text=text) for flag in flags: try: flag = Flag.get(Flag.label == flag) except Flag.DoesNotExist: flag = Flag.create(label=flag) NoteFlag.create(note=note, flag=flag) return note def test_prefetch_through_m2m(self): # One query for each table being prefetched. 
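# Passing the junction model (NoteFlag) between the two sides lets
# prefetch() span the many-to-many: four models, four queries, with
# note.flags_prefetch and nf.flag wired up entirely in memory.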
with self.assertQueryCount(4): users = User.select() notes = Note.select().order_by(Note.text) flags = Flag.select().order_by(Flag.label) query = prefetch(users, notes, NoteFlag, flags) accum = [] for user in query: notes = [] for note in user.notes_prefetch: flags = [] for nf in note.flags_prefetch: self.assertEqual(nf.note_id, note.id) self.assertEqual(nf.note.id, note.id) flags.append(nf.flag.label) notes.append((note.text, flags)) accum.append((user.username, notes)) self.assertEqual(self.test_data, accum) def test_aggregate_through_m2m(self): with self.assertQueryCount(1): query = (User .select(User, Note, NoteFlag, Flag) .join(Note, JOIN.LEFT_OUTER) .join(NoteFlag, JOIN.LEFT_OUTER) .join(Flag, JOIN.LEFT_OUTER) .order_by(User.id, Note.text, Flag.label) .aggregate_rows()) accum = [] for user in query: notes = [] for note in user.notes: flags = [] for nf in note.flags: self.assertEqual(nf.note_id, note.id) flags.append(nf.flag.label) notes.append((note.text, flags)) accum.append((user.username, notes)) self.assertEqual(self.test_data, accum) class TestAggregateRows(BaseTestPrefetch): def test_aggregate_users(self): with self.assertQueryCount(1): query = (User .select(User, Blog, Comment) .join(Blog, JOIN.LEFT_OUTER) .join(Comment, JOIN.LEFT_OUTER) .order_by(User.username, Blog.title, Comment.id) .aggregate_rows()) results = [] for user in query: results.append(( user.username, [(blog.title, [comment.comment for comment in blog.comments]) for blog in user.blog_set])) self.assertEqual(results, [ ('u1', [ ('b1', ['b1-c1', 'b1-c2']), ('b2', ['b2-c1'])]), ('u2', []), ('u3', [ ('b3', ['b3-c1', 'b3-c2']), ('b4', [])]), ('u4', [ ('b5', ['b5-c1', 'b5-c2']), ('b6', ['b6-c1'])]), ]) def test_aggregate_blogs(self): with self.assertQueryCount(1): query = (Blog .select(Blog, User, Comment) .join(User) .switch(Blog) .join(Comment, JOIN.LEFT_OUTER) .order_by(Blog.title, User.username, Comment.id) .aggregate_rows()) results = [] for blog in query: results.append(( blog.user.username, blog.title, [comment.comment for comment in blog.comments])) self.assertEqual(results, [ ('u1', 'b1', ['b1-c1', 'b1-c2']), ('u1', 'b2', ['b2-c1']), ('u3', 'b3', ['b3-c1', 'b3-c2']), ('u3', 'b4', []), ('u4', 'b5', ['b5-c1', 'b5-c2']), ('u4', 'b6', ['b6-c1']), ]) def test_aggregate_on_expression_join(self): with self.assertQueryCount(1): join_expr = (User.id == Blog.user) query = (User .select(User, Blog) .join(Blog, JOIN.LEFT_OUTER, on=join_expr) .order_by(User.username, Blog.title) .aggregate_rows()) results = [] for user in query: results.append(( user.username, [blog.title for blog in user.blog_set])) self.assertEqual(results, [ ('u1', ['b1', 'b2']), ('u2', []), ('u3', ['b3', 'b4']), ('u4', ['b5', 'b6']), ]) def test_aggregate_with_join_model_aliases(self): expected = [ ('u1', ['b1', 'b2']), ('u2', []), ('u3', ['b3', 'b4']), ('u4', ['b5', 'b6']), ] with self.assertQueryCount(1): query = (User .select(User, Blog) .join( Blog, JOIN.LEFT_OUTER, on=(User.id == Blog.user).alias('blogz')) .order_by(User.id, Blog.title) .aggregate_rows()) results = [ (user.username, [blog.title for blog in user.blogz]) for user in query] self.assertEqual(results, expected) BlogAlias = Blog.alias() with self.assertQueryCount(1): query = (User .select(User, BlogAlias) .join( BlogAlias, JOIN.LEFT_OUTER, on=(User.id == BlogAlias.user).alias('blogz')) .order_by(User.id, BlogAlias.title) .aggregate_rows()) results = [ (user.username, [blog.title for blog in user.blogz]) for user in query] self.assertEqual(results, expected) def 
test_aggregate_unselected_join_backref(self): cat_1 = Category.create(name='category 1') cat_2 = Category.create(name='category 2') with test_db.transaction(): for i, user in enumerate(User.select().order_by(User.username)): if i % 2 == 0: category = cat_2 else: category = cat_1 UserCategory.create(user=user, category=category) with self.assertQueryCount(1): # The join on UserCategory is a backref join (since the FK is on # UserCategory). Additionally, UserCategory/Category are not # selected and are only used for filtering the result set. query = (User .select(User, Blog) .join(Blog, JOIN.LEFT_OUTER) .switch(User) .join(UserCategory) .join(Category) .where(Category.name == cat_1.name) .order_by(User.username, Blog.title) .aggregate_rows()) results = [] for user in query: results.append(( user.username, [blog.title for blog in user.blog_set])) self.assertEqual(results, [ ('u2', []), ('u4', ['b5', 'b6']), ]) def test_aggregate_manytomany(self): p1 = Post.create(title='p1') p2 = Post.create(title='p2') Post.create(title='p3') p4 = Post.create(title='p4') t1 = Tag.create(tag='t1') t2 = Tag.create(tag='t2') t3 = Tag.create(tag='t3') TagPostThroughAlt.create(tag=t1, post=p1) TagPostThroughAlt.create(tag=t2, post=p1) TagPostThroughAlt.create(tag=t2, post=p2) TagPostThroughAlt.create(tag=t3, post=p2) TagPostThroughAlt.create(tag=t1, post=p4) TagPostThroughAlt.create(tag=t2, post=p4) TagPostThroughAlt.create(tag=t3, post=p4) with self.assertQueryCount(1): query = (Post .select(Post, TagPostThroughAlt, Tag) .join(TagPostThroughAlt, JOIN.LEFT_OUTER) .join(Tag, JOIN.LEFT_OUTER) .order_by(Post.id, TagPostThroughAlt.post, Tag.id) .aggregate_rows()) results = [] for post in query: post_data = [post.title] for tpt in post.tags_alt: post_data.append(tpt.tag.tag) results.append(post_data) self.assertEqual(results, [ ['p1', 't1', 't2'], ['p2', 't2', 't3'], ['p3'], ['p4', 't1', 't2', 't3'], ]) def test_aggregate_parent_child(self): with self.assertQueryCount(1): query = (Parent .select(Parent, Child, Orphan, ChildPet, OrphanPet) .join(Child, JOIN.LEFT_OUTER) .join(ChildPet, JOIN.LEFT_OUTER) .switch(Parent) .join(Orphan, JOIN.LEFT_OUTER) .join(OrphanPet, JOIN.LEFT_OUTER) .order_by( Parent.data, Child.data, ChildPet.id, Orphan.data, OrphanPet.id) .aggregate_rows()) results = [] for parent in query: results.append(( parent.data, [(child.data, [pet.data for pet in child.childpet_set]) for child in parent.child_set], [(orphan.data, [pet.data for pet in orphan.orphanpet_set]) for orphan in parent.orphan_set] )) # Without the `.aggregate_rows()` call, this would be 289!! 
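# Where prefetch() issues one query per model, aggregate_rows() runs a
# single joined SELECT and de-duplicates adjacent rows into nested
# instances. A minimal sketch of the pattern used throughout this class:
#
#     query = (User
#              .select(User, Blog)
#              .join(Blog, JOIN.LEFT_OUTER)
#              .order_by(User.username, Blog.title)
#              .aggregate_rows())
#     for user in query:  # one query, however many related rows come back
#         print(user.username, [blog.title for blog in user.blog_set])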
self.assertEqual(results, [ ('p1', [('c1', ['c1-p1', 'c1-p2']), ('c2', ['c2-p1']), ('c3', ['c3-p1']), ('c4', [])], [('o1', ['o1-p1', 'o1-p2']), ('o2', ['o2-p1']), ('o3', ['o3-p1']), ('o4', [])], ), ('p2', [], []), ('p3', [('c6', []), ('c7', ['c7-p1'])], [('o6', ['o6-p1', 'o6-p2']), ('o7', ['o7-p1'])],) ]) def test_aggregate_with_unselected_joins(self): with self.assertQueryCount(1): query = (Child .select(Child, ChildPet, Parent) .join(ChildPet, JOIN.LEFT_OUTER) .switch(Child) .join(Parent) .join(Orphan) .join(OrphanPet) .where(OrphanPet.data == 'o6-p2') .order_by(Child.data, ChildPet.data) .aggregate_rows()) results = [] for child in query: results.append(( child.data, child.parent.data, [child_pet.data for child_pet in child.childpet_set])) self.assertEqual(results, [ ('c6', 'p3', []), ('c7', 'p3', ['c7-p1']), ]) with self.assertQueryCount(1): query = (Parent .select(Parent, Child, ChildPet) .join(Child, JOIN.LEFT_OUTER) .join(ChildPet, JOIN.LEFT_OUTER) .switch(Parent) .join(Orphan) .join(OrphanPet) .where(OrphanPet.data == 'o6-p2') .order_by(Parent.data, Child.data, ChildPet.data) .aggregate_rows()) results = [] for parent in query: results.append(( parent.data, [(child.data, [pet.data for pet in child.childpet_set]) for child in parent.child_set])) self.assertEqual(results, [('p3', [ ('c6', []), ('c7', ['c7-p1']), ])]) def test_aggregate_rows_ordering(self): # Refs github #519. with self.assertQueryCount(1): query = (User .select(User, Blog) .join(Blog, JOIN.LEFT_OUTER) .order_by(User.username.desc(), Blog.title.desc()) .aggregate_rows()) accum = [] for user in query: accum.append(( user.username, [blog.title for blog in user.blog_set])) if sys.version_info[:2] > (2, 6): self.assertEqual(accum, [ ('u4', ['b6', 'b5']), ('u3', ['b4', 'b3']), ('u2', []), ('u1', ['b2', 'b1']), ]) def test_aggregate_rows_self_join(self): self._build_category_tree() Child = Category.alias() # Same query, but this time use an `alias` on the join expr. 
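# Because aggregate_rows() collapses *adjacent* rows, each of these queries
# orders by the outermost model's key first; without that, rows belonging
# to a single instance could be split into several partial results (refs
# #519, tested above).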
with self.assertQueryCount(1): query = (Category .select(Category, Child) .join( Child, JOIN.LEFT_OUTER, on=(Category.id == Child.parent).alias('childrenx')) .order_by(Category.id, Child.id) .aggregate_rows()) names_and_children = [ [parent.name, [child.name for child in parent.childrenx]] for parent in query] self.assertEqual(names_and_children, self.category_tree) def test_multiple_fks(self): names = ['charlie', 'huey', 'zaizee'] charlie, huey, zaizee = [ User.create(username=username) for username in names] Relationship.create(from_user=charlie, to_user=huey) Relationship.create(from_user=charlie, to_user=zaizee) Relationship.create(from_user=huey, to_user=charlie) Relationship.create(from_user=zaizee, to_user=charlie) UserAlias = User.alias() with self.assertQueryCount(1): query = (User .select(User, Relationship, UserAlias) .join( Relationship, JOIN.LEFT_OUTER, on=Relationship.from_user) .join( UserAlias, on=( Relationship.to_user == UserAlias.id ).alias('to_user')) .order_by(User.username, Relationship.id) .where(User.username == 'charlie') .aggregate_rows()) results = [row for row in query] self.assertEqual(len(results), 1) user = results[0] self.assertEqual(user.username, 'charlie') self.assertEqual(len(user.relationships), 2) rh, rz = user.relationships self.assertEqual(rh.to_user.username, 'huey') self.assertEqual(rz.to_user.username, 'zaizee') FromUser = User.alias() ToUser = User.alias() from_join = (Relationship.from_user == FromUser.id) to_join = (Relationship.to_user == ToUser.id) with self.assertQueryCount(1): query = (Relationship .select(Relationship, FromUser, ToUser) .join(FromUser, on=from_join.alias('from_user')) .switch(Relationship) .join(ToUser, on=to_join.alias('to_user')) .order_by(Relationship.id) .aggregate_rows()) results = [ (relationship.from_user.username, relationship.to_user.username) for relationship in query] self.assertEqual(results, [ ('charlie', 'huey'), ('charlie', 'zaizee'), ('huey', 'charlie'), ('zaizee', 'charlie'), ]) def test_multiple_fks_multi_depth(self): names = ['charlie', 'huey', 'zaizee'] charlie, huey, zaizee = [ User.create(username=username) for username in names] Relationship.create(from_user=charlie, to_user=huey) Relationship.create(from_user=charlie, to_user=zaizee) Relationship.create(from_user=huey, to_user=charlie) Relationship.create(from_user=zaizee, to_user=charlie) human = Category.create(name='human') kitty = Category.create(name='kitty') UserCategory.create(user=charlie, category=human) UserCategory.create(user=huey, category=kitty) UserCategory.create(user=zaizee, category=kitty) FromUser = User.alias() ToUser = User.alias() from_join = (Relationship.from_user == FromUser.id) to_join = (Relationship.to_user == ToUser.id) FromUserCategory = UserCategory.alias() ToUserCategory = UserCategory.alias() from_uc_join = (FromUser.id == FromUserCategory.user) to_uc_join = (ToUser.id == ToUserCategory.user) FromCategory = Category.alias() ToCategory = Category.alias() from_c_join = (FromUserCategory.category == FromCategory.id) to_c_join = (ToUserCategory.category == ToCategory.id) with self.assertQueryCount(1): query = (Relationship .select( Relationship, FromUser, ToUser, FromUserCategory, ToUserCategory, FromCategory, ToCategory) .join(FromUser, on=from_join.alias('from_user')) .join(FromUserCategory, on=from_uc_join.alias('fuc')) .join(FromCategory, on=from_c_join.alias('category')) .switch(Relationship) .join(ToUser, on=to_join.alias('to_user')) .join(ToUserCategory, on=to_uc_join.alias('tuc')) .join(ToCategory, 
on=to_c_join.alias('category')) .order_by(Relationship.id) .aggregate_rows()) results = [] for obj in query: from_user = obj.from_user to_user = obj.to_user results.append(( from_user.username, from_user.fuc[0].category.name, to_user.username, to_user.tuc[0].category.name)) self.assertEqual(results, [ ('charlie', 'human', 'huey', 'kitty'), ('charlie', 'human', 'zaizee', 'kitty'), ('huey', 'kitty', 'charlie', 'human'), ('zaizee', 'kitty', 'charlie', 'human'), ]) class TestAggregateRowsRegression(ModelTestCase): requires = [ User, Blog, Comment, Category, CommentCategory, BlogData] def setUp(self): super(TestAggregateRowsRegression, self).setUp() u = User.create(username='u1') b = Blog.create(title='b1', user=u) BlogData.create(blog=b) c1 = Comment.create(blog=b, comment='c1') c2 = Comment.create(blog=b, comment='c2') cat1 = Category.create(name='cat1') cat2 = Category.create(name='cat2') CommentCategory.create(category=cat1, comment=c1, sort_order=1) CommentCategory.create(category=cat2, comment=c1, sort_order=1) CommentCategory.create(category=cat1, comment=c2, sort_order=2) CommentCategory.create(category=cat2, comment=c2, sort_order=2) def test_aggregate_rows_regression(self): comments = (Comment .select( Comment, CommentCategory, Category, Blog, BlogData) .join(CommentCategory, JOIN.LEFT_OUTER) .join(Category, JOIN.LEFT_OUTER) .switch(Comment) .join(Blog) .join(BlogData, JOIN.LEFT_OUTER) .where(Category.id == 1) .order_by(CommentCategory.sort_order)) with self.assertQueryCount(1): c_list = list(comments.aggregate_rows()) def test_regression_506(self): user = User.create(username='u2') for i in range(2): Blog.create(title='u2-%s' % i, user=user) users = (User .select() .order_by(User.id.desc()) .paginate(1, 5) .alias('users')) with self.assertQueryCount(1): query = (User .select(User, Blog) .join(Blog) .join(users, on=(User.id == users.c.id)) .order_by(User.username, Blog.title) .aggregate_rows()) results = [] for user in query: results.append(( user.username, [blog.title for blog in user.blog_set])) self.assertEqual(results, [ ('u1', ['b1']), ('u2', ['u2-0', 'u2-1']), ]) class TestPrefetchNonPKFK(ModelTestCase): requires = [Package, PackageItem] data = { '101': ['a', 'b'], '102': ['c'], '103': [], '104': ['a', 'b', 'c', 'd', 'e'], } def setUp(self): super(TestPrefetchNonPKFK, self).setUp() for barcode, titles in self.data.items(): Package.create(barcode=barcode) for title in titles: PackageItem.create(package=barcode, title=title) def test_prefetch(self): packages = Package.select().order_by(Package.barcode) items = PackageItem.select().order_by(PackageItem.id) query = prefetch(packages, items) for package, (barcode, titles) in zip(query, sorted(self.data.items())): self.assertEqual(package.barcode, barcode) self.assertEqual( [item.title for item in package.items_prefetch], titles) packages = (Package .select() .where(Package.barcode << ['101', '104']) .order_by(Package.id)) items = items.where(PackageItem.title << ['a', 'c', 'e']) query = prefetch(packages, items) accum = {} for package in query: accum[package.barcode] = [ item.title for item in package.items_prefetch] self.assertEqual(accum, { '101': ['a'], '104': ['a', 'c','e'], }) peewee-2.10.2/playhouse/tests/test_read_slave.py000066400000000000000000000114611316645060400217470ustar00rootroot00000000000000from peewee import * from peewee import Using from playhouse.read_slave import ReadSlaveModel from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase queries = [] def reset(): global 
queries queries = [] class QueryLogDatabase(SqliteDatabase): name = '' def execute_sql(self, query, *args, **kwargs): queries.append((self.name, query)) return super(QueryLogDatabase, self).execute_sql( query, *args, **kwargs) class Master(QueryLogDatabase): name = 'master' class Slave1(QueryLogDatabase): name = 'slave1' class Slave2(QueryLogDatabase): name = 'slave2' master = database_initializer.get_database('sqlite', db_class=Master) slave1 = database_initializer.get_database('sqlite', db_class=Slave1) slave2 = database_initializer.get_database('sqlite', db_class=Slave2) # Models to use for testing read slaves. class BaseModel(ReadSlaveModel): class Meta: database = master read_slaves = [slave1, slave2] class User(BaseModel): username = CharField() class Thing(BaseModel): name = CharField() class Meta: read_slaves = [slave2] # Regular models to use for testing `Using`. class BaseMasterOnly(Model): class Meta: database = master class A(BaseMasterOnly): data = CharField() class B(BaseMasterOnly): data = CharField() class TestUsing(ModelTestCase): requires = [A, B] def setUp(self): super(TestUsing, self).setUp() reset() def assertDatabaseVerb(self, expected): db_and_verb = [(db, sql.split()[0]) for db, sql in queries] self.assertEqual(db_and_verb, expected) reset() def test_using_context(self): models = [A, B] with Using(slave1, models, False): A.create(data='a1') B.create(data='b1') self.assertDatabaseVerb([ ('slave1', 'INSERT'), ('slave1', 'INSERT')]) with Using(slave2, models, False): A.create(data='a2') B.create(data='b2') a_obj = A.select().order_by(A.id).get() self.assertEqual(a_obj.data, 'a1') self.assertDatabaseVerb([ ('slave2', 'INSERT'), ('slave2', 'INSERT'), ('slave2', 'SELECT')]) with Using(master, models, False): query = A.select().order_by(A.data.desc()) values = [a_obj.data for a_obj in query] self.assertEqual(values, ['a2', 'a1']) self.assertDatabaseVerb([('master', 'SELECT')]) def test_using_transactions(self): with Using(slave1, [A]) as txn: list(B.select()) A.create(data='a1') B.create(data='b1') self.assertDatabaseVerb([ ('slave1', 'BEGIN'), ('master', 'SELECT'), ('slave1', 'INSERT'), ('master', 'INSERT')]) def fail_with_exc(data): with Using(slave2, [A]): A.create(data=data) raise ValueError('xxx') self.assertRaises(ValueError, fail_with_exc, 'a2') self.assertDatabaseVerb([ ('slave2', 'BEGIN'), ('slave2', 'INSERT')]) with Using(slave1, [A, B]): a_objs = [a_obj.data for a_obj in A.select()] self.assertEqual(a_objs, ['a1']) class TestMasterSlave(ModelTestCase): requires = [User, Thing] def setUp(self): super(TestMasterSlave, self).setUp() User.create(username='peewee') Thing.create(name='something') reset() def assertQueries(self, databases): self.assertEqual([q[0] for q in queries], databases) def test_balance_pair(self): for i in range(6): User.get() self.assertQueries([ 'slave1', 'slave2', 'slave1', 'slave2', 'slave1', 'slave2']) def test_balance_single(self): for i in range(3): Thing.get() self.assertQueries(['slave2', 'slave2', 'slave2']) def test_query_types(self): u = User.create(username='charlie') User.select().where(User.username == 'charlie').get() self.assertQueries(['master', 'slave1']) User.get(User.username == 'charlie') self.assertQueries(['master', 'slave1', 'slave2']) u.username = 'edited' u.save() # Update. 
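# Routing recap: every write (INSERT/UPDATE/DELETE) executes on
# Meta.database, while plain reads rotate round-robin through
# Meta.read_slaves -- hence the master/slave1/slave2/master sequence
# asserted below.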
self.assertQueries(['master', 'slave1', 'slave2', 'master']) u.delete_instance() self.assertQueries(['master', 'slave1', 'slave2', 'master', 'master']) def test_raw_queries(self): User.raw('insert into user (username) values (?)', 'charlie').execute() rq = list(User.raw('select * from user where username = ?', 'charlie')) self.assertEqual(rq[0].username, 'charlie') self.assertQueries(['master', 'slave1']) peewee-2.10.2/playhouse/tests/test_reflection.py000066400000000000000000000404061316645060400217750ustar00rootroot00000000000000import os import re from peewee import * from peewee import create_model_tables from peewee import drop_model_tables from peewee import mysql from peewee import print_ from playhouse.reflection import * from playhouse.tests.base import database_initializer from playhouse.tests.base import PeeweeTestCase sqlite_db = database_initializer.get_database('sqlite') DATABASES = [sqlite_db] if mysql: DATABASES.append(database_initializer.get_database('mysql')) try: import psycopg2 DATABASES.append(database_initializer.get_database('postgres')) except ImportError: pass class BaseModel(Model): class Meta: database = sqlite_db class ColTypes(BaseModel): f1 = BigIntegerField(index=True) f2 = BlobField() f3 = BooleanField() f4 = CharField(max_length=50) f5 = DateField() f6 = DateTimeField() f7 = DecimalField() f8 = DoubleField() f9 = FloatField() f10 = IntegerField(unique=True) f11 = PrimaryKeyField() f12 = TextField() f13 = TimeField() class Meta: indexes = ( (('f10', 'f11'), True), (('f11', 'f8', 'f13'), False), ) class Nullable(BaseModel): nullable_cf = CharField(null=True) nullable_if = IntegerField(null=True) class RelModel(BaseModel): col_types = ForeignKeyField(ColTypes, related_name='foo') col_types_nullable = ForeignKeyField(ColTypes, null=True) class FKPK(BaseModel): col_types = ForeignKeyField(ColTypes, primary_key=True) class Underscores(BaseModel): _id = PrimaryKeyField() _name = CharField() class Category(BaseModel): name = CharField(max_length=10) parent = ForeignKeyField('self', null=True) class Nugget(BaseModel): category_id = ForeignKeyField(Category, db_column='category_id') category = CharField() class NumericColumn(BaseModel): three = CharField(db_column='3data') five = CharField(db_column='555_value') seven = CharField(db_column='7 eleven') MODELS = ( ColTypes, Nullable, RelModel, FKPK, Underscores, Category, Nugget) class TestReflection(PeeweeTestCase): def setUp(self): super(TestReflection, self).setUp() if os.path.exists(sqlite_db.database): os.unlink(sqlite_db.database) sqlite_db.connect() for model in MODELS: model._meta.database = sqlite_db def tearDown(self): sqlite_db.close() def test_generate_models(self): introspector = self.get_introspector() self.assertEqual(introspector.generate_models(), {}) for model in MODELS: model.create_table() models = introspector.generate_models() self.assertEqual(sorted(models.keys()), [ 'category', 'coltypes', 'fkpk', 'nugget', 'nullable', 'relmodel', 'underscores']) def assertIsInstance(obj, klass): self.assertTrue(isinstance(obj, klass)) category = models['category'] self.assertEqual( sorted(category._meta.fields), ['id', 'name', 'parent']) assertIsInstance(category.id, PrimaryKeyField) assertIsInstance(category.name, CharField) assertIsInstance(category.parent, ForeignKeyField) self.assertEqual(category.parent.rel_model, category) fkpk = models['fkpk'] self.assertEqual(sorted(fkpk._meta.fields), ['col_types']) assertIsInstance(fkpk.col_types, ForeignKeyField) self.assertEqual(fkpk.col_types.rel_model, 
models['coltypes']) self.assertTrue(fkpk.col_types.primary_key) relmodel = models['relmodel'] self.assertEqual( sorted(relmodel._meta.fields), ['col_types', 'col_types_nullable', 'id']) assertIsInstance(relmodel.col_types, ForeignKeyField) assertIsInstance(relmodel.col_types_nullable, ForeignKeyField) self.assertFalse(relmodel.col_types.null) self.assertTrue(relmodel.col_types_nullable.null) self.assertEqual(relmodel.col_types.rel_model, models['coltypes']) self.assertEqual(relmodel.col_types_nullable.rel_model, models['coltypes']) def test_generate_models_indexes(self): introspector = self.get_introspector() self.assertEqual(introspector.generate_models(), {}) for model in MODELS: model.create_table() models = introspector.generate_models() self.assertEqual(models['fkpk']._meta.indexes, []) self.assertEqual(models['relmodel']._meta.indexes, []) self.assertEqual(models['category']._meta.indexes, []) col_types = models['coltypes'] indexed = set(['f1']) unique = set(['f10']) for field in col_types._meta.sorted_fields: self.assertEqual(field.index, field.name in indexed) self.assertEqual(field.unique, field.name in unique) indexes = col_types._meta.indexes self.assertEqual(sorted(indexes), [ (['f10', 'f11'], True), (['f11', 'f8', 'f13'], False), ]) def test_table_subset(self): for model in MODELS: model.create_table() introspector = self.get_introspector() models = introspector.generate_models(table_names=[ 'category', 'coltypes', 'foobarbaz']) self.assertEqual(sorted(models.keys()), ['category', 'coltypes']) def test_invalid_python_field_names(self): NumericColumn.create_table() introspector = self.get_introspector() models = introspector.generate_models(table_names=['numericcolumn']) NC = models['numericcolumn'] self.assertEqual(sorted(NC._meta.fields), ['_3data', '_555_value', '_7_eleven', 'id']) def test_sqlite_fk_re(self): user_id_tests = [ 'FOREIGN KEY("user_id") REFERENCES "users"("id")', 'FOREIGN KEY(user_id) REFERENCES users(id)', 'FOREIGN KEY ([user_id]) REFERENCES [users] ([id])', '"user_id" NOT NULL REFERENCES "users" ("id")', 'user_id not null references users (id)', ] fk_pk_tests = [ ('"col_types_id" INTEGER NOT NULL PRIMARY KEY REFERENCES ' '"coltypes" ("f11")'), 'FOREIGN KEY ("col_types_id") REFERENCES "coltypes" ("f11")', ] regex = SqliteMetadata.re_foreign_key for test in user_id_tests: match = re.search(regex, test, re.I) self.assertEqual(match.groups(), ( 'user_id', 'users', 'id', )) for test in fk_pk_tests: match = re.search(regex, test, re.I) self.assertEqual(match.groups(), ( 'col_types_id', 'coltypes', 'f11', )) def get_introspector(self): return Introspector.from_database(sqlite_db) def test_make_column_name(self): introspector = self.get_introspector() tests = ( ('Column', 'column'), ('Foo_iD', 'foo'), ('foo_id', 'foo'), ('foo_id_id', 'foo_id'), ('foo', 'foo'), ('_id', '_id'), ('a123', 'a123'), ('and', 'and_'), ('Class', 'class_'), ('Class_ID', 'class_'), ) for col_name, expected in tests: self.assertEqual( introspector.make_column_name(col_name), expected) def test_make_model_name(self): introspector = self.get_introspector() tests = ( ('Table', 'Table'), ('table', 'Table'), ('table_baz', 'TableBaz'), ('foo__bar__baz2', 'FooBarBaz2'), ('foo12_3', 'Foo123'), ) for table_name, expected in tests: self.assertEqual( introspector.make_model_name(table_name), expected) def create_tables(self, db): for model in MODELS: model._meta.database = db drop_model_tables(MODELS, fail_silently=True) create_model_tables(MODELS) def generative_test(fn): def inner(self): for database in 
DATABASES: try: introspector = Introspector.from_database(database) self.create_tables(database) fn(self, introspector) finally: drop_model_tables(MODELS) return inner @generative_test def test_col_types(self, introspector): columns, primary_keys, foreign_keys, model_names, indexes =\ introspector.introspect() expected = ( ('coltypes', ( ('f1', BigIntegerField, False), # There do not appear to be separate constants for the blob and # text field types in MySQL's drivers. See GH#1034. ('f2', (BlobField, TextField), False), ('f3', (BooleanField, IntegerField), False), ('f4', CharField, False), ('f5', DateField, False), ('f6', DateTimeField, False), ('f7', DecimalField, False), ('f8', (DoubleField, FloatField), False), ('f9', FloatField, False), ('f10', IntegerField, False), ('f11', PrimaryKeyField, False), ('f12', TextField, False), ('f13', TimeField, False))), ('relmodel', ( ('col_types_id', ForeignKeyField, False), ('col_types_nullable_id', ForeignKeyField, True))), ('nugget', ( ('category_id', ForeignKeyField, False), ('category', CharField, False))), ('nullable', ( ('nullable_cf', CharField, True), ('nullable_if', IntegerField, True))), ('fkpk', ( ('col_types_id', ForeignKeyField, False),)), ('underscores', ( ('_id', PrimaryKeyField, False), ('_name', CharField, False))), ('category', ( ('name', CharField, False), ('parent_id', ForeignKeyField, True))), ) for table_name, expected_columns in expected: introspected_columns = columns[table_name] for field_name, field_class, is_null in expected_columns: if not isinstance(field_class, (list, tuple)): field_class = (field_class,) column = introspected_columns[field_name] self.assertTrue(column.field_class in field_class) self.assertEqual(column.nullable, is_null) @generative_test def test_foreign_keys(self, introspector): columns, primary_keys, foreign_keys, model_names, indexes =\ introspector.introspect() self.assertEqual(foreign_keys['coltypes'], []) rel_model = foreign_keys['relmodel'] self.assertEqual(len(rel_model), 2) fkpk = foreign_keys['fkpk'] self.assertEqual(len(fkpk), 1) fkpk_fk = fkpk[0] self.assertEqual(fkpk_fk.table, 'fkpk') self.assertEqual(fkpk_fk.column, 'col_types_id') self.assertEqual(fkpk_fk.dest_table, 'coltypes') self.assertEqual(fkpk_fk.dest_column, 'f11') category = foreign_keys['category'] self.assertEqual(len(category), 1) category_fk = category[0] self.assertEqual(category_fk.table, 'category') self.assertEqual(category_fk.column, 'parent_id') self.assertEqual(category_fk.dest_table, 'category') self.assertEqual(category_fk.dest_column, 'id') @generative_test def test_table_names(self, introspector): columns, primary_keys, foreign_keys, model_names, indexes =\ introspector.introspect() names = ( ('coltypes', 'Coltypes'), ('nullable', 'Nullable'), ('relmodel', 'Relmodel'), ('fkpk', 'Fkpk')) for k, v in names: self.assertEqual(model_names[k], v) @generative_test def test_column_meta(self, introspector): columns, primary_keys, foreign_keys, model_names, indexes =\ introspector.introspect() rel_model = columns['relmodel'] col_types_id = rel_model['col_types_id'] self.assertEqual(col_types_id.get_field_parameters(), { 'db_column': "'col_types_id'", 'rel_model': 'Coltypes', 'to_field': "'f11'", }) col_types_nullable_id = rel_model['col_types_nullable_id'] self.assertEqual(col_types_nullable_id.get_field_parameters(), { 'db_column': "'col_types_nullable_id'", 'null': True, 'related_name': "'coltypes_col_types_nullable_set'", 'rel_model': 'Coltypes', 'to_field': "'f11'", }) fkpk = columns['fkpk'] 
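# All of the metadata unpacked in these tests flows from a single entry
# point (a sketch of what each @generative_test receives):
#
#     introspector = Introspector.from_database(database)
#     columns, primary_keys, foreign_keys, model_names, indexes = \
#         introspector.introspect()
#     models = introspector.generate_models()  # table name -> Model class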
self.assertEqual(fkpk['col_types_id'].get_field_parameters(), { 'db_column': "'col_types_id'", 'rel_model': 'Coltypes', 'primary_key': True, 'to_field': "'f11'"}) category = columns['category'] parent_id = category['parent_id'] self.assertEqual(parent_id.get_field_parameters(), { 'db_column': "'parent_id'", 'null': True, 'rel_model': "'self'", 'to_field': "'id'", }) nugget = columns['nugget'] category_fk = nugget['category_id'] self.assertEqual(category_fk.name, 'category_id') self.assertEqual(category_fk.get_field_parameters(), { 'to_field': "'id'", 'rel_model': 'Category', 'db_column': "'category_id'", }) category = nugget['category'] self.assertEqual(category.name, 'category') @generative_test def test_get_field(self, introspector): columns, primary_keys, foreign_keys, model_names, indexes =\ introspector.introspect() expected = ( ('coltypes', ( ('f1', 'f1 = BigIntegerField(index=True)'), ('f2', 'f2 = BlobField()'), ('f4', 'f4 = CharField()'), ('f5', 'f5 = DateField()'), ('f6', 'f6 = DateTimeField()'), ('f7', 'f7 = DecimalField()'), ('f10', 'f10 = IntegerField(unique=True)'), ('f11', 'f11 = PrimaryKeyField()'), ('f12', ('f12 = TextField()', 'f12 = BlobField()')), ('f13', 'f13 = TimeField()'), )), ('nullable', ( ('nullable_cf', 'nullable_cf = ' 'CharField(null=True)'), ('nullable_if', 'nullable_if = IntegerField(null=True)'), )), ('fkpk', ( ('col_types_id', 'col_types = ForeignKeyField(' 'db_column=\'col_types_id\', primary_key=True, ' 'rel_model=Coltypes, to_field=\'f11\')'), )), ('nugget', ( ('category_id', 'category_id = ForeignKeyField(' 'db_column=\'category_id\', rel_model=Category, ' 'to_field=\'id\')'), ('category', 'category = CharField()'), )), ('relmodel', ( ('col_types_id', 'col_types = ForeignKeyField(' 'db_column=\'col_types_id\', rel_model=Coltypes, ' 'to_field=\'f11\')'), ('col_types_nullable_id', 'col_types_nullable = ' 'ForeignKeyField(db_column=\'col_types_nullable_id\', ' 'null=True, rel_model=Coltypes, ' 'related_name=\'coltypes_col_types_nullable_set\', ' 'to_field=\'f11\')'), )), ('underscores', ( ('_id', '_id = PrimaryKeyField()'), ('_name', '_name = CharField()'), )), ('category', ( ('name', 'name = CharField()'), ('parent_id', 'parent = ForeignKeyField(' 'db_column=\'parent_id\', null=True, rel_model=\'self\', ' 'to_field=\'id\')'), )), ) for table, field_data in expected: for field_name, fields in field_data: if not isinstance(fields, tuple): fields = (fields,) self.assertTrue(columns[table][field_name].get_field(), fields) peewee-2.10.2/playhouse/tests/test_shortcuts.py000066400000000000000000000505321316645060400217020ustar00rootroot00000000000000from peewee import * from peewee import Expression from peewee import OP from playhouse.hybrid import hybrid_method from playhouse.hybrid import hybrid_property from playhouse.shortcuts import * from playhouse.test_utils import assert_query_count from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.libs import mock db = database_initializer.get_in_memory_database() class BaseModel(Model): class Meta: database = db class TestModel(BaseModel): name = CharField() number = IntegerField() class Category(BaseModel): name = CharField() parent = ForeignKeyField('self', null=True, related_name='children') class User(BaseModel): username = CharField() @hybrid_method def name_hash(self): return sum(map(ord, self.username)) % 10 @hybrid_property def title(self): return self.username.title() @title.expression def 
title(self): return Expression( fn.UPPER(fn.SUBSTR(self.username, 1, 1)), OP.CONCAT, fn.SUBSTR(self.username, 2)) class Note(BaseModel): user = ForeignKeyField(User, related_name='notes') text = TextField() class Tag(BaseModel): tag = CharField() class NoteTag(BaseModel): note = ForeignKeyField(Note) tag = ForeignKeyField(Tag) class Meta: primary_key = CompositeKey('note', 'tag') MODELS = [ Category, User, Note, Tag, NoteTag] class TestRetryDatabaseMixin(PeeweeTestCase): def test_retry_success(self): class RetryDB(RetryOperationalError, SqliteDatabase): pass db = RetryDB(':memory:') conn1 = mock.Mock(name='conn1') conn2 = mock.Mock(name='conn2') curs_exc = mock.Mock(name='curs_exc') curs_exc.execute.side_effect = OperationalError() curs_ok = mock.Mock(name='curs_ok') conn1.cursor.side_effect = [curs_exc] conn2.cursor.side_effect = [curs_ok] with mock.patch.object(db, '_connect') as mc: mc.side_effect = [conn1, conn2] ret = db.execute_sql('fail query', (1, 2)) self.assertTrue(ret is curs_ok) self.assertFalse(ret is curs_exc) curs_exc.execute.assert_called_once_with('fail query', (1, 2)) self.assertEqual(conn1.commit.call_count, 0) self.assertEqual(conn1.rollback.call_count, 0) self.assertEqual(conn1.close.call_count, 1) curs_ok.execute.assert_called_once_with('fail query', (1, 2)) self.assertEqual(conn2.commit.call_count, 1) self.assertEqual(conn2.rollback.call_count, 0) self.assertEqual(conn2.close.call_count, 0) self.assertTrue(db.get_conn() is conn2) def test_retry_fail(self): class RetryDB(RetryOperationalError, SqliteDatabase): pass db = RetryDB(':memory:') conn1 = mock.Mock(name='conn1') conn2 = mock.Mock(name='conn2') curs_exc = mock.Mock(name='curs_exc') curs_exc.execute.side_effect = OperationalError() curs_exc2 = mock.Mock(name='curs_exc2') curs_exc2.execute.side_effect = OperationalError() conn1.cursor.side_effect = [curs_exc] conn2.cursor.side_effect = [curs_exc2] with mock.patch.object(db, '_connect') as mc: mc.side_effect = [conn1, conn2] self.assertRaises(OperationalError, db.execute_sql, 'fail2') curs_exc.execute.assert_called_once_with('fail2', ()) self.assertEqual(conn1.commit.call_count, 0) self.assertEqual(conn1.rollback.call_count, 0) self.assertEqual(conn1.close.call_count, 1) curs_exc2.execute.assert_called_once_with('fail2', ()) self.assertEqual(conn2.commit.call_count, 0) self.assertEqual(conn2.rollback.call_count, 0) self.assertEqual(conn2.close.call_count, 0) class TestCastShortcut(ModelTestCase): requires = [User] def test_cast_shortcut(self): for username in ['100', '001', '101']: User.create(username=username) query = (User .select( User.username, cast(User.username, 'int').alias('username_i')) .order_by(SQL('username_i'))) results = [(user.username, user.username_i) for user in query] self.assertEqual(results, [ ('001', 1), ('100', 100), ('101', 101), ]) class TestCaseShortcut(ModelTestCase): requires = [TestModel] values = ( ('alpha', 1), ('beta', 2), ('gamma', 3)) expected = [ {'name': 'alpha', 'number_string': 'one'}, {'name': 'beta', 'number_string': 'two'}, {'name': 'gamma', 'number_string': '?'}, ] def setUp(self): super(TestCaseShortcut, self).setUp() for name, number in self.values: TestModel.create(name=name, number=number) def test_predicate(self): query = (TestModel .select(TestModel.name, case(TestModel.number, ( (1, "one"), (2, "two")), "?").alias('number_string')) .order_by(TestModel.id)) self.assertEqual(list(query.dicts()), self.expected) def test_no_predicate(self): query = (TestModel .select(TestModel.name, case(None, ( (TestModel.number ==
1, "one"), (TestModel.number == 2, "two")), "?").alias('number_string')) .order_by(TestModel.id)) self.assertEqual(list(query.dicts()), self.expected) class TestModelToDict(ModelTestCase): requires = MODELS def setUp(self): super(TestModelToDict, self).setUp() self.user = User.create(username='peewee') def test_simple(self): with assert_query_count(0): self.assertEqual(model_to_dict(self.user), { 'id': self.user.id, 'username': self.user.username}) def test_simple_recurse(self): note = Note.create(user=self.user, text='note-1') with assert_query_count(0): self.assertEqual(model_to_dict(note), { 'id': note.id, 'text': note.text, 'user': { 'id': self.user.id, 'username': self.user.username}}) with assert_query_count(0): self.assertEqual(model_to_dict(note, recurse=False), { 'id': note.id, 'text': note.text, 'user': self.user.id, }) def test_recurse_max_depth(self): n0, n1, n2 = [Note.create(user=self.user, text='n%s' % i) for i in range(3)] t0, tx = [Tag.create(tag=t) for t in ('t0', 'tx')] NoteTag.create(note=n0, tag=t0) NoteTag.create(note=n0, tag=tx) NoteTag.create(note=n1, tag=tx) data = model_to_dict(self.user, recurse=True, backrefs=True) self.assertEqual(data, { 'id': self.user.id, 'username': 'peewee', 'notes': [ {'id': n0.id, 'text': 'n0', 'notetag_set': [ {'tag': {'tag': 't0', 'id': t0.id}}, {'tag': {'tag': 'tx', 'id': tx.id}}, ]}, {'id': n1.id, 'text': 'n1', 'notetag_set': [ {'tag': {'tag': 'tx', 'id': tx.id}}, ]}, {'id': n2.id, 'text': 'n2', 'notetag_set': []}, ]}) data = model_to_dict(self.user, recurse=True, backrefs=True, max_depth=2) self.assertEqual(data, { 'id': self.user.id, 'username': 'peewee', 'notes': [ {'id': n0.id, 'text': 'n0', 'notetag_set': [ {'tag': t0.id}, {'tag': tx.id}, ]}, {'id': n1.id, 'text': 'n1', 'notetag_set': [ {'tag': tx.id}, ]}, {'id': n2.id, 'text': 'n2', 'notetag_set': []}, ]}) data = model_to_dict(self.user, recurse=True, backrefs=True, max_depth=1) self.assertEqual(data, { 'id': self.user.id, 'username': 'peewee', 'notes': [ {'id': n0.id, 'text': 'n0'}, {'id': n1.id, 'text': 'n1'}, {'id': n2.id, 'text': 'n2'}, ]}) data = model_to_dict(self.user, recurse=True, backrefs=True, max_depth=0) self.assertEqual(data, { 'id': self.user.id, 'username': 'peewee'}) def test_simple_backref(self): with assert_query_count(1): self.assertEqual(model_to_dict(self.user, backrefs=True), { 'id': self.user.id, 'notes': [], 'username': self.user.username}) # Create a note to populate backrefs list. note = Note.create(user=self.user, text='note-1') expected = { 'id': self.user.id, 'notes': [ {'id': note.id, 'notetag_set': [], 'text': note.text}, ], 'username': self.user.username} # Two queries: one to get related notes, one to get related notetags. with assert_query_count(2): self.assertEqual( model_to_dict(self.user, backrefs=True), expected) query = (User .select(User, Note, NoteTag) .join(Note, JOIN.LEFT_OUTER) .join(NoteTag, JOIN.LEFT_OUTER) .aggregate_rows()) user = query.get() with assert_query_count(0): self.assertEqual(model_to_dict(user, backrefs=True), expected) def test_recurse_backrefs(self): note = Note.create(user=self.user, text='note-1') # One query to retrieve the note-tag set. 
with assert_query_count(1): self.assertEqual(model_to_dict(note, backrefs=True), { 'id': note.id, 'notetag_set': [], 'text': note.text, 'user': { 'id': self.user.id, 'username': self.user.username, }, }) def test_recursive_fk(self): root = Category.create(name='root') child = Category.create(name='child', parent=root) grandchild = Category.create(name='grandchild', parent=child) with assert_query_count(0): self.assertEqual(model_to_dict(root), { 'id': root.id, 'name': root.name, 'parent': None, }) with assert_query_count(0): self.assertEqual(model_to_dict(root, recurse=False), { 'id': root.id, 'name': root.name, 'parent': None, }) with assert_query_count(1): self.assertEqual(model_to_dict(root, backrefs=True), { 'children': [{'id': child.id, 'name': child.name}], 'id': root.id, 'name': root.name, 'parent': None, }) with assert_query_count(1): self.assertEqual(model_to_dict(child, backrefs=True), { 'children': [{'id': grandchild.id, 'name': grandchild.name}], 'id': child.id, 'name': child.name, 'parent': { 'id': root.id, 'name': root.name, }, }) with assert_query_count(0): self.assertEqual(model_to_dict(child, backrefs=False), { 'id': child.id, 'name': child.name, 'parent': { 'id': root.id, 'name': root.name, }, }) def test_many_to_many(self): note = Note.create(user=self.user, text='note-1') t1 = Tag.create(tag='t1') t2 = Tag.create(tag='t2') Tag.create(tag='tx') # Not used on any notes. nt1 = NoteTag.create(note=note, tag=t1) nt2 = NoteTag.create(note=note, tag=t2) expected = { 'id': self.user.id, 'notes': [{ 'id': note.id, 'notetag_set': [ {'tag': {'id': t1.id, 'tag': t1.tag}}, {'tag': {'id': t2.id, 'tag': t2.tag}}, ], 'text': note.text, }], 'username': self.user.username, } # One query for the notes, one for the note-tags, and two tag queries. with assert_query_count(4): self.assertEqual( model_to_dict(self.user, backrefs=True), expected) def test_only(self): expected = {'username': self.user.username} self.assertEqual( model_to_dict(self.user, only=[User.username]), expected) self.assertEqual( model_to_dict(self.user, backrefs=True, only=[User.username]), expected) note = Note.create(user=self.user, text='note-1') expected = {'text': note.text, 'user': { 'username': self.user.username}} self.assertEqual( model_to_dict(note, only=[Note.text, Note.user, User.username]), expected) self.assertEqual( model_to_dict( note, backrefs=True, only=[Note.text, Note.user, User.username]), expected) expected['user'] = self.user.id self.assertEqual( model_to_dict( note, backrefs=True, recurse=False, only=[Note.text, Note.user, User.username]), expected) def test_exclude(self): self.assertEqual( model_to_dict(self.user, exclude=[User.id]), {'username': self.user.username}) self.assertEqual( model_to_dict( self.user, backrefs=True, exclude=[User.id, Note.user]), {'username': self.user.username}) self.assertEqual( model_to_dict( self.user, backrefs=True, exclude=[User.id, User.notes]), {'username': self.user.username}) note = Note.create(user=self.user, text='note-1') self.assertEqual( model_to_dict( note, backrefs=True, exclude=[Note.user, Note.notetag_set, Note.id]), {'text': note.text}) self.assertEqual( model_to_dict( note, backrefs=True, exclude=[User.id, Note.notetag_set, Note.id]), {'text': note.text, 'user': {'username': self.user.username}}) def test_extra_attrs(self): with assert_query_count(0): extra = ['name_hash', 'title'] self.assertEqual(model_to_dict(self.user, extra_attrs=extra), { 'id': self.user.id, 'username': self.user.username, 'name_hash': 5, 'title': 'Peewee', }) with assert_query_count(0): #
Unknown attr causes AttributeError. def fails(): model_to_dict(self.user, extra_attrs=['xx']) self.assertRaises(AttributeError, fails) def test_fields_from_query(self): User.delete().execute() for i in range(3): user = User.create(username='u%s' % i) for x in range(i + 1): Note.create(user=user, text='%s-%s' % (user.username, x)) query = (User .select(User.username, fn.COUNT(Note.id).alias('ct')) .join(Note, JOIN.LEFT_OUTER) .group_by(User.username) .order_by(User.id)) with assert_query_count(1): u0, u1, u2 = list(query) self.assertEqual(model_to_dict(u0, fields_from_query=query), { 'username': 'u0', 'ct': 1}) self.assertEqual(model_to_dict(u2, fields_from_query=query), { 'username': 'u2', 'ct': 3}) notes = (Note .select(Note, User, SQL('1337').alias('magic')) .join(User) .order_by(Note.id) .limit(1)) with assert_query_count(1): n1, = notes res = model_to_dict(n1, fields_from_query=notes) self.assertEqual(res, { 'id': n1.id, 'magic': 1337, 'text': 'u0-0', 'user': { 'id': n1.user_id, 'username': 'u0', }, }) res = model_to_dict( n1, fields_from_query=notes, exclude=[User.id, Note.id]) self.assertEqual(res, { 'magic': 1337, 'text': 'u0-0', 'user': {'username': 'u0'}, }) # `only` has no effect when using `fields_from_query`. res = model_to_dict( n1, fields_from_query=notes, only=[User.username]) self.assertEqual(res, { 'id': n1.id, 'magic': 1337, 'text': 'u0-0', 'user': {'id': n1.user_id, 'username': 'u0'}, }) def test_only_backref(self): u = User.create(username='u1') Note.create(user=u, text='n1') Note.create(user=u, text='n2') Note.create(user=u, text='n3') d = model_to_dict(u, only=[ User.username, User.notes, Note.text], backrefs=True) if 'notes' in d: d['notes'].sort(key=lambda n: n['text']) self.assertEqual(d, { 'username': 'u1', 'notes': [ {'text': 'n1'}, {'text': 'n2'}, {'text': 'n3'}, ]}) class TestDictToModel(ModelTestCase): requires = MODELS def setUp(self): super(TestDictToModel, self).setUp() self.user = User.create(username='charlie') def test_simple(self): data = {'username': 'charlie', 'id': self.user.id} inst = dict_to_model(User, data) self.assertTrue(isinstance(inst, User)) self.assertEqual(inst.username, 'charlie') self.assertEqual(inst.id, self.user.id) def test_related(self): data = { 'id': 2, 'text': 'note-1', 'user': { 'id': self.user.id, 'username': 'charlie'}} with assert_query_count(0): inst = dict_to_model(Note, data) self.assertTrue(isinstance(inst, Note)) self.assertEqual(inst.id, 2) self.assertEqual(inst.text, 'note-1') self.assertTrue(isinstance(inst.user, User)) self.assertEqual(inst.user.id, self.user.id) self.assertEqual(inst.user.username, 'charlie') data['user'] = self.user.id with assert_query_count(0): inst = dict_to_model(Note, data) with assert_query_count(1): self.assertEqual(inst.user, self.user) def test_backrefs(self): data = { 'id': self.user.id, 'username': 'charlie', 'notes': [ {'id': 1, 'text': 'note-1'}, {'id': 2, 'text': 'note-2'}, ]} with assert_query_count(0): inst = dict_to_model(User, data) self.assertEqual(inst.id, self.user.id) self.assertEqual(inst.username, 'charlie') self.assertTrue(isinstance(inst.notes, list)) note_1, note_2 = inst.notes self.assertEqual(note_1.id, 1) self.assertEqual(note_1.text, 'note-1') self.assertEqual(note_1.user, self.user) self.assertEqual(note_2.id, 2) self.assertEqual(note_2.text, 'note-2') self.assertEqual(note_2.user, self.user) def test_unknown_attributes(self): data = { 'id': self.user.id, 'username': 'peewee', 'xx': 'does not exist'} self.assertRaises( AttributeError, dict_to_model, User, data) inst 
= dict_to_model(User, data, ignore_unknown=True) self.assertEqual(inst.xx, 'does not exist') peewee-2.10.2/playhouse/tests/test_signals.py000066400000000000000000000075241316645060400213070ustar00rootroot00000000000000from peewee import * from playhouse import signals from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase db = database_initializer.get_in_memory_database() class BaseSignalModel(signals.Model): class Meta: database = db class ModelA(BaseSignalModel): a = CharField(default='') class ModelB(BaseSignalModel): b = CharField(default='') class SubclassOfModelB(ModelB): pass class SignalsTestCase(ModelTestCase): requires = [ModelA, ModelB, SubclassOfModelB] def tearDown(self): super(SignalsTestCase, self).tearDown() signals.pre_save._flush() signals.post_save._flush() signals.pre_delete._flush() signals.post_delete._flush() signals.pre_init._flush() signals.post_init._flush() def test_pre_save(self): state = [] @signals.pre_save() def pre_save(sender, instance, created): state.append((sender, instance, instance._get_pk_value(), created)) m = ModelA() res = m.save() self.assertEqual(state, [(ModelA, m, None, True)]) self.assertEqual(res, 1) res = m.save() self.assertTrue(m.id is not None) self.assertEqual(state[-1], (ModelA, m, m.id, False)) self.assertEqual(res, 1) def test_post_save(self): state = [] @signals.post_save() def post_save(sender, instance, created): state.append((sender, instance, instance._get_pk_value(), created)) m = ModelA() m.save() self.assertTrue(m.id is not None) self.assertEqual(state, [(ModelA, m, m.id, True)]) m.save() self.assertEqual(state[-1], (ModelA, m, m.id, False)) def test_pre_delete(self): state = [] m = ModelA() m.save() @signals.pre_delete() def pre_delete(sender, instance): state.append((sender, instance, ModelA.select().count())) res = m.delete_instance() self.assertEqual(state, [(ModelA, m, 1)]) self.assertEqual(res, 1) def test_post_delete(self): state = [] m = ModelA() m.save() @signals.post_delete() def post_delete(sender, instance): state.append((sender, instance, ModelA.select().count())) m.delete_instance() self.assertEqual(state, [(ModelA, m, 0)]) def test_pre_init(self): state = [] m = ModelA(a='a') m.save() @signals.pre_init() def pre_init(sender, instance): state.append((sender, instance.a)) ModelA.get() self.assertEqual(state, [(ModelA, '')]) def test_post_init(self): state = [] m = ModelA(a='a') m.save() @signals.post_init() def post_init(sender, instance): state.append((sender, instance.a)) ModelA.get() self.assertEqual(state, [(ModelA, 'a')]) def test_sender(self): state = [] @signals.post_save(sender=ModelA) def post_save(sender, instance, created): state.append(instance) m = ModelA.create() self.assertEqual(state, [m]) m2 = ModelB.create() self.assertEqual(state, [m]) def test_connect_disconnect(self): state = [] @signals.post_save(sender=ModelA) def post_save(sender, instance, created): state.append(instance) m = ModelA.create() self.assertEqual(state, [m]) signals.post_save.disconnect(post_save) m2 = ModelA.create() self.assertEqual(state, [m]) def test_subclass_instance_receive_signals(self): state = [] @signals.post_save(sender=ModelB) def post_save(sender, instance, created): state.append(instance) m = SubclassOfModelB.create() assert m in state peewee-2.10.2/playhouse/tests/test_speedups.py000066400000000000000000000124051316645060400214710ustar00rootroot00000000000000import datetime import unittest from peewee import * from playhouse import _speedups as speedups from 
playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase db = database_initializer.get_in_memory_database(use_speedups=True) class BaseModel(Model): class Meta: database = db class Note(BaseModel): content = TextField() timestamp = DateTimeField(default=datetime.datetime.now) class TestResultWrappers(ModelTestCase): requires = [Note] def setUp(self): super(TestResultWrappers, self).setUp() for i in range(10): Note.create(content='note-%s' % i) def test_dirty_fields(self): note = Note.create(content='huey') self.assertFalse(note.is_dirty()) self.assertEqual(note.dirty_fields, []) ndb = Note.get(Note.content == 'huey') self.assertFalse(ndb.is_dirty()) self.assertEqual(ndb.dirty_fields, []) ndb.content = 'x' self.assertTrue(ndb.is_dirty()) self.assertEqual(ndb.dirty_fields, ['content']) def test_gh_regression_1073_func_coerce(self): func = fn.GROUP_CONCAT(Note.id).alias('note_ids') query = Note.select(func) self.assertRaises(ValueError, query.get) query = Note.select(func.coerce(False)) result = query.get().note_ids self.assertEqual(result, ','.join(str(i) for i in range(1, 11))) def test_tuple_results(self): query = Note.select().order_by(Note.id).tuples() qr = query.execute() self.assertTrue(isinstance(qr, speedups._TuplesQueryResultWrapper)) results = list(qr) self.assertEqual(len(results), 10) first, last = results[0], results[-1] self.assertEqual(first[:2], (1, 'note-0')) self.assertEqual(last[:2], (10, 'note-9')) self.assertTrue(isinstance(first[2], datetime.datetime)) def test_dict_results(self): query = Note.select().order_by(Note.id).dicts() qr = query.execute() self.assertTrue(isinstance(qr, speedups._DictQueryResultWrapper)) results = list(qr) self.assertEqual(len(results), 10) first, last = results[0], results[-1] self.assertEqual(sorted(first.keys()), ['content', 'id', 'timestamp']) self.assertEqual(first['id'], 1) self.assertEqual(first['content'], 'note-0') self.assertTrue(isinstance(first['timestamp'], datetime.datetime)) self.assertEqual(last['id'], 10) self.assertEqual(last['content'], 'note-9') def test_model_results(self): query = Note.select().order_by(Note.id) qr = query.execute() self.assertTrue(isinstance(qr, speedups._ModelQueryResultWrapper)) results = list(qr) self.assertEqual(len(results), 10) first, last = results[0], results[-1] self.assertTrue(isinstance(first, Note)) self.assertEqual(first.id, 1) self.assertEqual(first.content, 'note-0') self.assertTrue(isinstance(first.timestamp, datetime.datetime)) self.assertEqual(last.id, 10) self.assertEqual(last.content, 'note-9') def test_aliases(self): query = (Note .select( Note.id, Note.content.alias('ct'), Note.timestamp.alias('ts')) .order_by(Note.id)) rows = list(query.tuples()) self.assertEqual(len(rows), 10) self.assertEqual(rows[0][:2], (1, 'note-0')) self.assertTrue(isinstance(rows[0][2], datetime.datetime)) rows = list(query.dicts()) first = rows[0] self.assertEqual(sorted(first.keys()), ['ct', 'id', 'ts']) self.assertEqual(first['id'], 1) self.assertEqual(first['ct'], 'note-0') self.assertTrue(isinstance(first['ts'], datetime.datetime)) rows = list(query) first = rows[0] self.assertTrue(isinstance(first, Note)) self.assertEqual(first.id, 1) self.assertEqual(first.ct, 'note-0') self.assertIsNone(first.content) self.assertTrue(isinstance(first.ts, datetime.datetime)) def test_fill_cache(self): with self.assertQueryCount(1): query = Note.select().order_by(Note.id) qr = query.execute() qr.fill_cache(3) self.assertEqual(qr._ct, 3) self.assertEqual(len(qr._result_cache), 3) # 
No changes to result wrapper. notes = query[:3] self.assertEqual([n.id for n in notes], [1, 2, 3]) self.assertEqual(qr._ct, 4) self.assertEqual(len(qr._result_cache), 4) self.assertFalse(qr._populated) qr.fill_cache(5) notes = query[:5] self.assertEqual([n.id for n in notes], [1, 2, 3, 4, 5]) self.assertEqual(qr._ct, 6) self.assertEqual(len(qr._result_cache), 6) notes = query[:7] self.assertEqual([n.id for n in notes], [1, 2, 3, 4, 5, 6, 7]) self.assertEqual(qr._ct, 8) self.assertFalse(qr._populated) qr.fill_cache() self.assertEqual( [n.id for n in query], [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) self.assertEqual(qr._ct, 10) self.assertTrue(qr._populated) peewee-2.10.2/playhouse/tests/test_sqlcipher_ext.py000066400000000000000000000071441316645060400225170ustar00rootroot00000000000000import datetime from hashlib import sha1 from peewee import DatabaseError from playhouse.sqlcipher_ext import * from playhouse.sqlite_ext import * from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase db = database_initializer.get_database('sqlcipher') ext_db = database_initializer.get_database( 'sqlcipher_ext', passphrase='testing sqlcipher') class BaseModel(Model): class Meta: database = db class Thing(BaseModel): name = CharField() @ext_db.func('shazam') def shazam(s): return sha1(s or '').hexdigest()[:5] class ExtModel(Model): class Meta: database = ext_db class FTSNote(FTSModel): content = TextField() class Meta: database = ext_db class Note(ExtModel): content = TextField() timestamp = DateTimeField(default=datetime.datetime.now) class SqlCipherTestCase(ModelTestCase): requires = [Thing] def test_good_and_bad_passphrases(self): things = ('t1', 't2', 't3') for thing in things: Thing.create(name=thing) # Try to open db with wrong passphrase secure = False bad_db = database_initializer.get_database( 'sqlcipher', passphrase='wrong passphrase') self.assertRaises(DatabaseError, bad_db.get_tables) # Assert that we can still access the data with the good passphrase. 
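# (The module-level `db` handle was opened with the correct passphrase, so
# ORM queries issued through it keep working even after the failed attempt
# above. A minimal sketch of what the initializer wraps -- assuming the
# documented playhouse.sqlcipher_ext API; the filename and passphrase are
# illustrative only:
#
#     from playhouse.sqlcipher_ext import SqlCipherDatabase
#     secure_db = SqlCipherDatabase('demo.db', passphrase='correct horse')
#     secure_db.get_tables()  # first read raises DatabaseError on a bad key
# )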
query = Thing.select().order_by(Thing.name) self.assertEqual([t.name for t in query], ['t1', 't2', 't3']) def test_passphrase_length(self): db = database_initializer.get_database('sqlcipher', passphrase='x') self.assertRaises(ImproperlyConfigured, db.connect) def test_kdf_iter(self): db = database_initializer.get_database('sqlcipher', kdf_iter=9999) self.assertRaises(ImproperlyConfigured, db.connect) class SqlCipherExtTestCase(ModelTestCase): requires = [Note] def setUp(self): super(SqlCipherExtTestCase, self).setUp() FTSNote.drop_table(True) FTSNote.create_table(tokenize='porter', content=Note.content) def tearDown(self): super(SqlCipherExtTestCase, self).tearDown() FTSNote.drop_table(True) def test_fts(self): strings = [ 'python and peewee for working with databases', 'relational databases are the best', 'sqlite is the best relational database', 'sqlcipher is a cool database extension'] for s in strings: Note.create(content=s) FTSNote.rebuild() query = (FTSNote .select(FTSNote, FTSNote.rank().alias('score')) .where(FTSNote.match('relational databases')) .order_by(SQL('score').desc())) notes = [note.content for note in query] self.assertEqual(notes, [ 'relational databases are the best', 'sqlite is the best relational database']) alt_conn = SqliteDatabase(ext_db.database) self.assertRaises( DatabaseError, alt_conn.execute_sql, 'SELECT * FROM "%s"' % (FTSNote._meta.db_table)) def test_func(self): Note.create(content='hello') Note.create(content='baz') Note.create(content='nug') query = (Note .select(Note.content, fn.shazam(Note.content).alias('shz')) .order_by(Note.id) .dicts()) results = list(query) self.assertEqual(results, [ {'content': 'hello', 'shz': 'aaf4c'}, {'content': 'baz', 'shz': 'bbe96'}, {'content': 'nug', 'shz': '52616'}, ]) peewee-2.10.2/playhouse/tests/test_sqlite_c_ext.py000066400000000000000000000211761316645060400223310ustar00rootroot00000000000000import datetime import unittest from peewee import * from playhouse.sqlite_ext import * from playhouse.sqlite_ext import _VirtualFieldMixin from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.libs import mock try: from playhouse import _sqlite_ext except ImportError: raise ImportError('Unable to load `_sqlite_ext` C extension.') db = SqliteExtDatabase(':memory:') class BaseModel(Model): class Meta: database = db class Note(BaseModel): content = TextField() timestamp = DateTimeField(default=datetime.datetime.now) class NoteIndex(FTSModel): docid = DocIDField() content = SearchField() class Meta: database = db extension_options = {'tokenize': 'porter'} @classmethod def index_note(cls, note): return NoteIndex.insert({ NoteIndex.docid: note.id, NoteIndex.content: note.content}).execute() class BaseTestCase(ModelTestCase): requires = [Note, NoteIndex] def setUp(self): super(BaseTestCase, self).setUp() functions_to_patch = [ 'peewee._sqlite_date_part', 'peewee._sqlite_date_trunc', 'peewee._sqlite_regexp', 'playhouse.sqlite_ext.bm25', 'playhouse.sqlite_ext.rank', ] def uncallable(fn): def side_effect(): raise AssertionError(fn.__name__) return side_effect self._patches = [ mock.patch(fn, side_effect=uncallable(fn)) for fn in functions_to_patch] for patch in self._patches: patch.start() def tearDown(self): super(BaseTestCase, self).tearDown() if not db.is_closed(): db.close() for patch in self._patches: patch.stop() class TestRank(BaseTestCase): test_content = ( ('A faith is a necessity to a man. 
Woe to him who believes in ' 'nothing.'), ('All who call on God in true faith, earnestly from the heart, will ' 'certainly be heard, and will receive what they have asked and ' 'desired.'), ('Be faithful in small things because it is in them that your ' 'strength lies.'), ('Faith consists in believing when it is beyond the power of reason ' 'to believe.'), ('Faith has to do with things that are not seen and hope with things ' 'that are not at hand.')) def setUp(self): super(TestRank, self).setUp() with db.atomic(): for content in self.test_content: note = Note.create(content=content) NoteIndex.index_note(note) def test_scoring_lucene(self): query = NoteIndex.search_lucene('things', [1.0], with_score=True) results = [(item[0], round(item[1], 2)) for item in query.tuples()] self.assertEqual(results, [ (self.test_content[4], -0.17), (self.test_content[2], -0.14)]) query = NoteIndex.search_lucene('faithful thing', [1.0], with_score=True) results = [(item[0], round(item[1], 2)) for item in query.tuples()] self.assertEqual(results, [ (self.test_content[4], 0.08), (self.test_content[2], 0.1)]) def test_scoring(self): query = NoteIndex.search('things', with_score=True).tuples() self.assertEqual(query[:], [ (self.test_content[4], -2.0 / 3), (self.test_content[2], -1.0 / 3), ]) query = NoteIndex.search('faithful', with_score=True).tuples() self.assertEqual([row[1] for row in query[:]], [ -.2, -.2, -.2, -.2, -.2 ]) def test_scoring_bm25(self): query = NoteIndex.search_bm25('things', [1.0], with_score=True) results = [(item[0], round(item[1], 2)) for item in query.tuples()] self.assertEqual(results, [ (self.test_content[4], -.45), (self.test_content[2], -.36), ]) query = (NoteIndex .select(NoteIndex.content, fn.fts_bm25( fn.matchinfo(NoteIndex.as_entity(), 'pcnalx'), 1.0).alias('score')) .where(NoteIndex.match('things')) .order_by(SQL('score')) .tuples()) results = [(item[0], round(item[1], 2)) for item in query] self.assertEqual(results, [ (self.test_content[4], -.45), (self.test_content[2], -.36), ]) class TestRegexp(BaseTestCase): def setUp(self): super(TestRegexp, self).setUp() self.test_content = ( 'foo bar baz', 'FOO nugBaRz', '01234 56789') for content in self.test_content: Note.create(content=content) def test_regexp(self): def assertMatches(regex, expected): query = (Note .select(Note.content.regexp(regex)) .order_by(Note.id) .tuples()) self.assertEqual([row[0] for row in query], expected) assertMatches('foo', [1, 1, 0]) assertMatches('BAR', [1, 1, 0]) assertMatches('\\bBAR\\b', [1, 0, 0]) assertMatches('[0-4]+', [0, 0, 1]) assertMatches('[0-4]{5}', [0, 0, 1]) assertMatches('[0-4]{6}', [0, 0, 0]) assertMatches('', [1, 1, 1]) assertMatches(None, [None, None, None]) class TestDateFunctions(BaseTestCase): def setUp(self): super(TestDateFunctions, self).setUp() dt = datetime.datetime self.test_datetimes = ( dt(2000, 1, 2, 3, 4, 5, 6), dt(2001, 2, 3, 4, 5, 6), dt(1999, 12, 31, 23, 59, 59), dt(2010, 3, 1), ) for i, value in enumerate(self.test_datetimes): Note.create(content=str(i), timestamp=value) def test_date_part(self): def Q(part): query = (Note .select(fn.date_part(part, Note.timestamp)) .order_by(Note.id) .tuples()) return [row[0] for row in query] self.assertEqual(Q('year'), [2000, 2001, 1999, 2010]) self.assertEqual(Q('month'), [1, 2, 12, 3]) self.assertEqual(Q('day'), [2, 3, 31, 1]) self.assertEqual(Q('hour'), [3, 4, 23, 0]) self.assertEqual(Q('minute'), [4, 5, 59, 0]) self.assertEqual(Q('second'), [5, 6, 59, 0]) self.assertEqual(Q(None), [None, None, None, None]) 
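# Invalid part names and NULL inputs degrade to NULL rather than raising;
# the next assertions pin that down for the bogus 'foo' and empty-string
# parts as well.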
self.assertEqual(Q('foo'), [None, None, None, None]) self.assertEqual(Q(''), [None, None, None, None]) sql = 'SELECT date_part(?, ?)' result, = db.execute_sql(sql, ('year', None)).fetchone() self.assertIsNone(result) result, = db.execute_sql(sql, ('foo', None)).fetchone() self.assertIsNone(result) result, = db.execute_sql(sql, (None, None)).fetchone() self.assertIsNone(result) def test_date_trunc(self): def Q(part): query = (Note .select(fn.date_trunc(part, Note.timestamp)) .order_by(Note.id) .tuples()) return [row[0] for row in query] self.assertEqual(Q('year'), ['2000', '2001', '1999', '2010']) self.assertEqual(Q('month'), [ '2000-01', '2001-02', '1999-12', '2010-03']) self.assertEqual(Q('day'), [ '2000-01-02', '2001-02-03', '1999-12-31', '2010-03-01']) self.assertEqual(Q('hour'), [ '2000-01-02 03', '2001-02-03 04', '1999-12-31 23', '2010-03-01 00']) self.assertEqual(Q('minute'), [ '2000-01-02 03:04', '2001-02-03 04:05', '1999-12-31 23:59', '2010-03-01 00:00']) self.assertEqual(Q('second'), [ '2000-01-02 03:04:05', '2001-02-03 04:05:06', '1999-12-31 23:59:59', '2010-03-01 00:00:00']) self.assertEqual(Q(None), [None, None, None, None]) self.assertEqual(Q('foo'), [None, None, None, None]) self.assertEqual(Q(''), [None, None, None, None]) class TestMurmurHash(BaseTestCase): def assertHash(self, s, e): curs = db.execute_sql('select murmurhash(?)', (s,)) result = curs.fetchone()[0] self.assertEqual(result, e) def test_murmur_hash(self): self.assertHash('testkey', 3599487917) self.assertHash('murmur', 4160318927) self.assertHash('', 0) self.assertHash('this is a test of a longer string', 3556042345) self.assertHash(None, None) peewee-2.10.2/playhouse/tests/test_sqlite_ext.py000066400000000000000000001327571316645060400220370ustar00rootroot00000000000000import os import sqlite3 try: sqlite3.enable_callback_tracebacks(True) except AttributeError: pass from peewee import * from peewee import print_ from playhouse.sqlite_ext import * from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import PeeweeTestCase from playhouse.tests.base import skip_if from playhouse.tests.base import skip_unless # Use a disk-backed db since memory dbs only exist for a single connection and # we need to share the db w/2 for the locking tests. additionally, set the # sqlite_busy_timeout to 100ms so when we test locking it doesn't take forever ext_db = database_initializer.get_database( 'sqlite', c_extensions=False, db_class=SqliteExtDatabase, timeout=0.1, use_speedups=False) CLOSURE_EXTENSION = os.environ.get('CLOSURE_EXTENSION') if not CLOSURE_EXTENSION and os.path.exists('closure.so'): CLOSURE_EXTENSION = 'closure.so' FTS5_EXTENSION = FTS5Model.fts5_installed() # Test aggregate. class WeightedAverage(object): def __init__(self): self.total_weight = 0.0 self.total_ct = 0.0 def step(self, value, wt=None): wt = wt or 1.0 self.total_weight += wt self.total_ct += wt * value def finalize(self): if self.total_weight != 0.0: return self.total_ct / self.total_weight return 0.0 # Test collations. def _cmp(l, r): if l < r: return -1 elif r < l: return 1 return 0 def collate_reverse(s1, s2): return -_cmp(s1, s2) @ext_db.collation() def collate_case_insensitive(s1, s2): return _cmp(s1.lower(), s2.lower()) # Test functions. def title_case(s): return s.title() @ext_db.func() def rstrip(s, n): return s.rstrip(n) # Register test aggregates / collations / functions. 
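# (register_aggregate(klass, name, num_params) exposes the step()/finalize()
# protocol implemented by WeightedAverage above, while register_collation()
# and register_function() default the SQL-visible name to the callable's
# __name__ -- hence `collate collate_reverse` and fn.title_case(...) in the
# tests below. The stdlib equivalents, as a sketch against a throwaway
# connection:
#
#     conn = sqlite3.connect(':memory:')
#     conn.create_aggregate('weighted_avg', 1, WeightedAverage)
#     conn.create_collation('collate_reverse', collate_reverse)
#     conn.create_function('title_case', 1, title_case)
# )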
ext_db.register_aggregate(WeightedAverage, 'weighted_avg', 1) ext_db.register_aggregate(WeightedAverage, 'weighted_avg2', 2) ext_db.register_collation(collate_reverse) ext_db.register_function(title_case) class BaseExtModel(Model): class Meta: database = ext_db class Post(BaseExtModel): message = TextField() class FTSPost(Post, FTSModel): """Automatically managed and populated via the Post model.""" # Need to specify this, since the `Post.id` primary key will take # precedence. docid = DocIDField() class Meta: extension_options = { 'content': Post, 'tokenize': 'porter'} class FTSDoc(FTSModel): """Manually managed and populated using queries.""" message = TextField() class Meta: database = ext_db extension_options = {'tokenize': 'porter'} class ManagedDoc(FTSModel): message = TextField() class Meta: database = ext_db extension_options = {'tokenize': 'porter', 'content': Post.message} class MultiColumn(FTSModel): c1 = CharField(default='') c2 = CharField(default='') c3 = CharField(default='') c4 = IntegerField() class Meta: database = ext_db extension_options = {'tokenize': 'porter'} class FTS5Test(FTS5Model): title = SearchField() data = SearchField() misc = SearchField(unindexed=True) class Meta: database = ext_db class Values(BaseExtModel): klass = IntegerField() value = FloatField() weight = FloatField() class RowIDModel(BaseExtModel): rowid = RowIDField() data = IntegerField() class TestVirtualModel(VirtualModel): class Meta: database = ext_db extension_module = 'test_ext' extension_options = { 'foo': 'bar', 'baze': 'nugget'} primary_key = False class APIData(BaseExtModel): data = JSONField() value = TextField() class TestVirtualModelChild(TestVirtualModel): pass def json_installed(): if sqlite3.sqlite_version_info < (3, 9, 0): return False # Test in-memory DB to determine if the FTS5 extension is installed. 
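# (More precisely, the probe below exercises the JSON1 extension's json()
# function; FTS5 availability is detected separately above via
# FTS5Model.fts5_installed(). The same probe by hand, using only the
# stdlib, would look like:
#
#     sqlite3.connect(':memory:').execute('select json(?)', ('{}',))
#
# which raises sqlite3.OperationalError when JSON1 is not compiled in.)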
tmp_db = sqlite3.connect(':memory:') try: tmp_db.execute('select json(?)', (1337,)) except: return False finally: tmp_db.close() return True @skip_unless(json_installed) class TestJSONField(ModelTestCase): requires = [ APIData, ] test_data = [ {'metadata': {'tags': ['python', 'sqlite']}, 'title': 'My List of Python and SQLite Resources', 'url': 'http://charlesleifer.com/blog/my-list-of-python-and-sqlite-resources/'}, {'metadata': {'tags': ['nosql', 'python', 'sqlite', 'cython']}, 'title': "Using SQLite4's LSM Storage Engine as a Stand-alone NoSQL Database with Python", 'url': 'http://charlesleifer.com/blog/using-sqlite4-s-lsm-storage-engine-as-a-stand-alone-nosql-database-with-python/'}, {'metadata': {'tags': ['sqlite', 'search', 'python', 'peewee']}, 'title': 'Building the SQLite FTS5 Search Extension', 'url': 'http://charlesleifer.com/blog/building-the-sqlite-fts5-search-extension/'}, {'metadata': {'tags': ['nosql', 'python', 'unqlite', 'cython']}, 'title': 'Introduction to the fast new UnQLite Python Bindings', 'url': 'http://charlesleifer.com/blog/introduction-to-the-fast-new-unqlite-python-bindings/'}, {'metadata': {'tags': ['python', 'walrus', 'redis', 'nosql']}, 'title': 'Alternative Redis-Like Databases with Python', 'url': 'http://charlesleifer.com/blog/alternative-redis-like-databases-with-python/'}, ] def setUp(self): super(TestJSONField, self).setUp() with ext_db.execution_context(): for entry in self.test_data: APIData.create(data=entry, value=entry['title']) self.Q = APIData.select().order_by(APIData.id) def test_extract(self): titles = self.Q.select(APIData.data.extract('title')).tuples() self.assertEqual([row for row, in titles], [ 'My List of Python and SQLite Resources', 'Using SQLite4\'s LSM Storage Engine as a Stand-alone NoSQL Database with Python', 'Building the SQLite FTS5 Search Extension', 'Introduction to the fast new UnQLite Python Bindings', 'Alternative Redis-Like Databases with Python', ]) tags = (self.Q .select(APIData.data.extract('metadata.tags').alias('tags')) .dicts()) self.assertEqual(list(tags), [ {'tags': ['python', 'sqlite']}, {'tags': ['nosql', 'python', 'sqlite', 'cython']}, {'tags': ['sqlite', 'search', 'python', 'peewee']}, {'tags': ['nosql', 'python', 'unqlite', 'cython']}, {'tags': ['python', 'walrus', 'redis', 'nosql']}, ]) missing = self.Q.select(APIData.data.extract('foo.bar')).tuples() self.assertEqual([row for row, in missing], [None] * 5) def test_length(self): tag_len = (self.Q .select(APIData.data.length('metadata.tags').alias('len')) .dicts()) self.assertEqual(list(tag_len), [ {'len': 2}, {'len': 4}, {'len': 4}, {'len': 4}, {'len': 4}, ]) def test_remove(self): query = (self.Q .select( fn.json_extract( APIData.data.remove('metadata.tags'), '$.metadata')) .tuples()) self.assertEqual([row for row, in query], ['{}'] * 5) Clone = APIData.alias() query = (APIData .update( data=(Clone .select(Clone.data.remove('metadata.tags[2]')) .where(Clone.id == APIData.id))) .where( APIData.value.contains('LSM Storage') | APIData.value.contains('UnQLite Python')) .execute()) self.assertEqual(query, 2) tag_len = (self.Q .select(APIData.data.length('metadata.tags').alias('len')) .dicts()) self.assertEqual(list(tag_len), [ {'len': 2}, {'len': 3}, {'len': 4}, {'len': 3}, {'len': 4}, ]) def test_set(self): query = (self.Q .select( fn.json_extract( APIData.data.set( 'metadata', {'k1': {'k2': 'bar'}}), '$.metadata.k1')) .tuples()) self.assertEqual( [json.loads(row) for row, in query], [{'k2': 'bar'}] * 5) Clone = APIData.alias() query = (APIData .update( 
data=(Clone .select(Clone.data.set('title', 'hello')) .where(Clone.id == APIData.id))) .where(APIData.value.contains('LSM Storage')) .execute()) self.assertEqual(query, 1) titles = self.Q.select(APIData.data.extract('title')).tuples() for idx, (row,) in enumerate(titles): if idx == 1: self.assertEqual(row, 'hello') else: self.assertNotEqual(row, 'hello') def test_multi_set(self): Clone = APIData.alias() set_query = (Clone .select(Clone.data.set( 'foo', 'foo value', 'tagz', ['list', 'of', 'tags'], 'x.y.z', 3, 'metadata.foo', None, 'bar.baze', True)) .where(Clone.id == APIData.id)) query = (APIData .update(data=set_query) .where(APIData.value.contains('LSM Storage')) .execute()) self.assertEqual(query, 1) result = APIData.select().where(APIData.value.contains('LSM storage')).get() self.assertEqual(result.data, { 'bar': {'baze': 1}, 'foo': 'foo value', 'metadata': {'tags': ['nosql', 'python', 'sqlite', 'cython'], 'foo': None}, 'tagz': ['list', 'of', 'tags'], 'title': 'Using SQLite4\'s LSM Storage Engine as a Stand-alone NoSQL Database with Python', 'url': 'http://charlesleifer.com/blog/using-sqlite4-s-lsm-storage-engine-as-a-stand-alone-nosql-database-with-python/', 'x': {'y': {'z': 3}}, }) def test_children(self): children = APIData.data.children().alias('children') query = (APIData .select(children.c.value.alias('value')) .from_(APIData, children) .where(children.c.key.in_(['title', 'url'])) .order_by(SQL('1')) .tuples()) self.assertEqual([row for row, in query], [ 'Alternative Redis-Like Databases with Python', 'Building the SQLite FTS5 Search Extension', 'Introduction to the fast new UnQLite Python Bindings', 'My List of Python and SQLite Resources', 'Using SQLite4\'s LSM Storage Engine as a Stand-alone NoSQL Database with Python', 'http://charlesleifer.com/blog/alternative-redis-like-databases-with-python/', 'http://charlesleifer.com/blog/building-the-sqlite-fts5-search-extension/', 'http://charlesleifer.com/blog/introduction-to-the-fast-new-unqlite-python-bindings/', 'http://charlesleifer.com/blog/my-list-of-python-and-sqlite-resources/', 'http://charlesleifer.com/blog/using-sqlite4-s-lsm-storage-engine-as-a-stand-alone-nosql-database-with-python/', ]) class TestFTSModel(ModelTestCase): requires = [ FTSDoc, ManagedDoc, FTSPost, Post, MultiColumn, ] messages = [ ('A faith is a necessity to a man. 
Woe to him who believes in ' 'nothing.'), ('All who call on God in true faith, earnestly from the heart, will ' 'certainly be heard, and will receive what they have asked and ' 'desired.'), ('Be faithful in small things because it is in them that your ' 'strength lies.'), ('Faith consists in believing when it is beyond the power of reason ' 'to believe.'), ('Faith has to do with things that are not seen and hope with things ' 'that are not at hand.')] values = [ ('aaaaa bbbbb ccccc ddddd', 'aaaaa ccccc', 'zzzzz zzzzz', 1), ('bbbbb ccccc ddddd eeeee', 'bbbbb', 'zzzzz', 2), ('ccccc ccccc ddddd fffff', 'ccccc', 'yyyyy', 3), ('ddddd', 'ccccc', 'xxxxx', 4)] def test_virtual_model_options(self): compiler = ext_db.compiler() sql, params = compiler.create_table(TestVirtualModel) self.assertEqual(sql, ( 'CREATE VIRTUAL TABLE "testvirtualmodel" USING test_ext ' '(baze=nugget, foo=bar)')) self.assertEqual(params, []) sql, params = compiler.create_table(TestVirtualModelChild) self.assertEqual(sql, ( 'CREATE VIRTUAL TABLE "testvirtualmodelchild" USING test_ext ' '("id" INTEGER NOT NULL PRIMARY KEY, baze=nugget, foo=bar)')) self.assertEqual(params, []) test_options = {'baze': 'nugz', 'huey': 'mickey'} sql, params = compiler.create_table( TestVirtualModel, options=test_options) self.assertEqual(sql, ( 'CREATE VIRTUAL TABLE "testvirtualmodel" USING test_ext ' '(baze=nugz, foo=bar, huey=mickey)')) self.assertEqual(params, []) def test_pk_autoincrement(self): class AutoInc(Model): id = PrimaryKeyAutoIncrementField() foo = CharField() compiler = ext_db.compiler() sql, params = compiler.create_table(AutoInc) self.assertEqual( sql, 'CREATE TABLE "autoinc" ' '("id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' '"foo" VARCHAR(255) NOT NULL)') def assertMessages(self, query, indices): self.assertEqual([x.message for x in query], [ self.messages[i] for i in indices]) def test_fts_manual(self): messages = [FTSDoc.create(message=msg) for msg in self.messages] q = (FTSDoc .select() .where(FTSDoc.match('believe')) .order_by(FTSDoc.docid)) self.assertMessages(q, [0, 3]) q = FTSDoc.search('believe') self.assertMessages(q, [3, 0]) q = FTSDoc.search('things', with_score=True) self.assertEqual([(x.message, x.score) for x in q], [ (self.messages[4], -2.0 / 3), (self.messages[2], -1.0 / 3), ]) def test_fts_delete_row(self): posts = [Post.create(message=message) for message in self.messages] FTSPost.rebuild() query = (FTSPost .select(FTSPost, FTSPost.rank().alias('score')) .where(FTSPost.match('believe')) .order_by(FTSPost.docid)) self.assertMessages(query, [0, 3]) fts_posts = FTSPost.select(FTSPost.docid).order_by(FTSPost.docid) for fts_post in fts_posts: self.assertEqual(fts_post.delete_instance(), 1) for post in posts: self.assertEqual( (FTSPost.delete() .where(FTSPost.message == post.message).execute()), 1) # None of the deletes went through. This is because the table is # managed. 
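# (FTSPost is declared with `content=Post`, making it an external-content
# index: row data lives in the post table, so DELETEs issued against the
# FTS table leave the visible rows untouched. The supported way to clear
# such an index -- a sketch using the models above -- is to remove rows
# from the content table and rebuild:
#
#     Post.delete().execute()
#     FTSPost.rebuild()
#
# FTSDoc, which owns its own storage, deletes normally, as shown next.)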
self.assertEqual(FTSPost.select().count(), 5) fts_docs = [FTSDoc.create(message=message) for message in self.messages] self.assertEqual(FTSDoc.select().count(), 5) for fts_doc in fts_docs: self.assertEqual(FTSDoc.delete().where( FTSDoc.message == fts_doc.message).execute(), 1) self.assertEqual(FTSDoc.select().count(), 0) def _create_multi_column(self): for c1, c2, c3, c4 in self.values: MultiColumn.create(c1=c1, c2=c2, c3=c3, c4=c4) def test_fts_multi_column(self): def assertResults(term, expected): results = [ (x.c4, round(x.score, 2)) for x in MultiColumn.search(term, with_score=True)] self.assertEqual(results, expected) self._create_multi_column() # `bbbbb` appears two times in `c1`, one time in `c2`. assertResults('bbbbb', [ (2, -1.5), # 1/2 + 1/1 (1, -0.5), # 1/2 ]) # `ccccc` appears four times in `c1`, three times in `c2`. assertResults('ccccc', [ (3, -.83), # 2/4 + 1/3 (1, -.58), # 1/4 + 1/3 (4, -.33), # 1/3 (2, -.25), # 1/4 ]) # `zzzzz` appears three times in c3. assertResults('zzzzz', [ (1, -.67), (2, -.33), ]) self.assertEqual( [x.score for x in MultiColumn.search('ddddd', with_score=True)], [-.25, -.25, -.25, -.25]) def test_weighting(self): self._create_multi_column() def assertResults(method, term, weights, expected): results = [ (x.c4, round(x.score, 2)) for x in method(term, weights=weights, with_score=True)] self.assertEqual(results, expected) assertResults(MultiColumn.search, 'bbbbb', None, [ (2, -1.5), # 1/2 + 1/1 (1, -0.5), # 1/2 ]) assertResults(MultiColumn.search, 'bbbbb', [1., 5., 0.], [ (2, -5.5), # 1/2 + (5 * 1/1) (1, -0.5), # 1/2 + (5 * 0) ]) assertResults(MultiColumn.search, 'bbbbb', [1., .5, 0.], [ (2, -1.), # 1/2 + (.5 * 1/1) (1, -0.5), # 1/2 + (.5 * 0) ]) assertResults(MultiColumn.search, 'bbbbb', [1., -1., 0.], [ (1, -0.5), # 1/2 + (-1 * 0) (2, 0.5), # 1/2 + (-1 * 1/1) ]) # BM25 assertResults(MultiColumn.search_bm25, 'bbbbb', None, [ (2, -0.85), (1, -0.)]) assertResults(MultiColumn.search_bm25, 'bbbbb', [1., 5., 0.], [ (2, -4.24), (1, -0.)]) assertResults(MultiColumn.search_bm25, 'bbbbb', [1., .5, 0.], [ (2, -0.42), (1, -0.)]) assertResults(MultiColumn.search_bm25, 'bbbbb', [1., -1., 0.], [ (1, -0.), (2, 0.85)]) def test_bm25(self): def assertResults(term, col_idx, expected): query = MultiColumn.search_bm25(term, [1.0, 0, 0, 0], True) self.assertEqual( [(mc.c4, round(mc.score, 2)) for mc in query], expected) self._create_multi_column() MultiColumn.create(c1='aaaaa fffff', c4=5) assertResults('aaaaa', 1, [ (5, -0.39), (1, -0.3), ]) assertResults('fffff', 1, [ (5, -0.39), (3, -0.3), ]) assertResults('eeeee', 1, [ (2, -0.97), ]) # No column specified, use the first text field. query = MultiColumn.search_bm25('fffff', [1.0, 0, 0, 0], True) self.assertEqual([(mc.c4, round(mc.score, 2)) for mc in query], [ (5, -0.39), (3, -0.3), ]) # Use helpers. query = (MultiColumn .select( MultiColumn.c4, MultiColumn.bm25(1.0).alias('score')) .where(MultiColumn.match('aaaaa')) .order_by(SQL('score'))) self.assertEqual([(mc.c4, round(mc.score, 2)) for mc in query], [ (5, -0.39), (1, -0.3), ]) def test_bm25_alt_corpus(self): for message in self.messages: FTSDoc.create(message=message) def assertResults(term, expected): query = FTSDoc.search_bm25(term, with_score=True) cleaned = [ (round(doc.score, 2), ' '.join(doc.message.split()[:2])) for doc in query] self.assertEqual(cleaned, expected) assertResults('things', [ (-0.45, 'Faith has'), (-0.36, 'Be faithful'), ]) # Indeterminate order since all are 0.0. All phrases contain the word # faith, so there is no meaningful score. 
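# (The porter tokenizer configured on FTSDoc stems 'faith' and 'faithful'
# to the same token, so every document in this corpus matches the query
# and BM25 hands each of them the identical degenerate score.)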
results = [round(x.score, 2) for x in FTSDoc.search_bm25('faith', with_score=True)] self.assertEqual(results, [ -0., -0., -0., -0., -0.]) def _test_fts_auto(self, ModelClass): posts = [] for message in self.messages: posts.append(Post.create(message=message)) # Nothing matches, index is not built. pq = ModelClass.select().where(ModelClass.match('faith')) self.assertEqual(list(pq), []) ModelClass.rebuild() ModelClass.optimize() # it will stem faithful -> faith b/c we use the porter tokenizer pq = (ModelClass .select() .where(ModelClass.match('faith')) .order_by(ModelClass.docid)) self.assertMessages(pq, range(len(self.messages))) pq = (ModelClass .select() .where(ModelClass.match('believe')) .order_by(ModelClass.docid)) self.assertMessages(pq, [0, 3]) pq = (ModelClass .select() .where(ModelClass.match('thin*')) .order_by(ModelClass.docid)) self.assertMessages(pq, [2, 4]) pq = (ModelClass .select() .where(ModelClass.match('"it is"')) .order_by(ModelClass.docid)) self.assertMessages(pq, [2, 3]) pq = ModelClass.search('things', with_score=True) self.assertEqual([(x.message, x.score) for x in pq], [ (self.messages[4], -2.0 / 3), (self.messages[2], -1.0 / 3), ]) pq = (ModelClass .select(ModelClass.rank()) .where(ModelClass.match('faithful')) .tuples()) self.assertEqual([x[0] for x in pq], [-.2] * 5) pq = (ModelClass .search('faithful', with_score=True) .dicts()) self.assertEqual([x['score'] for x in pq], [-.2] * 5) def test_fts_auto_model(self): self._test_fts_auto(FTSPost) def test_fts_auto_field(self): self._test_fts_auto(ManagedDoc) class TestUserDefinedCallbacks(ModelTestCase): requires = [ Post, Values, ] def test_custom_agg(self): data = ( (1, 3.4, 1.0), (1, 6.4, 2.3), (1, 4.3, 0.9), (2, 3.4, 1.4), (3, 2.7, 1.1), (3, 2.5, 1.1), ) for klass, value, wt in data: Values.create(klass=klass, value=value, weight=wt) vq = (Values .select( Values.klass, fn.weighted_avg(Values.value).alias('wtavg'), fn.avg(Values.value).alias('avg')) .group_by(Values.klass)) q_data = [(v.klass, v.wtavg, v.avg) for v in vq] self.assertEqual(q_data, [ (1, 4.7, 4.7), (2, 3.4, 3.4), (3, 2.6, 2.6), ]) vq = (Values .select( Values.klass, fn.weighted_avg2(Values.value, Values.weight).alias('wtavg'), fn.avg(Values.value).alias('avg')) .group_by(Values.klass)) q_data = [(v.klass, str(v.wtavg)[:4], v.avg) for v in vq] self.assertEqual(q_data, [ (1, '5.23', 4.7), (2, '3.4', 3.4), (3, '2.6', 2.6), ]) def test_custom_collation(self): for i in [1, 4, 3, 5, 2]: Post.create(message='p%d' % i) pq = Post.select().order_by(Clause(Post.message, SQL('collate collate_reverse'))) self.assertEqual([p.message for p in pq], ['p5', 'p4', 'p3', 'p2', 'p1']) def test_collation_decorator(self): posts = [Post.create(message=m) for m in ['aaa', 'Aab', 'ccc', 'Bba', 'BbB']] pq = Post.select().order_by(collate_case_insensitive.collation(Post.message)) self.assertEqual([p.message for p in pq], [ 'aaa', 'Aab', 'Bba', 'BbB', 'ccc', ]) def test_custom_function(self): p1 = Post.create(message='this is a test') p2 = Post.create(message='another TEST') sq = Post.select().where(fn.title_case(Post.message) == 'This Is A Test') self.assertEqual(list(sq), [p1]) sq = Post.select(fn.title_case(Post.message)).tuples() self.assertEqual([x[0] for x in sq], [ 'This Is A Test', 'Another Test', ]) def test_function_decorator(self): [Post.create(message=m) for m in ['testing', 'chatting ', ' foo']] pq = Post.select(fn.rstrip(Post.message, 'ing')).order_by(Post.id) self.assertEqual([x[0] for x in pq.tuples()], [ 'test', 'chatting ', ' foo']) pq = 
Post.select(fn.rstrip(Post.message, ' ')).order_by(Post.id) self.assertEqual([x[0] for x in pq.tuples()], [ 'testing', 'chatting', ' foo']) def test_lock_type_transaction(self): conn = ext_db.get_conn() def test_locked_dbw(isolation_level): with ext_db.transaction(isolation_level): Post.create(message='p1') # Will not be saved. conn2 = ext_db._connect(ext_db.database, **ext_db.connect_kwargs) conn2.execute('insert into post (message) values (?);', ('x1',)) self.assertRaises(sqlite3.OperationalError, test_locked_dbw, 'exclusive') self.assertRaises(sqlite3.OperationalError, test_locked_dbw, 'immediate') self.assertRaises(sqlite3.OperationalError, test_locked_dbw, 'deferred') def test_locked_dbr(isolation_level): with ext_db.transaction(isolation_level): Post.create(message='p2') other_db = database_initializer.get_database( 'sqlite', db_class=SqliteExtDatabase, timeout=0.1, use_speedups=False) res = other_db.execute_sql('select message from post') return res.fetchall() # no read-only stuff with exclusive locks self.assertRaises(OperationalError, test_locked_dbr, 'exclusive') # ok to do readonly w/immediate and deferred (p2 is saved twice) self.assertEqual(test_locked_dbr('immediate'), []) self.assertEqual(test_locked_dbr('deferred'), [('p2',)]) # test everything by hand, by setting the default connection to # 'exclusive' and turning off autocommit behavior ext_db.set_autocommit(False) conn.isolation_level = 'exclusive' Post.create(message='p3') # uncommitted # now, open a second connection w/exclusive and try to read, it will # be locked conn2 = ext_db._connect(ext_db.database, **ext_db.connect_kwargs) conn2.isolation_level = 'exclusive' self.assertRaises(sqlite3.OperationalError, conn2.execute, 'select * from post') # rollback the first connection's transaction, releasing the exclusive lock conn.rollback() ext_db.set_autocommit(True) with ext_db.transaction('deferred'): Post.create(message='p4') res = conn2.execute('select message from post order by message;') self.assertEqual([x[0] for x in res.fetchall()], [ 'p2', 'p2', 'p4']) class TestRowIDField(ModelTestCase): requires = [RowIDModel] def test_model_meta(self): self.assertEqual(RowIDModel._meta.sorted_field_names, ['rowid', 'data']) self.assertEqual([f.name for f in RowIDModel._meta.declared_fields], ['data']) self.assertEqual(RowIDModel._meta.primary_key.name, 'rowid') self.assertTrue(RowIDModel._meta.auto_increment) def test_rowid_field(self): r1 = RowIDModel.create(data=10) self.assertEqual(r1.rowid, 1) self.assertEqual(r1.data, 10) r2 = RowIDModel.create(data=20) self.assertEqual(r2.rowid, 2) self.assertEqual(r2.data, 20) query = RowIDModel.select().where(RowIDModel.rowid == 2) sql, params = query.sql() self.assertEqual(sql, ( 'SELECT "t1"."data" ' 'FROM "rowidmodel" AS t1 ' 'WHERE ("t1"."rowid" = ?)')) self.assertEqual(params, [2]) r_db = query.get() self.assertEqual(r_db.rowid, None) self.assertEqual(r_db.data, 20) r_db2 = query.select(RowIDModel.rowid, RowIDModel.data).get() self.assertEqual(r_db2.rowid, 2) self.assertEqual(r_db2.data, 20) def test_insert_with_rowid(self): RowIDModel.insert({RowIDModel.rowid: 5, 'data': 1}).execute() self.assertEqual(5, RowIDModel.select(RowIDModel.rowid).first().rowid) def test_insert_many_with_rowid_without_field_validation(self): RowIDModel.insert_many([{RowIDModel.rowid: 5, 'data': 1}], validate_fields=False).execute() self.assertEqual(5, RowIDModel.select(RowIDModel.rowid).first().rowid) def test_insert_many_with_rowid_with_field_validation(self): RowIDModel.insert_many([{RowIDModel.rowid: 
5, 'data': 1}], validate_fields=True).execute() self.assertEqual(5, RowIDModel.select(RowIDModel.rowid).first().rowid) class TestTransitiveClosure(PeeweeTestCase): def test_model_factory(self): class Category(BaseExtModel): name = CharField() parent = ForeignKeyField('self', null=True) Closure = ClosureTable(Category) self.assertEqual(Closure._meta.extension_module, 'transitive_closure') self.assertEqual(Closure._meta.columns, {}) self.assertEqual(Closure._meta.fields, {}) self.assertFalse(Closure._meta.primary_key) self.assertEqual(Closure._meta.extension_options, { 'idcolumn': 'id', 'parentcolumn': 'parent_id', 'tablename': 'category', }) class Alt(BaseExtModel): pk = PrimaryKeyField() ref = ForeignKeyField('self', null=True) Closure = ClosureTable(Alt) self.assertEqual(Closure._meta.columns, {}) self.assertEqual(Closure._meta.fields, {}) self.assertFalse(Closure._meta.primary_key) self.assertEqual(Closure._meta.extension_options, { 'idcolumn': 'pk', 'parentcolumn': 'ref_id', 'tablename': 'alt', }) class NoForeignKey(BaseExtModel): pass self.assertRaises(ValueError, ClosureTable, NoForeignKey) @skip_unless(lambda: FTS5_EXTENSION) class TestFTS5Extension(ModelTestCase): requires = [FTS5Test] corpus = ( ('foo aa bb', 'aa bb cc ' * 10, 1), ('bar bb cc', 'bb cc dd ' * 9, 2), ('baze cc dd', 'cc dd ee ' * 8, 3), ('nug aa dd', 'bb cc ' * 7, 4), ) def setUp(self): super(TestFTS5Extension, self).setUp() for title, data, misc in self.corpus: FTS5Test.create(title=title, data=data, misc=misc) def test_fts5_options(self): class Test1(FTS5Model): f1 = SearchField() f2 = SearchField(unindexed=True) f3 = SearchField() class Meta: database = ext_db extension_options = { 'prefix': [2, 3], 'tokenize': 'porter unicode61', 'content': Post, 'content_rowid': Post.id, } create_sql = Test1.sqlall() self.assertEqual(len(create_sql), 1) self.assertEqual(create_sql[0], ( 'CREATE VIRTUAL TABLE "test1" USING fts5 (' '"f1" , "f2" UNINDEXED, "f3" , ' 'content="post", content_rowid="id", ' 'prefix=\'2,3\', tokenize="porter unicode61")')) def assertResults(self, query, expected, scores=False, alias='score'): if scores: results = [ (obj.title, round(getattr(obj, alias), 7)) for obj in query] else: results = [obj.title for obj in query] self.assertEqual(results, expected) def test_search(self): query = FTS5Test.search('bb') self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc" ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) ORDER BY rank'), ['bb'])) self.assertResults(query, ['nug aa dd', 'foo aa bb', 'bar bb cc']) query = FTS5Test.search('bb', with_score=True) self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc", rank AS score ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) ORDER BY score'), ['bb'])) self.assertResults(query, [ ('nug aa dd', -2e-06), ('foo aa bb', -1.9e-06), ('bar bb cc', -1.9e-06)], True) query = FTS5Test.search('aa', with_score=True, score_alias='s') self.assertResults(query, [ ('foo aa bb', -1.9e-06), ('nug aa dd', -1.2e-06), ], True, 's') def test_search_bm25(self): query = FTS5Test.search_bm25('bb') self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc" ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) ORDER BY rank'), ['bb'])) self.assertResults(query, ['nug aa dd', 'foo aa bb', 'bar bb cc']) query = FTS5Test.search_bm25('bb', with_score=True) self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc", rank AS score ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) 
ORDER BY score'), ['bb'])) self.assertResults(query, [ ('nug aa dd', -2e-06), ('foo aa bb', -1.9e-06), ('bar bb cc', -1.9e-06)], True) def test_search_bm25_scores(self): query = FTS5Test.search_bm25('bb', {'title': 5.0}) self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc" ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) ORDER BY bm25("fts5test", ?, ?, ?)'), ['bb', 5.0, 1.0, 1.0])) self.assertResults(query, ['bar bb cc', 'foo aa bb', 'nug aa dd']) query = FTS5Test.search_bm25('bb', {'title': 5.0}, True) self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc", ' 'bm25("fts5test", ?, ?, ?) AS score ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) ORDER BY score'), [5.0, 1.0, 1.0, 'bb'])) self.assertResults(query, [ ('bar bb cc', -2e-06), ('foo aa bb', -2e-06), ('nug aa dd', -2e-06)], True) def test_set_rank(self): FTS5Test.set_rank('bm25(10.0, 1.0)') query = FTS5Test.search('bb', with_score=True) self.assertEqual(query.sql(), ( ('SELECT "t1"."title", "t1"."data", "t1"."misc", rank AS score ' 'FROM "fts5test" AS t1 ' 'WHERE ("fts5test" MATCH ?) ORDER BY score'), ['bb'])) self.assertResults(query, [ ('bar bb cc', -2.1e-06), ('foo aa bb', -2.1e-06), ('nug aa dd', -2e-06)], True) def test_vocab_model(self): Vocab = FTS5Test.VocabModel() if Vocab.table_exists(): Vocab.drop_table() Vocab.create_table() query = Vocab.select().where(Vocab.term == 'aa') self.assertEqual( query.dicts()[:], [{'doc': 2, 'term': 'aa', 'cnt': 12}]) query = Vocab.select().where(Vocab.cnt > 20).order_by(Vocab.cnt) self.assertEqual(query.dicts()[:], [ {'doc': 3, 'term': 'bb', 'cnt': 28}, {'doc': 4, 'term': 'cc', 'cnt': 36}]) def test_validate_query(self): data = ( ('testing one two three', True), ('"testing one" "two" three', True), ('\'testing one\' "two" three', False), ('"\'testing one\'" "two" three', True), ('k-means', False), ('"k-means"', True), ('0123 AND (4 OR 5)', True), ('it\'s', False), ) for phrase, valid in data: self.assertEqual(FTS5Model.validate_query(phrase), valid) def test_clean_query(self): data = ( ('testing one', 'testing one'), ('testing "one"', 'testing "one"'), ('testing \'one\'', 'testing _one_'), ('foo; bar [1 2 3] it\'s', 'foo_ bar _1 2 3_ it_s'), ) for inval, outval in data: self.assertEqual(FTS5Model.clean_query(inval, '_'), outval) @skip_if(lambda: not CLOSURE_EXTENSION) class TestTransitiveClosureManyToMany(PeeweeTestCase): def setUp(self): super(TestTransitiveClosureManyToMany, self).setUp() ext_db.load_extension(CLOSURE_EXTENSION.rstrip('.so')) ext_db.close() def tearDown(self): super(TestTransitiveClosureManyToMany, self).tearDown() ext_db.unload_extension(CLOSURE_EXTENSION.rstrip('.so')) ext_db.close() def test_manytomany(self): class Person(BaseExtModel): name = CharField() class Relationship(BaseExtModel): person = ForeignKeyField(Person) relation = ForeignKeyField(Person, related_name='related_to') PersonClosure = ClosureTable( Person, referencing_class=Relationship, foreign_key=Relationship.relation, referencing_key=Relationship.person) ext_db.drop_tables([Person, Relationship, PersonClosure], safe=True) ext_db.create_tables([Person, Relationship, PersonClosure]) c = Person.create(name='charlie') m = Person.create(name='mickey') h = Person.create(name='huey') z = Person.create(name='zaizee') Relationship.create(person=c, relation=h) Relationship.create(person=c, relation=m) Relationship.create(person=h, relation=z) Relationship.create(person=h, relation=m) def assertPeople(query, expected): self.assertEqual(sorted([p.name 
for p in query]), expected) PC = PersonClosure assertPeople(PC.descendants(c), []) assertPeople(PC.ancestors(c), ['huey', 'mickey', 'zaizee']) assertPeople(PC.siblings(c), ['huey']) assertPeople(PC.descendants(h), ['charlie']) assertPeople(PC.ancestors(h), ['mickey', 'zaizee']) assertPeople(PC.siblings(h), ['charlie']) assertPeople(PC.descendants(z), ['charlie', 'huey']) assertPeople(PC.ancestors(z), []) assertPeople(PC.siblings(z), []) @skip_if(lambda: not CLOSURE_EXTENSION) class TestTransitiveClosureIntegration(PeeweeTestCase): tree = { 'books': [ {'fiction': [ {'scifi': [ {'hard scifi': []}, {'dystopian': []}]}, {'westerns': []}, {'classics': []}, ]}, {'non-fiction': [ {'biographies': []}, {'essays': []}, ]}, ] } def setUp(self): super(TestTransitiveClosureIntegration, self).setUp() ext_db.load_extension(CLOSURE_EXTENSION.rstrip('.so')) ext_db.close() def tearDown(self): super(TestTransitiveClosureIntegration, self).tearDown() ext_db.unload_extension(CLOSURE_EXTENSION.rstrip('.so')) ext_db.close() def initialize_models(self): class Category(BaseExtModel): name = CharField() parent = ForeignKeyField('self', null=True) @classmethod def g(cls, name): return cls.get(cls.name == name) Closure = ClosureTable(Category) ext_db.drop_tables([Category, Closure], True) ext_db.create_tables([Category, Closure]) def build_tree(nodes, parent=None): for name, subnodes in nodes.items(): category = Category.create(name=name, parent=parent) if subnodes: for subnode in subnodes: build_tree(subnode, category) build_tree(self.tree) return Category, Closure def assertNodes(self, query, *expected): self.assertEqual( set([category.name for category in query]), set(expected)) def test_build_tree(self): Category, Closure = self.initialize_models() self.assertEqual(Category.select().count(), 10) def test_descendants(self): Category, Closure = self.initialize_models() books = Category.g('books') self.assertNodes( Closure.descendants(books), 'fiction', 'scifi', 'hard scifi', 'dystopian', 'westerns', 'classics', 'non-fiction', 'biographies', 'essays') self.assertNodes(Closure.descendants(books, 0), 'books') self.assertNodes( Closure.descendants(books, 1), 'fiction', 'non-fiction') self.assertNodes( Closure.descendants(books, 2), 'scifi', 'westerns', 'classics', 'biographies', 'essays') self.assertNodes( Closure.descendants(books, 3), 'hard scifi', 'dystopian') fiction = Category.g('fiction') self.assertNodes( Closure.descendants(fiction), 'scifi', 'hard scifi', 'dystopian', 'westerns', 'classics') self.assertNodes( Closure.descendants(fiction, 1), 'scifi', 'westerns', 'classics') self.assertNodes( Closure.descendants(fiction, 2), 'hard scifi', 'dystopian') self.assertNodes( Closure.descendants(Category.g('scifi')), 'hard scifi', 'dystopian') self.assertNodes( Closure.descendants(Category.g('scifi'), include_node=True), 'scifi', 'hard scifi', 'dystopian') self.assertNodes(Closure.descendants(Category.g('hard scifi'), 1)) def test_ancestors(self): Category, Closure = self.initialize_models() hard_scifi = Category.g('hard scifi') self.assertNodes( Closure.ancestors(hard_scifi), 'scifi', 'fiction', 'books') self.assertNodes( Closure.ancestors(hard_scifi, include_node=True), 'hard scifi', 'scifi', 'fiction', 'books') self.assertNodes(Closure.ancestors(hard_scifi, 2), 'fiction') self.assertNodes(Closure.ancestors(hard_scifi, 3), 'books') non_fiction = Category.g('non-fiction') self.assertNodes(Closure.ancestors(non_fiction), 'books') self.assertNodes(Closure.ancestors(non_fiction, include_node=True), 'non-fiction', 
'books') self.assertNodes(Closure.ancestors(non_fiction, 1), 'books') books = Category.g('books') self.assertNodes(Closure.ancestors(books, include_node=True), 'books') self.assertNodes(Closure.ancestors(books)) self.assertNodes(Closure.ancestors(books, 1)) def test_siblings(self): Category, Closure = self.initialize_models() self.assertNodes( Closure.siblings(Category.g('hard scifi')), 'dystopian') self.assertNodes( Closure.siblings(Category.g('hard scifi'), include_node=True), 'hard scifi', 'dystopian') self.assertNodes( Closure.siblings(Category.g('classics')), 'scifi', 'westerns') self.assertNodes( Closure.siblings(Category.g('classics'), include_node=True), 'scifi', 'westerns', 'classics') self.assertNodes( Closure.siblings(Category.g('fiction')), 'non-fiction') def test_tree_changes(self): Category, Closure = self.initialize_models() books = Category.g('books') fiction = Category.g('fiction') dystopian = Category.g('dystopian') essays = Category.g('essays') new_root = Category.create(name='products') Category.create(name='magazines', parent=new_root) books.parent = new_root books.save() dystopian.delete_instance() essays.parent = books essays.save() Category.create(name='rants', parent=essays) Category.create(name='poetry', parent=books) query = (Category .select(Category.name, Closure.depth) .join(Closure, on=(Category.id == Closure.id)) .where(Closure.root == new_root) .order_by(Closure.depth, Category.name) .tuples()) self.assertEqual(list(query), [ ('products', 0), ('books', 1), ('magazines', 1), ('essays', 2), ('fiction', 2), ('non-fiction', 2), ('poetry', 2), ('biographies', 3), ('classics', 3), ('rants', 3), ('scifi', 3), ('westerns', 3), ('hard scifi', 4), ]) def test_id_not_overwritten(self): class Node(BaseExtModel): parent = ForeignKeyField('self', null=True) name = CharField() NodeClosure = ClosureTable(Node) ext_db.create_tables([Node, NodeClosure], True) root = Node.create(name='root') c1 = Node.create(name='c1', parent=root) c2 = Node.create(name='c2', parent=root) query = NodeClosure.descendants(root) self.assertEqual(sorted([(n.id, n.name) for n in query]), [(c1.id, 'c1'), (c2.id, 'c2')]) ext_db.drop_tables([Node, NodeClosure]) peewee-2.10.2/playhouse/tests/test_sqlite_udf.py000066400000000000000000000402431316645060400220010ustar00rootroot00000000000000import datetime import json import random try: import vtfunc except ImportError: vtfunc = None from peewee import * from playhouse.sqlite_ext import SqliteExtDatabase from playhouse.sqlite_udf import register_all from playhouse.tests.base import database_initializer from playhouse.tests.base import ModelTestCase from playhouse.tests.base import skip_test_unless from playhouse.tests.base import skip_unless from playhouse.tests.models import User as _User try: from playhouse import _sqlite_udf as cython_udf except ImportError: cython_udf = None def requires_cython(method): return skip_test_unless(lambda: cython_udf is not None)(method) def requires_vtfunc(testcase): return skip_unless(lambda: vtfunc is not None)(testcase) class UDFDatabase(SqliteExtDatabase): def _add_conn_hooks(self, conn): super(UDFDatabase, self)._add_conn_hooks(conn) register_all(conn) ext_db = database_initializer.get_database( 'sqlite', db_class=UDFDatabase) class BaseModel(Model): class Meta: database = ext_db class User(_User): class Meta: database = ext_db class APIResponse(BaseModel): url = TextField(default='') data = TextField(default='') timestamp = DateTimeField(default=datetime.datetime.now) class Generic(BaseModel): value = 
IntegerField(default=0) x = BareField(null=True) MODELS = [User, APIResponse, Generic] class FixedOffset(datetime.tzinfo): def __init__(self, offset, name, dstoffset=42): if isinstance(offset, int): offset = datetime.timedelta(minutes=offset) if isinstance(dstoffset, int): dstoffset = datetime.timedelta(minutes=dstoffset) self.__offset = offset self.__name = name self.__dstoffset = dstoffset def utcoffset(self, dt): return self.__offset def tzname(self, dt): return self.__name def dst(self, dt): return self.__dstoffset class BaseTestUDF(ModelTestCase): def sql1(self, sql, *params): cursor = ext_db.execute_sql(sql, params) return cursor.fetchone()[0] class TestAggregates(BaseTestUDF): requires = [Generic] def _store_values(self, *values): with ext_db.atomic(): for value in values: Generic.create(x=value) def mts(self, seconds): return (datetime.datetime(2015, 1, 1) + datetime.timedelta(seconds=seconds)) def test_min_avg_tdiff(self): self.assertEqual(self.sql1('select mintdiff(x) from generic;'), None) self.assertEqual(self.sql1('select avgtdiff(x) from generic;'), None) self._store_values(self.mts(10)) self.assertEqual(self.sql1('select mintdiff(x) from generic;'), None) self.assertEqual(self.sql1('select avgtdiff(x) from generic;'), 0) self._store_values(self.mts(15)) self.assertEqual(self.sql1('select mintdiff(x) from generic;'), 5) self.assertEqual(self.sql1('select avgtdiff(x) from generic;'), 5) self._store_values( self.mts(22), self.mts(52), self.mts(18), self.mts(41), self.mts(2), self.mts(33)) self.assertEqual(self.sql1('select mintdiff(x) from generic;'), 3) self.assertEqual( round(self.sql1('select avgtdiff(x) from generic;'), 1), 7.1) self._store_values(self.mts(22)) self.assertEqual(self.sql1('select mintdiff(x) from generic;'), 0) def test_duration(self): self.assertEqual(self.sql1('select duration(x) from generic;'), None) self._store_values(self.mts(10)) self.assertEqual(self.sql1('select duration(x) from generic;'), 0) self._store_values(self.mts(15)) self.assertEqual(self.sql1('select duration(x) from generic;'), 5) self._store_values( self.mts(22), self.mts(11), self.mts(52), self.mts(18), self.mts(41), self.mts(2), self.mts(33)) self.assertEqual(self.sql1('select duration(x) from generic;'), 50) @requires_cython def test_median(self): self.assertEqual(self.sql1('select median(x) from generic;'), None) self._store_values(1) self.assertEqual(self.sql1('select median(x) from generic;'), 1) self._store_values(3, 6, 6, 6, 7, 7, 7, 7, 12, 12, 17) self.assertEqual(self.sql1('select median(x) from generic;'), 7) Generic.delete().execute() self._store_values(9, 2, 2, 3, 3, 1) self.assertEqual(self.sql1('select median(x) from generic;'), 3) Generic.delete().execute() self._store_values(4, 4, 1, 8, 2, 2, 5, 8, 1) self.assertEqual(self.sql1('select median(x) from generic;'), 4) def test_mode(self): self.assertEqual(self.sql1('select mode(x) from generic;'), None) self._store_values(1) self.assertEqual(self.sql1('select mode(x) from generic;'), 1) self._store_values(4, 5, 6, 1, 3, 4, 1, 4, 9, 3, 4) self.assertEqual(self.sql1('select mode(x) from generic;'), 4) def test_ranges(self): self.assertEqual(self.sql1('select minrange(x) from generic'), None) self.assertEqual(self.sql1('select avgrange(x) from generic'), None) self.assertEqual(self.sql1('select range(x) from generic'), None) self._store_values(1) self.assertEqual(self.sql1('select minrange(x) from generic'), 0) self.assertEqual(self.sql1('select avgrange(x) from generic'), 0) self.assertEqual(self.sql1('select range(x) from 
        self.assertEqual(self.sql1('select range(x) from generic'), 0)

        self._store_values(4, 8, 13, 19)
        self.assertEqual(self.sql1('select minrange(x) from generic'), 3)
        self.assertEqual(self.sql1('select avgrange(x) from generic'), 4.5)
        self.assertEqual(self.sql1('select range(x) from generic'), 18)

        Generic.delete().execute()
        self._store_values(19, 4, 5, 20, 5, 8)
        self.assertEqual(self.sql1('select range(x) from generic'), 16)


class TestScalarFunctions(BaseTestUDF):
    requires = MODELS

    def test_if_then_else(self):
        User.create_users(4)
        with self.assertQueryCount(1):
            query = (User
                     .select(
                         User.username,
                         fn.if_then_else(
                             User.username << ['u1', 'u2'],
                             'one or two',
                             'other').alias('name_type'))
                     .order_by(User.id))
            self.assertEqual([row.name_type for row in query], [
                'one or two',
                'one or two',
                'other',
                'other'])

    def test_strip_tz(self):
        dt = datetime.datetime(2015, 1, 1, 12, 0)
        # 13 hours, 37 minutes.
        dt_tz = dt.replace(tzinfo=FixedOffset(13 * 60 + 37, 'US/LFK'))
        api_dt = APIResponse.create(timestamp=dt)
        api_dt_tz = APIResponse.create(timestamp=dt_tz)

        # Re-fetch from the database.
        api_dt_db = APIResponse.get(APIResponse.id == api_dt.id)
        api_dt_tz_db = APIResponse.get(APIResponse.id == api_dt_tz.id)

        # Assert the timezone is present, first of all, and that they were
        # stored in the database.
        self.assertEqual(api_dt_db.timestamp, dt)

        query = (APIResponse
                 .select(
                     APIResponse.id,
                     fn.strip_tz(APIResponse.timestamp).alias('ts'))
                 .order_by(APIResponse.id))
        ts, ts_tz = query[:]
        self.assertEqual(ts.ts, dt)
        self.assertEqual(ts_tz.ts, dt)

    def test_human_delta(self):
        values = [0, 1, 30, 300, 3600, 7530, 300000]
        for value in values:
            Generic.create(value=value)

        delta = fn.human_delta(Generic.value).coerce(False)
        query = (Generic
                 .select(
                     Generic.value,
                     delta.alias('delta'))
                 .order_by(Generic.value))
        results = query.tuples()[:]
        self.assertEqual(results, [
            (0, '0 seconds'),
            (1, '1 second'),
            (30, '30 seconds'),
            (300, '5 minutes'),
            (3600, '1 hour'),
            (7530, '2 hours, 5 minutes, 30 seconds'),
            (300000, '3 days, 11 hours, 20 minutes'),
        ])

    def test_file_ext(self):
        data = (
            ('test.py', '.py'),
            ('test.x.py', '.py'),
            ('test', ''),
            ('test.', '.'),
            ('/foo.bar/test/nug.py', '.py'),
            ('/foo.bar/test/nug', ''),
        )
        for filename, ext in data:
            res = self.sql1('SELECT file_ext(?)', filename)
            self.assertEqual(res, ext)

    def test_gz(self):
        random.seed(1)
        A = ord('A')
        z = ord('z')
        with ext_db.atomic():
            def randstr(l):
                return ''.join([
                    chr(random.randint(A, z)) for _ in range(l)])

            data = (
                'a',
                'a' * 1024,
                randstr(1024),
                randstr(4096),
                randstr(1024 * 64))
            for s in data:
                compressed = self.sql1('select gzip(?)', s)
                decompressed = self.sql1('select gunzip(?)', compressed)
                self.assertEqual(decompressed, s)

    def test_hostname(self):
        r = json.dumps({'success': True})
        data = (
            ('http://charlesleifer.com/api/', r),
            ('https://a.charlesleifer.com/api/foo', r),
            ('www.nugget.com', r),
            ('nugz.com', r),
            ('http://a.b.c.peewee/foo', r),
            ('http://charlesleifer.com/xx', r),
            ('https://charlesleifer.com/xx', r),
        )
        with ext_db.atomic():
            for url, response in data:
                APIResponse.create(url=url, data=response)

        with self.assertQueryCount(1):
            query = (APIResponse
                     .select(
                         fn.hostname(APIResponse.url).alias('host'),
                         fn.COUNT(APIResponse.id).alias('count'))
                     .group_by(fn.hostname(APIResponse.url))
                     .order_by(
                         fn.COUNT(APIResponse.id).desc(),
                         fn.hostname(APIResponse.url)))
            results = query.tuples()[:]

        self.assertEqual(results, [
            ('charlesleifer.com', 3),
            ('', 2),
            ('a.b.c.peewee', 1),
            ('a.charlesleifer.com', 1)])

    def test_toggle(self):
        self.assertEqual(self.sql1('select toggle(?)', 'foo'), 1)
        self.assertEqual(self.sql1('select toggle(?)', 'bar'), 1)
        self.assertEqual(self.sql1('select toggle(?)', 'foo'), 0)
        self.assertEqual(self.sql1('select toggle(?)', 'foo'), 1)
        self.assertEqual(self.sql1('select toggle(?)', 'bar'), 0)
        self.assertEqual(self.sql1('select toggle(?, ?)', 'bar', False), 0)
        self.assertEqual(self.sql1('select toggle(?, ?)', 'bar', True), 1)
        self.assertEqual(self.sql1('select toggle(?, ?)', 'bar', True), 1)

        self.assertEqual(self.sql1('select clear_toggles()'), None)
        self.assertEqual(self.sql1('select toggle(?)', 'foo'), 1)

    def test_setting(self):
        self.assertEqual(self.sql1('select setting(?, ?)', 'k1', 'v1'), None)
        self.assertEqual(self.sql1('select setting(?, ?)', 'k2', 'v2'), None)
        self.assertEqual(self.sql1('select setting(?)', 'k1'), 'v1')
        self.assertEqual(self.sql1('select setting(?, ?)', 'k2', 'v2-x'),
                         None)
        self.assertEqual(self.sql1('select setting(?)', 'k2'), 'v2-x')
        self.assertEqual(self.sql1('select setting(?)', 'kx'), None)

        self.assertEqual(self.sql1('select clear_settings()'), None)
        self.assertEqual(self.sql1('select setting(?)', 'k1'), None)

    def test_random_range(self):
        vals = ((1, 10), (1, 100), (0, 2), (1, 5, 2))
        results = []
        for params in vals:
            random.seed(1)
            results.append(random.randrange(*params))

        for params, expected in zip(vals, results):
            random.seed(1)
            if len(params) == 3:
                pstr = '?, ?, ?'
            else:
                pstr = '?, ?'
            self.assertEqual(
                self.sql1('select randomrange(%s)' % pstr, *params),
                expected)

    def test_sqrt(self):
        self.assertEqual(self.sql1('select sqrt(?)', 4), 2)
        self.assertEqual(round(self.sql1('select sqrt(?)', 2), 2), 1.41)

    def test_tonumber(self):
        data = (
            ('123', 123),
            ('1.23', 1.23),
            ('1e4', 10000),
            ('-10', -10),
            ('x', None),
            ('13d', None),
        )
        for inp, outp in data:
            self.assertEqual(self.sql1('select tonumber(?)', inp), outp)

    @requires_cython
    def test_leven(self):
        self.assertEqual(
            self.sql1('select levenshtein_dist(?, ?)', 'abc', 'ba'),
            2)
        self.assertEqual(
            self.sql1('select levenshtein_dist(?, ?)', 'abcde', 'eba'),
            4)
        self.assertEqual(
            self.sql1('select levenshtein_dist(?, ?)', 'abcde', 'abcde'),
            0)

    @requires_cython
    def test_str_dist(self):
        self.assertEqual(
            self.sql1('select str_dist(?, ?)', 'abc', 'ba'),
            3)
        self.assertEqual(
            self.sql1('select str_dist(?, ?)', 'abcde', 'eba'),
            6)
        self.assertEqual(
            self.sql1('select str_dist(?, ?)', 'abcde', 'abcde'),
            0)

    def test_substr_count(self):
        self.assertEqual(
            self.sql1('select substr_count(?, ?)', 'foo bar baz', 'a'), 2)
        self.assertEqual(
            self.sql1('select substr_count(?, ?)', 'foo bor baz', 'o'), 3)
        self.assertEqual(
            self.sql1('select substr_count(?, ?)', 'foodooboope', 'oo'), 3)
        self.assertEqual(self.sql1('select substr_count(?, ?)', 'xx', ''), 0)
        self.assertEqual(self.sql1('select substr_count(?, ?)', '', ''), 0)

    def test_strip_chars(self):
        self.assertEqual(
            self.sql1('select strip_chars(?, ?)', ' hey foo ', ' '),
            'hey foo')


@requires_vtfunc
class TestVirtualTableFunctions(ModelTestCase):
    requires = MODELS

    def sqln(self, sql, *p):
        cursor = ext_db.execute_sql(sql, p)
        return cursor.fetchall()

    def test_regex_search(self):
        usernames = [
            'charlie',
            'hu3y17',
            'zaizee2012',
            '1234.56789',
            'hurr durr']
        for username in usernames:
            User.create(username=username)

        rgx = '[0-9]+'
        results = self.sqln(
            ('SELECT user.username, regex_search.match '
             'FROM user, regex_search(?, user.username) '
             'ORDER BY regex_search.match'),
            rgx)
        self.assertEqual([row for row in results], [
            ('1234.56789', '1234'),
            ('hu3y17', '17'),
            ('zaizee2012', '2012'),
            ('hu3y17', '3'),
            ('1234.56789', '56789'),
        ])

    def test_date_series(self):
        ONE_DAY = 86400
        def assertValues(start, stop, step_seconds, expected):
            results = self.sqln('select * from date_series(?, ?, ?)',
                                start, stop, step_seconds)
            self.assertEqual(results, expected)

        assertValues('2015-01-01', '2015-01-05', 86400, [
            ('2015-01-01',),
            ('2015-01-02',),
            ('2015-01-03',),
            ('2015-01-04',),
            ('2015-01-05',),
        ])

        assertValues('2015-01-01', '2015-01-05', 86400 / 2, [
            ('2015-01-01 00:00:00',),
            ('2015-01-01 12:00:00',),
            ('2015-01-02 00:00:00',),
            ('2015-01-02 12:00:00',),
            ('2015-01-03 00:00:00',),
            ('2015-01-03 12:00:00',),
            ('2015-01-04 00:00:00',),
            ('2015-01-04 12:00:00',),
            ('2015-01-05 00:00:00',),
        ])

        assertValues('14:20:15', '14:24', 30, [
            ('14:20:15',),
            ('14:20:45',),
            ('14:21:15',),
            ('14:21:45',),
            ('14:22:15',),
            ('14:22:45',),
            ('14:23:15',),
            ('14:23:45',),
        ])
peewee-2.10.2/playhouse/tests/test_sqliteq.py000066400000000000000000000153641316645060400213300ustar00rootroot00000000000000from functools import partial
import os
import sys
import threading
import time
import unittest

try:
    import gevent
    from gevent.event import Event as GreenEvent
except ImportError:
    gevent = None

from peewee import *
from playhouse.sqliteq import ResultTimeout
from playhouse.sqliteq import SqliteQueueDatabase
from playhouse.sqliteq import WriterPaused
from playhouse.tests.base import database_initializer
from playhouse.tests.base import PeeweeTestCase
from playhouse.tests.base import skip_if


get_db = partial(
    database_initializer.get_database,
    'sqlite',
    db_class=SqliteQueueDatabase)

db = database_initializer.get_database('sqlite')


class User(Model):
    name = TextField(unique=True)

    class Meta:
        database = db
        db_table = 'threaded_db_test_user'


class BaseTestQueueDatabase(object):
    database_config = {}
    n_rows = 50
    n_threads = 20

    def setUp(self):
        super(BaseTestQueueDatabase, self).setUp()
        with db.execution_context():
            User.create_table(True)
        User._meta.database = self.db = get_db(**self.database_config)

        # Sanity check at startup.
        self.assertEqual(self.db.queue_size(), 0)

    def tearDown(self):
        super(BaseTestQueueDatabase, self).tearDown()
        User._meta.database = db
        with db.execution_context():
            User.drop_table()
        if not self.db.is_closed():
            self.db.close()
        if not db.is_closed():
            db.close()
        filename = db.database
        if os.path.exists(filename):
            os.unlink(filename)

    def test_query_error(self):
        self.db.start()
        curs = self.db.execute_sql('foo bar baz')
        self.assertRaises(OperationalError, curs.fetchone)
        self.db.stop()

    def test_query_execution(self):
        qr = User.select().execute()
        self.assertEqual(self.db.queue_size(), 0)

        self.db.start()

        users = list(qr)
        huey = User.create(name='huey')
        mickey = User.create(name='mickey')

        self.assertTrue(huey.id is not None)
        self.assertTrue(mickey.id is not None)
        self.assertEqual(self.db.queue_size(), 0)

        self.db.stop()

    def create_thread(self, fn, *args):
        raise NotImplementedError

    def create_event(self):
        raise NotImplementedError

    def test_multiple_threads(self):
        def create_rows(idx, nrows):
            for i in range(idx, idx + nrows):
                User.create(name='u-%s' % i)

        total = self.n_threads * self.n_rows
        self.db.start()
        threads = [self.create_thread(create_rows, i, self.n_rows)
                   for i in range(0, total, self.n_rows)]
        [t.start() for t in threads]
        [t.join() for t in threads]

        self.assertEqual(User.select().count(), total)
        self.db.stop()

    def test_pause(self):
        event_a = self.create_event()
        event_b = self.create_event()

        def create_user(name, event, expect_paused):
            event.wait()
            if expect_paused:
                self.assertRaises(
                    WriterPaused,
                    lambda: User.create(name=name))
            else:
                User.create(name=name)

        self.db.start()

        t_a = self.create_thread(create_user, 'a', event_a, True)
        t_a.start()
        t_b = self.create_thread(create_user, 'b', event_b, False)
        t_b.start()

        User.create(name='c')
        self.assertEqual(User.select().count(), 1)

        # Pause operations but preserve the writer thread/connection.
        self.db.pause()

        event_a.set()
        self.assertEqual(User.select().count(), 1)
        t_a.join()

        self.db.unpause()
        self.assertEqual(User.select().count(), 1)

        event_b.set()
        t_b.join()
        self.assertEqual(User.select().count(), 2)

        self.db.stop()

    def test_restart(self):
        self.db.start()
        User.create(name='a')
        self.db.stop()
        self.db._results_timeout = 0.0001

        self.assertRaises(ResultTimeout, User.create, name='b')
        self.assertEqual(User.select().count(), 1)

        self.db.start()  # Will execute the pending "b" INSERT.
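        # (Once the writer thread resumes, both the queued "b" and the new
        # "c" INSERTs are committed, as the assertions below verify.)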
        self.db._results_timeout = None

        User.create(name='c')
        self.assertEqual(User.select().count(), 3)
        self.assertEqual(sorted(u.name for u in User.select()),
                         ['a', 'b', 'c'])

    def test_waiting(self):
        D = {}

        def create_user(name):
            D[name] = User.create(name=name).id

        threads = [self.create_thread(create_user, name)
                   for name in ('huey', 'charlie', 'zaizee')]
        [t.start() for t in threads]

        def get_users():
            D['users'] = [(user.id, user.name) for user in User.select()]

        tg = self.create_thread(get_users)
        tg.start()
        threads.append(tg)

        self.db.start()
        [t.join() for t in threads]
        self.db.stop()

        self.assertEqual(sorted(D), ['charlie', 'huey', 'users', 'zaizee'])

    def test_next_method(self):
        self.db.start()

        User.create(name='mickey')
        User.create(name='huey')
        query = iter(User.select().order_by(User.name))
        self.assertEqual(next(query).name, 'huey')
        self.assertEqual(next(query).name, 'mickey')
        self.assertRaises(StopIteration, lambda: next(query))
        self.assertEqual(
            next(self.db.execute_sql('PRAGMA journal_mode'))[0],
            'wal')

        self.db.stop()


class TestThreadedDatabaseThreads(BaseTestQueueDatabase, PeeweeTestCase):
    database_config = {'use_gevent': False}

    def tearDown(self):
        self.db._results_timeout = None
        super(TestThreadedDatabaseThreads, self).tearDown()

    def create_thread(self, fn, *args):
        t = threading.Thread(target=fn, args=args)
        t.daemon = True
        return t

    def create_event(self):
        return threading.Event()

    def test_timeout(self):
        @self.db.func()
        def slow(n):
            time.sleep(n)
            return 'I slept for %s seconds' % n

        self.db.start()

        # Make the result timeout very small, then call our function which
        # will cause the query results to time-out.
        self.db._results_timeout = 0.001
        self.assertRaises(
            ResultTimeout,
            lambda: self.db.execute_sql(
                'select slow(?)', (0.005,)).fetchone())

        self.db.stop()


@skip_if(lambda: gevent is None)
class TestThreadedDatabaseGreenlets(BaseTestQueueDatabase, PeeweeTestCase):
    database_config = {'use_gevent': True}
    n_rows = 20
    n_threads = 200

    def create_thread(self, fn, *args):
        return gevent.Greenlet(fn, *args)

    def create_event(self):
        return GreenEvent()


if __name__ == '__main__':
    unittest.main(argv=sys.argv)
peewee-2.10.2/playhouse/tests/test_test_utils.py000066400000000000000000000163161316645060400220450ustar00rootroot00000000000000import functools

from peewee import *
from playhouse.test_utils import assert_query_count
from playhouse.test_utils import count_queries
from playhouse.test_utils import test_database
from playhouse.tests.base import database_initializer
from playhouse.tests.base import ModelTestCase


db1 = database_initializer.get_in_memory_database()
db1._flag = 'db1'
db2 = database_initializer.get_in_memory_database()
db2._flag = 'db2'


class BaseModel(Model):
    class Meta:
        database = db1

class Data(BaseModel):
    key = CharField()

    class Meta:
        order_by = ('key',)

class DataItem(BaseModel):
    data = ForeignKeyField(Data, related_name='items')
    value = CharField()

    class Meta:
        order_by = ('value',)


class BaseTestCase(ModelTestCase):
    requires = [DataItem, Data]


class TestTestDatabaseCtxMgr(BaseTestCase):
    def setUp(self):
        super(TestTestDatabaseCtxMgr, self).setUp()
        a = Data.create(key='a')
        b = Data.create(key='b')
        DataItem.create(data=a, value='a1')
        DataItem.create(data=a, value='a2')
        DataItem.create(data=b, value='b1')

    def tearDown(self):
        super(TestTestDatabaseCtxMgr, self).tearDown()
        # Drop tables from db2.
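        # (DataItem references Data via a foreign key, so the dependent
        # table is dropped first.)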
        db2.execute_sql('drop table if exists dataitem;')
        db2.execute_sql('drop table if exists data;')

    def assertUsing(self, db):
        self.assertEqual(Data._meta.database._flag, db)
        self.assertEqual(DataItem._meta.database._flag, db)

    def case_wrapper(fn):
        @functools.wraps(fn)
        def inner(self):
            self.assertUsing('db1')
            return fn(self)
        return inner

    @case_wrapper
    def test_no_options(self):
        with test_database(db2, (Data, DataItem), create_tables=True):
            self.assertUsing('db2')

            # Tables were created automatically.
            self.assertTrue(Data.table_exists())
            self.assertTrue(DataItem.table_exists())

            # There are no rows in the db.
            self.assertEqual(Data.select().count(), 0)
            self.assertEqual(DataItem.select().count(), 0)

            # Verify we can create items in the db.
            d = Data.create(key='c')
            self.assertEqual(Data.select().count(), 1)

        self.assertUsing('db1')
        # Ensure that no changes were made to db1.
        self.assertEqual([x.key for x in Data.select()], ['a', 'b'])

        # Ensure the tables were dropped.
        res = db2.execute_sql('select * from sqlite_master')
        self.assertEqual(res.fetchall(), [])

    @case_wrapper
    def test_explicit_create_tables(self):
        # Retrieve a reference to a model in db1 and verify that it
        # has the correct items.
        a = Data.get(Data.key == 'a')
        self.assertEqual([x.value for x in a.items], ['a1', 'a2'])

        with test_database(db2, (Data, DataItem), create_tables=False):
            self.assertUsing('db2')

            # Table hasn't been created.
            self.assertFalse(Data.table_exists())
            self.assertFalse(DataItem.table_exists())

        self.assertUsing('db1')

        # We can still fetch the related items for object 'a'.
        self.assertEqual([x.value for x in a.items], ['a1', 'a2'])

    @case_wrapper
    def test_exception_handling(self):
        def raise_exc():
            with test_database(db2, (Data, DataItem)):
                self.assertUsing('db2')
                c = Data.create(key='c')
                # This will raise Data.DoesNotExist.
                Data.get(Data.key == 'a')

        # Ensure the exception is raised by the ctx mgr.
        self.assertRaises(Data.DoesNotExist, raise_exc)
        self.assertUsing('db1')

        # Ensure that the tables in db2 are removed.
        res = db2.execute_sql('select * from sqlite_master')
        self.assertEqual(res.fetchall(), [])

        # Ensure the data in db1 is intact.
        self.assertEqual([x.key for x in Data.select()], ['a', 'b'])

    @case_wrapper
    def test_exception_handling_explicit_cd(self):
        def raise_exc():
            with test_database(db2, (Data, DataItem), create_tables=False):
                self.assertUsing('db2')
                Data.create_table()
                c = Data.create(key='c')
                # This will raise Data.DoesNotExist.
                Data.get(Data.key == 'a')

        self.assertRaises(Data.DoesNotExist, raise_exc)
        self.assertUsing('db1')

        # Ensure that the tables in db2 are still present.
        res = db2.execute_sql('select key from data;')
        self.assertEqual(res.fetchall(), [('c',)])

        # Ensure the data in db1 is intact.
        self.assertEqual([x.key for x in Data.select()], ['a', 'b'])

    @case_wrapper
    def test_mismatch_models(self):
        a = Data.get(Data.key == 'a')
        with test_database(db2, (Data,)):
            d2_id = Data.insert(id=a.id, key='c').execute()
            c = Data.get(Data.id == d2_id)
            # Mismatches work and the queries are handled at the class
            # level, so the Data returned from the DataItems will
            # be from db2.
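            # (Only Data was swapped to db2 above; DataItem still points at
            # db1, which is why the item rows 'a1'/'a2' are visible here.)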
            self.assertEqual([x.value for x in c.items], ['a1', 'a2'])
            for item in c.items:
                self.assertEqual(item.data.key, 'c')


class TestQueryCounter(BaseTestCase):
    def test_count(self):
        with count_queries() as count:
            Data.create(key='k1')
            Data.create(key='k2')

        self.assertEqual(count.count, 2)

        with count_queries() as count:
            items = [item.key for item in Data.select().order_by(Data.key)]
            self.assertEqual(items, ['k1', 'k2'])
            Data.get(Data.key == 'k1')
            Data.get(Data.key == 'k2')

        self.assertEqual(count.count, 3)

    def test_only_select(self):
        with count_queries(only_select=True) as count:
            for i in range(10):
                Data.create(key=str(i))
            items = [item.key for item in Data.select()]
            Data.get(Data.key == '0')
            Data.get(Data.key == '9')
            Data.delete().where(
                Data.key << ['1', '3', '5', '7', '9']).execute()
            items = [item.key for item in Data.select().order_by(Data.key)]
            self.assertEqual(items, ['0', '2', '4', '6', '8'])

        self.assertEqual(count.count, 4)

    def test_assert_query_count_decorator(self):
        @assert_query_count(2)
        def will_fail_under():
            Data.create(key='x')

        @assert_query_count(2)
        def will_fail_over():
            for i in range(3):
                Data.create(key=str(i))

        @assert_query_count(4)
        def will_succeed():
            for i in range(4):
                Data.create(key=str(i + 100))

        will_succeed()
        self.assertRaises(AssertionError, will_fail_under)
        self.assertRaises(AssertionError, will_fail_over)

    def test_assert_query_count_ctx_mgr(self):
        with assert_query_count(3):
            for i in range(3):
                Data.create(key=str(i))

        def will_fail():
            with assert_query_count(2):
                Data.create(key='x')

        self.assertRaises(AssertionError, will_fail)

    @assert_query_count(3)
    def test_only_three(self):
        for i in range(3):
            Data.create(key=str(i))
peewee-2.10.2/playhouse/tests/test_transactions.py000066400000000000000000000472051316645060400223570ustar00rootroot00000000000000import threading

from peewee import _atomic
from peewee import SqliteDatabase
from peewee import transaction
from playhouse.tests.base import database_class
from playhouse.tests.base import mock
from playhouse.tests.base import ModelTestCase
from playhouse.tests.base import skip_if
from playhouse.tests.base import test_db
from playhouse.tests.models import *


class TestTransaction(ModelTestCase):
    requires = [User, Blog]

    def tearDown(self):
        super(TestTransaction, self).tearDown()
        test_db.set_autocommit(True)

    def test_transaction_connection_handling(self):
        patch = 'peewee.Database'
        db = SqliteDatabase(':memory:')

        with mock.patch(patch, wraps=db) as patched_db:
            with transaction(patched_db):
                patched_db.begin.assert_called_once_with()
                self.assertEqual(patched_db.commit.call_count, 0)
                self.assertEqual(patched_db.rollback.call_count, 0)

            patched_db.begin.assert_called_once_with()
            patched_db.commit.assert_called_once_with()
            self.assertEqual(patched_db.rollback.call_count, 0)

        with mock.patch(patch, wraps=db) as patched_db:
            def _test_patched():
                patched_db.commit.side_effect = ValueError
                with transaction(patched_db):
                    pass

            self.assertRaises(ValueError, _test_patched)
            patched_db.begin.assert_called_once_with()
            patched_db.commit.assert_called_once_with()
            patched_db.rollback.assert_called_once_with()

    def test_atomic_nesting(self):
        db = SqliteDatabase(':memory:')
        db_patches = mock.patch.multiple(
            db,
            begin=mock.DEFAULT,
            commit=mock.DEFAULT,
            execute_sql=mock.DEFAULT,
            rollback=mock.DEFAULT)

        with mock.patch('peewee.Database', wraps=db) as patched_db:
            with db_patches as db_mocks:
                begin = db_mocks['begin']
                commit = db_mocks['commit']
                execute_sql = db_mocks['execute_sql']
                rollback = db_mocks['rollback']

                with _atomic(patched_db):
                    patched_db.transaction.assert_called_once_with(None)
                    begin.assert_called_once_with(lock_type=None)
                    self.assertEqual(patched_db.savepoint.call_count, 0)

                    with _atomic(patched_db):
                        patched_db.transaction.assert_called_once_with(None)
                        begin.assert_called_once_with(lock_type=None)
                        patched_db.savepoint.assert_called_once_with()
                        self.assertEqual(commit.call_count, 0)
                        self.assertEqual(rollback.call_count, 0)

                        with _atomic(patched_db):
                            (patched_db.transaction
                             .assert_called_once_with(None))
                            begin.assert_called_once_with(lock_type=None)
                            self.assertEqual(
                                patched_db.savepoint.call_count,
                                2)

                begin.assert_called_once_with(lock_type=None)
                self.assertEqual(commit.call_count, 0)
                self.assertEqual(rollback.call_count, 0)

            commit.assert_called_once_with()
            self.assertEqual(rollback.call_count, 0)

    def test_savepoint_explicit_commits(self):
        with test_db.atomic() as txn:
            User.create(username='txn-rollback')
            txn.rollback()
            User.create(username='txn-commit')
            txn.commit()

            with test_db.atomic() as sp:
                User.create(username='sp-rollback')
                sp.rollback()
                User.create(username='sp-commit')
                sp.commit()

        usernames = [u.username
                     for u in User.select().order_by(User.username)]
        self.assertEqual(usernames, ['sp-commit', 'txn-commit'])

    def test_autocommit(self):
        test_db.set_autocommit(False)
        test_db.begin()

        u1 = User.create(username='u1')
        u2 = User.create(username='u2')

        # open up a new connection to the database, it won't register any
        # blogs as being created
        new_db = self.new_connection()
        res = new_db.execute_sql('select count(*) from users;')
        self.assertEqual(res.fetchone()[0], 0)

        # commit our blog inserts
        test_db.commit()

        # now the blogs are query-able from another connection
        res = new_db.execute_sql('select count(*) from users;')
        self.assertEqual(res.fetchone()[0], 2)

    def test_transactions(self):
        def transaction_generator():
            with test_db.transaction():
                User.create(username='u1')
                yield
                User.create(username='u2')

        gen = transaction_generator()
        next(gen)

        conn2 = self.new_connection()
        res = conn2.execute_sql('select count(*) from users;').fetchone()
        self.assertEqual(res[0], 0)

        self.assertEqual(User.select().count(), 1)

        # Consume the rest of the generator.
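        # (Resuming past the yield lets the `with` block exit, which
        # commits the transaction and makes both rows visible.)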
        for _ in gen:
            pass

        self.assertEqual(User.select().count(), 2)
        res = conn2.execute_sql('select count(*) from users;').fetchone()
        self.assertEqual(res[0], 2)

    def test_manual_commit_rollback(self):
        def assertUsers(expected):
            query = User.select(User.username).order_by(User.username)
            self.assertEqual(
                [username for username, in query.tuples()],
                expected)

        with test_db.transaction() as txn:
            User.create(username='charlie')
            txn.commit()
            User.create(username='huey')
            txn.rollback()

        assertUsers(['charlie'])

        with test_db.transaction() as txn:
            User.create(username='huey')
            txn.rollback()
            User.create(username='zaizee')

        assertUsers(['charlie', 'zaizee'])

    def test_transaction_decorator(self):
        @test_db.transaction()
        def create_user(username):
            User.create(username=username)

        create_user('charlie')
        self.assertEqual(User.select().count(), 1)

    def test_commit_on_success(self):
        self.assertTrue(test_db.get_autocommit())

        @test_db.commit_on_success
        def will_fail():
            User.create(username='u1')
            Blog.create()  # no blog, will raise an error

        self.assertRaises(IntegrityError, will_fail)
        self.assertEqual(User.select().count(), 0)
        self.assertEqual(Blog.select().count(), 0)

        @test_db.commit_on_success
        def will_succeed():
            u = User.create(username='u1')
            Blog.create(title='b1', user=u)

        will_succeed()
        self.assertEqual(User.select().count(), 1)
        self.assertEqual(Blog.select().count(), 1)

    def test_context_mgr(self):
        def do_will_fail():
            with test_db.transaction():
                User.create(username='u1')
                Blog.create()  # no blog, will raise an error

        self.assertRaises(IntegrityError, do_will_fail)
        self.assertEqual(Blog.select().count(), 0)

        def do_will_succeed():
            with transaction(test_db):
                u = User.create(username='u1')
                Blog.create(title='b1', user=u)

        do_will_succeed()
        self.assertEqual(User.select().count(), 1)
        self.assertEqual(Blog.select().count(), 1)

        def do_manual_rollback():
            with test_db.transaction() as txn:
                User.create(username='u2')
                txn.rollback()

        do_manual_rollback()
        self.assertEqual(User.select().count(), 1)
        self.assertEqual(Blog.select().count(), 1)

    def test_nesting_transactions(self):
        @test_db.commit_on_success
        def outer(should_fail=False):
            self.assertEqual(test_db.transaction_depth(), 1)
            User.create(username='outer')
            inner(should_fail)
            self.assertEqual(test_db.transaction_depth(), 1)

        @test_db.commit_on_success
        def inner(should_fail):
            self.assertEqual(test_db.transaction_depth(), 2)
            User.create(username='inner')
            if should_fail:
                raise ValueError('failing')

        self.assertRaises(ValueError, outer, should_fail=True)
        self.assertEqual(User.select().count(), 0)
        self.assertEqual(test_db.transaction_depth(), 0)

        outer(should_fail=False)
        self.assertEqual(User.select().count(), 2)
        self.assertEqual(test_db.transaction_depth(), 0)


class TestExecutionContext(ModelTestCase):
    requires = [User]

    def test_context_simple(self):
        with test_db.execution_context():
            User.create(username='charlie')
            self.assertEqual(test_db.execution_context_depth(), 1)
        self.assertEqual(test_db.execution_context_depth(), 0)

        with test_db.execution_context():
            self.assertTrue(
                User.select().where(User.username == 'charlie').exists())
            self.assertEqual(test_db.execution_context_depth(), 1)
        self.assertEqual(test_db.execution_context_depth(), 0)

        queries = self.queries()

    def test_context_ext(self):
        with test_db.execution_context():
            with test_db.execution_context() as inner_ctx:
                with test_db.execution_context():
                    User.create(username='huey')
                    self.assertEqual(test_db.execution_context_depth(), 3)

                conn = test_db.get_conn()
                self.assertEqual(conn, inner_ctx.connection)
                self.assertTrue(
                    User.select().where(User.username == 'huey').exists())

        self.assertEqual(test_db.execution_context_depth(), 0)

    def test_context_multithreaded(self):
        conn = test_db.get_conn()
        evt = threading.Event()
        evt2 = threading.Event()

        def create():
            with test_db.execution_context() as ctx:
                database = ctx.database
                self.assertEqual(database.execution_context_depth(), 1)
                evt2.set()
                evt.wait()
                self.assertNotEqual(conn, ctx.connection)
                User.create(username='huey')

        create_t = threading.Thread(target=create)
        create_t.daemon = True
        create_t.start()

        evt2.wait()
        self.assertEqual(test_db.execution_context_depth(), 0)
        evt.set()
        create_t.join()

        self.assertEqual(test_db.execution_context_depth(), 0)
        self.assertEqual(User.select().count(), 1)

    def test_context_concurrency(self):
        def create(i):
            with test_db.execution_context():
                with test_db.execution_context() as ctx:
                    User.create(username='u%s' % i)
                    self.assertEqual(
                        ctx.database.execution_context_depth(), 2)

        threads = [threading.Thread(target=create, args=(i,))
                   for i in range(5)]
        for thread in threads:
            thread.start()
        [thread.join() for thread in threads]
        self.assertEqual(
            [user.username
             for user in User.select().order_by(User.username)],
            ['u0', 'u1', 'u2', 'u3', 'u4'])

    def test_context_conn_error(self):
        class MagicException(Exception):
            pass

        class FailDB(SqliteDatabase):
            def _connect(self, *args, **kwargs):
                raise MagicException('boo')

        db = FailDB(':memory:')

        def generate_exc():
            try:
                with db.execution_context():
                    db.execute_sql('SELECT 1;')
            except MagicException:
                db.get_conn()

        self.assertRaises(MagicException, generate_exc)


class TestAutoRollback(ModelTestCase):
    requires = [User, Blog]

    def setUp(self):
        test_db.autorollback = True
        super(TestAutoRollback, self).setUp()

    def tearDown(self):
        test_db.autorollback = False
        test_db.set_autocommit(True)
        super(TestAutoRollback, self).tearDown()

    def test_auto_rollback(self):
        # Exceptions are still raised.
        self.assertRaises(IntegrityError, Blog.create)

        # The transaction should have been automatically rolled-back,
        # allowing us to create new objects (in a new transaction).
        u = User.create(username='u')
        self.assertTrue(u.id)

        # No-op, the previous INSERT was already committed.
        test_db.rollback()

        # Ensure we can get our user back.
        u_db = User.get(User.username == 'u')
        self.assertEqual(u.id, u_db.id)

    def test_transaction_ctx_mgr(self):
        'Only auto-rollback when autocommit is enabled.'
        def create_error():
            self.assertRaises(IntegrityError, Blog.create)

        # autocommit is disabled in a transaction ctx manager.
        with test_db.transaction():
            # Error occurs, but exception is caught, leaving the current txn
            # in a bad state.
            create_error()

            try:
                create_error()
            except Exception as exc:
                # Subsequent call will raise an InternalError with postgres.
                self.assertTrue(isinstance(exc, InternalError))
            else:
                self.assertFalse(
                    issubclass(database_class, PostgresqlDatabase))

        # New transactions are not affected.
        self.test_auto_rollback()

    def test_manual(self):
        test_db.set_autocommit(False)

        # Will not be rolled back.
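        # (Auto-rollback only applies when autocommit is enabled, so the
        # failed INSERT leaves the transaction aborted until we roll it
        # back manually below.)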
        self.assertRaises(IntegrityError, Blog.create)

        if issubclass(database_class, PostgresqlDatabase):
            self.assertRaises(InternalError, User.create, username='u')

        test_db.rollback()
        u = User.create(username='u')
        test_db.commit()
        u_db = User.get(User.username == 'u')
        self.assertEqual(u.id, u_db.id)


class TestSavepoints(ModelTestCase):
    requires = [User]

    def _outer(self, fail_outer=False, fail_inner=False):
        with test_db.savepoint():
            User.create(username='outer')
            try:
                self._inner(fail_inner)
            except ValueError:
                pass
            if fail_outer:
                raise ValueError

    def _inner(self, fail_inner):
        with test_db.savepoint():
            User.create(username='inner')
            if fail_inner:
                raise ValueError('failing')

    def assertNames(self, expected):
        query = User.select().order_by(User.username)
        self.assertEqual([u.username for u in query], expected)

    def test_success(self):
        with test_db.transaction():
            self._outer()
            self.assertEqual(User.select().count(), 2)
        self.assertNames(['inner', 'outer'])

    def test_inner_failure(self):
        with test_db.transaction():
            self._outer(fail_inner=True)
            self.assertEqual(User.select().count(), 1)
        self.assertNames(['outer'])

    def test_outer_failure(self):
        # Because the outer savepoint is rolled back, we'll lose the
        # inner savepoint as well.
        with test_db.transaction():
            self.assertRaises(ValueError, self._outer, fail_outer=True)
            self.assertEqual(User.select().count(), 0)

    def test_failure(self):
        with test_db.transaction():
            self.assertRaises(
                ValueError, self._outer, fail_outer=True, fail_inner=True)
            self.assertEqual(User.select().count(), 0)


class TestAtomic(ModelTestCase):
    requires = [User, UniqueModel]

    def test_atomic(self):
        with test_db.atomic():
            User.create(username='u1')
            with test_db.atomic():
                User.create(username='u2')
                with test_db.atomic() as txn3:
                    User.create(username='u3')
                    txn3.rollback()

                with test_db.atomic():
                    User.create(username='u4')

            with test_db.atomic() as txn5:
                User.create(username='u5')
                txn5.rollback()

            User.create(username='u6')

        query = User.select().order_by(User.username)
        self.assertEqual(
            [u.username for u in query],
            ['u1', 'u2', 'u4', 'u6'])

    def test_atomic_second_connection(self):
        def test_separate_conn(expected):
            new_db = self.new_connection()
            cursor = new_db.execute_sql('select username from users;')
            usernames = sorted(row[0] for row in cursor.fetchall())
            self.assertEqual(usernames, expected)
            new_db.close()

        with test_db.atomic():
            User.create(username='u1')
            test_separate_conn([])

            with test_db.atomic():
                User.create(username='u2')

            with test_db.atomic() as tx3:
                User.create(username='u3')
                tx3.rollback()

            test_separate_conn([])

            users = User.select(User.username).order_by(User.username)
            self.assertEqual(
                [user.username for user in users], ['u1', 'u2'])

        users = User.select(User.username).order_by(User.username)
        self.assertEqual(
            [user.username for user in users], ['u1', 'u2'])

    def test_atomic_decorator(self):
        @test_db.atomic()
        def create_user(username):
            User.create(username=username)

        create_user('charlie')
        self.assertEqual(User.select().count(), 1)

    def test_atomic_decorator_nesting(self):
        @test_db.atomic()
        def create_unique(name):
            UniqueModel.create(name=name)

        @test_db.atomic()
        def create_both(username):
            User.create(username=username)
            try:
                create_unique(username)
            except IntegrityError:
                pass

        create_unique('huey')
        self.assertEqual(UniqueModel.select().count(), 1)

        create_both('charlie')
        self.assertEqual(User.select().count(), 1)
        self.assertEqual(UniqueModel.select().count(), 2)

        create_both('huey')
        self.assertEqual(User.select().count(), 2)
        self.assertEqual(UniqueModel.select().count(), 2)
    def test_atomic_rollback(self):
        with test_db.atomic():
            UniqueModel.create(name='charlie')

            try:
                with test_db.atomic():
                    UniqueModel.create(name='charlie')
            except IntegrityError:
                pass
            else:
                assert False

            with test_db.atomic():
                UniqueModel.create(name='zaizee')

            try:
                with test_db.atomic():
                    UniqueModel.create(name='zaizee')
            except IntegrityError:
                pass
            else:
                assert False

            UniqueModel.create(name='mickey')
        UniqueModel.create(name='huey')

        names = [um.name
                 for um in UniqueModel.select().order_by(UniqueModel.name)]
        self.assertEqual(names, ['charlie', 'huey', 'mickey', 'zaizee'])

    def test_atomic_with_delete(self):
        for i in range(3):
            User.create(username='u%s' % i)

        with test_db.atomic():
            User.get(User.username == 'u1').delete_instance()

        usernames = [u.username for u in User.select()]
        self.assertEqual(sorted(usernames), ['u0', 'u2'])

        with test_db.atomic():
            with test_db.atomic():
                User.get(User.username == 'u2').delete_instance()

        usernames = [u.username for u in User.select()]
        self.assertEqual(usernames, ['u0'])
peewee-2.10.2/pwiz.py000077500000000000000000000155721316645060400144310ustar00rootroot00000000000000#!/usr/bin/env python
import datetime
import sys
from getpass import getpass
from optparse import OptionParser

from peewee import *
from peewee import print_
from peewee import __version__ as peewee_version
from playhouse.reflection import *


TEMPLATE = """from peewee import *%s

database = %s('%s', **%s)

class UnknownField(object):
    def __init__(self, *_, **__): pass

class BaseModel(Model):
    class Meta:
        database = database
"""

DATABASE_ALIASES = {
    MySQLDatabase: ['mysql', 'mysqldb'],
    PostgresqlDatabase: ['postgres', 'postgresql'],
    SqliteDatabase: ['sqlite', 'sqlite3'],
}

DATABASE_MAP = dict((value, key)
                    for key in DATABASE_ALIASES
                    for value in DATABASE_ALIASES[key])

def make_introspector(database_type, database_name, **kwargs):
    if database_type not in DATABASE_MAP:
        err('Unrecognized database, must be one of: %s' %
            ', '.join(DATABASE_MAP.keys()))
        sys.exit(1)

    schema = kwargs.pop('schema', None)
    DatabaseClass = DATABASE_MAP[database_type]
    db = DatabaseClass(database_name, **kwargs)
    return Introspector.from_database(db, schema=schema)

def print_models(introspector, tables=None, preserve_order=False):
    database = introspector.introspect(table_names=tables)

    print_(TEMPLATE % (
        introspector.get_additional_imports(),
        introspector.get_database_class().__name__,
        introspector.get_database_name(),
        repr(introspector.get_database_kwargs())))

    def _print_table(table, seen, accum=None):
        accum = accum or []
        foreign_keys = database.foreign_keys[table]
        for foreign_key in foreign_keys:
            dest = foreign_key.dest_table

            # In the event the destination table has already been pushed
            # for printing, then we have a reference cycle.
            if dest in accum and table not in accum:
                print_('# Possible reference cycle: %s' % dest)

            # If this is not a self-referential foreign key, and we have
            # not already processed the destination table, do so now.
            if dest not in seen and dest not in accum:
                seen.add(dest)
                if dest != table:
                    _print_table(dest, seen, accum + [table])

        print_('class %s(BaseModel):' % database.model_names[table])
        columns = database.columns[table].items()
        if not preserve_order:
            columns = sorted(columns)
        primary_keys = database.primary_keys[table]
        for name, column in columns:
            skip = all([
                name in primary_keys,
                name == 'id',
                len(primary_keys) == 1,
                column.field_class in introspector.pk_classes])
            if skip:
                continue
            if column.primary_key and len(primary_keys) > 1:
                # If we have a CompositeKey, then we do not want to
                # explicitly mark the columns as being primary keys.
                column.primary_key = False

            print_('    %s' % column.get_field())

        print_('')
        print_('    class Meta:')
        print_('        db_table = \'%s\'' % table)
        multi_column_indexes = database.multi_column_indexes(table)
        if multi_column_indexes:
            print_('        indexes = (')
            for fields, unique in sorted(multi_column_indexes):
                print_('            ((%s), %s),' % (
                    ', '.join("'%s'" % field for field in fields),
                    unique,
                ))
            print_('        )')

        if introspector.schema:
            print_('        schema = \'%s\'' % introspector.schema)
        if len(primary_keys) > 1:
            pk_field_names = sorted([
                field.name for col, field in columns
                if col in primary_keys])
            pk_list = ', '.join("'%s'" % pk for pk in pk_field_names)
            print_('        primary_key = CompositeKey(%s)' % pk_list)
        elif not primary_keys:
            print_('        primary_key = False')
        print_('')

        seen.add(table)

    seen = set()
    for table in sorted(database.model_names.keys()):
        if table not in seen:
            if not tables or table in tables:
                _print_table(table, seen)

def print_header(cmd_line, introspector):
    timestamp = datetime.datetime.now()
    print_('# Code generated by:')
    print_('# python -m pwiz %s' % cmd_line)
    print_('# Date: %s' % timestamp.strftime('%B %d, %Y %I:%M%p'))
    print_('# Database: %s' % introspector.get_database_name())
    print_('# Peewee version: %s' % peewee_version)
    print_('')

def err(msg):
    sys.stderr.write('\033[91m%s\033[0m\n' % msg)
    sys.stderr.flush()

def get_option_parser():
    parser = OptionParser(usage='usage: %prog [options] database_name')
    ao = parser.add_option
    ao('-H', '--host', dest='host')
    ao('-p', '--port', dest='port', type='int')
    ao('-u', '--user', dest='user')
    ao('-P', '--password', dest='password', action='store_true')
    engines = sorted(DATABASE_MAP)
    ao('-e', '--engine', dest='engine', default='postgresql',
       choices=engines,
       help=('Database type, e.g. sqlite, mysql or postgresql. Default '
             'is "postgresql".'))
    ao('-s', '--schema', dest='schema')
    ao('-t', '--tables', dest='tables',
       help=('Only generate the specified tables. Multiple table names '
             'should be separated by commas.'))
    ao('-i', '--info', dest='info', action='store_true',
       help=('Add database information and other metadata to top of the '
             'generated file.'))
    ao('-o', '--preserve-order', action='store_true', dest='preserve_order',
       help='Model definition column ordering matches source table.')
    return parser

def get_connect_kwargs(options):
    ops = ('host', 'port', 'user', 'schema')
    kwargs = dict((o, getattr(options, o)) for o in ops
                  if getattr(options, o))
    if options.password:
        kwargs['password'] = getpass()
    return kwargs


if __name__ == '__main__':
    raw_argv = sys.argv

    parser = get_option_parser()
    options, args = parser.parse_args()

    if options.preserve_order:
        try:
            from collections import OrderedDict
        except ImportError:
            err('Preserve order requires Python >= 2.7.')
            sys.exit(1)

    if len(args) < 1:
        err('Missing required parameter "database"')
        parser.print_help()
        sys.exit(1)

    connect = get_connect_kwargs(options)
    database = args[-1]

    tables = None
    if options.tables:
        tables = [table.strip() for table in options.tables.split(',')
                  if table.strip()]

    introspector = make_introspector(options.engine, database, **connect)
    if options.info:
        cmd_line = ' '.join(raw_argv[1:])
        print_header(cmd_line, introspector)

    print_models(introspector, tables, preserve_order=options.preserve_order)
peewee-2.10.2/runtests.py000077500000000000000000000237511316645060400153260ustar00rootroot00000000000000#!/usr/bin/env python
import optparse
import os
import shutil
import sys
import unittest

def collect():
    import tests
    runtests(tests, 1)

def runtests(suite, verbosity):
    results = unittest.TextTestRunner(verbosity=verbosity).run(suite)
    return results.failures, results.errors

def get_option_parser():
    parser = optparse.OptionParser()
    basic = optparse.OptionGroup(parser, 'Basic test options')
    basic.add_option(
        '-e', '--engine',
        dest='engine',
        help=('Database engine to test, one of '
              '[sqlite, postgres, mysql, apsw, sqlcipher, berkeleydb]'))
    basic.add_option('-v', '--verbosity', dest='verbosity', default=1,
                     type='int', help='Verbosity of output')

    suite = optparse.OptionGroup(parser, 'Simple test suite options')
    suite.add_option('-a', '--all', dest='all', default=False,
                     action='store_true',
                     help='Run all tests, including extras')
    suite.add_option('-x', '--extra', dest='extra', default=False,
                     action='store_true',
                     help='Run only extras tests')

    cases = optparse.OptionGroup(parser, 'Individual test module options')
    cases.add_option('--apsw', dest='apsw', default=False,
                     action='store_true',
                     help='apsw tests (requires apsw)')
    cases.add_option('--berkeleydb', dest='berkeleydb', default=False,
                     action='store_true',
                     help=('berkeleydb tests (requires pysqlite compiled '
                           'against berkeleydb)'))
    cases.add_option('--csv', dest='csv', default=False,
                     action='store_true',
                     help='csv tests')
    cases.add_option('--dataset', dest='dataset', default=False,
                     action='store_true',
                     help='dataset tests')
    cases.add_option('--db-url', dest='db_url', default=False,
                     action='store_true',
                     help='db url tests')
    cases.add_option('--djpeewee', dest='djpeewee', default=False,
                     action='store_true',
                     help='djpeewee tests')
    cases.add_option('--fields', dest='fields', default=False,
                     action='store_true',
                     help='extra field tests')
    cases.add_option('--flask', dest='flask', default=False,
                     action='store_true',
                     help='flask utils tests')
    cases.add_option('--gfk', dest='gfk', default=False,
                     action='store_true',
                     help='gfk tests')
    cases.add_option('--hybrid', dest='hybrid', default=False,
                     action='store_true',
                     help='hybrid property/method tests')
    cases.add_option('--kv', dest='kv', default=False,
                     action='store_true',
                     help='key/value store tests')
    cases.add_option('--manytomany', dest='manytomany', default=False,
                     action='store_true',
                     help='manytomany field tests')
    cases.add_option('--migrations', dest='migrations', default=False,
                     action='store_true',
                     help='migration helper tests (requires psycopg2)')
    cases.add_option('--pool', dest='pool', default=False,
                     action='store_true',
                     help='connection pool tests')
    cases.add_option('--postgres-ext', dest='postgres_ext', default=False,
                     action='store_true',
                     help='postgres_ext tests (requires psycopg2)')
    cases.add_option('--pwiz', dest='pwiz', default=False,
                     action='store_true',
                     help='pwiz, model code generator')
    cases.add_option('--read-slave', dest='read_slave', default=False,
                     action='store_true',
                     help='read_slave tests')
    cases.add_option('--reflection', dest='reflection', default=False,
                     action='store_true',
                     help='reflection schema introspector')
    cases.add_option('--signals', dest='signals', default=False,
                     action='store_true',
                     help='signals tests')
    cases.add_option('--shortcuts', dest='shortcuts', default=False,
                     action='store_true',
                     help='shortcuts tests')
    cases.add_option('--speedups', dest='speedups', default=False,
                     action='store_true',
                     help='speedups c extension tests')
    cases.add_option('--sqlcipher-ext', dest='sqlcipher', default=False,
                     action='store_true',
                     help='sqlcipher_ext tests (requires pysqlcipher)')
    cases.add_option('--sqliteq', dest='sqliteq', default=False,
                     action='store_true',
                     help='sqliteq tests')
    cases.add_option('--sqlite-c-ext', dest='sqlite_c', default=False,
                     action='store_true',
                     help='sqlite c extension tests')
    cases.add_option('--sqlite-ext', dest='sqlite_ext', default=False,
                     action='store_true',
                     help='sqlite_ext tests')
    cases.add_option('--sqlite-udf', dest='sqlite_udf', default=False,
                     action='store_true',
                     help='sqlite_udf tests')
    cases.add_option('--test-utils', dest='test_utils', default=False,
                     action='store_true',
                     help='test_utils tests')

    parser.add_option_group(basic)
    parser.add_option_group(suite)
    parser.add_option_group(cases)
    return parser

def collect_modules(options):
    modules = []
    xtra = lambda op: op or options.extra or options.all
    if xtra(options.apsw):
        try:
            from playhouse.tests import test_apsw
            modules.append(test_apsw)
        except ImportError:
            print_('Unable to import apsw tests, skipping')
    if xtra(options.berkeleydb):
        try:
            from playhouse.tests import test_berkeleydb
            modules.append(test_berkeleydb)
        except ImportError:
            print_('Unable to import berkeleydb tests, skipping')
    if xtra(options.csv):
        from playhouse.tests import test_csv_utils
        modules.append(test_csv_utils)
    if xtra(options.dataset):
        from playhouse.tests import test_dataset
        modules.append(test_dataset)
    if xtra(options.db_url):
        from playhouse.tests import test_db_url
        modules.append(test_db_url)
    if xtra(options.djpeewee):
        from playhouse.tests import test_djpeewee
        modules.append(test_djpeewee)
    if xtra(options.fields):
        from playhouse.tests import test_extra_fields
        from playhouse.tests import test_manytomany
        modules.append(test_extra_fields)
        if test_manytomany not in modules:
            modules.append(test_manytomany)
    if xtra(options.flask):
        try:
            import flask
        except ImportError:
            print_('Unable to import Flask tests, Flask is not installed.')
        else:
            from playhouse.tests import test_flask_utils
            modules.append(test_flask_utils)
    if xtra(options.gfk):
        from playhouse.tests import test_gfk
        modules.append(test_gfk)
    if xtra(options.hybrid):
        from playhouse.tests import test_hybrid
        modules.append(test_hybrid)
    if xtra(options.kv):
        from playhouse.tests import test_kv
        modules.append(test_kv)
    if xtra(options.manytomany):
        from playhouse.tests import test_manytomany
        if test_manytomany not in modules:
            modules.append(test_manytomany)
    if xtra(options.migrations):
        try:
            from playhouse.tests import test_migrate
            modules.append(test_migrate)
        except ImportError:
            print_('Unable to import migration tests, skipping')
    if xtra(options.pool):
        try:
            from playhouse.tests import test_pool
            modules.append(test_pool)
        except ImportError:
            print_('Unable to import connection pool tests, skipping')
    if xtra(options.postgres_ext):
        try:
            from playhouse.tests import test_postgres
            modules.append(test_postgres)
        except ImportError:
            print_('Unable to import postgres-ext tests, skipping')
    if xtra(options.pwiz):
        from playhouse.tests import test_pwiz
        modules.append(test_pwiz)
    if xtra(options.read_slave):
        from playhouse.tests import test_read_slave
        modules.append(test_read_slave)
    if xtra(options.reflection):
        from playhouse.tests import test_reflection
        modules.append(test_reflection)
    if xtra(options.signals):
        from playhouse.tests import test_signals
        modules.append(test_signals)
    if xtra(options.shortcuts):
        from playhouse.tests import test_shortcuts
        modules.append(test_shortcuts)
    if xtra(options.speedups):
        try:
            from playhouse.tests import test_speedups
            modules.append(test_speedups)
        except ImportError:
            print_('Unable to import speedups tests, skipping')
    if xtra(options.sqlcipher):
        try:
            from playhouse.tests import test_sqlcipher_ext
            modules.append(test_sqlcipher_ext)
        except ImportError:
            print_('Unable to import pysqlcipher tests, skipping')
    if xtra(options.sqliteq):
        from playhouse.tests import test_sqliteq
        modules.append(test_sqliteq)
    if xtra(options.sqlite_c):
        try:
            from playhouse.tests import test_sqlite_c_ext
            modules.append(test_sqlite_c_ext)
        except ImportError:
            print_('Unable to import SQLite C extension tests, skipping')
    if xtra(options.sqlite_ext):
        from playhouse.tests import test_sqlite_ext
        modules.append(test_sqlite_ext)
    if xtra(options.sqlite_udf):
        from playhouse.tests import test_sqlite_udf
        modules.append(test_sqlite_udf)
    if xtra(options.test_utils):
        from playhouse.tests import test_test_utils
        modules.append(test_test_utils)

    if not modules or options.all:
        import tests
        modules.insert(0, tests)

    return modules


if __name__ == '__main__':
    parser = get_option_parser()
    options, args = parser.parse_args()

    if options.engine:
        os.environ['PEEWEE_TEST_BACKEND'] = options.engine
    os.environ['PEEWEE_TEST_VERBOSITY'] = str(options.verbosity)

    from peewee import print_

    suite = unittest.TestSuite()
    for module in collect_modules(options):
        print_('Adding tests for "%s"' % module.__name__)
        module_suite = unittest.TestLoader().loadTestsFromModule(module)
        suite.addTest(module_suite)

    failures, errors = runtests(suite, options.verbosity)

    files_to_delete = [
        'peewee_test.db',
        'peewee_test',
        'tmp.db',
        'peewee_test.bdb.db',
        'peewee_test.cipher.db']
    paths_to_delete = ['peewee_test.bdb.db-journal']
    for filename in files_to_delete:
        if os.path.exists(filename):
            os.unlink(filename)
    for path in paths_to_delete:
        if os.path.exists(path):
            shutil.rmtree(path)

    if errors:
        sys.exit(2)
    elif failures:
        sys.exit(1)

    sys.exit(0)
peewee-2.10.2/setup.py000066400000000000000000000050611316645060400145670ustar00rootroot00000000000000import os
import platform
import sys
import warnings

from distutils.core import setup
from distutils.extension import Extension
from distutils.version import LooseVersion

f = open(os.path.join(os.path.dirname(__file__), 'README.rst'))
readme = f.read()
f.close()

setup_kwargs = {}
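# (setup_kwargs is extended below with cmdclass/ext_modules when a
# sufficiently recent Cython is available to build the C extensions.)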
cython_min_version = '0.22.1'

try:
    from Cython.Distutils import build_ext
    from Cython import __version__ as cython_version
except ImportError:
    cython_installed = False
    warnings.warn('Cython C extensions for peewee will NOT be built, because '
                  'Cython does not seem to be installed. To enable Cython C '
                  'extensions, install Cython >=' + cython_min_version + '.')
else:
    if platform.python_implementation() != 'CPython':
        cython_installed = False
        warnings.warn('Cython C extensions disabled as you are not using '
                      'CPython.')
    elif LooseVersion(cython_version) < LooseVersion(cython_min_version):
        cython_installed = False
        warnings.warn('Cython C extensions for peewee will NOT be built, '
                      'because the installed Cython version '
                      '(' + cython_version + ') is too old. To enable Cython '
                      'C extensions, install Cython >=' +
                      cython_min_version + '.')
    else:
        cython_installed = True

speedups_ext_module = Extension(
    'playhouse._speedups',
    ['playhouse/_speedups.pyx'])
sqlite_udf_module = Extension(
    'playhouse._sqlite_udf',
    ['playhouse/_sqlite_udf.pyx'])
sqlite_ext_module = Extension(
    'playhouse._sqlite_ext',
    ['playhouse/_sqlite_ext.pyx'])

ext_modules = []
if cython_installed:
    ext_modules.extend([
        speedups_ext_module,
        sqlite_udf_module,
        sqlite_ext_module])

if ext_modules:
    setup_kwargs.update(
        cmdclass={'build_ext': build_ext},
        ext_modules=ext_modules)

setup(
    name='peewee',
    version=__import__('peewee').__version__,
    description='a little orm',
    long_description=readme,
    author='Charles Leifer',
    author_email='coleifer@gmail.com',
    url='http://github.com/coleifer/peewee/',
    packages=['playhouse'],
    py_modules=['peewee', 'pwiz'],
    classifiers=[
        'Development Status :: 5 - Production/Stable',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: MIT License',
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 3',
    ],
    scripts=['pwiz.py', 'playhouse/pskel'],
    **setup_kwargs
)
peewee-2.10.2/tests.py000066400000000000000000000022231316645060400145660ustar00rootroot00000000000000"""
Aggregate all the test modules and run from the command-line. For
information about running tests, see the README located in the
`playhouse/tests` directory.
"""
import sys
import unittest

from playhouse.tests.test_apis import *
from playhouse.tests.test_compound_queries import *
from playhouse.tests.test_database import *
from playhouse.tests.test_fields import *
from playhouse.tests.test_helpers import *
from playhouse.tests.test_introspection import *
from playhouse.tests.test_keys import *
from playhouse.tests.test_models import *
from playhouse.tests.test_queries import *
from playhouse.tests.test_query_results import *
from playhouse.tests.test_transactions import *


if __name__ == '__main__':
    from peewee import print_
    print_("""\033[1;31m
 ______   ______     ______     __     __     ______     ______
/\  == \ /\  ___\   /\  ___\   /\ \  _ \ \   /\  ___\   /\  ___\\
\ \  _-/ \ \  __\   \ \  __\   \ \ \/ ".\ \  \ \  __\   \ \  __\\
 \ \_\    \ \_____\  \ \_____\  \ \__/".~\_\  \ \_____\  \ \_____\\
  \/_/     \/_____/   \/_____/   \/_/   \/_/   \/_____/   \/_____/
\033[0m""")
    unittest.main(argv=sys.argv)
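# A sketch of typical invocations (the flags below are the ones defined in
# runtests.py above; adjust for your environment):
#
#     python runtests.py --all           # full suite, including extras
#     python runtests.py -e postgres     # target a specific backend
#     python tests.py                    # just this core aggregate suite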