sqlitedict-2.1.0/.github/workflows/python-package.yml

name: Test

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - python-version: '3.7'
          - python-version: '3.8'
          - python-version: '3.9'
          - python-version: '3.10'
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Update pip
        run: python -m pip install -U coverage flake8 pip pytest pytest-coverage pytest-benchmark
      - name: Flake8
        run: flake8 sqlitedict.py tests
      - name: Install sqlitedict
        run: python setup.py install
      - name: Prepare tests subdirectory
        run: |
          rm -f tests/db
          mkdir -p tests/db
      - name: Run tests
        run: pytest tests --cov=sqlitedict
      - name: Run benchmarks
        run: pytest benchmarks
      - name: Run doctests
        run: python -m doctest README.rst

sqlitedict-2.1.0/.github/workflows/release.yml

name: Release to PyPI

on:
  push:
    tags:
      - 'v*.*.*'

jobs:
  tarball:
    if: github.event_name == 'push'
    timeout-minutes: 1
    runs-on: ubuntu-20.04
    env:
      PYPI_USERNAME: ${{ secrets.PYPI_USERNAME }}
      PYPI_PASSWORD: ${{ secrets.PYPI_PASSWORD }}
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-python@v1
        with:
          python-version: "3.8.x"
      # https://github.community/t/how-to-get-just-the-tag-name/16241/4
      - name: Extract the version number
        id: get_version
        run: |
          echo ::set-output name=V::$(python sqlitedict.py)
      - name: Install
dependencies
        run: |
          python -m pip install --upgrade pip
          python -m venv venv
          . venv/bin/activate
          pip install twine
      - name: Build tarball
        run: |
          . venv/bin/activate
          python setup.py sdist
      - name: Upload tarball to PyPI
        run: |
          . venv/bin/activate
          twine upload dist/sqlitedict-${{ steps.get_version.outputs.V }}.tar.gz -u ${{ env.PYPI_USERNAME }} -p ${{ env.PYPI_PASSWORD }}

sqlitedict-2.1.0/.gitignore

*.py[co]

# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg

# Installer logs
pip-log.txt

# Unit test / coverage reports
.coverage
.tox
.cache

# sqlite databases
*.sqlite

sqlitedict-2.1.0/CHANGELOG.md

# Changes

## 2.1.0, 2022-12-03

- Introduced weak references (PR [#165](https://github.com/RaRe-Technologies/sqlitedict/pull/165), [@mpenkov](https://github.com/mpenkov))
- Properly handled race condition (PR [#164](https://github.com/RaRe-Technologies/sqlitedict/pull/164), [@mpenkov](https://github.com/mpenkov))
- Added optional (not enabled by default) ability to encode keys (PR [#161](https://github.com/RaRe-Technologies/sqlitedict/pull/161), [@rdyro](https://github.com/rdyro))
- Changed logging from info to debug (PR [#163](https://github.com/RaRe-Technologies/sqlitedict/pull/163), [@nvllsvm](https://github.com/nvllsvm))
- Updated supported versions in readme (PR [#158](https://github.com/RaRe-Technologies/sqlitedict/pull/158), [@plague006](https://github.com/plague006))
- Corrected spelling mistakes (PR [#166](https://github.com/RaRe-Technologies/sqlitedict/pull/166), [@EdwardBetts](https://github.com/EdwardBetts))

## 2.0.0, 2022-03-04

This release supports Python 3.7 and above. If you need support for older versions, please use the previous release, 1.7.0.
- Do not create tables when in read-only mode (PR [#128](https://github.com/RaRe-Technologies/sqlitedict/pull/128), [@hholst80](https://github.com/hholst80))
- Use tempfile.mkstemp for safer temp file creation (PR [#106](https://github.com/RaRe-Technologies/sqlitedict/pull/106), [@ergoithz](https://github.com/ergoithz))
- Fix deadlock where opening database fails (PR [#107](https://github.com/RaRe-Technologies/sqlitedict/pull/107), [@padelt](https://github.com/padelt))
- Make outer_stack a parameter (PR [#148](https://github.com/RaRe-Technologies/sqlitedict/pull/148), [@mpenkov](https://github.com/mpenkov))

## 1.7.0, 2018-09-04

* Add a blocking commit after each modification if autocommit is enabled. (PR [#94](https://github.com/RaRe-Technologies/sqlitedict/pull/94), [@endlisnis](https://github.com/endlisnis))
* Clean up license file names (PR [#99](https://github.com/RaRe-Technologies/sqlitedict/pull/99), [@r-barnes](https://github.com/r-barnes))
* Support double quotes in table names (PR [#113](https://github.com/RaRe-Technologies/sqlitedict/pull/113), [@vcalv](https://github.com/vcalv))

## 1.6.0, 2018-09-18

* Add `get_tablenames` method (@transfluxus, #72)
* Add license files to dist (@toddrme2178, #79)
* Replace `easy_install` -> `pip` in README (@thechief389, #77)
* Update build badge (@menshikh-iv)

## 1.5.0, 2017-02-13

* Add encode and decode parameters to store json, compressed or pickled objects (@erosennin, #65)
* Python 3.6 fix: commit before turning off synchronous (@bit, #59)
* Update sqlite version to 3.8.2 (@tmylk, #63)

## 1.4.2, 2016-08-26

* Fix some hangs on closing. Let __enter__ re-open a closed connection. (@ecederstrand, #55)
* Surround table names with quotes.
(@Digenis, #50) ## 1.4.1, 2016-05-15 * Read-only mode (@nrhine1, #37) * Check file exists before deleting (@adibo, #39) * AttributeError after SqliteDict is closed (@guyskk, #40) * Python 3.5 support (@jtatum, #47) * Pickle when updating with 2-tuples seq (@Digenis, #49) * Fix exit errors: TypeError("'NoneType' object is not callable",) (@janrygl, #45) ## 1.4.0 * fix regression where iterating over keys/values/items returned a full list instead of iterator ## 1.3.0 * improve error handling in multithreading (PR #28); 100% test coverage. ## 1.2.0 * full python 3 support, continuous testing via Travis CI. sqlitedict-2.1.0/LICENSE.md000066400000000000000000000262111434265045000152700ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright (c) 2011-now `Radim Řehůřek `_ and contributors. 
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

sqlitedict-2.1.0/MAINTAINERS.md

# Maintainers' Notes

To release a new version:

1. Update CHANGELOG.md
2. Bump version
3. Test!
4. Add a tag and push it to upstream

## Updating CHANGELOG.md

Look through the list of [recently closed pull requests](https://github.com/RaRe-Technologies/sqlitedict/pulls?q=is%3Apr+is%3Aclosed). For each pull request, run:

```bash
release/summarize_pr.py {prid}
```

and copy-paste the result into CHANGELOG.md.

## Bumping version

Do this in two places:

1. sqlitedict.py
2. setup.py

## Testing

Run:

```bash
pytest tests
```

All tests should pass.

## Tagging

Run:

```bash
git tag v{version}
git push origin --tags
```

The leading "v" is important: our CI uses it to identify the release. Once the tag is pushed to GitHub, CI takes care of everything, including uploading the release to PyPI.
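The "Bumping version" step above is easy to get half-done, since the version string lives in two files. A quick pre-tag sanity check along these lines can confirm that sqlitedict.py and setup.py agree (a sketch: `extract_version` and the stand-in source strings are hypothetical, and the regex assumes the single-quoted version literals both files currently use):

```python
import re


def extract_version(source: str) -> str:
    """Pull a version string like '2.1.0' out of Python source text.

    Matches both `__version__ = '...'` (sqlitedict.py) and
    `version='...'` (setup.py), assuming single-quoted literals.
    """
    match = re.search(r"(?:__version__\s*=|version\s*=)\s*'([^']+)'", source)
    if match is None:
        raise ValueError('no version string found')
    return match.group(1)


# The check itself: both files must agree before running `git tag v{version}`.
# Stand-in strings here; in practice you would read the two files from disk.
module_src = "__version__ = '2.1.0'"
setup_src = "    version='2.1.0',"
assert extract_version(module_src) == extract_version(setup_src)
```

Wiring this into the release workflow (or a pre-push hook) would fail fast instead of publishing a tarball whose metadata disagrees with the module.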
sqlitedict-2.1.0/MANIFEST.in000066400000000000000000000001701434265045000154160ustar00rootroot00000000000000include README.rst include setup.py include sqlitedict.py include Makefile include LICENSE.md recursive-include tests * sqlitedict-2.1.0/Makefile000066400000000000000000000007201434265045000153210ustar00rootroot00000000000000test-all: @ echo '- removing old data' @ rm -f -R tests/db/ @ echo '- creating new tests/db/ (required for tests)' @ mkdir -p tests/db @ nosetests --cover-package=sqlitedict --verbosity=1 --cover-erase -l DEBUG test-all-with-coverage: @ echo '- removing old data' @ rm -f -R tests/db/ @ echo '- creating new tests/db/ (required for tests)' @ mkdir -p tests/db @ nosetests --cover-package=sqlitedict --verbosity=1 --cover-erase --with-coverage -l DEBUG sqlitedict-2.1.0/README.rst000066400000000000000000000210611434265045000153510ustar00rootroot00000000000000=================================================== sqlitedict -- persistent ``dict``, backed by SQLite =================================================== |GithubActions|_ |License|_ .. |GithubActions| image:: https://github.com/RaRe-Technologies/sqlitedict/actions/workflows/python-package.yml/badge.svg .. |Downloads| image:: https://img.shields.io/pypi/dm/sqlitedict.svg .. |License| image:: https://img.shields.io/pypi/l/sqlitedict.svg .. _GithubActions: https://github.com/RaRe-Technologies/sqlitedict/actions/workflows/python-package.yml .. _Downloads: https://pypi.python.org/pypi/sqlitedict .. _License: https://pypi.python.org/pypi/sqlitedict A lightweight wrapper around Python's sqlite3 database with a simple, Pythonic dict-like interface and support for multi-thread access: Usage ===== Write ----- .. code-block:: python >>> from sqlitedict import SqliteDict >>> db = SqliteDict("example.sqlite") >>> >>> db["1"] = {"name": "first item"} >>> db["2"] = {"name": "second item"} >>> db["3"] = {"name": "yet another item"} >>> >>> # Commit to save the objects. 
>>> db.commit() >>> >>> db["4"] = {"name": "yet another item"} >>> # Oops, forgot to commit here, that object will never be saved. >>> # Always remember to commit, or enable autocommit with SqliteDict("example.sqlite", autocommit=True) >>> # Autocommit is off by default for performance. >>> >>> db.close() Read ---- .. code-block:: python >>> from sqlitedict import SqliteDict >>> db = SqliteDict("example.sqlite") >>> >>> print("There are %d items in the database" % len(db)) There are 3 items in the database >>> >>> # Standard dict interface. items() values() keys() etc... >>> for key, item in db.items(): ... print("%s=%s" % (key, item)) 1={'name': 'first item'} 2={'name': 'second item'} 3={'name': 'yet another item'} >>> >>> db.close() Efficiency ---------- By default, sqlitedict's exception handling favors verbosity over efficiency. It extracts and outputs the outer exception stack to the error logs. If you favor efficiency, then initialize the DB with outer_stack=False. .. code-block:: python >>> from sqlitedict import SqliteDict >>> db = SqliteDict("example.sqlite", outer_stack=False) # True is the default >>> db[1] {'name': 'first item'} Context Manager --------------- .. code-block:: python >>> from sqlitedict import SqliteDict >>> >>> # The database is automatically closed when leaving the with section. >>> # Uncommitted objects are not saved on close. REMEMBER TO COMMIT! >>> >>> with SqliteDict("example.sqlite") as db: ... print("There are %d items in the database" % len(db)) There are 3 items in the database Tables ------ A database file can store multiple tables. A default table is used when no table name is specified. Note: Writes are serialized, having multiple tables does not improve performance. .. 
code-block:: python

    >>> from sqlitedict import SqliteDict
    >>>
    >>> products = SqliteDict("example.sqlite", tablename="product", autocommit=True)
    >>> manufacturers = SqliteDict("example.sqlite", tablename="manufacturer", autocommit=True)
    >>>
    >>> products["1"] = {"name": "first item", "manufacturer_id": "1"}
    >>> products["2"] = {"name": "second item", "manufacturer_id": "1"}
    >>>
    >>> manufacturers["1"] = {"manufacturer_name": "afactory", "location": "US"}
    >>> manufacturers["2"] = {"manufacturer_name": "anotherfactory", "location": "UK"}
    >>>
    >>> tables = products.get_tablenames('example.sqlite')
    >>> print(tables)
    ['unnamed', 'product', 'manufacturer']
    >>>
    >>> products.close()
    >>> manufacturers.close()

In case you're wondering, the unnamed table comes from the previous examples, where we did not specify a table name.

Serialization
-------------

Keys are strings. Values are any serializable object.

By default Pickle is used internally to (de)serialize the values.

It's possible to use a custom (de)serializer, notably for JSON and for compression.

.. code-block:: python

    >>> # Use JSON instead of pickle
    >>> import json
    >>> with SqliteDict("example.sqlite", encode=json.dumps, decode=json.loads) as mydict:
    ...     pass

    >>> # Apply zlib compression after pickling
    >>> import zlib, pickle, sqlite3
    >>>
    >>> def my_encode(obj):
    ...     return sqlite3.Binary(zlib.compress(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL)))
    >>>
    >>> def my_decode(obj):
    ...     return pickle.loads(zlib.decompress(bytes(obj)))
    >>>
    >>> with SqliteDict("example.sqlite", encode=my_encode, decode=my_decode) as mydict:
    ...     pass

It's also possible to use a custom (de)serializer for keys to allow non-string keys.

.. code-block:: python

    >>> # Use key encoding instead of default string keys only
    >>> from sqlitedict import encode_key, decode_key
    >>> with SqliteDict("example.sqlite", encode_key=encode_key, decode_key=decode_key) as mydict:
    ...
pass More ---- Functions are well documented, see docstrings directly in ``sqlitedict.py`` or call ``help(sqlitedict)``. **Beware**: because of Python semantics, ``sqlitedict`` cannot know when a mutable SqliteDict-backed entry was modified in RAM. You'll need to explicitly assign the mutated object back to SqliteDict: .. code-block:: python >>> from sqlitedict import SqliteDict >>> db = SqliteDict("example.sqlite") >>> db["colors"] = {"red": (255, 0, 0)} >>> db.commit() >>> >>> colors = db["colors"] >>> colors["blue"] = (0, 0, 255) # sqlite DB not updated here! >>> db["colors"] = colors # now updated >>> >>> db.commit() # remember to commit (or set autocommit) >>> db.close() Features ======== * Values can be **any picklable objects** (uses ``pickle`` with the highest protocol). * Support for **multiple tables** (=dicts) living in the same database file. * Support for **access from multiple threads** to the same connection (needed by e.g. Pyro). Vanilla sqlite3 gives you ``ProgrammingError: SQLite objects created in a thread can only be used in that same thread.`` Concurrent requests are still serialized internally, so this "multithreaded support" **doesn't** give you any performance benefits. It is a work-around for sqlite limitations in Python. * Support for **custom serialization or compression**: .. code-block:: python # use JSON instead of pickle >>> import json >>> mydict = SqliteDict('./my_db.sqlite', encode=json.dumps, decode=json.loads) # apply zlib compression after pickling >>> import zlib, pickle, sqlite3 >>> def my_encode(obj): ... return sqlite3.Binary(zlib.compress(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL))) >>> def my_decode(obj): ... return pickle.loads(zlib.decompress(bytes(obj))) >>> mydict = SqliteDict('./my_db.sqlite', encode=my_encode, decode=my_decode) * sqlite is efficient and can work effectively with large databases (multi gigabytes), not limited by memory. * sqlitedict is mostly a thin wrapper around sqlite. 
* ``items()``, ``keys()`` and ``values()`` iterate one by one; the rows are loaded in a worker thread and queued in memory.
* ``len()`` asks sqlite to count rows, which scans the whole table.
* For better performance, write objects in batch and ``commit()`` once.

Installation
============

The module has no dependencies beyond Python itself. The minimum supported Python version is 3.7, continuously tested on Python 3.7, 3.8, 3.9, and 3.10 via `GitHub Actions <https://github.com/RaRe-Technologies/sqlitedict/actions/workflows/python-package.yml>`_.

Install or upgrade with::

    pip install -U sqlitedict

or from the `source tar.gz `_::

    python setup.py install

Contributions
=============

Testing
-------

Install::

    $ pip install pytest coverage pytest-coverage

To perform all tests::

    $ mkdir -p tests/db
    $ pytest tests
    $ python -m doctest README.rst

To perform all tests with coverage::

    $ pytest tests --cov=sqlitedict

Comments, bug reports
---------------------

``sqlitedict`` resides on `github `_. You can file issues or pull requests there.

License
=======

``sqlitedict`` is open source software released under the `Apache 2.0 license `_.

Copyright (c) 2011-now `Radim Řehůřek `_ and contributors.

Housekeeping
============

Clean up the test database to keep each doctest run idempotent:

.. code-block:: python

    >>> import os
    >>> if __name__ == '__main__':
    ...
os.unlink('example.sqlite') sqlitedict-2.1.0/benchmarks/000077500000000000000000000000001434265045000157775ustar00rootroot00000000000000sqlitedict-2.1.0/benchmarks/test_insert.py000066400000000000000000000004401434265045000207120ustar00rootroot00000000000000import tempfile from sqlitedict import SqliteDict def insert(): with tempfile.NamedTemporaryFile() as tmp: for j in range(100): with SqliteDict(tmp.name) as d: d["tmp"] = j d.commit() def test(benchmark): benchmark(insert) sqlitedict-2.1.0/release/000077500000000000000000000000001434265045000153025ustar00rootroot00000000000000sqlitedict-2.1.0/release/summarize_pr.py000077500000000000000000000014741434265045000204020ustar00rootroot00000000000000#!/usr/bin/env python import json import sys import urllib.request def copy_to_clipboard(text): try: import pyperclip except ImportError: print('pyperclip is missing.', file=sys.stderr) print('copy-paste the following text manually:', file=sys.stderr) print(' ' + text, file=sys.stderr) else: pyperclip.copy(text) for prid in sys.argv[1:]: url = "https://api.github.com/repos/RaRe-Technologies/sqlitedict/pulls/%s" % prid with urllib.request.urlopen(url) as fin: prinfo = json.load(fin) prinfo['user_login'] = prinfo['user']['login'] prinfo['user_html_url'] = prinfo['user']['html_url'] text = '- %(title)s (PR [#%(number)s](%(html_url)s), [@%(user_login)s](%(user_html_url)s))' % prinfo print(text) sqlitedict-2.1.0/setup.py000077500000000000000000000047071434265045000154070ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # # This code is distributed under the terms and conditions # from the Apache License, Version 2.0 # # http://opensource.org/licenses/apache2.0.php """ Run with: python ./setup.py install """ import os import io import subprocess import setuptools.command.develop from setuptools import setup def read(fname): path = os.path.join(os.path.dirname(__file__), fname) return io.open(path, encoding='utf8').read() class 
SetupDevelop(setuptools.command.develop.develop): """Docstring is overwritten.""" def run(self): """ Prepare environment for development. - Ensures 'nose' and 'coverage.py' are installed for testing. - Call super()'s run method. """ subprocess.check_call(('pip', 'install', 'nose', 'coverage')) # Call super() (except develop is an old-style class, so we must call # directly). The effect is that the development egg-link is installed. setuptools.command.develop.develop.run(self) SetupDevelop.__doc__ = setuptools.command.develop.develop.__doc__ setup( name='sqlitedict', version='2.1.0', description='Persistent dict in Python, backed up by sqlite3 and pickle, multithread-safe.', long_description=read('README.rst'), py_modules=['sqlitedict'], # there is a bug in python2.5, preventing distutils from using any non-ascii characters :( # http://bugs.python.org/issue2562 author='Radim Rehurek, Victor R. Escobar, Andrey Usov, Prasanna Swaminathan, Jeff Quast', author_email="me@radimrehurek.com", maintainer='Radim Rehurek', maintainer_email='me@radimrehurek.com', url='https://github.com/piskvorky/sqlitedict', download_url='http://pypi.python.org/pypi/sqlitedict', keywords='sqlite, persistent dict, multithreaded', license='Apache 2.0', platforms='any', classifiers=[ # from http://pypi.python.org/pypi?%3Aaction=list_classifiers 'Development Status :: 5 - Production/Stable', 'Environment :: Console', 'Intended Audience :: Developers', 'License :: OSI Approved :: Apache Software License', 'Operating System :: OS Independent', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Programming Language :: Python :: 3.10', 'Topic :: Database :: Front-Ends', ], cmdclass={'develop': SetupDevelop}, ) sqlitedict-2.1.0/sqlitedict.py000077500000000000000000000630701434265045000164120ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # # This code is distributed under the terms and conditions # from 
the Apache License, Version 2.0
#
# http://opensource.org/licenses/apache2.0.php
#
# This code was inspired by:
# * http://code.activestate.com/recipes/576638-draft-for-an-sqlite3-based-dbm/
# * http://code.activestate.com/recipes/526618/

"""
A lightweight wrapper around Python's sqlite3 database, with a dict-like
interface and multi-thread access support::

>>> mydict = SqliteDict('some.db', autocommit=True) # the mapping will be persisted to file `some.db`
>>> mydict['some_key'] = any_picklable_object
>>> print(mydict['some_key'])
>>> print(len(mydict)) # etc... all dict functions work

Pickle is used internally to serialize the values. Keys are strings.

If you don't use autocommit (default is no autocommit for performance), then
don't forget to call `mydict.commit()` when done with a transaction.
"""

import sqlite3
import os
import sys
import tempfile
import threading
import logging
import traceback
from base64 import b64decode, b64encode
import weakref

__version__ = '2.1.0'


def reraise(tp, value, tb=None):
    if value is None:
        value = tp()
    if value.__traceback__ is not tb:
        raise value.with_traceback(tb)
    raise value


try:
    from cPickle import dumps, loads, HIGHEST_PROTOCOL as PICKLE_PROTOCOL
except ImportError:
    from pickle import dumps, loads, HIGHEST_PROTOCOL as PICKLE_PROTOCOL

# some Python 3 vs 2 imports
try:
    from collections import UserDict as DictClass
except ImportError:
    from UserDict import DictMixin as DictClass

try:
    from queue import Queue
except ImportError:
    from Queue import Queue

logger = logging.getLogger(__name__)

#
# There's a thread that holds the actual SQL connection (SqliteMultithread).
# We communicate with this thread via queues (request and responses).
# The requests can either be SQL commands or one of the "special" commands
# below:
#
# _REQUEST_CLOSE: request that the SQL connection be closed
# _REQUEST_COMMIT: request that any changes be committed to the DB
#
# Responses are either SQL records (e.g.
results of a SELECT) or the magic # _RESPONSE_NO_MORE command, which indicates nothing else will ever be written # to the response queue. # _REQUEST_CLOSE = '--close--' _REQUEST_COMMIT = '--commit--' _RESPONSE_NO_MORE = '--no more--' # # We work with weak references for better memory efficiency. # Dereferencing, checking the referent queue still exists, and putting to it # is boring and repetitive, so we have a _put function to handle it for us. # _PUT_OK, _PUT_REFERENT_DESTROYED, _PUT_NOOP = 0, 1, 2 def _put(queue_reference, item): if queue_reference is not None: queue = queue_reference() if queue is None: # # We got a reference to a queue, but that queue no longer exists # retval = _PUT_REFERENT_DESTROYED else: queue.put(item) retval = _PUT_OK del queue return retval # # We didn't get a reference to a queue, so do nothing (no-op). # return _PUT_NOOP def open(*args, **kwargs): """See documentation of the SqliteDict class.""" return SqliteDict(*args, **kwargs) def encode(obj): """Serialize an object using pickle to a binary format accepted by SQLite.""" return sqlite3.Binary(dumps(obj, protocol=PICKLE_PROTOCOL)) def decode(obj): """Deserialize objects retrieved from SQLite.""" return loads(bytes(obj)) def encode_key(key): """Serialize a key using pickle + base64 encoding to text accepted by SQLite.""" return b64encode(dumps(key, protocol=PICKLE_PROTOCOL)).decode("ascii") def decode_key(key): """Deserialize a key retrieved from SQLite.""" return loads(b64decode(key.encode("ascii"))) def identity(obj): """Identity f(x) = x function for encoding/decoding.""" return obj class SqliteDict(DictClass): VALID_FLAGS = ['c', 'r', 'w', 'n'] def __init__(self, filename=None, tablename='unnamed', flag='c', autocommit=False, journal_mode="DELETE", encode=encode, decode=decode, encode_key=identity, decode_key=identity, timeout=5, outer_stack=True): """ Initialize a thread-safe sqlite-backed dictionary. The dictionary will be a table `tablename` in database file `filename`. 
A single file (=database) may contain multiple tables. If no `filename` is given, a random file in temp will be used (and deleted from temp once the dict is closed/deleted). If you enable `autocommit`, changes will be committed after each operation (more inefficient but safer). Otherwise, changes are committed on `self.commit()`, `self.clear()` and `self.close()`. Set `journal_mode` to 'OFF' if you're experiencing sqlite I/O problems or if you need performance and don't care about crash-consistency. Set `outer_stack` to False to disable the output of the outer exception to the error logs. This may improve the efficiency of sqlitedict operation at the expense of a detailed exception trace. The `flag` parameter. Exactly one of: 'c': default mode, open for read/write, creating the db/table if necessary. 'w': open for r/w, but drop `tablename` contents first (start with empty table) 'r': open as read-only 'n': create a new database (erasing any existing tables, not just `tablename`!). The `encode` and `decode` parameters are used to customize how the values are serialized and deserialized. The `encode` parameter must be a function that takes a single Python object and returns a serialized representation. The `decode` function must be a function that takes the serialized representation produced by `encode` and returns a deserialized Python object. The default is to use pickle. The `timeout` defines the maximum time (in seconds) to wait for initial Thread startup. """ self.in_temp = filename is None if self.in_temp: fd, filename = tempfile.mkstemp(prefix='sqldict') os.close(fd) if flag not in SqliteDict.VALID_FLAGS: raise RuntimeError("Unrecognized flag: %s" % flag) self.flag = flag if flag == 'n': if os.path.exists(filename): os.remove(filename) dirname = os.path.dirname(filename) if dirname: if not os.path.exists(dirname): raise RuntimeError('Error! 
The directory does not exist, %s' % dirname) self.filename = filename # Use standard SQL escaping of double quote characters in identifiers, by doubling them. # See https://github.com/RaRe-Technologies/sqlitedict/pull/113 self.tablename = tablename.replace('"', '""') self.autocommit = autocommit self.journal_mode = journal_mode self.encode = encode self.decode = decode self.encode_key = encode_key self.decode_key = decode_key self._outer_stack = outer_stack logger.debug("opening Sqlite table %r in %r" % (tablename, filename)) self.conn = self._new_conn() if self.flag == 'r': if self.tablename not in SqliteDict.get_tablenames(self.filename): msg = 'Refusing to create a new table "%s" in read-only DB mode' % tablename raise RuntimeError(msg) else: MAKE_TABLE = 'CREATE TABLE IF NOT EXISTS "%s" (key TEXT PRIMARY KEY, value BLOB)' % self.tablename self.conn.execute(MAKE_TABLE) self.conn.commit() if flag == 'w': self.clear() def _new_conn(self): return SqliteMultithread( self.filename, autocommit=self.autocommit, journal_mode=self.journal_mode, outer_stack=self._outer_stack, ) def __enter__(self): if not hasattr(self, 'conn') or self.conn is None: self.conn = self._new_conn() return self def __exit__(self, *exc_info): self.close() def __str__(self): return "SqliteDict(%s)" % (self.filename) def __repr__(self): return str(self) # no need of something complex def __len__(self): # `select count (*)` is super slow in sqlite (does a linear scan!!) # As a result, len() is very slow too once the table size grows beyond trivial. # We could keep the total count of rows ourselves, by means of triggers, # but that seems too complicated and would slow down normal operation # (insert/delete etc). 
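The comment above explains why `__len__` is slow: sqlite answers `SELECT COUNT(*)` with a full table scan, whereas `MAX(ROWID)` (which `__bool__` uses just below for the emptiness check) is read straight off the rowid b-tree. A stand-alone stdlib sketch of the two queries — the table layout mirrors the one sqlitedict creates, the data is illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE "unnamed" (key TEXT PRIMARY KEY, value BLOB)')

# Empty table: MAX(ROWID) is NULL, i.e. None in Python.
assert conn.execute('SELECT MAX(ROWID) FROM "unnamed"').fetchone()[0] is None

conn.executemany(
    'REPLACE INTO "unnamed" (key, value) VALUES (?, ?)',
    [(str(i), b'x') for i in range(3)],
)

# len()-style query: sqlite walks the whole table to produce the count.
count = conn.execute('SELECT COUNT(*) FROM "unnamed"').fetchone()[0]

# bool()-style query: MAX(ROWID) is looked up in the rowid b-tree and
# stays cheap regardless of table size; it is None if and only if empty.
max_rowid = conn.execute('SELECT MAX(ROWID) FROM "unnamed"').fetchone()[0]

print(count, max_rowid is not None)  # → 3 True
```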
GET_LEN = 'SELECT COUNT(*) FROM "%s"' % self.tablename rows = self.conn.select_one(GET_LEN)[0] return rows if rows is not None else 0 def __bool__(self): # No elements is False, otherwise True GET_MAX = 'SELECT MAX(ROWID) FROM "%s"' % self.tablename m = self.conn.select_one(GET_MAX)[0] # Explicit better than implicit and bla bla return True if m is not None else False def iterkeys(self): GET_KEYS = 'SELECT key FROM "%s" ORDER BY rowid' % self.tablename for key in self.conn.select(GET_KEYS): yield self.decode_key(key[0]) def itervalues(self): GET_VALUES = 'SELECT value FROM "%s" ORDER BY rowid' % self.tablename for value in self.conn.select(GET_VALUES): yield self.decode(value[0]) def iteritems(self): GET_ITEMS = 'SELECT key, value FROM "%s" ORDER BY rowid' % self.tablename for key, value in self.conn.select(GET_ITEMS): yield self.decode_key(key), self.decode(value) def keys(self): return self.iterkeys() def values(self): return self.itervalues() def items(self): return self.iteritems() def __contains__(self, key): HAS_ITEM = 'SELECT 1 FROM "%s" WHERE key = ?' % self.tablename return self.conn.select_one(HAS_ITEM, (self.encode_key(key),)) is not None def __getitem__(self, key): GET_ITEM = 'SELECT value FROM "%s" WHERE key = ?' % self.tablename item = self.conn.select_one(GET_ITEM, (self.encode_key(key),)) if item is None: raise KeyError(key) return self.decode(item[0]) def __setitem__(self, key, value): if self.flag == 'r': raise RuntimeError('Refusing to write to read-only SqliteDict') ADD_ITEM = 'REPLACE INTO "%s" (key, value) VALUES (?,?)' % self.tablename self.conn.execute(ADD_ITEM, (self.encode_key(key), self.encode(value))) if self.autocommit: self.commit() def __delitem__(self, key): if self.flag == 'r': raise RuntimeError('Refusing to delete from read-only SqliteDict') if key not in self: raise KeyError(key) DEL_ITEM = 'DELETE FROM "%s" WHERE key = ?' 
% self.tablename self.conn.execute(DEL_ITEM, (self.encode_key(key),)) if self.autocommit: self.commit() def update(self, items=(), **kwds): if self.flag == 'r': raise RuntimeError('Refusing to update read-only SqliteDict') try: items = items.items() except AttributeError: pass items = [(self.encode_key(k), self.encode(v)) for k, v in items] UPDATE_ITEMS = 'REPLACE INTO "%s" (key, value) VALUES (?, ?)' % self.tablename self.conn.executemany(UPDATE_ITEMS, items) if kwds: self.update(kwds) if self.autocommit: self.commit() def __iter__(self): return self.iterkeys() def clear(self): if self.flag == 'r': raise RuntimeError('Refusing to clear read-only SqliteDict') # avoid VACUUM, as it gives "OperationalError: database schema has changed" CLEAR_ALL = 'DELETE FROM "%s";' % self.tablename self.conn.commit() self.conn.execute(CLEAR_ALL) self.conn.commit() @staticmethod def get_tablenames(filename): """get the names of the tables in an sqlite db as a list""" if not os.path.isfile(filename): raise IOError('file %s does not exist' % (filename)) GET_TABLENAMES = 'SELECT name FROM sqlite_master WHERE type="table"' with sqlite3.connect(filename) as conn: cursor = conn.execute(GET_TABLENAMES) res = cursor.fetchall() return [name[0] for name in res] def commit(self, blocking=True): """ Persist all data to disk. When `blocking` is False, the commit command is queued, but the data is not guaranteed persisted (default implication when autocommit=True). """ if self.conn is not None: self.conn.commit(blocking) sync = commit def close(self, do_log=True, force=False): if do_log: logger.debug("closing %s" % self) if hasattr(self, 'conn') and self.conn is not None: if self.conn.autocommit and not force: # typically calls to commit are non-blocking when autocommit is # used. However, we need to block on close() to ensure any # awaiting exceptions are handled and that all data is # persisted to disk before returning. 
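The blocking behaviour described in the comment above amounts to a queue handshake: the caller enqueues a marker and waits for the worker's acknowledgement, which proves that everything enqueued before the marker has been processed. A minimal stdlib sketch of that handshake — the `--close--`/`--no more--` strings echo the magic commands used in this module, but the rest is illustrative, not sqlitedict's API:

```python
import threading
from queue import Queue

requests = Queue()
processed = []

def worker():
    while True:
        req, reply = requests.get()
        if req == '--close--':
            reply.put('--no more--')  # acknowledge, then stop
            break
        processed.append(req)  # stand-in for executing SQL

t = threading.Thread(target=worker, daemon=True)
t.start()

# Fire-and-forget, like a non-blocking commit.
requests.put(('REPLACE INTO ...', None))

# Blocking close: enqueue the marker, then wait for the acknowledgement.
# FIFO ordering guarantees the earlier request was handled first.
reply = Queue()
requests.put(('--close--', reply))
assert reply.get() == '--no more--'
t.join()
print(processed)  # → ['REPLACE INTO ...']
```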
self.conn.commit(blocking=True) self.conn.close(force=force) self.conn = None if self.in_temp: try: os.remove(self.filename) except Exception: pass def terminate(self): """Delete the underlying database file. Use with care.""" if self.flag == 'r': raise RuntimeError('Refusing to terminate read-only SqliteDict') self.close() if self.filename == ':memory:': return logger.info("deleting %s" % self.filename) try: if os.path.isfile(self.filename): os.remove(self.filename) except (OSError, IOError): logger.exception("failed to delete %s" % (self.filename)) def __del__(self): # like close(), but assume globals are gone by now (do not log!) try: self.close(do_log=False, force=True) except Exception: # prevent error log flood in case of multiple SqliteDicts # closed after connection lost (exceptions are always ignored # in __del__ method. pass class SqliteMultithread(threading.Thread): """ Wrap sqlite connection in a way that allows concurrent requests from multiple threads. This is done by internally queueing the requests and processing them sequentially in a separate thread (in the same order they arrived). """ def __init__(self, filename, autocommit, journal_mode, outer_stack=True): super(SqliteMultithread, self).__init__() self.filename = filename self.autocommit = autocommit self.journal_mode = journal_mode # use request queue of unlimited size self.reqs = Queue() self.daemon = True self._outer_stack = outer_stack self.log = logging.getLogger('sqlitedict.SqliteMultithread') # # Parts of this object's state get accessed from different threads, so # we use synchronization to avoid race conditions. For example, # .exception gets set inside the new daemon thread that we spawned, but # gets read from the main thread. This is particularly important # during initialization: the Thread needs some time to actually start # working, and until this happens, any calls to e.g. # check_raise_error() will prematurely return None, meaning all is # well. 
        # If that connection happens to fail, we'll never know about
        # it, and instead wait for a result that never arrives (effectively,
        # deadlocking). Locking solves this problem by eliminating the race
        # condition.
        #
        self._lock = threading.Lock()
        self._lock.acquire()
        self.exception = None

        self.start()

    def _connect(self):
        """Connect to the underlying database.

        Raises an exception on failure.  Returns the connection and cursor on success.
        """
        try:
            if self.autocommit:
                conn = sqlite3.connect(self.filename, isolation_level=None, check_same_thread=False)
            else:
                conn = sqlite3.connect(self.filename, check_same_thread=False)
        except Exception:
            self.log.exception("Failed to initialize connection for filename: %s" % self.filename)
            self.exception = sys.exc_info()
            raise

        try:
            conn.execute('PRAGMA journal_mode = %s' % self.journal_mode)
            conn.text_factory = str
            cursor = conn.cursor()
            conn.commit()
            cursor.execute('PRAGMA synchronous=OFF')
        except Exception:
            self.log.exception("Failed to execute PRAGMA statements.")
            self.exception = sys.exc_info()
            raise

        return conn, cursor

    def run(self):
        #
        # Nb. this is what actually runs inside the new daemon thread.
        # self._lock is locked at this stage - see the initializer function.
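The race described in the comments above reduces to a small pattern: the spawning thread acquires a lock that only the worker releases once it is actually running, so no caller can observe `.exception` before the worker has had a chance to set it. A self-contained sketch of that handoff — the class name, error message and method bodies are illustrative, not the real `SqliteMultithread`:

```python
import sys
import threading

class Worker(threading.Thread):
    def __init__(self):
        super().__init__(daemon=True)
        self.exception = None
        self._lock = threading.Lock()
        self._lock.acquire()  # held until run() is really executing
        self.start()

    def run(self):
        self._lock.release()  # from here on, readers may proceed
        try:
            raise ValueError('boom in worker')  # stand-in for a failed connect
        except Exception:
            with self._lock:
                self.exception = sys.exc_info()

    def check_raise_error(self):
        # Blocks until run() has started, then re-raises any stored error
        # exactly once, in the calling thread.
        with self._lock:
            if self.exception:
                e_type, e_value, e_tb = self.exception
                self.exception = None
                raise e_value.with_traceback(e_tb)

w = Worker()
w.join()
try:
    w.check_raise_error()
except ValueError as e:
    print('re-raised in caller:', e)
```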
        #
        try:
            conn, cursor = self._connect()
        finally:
            self._lock.release()

        res_ref = None

        while True:
            #
            # req: an SQL command or one of the --magic-- commands we use internally
            # arg: arguments for the command
            # res_ref: a weak reference to the queue into which responses must be placed
            # outer_stack: the outer stack, for producing more informative traces in case of error
            #
            req, arg, res_ref, outer_stack = self.reqs.get()

            if req == _REQUEST_CLOSE:
                assert res_ref, ('--close-- without return queue', res_ref)
                break
            elif req == _REQUEST_COMMIT:
                conn.commit()
                _put(res_ref, _RESPONSE_NO_MORE)
            else:
                try:
                    cursor.execute(req, arg)
                except Exception:
                    with self._lock:
                        self.exception = (e_type, e_value, e_tb) = sys.exc_info()
                    inner_stack = traceback.extract_stack()

                    # An exception occurred in our thread, but we may not be
                    # immediately able to throw it in our calling thread, if it has
                    # no return `res` queue: log as level ERROR both the inner and
                    # outer exception immediately.
                    #
                    # Any iteration of res.get() or any next call will detect the
                    # inner exception and re-raise it in the calling Thread; though
                    # it may be confusing to see an exception for an unrelated
                    # statement, an ERROR log statement from the 'sqlitedict.*'
                    # namespace contains the original outer stack location.
                    self.log.error('Inner exception:')
                    for item in traceback.format_list(inner_stack):
                        self.log.error(item)
                    self.log.error('')  # delineate traceback & exception w/blank line
                    for item in traceback.format_exception_only(e_type, e_value):
                        self.log.error(item)

                    self.log.error('')  # exception & outer stack w/blank line
                    if self._outer_stack:
                        self.log.error('Outer stack:')
                        for item in traceback.format_list(outer_stack):
                            self.log.error(item)
                        self.log.error('Exception will be re-raised at next call.')
                    else:
                        self.log.error(
                            'Unable to show the outer stack. Pass '
                            'outer_stack=True when initializing the '
                            'SqliteDict instance to show the outer stack.'
) if res_ref: for rec in cursor: if _put(res_ref, rec) == _PUT_REFERENT_DESTROYED: # # The queue we are sending responses to got garbage # collected. Nobody is listening anymore, so we # stop sending responses. # break _put(res_ref, _RESPONSE_NO_MORE) if self.autocommit: conn.commit() self.log.debug('received: %s, send: --no more--', req) conn.close() _put(res_ref, _RESPONSE_NO_MORE) def check_raise_error(self): """ Check for and raise exception for any previous sqlite query. For the `execute*` family of method calls, such calls are non-blocking and any exception raised in the thread cannot be handled by the calling Thread (usually MainThread). This method is called on `close`, and prior to any subsequent calls to the `execute*` methods to check for and raise an exception in a previous call to the MainThread. """ with self._lock: if self.exception: e_type, e_value, e_tb = self.exception # clear self.exception, if the caller decides to handle such # exception, we should not repeatedly re-raise it. self.exception = None self.log.error('An exception occurred from a previous statement, view ' 'the logging namespace "sqlitedict" for outer stack.') # The third argument to raise is the traceback object, and it is # substituted instead of the current location as the place where # the exception occurred, this is so that when using debuggers such # as `pdb', or simply evaluating the naturally raised traceback, we # retain the original (inner) location of where the exception # occurred. reraise(e_type, e_value, e_tb) def execute(self, req, arg=None, res=None): """ `execute` calls are non-blocking: just queue up the request and return immediately. :param req: The request (an SQL command) :param arg: Arguments to the SQL command :param res: A queue in which to place responses as they become available """ self.check_raise_error() stack = None if self._outer_stack: # NOTE: This might be a lot of information to pump into an input # queue, affecting performance. 
I've also seen earlier versions of # jython take a severe performance impact for throwing exceptions # so often. stack = traceback.extract_stack()[:-1] # # We pass a weak reference to the response queue instead of a regular # reference, because we want the queues to be garbage-collected # more aggressively. # res_ref = None if res: res_ref = weakref.ref(res) self.reqs.put((req, arg or tuple(), res_ref, stack)) def executemany(self, req, items): for item in items: self.execute(req, item) self.check_raise_error() def select(self, req, arg=None): """ Unlike sqlite's native select, this select doesn't handle iteration efficiently. The result of `select` starts filling up with values as soon as the request is dequeued, and although you can iterate over the result normally (`for res in self.select(): ...`), the entire result will be in memory. """ res = Queue() # results of the select will appear as items in this queue self.execute(req, arg, res) while True: rec = res.get() self.check_raise_error() if rec == _RESPONSE_NO_MORE: break yield rec def select_one(self, req, arg=None): """Return only the first row of the SELECT, or None if there are no matching rows.""" try: return next(iter(self.select(req, arg))) except StopIteration: return None def commit(self, blocking=True): if blocking: # by default, we await completion of commit() unless # blocking=False. This ensures any available exceptions for any # previous statement are thrown before returning, and that the # data has actually persisted to disk! self.select_one(_REQUEST_COMMIT) else: # otherwise, we fire and forget as usual. self.execute(_REQUEST_COMMIT) def close(self, force=False): if force: # If a SqliteDict is being killed or garbage-collected, then select_one() # could hang forever because run() might already have exited and therefore # can't process the request. Instead, push the close command to the requests # queue directly. If run() is still alive, it will exit gracefully. 
If not, # then there's nothing we can do anyway. self.reqs.put((_REQUEST_CLOSE, None, weakref.ref(Queue()), None)) else: # we abuse 'select' to "iter" over a "--close--" statement so that we # can confirm the completion of close before joining the thread and # returning (by semaphore '--no more--' self.select_one(_REQUEST_CLOSE) self.join() # # This is here for .github/workflows/release.yml # if __name__ == '__main__': print(__version__) sqlitedict-2.1.0/tests/000077500000000000000000000000001434265045000150245ustar00rootroot00000000000000sqlitedict-2.1.0/tests/accessories.py000066400000000000000000000004561434265045000177060ustar00rootroot00000000000000"""Accessories for test cases.""" import os def norm_file(fname): """Normalize test filename, creating a directory path to it if necessary""" fname = os.path.abspath(fname) dirname = os.path.dirname(fname) if not os.path.exists(dirname): os.makedirs(dirname) return fname sqlitedict-2.1.0/tests/autocommit.py000066400000000000000000000002001434265045000175470ustar00rootroot00000000000000import sqlitedict d = sqlitedict.SqliteDict('tests/db/autocommit.sqlite', autocommit=True) for i in range(1000): d[i] = i sqlitedict-2.1.0/tests/test_autocommit.py000066400000000000000000000007161434265045000206220ustar00rootroot00000000000000import os import sys import sqlitedict def test(): "Verify autocommit just before program exits." assert os.system('env PYTHONPATH=. %s tests/autocommit.py' % sys.executable) == 0 # The above script relies on the autocommit feature working correctly. # Now, let's check if it actually worked. 
d = sqlitedict.SqliteDict('tests/db/autocommit.sqlite') for i in range(1000): assert d[i] == i, "actual: %s expected: %s" % (d[i], i) sqlitedict-2.1.0/tests/test_core.py000066400000000000000000000272171434265045000173760ustar00rootroot00000000000000# std imports import json import unittest import tempfile import os from unittest.mock import patch # local import sqlitedict from sqlitedict import SqliteDict from test_temp_db import TempSqliteDictTest from accessories import norm_file class SqliteMiscTest(unittest.TestCase): def test_with_statement(self): """Verify using sqlitedict as a contextmanager . """ with SqliteDict() as d: self.assertTrue(isinstance(d, SqliteDict)) self.assertEqual(dict(d), {}) self.assertEqual(list(d), []) self.assertEqual(len(d), 0) def test_reopen_conn(self): """Verify using a contextmanager that a connection can be reopened.""" fname = norm_file('tests/db/sqlitedict-override-test.sqlite') db = SqliteDict(filename=fname) with db: db['key'] = 'value' db.commit() with db: db['key'] = 'value' db.commit() def test_as_str(self): """Verify SqliteDict.__str__().""" # given, db = SqliteDict() # exercise db.__str__() # test when db closed db.close() db.__str__() def test_as_repr(self): """Verify SqliteDict.__repr__().""" # given, db = SqliteDict() # exercise db.__repr__() def test_directory_notfound(self): """Verify RuntimeError: directory does not exist.""" # given: a non-existent directory, folder = tempfile.mkdtemp(prefix='sqlitedict-test') os.rmdir(folder) # exercise, with self.assertRaises(RuntimeError): SqliteDict(filename=os.path.join(folder, 'nonexistent')) def test_commit_nonblocking(self): """Coverage for non-blocking commit.""" # given, with SqliteDict(autocommit=True) as d: # exercise: the implicit commit is nonblocking d['key'] = 'value' d.commit(blocking=False) def test_cancel_iterate(self): import time class EndlessKeysIterator: def __init__(self) -> None: self.value = 0 def __iter__(self): return self def __next__(self): self.value 
+= 1 return [self.value] with patch('sqlitedict.sqlite3') as mock_sqlite3: ki = EndlessKeysIterator() cursor = mock_sqlite3.connect().cursor() cursor.__iter__.return_value = ki with SqliteDict(autocommit=True) as d: for i, k in enumerate(d.keys()): assert i + 1 == k if k > 100: break assert ki.value > 101 # Release GIL, let background threads run. # Don't use gc.collect because this is simulate user code. time.sleep(0.01) current = ki.value time.sleep(1) assert current == ki.value, 'Will not read more after iterate stop' class NamedSqliteDictCreateOrReuseTest(TempSqliteDictTest): """Verify default flag='c', and flag='n' of SqliteDict().""" def test_default_reuse_existing_flag_c(self): """Re-opening of a database does not destroy it.""" # given, fname = norm_file('tests/db/sqlitedict-override-test.sqlite') orig_db = SqliteDict(filename=fname) orig_db['key'] = 'value' orig_db.commit() orig_db.close() next_db = SqliteDict(filename=fname) self.assertIn('key', next_db.keys()) self.assertEqual(next_db['key'], 'value') def test_overwrite_using_flag_n(self): """Re-opening of a database with flag='c' destroys it all.""" # given, fname = norm_file('tests/db/sqlitedict-override-test.sqlite') orig_db = SqliteDict(filename=fname, tablename='sometable') orig_db['key'] = 'value' orig_db.commit() orig_db.close() # verify, next_db = SqliteDict(filename=fname, tablename='sometable', flag='n') self.assertNotIn('key', next_db.keys()) def test_unrecognized_flag(self): def build_with_bad_flag(): fname = norm_file('tests/db/sqlitedict-override-test.sqlite') SqliteDict(filename=fname, flag='FOO') with self.assertRaises(RuntimeError): build_with_bad_flag() def test_readonly(self): fname = norm_file('tests/db/sqlitedict-override-test.sqlite') orig_db = SqliteDict(filename=fname) orig_db['key'] = 'value' orig_db['key_two'] = 2 orig_db.commit() orig_db.close() readonly_db = SqliteDict(filename=fname, flag='r') self.assertTrue(readonly_db['key'] == 'value') 
self.assertTrue(readonly_db['key_two'] == 2) def attempt_write(): readonly_db['key'] = ['new_value'] def attempt_update(): readonly_db.update(key='value2', key_two=2.1) def attempt_delete(): del readonly_db['key'] def attempt_clear(): readonly_db.clear() def attempt_terminate(): readonly_db.terminate() attempt_funcs = [attempt_write, attempt_update, attempt_delete, attempt_clear, attempt_terminate] for func in attempt_funcs: with self.assertRaises(RuntimeError): func() def test_readonly_table(self): """ Read-only access on a non-existent tablename should raise RuntimeError, and not create a new (empty) table. """ fname = norm_file('tests/db/sqlitedict-override-test.sqlite') dummy_tablename = 'table404' orig_db = SqliteDict(filename=fname) orig_db['key'] = 'value' orig_db['key_two'] = 2 orig_db.commit() orig_db.close() self.assertFalse(dummy_tablename in SqliteDict.get_tablenames(fname)) with self.assertRaises(RuntimeError): SqliteDict(filename=fname, tablename=dummy_tablename, flag='r') self.assertFalse(dummy_tablename in SqliteDict.get_tablenames(fname)) def test_irregular_tablenames(self): """Irregular table names need to be quoted""" def __test_irregular_tablenames(tablename): filename = ':memory:' db = SqliteDict(filename, tablename=tablename) db['key'] = 'value' db.commit() self.assertEqual(db['key'], 'value') db.close() __test_irregular_tablenames('9nine') __test_irregular_tablenames('outer space') __test_irregular_tablenames('table with a "quoted" name') __test_irregular_tablenames("table with a \"quoted \xe1cute\" name") def test_overwrite_using_flag_w(self): """Re-opening of a database with flag='w' destroys only the target table.""" # given, fname = norm_file('tests/db/sqlitedict-override-test.sqlite') orig_db_1 = SqliteDict(filename=fname, tablename='one') orig_db_1['key'] = 'value' orig_db_1.commit() orig_db_1.close() orig_db_2 = SqliteDict(filename=fname, tablename='two') orig_db_2['key'] = 'value' orig_db_2.commit() orig_db_2.close() # verify, when 
        # re-opening table space 'one' with flag='w', we destroy
        # its contents. However, when re-opening table space 'two' with the
        # default flag='c', its contents remain.
        next_db_1 = SqliteDict(filename=fname, tablename='one', flag='w')
        self.assertNotIn('key', next_db_1.keys())

        next_db_2 = SqliteDict(filename=fname, tablename='two')
        self.assertIn('key', next_db_2.keys())


class SqliteDictTerminateTest(unittest.TestCase):

    def test_terminate_instead_close(self):
        ''' Call terminate() instead of close(). '''
        d = sqlitedict.open('tests/db/sqlitedict-terminate.sqlite')
        d['abc'] = 'def'
        d.commit()
        self.assertEqual(d['abc'], 'def')
        d.terminate()
        self.assertFalse(os.path.isfile('tests/db/sqlitedict-terminate.sqlite'))


class SqliteDictTerminateFailTest(unittest.TestCase):
    """Provide coverage for SqliteDict.terminate()."""

    def setUp(self):
        self.fname = norm_file('tests/db-permdenied/sqlitedict.sqlite')
        self.db = SqliteDict(filename=self.fname)
        os.chmod(self.fname, 0o000)
        os.chmod(os.path.dirname(self.fname), 0o000)

    def tearDown(self):
        os.chmod(os.path.dirname(self.fname), 0o700)
        os.chmod(self.fname, 0o600)
        os.unlink(self.fname)
        os.rmdir(os.path.dirname(self.fname))

    def test_terminate_cannot_delete(self):
        # exercise,
        self.db.terminate()  # deletion failed, but no exception raised!
        # verify,
        os.chmod(os.path.dirname(self.fname), 0o700)
        os.chmod(self.fname, 0o600)
        self.assertTrue(os.path.exists(self.fname))


class SqliteDictJsonSerializationTest(unittest.TestCase):
    def setUp(self):
        self.fname = norm_file('tests/db-json/sqlitedict.sqlite')
        self.db = SqliteDict(
            filename=self.fname, tablename='test',
            encode=json.dumps, decode=json.loads
        )

    def tearDown(self):
        self.db.close()
        os.unlink(self.fname)
        os.rmdir(os.path.dirname(self.fname))

    def get_json(self, key):
        return self.db.conn.select_one('SELECT value FROM test WHERE key = ?', (self.db.encode_key(key),))[0]

    def test_int(self):
        self.db['test'] = -42
        assert self.db['test'] == -42
        assert self.get_json('test') == '-42'

    def test_str(self):
        test_str = u'Test \u30c6\u30b9\u30c8'
        self.db['test'] = test_str
        assert self.db['test'] == test_str
        assert self.get_json('test') == r'"Test \u30c6\u30b9\u30c8"'

    def test_bool(self):
        self.db['test'] = False
        assert self.db['test'] is False
        assert self.get_json('test') == 'false'

    def test_none(self):
        self.db['test'] = None
        assert self.db['test'] is None
        assert self.get_json('test') == 'null'

    def test_complex_struct(self):
        test_value = {
            'version': 2.5,
            'items': ['one', 'two'],
        }
        self.db['test'] = test_value
        assert self.db['test'] == test_value
        assert self.get_json('test') == json.dumps(test_value)


class TablenamesTest(unittest.TestCase):

    def tearDown(self):
        for f in ('tablenames-test-1.sqlite', 'tablenames-test-2.sqlite'):
            path = norm_file(os.path.join('tests/db', f))
            if os.path.isfile(path):
                os.unlink(path)

    def test_tablenames_unnamed(self):
        fname = norm_file('tests/db/tablenames-test-1.sqlite')
        SqliteDict(fname)
        self.assertEqual(SqliteDict.get_tablenames(fname), ['unnamed'])

    def test_tablenames_named(self):
        fname = norm_file('tests/db/tablenames-test-2.sqlite')
        with SqliteDict(fname, tablename='table1'):
            self.assertEqual(SqliteDict.get_tablenames(fname), ['table1'])
        with SqliteDict(fname, tablename='table2'):
            self.assertEqual(SqliteDict.get_tablenames(fname),
['table1', 'table2']) tablenames = SqliteDict.get_tablenames('tests/db/tablenames-test-2.sqlite') self.assertEqual(tablenames, ['table1', 'table2']) class SqliteDictKeySerializationTest(unittest.TestCase): def setUp(self): self.fname = norm_file('tests/db-encode-key/sqlitedict.sqlite') self.db = SqliteDict( filename=self.fname, tablename='test', encode_key=sqlitedict.encode_key, decode_key=sqlitedict.decode_key, ) def test_nonstr_keys(self): self.db['test'] = -42 assert self.db['test'] == -42 self.db[(0, 1, 2)] = 17 assert self.db[(0, 1, 2)] == 17 sqlitedict-2.1.0/tests/test_named_db.py000066400000000000000000000023531434265045000201710ustar00rootroot00000000000000import sqlitedict from test_temp_db import TempSqliteDictTest from accessories import norm_file class InMemorySqliteDictTest(TempSqliteDictTest): def setUp(self): self.d = sqlitedict.SqliteDict(filename=':memory:', autocommit=True) def tearDown(self): self.d.terminate() class NamedSqliteDictTest(TempSqliteDictTest): def setUp(self): db = norm_file('tests/db/sqlitedict-with-def.sqlite') self.d = sqlitedict.SqliteDict(filename=db) class CreateNewSqliteDictTest(TempSqliteDictTest): def setUp(self): db = norm_file('tests/db/sqlitedict-with-n-flag.sqlite') self.d = sqlitedict.SqliteDict(filename=db, flag="n") def tearDown(self): self.d.terminate() class StartsWithEmptySqliteDictTest(TempSqliteDictTest): def setUp(self): db = norm_file('tests/db/sqlitedict-with-w-flag.sqlite') self.d = sqlitedict.SqliteDict(filename=db, flag="w") def tearDown(self): self.d.terminate() class SqliteDictAutocommitTest(TempSqliteDictTest): def setUp(self): db = norm_file('tests/db/sqlitedict-autocommit.sqlite') self.d = sqlitedict.SqliteDict(filename=db, autocommit=True) def tearDown(self): self.d.terminate() sqlitedict-2.1.0/tests/test_onimport.py000066400000000000000000000025521434265045000203100ustar00rootroot00000000000000"""Test cases for on-import logic.""" import unittest import sys class 
SqliteDict_cPickleImportTest(unittest.TestCase): """Verify fallback to 'pickle' module when 'cPickle' is not found.""" def setUp(self): self.orig_meta_path = sys.meta_path self.orig_sqlitedict = sys.modules.pop('sqlitedict', None) class FauxMissingImport(object): def __init__(self, *args): self.module_names = args def find_module(self, fullname, path=None): if fullname in self.module_names: return self return None def load_module(self, name): raise ImportError("No module named %s (FauxMissingImport)" % (name,)) # ensure cPickle/pickle is not cached sys.modules.pop('cPickle', None) sys.modules.pop('pickle', None) # add our custom importer sys.meta_path.insert(0, FauxMissingImport('cPickle')) def tearDown(self): sys.meta_path = self.orig_meta_path if self.orig_sqlitedict: sys.modules['sqlitedict'] = self.orig_sqlitedict def test_cpickle_fallback_to_pickle(self): # exercise, sqlitedict = __import__("sqlitedict") # verify, self.assertIn('pickle', sys.modules.keys()) self.assertIs(sqlitedict.dumps, sys.modules['pickle'].dumps) sqlitedict-2.1.0/tests/test_temp_db.py000066400000000000000000000055261434265045000200570ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # # This code is distributed under the terms and conditions # from the Apache License, Version 2.0 import unittest import sqlitedict from sys import version_info major_version = version_info[0] class TempSqliteDictTest(unittest.TestCase): def setUp(self): self.d = sqlitedict.SqliteDict() def tearDown(self): self.d.close() def test_create_sqlitedict(self): ''' test_create_sqlitedict ''' self.assertIsInstance(self.d, sqlitedict.SqliteDict) self.assertEqual(dict(self.d), {}) self.assertEqual(list(self.d), []) self.assertEqual(len(self.d), 0) def test_assign_values(self): ''' test_assign_values ''' self.d['abc'] = 'edf' self.assertEqual(self.d['abc'], 'edf') self.assertEqual(len(self.d), 1) def test_clear_data(self): ''' test_clear_data ''' self.d.update(a=1, b=2, c=3) 
        self.assertEqual(len(self.d), 3)
        self.d.clear()
        self.assertEqual(len(self.d), 0)

    def test_manage_one_record(self):
        ''' test_manage_one_record '''
        self.d['abc'] = 'rsvp' * 100
        self.assertEqual(self.d['abc'], 'rsvp' * 100)
        self.d['abc'] = 'lmno'
        self.assertEqual(self.d['abc'], 'lmno')
        self.assertEqual(len(self.d), 1)
        del self.d['abc']
        self.assertEqual(len(self.d), 0)
        self.assertTrue(not self.d)

    def test_manage_few_records(self):
        ''' test_manage_few_records '''
        self.d['abc'] = 'lmno'
        self.d['xyz'] = 'pdq'
        self.assertEqual(len(self.d), 2)
        if major_version == 2:
            self.assertEqual(list(self.d.iteritems()),
                             [('abc', 'lmno'), ('xyz', 'pdq')])
        self.assertEqual(list(self.d.items()),
                         [('abc', 'lmno'), ('xyz', 'pdq')])
        self.assertEqual(list(self.d.values()), ['lmno', 'pdq'])
        self.assertEqual(list(self.d.keys()), ['abc', 'xyz'])
        self.assertEqual(list(self.d), ['abc', 'xyz'])

    def test_update_records(self):
        ''' test_update_records '''
        self.d.update([('v', 'w')], p='x', q='y', r='z')
        self.assertEqual(len(self.d), 4)
        # As far as I know, dicts do not need to return their elements in any
        # specified order, hence the sort() before comparing.
        self.assertEqual(sorted(self.d.items()),
                         sorted([('q', 'y'), ('p', 'x'), ('r', 'z'), ('v', 'w')]))
        self.assertEqual(sorted(list(self.d)), sorted(['q', 'p', 'r', 'v']))

    def test_handling_errors(self):
        ''' test_handling_errors '''
        def get_value(d, k):
            return d[k]

        def remove_nonexists(d, k):
            del d[k]

        with self.assertRaises(KeyError):
            remove_nonexists(self.d, 'abc')
        with self.assertRaises(KeyError):
            get_value(self.d, 'abc')

sqlitedict-2.1.0/tox.ini
[flake8]
ignore = E12, W503
max-line-length = 120
show-source = True
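The optional key codec exercised by `test_nonstr_keys` above (`sqlitedict.encode_key`/`decode_key`) is pickle wrapped in base64, which is how arbitrary picklable keys survive sqlite's `TEXT` key column. The stand-alone equivalent of that round trip:

```python
from base64 import b64decode, b64encode
from pickle import HIGHEST_PROTOCOL, dumps, loads

def encode_key(key):
    # Pickle the key, then base64-encode so the result is plain ASCII text,
    # safe to store in sqlite's TEXT primary-key column.
    return b64encode(dumps(key, protocol=HIGHEST_PROTOCOL)).decode('ascii')

def decode_key(text):
    return loads(b64decode(text.encode('ascii')))

for key in ('plain string', (0, 1, 2), frozenset({1, 2})):
    stored = encode_key(key)
    assert isinstance(stored, str)
    assert decode_key(stored) == key
print('round trip ok')
```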