aiocache-0.12.2/.coveragerc

[run]
concurrency = multiprocessing

aiocache-0.12.2/CHANGES.rst

=======
CHANGES
=======

.. towncrier release notes start

0.12.2 (2023-08-06)
===================

* Fixed an error when the ``pairs`` argument to ``.multi_set()`` doesn't support ``len()``.

0.12.1 (2023-04-23)
===================

* Added ``skip_cache_func`` to dynamically skip caching certain results.
* Removed typing support due to producing unresolvable errors (until v1.0).
* Stopped installing ``tests`` as part of the package.

0.12.0 (2023-01-13)
===================

* Added ``async with`` support to ``BaseCache``.
* Added initial typing support.
* Migrated to ``redis`` library (``aioredis`` is no longer supported).
* ``SimpleMemoryBackend`` now has a cache per instance, rather than a global cache.
* Improved support for ``build_key(key, namespace)`` [#569](https://github.com/aio-libs/aiocache/issues/569) -- Padraic Shafer
* Removed deprecated ``loop`` parameters.
* Removed deprecated ``cache`` parameter from ``create()``.
* Added support for keyword arguments in ``TimingPlugin`` methods.
* Fixed inconsistent enum keys between different Python versions. -- Padraic Shafer
* Fixed ``.clear()`` breaking when no keys are present.
* Fixed ``from aiocache import *``.
* Fixed ``.delete()`` when values are falsy.
0.11.1 (2019-07-31)
===================

* Don't hardcode import redis and memcached in factory [#461](https://github.com/argaen/aiocache/issues/461) - Manuel Miranda

0.11.0 (2019-07-31)
===================

* Support str for timeout and ttl [#454](https://github.com/argaen/aiocache/issues/454) - Manuel Miranda
* Add aiocache_wait_for_write decorator param [#448](https://github.com/argaen/aiocache/issues/448) - Manuel Miranda
* Extend and improve usage of Cache class [#446](https://github.com/argaen/aiocache/issues/446) - Manuel Miranda
* Add caches.add functionality [#440](https://github.com/argaen/aiocache/issues/440) - Manuel Miranda
* Use raw msgpack attribute for loads [#439](https://github.com/argaen/aiocache/issues/439) - Manuel Miranda
* Add docs regarding plugin timeouts and multicached [#438](https://github.com/argaen/aiocache/issues/438) - Manuel Miranda
* Fix typehints in lock.py [#434](https://github.com/argaen/aiocache/issues/434) - Aviv
* Use pytest_configure instead of pytest_namespace [#436](https://github.com/argaen/aiocache/issues/436) - Manuel Miranda
* Add Cache class factory [#430](https://github.com/argaen/aiocache/issues/430) - Manuel Miranda

0.10.1 (2018-11-15)
===================

* Cancel the previous ttl timer, if it exists, when setting a new value in the in-memory cache [#424](https://github.com/argaen/aiocache/issues/424) - Minh Tu Le
* Add Python 3.7 to CI, it's now supported! [#420](https://github.com/argaen/aiocache/issues/420) - Manuel Miranda
* Add function as parameter for key_builder [#417](https://github.com/argaen/aiocache/issues/417) - Manuel Miranda
* Always use __name__ when getting logger [#412](https://github.com/argaen/aiocache/issues/412) - Mansur Mamkin
* Format code with black [#410](https://github.com/argaen/aiocache/issues/410) - Manuel Miranda

0.10.0 (2018-06-17)
===================

* Cache can be disabled in decorated functions using `cache_read` and `cache_write` [#404](https://github.com/argaen/aiocache/issues/404) - Josep Cugat
* Cache constructor can now receive a default ttl [#405](https://github.com/argaen/aiocache/issues/405) - Josep Cugat

0.9.1 (2018-04-27)
==================

* Single deploy step [#395](https://github.com/argaen/aiocache/issues/395) - Manuel Miranda
* Catch ImportError when importing optional msgpack [#398](https://github.com/argaen/aiocache/issues/398) - Paweł Kowalski
* Lazy load redis asyncio.Lock [#397](https://github.com/argaen/aiocache/issues/397) - Jordi Soucheiron

0.9.0 (2018-04-24)
==================

* Bug #389/propagate redlock exceptions [#394](https://github.com/argaen/aiocache/issues/394) - Manuel Miranda
  ___aexit__ was returning whether the asyncio Event was removed or not. In some cases this was preventing the context manager from propagating exceptions raised inside. Now it does not return anything and will always raise any exception raised inside_
* Fix sphinx build [#392](https://github.com/argaen/aiocache/issues/392) - Manuel Miranda
  _Also add extra step in build pipeline to avoid future errors._
* Update alias config when config already exists [#383](https://github.com/argaen/aiocache/issues/383) - Josep Cugat
* Ensure serializers are instances [#379](https://github.com/argaen/aiocache/issues/379) - Manuel Miranda
* Add MsgPackSerializer [#370](https://github.com/argaen/aiocache/issues/370) - Adam Hopkins
* Add create_connection_timeout for redis>=1.0.0 when creating connections [#368](https://github.com/argaen/aiocache/issues/368) - tmarques82
* Fixed spelling error in serializers.py [#371](https://github.com/argaen/aiocache/issues/371) - Jared Shields

0.8.0 (2017-11-08)
==================

* Add pypy support in build pipeline [#359](https://github.com/argaen/aiocache/issues/359) - Manuel Miranda
* Fix multicached bug when using keys as an arg rather than kwarg [#356](https://github.com/argaen/aiocache/issues/356) - Manuel Miranda
* Reuse cache when using decorators with alias [#355](https://github.com/argaen/aiocache/issues/355) - Manuel Miranda
* Cache available from function.cache object for decorated functions [#354](https://github.com/argaen/aiocache/issues/354) - Manuel Miranda
* aioredis and aiomcache are now optional dependencies [#337](https://github.com/argaen/aiocache/issues/337) - Jair Henrique
* Generate wheel package on release [#338](https://github.com/argaen/aiocache/issues/338) - Jair Henrique
* Add key_builder param to caches to customize keys [#315](https://github.com/argaen/aiocache/issues/315) - Manuel Miranda

0.7.2 (2017-07-23)
==================

* Add key_builder param to caches to customize keys [#310](https://github.com/argaen/aiocache/issues/310) - Manuel Miranda
* Propagate correct message on memcached connector error [#309](https://github.com/argaen/aiocache/issues/309) - Manuel Miranda

0.7.1 (2017-07-15)
==================

* Remove explicit loop usages [#305](https://github.com/argaen/aiocache/issues/305) - Manuel Miranda
* Remove bad logging configuration [#304](https://github.com/argaen/aiocache/issues/304) - Manuel Miranda

0.7.0 (2017-07-01)
==================

* Upgrade to aioredis 0.3.3. - Manuel Miranda
* Get CMD now returns values that evaluate to False correctly [#282](https://github.com/argaen/aiocache/issues/282) - Manuel Miranda
* New locks public API exposed [#279](https://github.com/argaen/aiocache/issues/279) - Manuel Miranda
  _Users can now use aiocache.lock.RedLock and aiocache.lock.OptimisticLock_
* Memory now uses new NullSerializer [#273](https://github.com/argaen/aiocache/issues/273) - Manuel Miranda
  _Memory is a special case and doesn't need a serializer because anything can be stored in memory. Created a new NullSerializer that does nothing, which is the default that SimpleMemoryCache will use now._
* Multi_cached can use args for key_from_attr [#271](https://github.com/argaen/aiocache/issues/271) - Manuel Miranda
  _Before, only params defined in kwargs were working due to the behavior defined in the _get_args_dict function. This has now been fixed and it behaves as expected._
* Removed cached key_from_attr [#274](https://github.com/argaen/aiocache/issues/274) - Manuel Miranda
  _To reproduce the same behavior, use the new `key_builder` attr_
* Removed settings module. - Manuel Miranda

0.6.1 (2017-06-12)
==================

* Removed connection reusage for decorators [#267](https://github.com/argaen/aiocache/issues/267) - Manuel Miranda (thanks @dmzkrsk)
  _When the decorated function is costly, connections were being kept while idle. This is a bad scenario, and this reverts back to using a connection from the cache pool for every cache operation_
* Key_builder for cached [#265](https://github.com/argaen/aiocache/issues/265) - Manuel Miranda
  _Also fixed a bug with multi_cached where key_builder wasn't applied when saving the keys_
* Updated aioredis (0.3.1) and aiomcache (0.5.2) versions - Manuel Miranda

0.6.0 (2017-06-05)
==================

New
+++

* Cached supports stampede locking [#249](https://github.com/argaen/aiocache/issues/249) - Manuel Miranda
* Memory redlock implementation [#241](https://github.com/argaen/aiocache/issues/241) - Manuel Miranda
* Memcached redlock implementation [#240](https://github.com/argaen/aiocache/issues/240) - Manuel Miranda
* Redis redlock implementation [#235](https://github.com/argaen/aiocache/issues/235) - Manuel Miranda
* Add close function to clean up resources [#236](https://github.com/argaen/aiocache/issues/236) - Quinn Perfetto
  _Call `await cache.close()` to close a pool and its connections_
* `caches.create` works without alias [#253](https://github.com/argaen/aiocache/issues/253) - Manuel Miranda

Changes
+++++++

* Decorators use JsonSerializer by default now [#258](https://github.com/argaen/aiocache/issues/258) - Manuel Miranda
  _Also renamed DefaultSerializer to StringSerializer_
* Decorators use single connection [#257](https://github.com/argaen/aiocache/issues/257) - Manuel Miranda
  _Decorators (except cached_stampede) now use a single connection for each function call. This means the connection doesn't go back to the pool after each cache call. Since the cache instance is the same for a decorated function, this means that the pool size must be high if there is big expected concurrency for that given function_
* Change close to clear for redis [#239](https://github.com/argaen/aiocache/issues/239) - Manuel Miranda
  _clear will free connections but will allow the user to still use the cache if needed (same behavior for aiomcache and ofc memory)_

0.5.2
=====

* Reuse connection context manager [#225](https://github.com/argaen/aiocache/issues/225) [argaen]
* Add performance footprint tests [#228](https://github.com/argaen/aiocache/issues/228) [argaen]
* Timeout=0 takes precedence over self.timeout [#227](https://github.com/argaen/aiocache/issues/227) [argaen]
* Lock when acquiring redis connection [#224](https://github.com/argaen/aiocache/issues/224) [argaen]
* Added performance concurrency tests [#216](https://github.com/argaen/aiocache/issues/216) [argaen]

0.5.1
=====

* Deprecate settings module [#215](https://github.com/argaen/aiocache/issues/215) [argaen]
* Decorators support introspection [#213](https://github.com/argaen/aiocache/issues/213) [argaen]

0.5.0 (2017-04-29)
==================

* Removed pool reusage for redis. A new one is created for each instance [argaen]
* Soft dependencies for redis and memcached [#197](https://github.com/argaen/aiocache/issues/197) [argaen]
* Added incr CMD [#188](https://github.com/argaen/aiocache/issues/188) [Manuel Miranda]
* Create factory accepts cache args [#209](https://github.com/argaen/aiocache/issues/209) [argaen]
* Cached and multi_cached can use alias caches (creates new instance per call) [#205](https://github.com/argaen/aiocache/issues/205) [argaen]
* Method ``create`` to create new instances from alias [#204](https://github.com/argaen/aiocache/issues/204) [argaen]
* Remove unnecessary warning [#200](https://github.com/argaen/aiocache/issues/200) [Petr Timofeev]
* Add asyncio trove classifier [#199](https://github.com/argaen/aiocache/issues/199) [Thanos Lefteris]
* Pass pool_size to the underlying aiomcache [#189](https://github.com/argaen/aiocache/issues/189) [Aurélien Busi]
* Added marshmallow example [#181](https://github.com/argaen/aiocache/issues/181) [argaen]
* Added example for compression serializer [#179](https://github.com/argaen/aiocache/issues/179) [argaen]
* Added BasePlugin.add_hook helper [#173](https://github.com/argaen/aiocache/issues/173) [argaen]

Breaking
++++++++

* Refactored how settings and defaults work. Now aliases are the only way. [#193](https://github.com/argaen/aiocache/issues/193) [argaen]
* Consistency between backends and serializers. With SimpleMemoryCache, some data will change in how it's stored when using DefaultSerializer [#191](https://github.com/argaen/aiocache/issues/191) [argaen]

0.3.3 (2017-04-06)
==================

* Added CHANGELOG and release process [#172](https://github.com/argaen/aiocache/issues/172) [argaen]
* Added pool_min_size pool_max_size to redisbackend [#167](https://github.com/argaen/aiocache/issues/167) [argaen]
* Timeout per function. Propagate it correctly with defaults. [#166](https://github.com/argaen/aiocache/issues/166) [argaen]
* Added noself arg to cached decorator [#137](https://github.com/argaen/aiocache/issues/137) [argaen]
* Cache instance in decorators is built in every call [#135](https://github.com/argaen/aiocache/issues/135) [argaen]

0.3.1 (2017-02-13)
==================

* Changed add redis to use set with not existing flag [#119](https://github.com/argaen/aiocache/issues/119) [argaen]
* Memcached multi_set with ensure_future [#114](https://github.com/argaen/aiocache/issues/114) [argaen]

0.3.0 (2017-01-12)
==================

* Fixed asynctest issues for timeout tests [#109](https://github.com/argaen/aiocache/issues/109) [argaen]
* Created new API class [#108](https://github.com/argaen/aiocache/issues/108) [argaen]
* Set multicached keys only when non existing [#98](https://github.com/argaen/aiocache/issues/98) [argaen]
* Added expire command [#97](https://github.com/argaen/aiocache/issues/97) [argaen]
* Ttl tasks are cancelled for memory backend if key is deleted [#92](https://github.com/argaen/aiocache/issues/92) [argaen]
* Ignore caching if AIOCACHE_DISABLED is set to 1 [#90](https://github.com/argaen/aiocache/issues/90) [argaen]

aiocache-0.12.2/LICENSE

Copyright (c) 2016, Manuel Miranda de Cid
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

3.
Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

aiocache-0.12.2/MANIFEST.in

include CHANGES.rst
include LICENSE
include README.rst
include Makefile
include requirements.txt
include requirements-dev.txt
include setup.cfg
include .coveragerc
graft aiocache
graft docs
graft examples
graft tests
global-exclude *.pyc

aiocache-0.12.2/Makefile

cov-report = true

lint:
	flake8 tests/ aiocache/

install-dev:
	pip install -e .[redis,memcached,msgpack,dev]

pylint:
	pylint --disable=C0111 aiocache

unit:
	coverage run -m pytest tests/ut
	@if [ $(cov-report) = true ]; then\
		coverage combine;\
		coverage report;\
	fi

acceptance:
	pytest -sv tests/acceptance

doc:
	make -C docs/ html

functional:
	bash examples/run_all.sh

performance:
	pytest -sv tests/performance

test: lint unit acceptance functional

_release:
	scripts/make_release

release: test _release

changelog:
	gitchangelog

aiocache-0.12.2/PKG-INFO

Metadata-Version: 2.1
Name: aiocache
Version: 0.12.2
Summary: multi backend asyncio cache
Home-page: https://github.com/aio-libs/aiocache
Author: Manuel Miranda
Author-email: manu.mirandad@gmail.com
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Framework :: AsyncIO
Provides-Extra: redis
Provides-Extra: memcached
Provides-Extra: msgpack
License-File: LICENSE

aiocache
########

Asyncio cache supporting multiple backends (memory, redis and memcached).

.. image:: https://travis-ci.org/argaen/aiocache.svg?branch=master
   :target: https://travis-ci.org/argaen/aiocache

.. image:: https://codecov.io/gh/argaen/aiocache/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/argaen/aiocache

.. image:: https://badge.fury.io/py/aiocache.svg
   :target: https://pypi.python.org/pypi/aiocache

.. image:: https://img.shields.io/pypi/pyversions/aiocache.svg
   :target: https://pypi.python.org/pypi/aiocache

.. image:: https://api.codacy.com/project/badge/Grade/96f772e38e63489ca884dbaf6e9fb7fd
   :target: https://www.codacy.com/app/argaen/aiocache

.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/ambv/black

This library aims for simplicity over specialization. All caches contain the same minimum interface, which consists of the following functions:

- ``add``: Only adds key/value if key does not exist.
- ``get``: Retrieve value identified by key.
- ``set``: Sets key/value.
- ``multi_get``: Retrieves multiple key/values.
- ``multi_set``: Sets multiple key/values.
- ``exists``: Returns True if key exists, False otherwise.
- ``increment``: Increments the value stored in the given key.
- ``delete``: Deletes key and returns number of deleted items.
- ``clear``: Clears the items stored.
- ``raw``: Executes the specified command using the underlying client.

.. role:: python(code)
   :language: python

.. contents::

.. section-numbering:

Installing
==========

- ``pip install aiocache``
- ``pip install aiocache[redis]``
- ``pip install aiocache[memcached]``
- ``pip install aiocache[redis,memcached]``
- ``pip install aiocache[msgpack]``

Usage
=====

Using a cache is as simple as

.. code-block:: python

    >>> import asyncio
    >>> from aiocache import Cache
    >>> cache = Cache(Cache.MEMORY)  # Here you can also use Cache.REDIS and Cache.MEMCACHED, default is Cache.MEMORY
    >>> with asyncio.Runner() as runner:
    >>>     runner.run(cache.set('key', 'value'))
    True
    >>>     runner.run(cache.get('key'))
    'value'

Or as a decorator

.. code-block:: python

    import asyncio

    from collections import namedtuple

    from aiocache import cached, Cache
    from aiocache.serializers import PickleSerializer

    # With this we can store python objects in backends like Redis!
    Result = namedtuple('Result', "content, status")


    @cached(
        ttl=10, cache=Cache.REDIS, key="key", serializer=PickleSerializer(),
        port=6379, namespace="main")
    async def cached_call():
        print("Sleeping for three seconds zzzz.....")
        await asyncio.sleep(3)
        return Result("content", 200)


    async def run():
        await cached_call()
        await cached_call()
        await cached_call()
        cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main")
        await cache.delete("key")

    if __name__ == "__main__":
        asyncio.run(run())

The recommended approach to instantiate a new cache is using the `Cache` constructor. However, you can also instantiate directly using `aiocache.RedisCache`, `aiocache.SimpleMemoryCache` or `aiocache.MemcachedCache`.

You can also set up cache aliases so it's easy to reuse configurations

.. code-block:: python

    import asyncio

    from aiocache import caches

    # You can use either classes or strings for referencing classes
    caches.set_config({
        'default': {
            'cache': "aiocache.SimpleMemoryCache",
            'serializer': {
                'class': "aiocache.serializers.StringSerializer"
            }
        },
        'redis_alt': {
            'cache': "aiocache.RedisCache",
            'endpoint': "127.0.0.1",
            'port': 6379,
            'timeout': 1,
            'serializer': {
                'class': "aiocache.serializers.PickleSerializer"
            },
            'plugins': [
                {'class': "aiocache.plugins.HitMissRatioPlugin"},
                {'class': "aiocache.plugins.TimingPlugin"}
            ]
        }
    })


    async def default_cache():
        cache = caches.get('default')    # This always returns the SAME instance
        await cache.set("key", "value")
        assert await cache.get("key") == "value"


    async def alt_cache():
        cache = caches.create('redis_alt')    # This creates a NEW instance on every call
        await cache.set("key", "value")
        assert await cache.get("key") == "value"


    async def test_alias():
        await default_cache()
        await alt_cache()
        await caches.get("redis_alt").delete("key")

    if __name__ == "__main__":
        asyncio.run(test_alias())

How does it work
================

Aiocache provides 3 main entities:

- **backends**: Allow you to specify which backend you want to use for your cache. Currently supporting: SimpleMemoryCache, RedisCache using redis_ and MemCache using aiomcache_.
- **serializers**: Serialize and deserialize the data between your code and the backends. This allows you to save any Python object into your cache. Currently supporting: StringSerializer, PickleSerializer, JsonSerializer, and MsgPackSerializer. But you can also build custom ones.
- **plugins**: Implement a hooks system that allows executing extra behavior before and after each command.

If you are missing an implementation of a backend, serializer or plugin that you think could be interesting for the package, do not hesitate to open a new issue.

.. image:: docs/images/architecture.png
   :align: center

Those 3 entities combine during some of the cache operations to apply the desired command (backend), data transformation (serializer) and pre/post hooks (plugins). To have a better vision of what happens, here you can check how the ``set`` function works in ``aiocache``:

.. image:: docs/images/set_operation_flow.png
   :align: center

Amazing examples
================

In the `examples folder `_ you can check different use cases:

- `Sanic, Aiohttp and Tornado `_
- `Python object in Redis `_
- `Custom serializer for compressing data `_
- `TimingPlugin and HitMissRatioPlugin demos `_
- `Using marshmallow as a serializer `_
- `Using cached decorator `_.
- `Using multi_cached decorator `_.

Documentation
=============

- `Usage `_
- `Caches `_
- `Serializers `_
- `Plugins `_
- `Configuration `_
- `Decorators `_
- `Testing `_
- `Examples `_

.. _redis: https://github.com/redis/redis-py
.. _aiomcache: https://github.com/aio-libs/aiomcache

aiocache-0.12.2/README.rst

aiocache-0.12.2/aiocache/__init__.py

import logging
from typing import Dict, Type

from .backends.memory import SimpleMemoryCache
from .base import BaseCache

__version__ = "0.12.2"

logger = logging.getLogger(__name__)

AIOCACHE_CACHES: Dict[str, Type[BaseCache]] = {SimpleMemoryCache.NAME: SimpleMemoryCache}

try:
    import redis
except ImportError:
    logger.debug("redis not installed, RedisCache unavailable")
else:
    from aiocache.backends.redis import RedisCache

    AIOCACHE_CACHES[RedisCache.NAME] = RedisCache
    del redis

try:
    import aiomcache
except ImportError:
    logger.debug("aiomcache not installed, Memcached unavailable")
else:
    from aiocache.backends.memcached import MemcachedCache

    AIOCACHE_CACHES[MemcachedCache.NAME] = MemcachedCache
    del aiomcache

from .decorators import cached, cached_stampede, multi_cached  # noqa: E402,I202
from .factory import Cache, caches  # noqa: E402

__all__ = (
    "caches",
    "Cache",
    "cached",
    "cached_stampede",
    "multi_cached",
    *(c.__name__ for c in AIOCACHE_CACHES.values()),
)

aiocache-0.12.2/aiocache/backends/__init__.py
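To make the minimum interface above concrete, here is a toy dict-backed implementation of part of it. This is an illustrative sketch using only the standard library, not aiocache's actual code; the class name ``ToyMemoryCache`` is hypothetical.

```python
import asyncio


class ToyMemoryCache:
    """Illustrative dict-backed cache exposing part of the minimal interface."""

    def __init__(self):
        self._store = {}

    async def add(self, key, value):
        # Only adds key/value if key does not exist.
        if key in self._store:
            raise ValueError("Key {} already exists".format(key))
        self._store[key] = value
        return True

    async def set(self, key, value):
        self._store[key] = value
        return True

    async def get(self, key, default=None):
        return self._store.get(key, default)

    async def multi_set(self, pairs):
        for key, value in pairs:
            self._store[key] = value
        return True

    async def multi_get(self, keys):
        return [self._store.get(key) for key in keys]

    async def exists(self, key):
        return key in self._store

    async def delete(self, key):
        # Returns the number of deleted items (0 or 1 here).
        return 1 if self._store.pop(key, None) is not None else 0


async def main():
    cache = ToyMemoryCache()
    await cache.set("key", "value")
    assert await cache.get("key") == "value"
    await cache.multi_set([("a", 1), ("b", 2)])
    assert await cache.multi_get(["a", "b"]) == [1, 2]
    assert await cache.delete("a") == 1

asyncio.run(main())
```

Real backends add serialization, namespacing and TTLs on top of this shape, but the method surface is the same.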
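The try/except import pattern in ``__init__.py`` is what makes redis and memcached soft dependencies: a backend is registered only if its client library imports cleanly. The same idea can be sketched generically as follows (the module names and the ``discover_backends`` helper here are illustrative, not aiocache API):

```python
import importlib
import logging
from typing import Dict, Type

logger = logging.getLogger(__name__)


class Backend:
    """Minimal stand-in for a cache backend base class."""
    NAME = "memory"


class FakeRedisBackend(Backend):
    """Hypothetical backend gated behind an optional client library."""
    NAME = "redis"


def discover_backends(optional: Dict[str, Type[Backend]]) -> Dict[str, Type[Backend]]:
    """Register only the backends whose client library imports cleanly."""
    registry: Dict[str, Type[Backend]] = {Backend.NAME: Backend}
    for module_name, backend_cls in optional.items():
        try:
            importlib.import_module(module_name)
        except ImportError:
            # Missing optional dependency: log and move on, as aiocache does.
            logger.debug("%s not installed, %s unavailable", module_name, backend_cls.__name__)
        else:
            registry[backend_cls.NAME] = backend_cls
    return registry


# "nonexistent_client_lib" will fail to import, so only "memory" is registered.
registry = discover_backends({"nonexistent_client_lib": FakeRedisBackend})
```

Importing at the top level and then `del`-ing the client module, as the real ``__init__.py`` does, achieves the same effect without `importlib`.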
# aiocache-0.12.2/aiocache/backends/memcached.py
import asyncio

import aiomcache

from aiocache.base import BaseCache
from aiocache.serializers import JsonSerializer


class MemcachedBackend(BaseCache):
    def __init__(self, endpoint="127.0.0.1", port=11211, pool_size=2, **kwargs):
        super().__init__(**kwargs)
        self.endpoint = endpoint
        self.port = port
        self.pool_size = int(pool_size)
        self.client = aiomcache.Client(self.endpoint, self.port, pool_size=self.pool_size)

    async def _get(self, key, encoding="utf-8", _conn=None):
        value = await self.client.get(key)
        if encoding is None or value is None:
            return value
        return value.decode(encoding)

    async def _gets(self, key, encoding="utf-8", _conn=None):
        key = key.encode() if isinstance(key, str) else key
        _, token = await self.client.gets(key)
        return token

    async def _multi_get(self, keys, encoding="utf-8", _conn=None):
        values = []
        for value in await self.client.multi_get(*keys):
            if encoding is None or value is None:
                values.append(value)
            else:
                values.append(value.decode(encoding))
        return values

    async def _set(self, key, value, ttl=0, _cas_token=None, _conn=None):
        value = value.encode() if isinstance(value, str) else value
        if _cas_token is not None:
            return await self._cas(key, value, _cas_token, ttl=ttl, _conn=_conn)
        try:
            return await self.client.set(key, value, exptime=ttl or 0)
        except aiomcache.exceptions.ValidationException as e:
            raise TypeError("aiomcache error: {}".format(str(e)))

    async def _cas(self, key, value, token, ttl=None, _conn=None):
        return await self.client.cas(key, value, token, exptime=ttl or 0)

    async def _multi_set(self, pairs, ttl=0, _conn=None):
        tasks = []
        for key, value in pairs:
            value = str.encode(value) if isinstance(value, str) else value
            tasks.append(self.client.set(key, value, exptime=ttl or 0))
        try:
            await asyncio.gather(*tasks)
        except aiomcache.exceptions.ValidationException as e:
            raise TypeError("aiomcache error: {}".format(str(e)))
        return True

    async def _add(self, key, value, ttl=0, _conn=None):
        value = str.encode(value) if isinstance(value, str) else value
        try:
            ret = await self.client.add(key, value, exptime=ttl or 0)
        except aiomcache.exceptions.ValidationException as e:
            raise TypeError("aiomcache error: {}".format(str(e)))
        if not ret:
            raise ValueError("Key {} already exists, use .set to update the value".format(key))
        return True

    async def _exists(self, key, _conn=None):
        return await self.client.append(key, b"")

    async def _increment(self, key, delta, _conn=None):
        incremented = None
        try:
            if delta > 0:
                incremented = await self.client.incr(key, delta)
            else:
                incremented = await self.client.decr(key, abs(delta))
        except aiomcache.exceptions.ClientException as e:
            if "NOT_FOUND" in str(e):
                await self._set(key, str(delta).encode())
            else:
                raise TypeError("aiomcache error: {}".format(str(e)))
        return incremented or delta

    async def _expire(self, key, ttl, _conn=None):
        return await self.client.touch(key, ttl)

    async def _delete(self, key, _conn=None):
        return 1 if await self.client.delete(key) else 0

    async def _clear(self, namespace=None, _conn=None):
        if namespace:
            raise ValueError("MemcachedBackend doesn't support flushing by namespace")
        else:
            await self.client.flush_all()
        return True

    async def _raw(self, command, *args, encoding="utf-8", _conn=None, **kwargs):
        value = await getattr(self.client, command)(*args, **kwargs)
        if command in {"get", "multi_get"}:
            if encoding is not None and value is not None:
                return value.decode(encoding)
        return value

    async def _redlock_release(self, key, _):
        # Not ideal, should check the value coincides first but this would introduce
        # race conditions
        return await self._delete(key)

    async def _close(self, *args, _conn=None, **kwargs):
        await self.client.close()


class MemcachedCache(MemcachedBackend):
    """
    Memcached cache implementation with the following components as defaults:
        - serializer: :class:`aiocache.serializers.JsonSerializer`
        - plugins: []

    Config options are:

    :param serializer: obj derived from :class:`aiocache.serializers.BaseSerializer`.
    :param plugins: list of :class:`aiocache.plugins.BasePlugin` derived classes.
    :param namespace: string to use as default prefix for the key used in all operations of
        the backend. Default is None.
    :param timeout: int or float in seconds specifying maximum timeout for the operations
        to last. By default it's 5.
    :param endpoint: str with the endpoint to connect to. Default is "127.0.0.1".
    :param port: int with the port to connect to. Default is 11211.
    :param pool_size: int size for the memcached connections pool. Default is 2.
    """

    NAME = "memcached"

    def __init__(self, serializer=None, **kwargs):
        super().__init__(serializer=serializer or JsonSerializer(), **kwargs)

    @classmethod
    def parse_uri_path(cls, path):
        return {}

    def _build_key(self, key, namespace=None):
        ns_key = super()._build_key(key, namespace=namespace).replace(" ", "_")
        return str.encode(ns_key)

    def __repr__(self):  # pragma: no cover
        return "MemcachedCache ({}:{})".format(self.endpoint, self.port)


# aiocache-0.12.2/aiocache/backends/memory.py
import asyncio
from typing import Dict

from aiocache.base import BaseCache
from aiocache.serializers import NullSerializer


class SimpleMemoryBackend(BaseCache):
    """
    Wrapper around dict operations to use it as a cache backend
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._cache: Dict[str, object] = {}
        self._handlers: Dict[str, asyncio.TimerHandle] = {}

    async def _get(self, key, encoding="utf-8", _conn=None):
        return self._cache.get(key)

    async def _gets(self, key, encoding="utf-8", _conn=None):
        return await self._get(key, encoding=encoding, _conn=_conn)

    async def _multi_get(self, keys, encoding="utf-8", _conn=None):
        return [self._cache.get(key) for key in keys]

    async def _set(self, key, value, ttl=None, _cas_token=None, _conn=None):
        if _cas_token is not None and _cas_token != self._cache.get(key):
            return 0
        if key in self._handlers:
            self._handlers[key].cancel()
        self._cache[key] = value
        if ttl:
            loop = asyncio.get_running_loop()
            self._handlers[key] = loop.call_later(ttl, self.__delete, key)
        return True

    async def _multi_set(self, pairs, ttl=None, _conn=None):
        for key, value in pairs:
            await self._set(key, value, ttl=ttl)
        return True

    async def _add(self, key, value, ttl=None, _conn=None):
        if key in self._cache:
            raise ValueError("Key {} already exists, use .set to update the value".format(key))
        await self._set(key, value, ttl=ttl)
        return True

    async def _exists(self, key, _conn=None):
        return key in self._cache

    async def _increment(self, key, delta, _conn=None):
        if key not in self._cache:
            self._cache[key] = delta
        else:
            try:
                self._cache[key] = int(self._cache[key]) + delta
            except ValueError:
                raise TypeError("Value is not an integer") from None
        return self._cache[key]

    async def _expire(self, key, ttl, _conn=None):
        if key in self._cache:
            handle = self._handlers.pop(key, None)
            if handle:
                handle.cancel()
            if ttl:
                loop = asyncio.get_running_loop()
                self._handlers[key] = loop.call_later(ttl, self.__delete, key)
            return True
        return False

    async def _delete(self, key, _conn=None):
        return self.__delete(key)

    async def _clear(self, namespace=None, _conn=None):
        if namespace:
            for key in list(self._cache):
                if key.startswith(namespace):
                    self.__delete(key)
        else:
            self._cache = {}
            self._handlers = {}
        return True

    async def _raw(self, command, *args, encoding="utf-8", _conn=None, **kwargs):
        return getattr(self._cache, command)(*args, **kwargs)

    async def _redlock_release(self, key, value):
        if self._cache.get(key) == value:
            self._cache.pop(key)
            return 1
        return 0

    def __delete(self, key):
        if self._cache.pop(key, None) is not None:
            handle = self._handlers.pop(key, None)
            if handle:
                handle.cancel()
            return 1
        return 0


class SimpleMemoryCache(SimpleMemoryBackend):
    """
    Memory cache implementation with the following components as defaults:
        - serializer: :class:`aiocache.serializers.NullSerializer`
        - plugins: None

    Config options are:

    :param serializer: obj derived from :class:`aiocache.serializers.BaseSerializer`.
    :param plugins: list of :class:`aiocache.plugins.BasePlugin` derived classes.
    :param namespace: string to use as default prefix for the key used in all operations of
        the backend. Default is None.
    :param timeout: int or float in seconds specifying maximum timeout for the operations
        to last. By default it's 5.
    """

    NAME = "memory"

    def __init__(self, serializer=None, **kwargs):
        super().__init__(serializer=serializer or NullSerializer(), **kwargs)

    @classmethod
    def parse_uri_path(cls, path):
        return {}


# aiocache-0.12.2/aiocache/backends/redis.py
import itertools
import warnings

import redis.asyncio as redis
from redis.exceptions import ResponseError as IncrbyException

from aiocache.base import BaseCache, _ensure_key
from aiocache.serializers import JsonSerializer

_NOT_SET = object()


class RedisBackend(BaseCache):
    RELEASE_SCRIPT = (
        "if redis.call('get',KEYS[1]) == ARGV[1] then"
        " return redis.call('del',KEYS[1])"
        " else"
        " return 0"
        " end"
    )

    CAS_SCRIPT = (
        "if redis.call('get',KEYS[1]) == ARGV[2] then"
        " if #ARGV == 4 then"
        " return redis.call('set', KEYS[1], ARGV[1], ARGV[3], ARGV[4])"
        " else"
        " return redis.call('set', KEYS[1], ARGV[1])"
        " end"
        " else"
        " return 0"
        " end"
    )

    def __init__(
        self,
        endpoint="127.0.0.1",
        port=6379,
        db=0,
        password=None,
        pool_min_size=_NOT_SET,
        pool_max_size=None,
        create_connection_timeout=None,
        **kwargs,
    ):
        super().__init__(**kwargs)
        if pool_min_size is not _NOT_SET:
            warnings.warn(
                "Parameter 'pool_min_size' is deprecated since aiocache 0.12",
                DeprecationWarning,
            )

        self.endpoint = endpoint
        self.port = int(port)
        self.db = int(db)
        self.password = password
        # TODO: Remove int() call some time after adding type annotations.
        self.pool_max_size = None if pool_max_size is None else int(pool_max_size)
        self.create_connection_timeout = (
            float(create_connection_timeout) if create_connection_timeout else None
        )

        # NOTE: decoding can't be controlled on API level after switching to
        # redis, we need to disable decoding on global/connection level
        # (decode_responses=False), because some of the values are saved as
        # bytes directly, like pickle serialized values, which may raise an
        # exception when decoded with 'utf-8'.
        self.client = redis.Redis(
            host=self.endpoint,
            port=self.port,
            db=self.db,
            password=self.password,
            decode_responses=False,
            socket_connect_timeout=self.create_connection_timeout,
            max_connections=self.pool_max_size,
        )

    async def _get(self, key, encoding="utf-8", _conn=None):
        value = await self.client.get(key)
        if encoding is None or value is None:
            return value
        return value.decode(encoding)

    async def _gets(self, key, encoding="utf-8", _conn=None):
        return await self._get(key, encoding=encoding, _conn=_conn)

    async def _multi_get(self, keys, encoding="utf-8", _conn=None):
        values = await self.client.mget(*keys)
        if encoding is None:
            return values
        return [v if v is None else v.decode(encoding) for v in values]

    async def _set(self, key, value, ttl=None, _cas_token=None, _conn=None):
        if _cas_token is not None:
            return await self._cas(key, value, _cas_token, ttl=ttl, _conn=_conn)
        if ttl is None:
            return await self.client.set(key, value)
        if isinstance(ttl, float):
            ttl = int(ttl * 1000)
            return await self.client.psetex(key, ttl, value)
        return await self.client.setex(key, ttl, value)

    async def _cas(self, key, value, token, ttl=None, _conn=None):
        args = ()
        if ttl is not None:
            args = ("PX", int(ttl * 1000)) if isinstance(ttl, float) else ("EX", ttl)
        return await self._raw("eval", self.CAS_SCRIPT, 1, key, value, token, *args, _conn=_conn)

    async def _multi_set(self, pairs, ttl=None, _conn=None):
        ttl = ttl or 0

        flattened = list(itertools.chain.from_iterable((key, value) for key, value in pairs))
        if ttl:
            await self.__multi_set_ttl(flattened, ttl)
        else:
            await self.client.execute_command("MSET", *flattened)
        return True

    async def __multi_set_ttl(self, flattened, ttl):
        async with self.client.pipeline(transaction=True) as p:
            p.execute_command("MSET", *flattened)
            ttl, exp = (int(ttl * 1000), p.pexpire) if isinstance(ttl, float) else (ttl, p.expire)
            for key in flattened[::2]:
                exp(key, time=ttl)
            await p.execute()

    async def _add(self, key, value, ttl=None, _conn=None):
        kwargs = {"nx": True}
        if isinstance(ttl, float):
            kwargs["px"] = int(ttl * 1000)
        else:
            kwargs["ex"] = ttl
        was_set = await self.client.set(key, value, **kwargs)
        if not was_set:
            raise ValueError("Key {} already exists, use .set to update the value".format(key))
        return was_set

    async def _exists(self, key, _conn=None):
        number = await self.client.exists(key)
        return bool(number)

    async def _increment(self, key, delta, _conn=None):
        try:
            return await self.client.incrby(key, delta)
        except IncrbyException:
            raise TypeError("Value is not an integer") from None

    async def _expire(self, key, ttl, _conn=None):
        if ttl == 0:
            return await self.client.persist(key)
        return await self.client.expire(key, ttl)

    async def _delete(self, key, _conn=None):
        return await self.client.delete(key)

    async def _clear(self, namespace=None, _conn=None):
        if namespace:
            keys = await self.client.keys("{}:*".format(namespace))
            if keys:
                await self.client.delete(*keys)
        else:
            await self.client.flushdb()
        return True

    async def _raw(self, command, *args, encoding="utf-8", _conn=None, **kwargs):
        value = await getattr(self.client, command)(*args, **kwargs)
        if encoding is not None:
            if command == "get" and value is not None:
                value = value.decode(encoding)
            elif command in {"keys", "mget"}:
                value = [v if v is None else v.decode(encoding) for v in value]
        return value

    async def _redlock_release(self, key, value):
        return await self._raw("eval", self.RELEASE_SCRIPT, 1, key, value)

    async def _close(self, *args, _conn=None, **kwargs):
        await self.client.close()
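The Redis backend's ``_set`` above dispatches on the ttl type: no ttl maps to a plain SET, a float ttl is converted to integer milliseconds for PSETEX, and an int ttl uses SETEX in seconds. A dependency-free sketch of just that dispatch (command names as plain tuples, no real Redis client; ``pick_set_command`` is an illustrative helper, not an aiocache function):

```python
def pick_set_command(ttl):
    # Mirrors the ttl dispatch in RedisBackend._set:
    #   no ttl        -> SET
    #   float ttl     -> PSETEX with milliseconds
    #   int ttl       -> SETEX with seconds
    if ttl is None:
        return ("SET",)
    if isinstance(ttl, float):
        return ("PSETEX", int(ttl * 1000))
    return ("SETEX", ttl)


print(pick_set_command(None))  # ('SET',)
print(pick_set_command(0.5))   # ('PSETEX', 500)
print(pick_set_command(10))    # ('SETEX', 10)
```

The float-to-milliseconds conversion is why sub-second ttls only work on backends that support them (redis and memory), as the docstrings below note.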
class RedisCache(RedisBackend): """ Redis cache implementation with the following components as defaults: - serializer: :class:`aiocache.serializers.JsonSerializer` - plugins: [] Config options are: :param serializer: obj derived from :class:`aiocache.serializers.BaseSerializer`. :param plugins: list of :class:`aiocache.plugins.BasePlugin` derived classes. :param namespace: string to use as default prefix for the key used in all operations of the backend. Default is None. :param timeout: int or float in seconds specifying maximum timeout for the operations to last. By default its 5. :param endpoint: str with the endpoint to connect to. Default is "127.0.0.1". :param port: int with the port to connect to. Default is 6379. :param db: int indicating database to use. Default is 0. :param password: str indicating password to use. Default is None. :param pool_max_size: int maximum pool size for the redis connections pool. Default is None. :param create_connection_timeout: int timeout for the creation of connection. Default is None """ NAME = "redis" def __init__(self, serializer=None, **kwargs): super().__init__(serializer=serializer or JsonSerializer(), **kwargs) @classmethod def parse_uri_path(cls, path): """ Given a uri path, return the Redis specific configuration options in that path string according to iana definition http://www.iana.org/assignments/uri-schemes/prov/redis :param path: string containing the path. Example: "/0" :return: mapping containing the options. 
Example: {"db": "0"} """ options = {} db, *_ = path[1:].split("/") if db: options["db"] = db return options def _build_key(self, key, namespace=None): if namespace is not None: return "{}{}{}".format( namespace, ":" if namespace else "", _ensure_key(key)) if self.namespace is not None: return "{}{}{}".format( self.namespace, ":" if self.namespace else "", _ensure_key(key)) return key def __repr__(self): # pragma: no cover return "RedisCache ({}:{})".format(self.endpoint, self.port) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/base.py0000644000175100001730000004743714464001404015765 0ustar00runnerdockerimport asyncio import functools import logging import os import time from enum import Enum from types import TracebackType from typing import Callable, Optional, Set, Type from aiocache import serializers logger = logging.getLogger(__name__) SENTINEL = object() class API: CMDS: Set[Callable[..., object]] = set() @classmethod def register(cls, func): API.CMDS.add(func) return func @classmethod def unregister(cls, func): API.CMDS.discard(func) @classmethod def timeout(cls, func): """ This decorator sets a maximum timeout for a coroutine to execute. The timeout can be both set in the ``self.timeout`` attribute or in the ``timeout`` kwarg of the function call. I.e if you have a function ``get(self, key)``, if its decorated with this decorator, you will be able to call it with ``await get(self, "my_key", timeout=4)``. Use 0 or None to disable the timeout. 
""" NOT_SET = "NOT_SET" @functools.wraps(func) async def _timeout(self, *args, timeout=NOT_SET, **kwargs): timeout = self.timeout if timeout == NOT_SET else timeout if timeout == 0 or timeout is None: return await func(self, *args, **kwargs) return await asyncio.wait_for(func(self, *args, **kwargs), timeout) return _timeout @classmethod def aiocache_enabled(cls, fake_return=None): """ Use this decorator to be able to fake the return of the function by setting the ``AIOCACHE_DISABLE`` environment variable """ def enabled(func): @functools.wraps(func) async def _enabled(*args, **kwargs): if os.getenv("AIOCACHE_DISABLE") == "1": return fake_return return await func(*args, **kwargs) return _enabled return enabled @classmethod def plugins(cls, func): @functools.wraps(func) async def _plugins(self, *args, **kwargs): start = time.monotonic() for plugin in self.plugins: await getattr(plugin, "pre_{}".format(func.__name__))(self, *args, **kwargs) ret = await func(self, *args, **kwargs) end = time.monotonic() for plugin in self.plugins: await getattr(plugin, "post_{}".format(func.__name__))( self, *args, took=end - start, ret=ret, **kwargs ) return ret return _plugins class BaseCache: """ Base class that agregates the common logic for the different caches that may exist. Cache related available options are: :param serializer: obj derived from :class:`aiocache.serializers.BaseSerializer`. Default is :class:`aiocache.serializers.StringSerializer`. :param plugins: list of :class:`aiocache.plugins.BasePlugin` derived classes. Default is empty list. :param namespace: string to use as default prefix for the key used in all operations of the backend. Default is None :param key_builder: alternative callable to build the key. Receives the key and the namespace as params and should return something that can be used as key by the underlying backend. :param timeout: int or float in seconds specifying maximum timeout for the operations to last. By default its 5. 
Use 0 or None if you want to disable it. :param ttl: int the expiration time in seconds to use as a default in all operations of the backend. It can be overriden in the specific calls. """ NAME: str def __init__( self, serializer=None, plugins=None, namespace=None, key_builder=None, timeout=5, ttl=None ): self.timeout = float(timeout) if timeout is not None else timeout self.namespace = namespace self.ttl = float(ttl) if ttl is not None else ttl self.build_key = key_builder or self._build_key self._serializer = None self.serializer = serializer or serializers.StringSerializer() self._plugins = None self.plugins = plugins or [] @property def serializer(self): return self._serializer @serializer.setter def serializer(self, value): self._serializer = value @property def plugins(self): return self._plugins @plugins.setter def plugins(self, value): self._plugins = value @API.register @API.aiocache_enabled(fake_return=True) @API.timeout @API.plugins async def add(self, key, value, ttl=SENTINEL, dumps_fn=None, namespace=None, _conn=None): """ Stores the value in the given key with ttl if specified. Raises an error if the key already exists. :param key: str :param value: obj :param ttl: int the expiration time in seconds. Due to memcached restrictions if you want compatibility use int. 
In case you need miliseconds, redis and memory support float ttls :param dumps_fn: callable alternative to use as dumps function :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: True if key is inserted :raises: - ValueError if key already exists - :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() dumps = dumps_fn or self._serializer.dumps ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) await self._add(ns_key, dumps(value), ttl=self._get_ttl(ttl), _conn=_conn) logger.debug("ADD %s %s (%.4f)s", ns_key, True, time.monotonic() - start) return True async def _add(self, key, value, ttl, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled() @API.timeout @API.plugins async def get(self, key, default=None, loads_fn=None, namespace=None, _conn=None): """ Get a value from the cache. Returns default if not found. 
:param key: str :param default: obj to return when key is not found :param loads_fn: callable alternative to use as loads function :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: obj loaded :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() loads = loads_fn or self._serializer.loads ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) value = loads(await self._get(ns_key, encoding=self.serializer.encoding, _conn=_conn)) logger.debug("GET %s %s (%.4f)s", ns_key, value is not None, time.monotonic() - start) return value if value is not None else default async def _get(self, key, encoding, _conn=None): raise NotImplementedError() async def _gets(self, key, encoding="utf-8", _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=[]) @API.timeout @API.plugins async def multi_get(self, keys, loads_fn=None, namespace=None, _conn=None): """ Get multiple values from the cache, values not found are Nones. 
:param keys: list of str :param loads_fn: callable alternative to use as loads function :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: list of objs :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() loads = loads_fn or self._serializer.loads ns = namespace if namespace is not None else self.namespace ns_keys = [self.build_key(key, namespace=ns) for key in keys] values = [ loads(value) for value in await self._multi_get( ns_keys, encoding=self.serializer.encoding, _conn=_conn ) ] logger.debug( "MULTI_GET %s %d (%.4f)s", ns_keys, len([value for value in values if value is not None]), time.monotonic() - start, ) return values async def _multi_get(self, keys, encoding, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=True) @API.timeout @API.plugins async def set( self, key, value, ttl=SENTINEL, dumps_fn=None, namespace=None, _cas_token=None, _conn=None ): """ Stores the value in the given key with ttl if specified :param key: str :param value: obj :param ttl: int the expiration time in seconds. Due to memcached restrictions if you want compatibility use int. 
In case you need miliseconds, redis and memory support float ttls :param dumps_fn: callable alternative to use as dumps function :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: True if the value was set :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() dumps = dumps_fn or self._serializer.dumps ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) res = await self._set( ns_key, dumps(value), ttl=self._get_ttl(ttl), _cas_token=_cas_token, _conn=_conn ) logger.debug("SET %s %d (%.4f)s", ns_key, True, time.monotonic() - start) return res async def _set(self, key, value, ttl, _cas_token=None, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=True) @API.timeout @API.plugins async def multi_set(self, pairs, ttl=SENTINEL, dumps_fn=None, namespace=None, _conn=None): """ Stores multiple values in the given keys. :param pairs: list of two element iterables. First is key and second is value :param ttl: int the expiration time in seconds. Due to memcached restrictions if you want compatibility use int. 
In case you need miliseconds, redis and memory support float ttls :param dumps_fn: callable alternative to use as dumps function :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: True :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() dumps = dumps_fn or self._serializer.dumps ns = namespace if namespace is not None else self.namespace tmp_pairs = [] for key, value in pairs: tmp_pairs.append((self.build_key(key, namespace=ns), dumps(value))) await self._multi_set(tmp_pairs, ttl=self._get_ttl(ttl), _conn=_conn) logger.debug( "MULTI_SET %s %d (%.4f)s", [key for key, value in tmp_pairs], len(tmp_pairs), time.monotonic() - start, ) return True async def _multi_set(self, pairs, ttl, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=0) @API.timeout @API.plugins async def delete(self, key, namespace=None, _conn=None): """ Deletes the given key. :param key: Key to be deleted :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: int number of deleted keys :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) ret = await self._delete(ns_key, _conn=_conn) logger.debug("DELETE %s %d (%.4f)s", ns_key, ret, time.monotonic() - start) return ret async def _delete(self, key, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=False) @API.timeout @API.plugins async def exists(self, key, namespace=None, _conn=None): """ Check key exists in the cache. 
:param key: str key to check :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: True if key exists otherwise False :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) ret = await self._exists(ns_key, _conn=_conn) logger.debug("EXISTS %s %d (%.4f)s", ns_key, ret, time.monotonic() - start) return ret async def _exists(self, key, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=1) @API.timeout @API.plugins async def increment(self, key, delta=1, namespace=None, _conn=None): """ Increments value stored in key by delta (can be negative). If key doesn't exist, it creates the key with delta as value. :param key: str key to check :param delta: int amount to increment/decrement :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: Value of the key once incremented. -1 if key is not found. :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout :raises: :class:`TypeError` if value is not incrementable """ start = time.monotonic() ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) ret = await self._increment(ns_key, delta, _conn=_conn) logger.debug("INCREMENT %s %d (%.4f)s", ns_key, ret, time.monotonic() - start) return ret async def _increment(self, key, delta, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=False) @API.timeout @API.plugins async def expire(self, key, ttl, namespace=None, _conn=None): """ Set the ttl to the given key. By setting it to 0, it will disable it :param key: str key to expire :param ttl: int number of seconds for expiration. 
If 0, ttl is disabled :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: True if set, False if key is not found :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() ns = namespace if namespace is not None else self.namespace ns_key = self.build_key(key, namespace=ns) ret = await self._expire(ns_key, ttl, _conn=_conn) logger.debug("EXPIRE %s %d (%.4f)s", ns_key, ret, time.monotonic() - start) return ret async def _expire(self, key, ttl, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled(fake_return=True) @API.timeout @API.plugins async def clear(self, namespace=None, _conn=None): """ Clears the cache in the cache namespace. If an alternative namespace is given, it will clear those ones instead. :param namespace: str alternative namespace to use :param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: True :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() ret = await self._clear(namespace, _conn=_conn) logger.debug("CLEAR %s %d (%.4f)s", namespace, ret, time.monotonic() - start) return ret async def _clear(self, namespace, _conn=None): raise NotImplementedError() @API.register @API.aiocache_enabled() @API.timeout @API.plugins async def raw(self, command, *args, _conn=None, **kwargs): """ Send the raw command to the underlying client. Note that by using this CMD you will lose compatibility with other backends. Due to limitations with aiomcache client, args have to be provided as bytes. For rest of backends, str. :param command: str with the command. 
:param timeout: int or float in seconds specifying maximum timeout for the operations to last :returns: whatever the underlying client returns :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() ret = await self._raw( command, *args, encoding=self.serializer.encoding, _conn=_conn, **kwargs ) logger.debug("%s (%.4f)s", command, time.monotonic() - start) return ret async def _raw(self, command, *args, **kwargs): raise NotImplementedError() async def _redlock_release(self, key, value): raise NotImplementedError() @API.timeout async def close(self, *args, _conn=None, **kwargs): """ Perform any resource clean up necessary to exit the program safely. After closing, cmd execution is still possible but you will have to close again before exiting. :raises: :class:`asyncio.TimeoutError` if it lasts more than self.timeout """ start = time.monotonic() ret = await self._close(*args, _conn=_conn, **kwargs) logger.debug("CLOSE (%.4f)s", time.monotonic() - start) return ret async def _close(self, *args, **kwargs): pass def _build_key(self, key, namespace=None): if namespace is not None: return "{}{}".format(namespace, _ensure_key(key)) if self.namespace is not None: return "{}{}".format(self.namespace, _ensure_key(key)) return key def _get_ttl(self, ttl): return ttl if ttl is not SENTINEL else self.ttl def get_connection(self): return _Conn(self) async def acquire_conn(self): return self async def release_conn(self, conn): pass async def __aenter__(self): return self async def __aexit__( self, exc_type: Optional[Type[BaseException]], exc: Optional[BaseException], tb: Optional[TracebackType] ) -> None: await self.close() class _Conn: def __init__(self, cache): self._cache = cache self._conn = None async def __aenter__(self): self._conn = await self._cache.acquire_conn() return self async def __aexit__(self, exc_type, exc_value, traceback): await self._cache.release_conn(self._conn) def __getattr__(self, name): return 
self._cache.__getattribute__(name) @classmethod def _inject_conn(cls, cmd_name): async def _do_inject_conn(self, *args, **kwargs): return await getattr(self._cache, cmd_name)(*args, _conn=self._conn, **kwargs) return _do_inject_conn def _ensure_key(key): if isinstance(key, Enum): return key.value else: return key for cmd in API.CMDS: setattr(_Conn, cmd.__name__, _Conn._inject_conn(cmd.__name__)) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/decorators.py0000644000175100001730000004454514464001404017215 0ustar00runnerdockerimport asyncio import functools import inspect import logging from aiocache.base import SENTINEL from aiocache.factory import Cache, caches from aiocache.lock import RedLock logger = logging.getLogger(__name__) class cached: """ Caches the functions return value into a key generated with module_name, function_name and args. The cache is available in the function object as ``.cache``. In some cases you will need to send more args to configure the cache object. An example would be endpoint and port for the Redis cache. You can send those args as kwargs and they will be propagated accordingly. Only one cache instance is created per decorated call. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed. Extra args that are injected in the function that you can use to control the cache behavior are: - ``cache_read``: Controls whether the function call will try to read from cache first or not. Enabled by default. - ``cache_write``: Controls whether the function call will try to write in the cache once the result has been retrieved. Enabled by default. - ``aiocache_wait_for_write``: Controls whether the call of the function will wait for the value in the cache to be written. If set to False, the write happens in the background. Enabled by default :param ttl: int seconds to store the function call. 
Default is None which means no expiration. :param key: str value to set as key for the function return. Takes precedence over key_builder param. If key and key_builder are not passed, it will use module_name + function_name + args + kwargs :param namespace: string to use as default prefix for the key used in all operations of the backend. Default is None :param key_builder: Callable that allows building the key dynamically. It receives the function plus the same args and kwargs passed to the function. This behavior is necessarily different than ``BaseCache.build_key()`` :param skip_cache_func: Callable that receives the result after calling the wrapped function and should return `True` if the value should skip the cache (or `False` to store in the cache). e.g. to avoid caching `None` results: `lambda r: r is None` :param cache: cache class to use when calling the ``set``/``get`` operations. Default is :class:`aiocache.SimpleMemoryCache`. :param serializer: serializer instance to use when calling the ``dumps``/``loads``. If it's None, the default one from the cache backend is used. :param plugins: list of plugins to use when calling the cmd hooks. Default is pulled from the cache class being used. :param alias: str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. The same cache identified by alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias. :param noself: bool if you are decorating a class function, by default self is also used to generate the key. This will result in the same function calls done by different class instances using different cache keys. Use noself=True if you want to ignore it. 
""" def __init__( self, ttl=SENTINEL, key=None, namespace=None, key_builder=None, skip_cache_func=lambda x: False, cache=Cache.MEMORY, serializer=None, plugins=None, alias=None, noself=False, **kwargs, ): self.ttl = ttl self.key = key self.key_builder = key_builder self.skip_cache_func = skip_cache_func self.noself = noself self.alias = alias self.cache = None self._cache = cache self._serializer = serializer self._namespace = namespace self._plugins = plugins self._kwargs = kwargs def __call__(self, f): if self.alias: self.cache = caches.get(self.alias) for arg in ("serializer", "namespace", "plugins"): if getattr(self, f'_{arg}', None) is not None: logger.warning(f"Using cache alias; ignoring '{arg}' argument.") else: self.cache = _get_cache( cache=self._cache, serializer=self._serializer, namespace=self._namespace, plugins=self._plugins, **self._kwargs, ) @functools.wraps(f) async def wrapper(*args, **kwargs): return await self.decorator(f, *args, **kwargs) wrapper.cache = self.cache return wrapper async def decorator( self, f, *args, cache_read=True, cache_write=True, aiocache_wait_for_write=True, **kwargs ): key = self.get_cache_key(f, args, kwargs) if cache_read: value = await self.get_from_cache(key) if value is not None: return value result = await f(*args, **kwargs) if self.skip_cache_func(result): return result if cache_write: if aiocache_wait_for_write: await self.set_in_cache(key, result) else: # TODO: Use aiojobs to avoid warnings. 
asyncio.create_task(self.set_in_cache(key, result)) return result def get_cache_key(self, f, args, kwargs): if self.key: return self.key if self.key_builder: return self.key_builder(f, *args, **kwargs) return self._key_from_args(f, args, kwargs) def _key_from_args(self, func, args, kwargs): ordered_kwargs = sorted(kwargs.items()) return ( (func.__module__ or "") + func.__name__ + str(args[1:] if self.noself else args) + str(ordered_kwargs) ) async def get_from_cache(self, key: str): try: return await self.cache.get(key) except Exception: logger.exception("Couldn't retrieve %s, unexpected error", key) return None async def set_in_cache(self, key, value): try: await self.cache.set(key, value, ttl=self.ttl) except Exception: logger.exception("Couldn't set %s in key %s, unexpected error", value, key) class cached_stampede(cached): """ Caches the function's return value into a key generated with module_name, function_name and args, while avoiding cache stampede effects. In some cases you will need to send more args to configure the cache object. An example would be endpoint and port for the Redis cache. You can send those args as kwargs and they will be propagated accordingly. Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed. :param lease: int seconds to lock the function call to avoid cache stampede effects. If 0 or None, no locking happens (default is 2). Redis and memory backends support float TTLs. :param ttl: int seconds to store the function call. Default is None which means no expiration. :param key: str value to set as key for the function return. Takes precedence over key_from_attr param. If key and key_from_attr are not passed, it will use module_name + function_name + args + kwargs :param key_from_attr: str arg or kwarg name from the function to use as a key. 
:param namespace: string to use as default prefix for the key used in all operations of the backend. Default is None :param key_builder: Callable that allows building the key dynamically. It receives the function plus the same args and kwargs passed to the function. This behavior is necessarily different than ``BaseCache.build_key()`` :param skip_cache_func: Callable that receives the result after calling the wrapped function and should return `True` if the value should skip the cache (or `False` to store in the cache). e.g. to avoid caching `None` results: `lambda r: r is None` :param cache: cache class to use when calling the ``set``/``get`` operations. Default is :class:`aiocache.SimpleMemoryCache`. :param serializer: serializer instance to use when calling the ``dumps``/``loads``. Default is JsonSerializer. :param plugins: list of plugins to use when calling the cmd hooks. Default is pulled from the cache class being used. :param alias: str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. A new cache is created every time. :param noself: bool if you are decorating a class function, by default self is also used to generate the key. This will result in the same function calls done by different class instances using different cache keys. Use noself=True if you want to ignore it. 
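The default ``module_name + function_name + args + kwargs`` key mentioned in the parameters above can be sketched as follows (a stdlib-only sketch mirroring the logic of ``cached._key_from_args``; the function name ``default_key`` is illustrative, not an aiocache API):

```python
# Sketch of the default cache-key generation: module name + function
# name + positional args + kwargs sorted so ordering doesn't matter.
def default_key(func, args, kwargs, noself=False):
    ordered_kwargs = sorted(kwargs.items())
    return (
        (func.__module__ or "")
        + func.__name__
        + str(args[1:] if noself else args)  # drop self for methods when noself
        + str(ordered_kwargs)
    )
```

Because kwargs are sorted, calling with the same keyword arguments in a different order yields the same key.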
""" def __init__(self, lease=2, **kwargs): super().__init__(**kwargs) self.lease = lease async def decorator(self, f, *args, **kwargs): key = self.get_cache_key(f, args, kwargs) value = await self.get_from_cache(key) if value is not None: return value async with RedLock(self.cache, key, self.lease): value = await self.get_from_cache(key) if value is not None: return value result = await f(*args, **kwargs) if self.skip_cache_func(result): return result await self.set_in_cache(key, result) return result def _get_cache(cache=Cache.MEMORY, serializer=None, plugins=None, **cache_kwargs): return Cache(cache, serializer=serializer, plugins=plugins, **cache_kwargs) def _get_args_dict(func, args, kwargs): defaults = { arg_name: arg.default for arg_name, arg in inspect.signature(func).parameters.items() if arg.default is not inspect._empty # TODO: bug prone.. } args_names = func.__code__.co_varnames[: func.__code__.co_argcount] return {**defaults, **dict(zip(args_names, args)), **kwargs} class multi_cached: """ Only supports functions that return dict-like structures. This decorator caches each key/value of the dict-like object returned by the function. The dict keys of the returned data should match the set of keys that are passed to the decorated callable in an iterable object. The name of that argument is passed to this decorator via the parameter ``keys_from_attr``. ``keys_from_attr`` can be the name of a positional or keyword argument. If the argument specified by ``keys_from_attr`` is an empty list, the cache will be ignored and the function will be called. If only some of the keys in ``keys_from_attr``are cached (and ``cache_read`` is True) those values will be fetched from the cache, and only the uncached keys will be passed to the callable via the argument specified by ``keys_from_attr``. 
By default, the callable's name and call signature are not incorporated into the cache key, so if there is another cached function returning a dict with same keys, those keys will be overwritten. To avoid this, use a specific ``namespace`` in each cache decorator or pass a ``key_builder``. If ``key_builder`` is passed, then the values of ``keys_from_attr`` will be transformed before requesting them from the cache. Equivalently, the keys in the dict-like mapping returned by the decorated callable will be transformed before storing them in the cache. The cache is available in the function object as ``.cache``. Only one cache instance is created per decorated function. If you expect high concurrency of calls to the same function, you should adapt the pool size as needed. Extra args that are injected in the function that you can use to control the cache behavior are: - ``cache_read``: Controls whether the function call will try to read from cache first or not. Enabled by default. - ``cache_write``: Controls whether the function call will try to write in the cache once the result has been retrieved. Enabled by default. - ``aiocache_wait_for_write``: Controls whether the call of the function will wait for the value in the cache to be written. If set to False, the write happens in the background. Enabled by default :param keys_from_attr: name of the arg or kwarg in the decorated callable that contains an iterable that yields the keys returned by the decorated callable. :param namespace: string to use as default prefix for the key used in all operations of the backend. Default is None :param key_builder: Callable that enables mapping the decorated function's keys to the keys used by the cache. Receives a key from the iterable corresponding to ``keys_from_attr``, the decorated callable, and the positional and keyword arguments that were passed to the decorated callable. 
This behavior is necessarily different than ``BaseCache.build_key()`` and the call signature differs from ``cached.key_builder``. :param skip_cache_func: Callable that receives both key and value and returns True if that key-value pair should not be cached (or False to store in cache). The keys and values to be passed are taken from the wrapped function result. :param ttl: int seconds to store the keys. Default is 0 which means no expiration. :param cache: cache class to use when calling the ``multi_set``/``multi_get`` operations. Default is :class:`aiocache.SimpleMemoryCache`. :param serializer: serializer instance to use when calling the ``dumps``/``loads``. If it's None, the default one from the cache backend is used. :param plugins: plugins to use when calling the cmd hooks. Default is pulled from the cache class being used. :param alias: str specifying the alias to load the config from. If alias is passed, other config parameters are ignored. The same cache identified by alias is used on every call. If you need a per-function cache, specify the parameters explicitly without using alias. 
""" def __init__( self, keys_from_attr, namespace=None, key_builder=None, skip_cache_func=lambda k, v: False, ttl=SENTINEL, cache=Cache.MEMORY, serializer=None, plugins=None, alias=None, **kwargs, ): self.keys_from_attr = keys_from_attr self.key_builder = key_builder or (lambda key, f, *args, **kwargs: key) self.skip_cache_func = skip_cache_func self.ttl = ttl self.alias = alias self.cache = None self._cache = cache self._serializer = serializer self._namespace = namespace self._plugins = plugins self._kwargs = kwargs def __call__(self, f): if self.alias: self.cache = caches.get(self.alias) for arg in ("serializer", "namespace", "plugins"): if getattr(self, f'_{arg}', None) is not None: logger.warning(f"Using cache alias; ignoring '{arg}' argument.") else: self.cache = _get_cache( cache=self._cache, serializer=self._serializer, namespace=self._namespace, plugins=self._plugins, **self._kwargs, ) @functools.wraps(f) async def wrapper(*args, **kwargs): return await self.decorator(f, *args, **kwargs) wrapper.cache = self.cache return wrapper async def decorator( self, f, *args, cache_read=True, cache_write=True, aiocache_wait_for_write=True, **kwargs ): missing_keys = [] partial = {} keys, new_args, args_index = self.get_cache_keys(f, args, kwargs) if cache_read: values = await self.get_from_cache(*keys) for key, value in zip(keys, values): if value is None: missing_keys.append(key) else: partial[key] = value if values and None not in values: return partial else: missing_keys = list(keys) if args_index > -1: new_args[args_index] = missing_keys else: kwargs[self.keys_from_attr] = missing_keys result = await f(*new_args, **kwargs) result.update(partial) to_cache = {k: v for k, v in result.items() if not self.skip_cache_func(k, v)} if not to_cache: return result if cache_write: if aiocache_wait_for_write: await self.set_in_cache(to_cache, f, args, kwargs) else: # TODO: Use aiojobs to avoid warnings. 
asyncio.create_task(self.set_in_cache(to_cache, f, args, kwargs)) return result def get_cache_keys(self, f, args, kwargs): args_dict = _get_args_dict(f, args, kwargs) keys = args_dict.get(self.keys_from_attr, []) or [] keys = [self.key_builder(key, f, *args, **kwargs) for key in keys] args_names = f.__code__.co_varnames[: f.__code__.co_argcount] new_args = list(args) keys_index = -1 if self.keys_from_attr in args_names and self.keys_from_attr not in kwargs: keys_index = args_names.index(self.keys_from_attr) new_args[keys_index] = keys return keys, new_args, keys_index async def get_from_cache(self, *keys): if not keys: return [] try: values = await self.cache.multi_get(keys) return values except Exception: logger.exception("Couldn't retrieve %s, unexpected error", keys) return [None] * len(keys) async def set_in_cache(self, result, fn, fn_args, fn_kwargs): try: await self.cache.multi_set( [(self.key_builder(k, fn, *fn_args, **fn_kwargs), v) for k, v in result.items()], ttl=self.ttl, ) except Exception: logger.exception("Couldn't set %s, unexpected error", result) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/exceptions.py0000644000175100001730000000005414464001404017214 0ustar00runnerdockerclass InvalidCacheType(Exception): pass ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/factory.py0000644000175100001730000002021014464001404016476 0ustar00runnerdockerimport logging import urllib from copy import deepcopy from typing import Dict from aiocache import AIOCACHE_CACHES from aiocache.base import BaseCache from aiocache.exceptions import InvalidCacheType logger = logging.getLogger(__name__) def _class_from_string(class_path): class_name = class_path.split(".")[-1] module_name = class_path.rstrip(class_name).rstrip(".") return getattr(__import__(module_name, fromlist=[class_name]), class_name) def _create_cache(cache, 
serializer=None, plugins=None, **kwargs): if serializer is not None: cls = serializer.pop("class") cls = _class_from_string(cls) if isinstance(cls, str) else cls serializer = cls(**serializer) plugins_instances = [] if plugins is not None: for plugin in plugins: cls = plugin.pop("class") cls = _class_from_string(cls) if isinstance(cls, str) else cls plugins_instances.append(cls(**plugin)) cache = _class_from_string(cache) if isinstance(cache, str) else cache instance = cache(serializer=serializer, plugins=plugins_instances, **kwargs) return instance class Cache: """ This class is just a proxy to the specific cache implementations like :class:`aiocache.SimpleMemoryCache`, :class:`aiocache.RedisCache` and :class:`aiocache.MemcachedCache`. It is the preferred method of instantiating new caches over using the backend specific classes. You can instantiate a new one using the ``cache_type`` attribute like: >>> from aiocache import Cache >>> Cache(Cache.REDIS) RedisCache (127.0.0.1:6379) If you don't specify anything, ``Cache.MEMORY`` is used. Only ``Cache.MEMORY``, ``Cache.REDIS`` and ``Cache.MEMCACHED`` types are allowed. If the type passed is invalid, it will raise a :class:`aiocache.exceptions.InvalidCacheType` exception. 
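The proxying described above — the factory's ``__new__`` validating the requested type and returning an instance of the selected backend class — can be sketched as follows (``Backend``, ``MemoryCache`` and ``CacheFactory`` are illustrative names, not aiocache's classes):

```python
# Sketch of the factory-as-proxy pattern: __new__ dispatches to the
# chosen backend class instead of constructing the factory itself.
class Backend:
    """Marker base class for valid cache backends."""

class MemoryCache(Backend):
    def __init__(self, namespace=None):
        self.namespace = namespace

class CacheFactory:
    MEMORY = MemoryCache

    def __new__(cls, cache_class=MEMORY, **kwargs):
        if not issubclass(cache_class, Backend):
            raise TypeError("invalid cache type")
        instance = cache_class.__new__(cache_class)
        instance.__init__(**kwargs)  # __init__ called manually: the returned
        return instance              # object is not a CacheFactory instance
```

Because the returned object is not an instance of the factory class, Python does not call the factory's ``__init__`` afterwards, which is why ``__init__`` is invoked explicitly.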
""" MEMORY = AIOCACHE_CACHES["memory"] REDIS = AIOCACHE_CACHES.get("redis") MEMCACHED = AIOCACHE_CACHES.get("memcached") def __new__(cls, cache_class=MEMORY, **kwargs): if not issubclass(cache_class, BaseCache): raise InvalidCacheType( "Invalid cache type, you can only use {}".format(list(AIOCACHE_CACHES.keys())) ) instance = cache_class.__new__(cache_class, **kwargs) instance.__init__(**kwargs) return instance @classmethod def _get_cache_class(cls, scheme): return AIOCACHE_CACHES[scheme] @classmethod def get_scheme_class(cls, scheme): try: return cls._get_cache_class(scheme) except KeyError as e: raise InvalidCacheType( "Invalid cache type, you can only use {}".format(list(AIOCACHE_CACHES.keys())) ) from e @classmethod def from_url(cls, url): """ Given a resource uri, return an instance of that cache initialized with the given parameters. An example usage: >>> from aiocache import Cache >>> Cache.from_url('memory://') a more advanced usage using queryparams to configure the cache: >>> from aiocache import Cache >>> cache = Cache.from_url('redis://localhost:10/1?pool_max_size=1') >>> cache RedisCache (localhost:10) >>> cache.db 1 >>> cache.pool_max_size 1 :param url: string identifying the resource uri of the cache to connect to """ parsed_url = urllib.parse.urlparse(url) kwargs = dict(urllib.parse.parse_qsl(parsed_url.query)) cache_class = Cache.get_scheme_class(parsed_url.scheme) if parsed_url.path: kwargs.update(cache_class.parse_uri_path(parsed_url.path)) if parsed_url.hostname: kwargs["endpoint"] = parsed_url.hostname if parsed_url.port: kwargs["port"] = parsed_url.port if parsed_url.password: kwargs["password"] = parsed_url.password return Cache(cache_class, **kwargs) class CacheHandler: _config: Dict[str, Dict[str, object]] = { "default": { "cache": "aiocache.SimpleMemoryCache", "serializer": {"class": "aiocache.serializers.StringSerializer"}, } } def __init__(self): self._caches = {} def add(self, alias: str, config: Dict[str, object]) -> None: """ Add a 
cache to the current config. If the key already exists, it will overwrite it:: >>> caches.add('default', { 'cache': "aiocache.SimpleMemoryCache", 'serializer': { 'class': "aiocache.serializers.StringSerializer" } }) :param alias: The alias for the cache :param config: Mapping containing the cache configuration """ self._config[alias] = config def get(self, alias: str) -> object: """ Retrieve cache identified by alias. Will return always the same instance If the cache was not instantiated yet, it will do it lazily the first time this is called. :param alias: str cache alias :return: cache instance """ try: return self._caches[alias] except KeyError: pass config = self.get_alias_config(alias) cache = _create_cache(**deepcopy(config)) self._caches[alias] = cache return cache def create(self, alias: str, **kwargs): """Create a new cache. You can use kwargs to pass extra parameters to configure the cache. :param alias: alias to pull configuration from :return: New cache instance """ config = self.get_alias_config(alias) # TODO(PY39): **config | kwargs return _create_cache(**{**config, **kwargs}) def get_alias_config(self, alias): config = self.get_config() if alias not in config: raise KeyError( "Could not find config for '{0}', ensure you include {0} when calling" "caches.set_config specifying the config for that cache".format(alias) ) return config[alias] def get_config(self): """ Return copy of current stored config """ return deepcopy(self._config) def set_config(self, config): """ Set (override) the default config for cache aliases from a dict-like structure. 
The structure is the following:: { 'default': { 'cache': "aiocache.SimpleMemoryCache", 'serializer': { 'class': "aiocache.serializers.StringSerializer" } }, 'redis_alt': { 'cache': "aiocache.RedisCache", 'endpoint': "127.0.0.10", 'port': 6378, 'serializer': { 'class': "aiocache.serializers.PickleSerializer" }, 'plugins': [ {'class': "aiocache.plugins.HitMissRatioPlugin"}, {'class': "aiocache.plugins.TimingPlugin"} ] } } 'default' key must always exist when passing a new config. Default configuration is:: { 'default': { 'cache': "aiocache.SimpleMemoryCache", 'serializer': { 'class': "aiocache.serializers.StringSerializer" } } } You can set your own classes there. The class params accept both str and class types. All keys in the config are optional, if they are not passed the defaults for the specified class will be used. If a config key already exists, it will be updated with the new values. """ if "default" not in config: raise ValueError("default config must be provided") for config_name in config.keys(): self._caches.pop(config_name, None) self._config = config caches = CacheHandler() ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/lock.py0000644000175100001730000001425314464001404015771 0ustar00runnerdockerimport asyncio import uuid from typing import Any, Dict, Union from aiocache.base import BaseCache class RedLock: """ Implementation of `Redlock `_ with a single instance because aiocache is focused on single instance cache. This locking has some limitations and shouldn't be used in situations where consistency is critical. Those locks are aimed for performance reasons where failing on locking from time to time is acceptable. TLDR: do NOT use this if you need real resource exclusion. Couple of considerations with the implementation: - If the lease expires and there are calls waiting, all of them will pass (blocking just happens for the first time). 
- When a new call arrives, it will wait always at most lease time. This means that the call could end up blocked longer than needed in case the lease from the blocker expires. Backend specific implementation: - Redis implements correctly the redlock algorithm. It sets the key if it doesn't exist. To release, it checks the value is the same as the instance trying to release and if it is, it removes the lock. If not it will do nothing - Memcached follows the same approach with a difference. Due to memcached lacking a way to execute the operation get and delete commands atomically, any client is able to release the lock. This is a limitation that can't be fixed without introducing race conditions. - Memory implementation is not distributed, it will only apply to the process running. Say you have 4 processes running APIs with aiocache, the locking will apply only per process (still useful to reduce load per process). Example usage:: from aiocache import Cache from aiocache.lock import RedLock cache = Cache(Cache.REDIS) async with RedLock(cache, 'key', lease=1): # Calls will wait here result = await cache.get('key') if result is not None: return result result = await super_expensive_function() await cache.set('key', result) In the example, first call will start computing the ``super_expensive_function`` while consecutive calls will block at most 1 second. If the blocking lasts for more than 1 second, the calls will proceed to also calculate the result of ``super_expensive_function``. 
""" _EVENTS: Dict[str, asyncio.Event] = {} def __init__(self, client: BaseCache, key: str, lease: Union[int, float]): self.client = client self.key = self.client.build_key(key + "-lock") self.lease = lease self._value = "" async def __aenter__(self): return await self._acquire() async def _acquire(self): self._value = str(uuid.uuid4()) try: await self.client._add(self.key, self._value, ttl=self.lease) RedLock._EVENTS[self.key] = asyncio.Event() except ValueError: await self._wait_for_release() async def _wait_for_release(self): try: await asyncio.wait_for(RedLock._EVENTS[self.key].wait(), self.lease) except asyncio.TimeoutError: pass except KeyError: # lock was released when wait_for was rescheduled pass async def __aexit__(self, exc_type, exc_value, traceback): await self._release() async def _release(self): removed = await self.client._redlock_release(self.key, self._value) if removed: RedLock._EVENTS.pop(self.key).set() class OptimisticLock: """ Implementation of `optimistic lock `_ Optimistic locking assumes multiple transactions can happen at the same time and they will only fail if before finish, conflicting modifications with other transactions are found, producing a roll back. Finding a conflict will end up raising an `aiocache.lock.OptimisticLockError` exception. A conflict happens when the value at the storage is different from the one we retrieved when the lock started. Example usage:: cache = Cache(Cache.REDIS) # The value stored in 'key' will be checked here async with OptimisticLock(cache, 'key') as lock: result = await super_expensive_call() await lock.cas(result) If any other call sets the value of ``key`` before the ``lock.cas`` is called, an :class:`aiocache.lock.OptimisticLockError` will be raised. 
A way to make the same call crash would be to change the value inside the lock like:: cache = Cache(Cache.REDIS) # The value stored in 'key' will be checked here async with OptimisticLock(cache, 'key') as lock: result = await super_expensive_call() await cache.set('key', 'random_value') # This will make the `lock.cas` call fail await lock.cas(result) If the lock is created with a nonexistent key, there will never be conflicts. """ def __init__(self, client: BaseCache, key: str): self.client = client self.key = key self.ns_key = self.client.build_key(key) self._token = None async def __aenter__(self): return await self._acquire() async def _acquire(self): self._token = await self.client._gets(self.ns_key) return self async def __aexit__(self, exc_type, exc_value, traceback): pass async def cas(self, value: Any, **kwargs: Any) -> bool: """ Checks and sets the specified value for the locked key. If the value has changed since the lock was created, it will raise an :class:`aiocache.lock.OptimisticLockError` exception. :raises: :class:`aiocache.lock.OptimisticLockError` """ success = await self.client.set(self.key, value, _cas_token=self._token, **kwargs) if not success: raise OptimisticLockError("Value has changed since the lock started") return True class OptimisticLockError(Exception): """ Raised when a conflict is found during an optimistic lock """ aiocache-0.12.2/aiocache/plugins.py """ This module implements different plugins you can attach to your cache instance. They are coded in a collaborative way so you can use multiple inheritance. 
""" from aiocache.base import API class BasePlugin: @classmethod def add_hook(cls, func, hooks): for hook in hooks: setattr(cls, hook, func) async def do_nothing(self, *args, **kwargs): pass BasePlugin.add_hook( BasePlugin.do_nothing, ["pre_{}".format(method.__name__) for method in API.CMDS] ) BasePlugin.add_hook( BasePlugin.do_nothing, ["post_{}".format(method.__name__) for method in API.CMDS] ) class TimingPlugin(BasePlugin): """ Calculates average, min and max times each command takes. The data is saved in the cache class as a dict attribute called ``profiling``. For example, to access the average time of the operation get, you can do ``cache.profiling['get_avg']`` """ @classmethod def save_time(cls, method): async def do_save_time(self, client, *args, took=0, **kwargs): if not hasattr(client, "profiling"): client.profiling = {} previous_total = client.profiling.get("{}_total".format(method), 0) previous_avg = client.profiling.get("{}_avg".format(method), 0) previous_max = client.profiling.get("{}_max".format(method), 0) previous_min = client.profiling.get("{}_min".format(method)) client.profiling["{}_total".format(method)] = previous_total + 1 client.profiling["{}_avg".format(method)] = previous_avg + (took - previous_avg) / ( previous_total + 1 ) client.profiling["{}_max".format(method)] = max(took, previous_max) client.profiling["{}_min".format(method)] = ( min(took, previous_min) if previous_min else took ) return do_save_time for method in API.CMDS: TimingPlugin.add_hook( TimingPlugin.save_time(method.__name__), ["post_{}".format(method.__name__)] ) class HitMissRatioPlugin(BasePlugin): """ Calculates the ratio of hits the cache has. The data is saved in the cache class as a dict attribute called ``hit_miss_ratio``. For example, to access the hit ratio of the cache, you can do ``cache.hit_miss_ratio['hit_ratio']``. It also provides the "total" and "hits" keys. 
""" async def post_get(self, client, key, took=0, ret=None, **kwargs): if not hasattr(client, "hit_miss_ratio"): client.hit_miss_ratio = {} client.hit_miss_ratio["total"] = 0 client.hit_miss_ratio["hits"] = 0 client.hit_miss_ratio["total"] += 1 if ret is not None: client.hit_miss_ratio["hits"] += 1 client.hit_miss_ratio["hit_ratio"] = ( client.hit_miss_ratio["hits"] / client.hit_miss_ratio["total"] ) async def post_multi_get(self, client, keys, took=0, ret=None, **kwargs): if not hasattr(client, "hit_miss_ratio"): client.hit_miss_ratio = {} client.hit_miss_ratio["total"] = 0 client.hit_miss_ratio["hits"] = 0 client.hit_miss_ratio["total"] += len(keys) for result in ret: if result is not None: client.hit_miss_ratio["hits"] += 1 client.hit_miss_ratio["hit_ratio"] = ( client.hit_miss_ratio["hits"] / client.hit_miss_ratio["total"] ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9774783 aiocache-0.12.2/aiocache/serializers/0000755000175100001730000000000014464001420017014 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/serializers/__init__.py0000644000175100001730000000102314464001404021123 0ustar00runnerdockerimport logging from .serializers import ( BaseSerializer, JsonSerializer, NullSerializer, PickleSerializer, StringSerializer, ) logger = logging.getLogger(__name__) try: import msgpack except ImportError: logger.debug("msgpack not installed, MsgPackSerializer unavailable") else: from .serializers import MsgPackSerializer del msgpack __all__ = [ "BaseSerializer", "NullSerializer", "StringSerializer", "PickleSerializer", "JsonSerializer", "MsgPackSerializer", ] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/aiocache/serializers/serializers.py0000644000175100001730000001267114464001404021733 0ustar00runnerdockerimport logging import pickle # noqa: S403 from typing import 
Any, Optional logger = logging.getLogger(__name__) try: import ujson as json # noqa: I900 except ImportError: logger.debug("ujson module not found, using json") import json # type: ignore[no-redef] try: import msgpack except ImportError: msgpack = None logger.debug("msgpack not installed, MsgPackSerializer unavailable") _NOT_SET = object() class BaseSerializer: DEFAULT_ENCODING: Optional[str] = "utf-8" def __init__(self, *args, encoding=_NOT_SET, **kwargs): self.encoding = self.DEFAULT_ENCODING if encoding is _NOT_SET else encoding super().__init__(*args, **kwargs) # TODO(PY38): Positional-only def dumps(self, value: Any) -> str: raise NotImplementedError("dumps method must be implemented") # TODO(PY38): Positional-only def loads(self, value: str) -> Any: raise NotImplementedError("loads method must be implemented") class NullSerializer(BaseSerializer): """ This serializer does nothing. It stores data as is, so it's only recommended for use with :class:`aiocache.SimpleMemoryCache`; for other backends it will produce incompatible data unless you work only with str types. DISCLAIMER: Be careful with mutable types and memory storage. The following behavior is considered normal (same as ``functools.lru_cache``):: cache = Cache() my_list = [1] await cache.set("key", my_list) my_list.append(2) await cache.get("key") # Will return [1, 2] """ def dumps(self, value): """ Returns the same value """ return value def loads(self, value): """ Returns the same value """ return value class StringSerializer(BaseSerializer): """ Converts all input values to str. All return values are also str. Be careful because this means that if you store an ``int(1)``, you will get back '1'. The transformation is done by just casting to str in the ``dumps`` method. If you want to keep python types, use ``PickleSerializer``. ``JsonSerializer`` may also be useful to keep the type of simple python types. """ def dumps(self, value): """ Serialize the received value casting it to str. 
:param value: obj Anything that supports casting to str :returns: str """ return str(value) def loads(self, value): """ Returns the value back without transformations """ return value class PickleSerializer(BaseSerializer): """ Transform data to bytes using pickle.dumps and pickle.loads to retrieve it back. """ DEFAULT_ENCODING = None def __init__(self, *args, protocol=pickle.DEFAULT_PROTOCOL, **kwargs): super().__init__(*args, **kwargs) self.protocol = protocol def dumps(self, value): """ Serialize the received value using ``pickle.dumps``. :param value: obj :returns: bytes """ return pickle.dumps(value, protocol=self.protocol) def loads(self, value): """ Deserialize value using ``pickle.loads``. :param value: bytes :returns: obj """ if value is None: return None return pickle.loads(value) # noqa: S301 class JsonSerializer(BaseSerializer): """ Transform data to a json string with json.dumps and json.loads to retrieve it back. Check https://docs.python.org/3/library/json.html#py-to-json-table for how types are converted. ujson will be used by default if available. Be careful with differences between the built-in json module and ujson: - ujson dumps supports bytes while json doesn't - ujson and json outputs may differ sometimes """ def dumps(self, value): """ Serialize the received value using ``json.dumps``. :param value: dict :returns: str """ return json.dumps(value) def loads(self, value): """ Deserialize value using ``json.loads``. :param value: str :returns: output of ``json.loads``. """ if value is None: return None return json.loads(value) class MsgPackSerializer(BaseSerializer): """ Transform data to bytes using msgpack.dumps and msgpack.loads to retrieve it back. You need to have ``msgpack`` installed in order to be able to use this serializer. :param encoding: str. Can be used to change the encoding param for the ``msgpack.loads`` method. Default is utf-8. :param use_list: bool. Can be used to change the use_list param for the ``msgpack.loads`` method. Default is True.
""" def __init__(self, *args, use_list=True, **kwargs): if not msgpack: raise RuntimeError("msgpack not installed, MsgPackSerializer unavailable") self.use_list = use_list super().__init__(*args, **kwargs) def dumps(self, value): """ Serialize the received value using ``msgpack.dumps``. :param value: obj :returns: bytes """ return msgpack.dumps(value) def loads(self, value): """ Deserialize value using ``msgpack.loads``. :param value: bytes :returns: obj """ raw = False if self.encoding == "utf-8" else True if value is None: return None return msgpack.loads(value, raw=raw, use_list=self.use_list) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9774783 aiocache-0.12.2/aiocache.egg-info/0000755000175100001730000000000014464001420016152 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353871.0 aiocache-0.12.2/aiocache.egg-info/PKG-INFO0000644000175100001730000001766514464001417017274 0ustar00runnerdockerMetadata-Version: 2.1 Name: aiocache Version: 0.12.2 Summary: multi backend asyncio cache Home-page: https://github.com/aio-libs/aiocache Author: Manuel Miranda Author-email: manu.mirandad@gmail.com Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.11 Classifier: Framework :: AsyncIO Provides-Extra: redis Provides-Extra: memcached Provides-Extra: msgpack License-File: LICENSE aiocache ######## Asyncio cache supporting multiple backends (memory, redis and memcached). .. image:: https://travis-ci.org/argaen/aiocache.svg?branch=master :target: https://travis-ci.org/argaen/aiocache .. image:: https://codecov.io/gh/argaen/aiocache/branch/master/graph/badge.svg :target: https://codecov.io/gh/argaen/aiocache .. 
image:: https://badge.fury.io/py/aiocache.svg :target: https://pypi.python.org/pypi/aiocache .. image:: https://img.shields.io/pypi/pyversions/aiocache.svg :target: https://pypi.python.org/pypi/aiocache .. image:: https://api.codacy.com/project/badge/Grade/96f772e38e63489ca884dbaf6e9fb7fd :target: https://www.codacy.com/app/argaen/aiocache .. image:: https://img.shields.io/badge/code%20style-black-000000.svg :target: https://github.com/ambv/black This library aims for simplicity over specialization. All caches contain the same minimum interface, which consists of the following functions: - ``add``: Only adds key/value if key does not exist. - ``get``: Retrieve value identified by key. - ``set``: Sets key/value. - ``multi_get``: Retrieves multiple key/values. - ``multi_set``: Sets multiple key/values. - ``exists``: Returns True if key exists, False otherwise. - ``increment``: Increment the value stored in the given key. - ``delete``: Deletes key and returns number of deleted items. - ``clear``: Clears the items stored. - ``raw``: Executes the specified command using the underlying client. .. role:: python(code) :language: python .. contents:: .. section-numbering: Installing ========== - ``pip install aiocache`` - ``pip install aiocache[redis]`` - ``pip install aiocache[memcached]`` - ``pip install aiocache[redis,memcached]`` - ``pip install aiocache[msgpack]`` Usage ===== Using a cache is as simple as .. code-block:: python >>> import asyncio >>> from aiocache import Cache >>> cache = Cache(Cache.MEMORY) # Here you can also use Cache.REDIS and Cache.MEMCACHED, default is Cache.MEMORY >>> with asyncio.Runner() as runner: >>> runner.run(cache.set('key', 'value')) True >>> runner.run(cache.get('key')) 'value' Or as a decorator .. code-block:: python import asyncio from collections import namedtuple from aiocache import cached, Cache from aiocache.serializers import PickleSerializer # With this we can store python objects in backends like Redis!
Result = namedtuple('Result', "content, status") @cached( ttl=10, cache=Cache.REDIS, key="key", serializer=PickleSerializer(), port=6379, namespace="main") async def cached_call(): print("Sleeping for three seconds zzzz.....") await asyncio.sleep(3) return Result("content", 200) async def run(): await cached_call() await cached_call() await cached_call() cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main") await cache.delete("key") if __name__ == "__main__": asyncio.run(run()) The recommended approach to instantiate a new cache is using the `Cache` constructor. However, you can also instantiate one directly using `aiocache.RedisCache`, `aiocache.SimpleMemoryCache` or `aiocache.MemcachedCache`. You can also set up cache aliases so it's easy to reuse configurations .. code-block:: python import asyncio from aiocache import caches # You can use either classes or strings for referencing classes caches.set_config({ 'default': { 'cache': "aiocache.SimpleMemoryCache", 'serializer': { 'class': "aiocache.serializers.StringSerializer" } }, 'redis_alt': { 'cache': "aiocache.RedisCache", 'endpoint': "127.0.0.1", 'port': 6379, 'timeout': 1, 'serializer': { 'class': "aiocache.serializers.PickleSerializer" }, 'plugins': [ {'class': "aiocache.plugins.HitMissRatioPlugin"}, {'class': "aiocache.plugins.TimingPlugin"} ] } }) async def default_cache(): cache = caches.get('default') # This always returns the SAME instance await cache.set("key", "value") assert await cache.get("key") == "value" async def alt_cache(): cache = caches.create('redis_alt') # This creates a NEW instance on every call await cache.set("key", "value") assert await cache.get("key") == "value" async def test_alias(): await default_cache() await alt_cache() await caches.get("redis_alt").delete("key") if __name__ == "__main__": asyncio.run(test_alias()) How does it work ================ Aiocache provides 3 main entities: - **backends**: Allow you to specify which backend you want to use for your cache.
Currently supporting: SimpleMemoryCache, RedisCache using redis_ and MemcachedCache using aiomcache_. - **serializers**: Serialize and deserialize the data between your code and the backends. This allows you to save any Python object into your cache. Currently supporting: StringSerializer, PickleSerializer, JsonSerializer, and MsgPackSerializer. But you can also build custom ones. - **plugins**: Implement a hooks system that allows executing extra behavior before and after each command. If there is a backend, serializer or plugin implementation you think could be interesting for the package, do not hesitate to open a new issue. .. image:: docs/images/architecture.png :align: center Those 3 entities combine during some of the cache operations to apply the desired command (backend), data transformation (serializer) and pre/post hooks (plugins). To get a better picture of what happens, here you can check how the ``set`` function works in ``aiocache``: .. image:: docs/images/set_operation_flow.png :align: center Amazing examples ================ In the `examples folder `_ you can check different use cases: - `Sanic, Aiohttp and Tornado `_ - `Python object in Redis `_ - `Custom serializer for compressing data `_ - `TimingPlugin and HitMissRatioPlugin demos `_ - `Using marshmallow as a serializer `_ - `Using cached decorator `_. - `Using multi_cached decorator `_. Documentation ============= - `Usage `_ - `Caches `_ - `Serializers `_ - `Plugins `_ - `Configuration `_ - `Decorators `_ - `Testing `_ - `Examples `_ .. _redis: https://github.com/redis/redis-py ..
_aiomcache: https://github.com/aio-libs/aiomcache

aiocache-0.12.2/aiocache.egg-info/SOURCES.txt:

.coveragerc CHANGES.rst LICENSE MANIFEST.in Makefile README.rst pyproject.toml requirements-dev.txt requirements.txt setup.cfg setup.py aiocache/__init__.py aiocache/base.py aiocache/decorators.py aiocache/exceptions.py aiocache/factory.py aiocache/lock.py aiocache/plugins.py aiocache.egg-info/PKG-INFO aiocache.egg-info/SOURCES.txt aiocache.egg-info/dependency_links.txt aiocache.egg-info/requires.txt aiocache.egg-info/top_level.txt aiocache/backends/__init__.py aiocache/backends/memcached.py aiocache/backends/memory.py aiocache/backends/redis.py aiocache/serializers/__init__.py aiocache/serializers/serializers.py docs/Makefile docs/caches.rst docs/conf.py docs/configuration.rst docs/decorators.rst docs/index.rst docs/locking.rst docs/plugins.rst docs/readthedocs.yml docs/serializers.rst docs/testing.rst docs/images/architecture.png docs/images/set_operation_flow.png examples/alt_key_builder.py examples/cached_alias_config.py examples/cached_decorator.py examples/marshmallow_serializer_class.py examples/multicached_decorator.py examples/optimistic_lock.py examples/plugins.py examples/python_object.py examples/redlock.py examples/run_all.sh examples/serializer_class.py examples/serializer_function.py examples/simple_redis.py examples/testing.py examples/frameworks/aiohttp_example.py examples/frameworks/sanic_example.py examples/frameworks/tornado_example.py tests/__init__.py tests/utils.py tests/acceptance/__init__.py tests/acceptance/conftest.py tests/acceptance/test_base.py tests/acceptance/test_decorators.py tests/acceptance/test_factory.py tests/acceptance/test_lock.py tests/acceptance/test_plugins.py tests/acceptance/test_serializers.py tests/performance/__init__.py tests/performance/conftest.py tests/performance/server.py tests/performance/test_concurrency.py tests/performance/test_footprint.py tests/ut/__init__.py tests/ut/conftest.py tests/ut/test_base.py tests/ut/test_decorators.py tests/ut/test_exceptions.py tests/ut/test_factory.py tests/ut/test_lock.py tests/ut/test_plugins.py tests/ut/test_serializers.py tests/ut/backends/__init__.py tests/ut/backends/test_memcached.py tests/ut/backends/test_memory.py tests/ut/backends/test_redis.py

aiocache-0.12.2/aiocache.egg-info/dependency_links.txt: (empty)

aiocache-0.12.2/aiocache.egg-info/requires.txt:

[memcached] aiomcache>=0.5.2 [msgpack] msgpack>=0.5.5 [redis] redis>=4.2.0

aiocache-0.12.2/aiocache.egg-info/top_level.txt:

aiocache

aiocache-0.12.2/docs/Makefile:

# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " applehelp to make an Apple Help Book" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " epub3 to make an epub3" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " coverage to run coverage check of the documentation (if enabled)" @echo " dummy to check syntax errors of document sources" .PHONY: clean clean: rm -rf $(BUILDDIR)/* .PHONY: html html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." .PHONY: dirhtml dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 
.PHONY: singlehtml singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." .PHONY: pickle pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." .PHONY: json json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." .PHONY: htmlhelp htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." .PHONY: qthelp qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/aiocache.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/aiocache.qhc" .PHONY: applehelp applehelp: $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp @echo @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." @echo "N.B. You won't be able to view it unless you put it in" \ "~/Library/Documentation/Help or install it in your application" \ "bundle." .PHONY: devhelp devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/aiocache" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/aiocache" @echo "# devhelp" .PHONY: epub epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." .PHONY: epub3 epub3: $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 @echo @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." 
.PHONY: latex latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." .PHONY: latexpdf latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: latexpdfja latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." .PHONY: text text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." .PHONY: man man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." .PHONY: texinfo texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." .PHONY: info info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." .PHONY: gettext gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." .PHONY: changes changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." 
.PHONY: linkcheck linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." .PHONY: doctest doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." .PHONY: coverage coverage: $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage @echo "Testing of coverage in the sources finished, look at the " \ "results in $(BUILDDIR)/coverage/python.txt." .PHONY: xml xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." .PHONY: pseudoxml pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." .PHONY: dummy dummy: $(SPHINXBUILD) -b dummy $(ALLSPHINXOPTS) $(BUILDDIR)/dummy @echo @echo "Build finished. Dummy builder generates no files."

aiocache-0.12.2/docs/caches.rst:

.. _caches: Caches ====== You can use different caches according to your needs. All the caches implement the same interface. A cache always works together with a serializer, which transforms data when storing to and retrieving from the backend. It may also contain plugins that enrich the behavior of your cache (like adding metrics, logs, etc). This is the flow of the ``set`` command: .. image:: images/set_operation_flow.png :align: center Let's go with a more specific case. Let's pick Redis as the cache with namespace "test" and PickleSerializer as the serializer: #. We receive ``set("key", "value")``. #. Hook ``pre_set`` of all attached plugins (none by default) is called. #. 
"key" will become "test:key" when calling ``build_key``. #. "value" will become an array of bytes when calling ``serializer.dumps`` because of ``PickleSerializer``. #. the byte array is stored together with the key using ``set`` cmd in Redis. #. Hook ``post_set`` of all attached plugins is called. By default, all commands are covered by a timeout that will trigger an ``asyncio.TimeoutError`` in case of timeout. Timeout can be set at instance level or when calling the command. The supported commands are: - add - get - set - multi_get - multi_set - delete - exists - increment - expire - clear - raw If you feel a command is missing here do not hesitate to `open an issue `_ .. _basecache: BaseCache --------- .. autoclass:: aiocache.base.BaseCache :members: .. _cache: Cache ----- .. autoclass:: aiocache.Cache :members: .. _rediscache: RedisCache ---------- .. autoclass:: aiocache.RedisCache :members: .. _simplememorycache: SimpleMemoryCache ----------------- .. autoclass:: aiocache.SimpleMemoryCache :members: .. _memcachedcache: MemcachedCache -------------- .. autoclass:: aiocache.MemcachedCache :members: ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/docs/conf.py0000644000175100001730000002403714464001404015163 0ustar00runnerdocker#!/usr/bin/env python3 # -*- coding: utf-8 -*- # # aiocache documentation build configuration file, created by # sphinx-quickstart on Sat Oct 1 16:53:45 2016. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. 
# import re import os import sys from pathlib import Path sys.path.insert(0, os.path.abspath("..")) sys.path.insert(0, os.path.abspath(".")) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ "sphinx.ext.autodoc", "sphinx.ext.viewcode", ] # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # # source_suffix = ['.rst', '.md'] source_suffix = ".rst" # The encoding of source files. # # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = "index" # General information about the project. project = "aiocache" copyright = "2016, Manuel Miranda" author = "Manuel Miranda" # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # _path = Path(__file__).parent.parent / "aiocache/__init__.py" try: version = re.findall(r'__version__ = "(.+?)"', _path.read_text())[0] release = version except IndexError: raise RuntimeError("Unable to determine version.") # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # # today = '' # # Else, today_fmt is used as the format for a strftime call. 
# # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. # This patterns also effect to html_static_path and html_extra_path exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] # The reST default role (used for this markup: `text`) to use for all # documents. # # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = "sphinx" # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. # html_theme = "default" on_rtd = os.environ.get("READTHEDOCS", None) == "True" if not on_rtd: import sphinx_rtd_theme html_theme = "sphinx_rtd_theme" html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. # " v documentation" by default. # # html_title = 'aiocache v0.0.1' # A shorter title for the navigation bar. 
Default is the same as html_title. # # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # # html_logo = None # The name of an image file (relative to this directory) to use as a favicon of # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ["_static"] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # # html_extra_path = [] # If not None, a 'Last updated on:' timestamp is inserted at every page # bottom, using the given strftime format. # The empty string is equivalent to '%b %d, %Y'. # # html_last_updated_fmt = None # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # # html_additional_pages = {} # If false, no module index is generated. # # html_domain_indices = True # If false, no index is generated. # # html_use_index = True # If true, the index is split into individual pages for each letter. # # html_split_index = False # If true, links to the reST sources are added to the pages. # # html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 
# # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Language to be used for generating the HTML full-text search index. # Sphinx supports the following languages: # 'da', 'de', 'en', 'es', 'fi', 'fr', 'h', 'it', 'ja' # 'nl', 'no', 'pt', 'ro', 'r', 'sv', 'tr', 'zh' # # html_search_language = 'en' # A dictionary with options for the search language support, empty by default. # 'ja' uses this config value. # 'zh' user can custom change `jieba` dictionary path. # # html_search_options = {'type': 'default'} # The name of a javascript file (relative to the configuration directory) that # implements a search results scorer. If empty, the default will be used. # # html_search_scorer = 'scorer.js' # Output file base name for HTML help builder. htmlhelp_basename = "aiocachedoc" # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # # 'preamble': '', # Latex figure (float) alignment # # 'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, "aiocache.tex", "aiocache Documentation", "Manuel Miranda", "manual"), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. 
# # latex_use_parts = False # If true, show page references after internal links. # # latex_show_pagerefs = False # If true, show URL addresses after external links. # # latex_show_urls = False # Documents to append as an appendix to all manuals. # # latex_appendices = [] # If false, will not define \strong, \code, \titleref, \crossref ... but only # \sphinxstrong, ..., \sphinxtitleref, ... To help avoid clash with user added # packages. # # latex_keep_old_macro_names = True # If false, no module index is generated. # # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [(master_doc, "aiocache", "aiocache Documentation", [author], 1)] # If true, show URL addresses after external links. # # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ( master_doc, "aiocache", "aiocache Documentation", author, "aiocache", "One line description of project.", "Miscellaneous" ), ] # Documents to append as an appendix to all manuals. # # texinfo_appendices = [] # If false, no module index is generated. # # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # # texinfo_no_detailmenu = False

aiocache-0.12.2/docs/configuration.rst:

.. 
_configuration: Configuration ============= The caches module allows to setup cache configurations and then use them either using an alias or retrieving the config explicitly. To set the config, call ``caches.set_config``: .. automethod:: aiocache.caches.set_config To retrieve a copy of the current config, you can use ``caches.get_config`` or ``caches.get_alias_config`` for an alias config. Next snippet shows an example usage: .. literalinclude:: ../examples/cached_alias_config.py :language: python :linenos: :emphasize-lines: 6-26 When you do ``caches.get('alias_name')``, the cache instance is built lazily the first time. Next accesses will return the **same** instance. If instead of reusing the same instance, you need a new one every time, use ``caches.create('alias_name')``. One of the advantages of ``caches.create`` is that it accepts extra args that then are passed to the cache constructor. This way you can override args like namespace, endpoint, etc. .. automethod:: aiocache.caches.add .. automethod:: aiocache.caches.get .. automethod:: aiocache.caches.create ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/docs/decorators.rst0000644000175100001730000000106714464001404016561 0ustar00runnerdocker.. _decorators: Decorators ========== aiocache comes with a couple of decorators for caching results from asynchronous functions. Do not use the decorator in synchronous functions, it may lead to unexpected behavior. .. _cached: cached ------ .. automodule:: aiocache :members: cached .. literalinclude:: ../examples/cached_decorator.py :language: python :linenos: .. _multi_cached: multi_cached ------------ .. automodule:: aiocache :members: multi_cached .. 
literalinclude:: ../examples/multicached_decorator.py :language: python :linenos: ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9814782 aiocache-0.12.2/docs/images/0000755000175100001730000000000014464001420015121 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/docs/images/architecture.png0000644000175100001730000011717314464001404020325 0ustar00runnerdockerPNG  IHDRe|qAgAMA a cHRMz&u0`:pQ<bKGDIDATxw|UB%.>sBZ Cy`MB%KPZb &<|0cm&B!Qša(28t B!$l:#2%aY !B5/xFxMU@!b,g S(E B!C6E(g0)E:!B àjFL"Bh!B T B!=7u9j&BcYvīB!D/ P@!b\& Q"B!2aRn¦j"BP ztj"BP>+3UE!M@q蚧8z{! z(:??-|m93 A77okxUB!t|TS*r1M+t,X#{ì >lu3T=|ۿB;E!E!B`zB! ziRy/+fL~@5:=gy?,zDMB zHg&\z.n[࿾E!QM@~]:?`XMu‚r9?k]{]=!Kb9b0;kݶX OIPu+A.]Kxq^#zjT3k` qvnt560EJy;? !LX /̙>N: ڻm1vlcۯ㫱!CQv!BH&,ynV/@̓IdNVZ@K7Ҿw஥B 8C=z'&(_O f;!`_ðz; Q[ a7o =83VmLˑqq'̺`Ũ;^ǠzBKfjgOpu'|(_RҲ1daB!) #1WrLi,ߴBbV^W-'UyzOjrL@Je<oT+31BȽHٻ{$pRJtI0wxl_/{qMvH͙>s^l]g'\>rv ̺( 3Ɨc>z4"?S}4S .B!jJBC | wͰz51ؾ^L^q^=q_'}tizJn^څ0cvOy%!B: XMes`d[G6 Zc oKܼ2w. Ap>XӇW!P[?k}_k& 3̺8ӻɞiR=3̝1^w)!Bh!j,z$!^ΰz}/W!{WA}fLpZ2`˦L &X@mQN^!`)Kg4#MPz¬ 6j?źᵾ/clZX3RCA4B!Cp_0^L@M̞%*rQ,~^++(q^xPO?z_Pgڔϱg뢇gqZ{ך"yH_*&C 򥿣*/&B!t|b9*s熴X712Y7B~'RZΓ)F^VP{h~Un0 zv;uuh!BhO}cVЭØ1틎1 &B!&i*- KCyA;tp)M!BMed B h!BME@!Bh(B!B@B!&B!P4B!& B!4M!B h!Bh(&B!&h!Bh(&B!&h!Bh(B!B@B!&B!<>òӱǙXh/?ͅ멍 h1n[ Q ih!Bh:_!^2>p0>p0̺@GAap9~p_ʳa~hh!Bh:8Y7exia|u3A~g7cuP,GMA0 M!BM@G4 sZGիB1")mL|0di6^2:? 
fODLXWA!5 ;i!BM2JW}Qu8|=q_E)!H?fNY7d%#3ϷcwsgPEB}"/ӏk}_nWM!B xD ~}:P'3}!j QS۰z}?򥿣:?(Aa0:<{"n綢Kgq/(Gq7bg@!BhVO@Byz"/qn4 j 1g&Mž1m?v}͘&:?͐}Wvn+M!B (5Ĵ)n"9yiz`Y{- - 6 HuCf5X;QemB!00Nۿ]:?P¹k!@ti)MZ54GfHs𼼛&B!< ]<=JW!~a;jBZ@lݷG.YlXj/gexkUL"B!4Y19M~s]J jN;סJ,o@!Bh(B!B@B!&B!P4B!& B!4M!B h!BME@!B@Q4B!4E@!B@Q4B!4E@!B@B!&B!P4B!& B!4M!B h!BME@!B ^9tp9|n{:w8?EmQN~-kmur,B!cW@ckE!B`zYG!@MANZߗur,B!)!Y7dL!ػmqCxPH@!B q^#6jo VO4!;exrvSc=4|aT!0M+ǹrTaztߣL s  ӇWK}~'M\xP3k`ѻt.Rz#!Tӿȏߣ B!m5 0E!@޼vH&BzDk}_Ɛo:?H2_r#0!?ճ1si fqtO0iLo B!e4jÇ|}Yz?mؾ^R,z T#?n綢27I{7aǵ< .ì bgQ@X/gvY7i &=@&@}?wO|P Ƚ :4nit̝@!B  #1EXsYhh Uu}X6 fLPjPS/>O?zn{bqn(ŠgB˻ n&Xd&}m4&%rB;A?!vB~4OJoṙ"B! 8Fz0~V])F^3kP' _?ON |J:G$Z)Gv`OХOD1MAapsiҁ"zݢ%3m}o*4u(B!5Q~'R<y^On*~MA0J~ٯ @^'JU>m:Wy)ZǪug߀&B!&22B!P4B!& B!4M!B h!BME@!Bh(B!B@B!&B!P4B!4E@!B@Q4B!4E@!B@Q4B!4M!B h!BME@!Bh(B!B@B!297{[1E3G>ven@KP,GmQ:?(‰ 8w9M!B1+? !t~88B!pe>ny?,zDm̺8k#x]݃̄kYd%B!0 !@q齊L F~j uܪ@XjO͝1W{}dp|f3P,Ǵ)KB!1!^O/؀!@MA020g8)xޱ^2 `ag.ף:?}!̺ *$P,0fn&8w9d?LA 0u(=fL匊[1O8k)6Φ[f5!v^zO=Ə.~X@B!b<&]BE?2ӏQwA!.+DmQ?3Fj\<Bv۾^TOx\): W-$)69B}:8=|7\]b^l.W-ڢe nn'R,zDtiB!b<&̺ & ϣKgs]-0fܔf~ḡkS=t MѴ)clt{їP@.BHyQ,Ǒaѻ'S==gvy >2@zߜ4&B!b<&FxBtqnS3k`ѻ'}0fTOB7zHCQGj zB2dL{xΆmY{5E!1.&B!&@W&uxBt ?z[I& 8jBg"YZ=MfƢ;QQS U}=gòSz4=B!ĨL@Áz¬ 6j?źᵾ/clʳa{.BmQJU>`l_/Rl[ڢly4HX}:z&( VZ`lԾP߹oZaW5{Άerv֘~B!bP& YbO<Qp9^A!Mn!qE!l?B鳻 WĢwOiO?z%Y-فRc(O roB!7mY7= >+*aԾ@>G(#-֭QP5KDa6fCs[ M!B1::k)8w9n]{1>4UQ.mrpA4B!&@+1h`?!zٜn4r;UZ|m=[B!PF&B!&B!P4B!& B!4M!B h!BME@!Bh(B!B@B!&B!&h!Bh(&B!&h!Bh(&B!& B!4M!B h!BM@tdr,MCV{ eӐt},>ug\9ϓ0(R9#\ g4B!&6s!peZz¬ r~,?v8֯jB0~pܵw=g"Xj&U7uuM!B hM#?~A@!Gپ^l/@"2U VZ w¢wO$߀իrFu~:>{ E =ڢd%#5 #\#?~ [6+~I@!Bhbn[ B^%̝1+~``| J|f5XkdaPœa.؀ hd%z@?θB!'e(CO fL :@iL@C!! 9 /T@tӦ|!߀̺H9(c|\žB!BЮz4o5O˳ڢB;\ӏŗc>}jB~`\2<!a&"7cuM!B h سucD 5~ǥ<@JJk _g 9{}fL95;;вa7oJe/Ey&B!<.y]f(z27P%]ZogE!ZcZ:OBFzi03M!B б*reVOA{ttOx˭ |?m D@!Bhi_iP^ߏݛJ=,gF& CZvjqh!BME@!Bh(B!B@B!&B!&h!Bh(&B!&h!Bh(&B!& B h!BME@!Bh(B!B@B!&B!P4B!& B!4M!B MinϷc1i'>ݦ&v,bB B %AFV|@/g4B!,a 0f2o}P W-h~3KxuB!tTP,aЪmS=ۼoy?,z M6!B: Ā&@ īB! 
Xit!B: zA+MΕ.B!lCNa!B: X 9Ųi(Ur:3vo^ڢ$߀)p hl"B&4gh3}F~Kk hRMB!`J#Pu D!g&$lXY!0M+Ąi~in`!|M@It^MB!zeB9OĩC?Ȉro8q`U3UyB! a#B1F# ?`j!BWP&byZ @B!}6E lVH>W !B!8B!Fh!Bԥq=)@B!n s ( `B!c2a}A1c^?B!lJ¦841(4 %aY !BH(54Ay`rB!Ҽ! zMB%KPZb &<|0cm&B!.B_%B!kB!btB ! B!Y-(B!bЈB!0kE5[$86B!נ?W/R*{!B18!*__~}?"B!B ' B!&B!B@!B B!B!Bh!B!4B!B!BM!B!&B!B@!B B!B!Bh!B B!B!Bh!B!4B!B!BM!B!&B!HN?8Ⱥ狰pZ[3`a2ؽP=0ٳ5/ Y]?<Ⱥ/ !By =||I]no^J*UeFUUBrl7+@֥4 B! ?8u{:`߯Wꌂouש30B!b~>Wc@5C o^5h!B^NB]**{d߯ ^9yO]B!#d]`B0O fY7 B!O `W`QJ^aPUJgB!~<}>0#Sp z֍-c-&B!R [3ӛ~4]_X !BH_֥.֊!ZեɺB!eor2:׿WSysTeWj_]^[e76זQ[ekK=wז܁kK=޿A]fԔʃWW0MkW!BSC*8AR(qEI@JRQ"/G:W!(q-ǧTW6zzyB!K9^zfg(bW[zk?JQXG6 (-WP35 ^/ߙ2D!w_C*C''זyl-Zٵ%7k<6>(?9YO%}AńB!6VB5Gw_2(wД"ݨᆰr(=ze+%BӳyzއҠ8Ҡ8JтuY^}B!O<_`=yꏪSQz JS:o~LA JoDթed/ Lx5B!<3eߣ uT֖؄Ҡm!-fz4_?ëB!DGdb"Y(ƀ^ڃgW31BB!n%_ =g~BI`Ǒ.0@XpQB!6PׇY~jj6s7JB? 3_[깻f?iV!.BF!bߕmPu%JOeJz6L+po^B!8=5-jBյWc}2 !b_+ac[mٍ(3=ŵe76c6_y BN+%Y~%A %> ,x p̻!B 9KzʾPZzP2%!  )xpe~+,.4F!'}oSm z \[F 7C7ޕ!53R7TB B!1[3L(>jacjRlx"Bz0ھʰ~ +@!G0?<Sp-F~~.F!6+(@z)D}u!&BHk?U% zJ}.0t#sAߍ@_Ļ!B$ \)xw#P~+rBX]ҿck49k!4 `C 4jj~%SaB!Opc%jAnހ { _ ?nL!<^ˆ\l?0Xjj@2ƄBXԖrF JU[j3 8XL!$<(l 2.ko`ׇwwB!Y1%ާ$RSj"(L!46NkpP[ 2|g&8q!1B!7`Tc; )cQ5Vc&x'BpmA328f\@řo7y'^8v5'=sP{}?-fHںy)IySvnkǺpɣ sgǽK ۵Ipb4䈪@l]gU~Z|ySpmq*rX6 JWTaOsqFc7u=]y'bOUpxݨ2q^!!&~.%YZٻBXwTi}|Bu3}?yr $?v8֯A'- fL`qn(UU iřވΓQ)UwOvZߗaaqV-ڢ#^4,M*88!FD$@n&`֕_ @rΠץ`:D{At]z?+fLrLXiS>2XfLQ=lyKe0EzjB / ҂, B1:J`c@@&JB4& = B`\x Bnji_H9 lXjCs $%+]%` Y`1(a{)6O j„Bc jj JB#\b92`ѻ'Z s Bx_ qlV9i9CxM Ф%Y7|zB8YV~o ??g "btNϳLzzҁ3Π@w6Ϟ%P,ǁfM@1ڢCA-yV/h2tu3AL凚iS>Ǵ)ol!4UF(X&Ն& <?v8j đ0܉܋`97[7/˴W- b`ڔϱg"fk(vL@}P폁oZI= FJW!wb^+ՆC!}591h<|aB쑵 ;6rZ: B߱97?̜!=5BfLpMwbvqL@AMի82DD1i܂K}p͉S b`Vmu4ku33LBqwcAa0jBgFǩ?gCPcL勁oZ5z 2(cҴ͍-prEЌMo,/B Ba7 :VnB}q/ա ? bn'sBzŨc1xpZGnk eHj B VEe0LEU`(bѰ.\&!DG(3ãU T=e(3yu4O.d] j0 )]$ss*.lA^! w6:Pud+Q3_!j^Od\SiU:z:YꑥnUpxSm.s'!73#a&@ cK@az rVǮ-bAM͖dTYi ^)]).;n1^GFILzIJ}S9l{1#bkd#6Acz ]L= !wR""-\s7+! q~:ɫ'p#Y;qFڮt4ŪuT~ltuwcqb|IJ_.L. 
Iw;-/nqVz( ń1Թ ^Ba!^CWdFaxǙv% a)0eSN"Zw!нGwlܻdBdڢ5;Ve5;VcPdDz=}Nll]9e>K)n?.;;FTˏ`@I SiVI;ɹY3 c \?}Am;1# {A`ϋ[}5XaCLy/s\Hp"ȴ7Iy/s'Ir9!_9֮EG>t{7`PdFzļؽƶJ/>79)kc\f:*+^@-JJ⮺EЌ1N.1g21v/0# xL2K}zK.\Dz1j^EJy @}Nf B`줱Y?~'շ#oaĨ,rn'[㒏+UJv-ԹVnY!gنeXz1͞a,qQ- 8zDgϋȏ0bXOƔ'ft)ӧ` ۿoߜtid/}kUBNQXv{(Qn^z @ӦnNU/}5&@2=Bi,֔ V)..B~ nW0015Wmu=!7niRԵw (۵V+T'awzd4Vm] L2-ǍFqI?B.EJ!FYā|Z*o}PnDZef9 N !c!"3 5g~φ$͎u=* eIuxUE4i seN%#B=VwRv͈$UKT5NTRxUCwI5#T譻Y bMĈQ#Ȉ"# cvRw8hr*%BCdWTgD5J79fJO|-=B c nBZ)M1_}=~9Z[ކeV)K+K5Q-/},*JR Yme0V8Yzn$L++7籙JTW$K'>}a5 # 'Q'ǦU͑G3%ݹFß8mVѲ ˤAOI3  NvyLrnʦ#${X;kRvuJ]Nʮ]v'm XL@dZ7菡Ȍ`oR <5 a8pv?|}PҬ=mjHkl.Hss7B7 ,iTvX}/AkPd߻E&VQe={\9'l 6ܸ\#'e+lRVKzp7ZFA,cOU @Dˆza=p傳gpA ~nW^N;iDGbBb&`نeJya!<5 픎7&Mn{cˁ-Pf)zp B# ;CGPb]zҌ,}ŧc>2KP\y2Q`Xu^eKb9BiUpxUwVŪu8HPAB~\J~*8 0{'gW*ccT&`o7}/i ?;DJPXZYJmLLMp#[j0i!͚*Cܥ+R]:H̱ӨW/J-Ҭq ~]@tⴉZ&`-ߖʻVhe n 'Ǒ d\VlgbuT~GiATRYA|~bX/'swa/z_]3,AUb0lR%kuH?a-n,oߚEBZq#MٯȈAV4F)l͌#܀jee]"kPgM{ez>PS>XX<c53_0k< 6HL@zrYRvHBZ]g :T(sTb 86(I]3/8 fMEXZj9F(ַ@zj*̿ m6V;P.JPT0襚=N.ޢ ht:0 WLWVԓXe<9(ށx&Q /|o߃P4Oe@N殎`a3Q&&@Jxu Q\c^1zF=h(v`۷c`ֿ+%' 5E z)I5EHg1m_&& h y5l!YگDea )/Et'r36oP44MA,|N7AFB+\0& h sK:q;xjЅe& x&̶s1G6sda.#ZDTz$ρzj6{B<!U?+ݹinƮc;݆=ZMS](َtAѝlSOJv mrOtÞ НNzcRقbTcK` dN~_h&`5BCC . St|2c-meK015A@C˲N!0z$v8oi!Ȍjw{{Ybꗽ<ՙti@v3X44OP2wwN;G]R שs6^eKVͨl1)jj|l|Ry:BkD">ypnw4ziF (1Pzv-2!Z"HחV/&RHɜZSjO[M4?<+?9ˉB9{ JH)d`雀 x<ں!,1ѓKuX!pF2 ?Cį&{#=ߑ& z5tDTz$=߸h|\DzZ=L6 ~\+~FK>\i {ncR:QsGGb)ظw?/² ˰8_}=^r6Kl֨}x{ JlѸu+#ZlYB/?iRY[_~7D gE dX,;x{wy/s,[h J ƽy;v8o+܃q _&L~2LLMunW$dD'i5\GFǨ ȟX J aTU{m 1 xr&`g5]fTbNp/ơjj~JӉ[, ='p]MͰHPf tn`BHeИM2 `'t8D&K& ^OGÏ+7?a)0eqQ鑈V)7܃1qDXy/s_6Mp@e_L/&2Ke;cϴb̨'G>XaYi>G EFb}1p@5ִfj!p8UJL6#F"# (۵V2g]Mz3LL.T,D Ӂo(k,y+[q2.^-%b5ԹxFTz$\W/<0ǓlJĨ ; ⎺)Q]]]re̓a~7o z1x{xXE[䘀f옴rb3/BXb>~9rB 2Z<M@;6tIA͚>}z/R`#h*20tPWܴbZs',k}nbj6ZKa)2MAg[ fH=\+ǠRf;-0v=ĝ&L@\vXlRZT3M艨(y';> $&0L΃Bs)BwT2s#c_bm JOD&I+>Ov]}/A{6ek8C@8&}K(3xׁ =*@VO G]ME`$Xzd E!'/;ФKEJv1#RZAb@@P4<ĪYKw5CPQ427E*ӫ0+P4M@]g㔔绋uȸV 37Eكp$AB3U7H8/sñ+ME@@'WÔWsx:ujKʍ@QAO;Z]$fi?5UyG? 
hh(6Y0%ci9ɋ@IA:1.BiAҐd}6TsY&& h pzq4J9e?eݡګXu]Azn$"P҂0T81"ġ(.rґb$q.J/;1NP44M@M?қՋUٚHV!59Ʌ*W,e"7/yKGn^:r2,rEzN>R.cO=ހc3ME@@XA RnŖV)PU,;JX& hLl@TORQl6 h F)FPP:_Xu)C)R[`ۗ& h;C+jx,bt ʆ2-@P44M@;MJEl(ѝݙ && hyQѬ/J`#Grﻍ}6/P44M@(ż쿓`zR2w[yi(& =(VB>epJR'hi(&&`#Qzj< TD6pJUzsP44MS[0,)w T,6p!0h(0'܎cJG(e;B!0ۉw]:yDi!Np k{TSJQt?L}?Cht*S0s"\^a<6%|b+2i،JfT"AU{96:ؚl=jWnF|VuzCvK1a9ɜ0Oْ|i[ueomyb+`d dKVK's¶}VU 69.=/2 ͲC"5;Vr0:wO~abj!\/},<}/އ&Q m k eJ^ )WiVB2M"^8w-Rt55K>HPHPBIZXv':3fTbgdi%mՆG +Vxot55cqU=F&` 8x '@{Qu&&[jۮcFG>t7+ h>45Mhr=VJ@) )Wz~1OkH)ŋ,p=/YZa['M+FqR6+~ZO>dǦ%NwtZf 뗽KP`my'L~Gѣ>kS0~T~X8iV)&J-^XzfӽGw\$:wOycˁ-Ro[!Щs'ܲB2%6,Ջ1m4 4c Ϸ,,wpF&i?/G#?ˆQ#`=S %6om9:Dƴ QBv)aHfgjxSM0cU?,y+b+͌ۤxk/q?VzL[iԹ - qU8 F\f"ȱҾ;qGE2S4 w HWS3l{YՍ^< 8yUz?qW٤!VHR&` q଴IWoL<]MͰna|7BISg!N!xbʷsr/~]稫q%k9.^iS015[DG}x.a!0qUJ|x!0a.xƐCcގ!LLM`ǯoo@qѽGwvKl=z$ݏ5;VKA-!\q (1Pqlٿ{N.`O!:!QG1ht?A"e {Ƭ4M?Y`ǁXXV~1-Lz<q?/0džGRفOFYcޓx{ˬxkлIw_^au.f:w-Bp#0 XC\fdn$!2wx nX’aʮ)OlF%݀w=|0̘3]2K۸v㷧Joi]vJ` ۳gRqn& t>t2D+z *{}b=a*;, - )xŪ_`7 SgE&H& U ԬQ5inGG#?~F~ ]iF2GPi!Rń1Xީ| > N ±KGDz>.]n޽Gw؋^dy}}0zhZcBۿ/z-}'7+m*MlkF(· X)ݯp!e4, 탂m y@o6JҤ44\My39֥S.g*B|?[,ĦW`Iz㗵ҁ4!$.>)|DI2 K4*'/%K+r3I7x&M ?V`jjkJĦW &ٲkQASkfL'%K+KcCPXw8Kdg~QMZ; شo#LLMp>>dR0b 2K8x b}abjB3&,,1c&KGADzN#4Y7O| eE`=cB!C74Q`Xu^eK6&Q [6 >ˠ\(97l[・5 xtU S1c#hf(fX~7⳪yo~Kފ;Bsv|F#0/iwlwJ?,1Cs7~]MpG zBx%7;09lx>)pVHӤeVqzt55[PfLɅ2tVw5u g=H1jwCRbⴉMMփ[uM: ̥B[n?"3 g=h~ },$G>}E3h- xl;yp9VJR388T:hS!Da9J*soeI! 
[Z+pMPHA}0l(eVasR09xFXb>:BͿGL$!pUh;'ե'\k^<yZu>wC-^ꇵ[6;N6[2b/ iVgˤgUk+};nGDx֣QlN;&@#7nz 2-Bpx+j4~DZ8<=j4z#"# q~1hq^K ^k4*86P3hto*RF*uL"נp@ xuN+2)@+@ EV}[@hBAT,@hEGH\6eZx#DkUe]c0Q\,L?X!_4ҽRF+~PBx!3`iTGċ,('HMEӔ Z?<@ l\)SˣZ $s!D[Ulz6:ޮ[i'nh(ϵzzbM6IU4&`s !: !_ӂ !FP4M,v/݆)l(SZ@ Ә_Ş&28ER^n t2Tz$VnY!hg!s̴9ٿ~fcPf)p f\66Kly;"R"<5 3OF]b+in{]:+R3&f,UF%0ONnL@_uCvK1a9ɜ0O%k- 96ǰz~gUky ,_S˷"/@fkCj >f| I:w=̶Q9KC|;G*?Uo$V&!׿)&2EVQ)0,`Bk;y^?v&e;[lB^^SտȌڝuSYZYb!Z y/svRnHS *yS7\Pn015U?+XҚxpt ;@Wὡ#ȻEx] lvoʣ-e)_e?l^{ߓY.MنLc e-t6ź By+7{yֵؽg&@3& tBB !B;c\/NVCkmMy2ֽЬĕ9I¼9NfoUmԂ ?Sf)0tPt nWttcب5X9~ixE+65wz_ִB`OM.u)^ve4S^U6 Ko\glZ2B\ h(}5'00 ()wWif΄"708n*n(_ܢ МӾ`z-+(&`c"2{OZxQnLZpqÏ+7=VדTJѲ5;0yleV5ia؈QZ 7^hmcp8pez g MM@~TR(Ac&|b 7VB-Z۸_AΝv'܎qtT 5w&!7{#^hA{ѩs'إB#mBhq]|c!:>efʧjp[FZ&-@Iix-{OOF;mc#&-\!(ߦӁ2w?öMEP xP ,]D2Oћ3J}ӾnI5j+=n4>1QJ y/ssnpB|1a47M \_Gk> siLۃG|V4>[[!IZ^aX4@~߂G@m ( ⫯T:BPrL@BfS:ac"5#E|1aӖ|z7+<=1rg؂Agg`K=-x}z7zj17:w[Pf_ou𒥕OlF%|#R/G.>eV!. ' W}“ '%did8&%F?x?Y4Mفp0فAΝL@knS?ϾaPr ^w9 8~嘴gjQ:b(LLMh[]}ח&0x#pF>Z+xgUHs53x4_a& AU[!IBސ] /YZ5 ;E>MK9tv4Mա 8`7b8v5u|c| G#?ͰM=0Fz"_?^K-N& 2-}Ō3׼?u7K E>XaYN^δEP j̓9A'=OۃG]pUD)oD*ljU:PZhopBsq݃m N@b8v5F_o'sş P4W ~=Jr Өow*%7+I~x()?~JѿuQy~7,,Soڷ (L62~0]wծ.xy"Xk7oumZ O*ohmRdi5;PÔ*i M΃YM߸ +hvdQz_źCP4RҊ]S= u1FMNJͨ"C4,1>)Ui{Zcߤ*ܽ€S@P4w\@?<ҽ H`Fr_x6VP7&@P4OUS jAF2:ݯ^9i`3VP7;3|OP4O'T 5kū*ذQF*u76P4M@%.͛p9{P̆2%eʛM1&@]kabj!YaӾZ+qWQ鑏ݹi:·nw t2K}h:\okJ >ˠҵTl(Y-87l3ڗ Nد"Ovn}،J%_dN'spmi{N`KtWzJ'nY  !&N1zh!0a(>ƽtFGbvi S'˸?=,yvkvOk h&`akR j5Mi] ]kuG:ǕuZkJ!Щs7t:B*ڭjlU;~O?B,۰L}LLMu޲3HF)}1aL#)o{WlQhQk h&MNZx[g2e۴ p#MF7o`BE8H7;X[e[LÕe|D b3*a=a*U'Ӂiⴉ0h@x.~~!BK)>%\/"Z_KGCJM= (%j-xb)qpfM[ۉ8 "RI&nVyncR}} ~V8rH*Sq^ea-V)%6RN[Lf04#F'?Ƥo'bܿזr<5z7lJ׽àҽbRFM46@oci[:# B8hٺ1yleVa qIWoL뀸*(S2)Yit6ͿGN^EWS3!0qWU a$l} ]M N^]GgLf?v:Tj,YB6>6?BRްgeVᨋ'|;+7"oRBaieSv2iS8u$8raJoL~]Tz$7q2/->qUJ|x! s!Cʦƾ{abj"V̶!>sl? nW!7(1},`Ѹu+(.nX3mg|2c,\hNedjMfe–[>i_Nns9hXo@禞VŨ#R\2Aqt xryJۭMYY ف{Y݃ K_  D8!m? 
{yԹ "Ӥ8w-P2 W0c#!<@b+L-oD*> >=qUZ&,\N ;~YkmwnafTne4h[M@XJ> :w¡ \hmҁSNQzk (1_L3ꡘ8mz+#~r{(2iF7((b|1a BDzVHErAJ;9ՙ mP.vJjk9hhppv)0H ^wO5VB1+aV k=*Lc{Y/YJWXrt S1atrVnE =$!p3(Az)fۆjj>Fmx N\>{tqSxHw5501( <dMhB\qo#=#-ѬYa]g6^*zt܆AuK}}ro@i&+ԈQ# Oc8#:T= w5 BX0Cz xrñk+9ݞA+) )Wz~}{j1>0ehv&R۾šmCRW,DUZsp klmJG;xӚUGj z9+~ _Q ,1iq+ AjhW1ixŪPߌZcZz 4Uvy|nG]<5kL,x{B ]gJs=BlسIA`}͎ޣ;<=5&`͎ݧ7| dNv 𛰴Čkeo6莳g 7W8B+~qBlٿQp<!Vm]*l2i?S?=5iwCз_!p8.]E ls9ht0P y<@N YX7pզQ7"_?Bi0e?m[!daQֈWj8,]/ Н1QJ觍RNlzn+,]& h1x{xXEIdZ215S{%nWL^Ԗr<¿] WRS,R&?{jXQ~GV,q-L5_4 r)FْՈM̅FnHh 0ylT||:ziѻ%)ZIDATy]O21 8?&6ys(h4H 2i ux+\ 4Y8Fez,['adxeϠz )x^ߌ=hbǪlrˆ2Wd4[_Z^`t|SHyr35Uj?ۡϲe)Ij(M@&`3S"X_(ݧ1` 0<}v*t55C]eI*?ME{f}tF28w^& c#@7 .;,u}JP:`t'$@,iE@LA?SE3SRQԂE f#MEP4zGT+FPP `cGՕ2amE@mo~W~ zCq0E5Vr^L}? `h(=6 i܊-R>`ݡtX6vAaš:p74ME`FUM㖐xuҵTl(>VV& h B𼦁))BQzjy4ME`0T{(a/@ P 4ME``)A KUbt8$Qz4#PMU^/4ME`F@7Dq SJJP4MR5zbQG (gGux%+ӫ0)Aց& h 7I$\pc1U)cTZAz8& hbpoMzu B6T՝f@@P4F0*8^ٛD~D6TU}/~M| [& 0J`cyv7gL6T Is+)[& 0ƴ i퀸)JRg0pvI hR3&f̟u6٪+R鸷#oa To͞;8q W:; ?i҂JĨ \)ݦ%p P]]reoi@Nb+@0ܸ<2Z0ylȖn~4OP}/A=#8)H볥k@EFTkbjx4_c5 @`,FؽBO'ӂRO]w~ ȱ+4-v?p>w?ClF%M@GlA!_9&BF@ٶ LLM,*Ksxlܻlw 0قrN3pt?[P.QnniD^bvy=}tTE2Sg~~Ν0vغ(fIcq+*%wsabjOAWS!`^ xy/s!`bjs7"Z!C6k*^ !^:z}gT-7KԵ5{O%(`؂h=n4;h1jndo -"&{3VSX4]&-&`[۵'i: ޣ;\d5B+vZ&&8u$.]'? ¦}M6tT)CU[W3&:wB@݈Q#pi|4#!0c p;!C`舡PdF r` f LLMج 8xX}5}|0Qۿ/v߅GwJErAJ;9I6ۨ5@h,~m,[B`@Tz$.z]@dZ6^L@Jmj޲;?er8_*Cs& c0a#"з_,۰ g=H)L?9@!{,X0@o4)EW0pto102"f]&&&Qn͚ ut q~ز@8#H3^aʍAҹ'[cJGDX{tN;aɚ%H ot{7H ך}}0zhd!cm 4 y/s| āaieg5f&]M{%WJR]i+3v}xw hv?S4,En4}9Pv/&!(1}/!ZĭOl;UJj4zⴉx{AJ `M@TFQ'6,CJ3%d9&};Qz] LiϤ< \)U^^n= *0MMM@脥BrbVЭ16,CTz$\<{a6 W:JӒp;.BqM&z5|/&q+C wvnmڏ&h:y\À-!' zx% MKsZ%4Wyh(&ؘ6llr|wRSRߌ@.6 @fzUSޭuj XרGPME` a~ƹܞ+Xt'sE[ bul@攒{Q*|cU5oT{ h rQVpyvJQ:IiR\QfADUDR̂F1\b-K4MQXuX%P(zuit4 *.P\jT*` evME` rϲn೺?FՈsWeލcO@V'TU ME`܆+զ|\i+7N`e Q)wWXm8wy5嬇T;T~6H@loӟ`&N| I,:4RzPWL2"f??.zT5άT;,U"&`_`۷-!o?!ld 9'zep OxUPIW hLL M(t>.;2upR0RoSawcddPeb>x&Q uj! 
(*4P4FmWKRr7ǨŬ1W'zoև࿑Mۜ.f{iW}ŻjN.JPTSW84MViKIRV^w_N_Rv>ծ!ɹYUl;Sܬ%ջZU*akY:bZP(gqPͨf+ ME`|DOyA[^acJ1߿217`^a7Ta\DuM%俦29W}¹6FsKbv۔tg|r0@@i*!{mA&QG峞?݅iHz?dNTW~U݅^UB2`=;6 a QK'4-i O7wVŪ#XǓ;^0UJ~$86#-~\J~rUlwK!9 TWYQW Ыui4@VgOB)4zG9l^loRpiV|+>G~-, tݜkl]ws e^E%W@`nAc{Pz0m%l|3&&My_EfX ,W҃^x-bI^qWMڔWCUTqܔM\+&l*8H {wGN>],A]ak_>@0KaÆXy߰P2 (Ka[Q Ǯ2KLL?LTOdJb3^Ru+S7$2%9W}P]ꭓzU Ǯ+5 # ^3zU͑G1Ilit?1fo# ҚEZHT)Ib{@lvD\RS?}$fK.7IqɹY}uX.6q*Nʮ~]!Q`iƉXJT?XvHR?.)fD uW 3h }C`bipGɿ*!'FZQd z%޸kQ r_nN$uua.S|#%?yU:We"wABK|A+M.g^|nrRƸluTO-1dOO>SrS6&eLRWfw_]t3%7ecvcWŻ!FFJ&@ UeHA ?-g)reyǨ2;h材?]Nف|w\~QcNvw2wsg>z,λ!FH:z &̘?!V)a,],|c|0mT4War@wcqb`XudjZYx|ey:256;<$!'51owrލgTz{YžSܝw/1QaG P)D*ТU2Ptâʭ+W5I10̜s=qeEIHUWmta[Si]شr`, šzaa<1Q}y_wIwe<)q?SS§ݓgR '&OM.H~=4zqxn.YgN/|ڝHx.E>Yj~Iqhfrw|[1k*5qb*;5yz2yk06{{(foM&o NnJNM&NJYGg.wPU3ǒͫR 4FE"!;lmNMßggάzN>O/ 2UAf@3Xf4\r{7zu򽓺p?nxNwRo+#"/5 KaEfiR ?4\F m Xzn`;'dq>axr2))dKv56;MQKVcc4LH$_59HaKVAF x+Vb.aabQdll[Y4~+`WdQRmG, ؙIFzv2(w0 6`EFmO2 b41zy"0`=$uD Y۱Ia6ᜄt|Gq|$ O=μ"u4( љhXy Ft&S6IǛ)c?7K1k(9Lk41zę$ל:)L_MDƓbF2+k!˓G+%@97; RPdsR3`Էbxoo~ɬenl?' #~F=9Y r-d=(fǟFHʠoSQ BQ-Y{)&;+ՌbWJ2[(.IfK\*F-:m+$P`QRQ PІ T' ~hBuF o쪌N Sk;f@SHN%F@I6veZFS^Sc>$ٕ*M-yX.DVɷ3#=ĜzYY{ bufeR/2'wb"wKf- Q-o)$q9o ?)ofؕn";@w@s? 
aiocache-0.12.2/docs/images/set_operation_flow.png
[binary PNG data omitted]

aiocache-0.12.2/docs/index.rst

.. aiocache documentation master file, created by
   sphinx-quickstart on Sat Oct 1 16:53:45 2016.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to aiocache's documentation!
====================================

Installing
----------

- ``pip install aiocache``
- ``pip install aiocache[redis]``
- ``pip install aiocache[memcached]``
- ``pip install aiocache[redis,memcached]``

Usage
-----

Using a cache is as simple as

.. code-block:: python

    >>> import asyncio
    >>> from aiocache import Cache
    >>> cache = Cache()
    >>> with asyncio.Runner() as runner:
    >>>     runner.run(cache.set("key", "value"))
    True
    >>>     runner.run(cache.get("key"))
    'value'

Here we are using the :ref:`simplememorycache`, but you can use any other
backend listed in :ref:`caches`. All caches share the same minimum interface,
which consists of the following functions:

- ``add``: Only adds the key/value if the key does not exist. Otherwise raises ValueError.
- ``get``: Retrieves the value identified by the key.
- ``set``: Sets the key/value.
- ``multi_get``: Retrieves multiple key/values.
- ``multi_set``: Sets multiple key/values.
- ``exists``: Returns True if the key exists, False otherwise.
- ``increment``: Increments the value stored in the given key.
- ``delete``: Deletes the key and returns the number of deleted items.
- ``clear``: Clears the items stored.
- ``raw``: Executes the specified command using the underlying client.

You can also set up cache aliases as in Django settings:

.. literalinclude:: ../examples/cached_alias_config.py
  :language: python
  :linenos:
  :emphasize-lines: 6-26

In the `examples folder `_ you can check different use cases:

- `Sanic, Aiohttp and Tornado `_
- `Python object in Redis `_
- `Custom serializer for compressing data `_
- `TimingPlugin and HitMissRatioPlugin demos `_
- `Using marshmallow as a serializer `_
- `Using cached decorator `_.
- `Using multi_cached decorator `_.

Contents
--------

.. toctree::

  caches
  serializers
  plugins
  configuration
  decorators
  locking
  testing

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

aiocache-0.12.2/docs/locking.rst

.. _locking:

.. WARNING::
  This was added in version 0.7.0 and the API is new. This means it is open to
  breaking changes in future versions until the API is considered stable.

Locking
=======

.. WARNING::
  The implementations provided are **NOT** intended for
  consistency/synchronization purposes. If you need a locking mechanism
  focused on consistency, consider implementing your mechanism based on more
  serious tools like https://zookeeper.apache.org/.

There are a couple of locking implementations that can help you to protect
against different scenarios:

.. _redlock:

RedLock
-------

.. autoclass:: aiocache.lock.RedLock
  :members:

.. _optimisticlock:

OptimisticLock
--------------

.. autoclass:: aiocache.lock.OptimisticLock
  :members:

aiocache-0.12.2/docs/plugins.rst

.. _plugins:

Plugins
=======

Plugins can be used to enrich the behavior of the cache.
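A plugin is essentially a class exposing async ``pre_``/``post_`` hooks that the cache awaits around each command. The sketch below inlines a minimal stand-in for the base class (an assumption, so the snippet runs without aiocache installed); a real plugin would subclass ``aiocache.plugins.BasePlugin`` instead, and the cache itself would invoke the hooks:

```python
import asyncio


class BasePlugin:
    """Stand-in for aiocache.plugins.BasePlugin (hypothetical, for a
    self-contained example): hooks default to doing nothing."""

    async def pre_get(self, client, *args, **kwargs):
        pass

    async def post_get(self, client, *args, **kwargs):
        pass


class CountingPlugin(BasePlugin):
    """Counts how many times ``get`` is issued on the cache."""

    def __init__(self):
        self.get_calls = 0

    async def pre_get(self, client, *args, **kwargs):
        # Hooks must be coroutines: the cache awaits them.
        self.get_calls += 1


async def main():
    plugin = CountingPlugin()
    # A real cache awaits the hooks around each command; we simulate two gets.
    await plugin.pre_get(None, "key")
    await plugin.pre_get(None, "key")
    return plugin.get_calls


calls = asyncio.run(main())
assert calls == 2
```

Since the hooks are awaited inline, any latency they add is paid on every command, which is why the warning below about expensive operations applies.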
By default, all caches are configured without any plugin, but you can add new
ones in the constructor or after initializing the cache class::

    >>> from aiocache import Cache
    >>> from aiocache.plugins import HitMissRatioPlugin, TimingPlugin
    cache = Cache(plugins=[HitMissRatioPlugin()])
    cache.plugins += [TimingPlugin()]

You can define your custom plugin by inheriting from `BasePlugin`_ and
overriding the needed methods (the overrides NEED to be async). All commands
have ``pre_`` and ``post_`` hooks.

.. WARNING::
  Both pre and post hooks are executed by awaiting the coroutine. If you
  perform expensive operations in the hooks, you will add extra latency to the
  command being executed, making it more likely that a timeout error is
  raised. If a timeout error is raised, be aware that previous actions
  **won't be rolled back**.

A complete example of using plugins:

.. literalinclude:: ../examples/plugins.py
  :language: python
  :linenos:

.. _baseplugin:

BasePlugin
----------

.. autoclass:: aiocache.plugins.BasePlugin
  :members:
  :undoc-members:

.. _timingplugin:

TimingPlugin
------------

.. autoclass:: aiocache.plugins.TimingPlugin
  :members:
  :undoc-members:

.. _hitmissratioplugin:

HitMissRatioPlugin
------------------

.. autoclass:: aiocache.plugins.HitMissRatioPlugin
  :members:
  :undoc-members:

aiocache-0.12.2/docs/readthedocs.yml

formats:
  - none
build:
  image: latest
python:
  version: 3.7
  pip_install: true
  extra_requirements:
    - redis
    - memcached
    - msgpack

aiocache-0.12.2/docs/serializers.rst

.. _serializers:

Serializers
===========

Serializers can be attached to backends in order to serialize/deserialize
data sent to and retrieved from the backend.
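Concretely, a serializer is any object providing a ``dumps``/``loads`` pair. The sketch below shows that shape with a compressing JSON serializer; it is standalone by assumption (with aiocache you would subclass ``aiocache.serializers.BaseSerializer``, omitted here so the example runs by itself):

```python
import json
import zlib


class CompressionSerializer:
    """Hypothetical serializer: JSON-encode, then zlib-compress."""

    # Backends assume str unless ``encoding`` is None, meaning bytes in/out.
    encoding = None

    def dumps(self, value):
        # Called before sending the value to the backend.
        return zlib.compress(json.dumps(value).encode())

    def loads(self, value):
        # Called on the raw value retrieved from the backend.
        if value is None:
            return None
        return json.loads(zlib.decompress(value).decode())


serializer = CompressionSerializer()
data = {"key": "value" * 100}
blob = serializer.dumps(data)
assert serializer.loads(blob) == data
assert len(blob) < len(json.dumps(data).encode())  # repetitive data shrinks
```

Note the ``encoding = None`` class attribute: as explained at the end of this page, it tells the backend the serializer works with ``bytes`` rather than ``str``.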
This allows you to apply transformations to data in case you want it to be
saved in a specific format in your cache backend. For example, imagine you
have your ``Model`` and want to serialize it to something that Redis can
understand (Redis can't store Python objects). This is the task of a
serializer.

To use a specific serializer::

    >>> from aiocache import Cache
    >>> from aiocache.serializers import PickleSerializer
    cache = Cache(Cache.MEMORY, serializer=PickleSerializer())

Currently the following are built in:

.. _nullserializer:

NullSerializer
--------------

.. autoclass:: aiocache.serializers.NullSerializer
  :members:

.. _stringserializer:

StringSerializer
----------------

.. autoclass:: aiocache.serializers.StringSerializer
  :members:

.. _pickleserializer:

PickleSerializer
----------------

.. autoclass:: aiocache.serializers.PickleSerializer
  :members:

.. _jsonserializer:

JsonSerializer
--------------

.. autoclass:: aiocache.serializers.JsonSerializer
  :members:

.. _msgpackserializer:

MsgPackSerializer
-----------------

.. autoclass:: aiocache.serializers.MsgPackSerializer
  :members:

In case the current serializers do not cover your needs, you can always define
your custom serializer, as shown in ``examples/serializer_class.py``:

.. literalinclude:: ../examples/serializer_class.py
  :language: python
  :linenos:

You can also use marshmallow as your serializer
(``examples/marshmallow_serializer_class.py``):

.. literalinclude:: ../examples/marshmallow_serializer_class.py
  :language: python
  :linenos:

By default, cache backends assume they are working with ``str`` types. If your
custom implementation transforms data to bytes, you will need to set the class
attribute ``encoding`` to ``None``.

aiocache-0.12.2/docs/testing.rst

Testing
=======

It's really easy to cut the dependency on aiocache functionality:

.. literalinclude:: ../examples/testing.py

Note that we are passing the :ref:`basecache` as the spec for the Mock. Also,
for debugging purposes, you can run ``AIOCACHE_DISABLE=1 python myscript.py``
to disable caching.

aiocache-0.12.2/examples/alt_key_builder.py

"""alt_key_builder.py

``key_builder`` is used in two contexts within ``aiocache``, with different
meanings.

1. Custom ``key_builder`` for a cache -- Prepends a namespace to the key

2. Custom ``key_builder`` for a cache decorator -- Creates a cache key from
   the decorated callable and the callable's arguments

--------------------------------------------------------------------------

1. A custom ``key_builder`` for a cache can manipulate the name of a cache
key; for example, to meet naming requirements of the backend.
``key_builder`` can also optionally mark the key as belonging to a namespace
group. This enables commonly used key names to be disambiguated by their
``namespace`` value. It also enables bulk operations on cache keys, such as
expiring all keys in the same namespace.

``key_builder`` is expected (but not required) to prefix the passed key
argument with the namespace argument. After initializing the cache object,
the key builder can be accessed via the cache's ``build_key`` member.

Args:
    key (str): undecorated key name

    namespace (str, optional): Prefix to add to the key. Defaults to None.

Returns:
    By default, ``cache.build_key()`` returns ``f'{namespace}{sep}{key}'``,
    where some backends might include an optional separator, ``sep``.
    Some backends might strip or replace illegal characters, and encode
    the result before returning it. Typically str or bytes.
-------------------------------------------------------------------------- 2. Custom ``key_builder`` for a cache decorator automatically generates a cache key from the call signature of the decorated callable. It does not accept a ``namespace`` parameter, and it should not add a naemspace to the key that it outputs. Args: func (callable): name of the decorated callable *args: Positional arguments when ``func`` was called. **kwargs: Keyword arguments when ``func`` was called. Returns (str): By default, the output key is a concatenation of the module and name of ``func`` + the positional arguments + the sorted keyword arguments. """ import asyncio from typing import List, Dict from aiocache import Cache, cached async def demo_key_builders(): await demo_cache_key_builders() await demo_cache_key_builders(namespace="demo") await demo_decorator_key_builders() # 1. Custom ``key_builder`` for a cache # ------------------------------------- def ensure_no_spaces(key, namespace=None, replace='_'): """Prefix key with namespace; replace each space with ``replace``""" aggregate_key = f"{namespace or ''}{key}" custom_key = aggregate_key.replace(' ', replace) return custom_key def bytes_key(key, namespace=None): """Prefix key with namespace; convert output to bytes""" aggregate_key = f"{namespace or ''}{key}" custom_key = aggregate_key.encode() return custom_key def fixed_key(key, namespace=None): """Ignore input, generate a fixed key""" unchanging_key = "universal key" return unchanging_key async def demo_cache_key_builders(namespace=None): """Demonstrate usage and behavior of the custom key_builder functions""" cache_ns = "cache_namespace" async with Cache(Cache.MEMORY, key_builder=ensure_no_spaces, namespace=cache_ns) as cache: raw_key = "Key With Unwanted Spaces" return_value = 42 await cache.add(raw_key, return_value, namespace=namespace) exists = await cache.exists(raw_key, namespace=namespace) assert exists is True custom_key = cache.build_key(raw_key, namespace=namespace) 
assert ' ' not in custom_key if namespace is not None: assert custom_key.startswith(namespace) else: # Using cache.namespace instead exists = await cache.exists(raw_key, namespace=cache_ns) assert exists is True custom_key = cache.build_key(raw_key, namespace=cache_ns) assert custom_key.startswith(cache_ns) cached_value = await cache.get(raw_key, namespace=namespace) assert cached_value == return_value await cache.delete(raw_key, namespace=namespace) async with Cache(Cache.MEMORY, key_builder=bytes_key) as cache: raw_key = "string-key" return_value = 42 await cache.add(raw_key, return_value, namespace=namespace) exists = await cache.exists(raw_key, namespace=namespace) assert exists is True custom_key = cache.build_key(raw_key, namespace=namespace) assert isinstance(custom_key, bytes) cached_value = await cache.get(raw_key, namespace=namespace) assert cached_value == return_value await cache.delete(raw_key, namespace=namespace) async with Cache(Cache.MEMORY, key_builder=fixed_key) as cache: unchanging_key = "universal key" for raw_key, return_value in zip( ("key_1", "key_2", "key_3"), ("val_1", "val_2", "val_3")): await cache.set(raw_key, return_value, namespace=namespace) exists = await cache.exists(raw_key, namespace=namespace) assert exists is True custom_key = cache.build_key(raw_key, namespace=namespace) assert custom_key == unchanging_key cached_value = await cache.get(raw_key, namespace=namespace) assert cached_value == return_value # Cache key exists regardless of raw_key name for raw_key in ("key_1", "key_2", "key_3"): exists = await cache.exists(raw_key, namespace=namespace) assert exists is True cached_value = await cache.get(raw_key, namespace=namespace) assert cached_value == "val_3" # The last value that was set await cache.delete(raw_key, namespace=namespace) # Deleting one cache key deletes them all for raw_key in ("key_1", "key_2", "key_3"): exists = await cache.exists(raw_key, namespace=namespace) assert exists is False # 2. 
Custom ``key_builder`` for a cache decorator # ----------------------------------------------- def ignore_kwargs(func, *args, **kwargs): """Do not use keyword arguments in the cache key's name""" return ( (func.__module__ or "") + func.__name__ + str(args) ) def module_override(func, *args, **kwargs): """Override the module-name prefix for the cache key""" ordered_kwargs = sorted(kwargs.items()) return ( "my_module_alias" + func.__name__ + str(args) + str(ordered_kwargs) ) def hashed_args(*args, **kwargs): """Return a hashable key from a callable's parameters""" key = tuple() for arg in args: if isinstance(arg, List): key += tuple(hashed_args(_arg) for _arg in arg) elif isinstance(arg, Dict): key += tuple(sorted( (_key, hashed_args(_value)) for (_key, _value) in arg.items() )) else: key += (arg, ) key += tuple(sorted( (_key, hashed_args(_value)) for (_key, _value) in kwargs.items() )) return key def structured_key(func, *args, **kwargs): """String representation of a structured call signature""" key = tuple() key += (func.__module__ or '', ) key += (func.__qualname__ or func.__name__, ) key += hashed_args(*args, **kwargs) return str(key) async def demo_decorator_key_builders(): """Demonstrate usage and behavior of the custom key_builder functions""" await demo_ignore_kwargs_decorator() await demo_module_override_decorator() await demo_structured_key_decorator() async def demo_ignore_kwargs_decorator(): """Cache key from positional arguments in call to decorated function""" @cached(key_builder=ignore_kwargs) async def fn(a, b=2, c=3): return (a, b) (a, b) = (5, 1) demo_params = ( dict(args=(a, b), kwargs=dict(c=3), ret=(a, b)), dict(args=(a, ), kwargs=dict(b=b, c=3), ret=(a, b)), dict(args=(a, ), kwargs=dict(c=3), ret=(a, b)), # b from previous call dict(args=(a, b, 6), kwargs={}, ret=(a, b)), ) demo_keys = list() for params in demo_params: args = params["args"] kwargs = params["kwargs"] await fn(*args, **kwargs) cache = fn.cache decorator = 
cached(key_builder=ignore_kwargs) key = decorator.get_cache_key(fn, args=args, kwargs=kwargs) exists = await cache.exists(key) assert exists is True assert key.endswith(str(args)) cached_value = await cache.get(key) assert cached_value == params["ret"] demo_keys.append(key) assert demo_keys[1] == demo_keys[2] assert demo_keys[0] != demo_keys[1] assert demo_keys[0] != demo_keys[3] assert demo_keys[1] != demo_keys[3] for key in set(demo_keys): await cache.delete(key) async def demo_module_override_decorator(): """Cache key uses custom module name for decorated function""" @cached(key_builder=module_override) async def fn(a, b=2, c=3): return (a, b) (a, b) = (5, 1) args = (a, b) kwargs = dict(c=3) return_value = (a, b) await fn(*args, **kwargs) cache = fn.cache decorator = cached(key_builder=module_override) key = decorator.get_cache_key(fn, args=args, kwargs=kwargs) exists = await cache.exists(key) assert exists is True assert key.startswith("my_module_alias") cached_value = await cache.get(key) assert cached_value == return_value await cache.delete(key) async def demo_structured_key_decorator(): """Cache key expresses structure of decorated function call""" @cached(key_builder=structured_key) async def fn(a, b=2, c=3): return (a, b) (a, b) = (5, 1) args = (a, b) kwargs = dict(c=3) return_value = (a, b) fn_module = fn.__module__ or '' fn_name = fn.__qualname__ or fn.__name__ key_name = str((fn_module, fn_name) + hashed_args(*args, **kwargs)) await fn(*args, **kwargs) cache = fn.cache decorator = cached(key_builder=structured_key) key = decorator.get_cache_key(fn, args=args, kwargs=kwargs) exists = await cache.exists(key) assert exists is True assert key == key_name cached_value = await cache.get(key) assert cached_value == return_value await cache.delete(key) # --------------------------------------------------------------------------- if __name__ == "__main__": asyncio.run(demo_key_builders()) ././@PaxHeader0000000000000000000000000000002600000000000010213 
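The decorator key builders above concatenate the module, name, and arguments into a human-readable key, which can grow arbitrarily long. Backends such as memcached cap key length (250 bytes) and reject whitespace, so a digest-based variant is sometimes useful. Here is a minimal sketch of that idea; ``hashed_key_builder`` is this sketch's own name, not part of aiocache, but it has the same ``(func, *args, **kwargs)`` signature as the builders above and could be passed to ``cached(key_builder=...)`` the same way:

```python
import hashlib


def hashed_key_builder(func, *args, **kwargs):
    """Digest the full call signature into a fixed-length, backend-safe key."""
    raw = "|".join((
        func.__module__ or "",
        func.__qualname__,
        repr(args),
        repr(sorted(kwargs.items())),  # sort so kwarg order doesn't change the key
    ))
    return hashlib.sha256(raw.encode()).hexdigest()


def fn(a, b=2, c=3):
    return (a, b)


# Same call signature -> same key; different arguments -> different key.
key = hashed_key_builder(fn, 5, b=1, c=3)
```

The trade-off versus ``structured_key`` above is that the digest is opaque: keys are no longer inspectable in the backend, but they are always 64 hex characters regardless of how large the arguments are.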
xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/cached_alias_config.py0000644000175100001730000000340014464001404021020 0ustar00runnerdockerimport asyncio from aiocache import caches, Cache from aiocache.serializers import StringSerializer, PickleSerializer caches.set_config({ 'default': { 'cache': "aiocache.SimpleMemoryCache", 'serializer': { 'class': "aiocache.serializers.StringSerializer" } }, 'redis_alt': { 'cache': "aiocache.RedisCache", 'endpoint': "127.0.0.1", 'port': 6379, 'timeout': 1, 'serializer': { 'class': "aiocache.serializers.PickleSerializer" }, 'plugins': [ {'class': "aiocache.plugins.HitMissRatioPlugin"}, {'class': "aiocache.plugins.TimingPlugin"} ] } }) async def default_cache(): cache = caches.get('default') # This always returns the same instance await cache.set("key", "value") assert await cache.get("key") == "value" assert isinstance(cache, Cache.MEMORY) assert isinstance(cache.serializer, StringSerializer) async def alt_cache(): # This generates a new instance every time! 
You can also use # `caches.create("alt", namespace="test", etc...)` to override extra args cache = caches.create("redis_alt") await cache.set("key", "value") assert await cache.get("key") == "value" assert isinstance(cache, Cache.REDIS) assert isinstance(cache.serializer, PickleSerializer) assert len(cache.plugins) == 2 assert cache.endpoint == "127.0.0.1" assert cache.timeout == 1 assert cache.port == 6379 await cache.close() async def test_alias(): await default_cache() await alt_cache() cache = Cache(Cache.REDIS) await cache.delete("key") await cache.close() await caches.get("default").close() if __name__ == "__main__": asyncio.run(test_alias()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/cached_decorator.py0000644000175100001730000000125014464001404020365 0ustar00runnerdockerimport asyncio from collections import namedtuple from aiocache import cached, Cache from aiocache.serializers import PickleSerializer Result = namedtuple('Result', "content, status") @cached( ttl=10, cache=Cache.REDIS, key="key", serializer=PickleSerializer(), port=6379, namespace="main") async def cached_call(): return Result("content", 200) async def test_cached(): async with Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main") as cache: await cached_call() exists = await cache.exists("key") assert exists is True await cache.delete("key") if __name__ == "__main__": asyncio.run(test_cached()) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9854784 aiocache-0.12.2/examples/frameworks/0000755000175100001730000000000014464001420016722 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/frameworks/aiohttp_example.py0000644000175100001730000000270314464001404022463 0ustar00runnerdockerimport asyncio import logging from datetime import datetime from aiohttp import web from 
aiocache import cached
from aiocache.serializers import JsonSerializer


@cached(key="function_key", serializer=JsonSerializer())
async def time():
    return {"time": datetime.now().isoformat()}


async def handle(request):
    return web.json_response(await time())


# It is also possible to cache the whole route, but for this you will need to
# override `cached.get_from_cache` and regenerate the response, since aiohttp
# forbids reusing responses
class CachedOverride(cached):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    async def get_from_cache(self, key):
        try:
            value = await self.cache.get(key)
            if isinstance(value, web.Response):
                return web.Response(
                    body=value.body,
                    status=value.status,
                    reason=value.reason,
                    headers=value.headers,
                )
            return value
        except Exception:
            logging.exception("Couldn't retrieve %s, unexpected error", key)
            return None


@CachedOverride(key="route_key", serializer=JsonSerializer())
async def handle2(request):
    return web.json_response(await asyncio.sleep(3))


if __name__ == "__main__":
    app = web.Application()
    app.router.add_get('/handle', handle)
    app.router.add_get('/handle2', handle2)
    web.run_app(app)

aiocache-0.12.2/examples/frameworks/sanic_example.py

"""
Example of caching using the aiocache package:

/: Does a 3 second sleep.
Only the first time, because it's using the `cached` decorator.
/reuse: Returns the data stored by the "main" endpoint.
"""
import asyncio

from sanic import Sanic
from sanic.response import json
from sanic.log import logger

from aiocache import cached, Cache
from aiocache.serializers import JsonSerializer

app = Sanic(__name__)


@cached(key="my_custom_key", serializer=JsonSerializer())
async def expensive_call():
    logger.info("Expensive has been called")
    await asyncio.sleep(3)
    return {"test": True}


async def reuse_data():
    cache = Cache(serializer=JsonSerializer())  # Not ideal to define here
    data = await cache.get("my_custom_key")  # Note the key is defined in `cached` decorator
    return data


@app.route("/")
async def main(request):
    logger.info("Received GET /")
    return json(await expensive_call())


@app.route("/reuse")
async def reuse(request):
    logger.info("Received GET /reuse")
    return json(await reuse_data())


app.run(host="0.0.0.0", port=8000)

aiocache-0.12.2/examples/frameworks/tornado_example.py

import tornado.web
import tornado.ioloop
from datetime import datetime

from aiocache import cached
from aiocache.serializers import JsonSerializer


class MainHandler(tornado.web.RequestHandler):

    # Due to some incompatibilities between tornado and asyncio, caches can't
    # use the "timeout" feature. To make it work, always specify timeout=0.
    @cached(key="my_custom_key", serializer=JsonSerializer(), timeout=0)
    async def time(self):
        return {"time": datetime.now().isoformat()}

    async def get(self):
        self.write(await self.time())


if __name__ == "__main__":
    tornado.ioloop.IOLoop.configure('tornado.platform.asyncio.AsyncIOLoop')
    app = tornado.web.Application([(r"/", MainHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
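The sanic and tornado examples above both share one cached value between handlers by agreeing on a fixed key ("my_custom_key"): one handler populates it via the `cached` decorator, the other only reads it back. The same pattern, reduced to plain asyncio and an in-memory dict so it runs without any framework or backend (all names here are illustrative, not part of aiocache):

```python
import asyncio

# Minimal stand-in for a shared cache: one coroutine computes and stores a
# value under a well-known key; another coroutine only reads that key.
_cache: dict = {}


async def expensive_call():
    if "my_custom_key" not in _cache:
        await asyncio.sleep(0)  # stands in for the real 3-second work
        _cache["my_custom_key"] = {"test": True}
    return _cache["my_custom_key"]


async def reuse_data():
    # Reads the key written by expensive_call(); None if not computed yet.
    return _cache.get("my_custom_key")


async def main():
    assert await reuse_data() is None   # nothing cached yet
    first = await expensive_call()
    assert await reuse_data() == first  # second "route" reuses the cached value
    return first


result = asyncio.run(main())
```

The weak point, which the sanic example's "Not ideal to define here" comment hints at, is that both sides must agree on the key out of band; centralizing the key (or the cache instance) in one module avoids silent drift.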
mtime=1691353860.0 aiocache-0.12.2/examples/marshmallow_serializer_class.py0000644000175100001730000000312114464001404023057 0ustar00runnerdockerimport random import string import asyncio from marshmallow import fields, Schema, post_load from aiocache import Cache from aiocache.serializers import BaseSerializer class RandomModel: MY_CONSTANT = "CONSTANT" def __init__(self, int_type=None, str_type=None, dict_type=None, list_type=None): self.int_type = int_type or random.randint(1, 10) self.str_type = str_type or random.choice(string.ascii_lowercase) self.dict_type = dict_type or {} self.list_type = list_type or [] def __eq__(self, obj): return self.__dict__ == obj.__dict__ class MarshmallowSerializer(Schema, BaseSerializer): # type: ignore[misc] int_type = fields.Integer() str_type = fields.String() dict_type = fields.Dict() list_type = fields.List(fields.Integer()) # marshmallow Schema class doesn't play nicely with multiple inheritance and won't call # BaseSerializer.__init__ encoding = 'utf-8' @post_load def build_my_type(self, data, **kwargs): return RandomModel(**data) class Meta: strict = True cache = Cache(serializer=MarshmallowSerializer(), namespace="main") async def serializer(): model = RandomModel() await cache.set("key", model) result = await cache.get("key") assert result.int_type == model.int_type assert result.str_type == model.str_type assert result.dict_type == model.dict_type assert result.list_type == model.list_type async def test_serializer(): await serializer() await cache.delete("key") if __name__ == "__main__": asyncio.run(test_serializer()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/multicached_decorator.py0000644000175100001730000000174214464001404021446 0ustar00runnerdockerimport asyncio from aiocache import multi_cached, Cache DICT = { 'a': "Z", 'b': "Y", 'c': "X", 'd': "W" } @multi_cached("ids", cache=Cache.REDIS, namespace="main") async def 
multi_cached_ids(ids=None): return {id_: DICT[id_] for id_ in ids} @multi_cached("keys", cache=Cache.REDIS, namespace="main") async def multi_cached_keys(keys=None): return {id_: DICT[id_] for id_ in keys} cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main") async def test_multi_cached(): await multi_cached_ids(ids=("a", "b")) await multi_cached_ids(ids=("a", "c")) await multi_cached_keys(keys=("d",)) assert await cache.exists("a") assert await cache.exists("b") assert await cache.exists("c") assert await cache.exists("d") await cache.delete("a") await cache.delete("b") await cache.delete("c") await cache.delete("d") await cache.close() if __name__ == "__main__": asyncio.run(test_multi_cached()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/optimistic_lock.py0000644000175100001730000000240614464001404020314 0ustar00runnerdockerimport asyncio import logging import random from aiocache import Cache from aiocache.lock import OptimisticLock, OptimisticLockError logger = logging.getLogger(__name__) cache = Cache(Cache.REDIS, endpoint='127.0.0.1', port=6379, namespace='main') async def expensive_function(): logger.warning('Expensive is being executed...') await asyncio.sleep(random.uniform(0, 2)) return 'result' async def my_view(): async with OptimisticLock(cache, 'key') as lock: result = await expensive_function() try: await lock.cas(result) except OptimisticLockError: logger.warning( 'I failed setting the value because it is different since the lock started!') return result async def concurrent(): await cache.set('key', 'initial_value') # All three calls will read 'initial_value' as the value to check and only # the first one finishing will succeed because the others, when trying to set # the value, will see that the value is not the same as when the lock started await asyncio.gather(my_view(), my_view(), my_view()) async def test_redis(): await concurrent() await 
cache.delete("key") await cache.close() if __name__ == '__main__': asyncio.run(test_redis()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/plugins.py0000644000175100001730000000257714464001404016612 0ustar00runnerdockerimport asyncio import random import logging from aiocache import Cache from aiocache.plugins import HitMissRatioPlugin, TimingPlugin, BasePlugin logger = logging.getLogger(__name__) class MyCustomPlugin(BasePlugin): async def pre_set(self, *args, **kwargs): logger.info("I'm the pre_set hook being called with %s %s" % (args, kwargs)) async def post_set(self, *args, **kwargs): logger.info("I'm the post_set hook being called with %s %s" % (args, kwargs)) cache = Cache( plugins=[HitMissRatioPlugin(), TimingPlugin(), MyCustomPlugin()], namespace="main") async def run(): await cache.set("a", "1") await cache.set("b", "2") await cache.set("c", "3") await cache.set("d", "4") possible_keys = ["a", "b", "c", "d", "e", "f"] for t in range(1000): await cache.get(random.choice(possible_keys)) assert cache.hit_miss_ratio["hit_ratio"] > 0.5 assert cache.hit_miss_ratio["total"] == 1000 assert cache.profiling["get_min"] > 0 assert cache.profiling["set_min"] > 0 assert cache.profiling["get_max"] > 0 assert cache.profiling["set_max"] > 0 print(cache.hit_miss_ratio) print(cache.profiling) async def test_run(): await run() await cache.delete("a") await cache.delete("b") await cache.delete("c") await cache.delete("d") if __name__ == "__main__": asyncio.run(test_run()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/python_object.py0000644000175100001730000000115514464001404017767 0ustar00runnerdockerimport asyncio from collections import namedtuple from aiocache import Cache from aiocache.serializers import PickleSerializer MyObject = namedtuple("MyObject", ["x", "y"]) cache = Cache(Cache.REDIS, serializer=PickleSerializer(), 
namespace="main") async def complex_object(): obj = MyObject(x=1, y=2) await cache.set("key", obj) my_object = await cache.get("key") assert my_object.x == 1 assert my_object.y == 2 async def test_python_object(): await complex_object() await cache.delete("key") await cache.close() if __name__ == "__main__": asyncio.run(test_python_object()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/redlock.py0000644000175100001730000000167114464001404016546 0ustar00runnerdockerimport asyncio import logging from aiocache import Cache from aiocache.lock import RedLock logger = logging.getLogger(__name__) cache = Cache(Cache.REDIS, endpoint='127.0.0.1', port=6379, namespace='main') async def expensive_function(): logger.warning('Expensive is being executed...') await asyncio.sleep(1) return 'result' async def my_view(): async with RedLock(cache, 'key', lease=2): # Wait at most 2 seconds result = await cache.get('key') if result is not None: logger.info('Found the value in the cache hurray!') return result result = await expensive_function() await cache.set('key', result) return result async def concurrent(): await asyncio.gather(my_view(), my_view(), my_view()) async def test_redis(): await concurrent() await cache.delete("key") await cache.close() if __name__ == '__main__': asyncio.run(test_redis()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/run_all.sh0000755000175100001730000000030014464001404016530 0ustar00runnerdocker#!/bin/bash pushd "$(dirname "$0")" for f in `find . 
-name '*.py' -not -path "./frameworks/*"`; do echo "########## Running $f #########" python $f || exit 1 echo;echo;echo done popd ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/serializer_class.py0000644000175100001730000000337414464001404020463 0ustar00runnerdockerimport asyncio import zlib from aiocache import Cache from aiocache.serializers import BaseSerializer class CompressionSerializer(BaseSerializer): # This is needed because zlib works with bytes. # this way the underlying backend knows how to # store/retrieve values DEFAULT_ENCODING = None def dumps(self, value): print("I've received:\n{}".format(value)) compressed = zlib.compress(value.encode()) print("But I'm storing:\n{}".format(compressed)) return compressed def loads(self, value): print("I've retrieved:\n{}".format(value)) decompressed = zlib.decompress(value).decode() print("But I'm returning:\n{}".format(decompressed)) return decompressed cache = Cache(Cache.REDIS, serializer=CompressionSerializer(), namespace="main") async def serializer(): text = ( "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt" "ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation" "ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in" "reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur" "sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit" "anim id est laborum.") await cache.set("key", text) print("-----------------------------------") real_value = await cache.get("key") compressed_value = await cache.raw("get", "main:key") assert len(compressed_value) < len(real_value.encode()) async def test_serializer(): await serializer() await cache.delete("key") await cache.close() if __name__ == "__main__": asyncio.run(test_serializer()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/serializer_function.py0000644000175100001730000000206614464001404021200 0ustar00runnerdockerimport asyncio import json from marshmallow import Schema, fields, post_load from aiocache import Cache class MyType: def __init__(self, x, y): self.x = x self.y = y class MyTypeSchema(Schema): x = fields.Number() y = fields.Number() @post_load def build_object(self, data, **kwargs): return MyType(data['x'], data['y']) def dumps(value): return MyTypeSchema().dumps(value) def loads(value): return MyTypeSchema().loads(value) cache = Cache(Cache.REDIS, namespace="main") async def serializer_function(): await cache.set("key", MyType(1, 2), dumps_fn=dumps) obj = await cache.get("key", loads_fn=loads) assert obj.x == 1 assert obj.y == 2 assert await cache.get("key") == json.loads(('{"y": 2.0, "x": 1.0}')) assert json.loads(await cache.raw("get", "main:key")) == {"y": 2.0, "x": 1.0} async def test_serializer_function(): await serializer_function() await cache.delete("key") await cache.close() if __name__ == "__main__": asyncio.run(test_serializer_function()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/simple_redis.py0000644000175100001730000000110014464001404017565 0ustar00runnerdockerimport asyncio from aiocache import Cache cache = Cache(Cache.REDIS, endpoint="127.0.0.1", port=6379, namespace="main") async def 
redis(): await cache.set("key", "value") await cache.set("expire_me", "value", ttl=10) assert await cache.get("key") == "value" assert await cache.get("expire_me") == "value" assert await cache.raw("ttl", "main:expire_me") > 0 async def test_redis(): await redis() await cache.delete("key") await cache.delete("expire_me") await cache.close() if __name__ == "__main__": asyncio.run(test_redis()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/examples/testing.py0000644000175100001730000000053414464001404016575 0ustar00runnerdockerimport asyncio from unittest.mock import MagicMock from aiocache.base import BaseCache async def main(): mocked_cache = MagicMock(spec=BaseCache) mocked_cache.get.return_value = "world" print(await mocked_cache.get("hello")) if __name__ == "__main__": import sys if sys.version_info >= (3, 8): asyncio.run(main()) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/pyproject.toml0000644000175100001730000000011614464001404015640 0ustar00runnerdocker[tool.black] line-length = 99 target-version = ['py37','py38','py39','py310'] ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/requirements-dev.txt0000644000175100001730000000033514464001404016767 0ustar00runnerdocker-r requirements.txt flake8==6.0.0 flake8-bandit==4.1.1 flake8-bugbear==22.12.6 flake8-import-order==0.18.2 flake8-requirements==1.7.6 mypy==0.991; implementation_name=="cpython" types-redis==4.4.0.0 types-ujson==5.7.0.0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/requirements.txt0000644000175100001730000000024114464001404016207 0ustar00runnerdocker-e . 
aiomcache==0.8.0 aiohttp==3.8.3 marshmallow==3.19.0 msgpack==1.0.4 pytest==7.2.0 pytest-asyncio==0.20.3 pytest-cov==4.0.0 pytest-mock==3.10.0 redis==4.4.2 ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9934785 aiocache-0.12.2/setup.cfg0000644000175100001730000000104314464001420014543 0ustar00runnerdocker[bdist_wheel] universal = 1 [pep8] max-line-length = 100 [tool:pytest] addopts = --cov=aiocache --cov=tests/ --cov-report term --strict-markers asyncio_mode = auto junit_suite_name = aiohttp_test_suite filterwarnings = error testpaths = tests/ junit_family = xunit2 xfail_strict = true markers = memcached: tests requiring memcached backend redis: tests requiring redis backend [coverage:run] branch = True parallel = True source = aiocache [coverage:report] show_missing = true skip_covered = true [egg_info] tag_build = tag_date = 0 ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/setup.py0000644000175100001730000000222614464001404014442 0ustar00runnerdockerimport re from pathlib import Path from setuptools import setup p = Path(__file__).with_name("aiocache") / "__init__.py" try: version = re.findall(r"^__version__ = \"([^']+)\"\r?$", p.read_text(), re.M)[0] except IndexError: raise RuntimeError("Unable to determine version.") readme = Path(__file__).with_name("README.rst").read_text() setup( name="aiocache", version=version, author="Manuel Miranda", url="https://github.com/aio-libs/aiocache", author_email="manu.mirandad@gmail.com", description="multi backend asyncio cache", long_description=readme, classifiers=[ "Programming Language :: Python", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Framework :: AsyncIO", ], packages=("aiocache",), install_requires=None, extras_require={ "redis": ["redis>=4.2.0"], 
"memcached": ["aiomcache>=0.5.2"], "msgpack": ["msgpack>=0.5.5"], }, include_package_data=True, ) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9854784 aiocache-0.12.2/tests/0000755000175100001730000000000014464001420014066 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/tests/__init__.py0000644000175100001730000000000014464001404016167 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1691353871.9894783 aiocache-0.12.2/tests/acceptance/0000755000175100001730000000000014464001420016154 5ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/tests/acceptance/__init__.py0000644000175100001730000000000014464001404020255 0ustar00runnerdocker././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/tests/acceptance/conftest.py0000644000175100001730000000236014464001404020356 0ustar00runnerdockerimport asyncio import pytest from aiocache import Cache, caches from ..utils import KEY_LOCK, Keys @pytest.fixture(autouse=True) def reset_caches(): caches._caches = {} caches.set_config( { "default": { "cache": "aiocache.SimpleMemoryCache", "serializer": {"class": "aiocache.serializers.NullSerializer"}, } } ) @pytest.fixture async def redis_cache(): async with Cache(Cache.REDIS, namespace="test") as cache: yield cache await asyncio.gather(*(cache.delete(k) for k in (*Keys, KEY_LOCK))) @pytest.fixture async def memory_cache(): async with Cache(namespace="test") as cache: yield cache await asyncio.gather(*(cache.delete(k) for k in (*Keys, KEY_LOCK))) @pytest.fixture async def memcached_cache(): async with Cache(Cache.MEMCACHED, namespace="test") as cache: yield cache await asyncio.gather(*(cache.delete(k) for k in (*Keys, KEY_LOCK))) @pytest.fixture( params=( 
pytest.param("redis_cache", marks=pytest.mark.redis), "memory_cache", pytest.param("memcached_cache", marks=pytest.mark.memcached), )) def cache(request): return request.getfixturevalue(request.param) ././@PaxHeader0000000000000000000000000000002600000000000010213 xustar0022 mtime=1691353860.0 aiocache-0.12.2/tests/acceptance/test_base.py0000644000175100001730000002260714464001404020510 0ustar00runnerdockerimport asyncio import pytest from aiocache.backends.memory import SimpleMemoryCache from aiocache.base import _Conn from ..utils import Keys class TestCache: """ This class ensures that all caches behave the same way and have the minimum functionality. To add a new cache just create the fixture for the new cache and add id as a param for the cache fixture """ async def test_setup(self, cache): assert cache.namespace == "test" async def test_get_missing(self, cache): assert await cache.get(Keys.KEY) is None assert await cache.get(Keys.KEY, default=1) == 1 async def test_get_existing(self, cache): await cache.set(Keys.KEY, "value") assert await cache.get(Keys.KEY) == "value" async def test_multi_get(self, cache): await cache.set(Keys.KEY, "value") assert await cache.multi_get([Keys.KEY, Keys.KEY_1]) == ["value", None] async def test_delete_missing(self, cache): result = await cache.delete(Keys.KEY) assert result == 0 async def test_delete_existing(self, cache): await cache.set(Keys.KEY, "value") result = await cache.delete(Keys.KEY) assert result == 1 value = await cache.get(Keys.KEY) assert value is None async def test_set(self, cache): assert await cache.set(Keys.KEY, "value") is True async def test_set_cancel_previous_ttl_handle(self, cache): await cache.set(Keys.KEY, "value", ttl=4) await asyncio.sleep(2.1) # Smaller ttl seems flaky, as if this call takes >0.5s... 
        result = await cache.get(Keys.KEY)
        assert result == "value"

        await cache.set(Keys.KEY, "new_value", ttl=4)
        await asyncio.sleep(2)
        result = await cache.get(Keys.KEY)
        assert result == "new_value"

    async def test_multi_set(self, cache):
        pairs = [(Keys.KEY, "value"), [Keys.KEY_1, "random_value"]]
        assert await cache.multi_set(pairs) is True
        assert await cache.multi_get([Keys.KEY, Keys.KEY_1]) == ["value", "random_value"]

    async def test_multi_set_with_ttl(self, cache):
        pairs = [(Keys.KEY, "value"), [Keys.KEY_1, "random_value"]]
        assert await cache.multi_set(pairs, ttl=1) is True
        await asyncio.sleep(1.1)

        assert await cache.multi_get([Keys.KEY, Keys.KEY_1]) == [None, None]

    async def test_set_with_ttl(self, cache):
        await cache.set(Keys.KEY, "value", ttl=1)
        await asyncio.sleep(1.1)

        assert await cache.get(Keys.KEY) is None

    async def test_add_missing(self, cache):
        assert await cache.add(Keys.KEY, "value", ttl=1) is True

    async def test_add_existing(self, cache):
        assert await cache.set(Keys.KEY, "value") is True
        with pytest.raises(ValueError):
            await cache.add(Keys.KEY, "value")

    async def test_exists_missing(self, cache):
        assert await cache.exists(Keys.KEY) is False

    async def test_exists_existing(self, cache):
        await cache.set(Keys.KEY, "value")
        assert await cache.exists(Keys.KEY) is True

    async def test_increment_missing(self, cache):
        assert await cache.increment(Keys.KEY, delta=2) == 2
        assert await cache.increment(Keys.KEY_1, delta=-2) == -2

    async def test_increment_existing(self, cache):
        await cache.set(Keys.KEY, 2)
        assert await cache.increment(Keys.KEY, delta=2) == 4
        assert await cache.increment(Keys.KEY, delta=1) == 5
        assert await cache.increment(Keys.KEY, delta=-3) == 2

    async def test_increment_typeerror(self, cache):
        await cache.set(Keys.KEY, "value")
        with pytest.raises(TypeError):
            assert await cache.increment(Keys.KEY)

    async def test_expire_existing(self, cache):
        await cache.set(Keys.KEY, "value")
        assert await cache.expire(Keys.KEY, 1) is True
        await asyncio.sleep(1.1)

        assert await cache.exists(Keys.KEY) is False

    async def test_expire_with_0(self, cache):
        await cache.set(Keys.KEY, "value", 1)
        assert await cache.expire(Keys.KEY, 0) is True
        await asyncio.sleep(1.1)

        assert await cache.exists(Keys.KEY) is True

    async def test_expire_missing(self, cache):
        assert await cache.expire(Keys.KEY, 1) is False

    async def test_clear(self, cache):
        await cache.set(Keys.KEY, "value")
        await cache.clear()

        assert await cache.exists(Keys.KEY) is False

    async def test_close_pool_only_clears_resources(self, cache):
        await cache.set(Keys.KEY, "value")
        await cache.close()

        assert await cache.set(Keys.KEY, "value") is True
        assert await cache.get(Keys.KEY) == "value"

    async def test_single_connection(self, cache):
        async with cache.get_connection() as conn:
            assert isinstance(conn, _Conn)
            assert await conn.set(Keys.KEY, "value") is True
            assert await conn.get(Keys.KEY) == "value"


class TestMemoryCache:
    async def test_accept_explicit_args(self):
        with pytest.raises(TypeError):
            SimpleMemoryCache(random_attr="wtf")

    async def test_set_float_ttl(self, memory_cache):
        await memory_cache.set(Keys.KEY, "value", ttl=0.1)
        await asyncio.sleep(0.15)

        assert await memory_cache.get(Keys.KEY) is None

    async def test_multi_set_float_ttl(self, memory_cache):
        pairs = [(Keys.KEY, "value"), [Keys.KEY_1, "random_value"]]
        assert await memory_cache.multi_set(pairs, ttl=0.1) is True
        await asyncio.sleep(0.15)

        assert await memory_cache.multi_get([Keys.KEY, Keys.KEY_1]) == [None, None]

    async def test_raw(self, memory_cache):
        await memory_cache.raw("setdefault", "key", "value")
        assert await memory_cache.raw("get", "key") == "value"
        assert list(await memory_cache.raw("keys")) == ["key"]

    async def test_clear_with_namespace_memory(self, memory_cache):
        await memory_cache.set(Keys.KEY, "value", namespace="test")
        await memory_cache.clear(namespace="test")

        assert await memory_cache.exists(Keys.KEY, namespace="test") is False


@pytest.mark.memcached
class TestMemcachedCache:
    async def test_accept_explicit_args(self):
        from aiocache.backends.memcached import MemcachedCache

        with pytest.raises(TypeError):
            MemcachedCache(random_attr="wtf")

    async def test_set_too_long_key(self, memcached_cache):
        with pytest.raises(TypeError) as exc_info:
            await memcached_cache.set("a" * 2000, "value")
        assert str(exc_info.value).startswith("aiomcache error: invalid key")

    async def test_set_float_ttl_fails(self, memcached_cache):
        with pytest.raises(TypeError) as exc_info:
            await memcached_cache.set(Keys.KEY, "value", ttl=0.1)
        assert str(exc_info.value) == "aiomcache error: exptime not int: 0.1"

    async def test_multi_set_float_ttl(self, memcached_cache):
        with pytest.raises(TypeError) as exc_info:
            pairs = [(Keys.KEY, "value"), [Keys.KEY_1, "random_value"]]
            assert await memcached_cache.multi_set(pairs, ttl=0.1) is True
        assert str(exc_info.value) == "aiomcache error: exptime not int: 0.1"

    async def test_raw(self, memcached_cache):
        await memcached_cache.raw("set", b"key", b"value")
        assert await memcached_cache.raw("get", b"key") == "value"
        assert await memcached_cache.raw("prepend", b"key", b"super") is True
        assert await memcached_cache.raw("get", b"key") == "supervalue"

    async def test_clear_with_namespace_memcached(self, memcached_cache):
        await memcached_cache.set(Keys.KEY, "value", namespace="test")

        with pytest.raises(ValueError):
            await memcached_cache.clear(namespace="test")

        assert await memcached_cache.exists(Keys.KEY, namespace="test") is True

    async def test_close(self, memcached_cache):
        await memcached_cache.set(Keys.KEY, "value")
        await memcached_cache._close()
        assert memcached_cache.client._pool._pool.qsize() == 0


@pytest.mark.redis
class TestRedisCache:
    async def test_accept_explicit_args(self):
        from aiocache.backends.redis import RedisCache

        with pytest.raises(TypeError):
            RedisCache(random_attr="wtf")

    async def test_float_ttl(self, redis_cache):
        await redis_cache.set(Keys.KEY, "value", ttl=0.1)
        await asyncio.sleep(0.15)

        assert await redis_cache.get(Keys.KEY) is None
    async def test_multi_set_float_ttl(self, redis_cache):
        pairs = [(Keys.KEY, "value"), [Keys.KEY_1, "random_value"]]
        assert await redis_cache.multi_set(pairs, ttl=0.1) is True
        await asyncio.sleep(0.15)

        assert await redis_cache.multi_get([Keys.KEY, Keys.KEY_1]) == [None, None]

    async def test_raw(self, redis_cache):
        await redis_cache.raw("set", "key", "value")
        assert await redis_cache.raw("get", "key") == "value"
        assert await redis_cache.raw("keys", "k*") == ["key"]
        # .raw() doesn't build key with namespace prefix, clear it manually
        await redis_cache.raw("delete", "key")

    async def test_clear_with_namespace_redis(self, redis_cache):
        await redis_cache.set(Keys.KEY, "value", namespace="test")
        await redis_cache.clear(namespace="test")

        assert await redis_cache.exists(Keys.KEY, namespace="test") is False

    async def test_close(self, redis_cache):
        await redis_cache.set(Keys.KEY, "value")
        await redis_cache._close()

# File: aiocache-0.12.2/tests/acceptance/test_decorators.py
import asyncio
import random
from unittest import mock

import pytest

from aiocache import cached, cached_stampede, multi_cached
from aiocache.base import _ensure_key

from ..utils import Keys


async def return_dict(keys=None):
    ret = {}
    for value, key in enumerate(keys or [Keys.KEY, Keys.KEY_1]):
        ret[key] = str(value)
    return ret


async def stub(arg: float, seconds: int = 0) -> str:
    await asyncio.sleep(seconds)
    return str(random.randint(1, 50))


class TestCached:
    @pytest.fixture(autouse=True)
    def default_cache(self, mocker, cache):
        mocker.patch("aiocache.decorators._get_cache", autospec=True, return_value=cache)

    async def test_cached_ttl(self, cache):
        @cached(ttl=2, key=Keys.KEY)
        async def fn():
            return str(random.randint(1, 50))

        resp1 = await fn()
        resp2 = await fn()

        assert await cache.get(Keys.KEY) == resp1 == resp2
        await asyncio.sleep(2.1)
        assert await cache.get(Keys.KEY) is None

    async def test_cached_key_builder(self, cache):
        def build_key(f, self, a, b):
            return "{}_{}_{}_{}".format(self, f.__name__, a, b)

        @cached(key_builder=build_key)
        async def fn(self, a, b=2):
            return "1"

        await fn("self", 1, 3)
        assert await cache.exists(build_key(fn, "self", 1, 3)) is True

    @pytest.mark.parametrize("decorator", (cached, cached_stampede))
    async def test_cached_skip_cache_func(self, cache, decorator):
        @decorator(skip_cache_func=lambda r: r is None)
        async def sk_func(x):
            return x if x > 0 else None

        arg = 1
        res = await sk_func(arg)
        assert res

        key = decorator().get_cache_key(sk_func, args=(1,), kwargs={})
        assert key
        assert await cache.exists(key)
        assert await cache.get(key) == res

        arg = -1
        await sk_func(arg)

        key = decorator().get_cache_key(sk_func, args=(-1,), kwargs={})
        assert key
        assert not await cache.exists(key)

    async def test_cached_without_namespace(self, cache):
        """Default cache key is created when no namespace is provided"""
        @cached(namespace=None)
        async def fn():
            return "1"

        await fn()
        decorator = cached(namespace=None)
        key = decorator.get_cache_key(fn, args=(), kwargs={})
        assert await cache.exists(key, namespace=None) is True

    async def test_cached_with_namespace(self, cache):
        """Cache key is prefixed with provided namespace"""
        key_prefix = "test"

        @cached(namespace=key_prefix)
        async def ns_fn():
            return "1"

        await ns_fn()
        decorator = cached(namespace=key_prefix)
        key = decorator.get_cache_key(ns_fn, args=(), kwargs={})
        assert await cache.exists(key, namespace=key_prefix) is True


class TestCachedStampede:
    @pytest.fixture(autouse=True)
    def default_cache(self, mocker, cache):
        mocker.patch("aiocache.decorators._get_cache", autospec=True, return_value=cache)

    async def test_cached_stampede(self, mocker, cache):
        mocker.spy(cache, "get")
        mocker.spy(cache, "set")
        decorator = cached_stampede(ttl=10, lease=3)

        await asyncio.gather(decorator(stub)(0.5), decorator(stub)(0.5))
        cache.get.assert_called_with("tests.acceptance.test_decoratorsstub(0.5,)[]")
        assert cache.get.call_count == 4
        cache.set.assert_called_with("tests.acceptance.test_decoratorsstub(0.5,)[]", mock.ANY, ttl=10)
        assert cache.set.call_count == 1, cache.set.call_args_list

    async def test_locking_dogpile_lease_expiration(self, mocker, cache):
        mocker.spy(cache, "get")
        mocker.spy(cache, "set")
        decorator = cached_stampede(ttl=10, lease=3)

        await asyncio.gather(
            decorator(stub)(1, seconds=1),
            decorator(stub)(1, seconds=2),
            decorator(stub)(1, seconds=3),
        )

        assert cache.get.call_count == 6
        assert cache.set.call_count == 3

    async def test_locking_dogpile_task_cancellation(self, cache):
        @cached_stampede()
        async def cancel_task():
            raise asyncio.CancelledError()

        with pytest.raises(asyncio.CancelledError):
            await cancel_task()


class TestMultiCachedDecorator:
    @pytest.fixture(autouse=True)
    def default_cache(self, mocker, cache):
        mocker.patch("aiocache.decorators._get_cache", autospec=True, return_value=cache)

    async def test_multi_cached(self, cache):
        multi_cached_decorator = multi_cached("keys")

        default_keys = {Keys.KEY, Keys.KEY_1}
        await multi_cached_decorator(return_dict)(keys=default_keys)

        for key in default_keys:
            assert await cache.get(key) is not None

    async def test_keys_without_kwarg(self, cache):
        @multi_cached("keys")
        async def fn(keys):
            return {Keys.KEY: 1}

        await fn([Keys.KEY])
        assert await cache.exists(Keys.KEY) is True

    async def test_multi_cached_key_builder(self, cache):
        def build_key(key, f, self, keys, market="ES"):
            return "{}_{}_{}".format(f.__name__, _ensure_key(key), market)

        @multi_cached(keys_from_attr="keys", key_builder=build_key)
        async def fn(self, keys, market="ES"):
            return {Keys.KEY: 1, Keys.KEY_1: 2}

        await fn("self", keys=[Keys.KEY, Keys.KEY_1])
        assert await cache.exists("fn_" + _ensure_key(Keys.KEY) + "_ES") is True
        assert await cache.exists("fn_" + _ensure_key(Keys.KEY_1) + "_ES") is True

    async def test_multi_cached_skip_keys(self, cache):
        @multi_cached(keys_from_attr="keys", skip_cache_func=lambda _, v: v is None)
        async def multi_sk_fn(keys, values):
            return {k: v for k, v in zip(keys, values)}

        res = await multi_sk_fn(keys=[Keys.KEY, Keys.KEY_1], values=[42, None])
        assert res
        assert Keys.KEY in res and Keys.KEY_1 in res

        assert await cache.exists(Keys.KEY)
        assert await cache.get(Keys.KEY) == res[Keys.KEY]
        assert not await cache.exists(Keys.KEY_1)

    async def test_fn_with_args(self, cache):
        @multi_cached("keys")
        async def fn(keys, *args):
            assert len(args) == 1
            return {Keys.KEY: 1}

        await fn([Keys.KEY], "arg")
        assert await cache.exists(Keys.KEY) is True

    async def test_double_decorator(self, cache):
        def dummy_d(fn):
            async def wrapper(*args, **kwargs):
                await fn(*args, **kwargs)

            return wrapper

        @dummy_d
        @multi_cached("keys")
        async def fn(keys):
            return {Keys.KEY: 1}

        await fn([Keys.KEY])
        assert await cache.exists(Keys.KEY) is True

# File: aiocache-0.12.2/tests/acceptance/test_factory.py
import pytest

from aiocache import Cache
from aiocache.backends.memory import SimpleMemoryCache


class TestCache:
    async def test_from_url_memory(self):
        async with Cache.from_url("memory://") as cache:
            assert isinstance(cache, SimpleMemoryCache)

    def test_from_url_memory_no_endpoint(self):
        with pytest.raises(TypeError):
            Cache.from_url("memory://endpoint:10")

    @pytest.mark.redis
    async def test_from_url_redis(self):
        from aiocache.backends.redis import RedisCache

        url = ("redis://endpoint:1000/0/?password=pass" +
               "&pool_max_size=50&create_connection_timeout=20")

        async with Cache.from_url(url) as cache:
            assert isinstance(cache, RedisCache)
            assert cache.endpoint == "endpoint"
            assert cache.port == 1000
            assert cache.password == "pass"
            assert cache.pool_max_size == 50
            assert cache.create_connection_timeout == 20

    @pytest.mark.memcached
    async def test_from_url_memcached(self):
        from aiocache.backends.memcached import MemcachedCache

        url = "memcached://endpoint:1000?pool_size=10"

        async with Cache.from_url(url) as cache:
            assert isinstance(cache, MemcachedCache)
            assert cache.endpoint == "endpoint"
            assert cache.port == 1000
            assert cache.pool_size == 10

    @pytest.mark.parametrize(
        "scheme",
        (pytest.param("redis", marks=pytest.mark.redis),
         "memory",
         pytest.param("memcached", marks=pytest.mark.memcached),
         ))
    def test_from_url_unexpected_param(self, scheme):
        with pytest.raises(TypeError):
            Cache.from_url("{}://?arg1=arg1".format(scheme))

# File: aiocache-0.12.2/tests/acceptance/test_lock.py
import asyncio

import pytest

from aiocache.lock import OptimisticLock, OptimisticLockError, RedLock
from aiocache.serializers import StringSerializer

from ..utils import KEY_LOCK, Keys


@pytest.fixture
def lock(cache):
    return RedLock(cache, Keys.KEY, 20)


def build_key(key, namespace=None):
    return "custom_key"


def build_key_bytes(key, namespace=None):
    return b"custom_key"


@pytest.fixture
def custom_redis_cache(mocker, redis_cache, build_key=build_key):
    mocker.patch.object(redis_cache, "build_key", new=build_key)
    yield redis_cache


@pytest.fixture
def custom_memory_cache(mocker, memory_cache, build_key=build_key):
    mocker.patch.object(memory_cache, "build_key", new=build_key)
    yield memory_cache


@pytest.fixture
def custom_memcached_cache(mocker, memcached_cache, build_key=build_key_bytes):
    mocker.patch.object(memcached_cache, "build_key", new=build_key)
    yield memcached_cache


class TestRedLock:
    async def test_acquire(self, cache, lock):
        cache.serializer = StringSerializer()
        async with lock:
            assert await cache.get(KEY_LOCK) == lock._value

    async def test_release_does_nothing_when_no_lock(self, lock):
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_acquire_release(self, cache, lock):
        async with lock:
            pass

        assert await cache.get(KEY_LOCK) is None

    async def test_locking_dogpile(self, mocker, cache):
        mocker.spy(cache, "get")
        mocker.spy(cache, "set")
        mocker.spy(cache, "_add")

        async def dummy():
            res = await cache.get(Keys.KEY)
            assert res is None

            async with RedLock(cache, Keys.KEY, lease=5):
                res = await cache.get(Keys.KEY)
                if res is not None:
                    return
                await asyncio.sleep(0.1)
                await cache.set(Keys.KEY, "value")

        await asyncio.gather(dummy(), dummy(), dummy(), dummy())
        assert cache._add.call_count == 4
        assert cache.get.call_count == 8
        assert cache.set.call_count == 1, cache.set.call_args_list

    async def test_locking_dogpile_lease_expiration(self, cache):
        async def dummy() -> None:
            res = await cache.get(Keys.KEY)
            assert res is None

            # Lease should expire before cache is set, so res is still None.
            async with RedLock(cache, Keys.KEY, lease=1):
                res = await cache.get(Keys.KEY)
                assert res is None
                await asyncio.sleep(1.1)
                await cache.set(Keys.KEY, "value")

        await asyncio.gather(dummy(), dummy(), dummy(), dummy())

    async def test_locking_dogpile_propagates_exceptions(self, cache):
        async def dummy():
            async with RedLock(cache, Keys.KEY, lease=1):
                raise ValueError()

        with pytest.raises(ValueError):
            await dummy()


class TestMemoryRedLock:
    @pytest.fixture
    def lock(self, memory_cache):
        return RedLock(memory_cache, Keys.KEY, 20)

    async def test_acquire_key_builder(self, custom_memory_cache, lock):
        async with lock:
            assert await custom_memory_cache.get(KEY_LOCK) == lock._value

    async def test_acquire_release_key_builder(self, custom_memory_cache, lock):
        async with lock:
            assert await custom_memory_cache.get(KEY_LOCK) is not None
        assert await custom_memory_cache.get(KEY_LOCK) is None

    async def test_release_wrong_token_fails(self, lock):
        await lock.__aenter__()
        lock._value = "random"
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_release_wrong_client_fails(self, memory_cache, lock):
        wrong_lock = RedLock(memory_cache, Keys.KEY, 20)
        await lock.__aenter__()
        assert await wrong_lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_float_lease(self, memory_cache):
        lock = RedLock(memory_cache, Keys.KEY, 0.1)
        await lock.__aenter__()
        await asyncio.sleep(0.2)
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None


@pytest.mark.redis
class TestRedisRedLock:
    @pytest.fixture
    def lock(self, redis_cache):
        return RedLock(redis_cache, Keys.KEY, 20)

    async def test_acquire_key_builder(self, custom_redis_cache, lock):
        custom_redis_cache.serializer = StringSerializer()
        async with lock:
            assert await custom_redis_cache.get(KEY_LOCK) == lock._value

    async def test_acquire_release_key_builder(self, custom_redis_cache, lock):
        custom_redis_cache.serializer = StringSerializer()
        async with lock:
            assert await custom_redis_cache.get(KEY_LOCK) is not None
        assert await custom_redis_cache.get(KEY_LOCK) is None

    async def test_release_wrong_token_fails(self, lock):
        await lock.__aenter__()
        lock._value = "random"
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_release_wrong_client_fails(self, redis_cache, lock):
        wrong_lock = RedLock(redis_cache, Keys.KEY, 20)
        await lock.__aenter__()
        assert await wrong_lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_float_lease(self, redis_cache):
        lock = RedLock(redis_cache, Keys.KEY, 0.1)
        await lock.__aenter__()
        await asyncio.sleep(0.2)
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None


@pytest.mark.memcached
class TestMemcachedRedLock:
    @pytest.fixture
    def lock(self, memcached_cache):
        return RedLock(memcached_cache, Keys.KEY, 20)

    async def test_acquire_key_builder(self, custom_memcached_cache, lock):
        custom_memcached_cache.serializer = StringSerializer()
        async with lock:
            assert await custom_memcached_cache.get(KEY_LOCK) == lock._value

    async def test_acquire_release_key_builder(self, custom_memcached_cache, lock):
        custom_memcached_cache.serializer = StringSerializer()
        async with lock:
            assert await custom_memcached_cache.get(KEY_LOCK) is not None
        assert await custom_memcached_cache.get(KEY_LOCK) is None

    async def test_release_wrong_token_succeeds_meh(self, lock):
        await lock.__aenter__()
        lock._value = "random"
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_release_wrong_client_succeeds_meh(self, memcached_cache, lock):
        wrong_lock = RedLock(memcached_cache, Keys.KEY, 20)
        await lock.__aenter__()
        assert await wrong_lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_float_lease(self, memcached_cache):
        lock = RedLock(memcached_cache, Keys.KEY, 0.1)
        with pytest.raises(TypeError):
            await lock.__aenter__()


class TestOptimisticLock:
    @pytest.fixture
    def lock(self, cache):
        return OptimisticLock(cache, Keys.KEY)

    async def test_acquire(self, cache, lock):
        await cache.set(Keys.KEY, "value")
        async with lock:
            assert lock._token == await cache._gets(cache._build_key(Keys.KEY))

    async def test_release_does_nothing(self, lock):
        assert await lock.__aexit__("exc_type", "exc_value", "traceback") is None

    async def test_check_and_set_not_existing_never_fails(self, cache, lock):
        async with lock as locked:
            await cache.set(Keys.KEY, "conflicting_value")
            await locked.cas("value")

        assert await cache.get(Keys.KEY) == "value"

    async def test_check_and_set(self, cache, lock):
        await cache.set(Keys.KEY, "previous_value")
        async with lock as locked:
            await locked.cas("value")

        assert await cache.get(Keys.KEY) == "value"

    async def test_check_and_set_fail(self, cache, lock):
        await cache.set(Keys.KEY, "previous_value")
        with pytest.raises(OptimisticLockError):
            async with lock as locked:
                await cache.set(Keys.KEY, "conflicting_value")
                await locked.cas("value")

    async def test_check_and_set_with_int_ttl(self, cache, lock):
        await cache.set(Keys.KEY, "previous_value")
        async with lock as locked:
            await locked.cas("value", ttl=1)

        await asyncio.sleep(1)
        assert await cache.get(Keys.KEY) is None


class TestMemoryOptimisticLock:
    @pytest.fixture
    def lock(self, memory_cache):
        return OptimisticLock(memory_cache, Keys.KEY)

    async def test_acquire_key_builder(self, custom_memory_cache, lock):
        await custom_memory_cache.set(Keys.KEY, "value")
        async with lock:
            assert await custom_memory_cache.get(KEY_LOCK) == lock._token
        await custom_memory_cache.delete(Keys.KEY, "value")

    async def test_check_and_set_with_float_ttl(self, memory_cache, lock):
        await memory_cache.set(Keys.KEY, "previous_value")
        async with lock as locked:
            await locked.cas("value", ttl=0.1)

        await asyncio.sleep(1)
        assert await memory_cache.get(Keys.KEY) is None


@pytest.mark.redis
class TestRedisOptimisticLock:
    @pytest.fixture
    def lock(self, redis_cache):
        return OptimisticLock(redis_cache, Keys.KEY)

    async def test_acquire_key_builder(self, custom_redis_cache, lock):
        custom_redis_cache.serializer = StringSerializer()
        await custom_redis_cache.set(Keys.KEY, "value")
        async with lock:
            assert await custom_redis_cache.get(KEY_LOCK) == lock._token
        await custom_redis_cache.delete(Keys.KEY, "value")

    async def test_check_and_set_with_float_ttl(self, redis_cache, lock):
        await redis_cache.set(Keys.KEY, "previous_value")
        async with lock as locked:
            await locked.cas("value", ttl=0.1)

        await asyncio.sleep(1)
        assert await redis_cache.get(Keys.KEY) is None

# File: aiocache-0.12.2/tests/acceptance/test_plugins.py
import pytest

from aiocache.plugins import HitMissRatioPlugin, TimingPlugin


class TestHitMissRatioPlugin:
    @pytest.mark.parametrize(
        "data, ratio",
        [
            ({"testa": 1, "testb": 2, "testc": 3}, 0.6),
            ({"testa": 1, "testz": 0}, 0.2),
            ({}, 0),
            ({"testa": 1, "testb": 2, "testc": 3, "testd": 4, "teste": 5}, 1),
        ],
    )
    async def test_get_hit_miss_ratio(self, memory_cache, data, ratio):
        keys = ["a", "b", "c", "d", "e", "f"]
        memory_cache.plugins = [HitMissRatioPlugin()]
        memory_cache._cache = data
        for key in keys:
            await memory_cache.get(key)

        hits = [x for x in keys if "test" + x in data]
        assert memory_cache.hit_miss_ratio["hits"] == len(hits)
        assert (
            memory_cache.hit_miss_ratio["hit_ratio"]
            == len(hits) / memory_cache.hit_miss_ratio["total"]
        )

    @pytest.mark.parametrize(
        "data, ratio",
        [
            ({"testa": 1, "testb": 2, "testc": 3}, 0.6),
            ({"testa": 1, "testz": 0}, 0.2),
            ({}, 0),
            ({"testa": 1, "testb": 2, "testc": 3, "testd": 4, "teste": 5}, 1),
        ],
    )
    async def test_multi_get_hit_miss_ratio(self, memory_cache, data, ratio):
        keys = ["a", "b", "c", "d", "e", "f"]
        memory_cache.plugins = [HitMissRatioPlugin()]
        memory_cache._cache = data
        for key in keys:
            await memory_cache.multi_get([key])

        hits = [x for x in keys if "test" + x in data]
        assert memory_cache.hit_miss_ratio["hits"] == len(hits)
        assert (
            memory_cache.hit_miss_ratio["hit_ratio"]
            == len(hits) / memory_cache.hit_miss_ratio["total"]
        )

    async def test_set_and_get_using_namespace(self, memory_cache):
        memory_cache.plugins = [HitMissRatioPlugin()]
        key = "A"
        namespace = "test"
        value = 1

        await memory_cache.set(key, value, namespace=namespace)
        result = await memory_cache.get(key, namespace=namespace)

        assert result == value


class TestTimingPlugin:
    @pytest.mark.parametrize(
        "data, ratio",
        [
            ({"testa": 1, "testb": 2, "testc": 3}, 0.6),
            ({"testa": 1, "testz": 0}, 0.2),
            ({}, 0),
            ({"testa": 1, "testb": 2, "testc": 3, "testd": 4, "teste": 5}, 1),
        ],
    )
    async def test_get_avg_min_max(self, memory_cache, data, ratio):
        keys = ["a", "b", "c", "d", "e", "f"]
        memory_cache.plugins = [TimingPlugin()]
        memory_cache._cache = data
        for key in keys:
            await memory_cache.get(key)

        assert "get_max" in memory_cache.profiling
        assert "get_min" in memory_cache.profiling
        assert "get_total" in memory_cache.profiling
        assert "get_avg" in memory_cache.profiling

# File: aiocache-0.12.2/tests/acceptance/test_serializers.py
import pickle
import random

import pytest
from marshmallow import Schema, fields, post_load

try:
    import ujson as json  # noqa: I900
except ImportError:
    import json  # type: ignore[no-redef]

from aiocache.serializers import (
    BaseSerializer,
    JsonSerializer,
    NullSerializer,
    PickleSerializer,
    StringSerializer,
)

from ..utils import Keys


class MyType:
    MY_CONSTANT = "CONSTANT"

    def __init__(self, r=None):
        self.r = r or random.randint(1, 10)

    def __eq__(self, obj):
        return self.__dict__ == obj.__dict__


class MyTypeSchema(Schema, BaseSerializer):
    r = fields.Integer()
    encoding = "utf-8"

    def dumps(self, *args, **kwargs):
        return super().dumps(*args, **kwargs)

    def loads(self, *args, **kwargs):
        return super().loads(*args, **kwargs)

    @post_load
    def build_my_type(self, data, **kwargs):
        return MyType(**data)

    class Meta:
        strict = True


class TestNullSerializer:
    TYPES = (1, 2.0, "hi", True, ["1", 1], {"key": "value"}, MyType())

    @pytest.mark.parametrize("obj", TYPES)
    async def test_set_get_types(self, memory_cache, obj):
        memory_cache.serializer = NullSerializer()
        assert await memory_cache.set(Keys.KEY, obj) is True
        assert await memory_cache.get(Keys.KEY) is obj

    @pytest.mark.parametrize("obj", TYPES)
    async def test_add_get_types(self, memory_cache, obj):
        memory_cache.serializer = NullSerializer()
        assert await memory_cache.add(Keys.KEY, obj) is True
        assert await memory_cache.get(Keys.KEY) is obj

    @pytest.mark.parametrize("obj", TYPES)
    async def test_multi_set_multi_get_types(self, memory_cache, obj):
        memory_cache.serializer = NullSerializer()
        assert await memory_cache.multi_set([(Keys.KEY, obj)]) is True
        assert (await memory_cache.multi_get([Keys.KEY]))[0] is obj


class TestStringSerializer:
    TYPES = (1, 2.0, "hi", True, ["1", 1], {"key": "value"}, MyType())

    @pytest.mark.parametrize("obj", TYPES)
    async def test_set_get_types(self, cache, obj):
        cache.serializer = StringSerializer()
        assert await cache.set(Keys.KEY, obj) is True
        assert await cache.get(Keys.KEY) == str(obj)
    @pytest.mark.parametrize("obj", TYPES)
    async def test_add_get_types(self, cache, obj):
        cache.serializer = StringSerializer()
        assert await cache.add(Keys.KEY, obj) is True
        assert await cache.get(Keys.KEY) == str(obj)

    @pytest.mark.parametrize("obj", TYPES)
    async def test_multi_set_multi_get_types(self, cache, obj):
        cache.serializer = StringSerializer()
        assert await cache.multi_set([(Keys.KEY, obj)]) is True
        assert await cache.multi_get([Keys.KEY]) == [str(obj)]


class TestJsonSerializer:
    TYPES = (1, 2.0, "hi", True, ["1", 1], {"key": "value"})

    @pytest.mark.parametrize("obj", TYPES)
    async def test_set_get_types(self, cache, obj):
        cache.serializer = JsonSerializer()
        assert await cache.set(Keys.KEY, obj) is True
        assert await cache.get(Keys.KEY) == json.loads(json.dumps(obj))

    @pytest.mark.parametrize("obj", TYPES)
    async def test_add_get_types(self, cache, obj):
        cache.serializer = JsonSerializer()
        assert await cache.add(Keys.KEY, obj) is True
        assert await cache.get(Keys.KEY) == json.loads(json.dumps(obj))

    @pytest.mark.parametrize("obj", TYPES)
    async def test_multi_set_multi_get_types(self, cache, obj):
        cache.serializer = JsonSerializer()
        assert await cache.multi_set([(Keys.KEY, obj)]) is True
        assert await cache.multi_get([Keys.KEY]) == [json.loads(json.dumps(obj))]


class TestPickleSerializer:
    TYPES = (1, 2.0, "hi", True, ["1", 1], {"key": "value"}, MyType())

    @pytest.mark.parametrize("obj", TYPES)
    async def test_set_get_types(self, cache, obj):
        cache.serializer = PickleSerializer()
        assert await cache.set(Keys.KEY, obj) is True
        assert await cache.get(Keys.KEY) == pickle.loads(pickle.dumps(obj))

    @pytest.mark.parametrize("obj", TYPES)
    async def test_add_get_types(self, cache, obj):
        cache.serializer = PickleSerializer()
        assert await cache.add(Keys.KEY, obj) is True
        assert await cache.get(Keys.KEY) == pickle.loads(pickle.dumps(obj))

    @pytest.mark.parametrize("obj", TYPES)
    async def test_multi_set_multi_get_types(self, cache, obj):
        cache.serializer = PickleSerializer()
        assert await cache.multi_set([(Keys.KEY, obj)]) is True
        assert await cache.multi_get([Keys.KEY]) == [pickle.loads(pickle.dumps(obj))]


class TestAltSerializers:
    async def test_get_set_alt_serializer_functions(self, cache):
        cache.serializer = StringSerializer()
        await cache.set(Keys.KEY, "value", dumps_fn=lambda _: "v4lu3")
        assert await cache.get(Keys.KEY) == "v4lu3"
        assert await cache.get(Keys.KEY, loads_fn=lambda _: "value") == "value"

    async def test_get_set_alt_serializer_class(self, cache):
        my_serializer = MyTypeSchema()
        my_obj = MyType()
        cache.serializer = my_serializer
        await cache.set(Keys.KEY, my_obj)
        assert await cache.get(Keys.KEY) == my_serializer.loads(my_serializer.dumps(my_obj))

# File: aiocache-0.12.2/tests/performance/__init__.py (empty)

# File: aiocache-0.12.2/tests/performance/conftest.py
import pytest

from aiocache import Cache


@pytest.fixture
async def redis_cache():
    # redis connection pool raises ConnectionError but doesn't wait for conn reuse
    # when exceeding max pool size.
    async with Cache(Cache.REDIS, namespace="test", pool_max_size=1) as cache:
        yield cache


@pytest.fixture
async def memcached_cache():
    async with Cache(Cache.MEMCACHED, namespace="test", pool_size=1) as cache:
        yield cache


# File: aiocache-0.12.2/tests/performance/server.py
import asyncio
import logging
import uuid

from aiohttp import web

from aiocache import Cache

logging.getLogger("aiohttp.access").propagate = False


class CacheManager:
    def __init__(self, backend: str):
        backends = {
            "memory": Cache.MEMORY,
            "redis": Cache.REDIS,
            "memcached": Cache.MEMCACHED,
        }
        self.cache = Cache(backends[backend])

    async def get(self, key):
        return await self.cache.get(key, timeout=0.1)

    async def set(self, key, value):
        return await self.cache.set(key, value, timeout=0.1)

    async def close(self, *_):
        await self.cache.close()


async def handler_get(req):
    try:
        data = await req.app["cache"].get("testkey")
        if data:
            return web.Response(text=data)
    except asyncio.TimeoutError:
        return web.Response(status=404)

    data = str(uuid.uuid4())
    await req.app["cache"].set("testkey", data)
    return web.Response(text=str(data))


def run_server(backend: str) -> None:
    app = web.Application()
    app["cache"] = CacheManager(backend)
    app.on_shutdown.append(app["cache"].close)
    app.router.add_route("GET", "/", handler_get)
    web.run_app(app)


# File: aiocache-0.12.2/tests/performance/test_concurrency.py
import platform
import re
import subprocess
import time
from multiprocessing import Process

import pytest

from .server import run_server


# TODO: Fix and readd "memcached" (currently fails >98% of requests)
@pytest.fixture(params=("memory", "redis"))
def server(request):
    p = Process(target=run_server, args=(request.param,))
    p.start()
    time.sleep(1)
    yield
    p.terminate()
    p.join(timeout=15)


@pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="Not working currently.")
def test_concurrency_error_rates(server):
    """Test with Apache benchmark tool."""
    total_requests = 1500
    # On some platforms, it's required to enlarge number of "open file descriptors"
    # with "ulimit -n number" before doing the benchmark.
    cmd = ("ab", "-n", str(total_requests), "-c", "500", "http://127.0.0.1:8080/")
    result = subprocess.run(cmd, capture_output=True, check=True, encoding="utf-8")

    m = re.search(r"Failed requests:\s+([0-9]+)", result.stdout)
    assert m, "Missing output from ab: " + result.stdout
    failed_requests = int(m.group(1))
    m = re.search(r"Non-2xx responses:\s+([0-9]+)", result.stdout)
    non_200 = int(m.group(1)) if m else 0

    assert failed_requests / total_requests < 0.75, result.stdout
    assert non_200 / total_requests < 0.75, result.stdout


# File: aiocache-0.12.2/tests/performance/test_footprint.py
import platform
import time

import aiomcache
import pytest
import redis.asyncio as redis


@pytest.fixture
async def redis_client() -> redis.Redis:
    async with redis.Redis(host="127.0.0.1", port=6379, max_connections=1) as r:
        yield r


@pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="Too slow")
class TestRedis:
    async def test_redis_getsetdel(self, redis_client, redis_cache):
        N = 10000

        redis_total_time = 0
        for _n in range(N):
            start = time.time()
            await redis_client.set("hi", "value")
            await redis_client.get("hi")
            await redis_client.delete("hi")
            redis_total_time += time.time() - start

        aiocache_total_time = 0
        for _n in range(N):
            start = time.time()
            await redis_cache.set("hi", "value", timeout=0)
            await redis_cache.get("hi", timeout=0)
            await redis_cache.delete("hi", timeout=0)
            aiocache_total_time += time.time() - start

        print(
            "\n{:0.2f}/{:0.2f}: {:0.2f}".format(
                aiocache_total_time, redis_total_time, aiocache_total_time / redis_total_time
            )
        )
        print("aiocache avg call: {:0.5f}s".format(aiocache_total_time / N))
        print("redis avg call: {:0.5f}s".format(redis_total_time / N))
        assert aiocache_total_time / redis_total_time < 1.35

    async def test_redis_multigetsetdel(self, redis_client, redis_cache):
        N = 5000
        redis_total_time = 0
        values = ["a", "b", "c", "d", "e", "f"]
        for _n in range(N):
            start = time.time()
            await redis_client.mset({x: x for x in values})
            await redis_client.mget(values)
            for k in values:
                await redis_client.delete(k)
            redis_total_time += time.time() - start

        aiocache_total_time = 0
        for _n in range(N):
            start = time.time()
            await redis_cache.multi_set([(x, x) for x in values], timeout=0)
            await redis_cache.multi_get(values, timeout=0)
            for k in values:
                await redis_cache.delete(k, timeout=0)
            aiocache_total_time += time.time() - start

        print(
            "\n{:0.2f}/{:0.2f}: {:0.2f}".format(
                aiocache_total_time, redis_total_time, aiocache_total_time / redis_total_time
            )
        )
        print("aiocache avg call: {:0.5f}s".format(aiocache_total_time / N))
        print("redis_client avg call: {:0.5f}s".format(redis_total_time / N))
        assert aiocache_total_time / redis_total_time < 1.35


@pytest.fixture
async def aiomcache_pool():
    client = aiomcache.Client("127.0.0.1", 11211, pool_size=1)
    yield client
    await client.close()


@pytest.mark.skipif(platform.python_implementation() == "PyPy", reason="Too slow")
class TestMemcached:
    async def test_memcached_getsetdel(self, aiomcache_pool, memcached_cache):
        N = 10000

        aiomcache_total_time = 0
        for _n in range(N):
            start = time.time()
            await aiomcache_pool.set(b"hi", b"value")
            await aiomcache_pool.get(b"hi")
            await aiomcache_pool.delete(b"hi")
            aiomcache_total_time += time.time() - start

        aiocache_total_time = 0
        for _n in range(N):
            start = time.time()
            await memcached_cache.set("hi", "value", timeout=0)
            await memcached_cache.get("hi", timeout=0)
            await memcached_cache.delete("hi", timeout=0)
            aiocache_total_time += time.time() - start

        print(
            "\n{:0.2f}/{:0.2f}: {:0.2f}".format(
                aiocache_total_time,
                aiomcache_total_time,
                aiocache_total_time / aiomcache_total_time,
            )
        )
        print("aiocache avg call: {:0.5f}s".format(aiocache_total_time / N))
        print("aiomcache avg call: {:0.5f}s".format(aiomcache_total_time / N))
        assert aiocache_total_time / aiomcache_total_time < 1.40

    async def test_memcached_multigetsetdel(self, aiomcache_pool, memcached_cache):
        N = 2000

        aiomcache_total_time = 0
        values = [b"a", b"b", b"c", b"d", b"e", b"f"]
        for _n in range(N):
            start = time.time()
            for k in values:
                await aiomcache_pool.set(k, k)
            await aiomcache_pool.multi_get(*values)
            for k in values:
                await aiomcache_pool.delete(k)
            aiomcache_total_time += time.time() - start

        aiocache_total_time = 0
        values = ["a", "b", "c", "d", "e", "f"]
        for _n in range(N):
            start = time.time()
            await memcached_cache.multi_set([(x, x) for x in values], timeout=0)
            await memcached_cache.multi_get(values, timeout=0)
            for k in values:
                await memcached_cache.delete(k, timeout=0)
            aiocache_total_time += time.time() - start

        print(
            "\n{:0.2f}/{:0.2f}: {:0.2f}".format(
                aiocache_total_time,
                aiomcache_total_time,
                aiocache_total_time / aiomcache_total_time,
            )
        )
        print("aiocache avg call: {:0.5f}s".format(aiocache_total_time / N))
        print("aiomcache avg call: {:0.5f}s".format(aiomcache_total_time / N))
        assert aiocache_total_time / aiomcache_total_time < 1.40


# File: aiocache-0.12.2/tests/ut/__init__.py (empty)
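The footprint tests above all follow the same pattern: accumulate wall-clock deltas over N round-trips and bound the ratio of the aiocache wrapper's total time to the raw client's. A minimal standalone sketch of that measurement loop — the `timed_calls` helper and the sample workload are illustrative, not part of the test suite:

```python
import time


def timed_calls(fn, n):
    # Accumulate wall-clock time over n calls, mirroring the
    # time.time() delta loops in the footprint tests above.
    total = 0.0
    for _ in range(n):
        start = time.time()
        fn()
        total += time.time() - start
    return total


baseline = timed_calls(lambda: sum(range(100)), 1000)
wrapped = timed_calls(lambda: sum(range(100)), 1000)
# The tests then assert a bounded overhead ratio,
# e.g. aiocache_total_time / redis_total_time < 1.35.
ratio = wrapped / baseline
```

Summing deltas around each call, rather than timing the whole loop once, keeps loop bookkeeping out of the measurement at the cost of extra `time.time()` calls.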
# File: aiocache-0.12.2/tests/ut/backends/__init__.py (empty)


# File: aiocache-0.12.2/tests/ut/backends/test_memcached.py
from unittest.mock import AsyncMock, patch

import aiomcache
import pytest

from aiocache.backends.memcached import MemcachedBackend, MemcachedCache
from aiocache.base import BaseCache, _ensure_key
from aiocache.serializers import JsonSerializer
from ...utils import Keys


@pytest.fixture
def memcached():
    memcached = MemcachedBackend()
    with patch.object(memcached, "client", autospec=True) as m:
        # Autospec messes up the signature on the decorated methods.
        for method in (
            "get", "gets", "multi_get", "stats", "set", "cas", "replace",
            "append", "prepend", "incr", "decr", "touch", "version", "flush_all",
        ):
            setattr(m, method, AsyncMock(return_value=None, spec_set=()))
        m.add = AsyncMock(return_value=True, spec_set=())
        m.delete = AsyncMock(return_value=True, spec_set=())
        yield memcached


class TestMemcachedBackend:
    def test_setup(self):
        with patch.object(aiomcache, "Client", autospec=True) as aiomcache_client:
            memcached = MemcachedBackend()
        aiomcache_client.assert_called_with("127.0.0.1", 11211, pool_size=2)
        assert memcached.endpoint == "127.0.0.1"
        assert memcached.port == 11211
        assert memcached.pool_size == 2

    def test_setup_override(self):
        with patch.object(aiomcache, "Client", autospec=True) as aiomcache_client:
            memcached = MemcachedBackend(endpoint="127.0.0.2", port=2, pool_size=10)
        aiomcache_client.assert_called_with("127.0.0.2", 2, pool_size=10)
        assert memcached.endpoint == "127.0.0.2"
        assert memcached.port == 2
        assert memcached.pool_size == 10

    def test_setup_casts(self):
        with patch.object(aiomcache, "Client", autospec=True) as aiomcache_client:
            memcached = MemcachedBackend(pool_size="10")
        aiomcache_client.assert_called_with("127.0.0.1", 11211, pool_size=10)
        assert memcached.pool_size == 10

    async def test_get(self, memcached):
        memcached.client.get.return_value = b"value"
        assert await memcached._get(Keys.KEY) == "value"
        memcached.client.get.assert_called_with(Keys.KEY)

    async def test_gets(self, memcached):
        memcached.client.gets.return_value = b"value", 12345
        assert await memcached._gets(Keys.KEY) == 12345
        memcached.client.gets.assert_called_with(Keys.KEY.encode())

    async def test_get_none(self, memcached):
        memcached.client.get.return_value = None
        assert await memcached._get(Keys.KEY) is None
        memcached.client.get.assert_called_with(Keys.KEY)

    async def test_get_no_encoding(self, memcached):
        memcached.client.get.return_value = b"value"
        assert await memcached._get(Keys.KEY, encoding=None) == b"value"
        memcached.client.get.assert_called_with(Keys.KEY)

    async def test_set(self, memcached):
        await memcached._set(Keys.KEY, "value")
        memcached.client.set.assert_called_with(Keys.KEY, b"value", exptime=0)

        await memcached._set(Keys.KEY, "value", ttl=1)
        memcached.client.set.assert_called_with(Keys.KEY, b"value", exptime=1)

    async def test_set_float_ttl(self, memcached):
        memcached.client.set.side_effect = aiomcache.exceptions.ValidationException("msg")
        with pytest.raises(TypeError) as exc_info:
            await memcached._set(Keys.KEY, "value", ttl=0.1)
        assert str(exc_info.value) == "aiomcache error: msg"

    async def test_set_cas_token(self, mocker, memcached):
        mocker.spy(memcached, "_cas")
        await memcached._set(Keys.KEY, "value", _cas_token="token")
        memcached._cas.assert_called_with(Keys.KEY, b"value", "token", ttl=0, _conn=None)

    async def test_cas(self, memcached):
        memcached.client.cas.return_value = True
        assert await memcached._cas(Keys.KEY, b"value", "token", ttl=0) is True
        memcached.client.cas.assert_called_with(Keys.KEY, b"value", "token", exptime=0)

    async def test_cas_fail(self, memcached):
        memcached.client.cas.return_value = False
        assert await memcached._cas(Keys.KEY, b"value", "token", ttl=0) is False
        memcached.client.cas.assert_called_with(Keys.KEY, b"value", "token", exptime=0)

    async def test_multi_get(self, memcached):
        memcached.client.multi_get.return_value = [b"value", b"random"]
        assert await memcached._multi_get([Keys.KEY, Keys.KEY_1]) == ["value", "random"]
        memcached.client.multi_get.assert_called_with(Keys.KEY, Keys.KEY_1)

    async def test_multi_get_none(self, memcached):
        memcached.client.multi_get.return_value = [b"value", None]
        assert await memcached._multi_get([Keys.KEY, Keys.KEY_1]) == ["value", None]
        memcached.client.multi_get.assert_called_with(Keys.KEY, Keys.KEY_1)

    async def test_multi_get_no_encoding(self, memcached):
        memcached.client.multi_get.return_value = [b"value", None]
        assert await memcached._multi_get([Keys.KEY, Keys.KEY_1], encoding=None) == [
            b"value",
            None,
        ]
        memcached.client.multi_get.assert_called_with(Keys.KEY, Keys.KEY_1)

    async def test_multi_set(self, memcached):
        await memcached._multi_set([(Keys.KEY, "value"), (Keys.KEY_1, "random")])
        memcached.client.set.assert_any_call(Keys.KEY, b"value", exptime=0)
        memcached.client.set.assert_any_call(Keys.KEY_1, b"random", exptime=0)
        assert memcached.client.set.call_count == 2

        await memcached._multi_set([(Keys.KEY, "value"), (Keys.KEY_1, "random")], ttl=1)
        memcached.client.set.assert_any_call(Keys.KEY, b"value", exptime=1)
        memcached.client.set.assert_any_call(Keys.KEY_1, b"random", exptime=1)
        assert memcached.client.set.call_count == 4

    async def test_multi_set_float_ttl(self, memcached):
        memcached.client.set.side_effect = aiomcache.exceptions.ValidationException("msg")
        with pytest.raises(TypeError) as exc_info:
            await memcached._multi_set([(Keys.KEY, "value"), (Keys.KEY_1, "random")], ttl=0.1)
        assert str(exc_info.value) == "aiomcache error: msg"

    async def test_add(self, memcached):
        await memcached._add(Keys.KEY, "value")
        memcached.client.add.assert_called_with(Keys.KEY, b"value", exptime=0)

        await memcached._add(Keys.KEY, "value", ttl=1)
        memcached.client.add.assert_called_with(Keys.KEY, b"value", exptime=1)

    async def test_add_existing(self, memcached):
        memcached.client.add.return_value = False
        with pytest.raises(ValueError):
            await memcached._add(Keys.KEY, "value")

    async def test_add_float_ttl(self, memcached):
        memcached.client.add.side_effect = aiomcache.exceptions.ValidationException("msg")
        with pytest.raises(TypeError) as exc_info:
            await memcached._add(Keys.KEY, "value", 0.1)
        assert str(exc_info.value) == "aiomcache error: msg"

    async def test_exists(self, memcached):
        await memcached._exists(Keys.KEY)
        memcached.client.append.assert_called_with(Keys.KEY, b"")

    async def test_increment(self, memcached):
        await memcached._increment(Keys.KEY, 2)
        memcached.client.incr.assert_called_with(Keys.KEY, 2)

    async def test_increment_negative(self, memcached):
        await memcached._increment(Keys.KEY, -2)
        memcached.client.decr.assert_called_with(Keys.KEY, 2)

    async def test_increment_missing(self, memcached):
        memcached.client.incr.side_effect = aiomcache.exceptions.ClientException("NOT_FOUND")
        await memcached._increment(Keys.KEY, 2)
        memcached.client.incr.assert_called_with(Keys.KEY, 2)
        memcached.client.set.assert_called_with(Keys.KEY, b"2", exptime=0)

    async def test_increment_missing_negative(self, memcached):
        memcached.client.decr.side_effect = aiomcache.exceptions.ClientException("NOT_FOUND")
        await memcached._increment(Keys.KEY, -2)
        memcached.client.decr.assert_called_with(Keys.KEY, 2)
        memcached.client.set.assert_called_with(Keys.KEY, b"-2", exptime=0)

    async def test_increment_typerror(self, memcached):
        memcached.client.incr.side_effect = aiomcache.exceptions.ClientException("msg")
        with pytest.raises(TypeError) as exc_info:
            await memcached._increment(Keys.KEY, 2)
        assert str(exc_info.value) == "aiomcache error: msg"

    async def test_expire(self, memcached):
        await memcached._expire(Keys.KEY, 1)
        memcached.client.touch.assert_called_with(Keys.KEY, 1)

    async def test_delete(self, memcached):
        assert await memcached._delete(Keys.KEY) == 1
        memcached.client.delete.assert_called_with(Keys.KEY)

    async def test_delete_missing(self, memcached):
        memcached.client.delete.return_value = False
        assert await memcached._delete(Keys.KEY) == 0
        memcached.client.delete.assert_called_with(Keys.KEY)

    async def test_clear(self, memcached):
        await memcached._clear()
        memcached.client.flush_all.assert_called_with()

    async def test_clear_with_namespace(self, memcached):
        with pytest.raises(ValueError):
            await memcached._clear("nm")

    async def test_raw(self, memcached):
        await memcached._raw("get", Keys.KEY)
        await memcached._raw("set", Keys.KEY, 1)
        memcached.client.get.assert_called_with(Keys.KEY)
        memcached.client.set.assert_called_with(Keys.KEY, 1)

    async def test_raw_bytes(self, memcached):
        await memcached._raw("set", Keys.KEY, "asd")
        await memcached._raw("get", Keys.KEY, encoding=None)
        memcached.client.get.assert_called_with(Keys.KEY)
        memcached.client.set.assert_called_with(Keys.KEY, "asd")

    async def test_redlock_release(self, mocker, memcached):
        mocker.spy(memcached, "_delete")
        await memcached._redlock_release(Keys.KEY, "random")
        memcached._delete.assert_called_with(Keys.KEY)

    async def test_close(self, memcached):
        await memcached._close()
        assert memcached.client.close.call_count == 1


class TestMemcachedCache:
    @pytest.fixture
    def set_test_namespace(self, memcached_cache):
        memcached_cache.namespace = "test"
        yield
        memcached_cache.namespace = None

    def test_name(self):
        assert MemcachedCache.NAME == "memcached"

    def test_inheritance(self):
        assert isinstance(MemcachedCache(), BaseCache)

    def test_default_serializer(self):
        assert isinstance(MemcachedCache().serializer, JsonSerializer)

    def test_parse_uri_path(self):
        assert MemcachedCache().parse_uri_path("/1/2/3") == {}

    @pytest.mark.parametrize(
        "namespace, expected",
        ([None, "test" + _ensure_key(Keys.KEY)],
         ["", _ensure_key(Keys.KEY)],
         ["my_ns", "my_ns" + _ensure_key(Keys.KEY)]),  # type: ignore[attr-defined]  # noqa: B950
    )
    def test_build_key_bytes(self, set_test_namespace, memcached_cache, namespace, expected):
        assert memcached_cache.build_key(Keys.KEY, namespace=namespace) == expected.encode()

    def test_build_key_no_namespace(self, memcached_cache):
        assert memcached_cache.build_key(Keys.KEY, namespace=None) == Keys.KEY.encode()

    def test_build_key_no_spaces(self, memcached_cache):
        assert memcached_cache.build_key("hello world") == b"hello_world"


# File: aiocache-0.12.2/tests/ut/backends/test_memory.py
import asyncio
from unittest.mock import ANY, MagicMock, create_autospec, patch

import pytest

from aiocache.backends.memory import SimpleMemoryBackend, SimpleMemoryCache
from aiocache.base import BaseCache
from aiocache.serializers import NullSerializer
from ...utils import Keys


@pytest.fixture
def memory(mocker):
    memory = SimpleMemoryBackend()
    mocker.spy(memory, "_cache")
    return memory


class TestSimpleMemoryBackend:
    async def test_get(self, memory):
        await memory._get(Keys.KEY)
        memory._cache.get.assert_called_with(Keys.KEY)

    async def test_gets(self, mocker, memory):
        mocker.spy(memory, "_get")
        await memory._gets(Keys.KEY)
        memory._get.assert_called_with(Keys.KEY, encoding="utf-8", _conn=ANY)

    async def test_set(self, memory):
        await memory._set(Keys.KEY, "value")
        memory._cache.__setitem__.assert_called_with(Keys.KEY, "value")

    async def test_set_no_ttl_no_handle(self, memory):
        await memory._set(Keys.KEY, "value", ttl=0)
        assert Keys.KEY not in memory._handlers

        await memory._set(Keys.KEY, "value")
        assert Keys.KEY not in memory._handlers

    async def test_set_cancel_previous_ttl_handle(self, memory):
        with patch("asyncio.get_running_loop", autospec=True):
            await memory._set(Keys.KEY, "value", ttl=0.1)
            memory._handlers[Keys.KEY].cancel.assert_not_called()

            await memory._set(Keys.KEY, "new_value", ttl=0.1)
            memory._handlers[Keys.KEY].cancel.assert_called_once_with()

    async def test_set_ttl_handle(self, memory):
        await memory._set(Keys.KEY, "value", ttl=100)
        assert Keys.KEY in memory._handlers
        assert isinstance(memory._handlers[Keys.KEY], asyncio.Handle)

    async def test_set_cas_token(self, memory):
        memory._cache.get.return_value = "old_value"
        assert await memory._set(Keys.KEY, "value", _cas_token="old_value") == 1
        memory._cache.__setitem__.assert_called_with(Keys.KEY, "value")

    async def test_set_cas_fail(self, memory):
        memory._cache.get.return_value = "value"
        assert await memory._set(Keys.KEY, "value", _cas_token="old_value") == 0
        assert memory._cache.__setitem__.call_count == 0

    async def test_multi_get(self, memory):
        await memory._multi_get([Keys.KEY, Keys.KEY_1])
        memory._cache.get.assert_any_call(Keys.KEY)
        memory._cache.get.assert_any_call(Keys.KEY_1)

    async def test_multi_set(self, memory):
        await memory._multi_set([(Keys.KEY, "value"), (Keys.KEY_1, "random")])
        memory._cache.__setitem__.assert_any_call(Keys.KEY, "value")
        memory._cache.__setitem__.assert_any_call(Keys.KEY_1, "random")

    async def test_add(self, memory, mocker):
        mocker.spy(memory, "_set")
        await memory._add(Keys.KEY, "value")
        memory._set.assert_called_with(Keys.KEY, "value", ttl=None)

    async def test_add_existing(self, memory):
        memory._cache.__contains__.return_value = True
        with pytest.raises(ValueError):
            await memory._add(Keys.KEY, "value")

    async def test_exists(self, memory):
        await memory._exists(Keys.KEY)
        memory._cache.__contains__.assert_called_with(Keys.KEY)

    async def test_increment(self, memory):
        await memory._increment(Keys.KEY, 2)
        memory._cache.__contains__.assert_called_with(Keys.KEY)
        memory._cache.__setitem__.assert_called_with(Keys.KEY, 2)

    async def test_increment_missing(self, memory):
        memory._cache.__contains__.return_value = True
        memory._cache.__getitem__.return_value = 2
        await memory._increment(Keys.KEY, 2)
        memory._cache.__getitem__.assert_called_with(Keys.KEY)
        memory._cache.__setitem__.assert_called_with(Keys.KEY, 4)

    async def test_increment_typerror(self, memory):
        memory._cache.__contains__.return_value = True
        memory._cache.__getitem__.return_value = "asd"
        with pytest.raises(TypeError):
            await memory._increment(Keys.KEY, 2)

    async def test_expire_no_handle_no_ttl(self, memory):
        memory._cache.__contains__.return_value = True
        await memory._expire(Keys.KEY, 0)
        assert memory._handlers.get(Keys.KEY) is None

    async def test_expire_no_handle_ttl(self, memory):
        memory._cache.__contains__.return_value = True
        await memory._expire(Keys.KEY, 1)
        assert isinstance(memory._handlers.get(Keys.KEY), asyncio.Handle)

    async def test_expire_handle_ttl(self, memory):
        fake = create_autospec(asyncio.TimerHandle, instance=True)
        memory._handlers[Keys.KEY] = fake
        memory._cache.__contains__.return_value = True
        await memory._expire(Keys.KEY, 1)
        assert fake.cancel.call_count == 1
        assert isinstance(memory._handlers.get(Keys.KEY), asyncio.Handle)

    async def test_expire_missing(self, memory):
        memory._cache.__contains__.return_value = False
        assert await memory._expire(Keys.KEY, 1) is False

    async def test_delete(self, memory):
        fake = create_autospec(asyncio.TimerHandle, instance=True)
        memory._handlers[Keys.KEY] = fake
        await memory._delete(Keys.KEY)
        assert fake.cancel.call_count == 1
        assert Keys.KEY not in memory._handlers
        memory._cache.pop.assert_called_with(Keys.KEY, None)

    async def test_delete_missing(self, memory):
        memory._cache.pop.return_value = None
        await memory._delete(Keys.KEY)
        memory._cache.pop.assert_called_with(Keys.KEY, None)

    async def test_delete_non_truthy(self, memory):
        non_truthy = MagicMock(spec_set=("__bool__",))
        non_truthy.__bool__.side_effect = ValueError("Does not implement truthiness")

        with pytest.raises(ValueError):
            bool(non_truthy)

        memory._cache.pop.return_value = non_truthy
        await memory._delete(Keys.KEY)

        assert non_truthy.__bool__.call_count == 1
        memory._cache.pop.assert_called_with(Keys.KEY, None)

    async def test_clear_namespace(self, memory):
        memory._cache.__iter__.return_value = iter(["nma", "nmb", "no"])
        await memory._clear("nm")
        assert memory._cache.pop.call_count == 2
        memory._cache.pop.assert_any_call("nma", None)
        memory._cache.pop.assert_any_call("nmb", None)

    async def test_clear_no_namespace(self, memory):
        memory._handlers = "asdad"
        memory._cache = "asdad"
        await memory._clear()
        memory._handlers = {}
        memory._cache = {}

    async def test_raw(self, memory):
        await memory._raw("get", Keys.KEY)
        memory._cache.get.assert_called_with(Keys.KEY)

        await memory._set(Keys.KEY, "value")
        memory._cache.__setitem__.assert_called_with(Keys.KEY, "value")

    async def test_redlock_release(self, memory):
        memory._cache.get.return_value = "lock"
        assert await memory._redlock_release(Keys.KEY, "lock") == 1
        memory._cache.get.assert_called_with(Keys.KEY)
        memory._cache.pop.assert_called_with(Keys.KEY)

    async def test_redlock_release_nokey(self, memory):
        memory._cache.get.return_value = None
        assert await memory._redlock_release(Keys.KEY, "lock") == 0
        memory._cache.get.assert_called_with(Keys.KEY)
        assert memory._cache.pop.call_count == 0


class TestSimpleMemoryCache:
    def test_name(self):
        assert SimpleMemoryCache.NAME == "memory"

    def test_inheritance(self):
        assert isinstance(SimpleMemoryCache(), BaseCache)

    def test_default_serializer(self):
        assert isinstance(SimpleMemoryCache().serializer, NullSerializer)

    def test_parse_uri_path(self):
        assert SimpleMemoryCache().parse_uri_path("/1/2/3") == {}


# File: aiocache-0.12.2/tests/ut/backends/test_redis.py
from unittest.mock import ANY, AsyncMock, create_autospec, patch

import pytest
from redis.asyncio.client import Pipeline
from redis.exceptions import ResponseError

from aiocache.backends.redis import RedisBackend, RedisCache
from aiocache.base import BaseCache, _ensure_key
from aiocache.serializers import JsonSerializer
from ...utils import Keys


@pytest.fixture
def redis():
    redis = RedisBackend()
    with patch.object(redis, "client", autospec=True) as m:
        # These methods actually return an awaitable.
        for method in (
            "eval", "expire", "get", "psetex", "setex", "execute_command",
            "exists", "incrby", "persist", "delete", "keys", "flushdb",
        ):
            setattr(m, method, AsyncMock(return_value=None, spec_set=()))
        m.mget = AsyncMock(return_value=[None], spec_set=())
        m.set = AsyncMock(return_value=True, spec_set=())

        m.pipeline.return_value = create_autospec(Pipeline, instance=True)
        m.pipeline.return_value.__aenter__.return_value = m.pipeline.return_value
        yield redis


class TestRedisBackend:
    default_redis_kwargs = {
        "host": "127.0.0.1",
        "port": 6379,
        "db": 0,
        "password": None,
        "socket_connect_timeout": None,
        "decode_responses": False,
        "max_connections": None,
    }

    @patch("redis.asyncio.Redis", name="mock_class", autospec=True)
    def test_setup(self, mock_class):
        redis_backend = RedisBackend()
        kwargs = self.default_redis_kwargs.copy()
        mock_class.assert_called_with(**kwargs)
        assert redis_backend.endpoint == "127.0.0.1"
        assert redis_backend.port == 6379
        assert redis_backend.db == 0
        assert redis_backend.password is None
        assert redis_backend.pool_max_size is None

    @patch("redis.asyncio.Redis", name="mock_class", autospec=True)
    def test_setup_override(self, mock_class):
        override = {"db": 2, "password": "pass"}
        redis_backend = RedisBackend(**override)
        kwargs = self.default_redis_kwargs.copy()
        kwargs.update(override)
        mock_class.assert_called_with(**kwargs)
        assert redis_backend.endpoint == "127.0.0.1"
        assert redis_backend.port == 6379
        assert redis_backend.db == 2
        assert redis_backend.password == "pass"

    @patch("redis.asyncio.Redis", name="mock_class", autospec=True)
    def test_setup_casts(self, mock_class):
        override = {
            "db": "2",
            "port": "6379",
            "pool_max_size": "10",
            "create_connection_timeout": "1.5",
        }
        redis_backend = RedisBackend(**override)
        kwargs = self.default_redis_kwargs.copy()
        kwargs.update({
            "db": 2,
            "port": 6379,
            "max_connections": 10,
            "socket_connect_timeout": 1.5,
        })
        mock_class.assert_called_with(**kwargs)
        assert redis_backend.db == 2
        assert redis_backend.port == 6379
        assert redis_backend.pool_max_size == 10
        assert redis_backend.create_connection_timeout == 1.5

    async def test_get(self, redis):
        redis.client.get.return_value = b"value"
        assert await redis._get(Keys.KEY) == "value"
        redis.client.get.assert_called_with(Keys.KEY)

    async def test_gets(self, mocker, redis):
        mocker.spy(redis, "_get")
        await redis._gets(Keys.KEY)
        redis._get.assert_called_with(Keys.KEY, encoding="utf-8", _conn=ANY)

    async def test_set(self, redis):
        await redis._set(Keys.KEY, "value")
        redis.client.set.assert_called_with(Keys.KEY, "value")

        await redis._set(Keys.KEY, "value", ttl=1)
        redis.client.setex.assert_called_with(Keys.KEY, 1, "value")

    async def test_set_cas_token(self, mocker, redis):
        mocker.spy(redis, "_cas")
        await redis._set(Keys.KEY, "value", _cas_token="old_value", _conn=redis.client)
        redis._cas.assert_called_with(
            Keys.KEY, "value", "old_value", ttl=None, _conn=redis.client
        )

    async def test_cas(self, mocker, redis):
        mocker.spy(redis, "_raw")
        await redis._cas(Keys.KEY, "value", "old_value", ttl=10, _conn=redis.client)
        redis._raw.assert_called_with(
            "eval",
            redis.CAS_SCRIPT,
            1,
            *[Keys.KEY, "value", "old_value", "EX", 10],
            _conn=redis.client,
        )

    async def test_cas_float_ttl(self, mocker, redis):
        mocker.spy(redis, "_raw")
        await redis._cas(Keys.KEY, "value", "old_value", ttl=0.1, _conn=redis.client)
        redis._raw.assert_called_with(
            "eval",
            redis.CAS_SCRIPT,
            1,
            *[Keys.KEY, "value", "old_value", "PX", 100],
            _conn=redis.client,
        )

    async def test_multi_get(self, redis):
        await redis._multi_get([Keys.KEY, Keys.KEY_1])
        redis.client.mget.assert_called_with(Keys.KEY, Keys.KEY_1)

    async def test_multi_set(self, redis):
        await redis._multi_set([(Keys.KEY, "value"), (Keys.KEY_1, "random")])
        redis.client.execute_command.assert_called_with(
            "MSET", Keys.KEY, "value", Keys.KEY_1, "random"
        )

    async def test_multi_set_with_ttl(self, redis):
        await redis._multi_set([(Keys.KEY, "value"), (Keys.KEY_1, "random")], ttl=1)
        assert redis.client.pipeline.call_count == 1
        pipeline = redis.client.pipeline.return_value
        pipeline.execute_command.assert_called_with(
            "MSET", Keys.KEY, "value", Keys.KEY_1, "random"
        )
        pipeline.expire.assert_any_call(Keys.KEY, time=1)
        pipeline.expire.assert_any_call(Keys.KEY_1, time=1)
        assert pipeline.execute.call_count == 1

    async def test_add(self, redis):
        await redis._add(Keys.KEY, "value")
        redis.client.set.assert_called_with(Keys.KEY, "value", nx=True, ex=None)

        await redis._add(Keys.KEY, "value", 1)
        redis.client.set.assert_called_with(Keys.KEY, "value", nx=True, ex=1)

    async def test_add_existing(self, redis):
        redis.client.set.return_value = False
        with pytest.raises(ValueError):
            await redis._add(Keys.KEY, "value")

    async def test_add_float_ttl(self, redis):
        await redis._add(Keys.KEY, "value", 0.1)
        redis.client.set.assert_called_with(Keys.KEY, "value", nx=True, px=100)

    async def test_exists(self, redis):
        redis.client.exists.return_value = 1
        await redis._exists(Keys.KEY)
        redis.client.exists.assert_called_with(Keys.KEY)

    async def test_increment(self, redis):
        await redis._increment(Keys.KEY, delta=2)
        redis.client.incrby.assert_called_with(Keys.KEY, 2)

    async def test_increment_typerror(self, redis):
        redis.client.incrby.side_effect = ResponseError("msg")
        with pytest.raises(TypeError):
            await redis._increment(Keys.KEY, delta=2)
        redis.client.incrby.assert_called_with(Keys.KEY, 2)

    async def test_expire(self, redis):
        await redis._expire(Keys.KEY, 1)
        redis.client.expire.assert_called_with(Keys.KEY, 1)
        await redis._increment(Keys.KEY, 2)

    async def test_expire_0_ttl(self, redis):
        await redis._expire(Keys.KEY, ttl=0)
        redis.client.persist.assert_called_with(Keys.KEY)

    async def test_delete(self, redis):
        await redis._delete(Keys.KEY)
        redis.client.delete.assert_called_with(Keys.KEY)

    async def test_clear(self, redis):
        redis.client.keys.return_value = ["nm:a", "nm:b"]
        await redis._clear("nm")
        redis.client.delete.assert_called_with("nm:a", "nm:b")

    async def test_clear_no_keys(self, redis):
        redis.client.keys.return_value = []
        await redis._clear("nm")
        redis.client.delete.assert_not_called()

    async def test_clear_no_namespace(self, redis):
        await redis._clear()
        assert redis.client.flushdb.call_count == 1

    async def test_raw(self, redis):
        await redis._raw("get", Keys.KEY)
        await redis._raw("set", Keys.KEY, 1)
        redis.client.get.assert_called_with(Keys.KEY)
        redis.client.set.assert_called_with(Keys.KEY, 1)

    async def test_redlock_release(self, mocker, redis):
        mocker.spy(redis, "_raw")
        await redis._redlock_release(Keys.KEY, "random")
        redis._raw.assert_called_with("eval", redis.RELEASE_SCRIPT, 1, Keys.KEY, "random")

    async def test_close(self, redis):
        await redis._close()
        assert redis.client.close.call_count == 1


class TestRedisCache:
    @pytest.fixture
    def set_test_namespace(self, redis_cache):
        redis_cache.namespace = "test"
        yield
        redis_cache.namespace = None

    def test_name(self):
        assert RedisCache.NAME == "redis"

    def test_inheritance(self):
        assert isinstance(RedisCache(), BaseCache)

    def test_default_serializer(self):
        assert isinstance(RedisCache().serializer, JsonSerializer)

    @pytest.mark.parametrize(
        "path,expected",
        [("", {}), ("/", {}), ("/1", {"db": "1"}), ("/1/2/3", {"db": "1"})],
    )
    def test_parse_uri_path(self, path, expected):
        assert RedisCache().parse_uri_path(path) == expected

    @pytest.mark.parametrize(
        "namespace, expected",
        ([None, "test:" + _ensure_key(Keys.KEY)],
         ["", _ensure_key(Keys.KEY)],
         ["my_ns", "my_ns:" + _ensure_key(Keys.KEY)]),  # noqa: B950
    )
    def test_build_key_double_dot(self, set_test_namespace, redis_cache, namespace, expected):
        assert redis_cache.build_key(Keys.KEY, namespace=namespace) == expected

    def test_build_key_no_namespace(self, redis_cache):
        assert redis_cache.build_key(Keys.KEY, namespace=None) == Keys.KEY
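The `build_key` parametrizations above pin down the Redis namespacing rules: an explicit namespace is joined to the key with `:`, an empty string disables prefixing, and `None` falls back to the cache's configured namespace. A standalone sketch of that scheme — the `build_key` function below is an illustrative reimplementation for clarity, not the library's code (and note the memcached variant concatenates without a separator):

```python
def build_key(key, namespace=None, default_namespace=None):
    # Mirrors the rules exercised by test_build_key_double_dot:
    # an explicit namespace wins, "" disables prefixing, and
    # None falls back to the cache-level default namespace.
    ns = namespace if namespace is not None else default_namespace
    return f"{ns}:{key}" if ns else key


build_key("key", namespace=None, default_namespace="test")  # "test:key"
build_key("key", namespace="", default_namespace="test")    # "key"
build_key("key", namespace="my_ns")                         # "my_ns:key"
```

Treating `""` and `None` differently is what lets callers opt out of namespacing per call while still inheriting a default from the cache instance.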
# ---- aiocache-0.12.2/tests/ut/conftest.py ----
import sys
from contextlib import ExitStack
from unittest.mock import create_autospec, patch

import pytest

from aiocache import caches
from aiocache.base import BaseCache
from aiocache.plugins import BasePlugin

if sys.version_info < (3, 8):
    # Missing AsyncMock on 3.7
    collect_ignore_glob = ["*"]


@pytest.fixture(autouse=True)
def reset_caches():
    caches.set_config(
        {
            "default": {
                "cache": "aiocache.SimpleMemoryCache",
                "serializer": {"class": "aiocache.serializers.NullSerializer"},
            }
        }
    )


@pytest.fixture
def mock_cache(mocker):
    return create_autospec(BaseCache, instance=True)


@pytest.fixture
def mock_base_cache():
    """Return BaseCache instance with unimplemented methods mocked out."""
    plugin = create_autospec(BasePlugin, instance=True)
    cache = BaseCache(timeout=0.002, plugins=(plugin,))
    methods = ("_add", "_get", "_gets", "_set", "_multi_get", "_multi_set",
               "_delete", "_exists", "_increment", "_expire", "_clear", "_raw",
               "_close", "_redlock_release", "acquire_conn", "release_conn")
    with ExitStack() as stack:
        for f in methods:
            stack.enter_context(patch.object(cache, f, autospec=True))
        stack.enter_context(patch.object(cache, "_serializer", autospec=True))
        yield cache


@pytest.fixture
def base_cache():
    return BaseCache()


@pytest.fixture
async def redis_cache():
    from aiocache.backends.redis import RedisCache

    async with RedisCache() as cache:
        yield cache


@pytest.fixture
async def memcached_cache():
    from aiocache.backends.memcached import MemcachedCache

    async with MemcachedCache() as cache:
        yield cache


# ---- aiocache-0.12.2/tests/ut/test_base.py ----
import asyncio
import os
from unittest.mock import ANY, AsyncMock, MagicMock, patch

import pytest

from aiocache.base import API, BaseCache, _Conn, _ensure_key

from ..utils import Keys


class TestAPI:
    def test_register(self):
        @API.register
        def dummy():
            """Dummy function."""

        assert dummy in API.CMDS
        API.unregister(dummy)

    def test_unregister(self):
        @API.register
        def dummy():
            """Dummy function."""

        API.unregister(dummy)
        assert dummy not in API.CMDS

    def test_unregister_unexisting(self):
        def dummy():
            """Dummy function."""

        API.unregister(dummy)
        assert dummy not in API.CMDS

    async def test_aiocache_enabled(self):
        @API.aiocache_enabled()
        async def dummy(*args, **kwargs):
            return True

        assert await dummy() is True

    async def test_aiocache_enabled_disabled(self):
        @API.aiocache_enabled(fake_return=[])
        async def dummy(*args, **kwargs):
            """Dummy function."""

        with patch.dict(os.environ, {"AIOCACHE_DISABLE": "1"}):
            assert await dummy() == []

    async def test_timeout_no_timeout(self):
        self = MagicMock(spec_set=("timeout",))
        self.timeout = 0

        @API.timeout
        async def dummy(self):
            self()

        with patch("asyncio.wait_for") as wait_for:
            await dummy(self)
        assert self.call_count == 1
        assert wait_for.call_count == 0

    async def test_timeout_self(self):
        self = MagicMock(spec_set=("timeout",))
        self.timeout = 0.002

        @API.timeout
        async def dummy(self):
            await asyncio.sleep(0.005)

        with pytest.raises(asyncio.TimeoutError):
            await dummy(self)

    async def test_timeout_kwarg_0(self):
        self = MagicMock(spec_set=("timeout",))
        self.timeout = 0.002

        @API.timeout
        async def dummy(self):
            await asyncio.sleep(0.005)
            return True

        assert await dummy(self, timeout=0) is True

    async def test_timeout_kwarg_None(self):
        self = MagicMock(spec_set=("timeout",))
        self.timeout = 0.002

        @API.timeout
        async def dummy(self):
            await asyncio.sleep(0.005)
            return True

        assert await dummy(self, timeout=None) is True

    async def test_timeout_kwarg(self):
        self = MagicMock(spec_set=("timeout",))

        @API.timeout
        async def dummy(self):
            await asyncio.sleep(0.005)

        with pytest.raises(asyncio.TimeoutError):
            await dummy(self, timeout=0.002)

    async def test_timeout_self_kwarg(self):
        self = MagicMock(spec_set=("timeout",))
        self.timeout = 5
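The timeout tests above pin down three behaviors: the instance's `self.timeout` is the default, a per-call `timeout` kwarg overrides it, and passing `0` or `None` disables the limit entirely. A hedged standalone sketch of those semantics (this is an illustration built on `asyncio.wait_for`, not aiocache's actual `API.timeout` implementation; `with_timeout`, `_SENTINEL`, and `Demo` are invented names):

```python
import asyncio
from functools import wraps

# Sentinel distinguishes "timeout not passed" from an explicit timeout=None.
_SENTINEL = object()

def with_timeout(func):
    @wraps(func)
    async def wrapper(self, *args, timeout=_SENTINEL, **kwargs):
        t = self.timeout if timeout is _SENTINEL else timeout
        if not t:  # 0 or None -> no time limit
            return await func(self, *args, **kwargs)
        return await asyncio.wait_for(func(self, *args, **kwargs), t)
    return wrapper

class Demo:
    timeout = 0.01  # instance-level default, like BaseCache(timeout=...)

    @with_timeout
    async def slow(self):
        await asyncio.sleep(0.05)
        return True
```

With this sketch, `Demo().slow()` times out against the 0.01s default, while `Demo().slow(timeout=None)` runs to completion, matching `test_timeout_self` and `test_timeout_kwarg_None` above.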
        @API.timeout
        async def dummy(self):
            await asyncio.sleep(0.005)

        with pytest.raises(asyncio.TimeoutError):
            await dummy(self, timeout=0.003)

    async def test_plugins(self):
        self = MagicMock(spec_set=("plugins",))
        plugin1 = MagicMock(spec_set=("pre_dummy", "post_dummy"))
        plugin1.pre_dummy = AsyncMock(spec_set=())
        plugin1.post_dummy = AsyncMock(spec_set=())
        plugin2 = MagicMock(spec_set=("pre_dummy", "post_dummy"))
        plugin2.pre_dummy = AsyncMock(spec_set=())
        plugin2.post_dummy = AsyncMock(spec_set=())
        self.plugins = (plugin1, plugin2)

        @API.plugins
        async def dummy(self, *args, **kwargs):
            return True

        assert await dummy(self) is True
        plugin1.pre_dummy.assert_called_with(self)
        plugin1.post_dummy.assert_called_with(self, took=ANY, ret=True)
        plugin2.pre_dummy.assert_called_with(self)
        plugin2.post_dummy.assert_called_with(self, took=ANY, ret=True)


class TestBaseCache:
    def test_str_ttl(self):
        cache = BaseCache(ttl="1.5")
        assert cache.ttl == 1.5

    def test_str_timeout(self):
        cache = BaseCache(timeout="1.5")
        assert cache.timeout == 1.5

    async def test_add(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._add(Keys.KEY, "value", 0)

    async def test_get(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._get(Keys.KEY, "utf-8")

    async def test_set(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._set(Keys.KEY, "value", 0)

    async def test_multi_get(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._multi_get([Keys.KEY], encoding="utf-8")

    async def test_multi_set(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._multi_set([(Keys.KEY, "value")], 0)

    async def test_delete(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._delete(Keys.KEY)

    async def test_exists(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._exists(Keys.KEY)

    async def test_increment(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._increment(Keys.KEY, 2)

    async def test_expire(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._expire(Keys.KEY, 0)

    async def test_clear(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._clear("namespace")

    async def test_raw(self, base_cache):
        with pytest.raises(NotImplementedError):
            await base_cache._raw("get", Keys.KEY)

    async def test_close(self, base_cache):
        assert await base_cache._close() is None

    async def test_acquire_conn(self, base_cache):
        assert await base_cache.acquire_conn() == base_cache

    async def test_release_conn(self, base_cache):
        assert await base_cache.release_conn("mock") is None

    @pytest.fixture
    def set_test_namespace(self, base_cache):
        base_cache.namespace = "test"
        yield
        base_cache.namespace = None

    @pytest.mark.parametrize(
        "namespace, expected",
        ([None, "test" + _ensure_key(Keys.KEY)], ["", _ensure_key(Keys.KEY)],
         ["my_ns", "my_ns" + _ensure_key(Keys.KEY)]),  # type: ignore[attr-defined]  # noqa: B950
    )
    def test_build_key(self, set_test_namespace, base_cache, namespace, expected):
        assert base_cache.build_key(Keys.KEY, namespace=namespace) == expected

    def test_alt_build_key(self):
        cache = BaseCache(key_builder=lambda key, namespace: "x")
        assert cache.build_key(Keys.KEY, "namespace") == "x"

    @pytest.fixture
    def alt_base_cache(self, init_namespace="test"):
        """Custom key_builder for cache"""
        def build_key(key, namespace=None):
            ns = namespace if namespace is not None else ""
            sep = ":" if namespace else ""
            return f"{ns}{sep}{_ensure_key(key)}"

        cache = BaseCache(key_builder=build_key, namespace=init_namespace)
        return cache

    @pytest.mark.parametrize(
        "namespace, expected",
        ([None, _ensure_key(Keys.KEY)], ["", _ensure_key(Keys.KEY)],
         ["my_ns", "my_ns:" + _ensure_key(Keys.KEY)]),  # type: ignore[attr-defined]  # noqa: B950
    )
    def test_alt_build_key_override_namespace(self, alt_base_cache, namespace, expected):
        """Custom key_builder overrides namespace of cache"""
        cache = alt_base_cache
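The `alt_base_cache` fixture above installs a custom `key_builder`. The same builder, copied out standalone so its behavior can be checked directly (`_ensure_key` is replaced by plain `str` here for illustration): a missing or empty namespace yields the bare key, and a real namespace is joined with `:`.

```python
def build_key(key, namespace=None):
    # Empty string and None both collapse to "no prefix"; only a truthy
    # namespace gets the ":" separator, as the fixture's builder does.
    ns = namespace if namespace is not None else ""
    sep = ":" if namespace else ""
    return f"{ns}{sep}{str(key)}"
```

This reproduces the three parametrized expectations: `build_key("k")` and `build_key("k", "")` give `"k"`, while `build_key("k", "my_ns")` gives `"my_ns:k"`.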
        assert cache.build_key(Keys.KEY, namespace=namespace) == expected

    @pytest.mark.parametrize(
        "init_namespace, expected",
        ([None, _ensure_key(Keys.KEY)], ["", _ensure_key(Keys.KEY)],
         ["test", "test:" + _ensure_key(Keys.KEY)]),  # type: ignore[attr-defined]  # noqa: B950
    )
    async def test_alt_build_key_default_namespace(
            self, init_namespace, alt_base_cache, expected):
        """Custom key_builder for cache with or without namespace specified.

        Cache member functions that accept a ``namespace`` parameter
        should default to using ``self.namespace`` if the ``namespace``
        argument is ``None``.

        This enables a cache to correctly build keys when the cache is
        initialized with both a ``namespace`` and a ``key_builder``, even
        when that cache is supplied to a lock or to a decorator using the
        ``alias`` argument.
        """
        cache = alt_base_cache
        cache.namespace = init_namespace

        # Verify that private members are called with the correct ns_key
        await self._assert_add__alt_build_key_default_namespace(cache, expected)
        await self._assert_get__alt_build_key_default_namespace(cache, expected)
        await self._assert_multi_get__alt_build_key_default_namespace(cache, expected)
        await self._assert_set__alt_build_key_default_namespace(cache, expected)
        await self._assert_multi_set__alt_build_key_default_namespace(cache, expected)
        await self._assert_exists__alt_build_key_default_namespace(cache, expected)
        await self._assert_increment__alt_build_key_default_namespace(cache, expected)
        await self._assert_delete__alt_build_key_default_namespace(cache, expected)
        await self._assert_expire__alt_build_key_default_namespace(cache, expected)

    async def _assert_add__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_add", autospec=True) as _add:
            await cache.add(Keys.KEY, "value")
            _add.assert_called_once_with(expected, "value", _conn=None, ttl=None)

    async def _assert_get__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_get", autospec=True) as _get:
            await cache.get(Keys.KEY)
            _get.assert_called_once_with(
                expected, _conn=None, encoding=cache.serializer.encoding)

    async def _assert_multi_get__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_multi_get", autospec=True) as _multi_get:
            await cache.multi_get([Keys.KEY])
            _multi_get.assert_called_once_with(
                [expected], _conn=None, encoding=cache.serializer.encoding)

    async def _assert_set__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_set", autospec=True) as _set:
            await cache.set(Keys.KEY, "value")
            _set.assert_called_once_with(
                expected, "value", _conn=None, ttl=None, _cas_token=None)

    async def _assert_multi_set__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_multi_set", autospec=True) as _multi_set:
            await cache.multi_set([(Keys.KEY, "value")])
            _multi_set.assert_called_once_with(
                [(expected, "value")], _conn=None, ttl=None)

    async def _assert_exists__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_exists", autospec=True) as _exists:
            await cache.exists(Keys.KEY)
            _exists.assert_called_once_with(expected, _conn=None)

    async def _assert_increment__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_increment", autospec=True) as _increment:
            await cache.increment(Keys.KEY)
            _increment.assert_called_once_with(expected, delta=1, _conn=None)

    async def _assert_delete__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_delete", autospec=True) as _delete:
            await cache.delete(Keys.KEY)
            _delete.assert_called_once_with(expected, _conn=None)

    async def _assert_expire__alt_build_key_default_namespace(self, cache, expected):
        with patch.object(cache, "_expire", autospec=True) as _expire:
            await cache.expire(Keys.KEY, 0)
            _expire.assert_called_once_with(expected, 0, _conn=None)

    async def test_add_ttl_cache_default(self, base_cache):
        with patch.object(base_cache, "_add", autospec=True) as m:
            await base_cache.add(Keys.KEY, "value")
            m.assert_called_once_with(Keys.KEY, "value", _conn=None, ttl=None)

    async def test_add_ttl_default(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_add", autospec=True) as m:
            await base_cache.add(Keys.KEY, "value")
            m.assert_called_once_with(Keys.KEY, "value", _conn=None, ttl=10)

    async def test_add_ttl_overriden(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_add", autospec=True) as m:
            await base_cache.add(Keys.KEY, "value", ttl=20)
            m.assert_called_once_with(Keys.KEY, "value", _conn=None, ttl=20)

    async def test_add_ttl_none(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_add", autospec=True) as m:
            await base_cache.add(Keys.KEY, "value", ttl=None)
            m.assert_called_once_with(Keys.KEY, "value", _conn=None, ttl=None)

    async def test_set_ttl_cache_default(self, base_cache):
        with patch.object(base_cache, "_set", autospec=True) as m:
            await base_cache.set(Keys.KEY, "value")
            m.assert_called_once_with(
                Keys.KEY, "value", _cas_token=None, _conn=None, ttl=None
            )

    async def test_set_ttl_default(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_set", autospec=True) as m:
            await base_cache.set(Keys.KEY, "value")
            m.assert_called_once_with(
                Keys.KEY, "value", _cas_token=None, _conn=None, ttl=10
            )

    async def test_set_ttl_overriden(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_set", autospec=True) as m:
            await base_cache.set(Keys.KEY, "value", ttl=20)
            m.assert_called_once_with(
                Keys.KEY, "value", _cas_token=None, _conn=None, ttl=20
            )

    async def test_set_ttl_none(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_set", autospec=True) as m:
            await base_cache.set(Keys.KEY, "value", ttl=None)
            m.assert_called_once_with(
                Keys.KEY, "value", _cas_token=None, _conn=None, ttl=None
            )

    async def test_multi_set_ttl_cache_default(self, base_cache):
        with patch.object(base_cache, "_multi_set", autospec=True) as m:
            await base_cache.multi_set([[Keys.KEY, "value"], [Keys.KEY_1, "value1"]])
            m.assert_called_once_with(
                [(Keys.KEY, "value"), (Keys.KEY_1, "value1")], _conn=None, ttl=None
            )

    async def test_multi_set_ttl_default(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_multi_set", autospec=True) as m:
            await base_cache.multi_set([[Keys.KEY, "value"], [Keys.KEY_1, "value1"]])
            m.assert_called_once_with(
                [(Keys.KEY, "value"), (Keys.KEY_1, "value1")], _conn=None, ttl=10
            )

    async def test_multi_set_ttl_overriden(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_multi_set", autospec=True) as m:
            await base_cache.multi_set([[Keys.KEY, "value"], [Keys.KEY_1, "value1"]], ttl=20)
            m.assert_called_once_with(
                [(Keys.KEY, "value"), (Keys.KEY_1, "value1")], _conn=None, ttl=20
            )

    async def test_multi_set_ttl_none(self, base_cache):
        base_cache.ttl = 10
        with patch.object(base_cache, "_multi_set", autospec=True) as m:
            await base_cache.multi_set([[Keys.KEY, "value"], [Keys.KEY_1, "value1"]], ttl=None)
            m.assert_called_once_with(
                [(Keys.KEY, "value"), (Keys.KEY_1, "value1")], _conn=None, ttl=None
            )


class TestCache:
    """
    This class ensures that all backends behave the same way at logic level.
    It tries to ensure the calls to the necessary methods like serialization
    and strategies are performed when needed. To add a new backend just create
    the fixture for the new backend and add id as a param for the cache fixture.

    The calls to the client are mocked so it doesn't interact with any storage.
    """

    async def asleep(self, *args, **kwargs):
        await asyncio.sleep(0.005)

    async def test_get(self, mock_base_cache):
        await mock_base_cache.get(Keys.KEY)
        mock_base_cache._get.assert_called_with(
            mock_base_cache._build_key(Keys.KEY), encoding=ANY, _conn=ANY
        )
        assert mock_base_cache.plugins[0].pre_get.call_count == 1
        assert mock_base_cache.plugins[0].post_get.call_count == 1

    async def test_get_timeouts(self, mock_base_cache):
        mock_base_cache._get = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.get(Keys.KEY)

    async def test_get_default(self, mock_base_cache):
        mock_base_cache._serializer.loads.return_value = None
        assert await mock_base_cache.get(Keys.KEY, default=1) == 1

    async def test_get_negative_default(self, mock_base_cache):
        mock_base_cache._serializer.loads.return_value = False
        assert await mock_base_cache.get(Keys.KEY) is False

    async def test_set(self, mock_base_cache):
        await mock_base_cache.set(Keys.KEY, "value", ttl=2)
        mock_base_cache._set.assert_called_with(
            mock_base_cache._build_key(Keys.KEY), ANY, ttl=2, _cas_token=None, _conn=ANY
        )
        assert mock_base_cache.plugins[0].pre_set.call_count == 1
        assert mock_base_cache.plugins[0].post_set.call_count == 1

    async def test_set_timeouts(self, mock_base_cache):
        mock_base_cache._set = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.set(Keys.KEY, "value")

    async def test_add(self, mock_base_cache):
        mock_base_cache._exists = AsyncMock(return_value=False)
        await mock_base_cache.add(Keys.KEY, "value", ttl=2)
        key = mock_base_cache._build_key(Keys.KEY)
        mock_base_cache._add.assert_called_with(key, ANY, ttl=2, _conn=ANY)
        assert mock_base_cache.plugins[0].pre_add.call_count == 1
        assert mock_base_cache.plugins[0].post_add.call_count == 1

    async def test_add_timeouts(self, mock_base_cache):
        mock_base_cache._add = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.add(Keys.KEY, "value")

    async def test_mget(self, mock_base_cache):
        await mock_base_cache.multi_get([Keys.KEY, Keys.KEY_1])
        mock_base_cache._multi_get.assert_called_with(
            [mock_base_cache._build_key(Keys.KEY), mock_base_cache._build_key(Keys.KEY_1)],
            encoding=ANY,
            _conn=ANY,
        )
        assert mock_base_cache.plugins[0].pre_multi_get.call_count == 1
        assert mock_base_cache.plugins[0].post_multi_get.call_count == 1

    async def test_mget_timeouts(self, mock_base_cache):
        mock_base_cache._multi_get = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.multi_get(Keys.KEY, "value")

    async def test_mset(self, mock_base_cache):
        await mock_base_cache.multi_set([[Keys.KEY, "value"], [Keys.KEY_1, "value1"]], ttl=2)
        key = mock_base_cache._build_key(Keys.KEY)
        key1 = mock_base_cache._build_key(Keys.KEY_1)
        mock_base_cache._multi_set.assert_called_with(
            [(key, ANY), (key1, ANY)], ttl=2, _conn=ANY)
        assert mock_base_cache.plugins[0].pre_multi_set.call_count == 1
        assert mock_base_cache.plugins[0].post_multi_set.call_count == 1

    async def test_mset_timeouts(self, mock_base_cache):
        mock_base_cache._multi_set = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.multi_set([[Keys.KEY, "value"], [Keys.KEY_1, "value1"]])

    async def test_exists(self, mock_base_cache):
        await mock_base_cache.exists(Keys.KEY)
        mock_base_cache._exists.assert_called_with(mock_base_cache._build_key(Keys.KEY), _conn=ANY)
        assert mock_base_cache.plugins[0].pre_exists.call_count == 1
        assert mock_base_cache.plugins[0].post_exists.call_count == 1

    async def test_exists_timeouts(self, mock_base_cache):
        mock_base_cache._exists = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.exists(Keys.KEY)

    async def test_increment(self, mock_base_cache):
        await mock_base_cache.increment(Keys.KEY, 2)
        key = mock_base_cache._build_key(Keys.KEY)
        mock_base_cache._increment.assert_called_with(key, 2, _conn=ANY)
        assert mock_base_cache.plugins[0].pre_increment.call_count == 1
        assert mock_base_cache.plugins[0].post_increment.call_count == 1

    async def test_increment_timeouts(self, mock_base_cache):
        mock_base_cache._increment = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.increment(Keys.KEY)

    async def test_delete(self, mock_base_cache):
        await mock_base_cache.delete(Keys.KEY)
        mock_base_cache._delete.assert_called_with(mock_base_cache._build_key(Keys.KEY), _conn=ANY)
        assert mock_base_cache.plugins[0].pre_delete.call_count == 1
        assert mock_base_cache.plugins[0].post_delete.call_count == 1

    async def test_delete_timeouts(self, mock_base_cache):
        mock_base_cache._delete = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.delete(Keys.KEY)

    async def test_expire(self, mock_base_cache):
        await mock_base_cache.expire(Keys.KEY, 1)
        key = mock_base_cache._build_key(Keys.KEY)
        mock_base_cache._expire.assert_called_with(key, 1, _conn=ANY)
        assert mock_base_cache.plugins[0].pre_expire.call_count == 1
        assert mock_base_cache.plugins[0].post_expire.call_count == 1

    async def test_expire_timeouts(self, mock_base_cache):
        mock_base_cache._expire = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.expire(Keys.KEY, 0)

    async def test_clear(self, mock_base_cache):
        await mock_base_cache.clear(Keys.KEY)
        mock_base_cache._clear.assert_called_with(mock_base_cache._build_key(Keys.KEY), _conn=ANY)
        assert mock_base_cache.plugins[0].pre_clear.call_count == 1
        assert mock_base_cache.plugins[0].post_clear.call_count == 1

    async def test_clear_timeouts(self, mock_base_cache):
        mock_base_cache._clear = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.clear(Keys.KEY)

    async def test_raw(self, mock_base_cache):
        await mock_base_cache.raw("get", Keys.KEY)
        mock_base_cache._raw.assert_called_with(
            "get", mock_base_cache._build_key(Keys.KEY), encoding=ANY, _conn=ANY
        )
        assert mock_base_cache.plugins[0].pre_raw.call_count == 1
        assert mock_base_cache.plugins[0].post_raw.call_count == 1

    async def test_raw_timeouts(self, mock_base_cache):
        mock_base_cache._raw = self.asleep
        with pytest.raises(asyncio.TimeoutError):
            await mock_base_cache.raw("clear")

    async def test_close(self, mock_base_cache):
        await mock_base_cache.close()
        assert mock_base_cache._close.call_count == 1

    async def test_get_connection(self, mock_base_cache):
        async with mock_base_cache.get_connection():
            pass
        assert mock_base_cache.acquire_conn.call_count == 1
        assert mock_base_cache.release_conn.call_count == 1


@pytest.fixture
def conn(mock_base_cache):
    yield _Conn(mock_base_cache)


class TestConn:
    def test_conn(self, conn, mock_base_cache):
        assert conn._cache == mock_base_cache

    def test_conn_getattr(self, conn, mock_base_cache):
        assert conn.timeout == mock_base_cache.timeout
        assert conn.namespace == mock_base_cache.namespace
        assert conn.serializer is mock_base_cache.serializer

    async def test_conn_context_manager(self, conn):
        async with conn:
            assert conn._cache.acquire_conn.call_count == 1
        conn._cache.release_conn.assert_called_with(conn._cache.acquire_conn.return_value)

    async def test_inject_conn(self, conn):
        conn._conn = "connection"
        conn._cache.dummy = AsyncMock(spec_set=())
        await _Conn._inject_conn("dummy")(conn, "a", b="b")
        conn._cache.dummy.assert_called_with("a", _conn=conn._conn, b="b")


# ---- aiocache-0.12.2/tests/ut/test_decorators.py ----
import asyncio
import inspect
import random
import sys
from unittest.mock import ANY, create_autospec, patch

import pytest

from aiocache import cached, cached_stampede, multi_cached
from aiocache.backends.memory import SimpleMemoryCache
from aiocache.base import BaseCache, SENTINEL
from aiocache.decorators import _get_args_dict
from aiocache.lock import RedLock


async def stub(*args, value=None, seconds=0, **kwargs):
    await asyncio.sleep(seconds)
    if value:
        return str(value)
    return str(random.randint(1, 50))


class TestCached:
    @pytest.fixture
    def decorator(self, mock_cache):
        with patch("aiocache.decorators._get_cache", autospec=True, return_value=mock_cache):
            yield cached()

    @pytest.fixture
    def decorator_call(self, decorator):
        d = decorator(stub)
        yield d

    @pytest.fixture(autouse=True)
    def spy_stub(self, mocker):
        module = sys.modules[globals()["__name__"]]
        mocker.spy(module, "stub")

    def test_init(self):
        c = cached(
            ttl=1,
            key="key",
            key_builder="fn",
            cache=SimpleMemoryCache,
            plugins=None,
            alias=None,
            noself=False,
            namespace="test",
            unused_kwarg="unused",
        )

        assert c.ttl == 1
        assert c.key == "key"
        assert c.key_builder == "fn"
        assert c.cache is None
        assert c._cache == SimpleMemoryCache
        assert c._serializer is None
        assert c._namespace == "test"
        assert c._kwargs == {"unused_kwarg": "unused"}

    def test_fails_at_instantiation(self):
        with pytest.raises(TypeError):
            @cached(wrong_param=1)
            async def fn() -> None:
                """Dummy function."""

    def test_alias_takes_precedence(self, mock_cache):
        with patch(
            "aiocache.decorators.caches.get", autospec=True, return_value=mock_cache
        ) as mock_get:
            c = cached(alias="default", cache=SimpleMemoryCache, namespace="test")
            c(stub)

            mock_get.assert_called_with("default")
            assert c.cache is mock_cache

    def test_get_cache_key_with_key(self, decorator):
        decorator.key = "key"
        decorator.key_builder = "fn"
        assert decorator.get_cache_key(stub, (1, 2), {"a": 1, "b": 2}) == "key"

    def test_get_cache_key_without_key_and_attr(self, decorator):
        assert (
            decorator.get_cache_key(stub, (1, 2), {"a": 1, "b": 2})
            == "stub(1, 2)[('a', 1), ('b', 2)]"
        )

    def test_get_cache_key_without_key_and_attr_noself(self, decorator):
        decorator.noself = True
        assert (
            decorator.get_cache_key(stub, ("self", 1, 2), {"a": 1, "b": 2})
            == "stub(1, 2)[('a', 1), ('b', 2)]"
        )

    def test_get_cache_key_with_key_builder(self, decorator):
        decorator.key_builder = lambda *args, **kwargs: kwargs["market"].upper()
        assert decorator.get_cache_key(stub, (), {"market": "es"}) == "ES"

    async def test_calls_get_and_returns(self, decorator, decorator_call):
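The `get_cache_key` tests above assert a default key of the form `"stub(1, 2)[('a', 1), ('b', 2)]"`: function name, the positional-args tuple, and the sorted keyword items. A hedged standalone sketch of that convention (`default_key` is an invented name for illustration, not the exact aiocache implementation):

```python
def default_key(func_name, args, kwargs, noself=False):
    # With noself=True the first positional argument (``self``) is dropped,
    # mirroring test_get_cache_key_without_key_and_attr_noself above.
    if noself:
        args = args[1:]
    ordered = list(sorted(kwargs.items()))
    return f"{func_name}{args}{ordered}"
```

For example, `default_key("stub", (1, 2), {"b": 2, "a": 1})` produces `"stub(1, 2)[('a', 1), ('b', 2)]"`, matching the assertion in the test.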
        decorator.cache.get.return_value = 1

        await decorator_call()

        decorator.cache.get.assert_called_with("stub()[]")
        assert decorator.cache.set.call_count == 0
        assert stub.call_count == 0

    async def test_cache_read_disabled(self, decorator, decorator_call):
        await decorator_call(cache_read=False)

        assert decorator.cache.get.call_count == 0
        assert decorator.cache.set.call_count == 1
        assert stub.call_count == 1

    async def test_cache_write_disabled(self, decorator, decorator_call):
        decorator.cache.get.return_value = None

        await decorator_call(cache_write=False)

        assert decorator.cache.get.call_count == 1
        assert decorator.cache.set.call_count == 0
        assert stub.call_count == 1

    async def test_disable_params_not_propagated(self, decorator, decorator_call):
        decorator.cache.get.return_value = None

        await decorator_call(cache_read=False, cache_write=False)

        stub.assert_called_once_with()

    async def test_get_from_cache_returns(self, decorator, decorator_call):
        decorator.cache.get.return_value = 1
        assert await decorator.get_from_cache("key") == 1

    async def test_get_from_cache_exception(self, decorator, decorator_call):
        decorator.cache.get.side_effect = Exception
        assert await decorator.get_from_cache("key") is None

    async def test_get_from_cache_none(self, decorator, decorator_call):
        decorator.cache.get.return_value = None
        assert await decorator.get_from_cache("key") is None

    async def test_calls_fn_set_when_get_none(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "get_from_cache")
        mocker.spy(decorator, "set_in_cache")
        decorator.cache.get.return_value = None

        await decorator_call(value="value")

        assert decorator.get_from_cache.call_count == 1
        decorator.set_in_cache.assert_called_with("stub()[('value', 'value')]", "value")
        stub.assert_called_once_with(value="value")

    async def test_calls_fn_raises_exception(self, decorator, decorator_call):
        decorator.cache.get.return_value = None
        stub.side_effect = Exception()
        with pytest.raises(Exception):
            assert await decorator_call()

    async def test_cache_write_waits_for_future(self, decorator, decorator_call):
        with patch.object(decorator, "get_from_cache", autospec=True, return_value=None) as m:
            await decorator_call()

            m.assert_awaited()

    async def test_cache_write_doesnt_wait_for_future(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "set_in_cache")
        with patch.object(decorator, "get_from_cache", autospec=True, return_value=None):
            with patch("aiocache.decorators.asyncio.ensure_future", autospec=True):
                await decorator_call(aiocache_wait_for_write=False, value="value")

        decorator.set_in_cache.assert_not_awaited()
        decorator.set_in_cache.assert_called_once_with("stub()[('value', 'value')]", "value")

    async def test_set_calls_set(self, decorator, decorator_call):
        await decorator.set_in_cache("key", "value")
        decorator.cache.set.assert_called_with("key", "value", ttl=SENTINEL)

    async def test_set_calls_set_ttl(self, decorator, decorator_call):
        decorator.ttl = 10
        await decorator.set_in_cache("key", "value")
        decorator.cache.set.assert_called_with("key", "value", ttl=decorator.ttl)

    async def test_set_catches_exception(self, decorator, decorator_call):
        decorator.cache.set.side_effect = Exception
        assert await decorator.set_in_cache("key", "value") is None

    async def test_decorate(self, mock_cache):
        mock_cache.get.return_value = None
        with patch("aiocache.decorators._get_cache", autospec=True, return_value=mock_cache):
            @cached()
            async def fn(n):
                return n

            assert await fn(1) == 1
            assert await fn(2) == 2
            assert fn.cache == mock_cache

    async def test_keeps_signature(self, mock_cache):
        with patch("aiocache.decorators._get_cache", autospec=True, return_value=mock_cache):
            @cached()
            async def what(self, a, b):
                """Dummy function."""

            assert what.__name__ == "what"
            assert str(inspect.signature(what)) == "(self, a, b)"
            assert inspect.getfullargspec(what.__wrapped__).args == ["self", "a", "b"]

    async def test_reuses_cache_instance(self):
        with patch("aiocache.decorators._get_cache", autospec=True) as get_c:
            cache = create_autospec(BaseCache, instance=True)
            get_c.side_effect = [cache, None]

            @cached()
            async def what():
                """Dummy function."""

            await what()
            await what()

            assert get_c.call_count == 1
            assert cache.get.call_count == 2

    async def test_cache_per_function(self):
        @cached()
        async def foo():
            """First function."""

        @cached()
        async def bar():
            """Second function."""

        assert foo.cache != bar.cache


class TestCachedStampede:
    @pytest.fixture
    def decorator(self, mock_cache):
        with patch("aiocache.decorators._get_cache", autospec=True, return_value=mock_cache):
            yield cached_stampede()

    @pytest.fixture
    def decorator_call(self, decorator):
        yield decorator(stub)

    @pytest.fixture(autouse=True)
    def spy_stub(self, mocker):
        module = sys.modules[globals()["__name__"]]
        mocker.spy(module, "stub")

    def test_inheritance(self):
        assert isinstance(cached_stampede(), cached)

    def test_init(self):
        c = cached_stampede(
            lease=3,
            ttl=1,
            key="key",
            key_builder="fn",
            cache=SimpleMemoryCache,
            plugins=None,
            alias=None,
            noself=False,
            namespace="test",
            unused_kwarg="unused",
        )

        assert c.ttl == 1
        assert c.key == "key"
        assert c.key_builder == "fn"
        assert c.cache is None
        assert c._cache == SimpleMemoryCache
        assert c._serializer is None
        assert c.lease == 3
        assert c._namespace == "test"
        assert c._kwargs == {"unused_kwarg": "unused"}

    async def test_calls_get_and_returns(self, decorator, decorator_call):
        decorator.cache.get.return_value = 1

        await decorator_call()

        decorator.cache.get.assert_called_with("stub()[]")
        assert decorator.cache.set.call_count == 0
        assert stub.call_count == 0

    async def test_calls_fn_raises_exception(self, decorator, decorator_call):
        decorator.cache.get.return_value = None
        stub.side_effect = Exception()
        with pytest.raises(Exception):
            assert await decorator_call()

    async def test_calls_redlock(self, decorator, decorator_call):
        decorator.cache.get.return_value = None
        lock = create_autospec(RedLock, instance=True)

        with patch("aiocache.decorators.RedLock", autospec=True, return_value=lock):
            await decorator_call(value="value")

            assert decorator.cache.get.call_count == 2
            assert lock.__aenter__.call_count == 1
            assert lock.__aexit__.call_count == 1
            decorator.cache.set.assert_called_with(
                "stub()[('value', 'value')]", "value", ttl=SENTINEL
            )
            stub.assert_called_once_with(value="value")

    async def test_calls_locked_client(self, decorator, decorator_call):
        decorator.cache.get.side_effect = [None, None, None, "value"]
        decorator.cache._add.side_effect = [True, ValueError]
        lock1 = create_autospec(RedLock, instance=True)
        lock2 = create_autospec(RedLock, instance=True)

        with patch("aiocache.decorators.RedLock", autospec=True, side_effect=[lock1, lock2]):
            await asyncio.gather(decorator_call(value="value"), decorator_call(value="value"))

            assert decorator.cache.get.call_count == 4
            assert lock1.__aenter__.call_count == 1
            assert lock1.__aexit__.call_count == 1
            assert lock2.__aenter__.call_count == 1
            assert lock2.__aexit__.call_count == 1
            decorator.cache.set.assert_called_with(
                "stub()[('value', 'value')]", "value", ttl=SENTINEL
            )
            assert stub.call_count == 1


async def stub_dict(*args, keys=None, **kwargs):
    values = {"a": random.randint(1, 50), "b": random.randint(1, 50), "c": random.randint(1, 50)}
    return {k: values.get(k) for k in keys}


class TestMultiCached:
    @pytest.fixture
    def decorator(self, mock_cache):
        with patch("aiocache.decorators._get_cache", autospec=True, return_value=mock_cache):
            yield multi_cached(keys_from_attr="keys")

    @pytest.fixture
    def decorator_call(self, decorator):
        d = decorator(stub_dict)
        decorator._conn = decorator.cache.get_connection()
        yield d

    @pytest.fixture(autouse=True)
    def spy_stub_dict(self, mocker):
        module = sys.modules[globals()["__name__"]]
        mocker.spy(module, "stub_dict")

    def test_init(self):
        mc = multi_cached(
            keys_from_attr="keys",
            key_builder=None,
            ttl=1,
            cache=SimpleMemoryCache,
            plugins=None,
            alias=None,
            namespace="test",
            unused_kwarg="unused",
        )

        def f():
            """Dummy function. Not called."""

        assert mc.ttl == 1
        assert mc.key_builder("key", f) == "key"
        assert mc.keys_from_attr == "keys"
        assert mc.cache is None
        assert mc._cache == SimpleMemoryCache
        assert mc._serializer is None
        assert mc._namespace == "test"
        assert mc._kwargs == {"unused_kwarg": "unused"}

    def test_fails_at_instantiation(self):
        with pytest.raises(TypeError):
            @multi_cached(wrong_param=1)
            async def fn() -> None:
                """Dummy function."""

    def test_alias_takes_precedence(self, mock_cache):
        with patch(
            "aiocache.decorators.caches.get", autospec=True, return_value=mock_cache
        ) as mock_get:
            mc = multi_cached(
                keys_from_attr="keys", alias="default", cache=SimpleMemoryCache, namespace="test"
            )
            mc(stub_dict)

            mock_get.assert_called_with("default")
            assert mc.cache is mock_cache

    def test_get_cache_keys(self, decorator):
        keys = decorator.get_cache_keys(stub_dict, (), {"keys": ["a", "b"]})
        assert keys == (["a", "b"], [], -1)

    def test_get_cache_keys_empty_list(self, decorator):
        assert decorator.get_cache_keys(stub_dict, (), {"keys": []}) == ([], [], -1)

    def test_get_cache_keys_missing_kwarg(self, decorator):
        assert decorator.get_cache_keys(stub_dict, (), {}) == ([], [], -1)

    def test_get_cache_keys_arg_key_from_attr(self, decorator):
        def fake(keys, a=1, b=2):
            """Dummy function."""

        assert decorator.get_cache_keys(fake, (["a"],), {}) == (["a"], [["a"]], 0)

    def test_get_cache_keys_with_none(self, decorator):
        assert decorator.get_cache_keys(stub_dict, (), {"keys": None}) == ([], [], -1)

    def test_get_cache_keys_with_key_builder(self, decorator):
        decorator.key_builder = lambda key, *args, **kwargs: kwargs["market"] + "_" + key.upper()
        assert decorator.get_cache_keys(stub_dict, (), {"keys": ["a", "b"], "market": "ES"}) == (
            ["ES_A", "ES_B"],
            [],
            -1,
        )

    async def test_get_from_cache(self, decorator, decorator_call):
        decorator.cache.multi_get.return_value = [1, 2, 3]

        assert await decorator.get_from_cache("a", "b", "c") == [1, 2, 3]
        decorator.cache.multi_get.assert_called_with(("a", "b", "c"))

    async def test_get_from_cache_no_keys(self, decorator, decorator_call):
        assert await decorator.get_from_cache() == []
        assert decorator.cache.multi_get.call_count == 0

    async def test_get_from_cache_exception(self, decorator, decorator_call):
        decorator.cache.multi_get.side_effect = Exception

        assert await decorator.get_from_cache("a", "b", "c") == [None, None, None]
        decorator.cache.multi_get.assert_called_with(("a", "b", "c"))

    async def test_get_from_cache_conn(self, decorator, decorator_call):
        decorator.cache.multi_get.return_value = [1, 2, 3]

        assert await decorator.get_from_cache("a", "b", "c") == [1, 2, 3]
        decorator.cache.multi_get.assert_called_with(("a", "b", "c"))

    async def test_calls_no_keys(self, decorator, decorator_call):
        await decorator_call(keys=[])
        assert decorator.cache.multi_get.call_count == 0
        assert stub_dict.call_count == 1

    async def test_returns_from_multi_set(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "get_from_cache")
        mocker.spy(decorator, "set_in_cache")
        decorator.cache.multi_get.return_value = [1, 2]

        assert await decorator_call(1, keys=["a", "b"]) == {"a": 1, "b": 2}

        decorator.get_from_cache.assert_called_once_with("a", "b")
        assert decorator.set_in_cache.call_count == 0
        assert stub_dict.call_count == 0

    async def test_calls_fn_multi_set_when_multi_get_none(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "get_from_cache")
        mocker.spy(decorator, "set_in_cache")
        decorator.cache.multi_get.return_value = [None, None]

        ret = await decorator_call(1, keys=["a", "b"], value="value")

        decorator.get_from_cache.assert_called_once_with("a", "b")
        decorator.set_in_cache.assert_called_with(ret, stub_dict, ANY, ANY)
        stub_dict.assert_called_once_with(1, keys=["a", "b"], value="value")

    async def test_cache_write_waits_for_future(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "set_in_cache")
        with patch.object(decorator, "get_from_cache", autospec=True, return_value=[None, None]):
            await decorator_call(1, keys=["a", "b"], value="value")

            decorator.set_in_cache.assert_awaited()

    async def test_cache_write_doesnt_wait_for_future(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "set_in_cache")
        with patch.object(decorator, "get_from_cache", autospec=True, return_value=[None, None]):
            with patch("aiocache.decorators.asyncio.ensure_future", autospec=True):
                await decorator_call(1, keys=["a", "b"], value="value",
                                     aiocache_wait_for_write=False)

        decorator.set_in_cache.assert_not_awaited()
        decorator.set_in_cache.assert_called_once_with({"a": ANY, "b": ANY}, stub_dict, ANY, ANY)

    async def test_calls_fn_with_only_missing_keys(self, mocker, decorator, decorator_call):
        mocker.spy(decorator, "set_in_cache")
        decorator.cache.multi_get.return_value = [1, None]

        assert await decorator_call(1, keys=["a", "b"], value="value") == {"a": ANY, "b": ANY}

        decorator.set_in_cache.assert_called_once_with({"a": ANY, "b": ANY}, stub_dict, ANY, ANY)
        stub_dict.assert_called_once_with(1, keys=["b"], value="value")

    async def test_calls_fn_raises_exception(self, decorator, decorator_call):
        decorator.cache.multi_get.return_value = [None]
        stub_dict.side_effect = Exception()
        with pytest.raises(Exception):
            assert await decorator_call(keys=[])

    async def test_cache_read_disabled(self, decorator, decorator_call):
        await decorator_call(1, keys=["a", "b"], cache_read=False)

        assert decorator.cache.multi_get.call_count == 0
        assert decorator.cache.multi_set.call_count == 1
        assert stub_dict.call_count == 1

    async def test_cache_write_disabled(self, decorator, decorator_call):
        decorator.cache.multi_get.return_value = [None, None]

        await decorator_call(1, keys=["a", "b"], cache_write=False)

        assert decorator.cache.multi_get.call_count == 1
        assert decorator.cache.multi_set.call_count == 0
        assert stub_dict.call_count == 1

    async def test_disable_params_not_propagated(self, decorator, decorator_call):
        decorator.cache.multi_get.return_value = [None, None]

        await decorator_call(1, keys=["a", "b"], cache_read=False, cache_write=False)
        stub_dict.assert_called_once_with(1, keys=["a", "b"])

    async def test_set_in_cache(self, decorator, decorator_call):
        await decorator.set_in_cache({"a": 1, "b": 2}, stub_dict, (), {})

        call_args = decorator.cache.multi_set.call_args[0][0]
        assert ("a", 1) in call_args
        assert ("b", 2) in call_args
        assert decorator.cache.multi_set.call_args[1]["ttl"] is SENTINEL

    async def test_set_in_cache_with_ttl(self, decorator, decorator_call):
        decorator.ttl = 10
        await decorator.set_in_cache({"a": 1, "b": 2}, stub_dict, (), {})

        assert decorator.cache.multi_set.call_args[1]["ttl"] == decorator.ttl

    async def test_set_in_cache_exception(self, decorator, decorator_call):
        decorator.cache.multi_set.side_effect = Exception

        assert await decorator.set_in_cache({"a": 1, "b": 2}, stub_dict, (), {}) is None

    async def test_decorate(self, mock_cache):
        mock_cache.multi_get.return_value = [None]
        with patch("aiocache.decorators._get_cache", autospec=True, return_value=mock_cache):

            @multi_cached(keys_from_attr="keys")
            async def fn(keys=None):
                return {"test": 1}

            assert await fn(keys=["test"]) == {"test": 1}
            assert await fn(["test"]) == {"test": 1}

            assert fn.cache == mock_cache

    async def test_keeps_signature(self):
        @multi_cached(keys_from_attr="keys")
        async def what(self, keys=None, what=1):
            """Dummy function."""

        assert what.__name__ == "what"
        assert str(inspect.signature(what)) == "(self, keys=None, what=1)"
        assert inspect.getfullargspec(what.__wrapped__).args == ["self", "keys", "what"]

    async def test_reuses_cache_instance(self):
        with patch("aiocache.decorators._get_cache", autospec=True) as get_c:
            cache = create_autospec(BaseCache, instance=True)
            cache.multi_get.return_value = [None]
            get_c.side_effect = [cache, None]

            @multi_cached("keys")
            async def what(keys=None):
                return {}

            await what(keys=["a"])
            await what(keys=["a"])

            assert get_c.call_count == 1
            assert cache.multi_get.call_count == 2

    async def test_cache_per_function(self):
        @multi_cached("keys")
        async def foo():
            """First function."""
        @multi_cached("keys")
        async def bar():
            """Second function."""

        assert foo.cache != bar.cache


def test_get_args_dict():
    def fn(a, b, *args, keys=None, **kwargs):
        """Dummy function."""

    args_dict = _get_args_dict(fn, ("a", "b", "c", "d"), {"what": "what"})
    assert args_dict == {"a": "a", "b": "b", "keys": None, "what": "what"}


# ==== aiocache-0.12.2/tests/ut/test_exceptions.py ====

from aiocache.exceptions import InvalidCacheType


def test_inherit_from_exception():
    assert isinstance(InvalidCacheType(), Exception)


# ==== aiocache-0.12.2/tests/ut/test_factory.py ====

from unittest.mock import Mock, patch

import pytest

from aiocache import AIOCACHE_CACHES, Cache, caches
from aiocache.backends.memory import SimpleMemoryCache
from aiocache.exceptions import InvalidCacheType
from aiocache.factory import _class_from_string, _create_cache
from aiocache.plugins import HitMissRatioPlugin, TimingPlugin
from aiocache.serializers import JsonSerializer, PickleSerializer

CACHE_NAMES = [Cache.MEMORY.NAME]

try:
    from aiocache.backends.memcached import MemcachedCache
except ImportError:
    MemcachedCache = None
else:
    assert Cache.MEMCACHED is not None
    CACHE_NAMES.append(Cache.MEMCACHED.NAME)

try:
    from aiocache.backends.redis import RedisCache
except ImportError:
    RedisCache = None
else:
    assert Cache.REDIS is not None
    CACHE_NAMES.append(Cache.REDIS.NAME)


@pytest.mark.redis
def test_class_from_string():
    assert _class_from_string("aiocache.RedisCache") == RedisCache


@pytest.mark.redis
def test_create_simple_cache():
    redis = _create_cache(RedisCache, endpoint="127.0.0.10", port=6378)
    assert isinstance(redis, RedisCache)
    assert redis.endpoint == "127.0.0.10"
    assert redis.port == 6378


def test_create_cache_with_everything():
    cache = _create_cache(
        SimpleMemoryCache,
        serializer={"class": PickleSerializer, "encoding": "encoding"},
        plugins=[{"class": "aiocache.plugins.TimingPlugin"}],
    )
    assert isinstance(cache.serializer, PickleSerializer)
    assert cache.serializer.encoding == "encoding"
    assert isinstance(cache.plugins[0], TimingPlugin)


class TestCache:
    def test_cache_types(self):
        assert Cache.MEMORY == SimpleMemoryCache
        assert Cache.REDIS == RedisCache
        assert Cache.MEMCACHED == MemcachedCache

    @pytest.mark.parametrize("cache_type", CACHE_NAMES)
    async def test_new(self, cache_type):
        kwargs = {"a": 1, "b": 2}
        cache_class = Cache.get_scheme_class(cache_type)

        with patch("aiocache.{}.__init__".format(cache_class.__name__)) as init:
            cache = Cache(cache_class, **kwargs)

        assert isinstance(cache, cache_class)
        init.assert_called_once_with(**kwargs)

    def test_new_defaults_to_memory(self):
        assert isinstance(Cache(), Cache.MEMORY)

    def test_new_invalid_cache_raises(self):
        with pytest.raises(InvalidCacheType) as e:
            Cache(object)
        assert str(e.value) == "Invalid cache type, you can only use {}".format(
            list(AIOCACHE_CACHES.keys())
        )

    @pytest.mark.parametrize("scheme", CACHE_NAMES)
    def test_get_scheme_class(self, scheme):
        assert Cache.get_scheme_class(scheme) == AIOCACHE_CACHES[scheme]

    def test_get_scheme_class_invalid(self):
        with pytest.raises(InvalidCacheType):
            Cache.get_scheme_class("http")

    @pytest.mark.parametrize("scheme", CACHE_NAMES)
    def test_from_url_returns_cache_from_scheme(self, scheme):
        assert isinstance(Cache.from_url("{}://".format(scheme)), Cache.get_scheme_class(scheme))

    @pytest.mark.parametrize(
        "url,expected_args",
        [
            ("redis://", {}),
            ("redis://localhost", {"endpoint": "localhost"}),
            ("redis://localhost/", {"endpoint": "localhost"}),
            ("redis://localhost:6379", {"endpoint": "localhost", "port": 6379}),
            (
                "redis://localhost/?arg1=arg1&arg2=arg2",
                {"endpoint": "localhost", "arg1": "arg1", "arg2": "arg2"},
            ),
            (
                "redis://localhost:6379/?arg1=arg1&arg2=arg2",
                {"endpoint": "localhost", "port": 6379, "arg1": "arg1", "arg2": "arg2"},
            ),
            ("redis:///?arg1=arg1", {"arg1": "arg1"}),
            ("redis:///?arg2=arg2", {"arg2": "arg2"}),
            (
                "redis://:password@localhost:6379",
                {"endpoint": "localhost", "password": "password", "port": 6379},
            ),
            (
                "redis://:password@localhost:6379?password=pass",
                {"endpoint": "localhost", "password": "password", "port": 6379},
            ),
        ],
    )
    def test_from_url_calls_cache_with_args(self, url, expected_args):
        with patch("aiocache.factory.Cache", autospec=True) as mock:
            Cache.from_url(url)

        mock.assert_called_once_with(mock.get_scheme_class.return_value, **expected_args)

    def test_calls_parse_uri_path_from_cache(self):
        p_mock = Mock(spec_set=(), return_value={"arg1": "arg1"})
        with patch("aiocache.factory.Cache", autospec=True) as mock:
            mock.get_scheme_class.return_value.parse_uri_path = p_mock
            Cache.from_url("redis:///")

        mock.get_scheme_class.return_value.parse_uri_path.assert_called_once_with("/")
        mock.assert_called_once_with(mock.get_scheme_class.return_value, arg1="arg1")

    def test_from_url_invalid_protocol(self):
        with pytest.raises(InvalidCacheType):
            Cache.from_url("http://")


class TestCacheHandler:
    @pytest.fixture(autouse=True)
    def remove_caches(self):
        caches._caches = {}

    def test_add_new_entry(self):
        alias = "memory"
        config = {
            "cache": "aiocache.SimpleMemoryCache",
            "serializer": {"class": "aiocache.serializers.StringSerializer"},
        }
        caches.add(alias, config)

        assert caches.get_config()[alias] == config

    def test_add_updates_existing_entry(self):
        alias = "memory"
        config = {
            "cache": "aiocache.SimpleMemoryCache",
            "serializer": {"class": "aiocache.serializers.StringSerializer"},
        }
        caches.add(alias, {})
        caches.add(alias, config)

        assert caches.get_config()[alias] == config

    def test_get_wrong_alias(self):
        with pytest.raises(KeyError):
            caches.get("wrong_cache")

        with pytest.raises(KeyError):
            caches.create("wrong_cache")

    def test_reuse_instance(self):
        assert caches.get("default") is caches.get("default")

    def test_create_not_reuse(self):
        assert caches.create("default") is not caches.create("default")

    @pytest.mark.redis
    def test_create_extra_args(self):
        caches.set_config(
            {
                "default": {
                    "cache": "aiocache.RedisCache",
                    "endpoint": "127.0.0.9",
                    "db": 10,
                    "port": 6378,
                }
            }
        )
        cache = caches.create("default", namespace="whatever", endpoint="127.0.0.10", db=10)
        assert cache.namespace == "whatever"
        assert cache.endpoint == "127.0.0.10"
        assert cache.db == 10

    @pytest.mark.redis
    def test_retrieve_cache(self):
        caches.set_config(
            {
                "default": {
                    "cache": "aiocache.RedisCache",
                    "endpoint": "127.0.0.10",
                    "port": 6378,
                    "ttl": 10,
                    "serializer": {
                        "class": "aiocache.serializers.PickleSerializer",
                        "encoding": "encoding",
                    },
                    "plugins": [
                        {"class": "aiocache.plugins.HitMissRatioPlugin"},
                        {"class": "aiocache.plugins.TimingPlugin"},
                    ],
                }
            }
        )

        cache = caches.get("default")
        assert isinstance(cache, RedisCache)
        assert cache.endpoint == "127.0.0.10"
        assert cache.port == 6378
        assert cache.ttl == 10
        assert isinstance(cache.serializer, PickleSerializer)
        assert cache.serializer.encoding == "encoding"
        assert len(cache.plugins) == 2

    @pytest.mark.redis
    def test_retrieve_cache_new_instance(self):
        caches.set_config(
            {
                "default": {
                    "cache": "aiocache.RedisCache",
                    "endpoint": "127.0.0.10",
                    "port": 6378,
                    "serializer": {
                        "class": "aiocache.serializers.PickleSerializer",
                        "encoding": "encoding",
                    },
                    "plugins": [
                        {"class": "aiocache.plugins.HitMissRatioPlugin"},
                        {"class": "aiocache.plugins.TimingPlugin"},
                    ],
                }
            }
        )

        cache = caches.create("default")
        assert isinstance(cache, RedisCache)
        assert cache.endpoint == "127.0.0.10"
        assert cache.port == 6378
        assert isinstance(cache.serializer, PickleSerializer)
        assert cache.serializer.encoding == "encoding"
        assert len(cache.plugins) == 2

    @pytest.mark.redis
    def test_multiple_caches(self):
        caches.set_config(
            {
                "default": {
                    "cache": "aiocache.RedisCache",
                    "endpoint": "127.0.0.10",
                    "port": 6378,
                    "serializer": {"class": "aiocache.serializers.PickleSerializer"},
                    "plugins": [
                        {"class": "aiocache.plugins.HitMissRatioPlugin"},
                        {"class": "aiocache.plugins.TimingPlugin"},
                    ],
                },
                "alt": {"cache": "aiocache.SimpleMemoryCache"},
            }
        )

        default = caches.get("default")
        alt = caches.get("alt")

        assert isinstance(default, RedisCache)
        assert default.endpoint == "127.0.0.10"
        assert default.port == 6378
        assert isinstance(default.serializer, PickleSerializer)
        assert len(default.plugins) == 2

        assert isinstance(alt, SimpleMemoryCache)

    def test_default_caches(self):
        assert caches.get_config() == {
            "default": {
                "cache": "aiocache.SimpleMemoryCache",
                "serializer": {"class": "aiocache.serializers.NullSerializer"},
            }
        }

    def test_get_alias_config(self):
        assert caches.get_alias_config("default") == {
            "cache": "aiocache.SimpleMemoryCache",
            "serializer": {"class": "aiocache.serializers.NullSerializer"},
        }

    def test_set_empty_config(self):
        with pytest.raises(ValueError):
            caches.set_config({})

    def test_set_config_updates_existing_values(self):
        assert not isinstance(caches.get("default").serializer, JsonSerializer)
        caches.set_config(
            {
                "default": {
                    "cache": "aiocache.SimpleMemoryCache",
                    "serializer": {"class": "aiocache.serializers.JsonSerializer"},
                }
            }
        )
        assert isinstance(caches.get("default").serializer, JsonSerializer)

    def test_set_config_removes_existing_caches(self):
        caches.set_config(
            {
                "default": {"cache": "aiocache.SimpleMemoryCache"},
                "alt": {"cache": "aiocache.SimpleMemoryCache"},
            }
        )
        caches.get("default")
        caches.get("alt")
        assert len(caches._caches) == 2

        caches.set_config(
            {
                "default": {"cache": "aiocache.SimpleMemoryCache"},
                "alt": {"cache": "aiocache.SimpleMemoryCache"},
            }
        )
        assert caches._caches == {}

    def test_set_config_no_default(self):
        with pytest.raises(ValueError):
            caches.set_config(
                {
                    "no_default": {
                        "cache": "aiocache.RedisCache",
                        "endpoint": "127.0.0.10",
                        "port": 6378,
                        "serializer": {"class": "aiocache.serializers.PickleSerializer"},
                        "plugins": [
                            {"class": "aiocache.plugins.HitMissRatioPlugin"},
                            {"class": "aiocache.plugins.TimingPlugin"},
                        ],
                    }
                }
            )
    @pytest.mark.redis
    def test_ensure_plugins_order(self):
        caches.set_config(
            {
                "default": {
                    "cache": "aiocache.RedisCache",
                    "plugins": [
                        {"class": "aiocache.plugins.HitMissRatioPlugin"},
                        {"class": "aiocache.plugins.TimingPlugin"},
                    ],
                }
            }
        )

        cache = caches.get("default")
        assert isinstance(cache.plugins[0], HitMissRatioPlugin)

        cache = caches.create("default")
        assert isinstance(cache.plugins[0], HitMissRatioPlugin)


# ==== aiocache-0.12.2/tests/ut/test_lock.py ====

import asyncio
from unittest.mock import Mock, patch

import pytest

from aiocache.lock import OptimisticLock, OptimisticLockError, RedLock
from ..utils import KEY_LOCK, Keys


class TestRedLock:
    @pytest.fixture
    def lock(self, mock_base_cache):
        RedLock._EVENTS = {}
        yield RedLock(mock_base_cache, Keys.KEY, 20)

    async def test_acquire(self, mock_base_cache, lock):
        await lock._acquire()
        mock_base_cache._add.assert_called_with(KEY_LOCK, lock._value, ttl=20)
        assert lock._EVENTS[KEY_LOCK].is_set() is False

    async def test_release(self, mock_base_cache, lock):
        mock_base_cache._redlock_release.return_value = True
        await lock._acquire()
        await lock._release()
        mock_base_cache._redlock_release.assert_called_with(KEY_LOCK, lock._value)
        assert KEY_LOCK not in lock._EVENTS

    async def test_release_no_acquire(self, mock_base_cache, lock):
        mock_base_cache._redlock_release.return_value = False
        assert KEY_LOCK not in lock._EVENTS
        await lock._release()
        assert KEY_LOCK not in lock._EVENTS

    async def test_context_manager(self, mock_base_cache, lock):
        async with lock:
            pass
        mock_base_cache._add.assert_called_with(KEY_LOCK, lock._value, ttl=20)
        mock_base_cache._redlock_release.assert_called_with(KEY_LOCK, lock._value)

    async def test_raises_exceptions(self, mock_base_cache, lock):
        mock_base_cache._redlock_release.return_value = True
        with pytest.raises(ValueError):
            async with lock:
                raise ValueError
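The lock behaviour exercised by the tests above can be sketched with a toy in-process analogue. This is a hedged illustration, not aiocache's implementation: `MiniRedLock` is a hypothetical class, a plain `dict` stands in for the cache backend, and a membership check stands in for the backend's atomic `_add`.

```python
import asyncio


class MiniRedLock:
    """Toy analogue of the pattern under test: acquire by adding a
    "<key>-lock" entry to the cache; losers of the race wait on a shared
    event until the holder releases."""

    _EVENTS: dict = {}

    def __init__(self, cache: dict, key: str):
        self.cache = cache
        self.key = key + "-lock"

    async def __aenter__(self):
        if self.key not in self.cache:  # stands in for the backend's atomic _add()
            self.cache[self.key] = object()
            self._EVENTS[self.key] = asyncio.Event()
        else:
            # Lost the race: wait until the holder releases, then proceed
            # (mirroring how waiters continue once the first caller finishes).
            await self._EVENTS[self.key].wait()
        return self

    async def __aexit__(self, *exc):
        self.cache.pop(self.key, None)
        event = self._EVENTS.pop(self.key, None)
        if event is not None:
            event.set()


async def demo():
    cache: dict = {}
    order = []

    async def worker(name):
        async with MiniRedLock(cache, "key"):
            await asyncio.sleep(0)  # yield so the second worker contends for the lock
            order.append(name)

    await asyncio.gather(worker("first"), worker("second"))
    return order


print(asyncio.run(demo()))
```

The second worker blocks on the shared event rather than entering the critical section, so the workers complete in submission order.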
    async def test_acquire_block_timeouts(self, mock_base_cache, lock):
        await lock._acquire()

        # Mock .wait() to avoid unawaited coroutine warning.
        with patch.object(RedLock._EVENTS[lock.key], "wait", Mock(spec_set=())):
            with patch("asyncio.wait_for", autospec=True, side_effect=asyncio.TimeoutError):
                mock_base_cache._add.side_effect = ValueError
                result = await lock._acquire()

        assert result is None

    async def test_wait_for_release_no_acquire(self, mock_base_cache, lock):
        mock_base_cache._add.side_effect = ValueError
        assert await lock._acquire() is None

    async def test_multiple_locks_lock(self, mock_base_cache, lock):
        lock_1 = RedLock(mock_base_cache, Keys.KEY, 20)
        lock_2 = RedLock(mock_base_cache, Keys.KEY, 20)
        mock_base_cache._add.side_effect = [True, ValueError(), ValueError()]
        await lock._acquire()

        event = lock._EVENTS[KEY_LOCK]

        assert KEY_LOCK in lock._EVENTS
        assert KEY_LOCK in lock_1._EVENTS
        assert KEY_LOCK in lock_2._EVENTS
        assert not event.is_set()

        await asyncio.gather(lock_1._acquire(), lock._release(), lock_2._acquire())

        assert KEY_LOCK not in lock._EVENTS
        assert KEY_LOCK not in lock_1._EVENTS
        assert KEY_LOCK not in lock_2._EVENTS
        assert event.is_set()


class TestOptimisticLock:
    @pytest.fixture
    def lock(self, mock_base_cache):
        yield OptimisticLock(mock_base_cache, Keys.KEY)

    def test_init(self, mock_base_cache, lock):
        assert lock.client == mock_base_cache
        assert lock._token is None
        assert lock.key == Keys.KEY
        assert lock.ns_key == mock_base_cache._build_key(Keys.KEY)

    async def test_aenter_returns_lock(self, lock):
        assert await lock.__aenter__() is lock

    async def test_aexit_not_crashing(self, lock):
        async with lock:
            pass

    async def test_acquire_calls_get(self, lock):
        await lock._acquire()
        lock.client._gets.assert_called_with(Keys.KEY)
        assert lock._token == lock.client._gets.return_value

    async def test_cas_calls_set_with_token(self, lock, mocker):
        m = mocker.spy(lock.client, "set")
        await lock._acquire()
        await lock.cas("value")
        m.assert_called_with(Keys.KEY, "value", _cas_token=lock._token)

    async def test_wrong_token_raises_error(self, mock_base_cache, lock):
        mock_base_cache._set.return_value = 0
        with pytest.raises(OptimisticLockError):
            await lock.cas("value")


# ==== aiocache-0.12.2/tests/ut/test_plugins.py ====

from unittest.mock import create_autospec

import pytest

from aiocache.base import API, BaseCache
from aiocache.plugins import BasePlugin, HitMissRatioPlugin, TimingPlugin
from ..utils import Keys


class TestBasePlugin:
    async def test_interface_methods(self):
        for method in API.CMDS:
            pre = await getattr(BasePlugin, "pre_{}".format(method.__name__))(None)
            assert pre is None
            post = await getattr(BasePlugin, "post_{}".format(method.__name__))(None)
            assert post is None

    async def test_do_nothing(self):
        assert await BasePlugin().do_nothing() is None


class TestTimingPlugin:
    async def test_save_time(self, mock_cache):
        do_save_time = TimingPlugin().save_time("get")
        await do_save_time("self", mock_cache, took=1)
        await do_save_time("self", mock_cache, took=2)

        assert mock_cache.profiling["get_total"] == 2
        assert mock_cache.profiling["get_max"] == 2
        assert mock_cache.profiling["get_min"] == 1
        assert mock_cache.profiling["get_avg"] == 1.5

    async def test_save_time_post_set(self, mock_cache):
        await TimingPlugin().post_set(mock_cache, took=1)
        await TimingPlugin().post_set(mock_cache, took=2)

        assert mock_cache.profiling["set_total"] == 2
        assert mock_cache.profiling["set_max"] == 2
        assert mock_cache.profiling["set_min"] == 1
        assert mock_cache.profiling["set_avg"] == 1.5

    async def test_interface_methods(self):
        for method in API.CMDS:
            assert hasattr(TimingPlugin, "pre_{}".format(method.__name__))
            assert hasattr(TimingPlugin, "post_{}".format(method.__name__))


class TestHitMissRatioPlugin:
    @pytest.fixture
    def plugin(self):
        return HitMissRatioPlugin()

    async def test_post_get(self, plugin):
        client = create_autospec(BaseCache, instance=True)
        await plugin.post_get(client, Keys.KEY)
        assert client.hit_miss_ratio["hits"] == 0
        assert client.hit_miss_ratio["total"] == 1
        assert client.hit_miss_ratio["hit_ratio"] == 0

        await plugin.post_get(client, Keys.KEY, ret="value")
        assert client.hit_miss_ratio["hits"] == 1
        assert client.hit_miss_ratio["total"] == 2
        assert client.hit_miss_ratio["hit_ratio"] == 0.5

    async def test_post_multi_get(self, plugin):
        client = create_autospec(BaseCache, instance=True)
        await plugin.post_multi_get(client, [Keys.KEY, Keys.KEY_1], ret=[None, None])
        assert client.hit_miss_ratio["hits"] == 0
        assert client.hit_miss_ratio["total"] == 2
        assert client.hit_miss_ratio["hit_ratio"] == 0

        await plugin.post_multi_get(client, [Keys.KEY, Keys.KEY_1], ret=["value", "random"])
        assert client.hit_miss_ratio["hits"] == 2
        assert client.hit_miss_ratio["total"] == 4
        assert client.hit_miss_ratio["hit_ratio"] == 0.5


# ==== aiocache-0.12.2/tests/ut/test_serializers.py ====

import pickle
from collections import namedtuple
from unittest import mock

import pytest

from aiocache.serializers import (
    BaseSerializer,
    JsonSerializer,
    MsgPackSerializer,
    NullSerializer,
    PickleSerializer,
    StringSerializer,
)

Dummy = namedtuple("Dummy", "a, b")

TYPES = [1, 2.0, "hi", True, ["1", 1], {"key": "value"}, Dummy(1, 2)]
JSON_TYPES = [1, 2.0, "hi", True, ["1", 1], {"key": "value"}]


class TestBaseSerializer:
    def test_init(self):
        serializer = BaseSerializer()
        assert serializer.DEFAULT_ENCODING == "utf-8"
        assert serializer.encoding == "utf-8"

    def test_init_encoding(self):
        serializer = BaseSerializer(encoding="whatever")
        assert serializer.DEFAULT_ENCODING == "utf-8"
        assert serializer.encoding == "whatever"

    def test_dumps(self):
        with pytest.raises(NotImplementedError):
            BaseSerializer().dumps("")

    def test_loads(self):
        with pytest.raises(NotImplementedError):
            BaseSerializer().loads("")


class TestNullSerializer:
    def test_init(self):
        serializer = NullSerializer()
        assert isinstance(serializer, BaseSerializer)
        assert serializer.DEFAULT_ENCODING == "utf-8"
        assert serializer.encoding == "utf-8"

    @pytest.mark.parametrize("obj", TYPES)
    def test_set_types(self, obj):
        assert NullSerializer().dumps(obj) is obj

    def test_loads(self):
        assert NullSerializer().loads("hi") == "hi"


class TestStringSerializer:
    def test_init(self):
        serializer = StringSerializer()
        assert isinstance(serializer, BaseSerializer)
        assert serializer.DEFAULT_ENCODING == "utf-8"
        assert serializer.encoding == "utf-8"

    @pytest.mark.parametrize("obj", TYPES)
    def test_set_types(self, obj):
        assert StringSerializer().dumps(obj) == str(obj)

    def test_loads(self):
        assert StringSerializer().loads("hi") == "hi"


class TestPickleSerializer:
    @pytest.fixture
    def serializer(self):
        yield PickleSerializer(protocol=4)

    def test_init(self, serializer):
        assert isinstance(serializer, PickleSerializer)
        assert serializer.DEFAULT_ENCODING is None
        assert serializer.encoding is None
        assert serializer.protocol == 4

    def test_init_sets_default_protocol(self):
        serializer = PickleSerializer()
        assert serializer.protocol == pickle.DEFAULT_PROTOCOL

    @pytest.mark.parametrize("obj", TYPES)
    def test_set_types(self, obj, serializer):
        assert serializer.loads(serializer.dumps(obj)) == obj

    def test_dumps(self, serializer):
        expected = b"\x80\x04\x95\x06\x00\x00\x00\x00\x00\x00\x00\x8c\x02hi\x94."
        assert serializer.dumps("hi") == expected

    def test_dumps_with_none(self, serializer):
        assert isinstance(serializer.dumps(None), bytes)

    def test_loads(self, serializer):
        assert serializer.loads(b"\x80\x03X\x02\x00\x00\x00hiq\x00.") == "hi"

    def test_loads_with_none(self, serializer):
        assert serializer.loads(None) is None

    def test_dumps_and_loads(self, serializer):
        obj = Dummy(1, 2)
        assert serializer.loads(serializer.dumps(obj)) == obj


class TestJsonSerializer:
    def test_init(self):
        serializer = JsonSerializer()
        assert isinstance(serializer, BaseSerializer)
        assert serializer.DEFAULT_ENCODING == "utf-8"
        assert serializer.encoding == "utf-8"

    @pytest.mark.parametrize("obj", JSON_TYPES)
    def test_set_types(self, obj):
        serializer = JsonSerializer()
        assert serializer.loads(serializer.dumps(obj)) == obj

    def test_dumps(self):
        assert (
            JsonSerializer().dumps({"hi": 1}) == '{"hi": 1}'  # json
            or JsonSerializer().dumps({"hi": 1}) == '{"hi":1}'  # ujson
        )

    def test_dumps_with_none(self):
        assert JsonSerializer().dumps(None) == "null"

    def test_loads_with_null(self):
        assert JsonSerializer().loads("null") is None

    def test_loads_with_none(self):
        assert JsonSerializer().loads(None) is None

    def test_dumps_and_loads(self):
        obj = {"hi": 1}
        serializer = JsonSerializer()
        assert serializer.loads(serializer.dumps(obj)) == obj


class TestMsgPackSerializer:
    def test_init(self):
        serializer = MsgPackSerializer()
        assert isinstance(serializer, BaseSerializer)
        assert serializer.DEFAULT_ENCODING == "utf-8"
        assert serializer.encoding == "utf-8"

    def test_init_fails_if_msgpack_not_installed(self):
        with mock.patch("aiocache.serializers.serializers.msgpack", None):
            with pytest.raises(RuntimeError):
                MsgPackSerializer()
            assert JsonSerializer(), "Other serializers should still initialize"

    def test_init_use_list(self):
        serializer = MsgPackSerializer(use_list=True)
        assert serializer.use_list is True

    @pytest.mark.parametrize("obj", JSON_TYPES)
    def test_set_types(self, obj):
        serializer = MsgPackSerializer()
        assert serializer.loads(serializer.dumps(obj)) == obj

    def test_dumps(self):
        assert MsgPackSerializer().dumps("hi") == b"\xa2hi"

    def test_dumps_with_none(self):
        assert isinstance(MsgPackSerializer().dumps(None), bytes)

    def test_loads(self):
        assert MsgPackSerializer().loads(b"\xa2hi") == "hi"

    def test_loads_no_encoding(self):
        assert MsgPackSerializer(encoding=None).loads(b"\xa2hi") == b"hi"

    def test_loads_with_none(self):
        assert MsgPackSerializer().loads(None) is None

    def test_dumps_and_loads_tuple(self):
        serializer = MsgPackSerializer()
        assert serializer.loads(serializer.dumps(Dummy(1, 2))) == [1, 2]

    def test_dumps_and_loads_dict(self):
        serializer = MsgPackSerializer()
        d = {"a": [1, 2, ("1", 2)], "b": {"b": 1, "c": [1, 2]}}
        assert serializer.loads(serializer.dumps(d)) == {
            "a": [1, 2, ["1", 2]],
            "b": {"b": 1, "c": [1, 2]},
        }


# ==== aiocache-0.12.2/tests/utils.py ====

from enum import Enum


class Keys(str, Enum):
    KEY: str = "key"
    KEY_1: str = "random"


KEY_LOCK = Keys.KEY + "-lock"
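The `Keys` helper above relies on a `str`-subclassing enum. A minimal standalone sketch (stdlib only; `DemoKeys` and `DEMO_LOCK` are hypothetical names chosen to avoid clashing with the real `Keys`) shows why that works: members compare equal to raw strings and can be concatenated, which is what lets `KEY_LOCK = Keys.KEY + "-lock"` produce a plain string usable directly as a cache key.

```python
# Sketch of the str-Enum pattern used by tests/utils.py (names are illustrative).
from enum import Enum


class DemoKeys(str, Enum):
    KEY = "key"
    KEY_1 = "random"


DEMO_LOCK = DemoKeys.KEY + "-lock"

# Members behave as the underlying strings:
assert DemoKeys.KEY == "key"
assert isinstance(DemoKeys.KEY, str)
# Concatenation produces an ordinary str, not an enum member:
assert DEMO_LOCK == "key-lock"
assert type(DEMO_LOCK) is str
```

Because the members really are strings, backends and lock code can build derived keys like `"key-lock"` without any explicit `.value` access.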