django-cache-memoize-0.2.0/
django-cache-memoize-0.2.0/PKG-INFO
Metadata-Version: 2.1
Name: django-cache-memoize
Version: 0.2.0
Summary: Django utility for a memoization decorator that uses the Django cache framework.
Home-page: https://github.com/peterbe/django-cache-memoize
Author: Peter Bengtsson
Author-email: mail@peterbe.com
License: MPL-2.0
Description: ====================
django-cache-memoize
====================
* License: MPL 2.0
.. image:: https://github.com/peterbe/django-cache-memoize/workflows/Python/badge.svg
:alt: Build Status
:target: https://github.com/peterbe/django-cache-memoize/actions?query=workflow%3APython
.. image:: https://readthedocs.org/projects/django-cache-memoize/badge/?version=latest
:alt: Documentation Status
:target: https://django-cache-memoize.readthedocs.io/en/latest/?badge=latest
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/ambv/black
Django utility for a memoization decorator that uses the Django cache framework.
For versions of Python and Django, check out `the tox.ini file`_.
.. _`the tox.ini file`: https://github.com/peterbe/django-cache-memoize/blob/master/tox.ini
Key Features
------------
* Memoized function calls can be invalidated.
* Works with non-trivial arguments and keyword arguments.
* Insight into cache hits and cache misses with a callback.
* Ability to use as a "guard" for repeated execution when storing the function
result isn't important or needed.
Installation
============
.. code-block:: shell
pip install django-cache-memoize
Usage
=====
.. code-block:: python
# Import the decorator
from cache_memoize import cache_memoize
# Other imports used by this example
import random
from django import http
# Attach decorator to cacheable function with a timeout of 100 seconds.
@cache_memoize(100)
def expensive_function(start, end):
return random.randint(start, end)
# Just a regular Django view
def myview(request):
# If you run this view repeatedly you'll get the same
# output every time for 100 seconds.
return http.HttpResponse(str(expensive_function(0, 100)))
The caching uses `Django's default cache framework`_. Ultimately, it calls
``django.core.cache.cache.set(cache_key, function_out, expiration)``.
So if you have a function that returns something that can't be pickled,
the caching won't work.
Behind the scenes, this uses Django's simple, low-level cache API. You can
use this API to store objects in the cache with any level of granularity
you like. You can cache any Python object that can be pickled safely:
strings, dictionaries, lists of model objects, and so forth. (Most
common Python objects can be pickled; refer to the Python documentation
for more information about pickling.)
See `documentation`_.
.. _`Django's default cache framework`: https://docs.djangoproject.com/en/1.11/topics/cache/
.. _`documentation`: https://docs.djangoproject.com/en/1.11/topics/cache/#the-low-level-cache-api
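For reference, the low-level cache API that the decorator builds on looks like
this (plain Django, shown only for orientation; nothing here is specific to
``django-cache-memoize``):

.. code-block:: python

    from django.core.cache import cache

    # Store a value for 30 seconds, then read it back.
    cache.set("my_key", {"some": "value"}, 30)
    cache.get("my_key")              # -> {'some': 'value'}
    cache.get("missing", "default")  # -> 'default'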
Example Usage
=============
See this blog post: `How to use django-cache-memoize`_
It covers the same ground as the Usage example above, but in a little more
detail. In particular, it demonstrates the difference between *not* using
``django-cache-memoize`` and then adding it to your code afterwards.
.. _`How to use django-cache-memoize`: https://www.peterbe.com/plog/how-to-use-django-cache-memoize
Advanced Usage
==============
``args_rewrite``
~~~~~~~~~~~~~~~~
Internally the decorator rewrites every argument and keyword argument to
the function it wraps into a concatenated string. The first thing you
might want to do is help the decorator rewrite the arguments to something
more suitable as a cache key string. For example, suppose you have instances
of a class whose ``__str__`` method doesn't return a unique value. For example:
.. code-block:: python
from django.db import models

class Record(models.Model):
name = models.CharField(max_length=100)
lastname = models.CharField(max_length=100)
friends = models.ManyToManyField(SomeOtherModel)
def __str__(self):
return self.name
# Example use:
>>> record = Record.objects.create(name='Peter', lastname='Bengtsson')
>>> print(record)
Peter
>>> record2 = Record.objects.create(name='Peter', lastname='Different')
>>> print(record2)
Peter
This is a contrived example, but basically *you know* that the ``str()``
conversion of certain arguments isn't safe. Then you can pass in a callable
called ``args_rewrite``. It gets the same positional and keyword arguments
as the function you're decorating. Here's an example implementation:
.. code-block:: python
from cache_memoize import cache_memoize
def count_friends_args_rewrite(record):
# The 'id' is always unique. Use that instead of the default __str__
return record.id
@cache_memoize(100, args_rewrite=count_friends_args_rewrite)
def count_friends(record):
# Assume this is an expensive function whose result we want to memoize.
return record.friends.all().count()
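For the curious, here is a simplified sketch of how the default cache key is
derived (the real implementation lives in ``src/cache_memoize/__init__.py``
further down in this archive; the helper name is made up for illustration):

.. code-block:: python

    import hashlib
    from urllib.parse import quote

    def sketch_make_cache_key(prefix, *args, **kwargs):
        # Join the str()-converted args and the sorted kwargs, then hash them
        # together with the prefix (the same idea as the real key generator).
        parts = [quote(str(a)) for a in args]
        parts += [
            "{}={}".format(quote(k), quote(str(v)))
            for k, v in sorted(kwargs.items())
        ]
        return hashlib.md5(
            ("cache_memoize" + prefix + ":".join(parts)).encode("utf-8")
        ).hexdigest()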
``prefix``
~~~~~~~~~~
By default the prefix becomes the name of the function. Consider:
.. code-block:: python
from cache_memoize import cache_memoize
@cache_memoize(10, prefix='randomness')
def function1():
return random.random()
@cache_memoize(10, prefix='randomness')
def function2(): # different name, same arguments, same functionality
return random.random()
# Example use
>>> function1()
0.39403406043780986
>>> function1()
0.39403406043780986
>>> # ^ repeated of course
>>> function2()
0.39403406043780986
>>> # ^ because the prefix was explicitly made the same, the cache key is the same
``hit_callable``
~~~~~~~~~~~~~~~~
If set, a function that gets called with the original positional and keyword
arguments **if** the cache was able to find and return a cache hit.
For example, suppose you want to tell your ``statsd`` server every time
there's a cache hit.
.. code-block:: python
from cache_memoize import cache_memoize
def _cache_hit(user, **kwargs):
statsdthing.incr(f'cachehit:{user.id}', 1)
@cache_memoize(10, hit_callable=_cache_hit)
def calculate_tax(user, tax=0.1):
return ...
``miss_callable``
~~~~~~~~~~~~~~~~~
Exact same functionality as ``hit_callable`` except the obvious difference
that it gets called if it was *not* a cache hit.
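For example, mirroring the ``hit_callable`` sketch above (``statsdthing`` is
again just a placeholder for whatever metrics client you use):

.. code-block:: python

    from cache_memoize import cache_memoize

    def _cache_miss(user, **kwargs):
        # Placeholder metrics call; swap in your own instrumentation.
        statsdthing.incr(f'cachemiss:{user.id}', 1)

    @cache_memoize(10, miss_callable=_cache_miss)
    def calculate_tax(user, tax=0.1):
        return ...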
``store_result``
~~~~~~~~~~~~~~~~
This is useful if you have a function you want to make sure only gets called
once per timeout expiration but you don't actually care that much about
what the function return value was. Perhaps because you know that the
function returns something that would quickly fill up your ``memcached`` or
perhaps you know it returns something that can't be pickled. In that case you
can set ``store_result`` to ``False``. Calls served from the cache will then
simply return ``True`` instead of the original result.
.. code-block:: python
from cache_memoize import cache_memoize
@cache_memoize(1000, store_result=False)
def send_tax_returns(user):
# something something time consuming
...
return some_non_pickleable_thing
def myview(request):
# Hit this view as much as you like; the 'send_tax_returns' function
# won't be called more than once every 1000 seconds.
send_tax_returns(request.user)
``cache_exceptions``
~~~~~~~~~~~~~~~~~~~~
This is useful if you have a function that can raise an exception as a valid
result. If the decorated function raises any of the specified exceptions, the
exception is cached and raised as normal. Subsequent cached calls will
immediately re-raise the exception and the function will not be executed.
``cache_exceptions`` accepts an exception class or a tuple of exception classes.
This option allows you to cache said exceptions like any other result.
Only exceptions raised from the classes provided as ``cache_exceptions``
are cached; all others are propagated immediately.
.. code-block:: python
>>> from cache_memoize import cache_memoize
>>> class InvalidParameter(Exception):
... pass
>>> @cache_memoize(1000, cache_exceptions=(InvalidParameter, ))
... def run_calculations(parameter):
... # something something time consuming
... raise InvalidParameter
>>> run_calculations(1)
Traceback (most recent call last):
...
InvalidParameter
# run_calculations will now raise InvalidParameter immediately
# without running the expensive calculation
>>> run_calculations(1)
Traceback (most recent call last):
...
InvalidParameter
``cache_alias``
~~~~~~~~~~~~~~~
The ``cache_alias`` argument allows you to use a cache other than the default.
.. code-block:: python
# Given settings like:
# CACHES = {
# 'default': {...},
# 'other': {...},
# }
@cache_memoize(1000, cache_alias='other')
def myfunc(start, end):
return random.random()
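For completeness, a minimal pair of cache definitions that would satisfy the
example above (the in-memory backend is used here purely for illustration):

.. code-block:: python

    # settings.py
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
        },
        "other": {
            "BACKEND": "django.core.cache.backends.locmem.LocMemCache",
            "LOCATION": "other",
        },
    }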
Cache invalidation
~~~~~~~~~~~~~~~~~~
When you want to "undo" some caching that has been done, you simply call
``.invalidate`` on the function with the same arguments.
.. code-block:: python
from cache_memoize import cache_memoize
@cache_memoize(10)
def expensive_function(start, end):
return random.randint(start, end)
>>> expensive_function(1, 100)
65
>>> expensive_function(1, 100)
65
>>> expensive_function(100, 200)
121
>>> expensive_function.invalidate(1, 100)
>>> expensive_function(1, 100)
89
>>> expensive_function(100, 200)
121
An "alias" of doing the same thing is to pass a keyword argument called
``_refresh=True``. Like this:
.. code-block:: python
# Continuing from the code block above
>>> expensive_function(100, 200)
121
>>> expensive_function(100, 200, _refresh=True)
177
>>> expensive_function(100, 200)
177
There is no way to clear more than one cache key at a time. In the above example,
you had to know the "original arguments" when you wanted to invalidate
the cache. There is no way to search for all cache keys that match a
certain pattern.
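The decorated function also exposes a ``get_cache_key`` helper (see
``src/cache_memoize/__init__.py`` below), which returns the key that would be
used for a given set of arguments, in case you want to inspect or delete the
cache entry yourself:

.. code-block:: python

    from django.core.cache import cache

    key = expensive_function.get_cache_key(1, 100)
    cache.get(key)     # the memoized value, or None if not cached
    cache.delete(key)  # same effect as expensive_function.invalidate(1, 100)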
Compatibility
=============
* Python 3.8, 3.9, 3.10 & 3.11
* Django 3.2, 4.1 & 4.2
Check out the `tox.ini`_ file for the most up-to-date list of versions
covered by the test suite.
.. _`tox.ini`: https://github.com/peterbe/django-cache-memoize/blob/master/tox.ini
Prior Art
=========
History
~~~~~~~
`Mozilla Symbol Server`_ is written in Django. It's a web service that
sits between C++ debuggers and AWS S3. It shuffles symbol files in and out of
AWS S3. Symbol files are for C++ (and other compiled languages) what
sourcemaps are for JavaScript.
This service gets a LOT of traffic. The download traffic (proxying requests
for symbols in S3) amounts to about 40 requests per second. Due to the nature
of the application most of these GETs result in a 404 Not Found, but instead
of asking AWS S3 for every single file, these lookups are cached in a
carefully tuned `Redis`_ setup. This Redis cache is also connected
to the part of the code that uploads new files.
New uploads arrive as zip bundles of files from Mozilla's build
systems, at a rate of about 600MB every minute, each containing on average
about 100 files. When a new upload comes in we need to quickly find out
whether each file already exists in S3, and this lookup gets cached since the
same files are often repeated in different uploads. But when a file does get
uploaded into S3 we need to quickly and confidently invalidate any local
caches. That way you get to keep a really aggressive cache without any stale
periods.
This is the use case ``django-cache-memoize`` was built for and tested in.
It was originally written for Python 3.6 and Django 1.11 but, when extracted,
was made compatible with Python 2.7 and with Django versions as far back as 1.8.
``django-cache-memoize`` is also used in `SongSear.ch`_ to cache short
queries in the autocomplete search input. All autocomplete is done by
Elasticsearch, which is amazingly fast, but not as fast as ``memcached``.
.. _`Mozilla Symbol Server`: https://symbols.mozilla.org
.. _`Redis`: https://redis.io/
.. _`SongSear.ch`: https://songsear.ch
"Competition"
~~~~~~~~~~~~~
There is already `django-memoize`_ by `Thomas Vavrys`_.
It too is a memoization decorator for use in Django, and it also
uses the default cache framework as storage. It uses ``inspect`` on the
decorated function to build a cache key.
In benchmarks running both ``django-memoize`` and ``django-cache-memoize``
I found ``django-cache-memoize`` to be **~4 times faster** on average.
Another key difference is that ``django-cache-memoize`` uses ``str()`` whereas
``django-memoize`` uses ``repr()``, which means that with certain mutable objects
(e.g. class instances) as arguments the caching will not work. For example,
this does *not* work in ``django-memoize``:
.. code-block:: python
from memoize import memoize
@memoize(60)
def count_user_groups(user):
return user.groups.all().count()
def myview(request):
# this will never be memoized
print(count_user_groups(request.user))
However, this works...
.. code-block:: python
from cache_memoize import cache_memoize
@cache_memoize(60)
def count_user_groups(user):
return user.groups.all().count()
def myview(request):
# this *will* work as expected
print(count_user_groups(request.user))
.. _`django-memoize`: http://pythonhosted.org/django-memoize/
.. _`Thomas Vavrys`: https://github.com/tvavrys
Development
===========
The most basic thing is to clone the repo and run:
.. code-block:: shell
pip install -e ".[dev]"
tox
Code style is all black
~~~~~~~~~~~~~~~~~~~~~~~
All code has to be formatted with `Black <https://github.com/ambv/black>`_,
and the best tool for checking this is ``therapist``, since it can run
everything, help you fix things, and make sure linting passes before
you git commit. This project also uses ``flake8`` to check things
Black can't check.
To check linting with ``tox`` use:
.. code:: bash
tox -e lint-py36
To install the ``therapist`` pre-commit hook simply run:
.. code:: bash
therapist install
When you run ``therapist run`` it will only check the files you've touched.
To run it for all files use:
.. code:: bash
therapist run --use-tracked-files
And to fix all/any issues run:
.. code:: bash
therapist run --use-tracked-files --fix
Keywords: django,memoize,cache,decorator
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Environment :: Web Environment :: Mozilla
Classifier: Framework :: Django
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Internet :: WWW/HTTP
Requires-Python: >=3.8
Provides-Extra: dev
django-cache-memoize-0.2.0/README.rst
django-cache-memoize-0.2.0/setup.cfg
[egg_info]
tag_build =
tag_date = 0
django-cache-memoize-0.2.0/setup.py
from os import path
from setuptools import setup, find_packages
_here = path.dirname(__file__)
setup(
name="django-cache-memoize",
version="0.2.0",
description=(
"Django utility for a memoization decorator that uses the Django "
"cache framework."
),
long_description=open(path.join(_here, "README.rst")).read(),
author="Peter Bengtsson",
author_email="mail@peterbe.com",
license="MPL-2.0",
url="https://github.com/peterbe/django-cache-memoize",
packages=find_packages(where="src"),
package_dir={"": "src"},
python_requires=">=3.8",
classifiers=[
"Development Status :: 5 - Production/Stable",
"Environment :: Web Environment :: Mozilla",
"Framework :: Django",
"Intended Audience :: Developers",
"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
"License :: OSI Approved :: MIT License",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
"Programming Language :: Python :: 3 :: Only",
"Topic :: Internet :: WWW/HTTP",
],
keywords=["django", "memoize", "cache", "decorator"],
zip_safe=False,
extras_require={"dev": ["flake8", "tox", "twine", "therapist", "black"]},
)
django-cache-memoize-0.2.0/src/cache_memoize/__init__.py
from functools import wraps
import itertools
import hashlib
from urllib.parse import quote
from django.core.cache import caches, DEFAULT_CACHE_ALIAS
from django.utils.encoding import force_bytes
MARKER = object()
def cache_memoize(
timeout,
prefix=None,
args_rewrite=None,
hit_callable=None,
miss_callable=None,
key_generator_callable=None,
store_result=True,
cache_exceptions=(),
cache_alias=DEFAULT_CACHE_ALIAS,
):
"""Decorator for memoizing function calls where we use the
"local cache" to store the result.
:arg int timeout: Number of seconds to store the result if not None
:arg string prefix: If None becomes the function name.
:arg function args_rewrite: Callable that rewrites the args first. Useful
if your function needs nontrivial types but you know a simple way to
re-represent them for the sake of the cache key.
:arg function hit_callable: Gets executed if key was in cache.
:arg function miss_callable: Gets executed if key was *not* in cache.
:arg key_generator_callable: Custom cache key name generator.
:arg bool store_result: If you know the result is not important, just
that the cache blocked it from running repeatedly, set this to False.
:arg Exception cache_exceptions: Accepts an exception class or a tuple of
exception classes. If the decorated function raises any of these exceptions,
the exception is cached and raised as normal. Subsequent cached calls will
immediately re-raise the exception and the function will not be executed.
All other exceptions are propagated immediately.
:arg string cache_alias: The cache alias to use; defaults to 'default'.
Usage::
@cache_memoize(
300, # 5 min
args_rewrite=lambda user: user.email,
hit_callable=lambda: print("Cache hit!"),
miss_callable=lambda: print("Cache miss :("),
)
def hash_user_email(user):
dk = hashlib.pbkdf2_hmac('sha256', user.email.encode(), b'salt', 100000)
return binascii.hexlify(dk)
Or, when you don't actually need the result, useful if you know it's not
valuable to store the execution result::
@cache_memoize(
300, # 5 min
store_result=False,
)
def send_email(email):
somelib.send(email, subject="You rock!", ...)
Also, whatever gets cached this way can be invalidated again.
For example::
@cache_memoize(100)
def callmeonce(arg1):
print(arg1)
callmeonce('peter') # will print 'peter'
callmeonce('peter') # nothing printed
callmeonce.invalidate('peter')
callmeonce('peter') # will print 'peter'
If, for good reason, you want to bypass the cache and force the
decorated function to run, you can pass one extra
keyword argument called `_refresh`. For example::
@cache_memoize(100)
def callmeonce(arg1):
print(arg1)
callmeonce('peter') # will print 'peter'
callmeonce('peter') # nothing printed
callmeonce('peter', _refresh=True) # will print 'peter'
"""
if args_rewrite is None:
def noop(*args):
return args
args_rewrite = noop
def decorator(func):
def _default_make_cache_key(*args, **kwargs):
cache_key = ":".join(
itertools.chain(
(quote(str(x)) for x in args_rewrite(*args)),
(
"{}={}".format(quote(k), quote(str(v)))
for k, v in sorted(kwargs.items())
),
)
)
prefix_ = prefix or ".".join((func.__module__ or "", func.__qualname__))
return hashlib.md5(
force_bytes("cache_memoize" + prefix_ + cache_key)
).hexdigest()
_make_cache_key = key_generator_callable or _default_make_cache_key
@wraps(func)
def inner(*args, **kwargs):
# The cache backend is fetched here (not in the outer decorator scope)
# to guarantee thread-safety at runtime.
cache = caches[cache_alias]
# The cache key string should never be dependent on special keyword
# arguments like _refresh. So extract it into a variable as soon as
# possible.
_refresh = bool(kwargs.pop("_refresh", False))
cache_key = _make_cache_key(*args, **kwargs)
if _refresh:
result = MARKER
else:
result = cache.get(cache_key, MARKER)
if result is MARKER:
# If the function call raises an exception we want to cache,
# catch it, else let it propagate.
try:
result = func(*args, **kwargs)
except cache_exceptions as exception:
result = exception
if not store_result:
# Then the result isn't valuable/important to store but
# we want to store something. Just to remember that
# it has been done.
cache.set(cache_key, True, timeout)
else:
cache.set(cache_key, result, timeout)
if miss_callable:
miss_callable(*args, **kwargs)
elif hit_callable:
hit_callable(*args, **kwargs)
# If the result is an exception we've caught and cached, raise it
# in the end as to not change the API of the function we're caching.
if isinstance(result, Exception):
raise result
return result
def invalidate(*args, **kwargs):
# The cache backend is fetched here (not in the outer decorator scope)
# to guarantee thread-safety at runtime.
cache = caches[cache_alias]
kwargs.pop("_refresh", None)
cache_key = _make_cache_key(*args, **kwargs)
cache.delete(cache_key)
def get_cache_key(*args, **kwargs):
kwargs.pop("_refresh", None)
return _make_cache_key(*args, **kwargs)
inner.invalidate = invalidate
inner.get_cache_key = get_cache_key
return inner
return decorator
django-cache-memoize-0.2.0/src/django_cache_memoize.egg-info/PKG-INFO
django-cache-memoize-0.2.0/src/django_cache_memoize.egg-info/SOURCES.txt
README.rst
setup.py
src/cache_memoize/__init__.py
src/django_cache_memoize.egg-info/PKG-INFO
src/django_cache_memoize.egg-info/SOURCES.txt
src/django_cache_memoize.egg-info/dependency_links.txt
src/django_cache_memoize.egg-info/not-zip-safe
src/django_cache_memoize.egg-info/requires.txt
src/django_cache_memoize.egg-info/top_level.txt
django-cache-memoize-0.2.0/src/django_cache_memoize.egg-info/dependency_links.txt
django-cache-memoize-0.2.0/src/django_cache_memoize.egg-info/not-zip-safe
django-cache-memoize-0.2.0/src/django_cache_memoize.egg-info/requires.txt
[dev]
flake8
tox
twine
therapist
black
django-cache-memoize-0.2.0/src/django_cache_memoize.egg-info/top_level.txt
cache_memoize