dogpile.cache-0.9.0/LICENSE

Copyright 2005-2019 Michael Bayer.

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

dogpile.cache-0.9.0/MANIFEST.in

recursive-include docs *.html *.css *.txt *.js *.jpg *.png *.py Makefile *.rst *.sty
recursive-include tests *.py *.dat

include README* LICENSE CHANGES* log_tests.ini tox.ini hash_port.py
prune docs/build/output

dogpile.cache-0.9.0/PKG-INFO

Metadata-Version: 1.1
Name: dogpile.cache
Version: 0.9.0
Summary: A caching front-end based on the Dogpile lock.
Home-page: https://github.com/sqlalchemy/dogpile.cache
Author: Mike Bayer
Author-email: mike_mp@zzzcomputing.com
License: BSD
Description: dogpile
        =======

        Dogpile consists of two subsystems, one building on top of the other.

        ``dogpile`` provides the concept of a "dogpile lock", a control
        structure which allows a single thread of execution to be selected as
        the "creator" of some resource, while allowing other threads of
        execution to refer to the previous version of this resource as the
        creation proceeds; if there is no previous version, then those
        threads block until the object is available.

        ``dogpile.cache`` is a caching API which provides a generic interface
        to caching backends of any variety, and additionally provides API
        hooks which integrate these cache backends with the locking mechanism
        of ``dogpile``.

        Overall, dogpile.cache is intended as a replacement for the Beaker
        caching system, the internals of which are written by the same
        author.  All the ideas of Beaker which "work" are re-implemented in
        dogpile.cache in a more efficient and succinct manner, and all the
        cruft (Beaker's internals were first written in 2005) relegated to
        the trash heap.

        Documentation
        -------------

        See dogpile.cache's full documentation at
        https://dogpilecache.sqlalchemy.org.  The sections below provide a
        brief synopsis of the ``dogpile`` packages.

        Features
        --------

        * A succinct API which encourages up-front configuration of
          pre-defined "regions", each one defining a set of caching
          characteristics including storage backend, configuration options,
          and default expiration time.
        * A standard get/set/delete API as well as a function decorator API
          is provided.
        * The mechanics of key generation are fully customizable.
          The function decorator API features a pluggable "key generator" to
          customize how cache keys are made to correspond to function calls,
          and an optional "key mangler" feature provides for pluggable
          mangling of keys (such as encoding, SHA-1 hashing) as desired for
          each region.
        * The dogpile lock, first developed as the core engine behind the
          Beaker caching system, here vastly simplified, improved, and better
          tested.  Some key performance issues that were intrinsic to
          Beaker's architecture, particularly that values would frequently be
          "double-fetched" from the cache, have been fixed.
        * Backends implement their own version of a "distributed" lock, where
          the "distribution" matches the backend's storage system.  For
          example, the memcached backends allow all clients to coordinate
          creation of values using memcached itself.  The dbm file backend
          uses a lockfile alongside the dbm file.  New backends, such as a
          Redis-based backend, can provide their own locking mechanism
          appropriate to the storage engine.
        * Writing new backends or hacking on the existing backends is
          intended to be routine - all that's needed are basic get/set/delete
          methods.  A distributed lock tailored towards the backend is an
          optional addition, else dogpile uses a regular thread mutex.  New
          backends can be registered with dogpile.cache directly or made
          available via setuptools entry points.
        * Included backends feature three memcached backends
          (python-memcached, pylibmc, bmemcached), a Redis backend, a backend
          based on Python's anydbm, and a plain dictionary backend.
        * Space for third party plugins, including one which provides the
          dogpile.cache engine to Mako templates.
Keywords: caching
Platform: UNKNOWN
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: BSD License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3

dogpile.cache-0.9.0/README.rst

dogpile
=======

Dogpile consists of two subsystems, one building on top of the other.

``dogpile`` provides the concept of a "dogpile lock", a control structure
which allows a single thread of execution to be selected as the "creator" of
some resource, while allowing other threads of execution to refer to the
previous version of this resource as the creation proceeds; if there is no
previous version, then those threads block until the object is available.

``dogpile.cache`` is a caching API which provides a generic interface to
caching backends of any variety, and additionally provides API hooks which
integrate these cache backends with the locking mechanism of ``dogpile``.

Overall, dogpile.cache is intended as a replacement for the Beaker caching
system, the internals of which are written by the same author.  All the ideas
of Beaker which "work" are re-implemented in dogpile.cache in a more efficient
and succinct manner, and all the cruft (Beaker's internals were first written
in 2005) relegated to the trash heap.

Documentation
-------------

See dogpile.cache's full documentation at
https://dogpilecache.sqlalchemy.org.  The sections below provide a brief
synopsis of the ``dogpile`` packages.

Features
--------

* A succinct API which encourages up-front configuration of pre-defined
  "regions", each one defining a set of caching characteristics including
  storage backend, configuration options, and default expiration time.
* A standard get/set/delete API as well as a function decorator API is
  provided.
* The mechanics of key generation are fully customizable.  The function
  decorator API features a pluggable "key generator" to customize how cache
  keys are made to correspond to function calls, and an optional "key
  mangler" feature provides for pluggable mangling of keys (such as encoding,
  SHA-1 hashing) as desired for each region.
* The dogpile lock, first developed as the core engine behind the Beaker
  caching system, here vastly simplified, improved, and better tested.  Some
  key performance issues that were intrinsic to Beaker's architecture,
  particularly that values would frequently be "double-fetched" from the
  cache, have been fixed.
* Backends implement their own version of a "distributed" lock, where the
  "distribution" matches the backend's storage system.  For example, the
  memcached backends allow all clients to coordinate creation of values using
  memcached itself.  The dbm file backend uses a lockfile alongside the dbm
  file.  New backends, such as a Redis-based backend, can provide their own
  locking mechanism appropriate to the storage engine.
* Writing new backends or hacking on the existing backends is intended to be
  routine - all that's needed are basic get/set/delete methods.  A
  distributed lock tailored towards the backend is an optional addition, else
  dogpile uses a regular thread mutex.  New backends can be registered with
  dogpile.cache directly or made available via setuptools entry points.
* Included backends feature three memcached backends (python-memcached,
  pylibmc, bmemcached), a Redis backend, a backend based on Python's anydbm,
  and a plain dictionary backend.
* Space for third party plugins, including one which provides the
  dogpile.cache engine to Mako templates.

dogpile.cache-0.9.0/docs/_sources/api.rst.txt

===
API
===

Region
======

.. automodule:: dogpile.cache.region
    :members:

Backend API
=============

See the section :ref:`creating_backends` for details on how to register new
backends or :ref:`changing_backend_behavior` for details on how to alter the
behavior of existing backends.

.. automodule:: dogpile.cache.api
    :members:

Backends
==========

.. automodule:: dogpile.cache.backends.memory
    :members:

.. automodule:: dogpile.cache.backends.memcached
    :members:

.. automodule:: dogpile.cache.backends.redis
    :members:

.. automodule:: dogpile.cache.backends.file
    :members:

.. automodule:: dogpile.cache.proxy
    :members:

.. automodule:: dogpile.cache.backends.null
    :members:

Exceptions
==========

.. automodule:: dogpile.cache.exception
    :members:

Plugins
========

.. automodule:: dogpile.cache.plugins.mako_cache
    :members:

Utilities
=========

.. currentmodule:: dogpile.cache.util

.. autofunction:: function_key_generator

.. autofunction:: kwarg_function_key_generator

.. autofunction:: sha1_mangle_key

.. autofunction:: length_conditional_mangler

dogpile Core
============

.. autoclass:: dogpile.Lock
    :members:

.. autoclass:: dogpile.NeedRegenerationException
    :members:

.. autoclass:: dogpile.util.ReadWriteMutex
    :members:
.. autoclass:: dogpile.util.NameRegistry
    :members:

dogpile.cache-0.9.0/docs/_sources/changelog.rst.txt

==============
Changelog
==============

.. changelog:: :version: 0.9.0 :released: Mon Oct 28 2019

.. change:: :tags: feature

    Added logging facilities into :class:`.CacheRegion`, to indicate key
    events such as cache keys missing or regeneration of values.  As these
    can be very high volume log messages, ``logging.DEBUG`` is used as the
    log level for the events.  Pull request courtesy Stéphane Brunner.

.. changelog:: :version: 0.8.0 :released: Fri Sep 20 2019

.. change:: :tags: bug, setup :tickets: 157

    Removed the "python setup.py test" feature in favor of a straight run of
    "tox".  Per Pypa / pytest developers, "setup.py" commands are in general
    headed towards deprecation in favor of tox.  The tox.ini script has been
    updated such that running "tox" with no arguments will perform a single
    run of the test suite against the default installed Python interpreter.

    .. seealso::

        https://github.com/pypa/setuptools/issues/1684

        https://github.com/pytest-dev/pytest/issues/5534

.. change:: :tags: bug, py3k :tickets: 154

    Replaced the Python compatibility routines for ``getfullargspec()`` with
    a fully vendored version from Python 3.3.  Originally, Python was
    emitting deprecation warnings for this function in Python 3.8 alphas.
    While this change was reverted, it was observed that Python 3
    implementations for ``getfullargspec()`` are an order of magnitude slower
    as of the 3.4 series where it was rewritten against ``Signature``.  While
    Python plans to improve upon this situation, SQLAlchemy projects for now
    are using a simple replacement to avoid any future issues.

.. change:: :tags: bug, installation :tickets: 160

    Pinned minimum version of Python decorator module at 4.0.0 (July, 2015)
    as previous versions don't provide the API that dogpile is using.

.. change:: :tags: bug, py3k :tickets: 159

    Fixed the :func:`.sha1_mangle_key` key mangler to coerce incoming Unicode
    objects into bytes as is required by the Py3k version of this function.

.. changelog:: :version: 0.7.1 :released: Tue Dec 11 2018

.. change:: :tags: bug, region :tickets: 139

    Fixed regression in 0.7.0 caused by :ticket:`136` where the assumed
    arguments for the :paramref:`.CacheRegion.async_creation_runner` expanded
    to include the new :paramref:`.CacheRegion.get_or_create.creator_args`
    parameter, as it was not tested that the async runner would be implicitly
    called with these arguments when the
    :meth:`.CacheRegion.cache_on_arguments` decorator was used.  The exact
    signature of ``async_creation_runner`` is now restored to have the same
    arguments in all cases.

.. changelog:: :version: 0.7.0 :released: Mon Dec 10 2018

.. change:: :tags: bug :tickets: 137

    The ``decorator`` module is now used when creating function decorators
    within :meth:`.CacheRegion.cache_on_arguments` and
    :meth:`.CacheRegion.cache_multi_on_arguments` so that function signatures
    are preserved.  Pull request courtesy ankitpatel96.

    Additionally adds a small performance enhancement which is to avoid
    internally creating a ``@wraps()`` decorator for the creator function on
    every get operation, by allowing the arguments to the creator be passed
    separately to :meth:`.CacheRegion.get_or_create`.

.. change:: :tags: bug, py3k :tickets: 129

    Fixed all Python 3.x deprecation warnings including
    ``inspect.getargspec()``.

.. changelog:: :version: 0.6.8 :released: Sat Nov 24 2018

..
change:: :tags: change

    Project hosting has moved to GitHub, under the SQLAlchemy organization at
    https://github.com/sqlalchemy/dogpile.cache

.. changelog:: :version: 0.6.7 :released: Thu Jul 26 2018

.. change:: :tags: bug :tickets: 128

    Fixed issue in the :meth:`.CacheRegion.get_or_create_multi` method which
    was erroneously considering the cached value as the timestamp field if
    the :meth:`.CacheRegion.invalidate` method had been used, usually causing
    a ``TypeError`` to occur, or in less frequent cases an invalid result for
    whether or not the cached value was invalid, leading to excessive caching
    or regeneration.  The issue was a regression caused by an implementation
    issue in the pluggable invalidation feature added in :ticket:`38`.

.. changelog:: :version: 0.6.6 :released: Wed Jun 27 2018

.. change:: :tags: feature :tickets: 123

    Added method :attr:`.CacheRegion.actual_backend` which calculates and
    caches the actual backend for the region, which may be abstracted by the
    use of one or more :class:`.ProxyBackend` subclasses.

.. change:: :tags: bug :tickets: 122

    Fixed a condition in the :class:`.Lock` where the "get" function could be
    called a second time unnecessarily, when returning an existing, expired
    value from the cache.

.. changelog:: :version: 0.6.5 :released: Mon Mar 5 2018

.. change:: :tags: bug :tickets: 119

    Fixed import issue for Python 3.7 where several variables named
    ``async`` were present, leading to syntax errors.  Pull request courtesy
    Brian Sheldon.

.. changelog:: :version: 0.6.4 :released: Mon Jun 26, 2017

.. change:: :tags: bug

    The method :meth:`.Region.get_or_create_multi` will not pass an empty
    dictionary to the cache backend if no values are ultimately to be stored,
    based on the use of the
    :paramref:`.Region.get_or_create_multi.should_cache_fn` function.  This
    empty dictionary is unnecessary and can cause API problems for backends
    like that of Redis.  Pull request courtesy Tobias Sauerwein.

.. change:: :tags: bug

    The :attr:`.api.NO_VALUE` constant now has a fixed ``__repr__()`` output,
    so that scenarios where this constant's string value ends up being used
    as a cache key do not create multiple values.  Pull request courtesy Paul
    Brown.

.. change:: :tags: bug

    A new exception class :class:`.exception.PluginNotFound` is now raised
    when a particular cache plugin class cannot be located either as a
    setuptools entrypoint or as a registered backend.  Previously, a plain
    ``Exception`` was thrown.  Pull request courtesy Jamie Lennox.

.. changelog:: :version: 0.6.3 :released: Thu May 18, 2017

.. change:: :tags: feature

    Added ``replace_existing_backend`` to
    :meth:`.CacheRegion.configure_from_config`.  Pull request courtesy Daniel
    Kraus.

.. changelog:: :version: 0.6.2 :released: Tue Aug 16 2016

.. change:: :tags: feature :tickets: 38

    Added a new system to allow custom plugins specific to the issue of
    "invalidate the entire region", using a new base class
    :class:`.RegionInvalidationStrategy`.  As there are many potential
    strategies to this (special backend function, storing special keys, etc.)
    the mechanism for both soft and hard invalidation is now customizable.
    New approaches to region invalidation can be contributed as documented
    recipes.  Pull request courtesy Alexander Makarov.

.. change:: :tags: feature :tickets: 43

    Added a new cache key generator :func:`.kwarg_function_key_generator`,
    which takes keyword arguments as well as positional arguments into
    account when forming the cache key.

..
change:: :tags: bug Restored some more util symbols that users may have been relying upon (although these were not necessarily intended as user-facing): ``dogpile.cache.util.coerce_string_conf``, ``dogpile.cache.util.KeyReentrantMutex``, ``dogpile.cache.util.memoized_property``, ``dogpile.cache.util.PluginLoader``, ``dogpile.cache.util.to_list``. .. changelog:: :version: 0.6.1 :released: Mon Jun 6 2016 .. change:: :tags: bug :tickets: 99 Fixed imports for ``dogpile.core`` restoring ``ReadWriteMutex`` and ``NameRegistry`` into the base namespace, in addition to ``dogpile.core.nameregistry`` and ``dogpile.core.readwrite_lock``. .. changelog:: :version: 0.6.0 :released: Mon Jun 6 2016 .. change:: :tags: feature :tickets: 91 The ``dogpile.core`` library has been rolled in as part of the ``dogpile.cache`` distribution. The configuration of the ``dogpile`` name as a namespace package is also removed from ``dogpile.cache``. In order to allow existing installations of ``dogpile.core`` as a separate package to remain unaffected, the ``.core`` package has been retired within ``dogpile.cache`` directly; the :class:`.Lock` class is now available directly as ``dogpile.Lock`` and the additional ``dogpile.core`` constructs are under the ``dogpile.util`` namespace. Additionally, the long-deprecated ``dogpile.core.Dogpile`` and ``dogpile.core.SyncReaderDogpile`` classes have been removed. .. change:: :tags: bug The Redis backend now creates a copy of the "arguments" dictionary passed to it, before popping values out of it. This prevents the given dictionary from losing its keys. .. change:: :tags: bug :tickets: 97 Fixed bug in "null" backend where :class:`.NullLock` did not accept a flag for the :meth:`.NullLock.acquire` method, nor did it return a boolean value for "success". .. changelog:: :version: 0.5.7 :released: Mon Oct 19 2015 .. change:: :tags: feature :pullreq: 37 :tickets: 54 Added new parameter :paramref:`.GenericMemcachedBackend.lock_timeout`, used in conjunction with :paramref:`.GenericMemcachedBackend.distributed_lock`, will specify the timeout used when communicating to the ``.add()`` method of the memcached client. Pull request courtesy Frits Stegmann and Morgan Fainberg. .. change:: :tags: feature :pullreq: 35 :tickets: 65 Added a new flag :paramref:`.CacheRegion.configure.replace_existing_backend`, allows a region to have a new backend replace an existing one. Pull request courtesy hbccbh. .. change:: :tags: feature, tests :pullreq: 33 Test suite now runs using py.test. Pull request courtesy John Anderson. .. change:: :tags: bug, redis :tickets: 74 Repaired the :meth:`.CacheRegion.get_multi` method when used with a list of zero length against the redis backend. .. changelog:: :version: 0.5.6 :released: Mon Feb 2 2015 .. change:: :tags: feature :pullreq: 30 Changed the pickle protocol for the file/DBM backend to ``pickle.HIGHEST_PROTOCOL`` when producing new pickles, to match that of the redis and memorypickle backends. Pull request courtesy anentropic. .. changelog:: :version: 0.5.5 :released: Wed Jan 21 2015 .. 
change:: :tags: feature :pullreq: 26 Added new arguments :paramref:`.CacheRegion.cache_on_arguments.function_key_generator` and :paramref:`.CacheRegion.cache_multi_on_arguments.function_multi_key_generator` which serve as per-decorator replacements for the region-wide :paramref:`.CacheRegion.function_key_generator` and :paramref:`.CacheRegion.function_multi_key_generator` parameters, respectively, so that custom key production schemes can be applied on a per-function basis within one region. Pull request courtesy Hongbin Lu. .. change:: :tags: bug :tickets: 71 :pullreq: 25 Fixed bug where sending -1 for the :paramref:`.CacheRegion.get_or_create.expiration_time` parameter to :meth:`.CacheRegion.get_or_create` or :meth:`.CacheRegion.get_or_create_multi` would fail to honor the setting as "no expiration time". Pull request courtesy Hongbin Lu. .. change:: :tags: bug :tickets: 41 :pullreq: 28 The ``wrap`` argument is now propagated when calling :meth:`.CacheRegion.configure_from_config`. Pull request courtesy Jonathan Vanasco. .. change:: :tags: bug Fixed tests under py.test, which were importing a symbol from pytest itself ``is_unittest`` which has been removed. .. changelog:: :version: 0.5.4 :released: Sat Jun 14 2014 .. change:: :tags: feature :pullreq: 18 Added new :class:`.NullBackend`, for testing and cache-disabling purposes. Pull request courtesy Wichert Akkerman. .. change:: :tags: bug :pullreq: 19 Added missing Mako test dependency to setup.py. Pull request courtesy Wichert Akkerman. .. change:: :tags: bug :tickets: 58 :pullreq: 20 Fixed bug where calling :meth:`.CacheRegion.get_multi` or :meth:`.CacheRegion.set_multi` with an empty list would cause failures based on backend. Pull request courtesy Wichert Akkerman. .. change:: :tags: feature :pullreq: 17 Added new :paramref:`.RedisBackend.connection_pool` option on the Redis backend; this can be passed a ``redis.ConnectionPool`` instance directly. Pull request courtesy Masayuko. .. change:: :tags: feature :pullreq: 16 Added new :paramref:`.RedisBackend.socket_timeout` option on the Redis backend. Pull request courtesy Saulius Menkevičius. .. change:: :tags: feature Added support for tests to run via py.test. .. change:: :tags: bug :pullreq: 15 Repaired the entry point for Mako templates; the name of the entrypoint itself was wrong vs. what was in the docs, but beyond that the entrypoint would load the wrong module name. Pull request courtesy zoomorph. .. change:: :tags: bug :tickets: 57 :pullreq: 13 The :func:`.coerce_string_conf` function, which is used by :meth:`.Region.configure_from_config`, will now recognize floating point values when parsing conf strings and deliver them as such; this supports non-integer values such as Redis ``lock_sleep``. Pullreq courtesy Jeff Dairiki. .. changelog:: :version: 0.5.3 :released: Wed Jan 8 2014 .. change:: :tags: bug :pullreq: 10 Fixed bug where the key_mangler would get in the way of usage of the async_creation_runner feature within the :meth:`.Region.get_or_create` method, by sending in the mangled key instead of the original key. The "mangled" key is only supposed to be exposed within the backend storage, not the creation function which sends the key back into the :meth:`.Region.set`, which does the mangling itself. Pull request courtesy Ryan Kolak. .. change:: :tags: bug, py3k Fixed bug where the :meth:`.Region.get_multi` method wasn't calling the backend correctly in Py3K (e.g. was passing a destructive ``map()`` object) which would cause this method to fail on the memcached backend. .. 
change:: :tags: feature :tickets: 55 Added a ``get()`` method to complement the ``set()``, ``invalidate()`` and ``refresh()`` methods established on functions decorated by :meth:`.CacheRegion.cache_on_arguments` and :meth:`.CacheRegion.cache_multi_on_arguments`. Pullreq courtesy Eric Hanchrow. .. change:: :tags: feature :tickets: 51 :pullreq: 11 Added a new variant on :class:`.MemoryBackend`, :class:`.MemoryPickleBackend`. This backend applies ``pickle.dumps()`` and ``pickle.loads()`` to cached values upon set and get, so that similar copy-on-cache behavior as that of other backends is employed, guarding cached values against subsequent in-memory state changes. Pullreq courtesy Jonathan Vanasco. .. change:: :tags: bug :pullreq: 9 Fixed a format call in the redis backend which would otherwise fail on Python 2.6; courtesy Jeff Dairiki. .. changelog:: :version: 0.5.2 :released: Fri Nov 15 2013 .. change:: :tags: bug Fixes to routines on Windows, including that default unit tests pass, and an adjustment to the "soft expiration" feature to ensure the expiration works given windows time.time() behavior. .. change:: :tags: bug Added py2.6 compatibility for unsupported ``total_seconds()`` call in region.py .. change:: :tags: feature :tickets: 44 Added a new argument ``lock_factory`` to the :class:`.DBMBackend` implementation. This allows for drop-in replacement of the default :class:`.FileLock` backend, which builds on ``os.flock()`` and only supports Unix platforms. A new abstract base :class:`.AbstractFileLock` has been added to provide a common base for custom lock implementations. The documentation points to an example thread-based rw lock which is now tested on Windows. .. changelog:: :version: 0.5.1 :released: Thu Oct 10 2013 .. change:: :tags: feature :tickets: 38 The :meth:`.CacheRegion.invalidate` method now supports an option ``hard=True|False``. A "hard" invalidation, equivalent to the existing functionality of :meth:`.CacheRegion.invalidate`, means :meth:`.CacheRegion.get_or_create` will not return the "old" value at all, forcing all getters to regenerate or wait for a regeneration. "soft" invalidation means that getters can continue to return the old value until a new one is generated. .. change:: :tags: feature :tickets: 40 New dogpile-specific exception classes have been added, so that issues like "region already configured", "region unconfigured", raise dogpile-specific exceptions. Other exception classes have been made more specific. Also added new accessor :attr:`.CacheRegion.is_configured`. Pullreq courtesy Morgan Fainberg. .. change:: :tags: bug Erroneously missed when the same change was made for ``set()`` in 0.5.0, the Redis backend now uses ``pickle.HIGHEST_PROTOCOL`` for the ``set_multi()`` method as well when producing pickles. Courtesy Łukasz Fidosz. .. change:: :tags: bug, redis, py3k :tickets: 39 Fixed an errant ``u''`` causing incompatibility in Python3.2 in the Redis backend, courtesy Jimmey Mabey. .. change:: :tags: bug The :func:`.util.coerce_string_conf` method now correctly coerces negative integers and those with a leading + sign. This previously prevented configuring a :class:`.CacheRegion` with an ``expiration_time`` of ``'-1'``. Courtesy David Beitey. .. change:: :tags: bug The ``refresh()`` method on :meth:`.CacheRegion.cache_multi_on_arguments` now supports the ``asdict`` flag. .. changelog:: :version: 0.5.0 :released: Fri Jun 21 2013 .. change:: :tags: misc Source repository has been moved to git. .. 
change:: :tags: bug

    The Redis backend now uses ``pickle.HIGHEST_PROTOCOL`` when producing
    pickles.  Courtesy Lx Yu.

.. change:: :tags: bug

    :meth:`.CacheRegion.cache_on_arguments` now has a new argument
    ``to_str``, which defaults to ``str()``.  Can be replaced with
    ``unicode()`` or other functions to support caching of functions that
    accept non-unicode arguments.  Initial patch courtesy Lx Yu.

.. change:: :tags: feature

    Now using the ``Lock`` included with the Python ``redis`` backend, which
    adds ``lock_timeout`` and ``lock_sleep`` arguments to the
    :class:`.RedisBackend`.

.. change:: :tags: feature :tickets: 33, 35

    Added new methods :meth:`.CacheRegion.get_or_create_multi` and
    :meth:`.CacheRegion.cache_multi_on_arguments`, which make use of the
    :meth:`.CacheRegion.get_multi` and similar functions to store and
    retrieve multiple keys at once while maintaining dogpile semantics for
    each.

.. change:: :tags: feature :tickets: 36

    Added a method ``refresh()`` to functions decorated by
    :meth:`.CacheRegion.cache_on_arguments` and
    :meth:`.CacheRegion.cache_multi_on_arguments`, to complement
    ``invalidate()`` and ``set()``.

.. change:: :tags: feature :tickets: 13

    :meth:`.CacheRegion.configure` accepts an optional ``datetime.timedelta``
    object for the ``expiration_time`` argument as well as an integer,
    courtesy Jack Lutz.

.. change:: :tags: feature :tickets: 20

    The ``expiration_time`` argument passed to
    :meth:`.CacheRegion.cache_on_arguments` may be a callable, to return a
    dynamic timeout value.  Courtesy David Beitey.

.. change:: :tags: feature :tickets: 26

    Added support for simple augmentation of existing backends using the
    :class:`.ProxyBackend` class.  Thanks to Tim Hanus for the great effort
    with development, testing, and documentation.

.. change:: :tags: feature :pullreq: 14

    Full support for multivalue get/set/delete added, using
    :meth:`.CacheRegion.get_multi`, :meth:`.CacheRegion.set_multi`,
    :meth:`.CacheRegion.delete_multi`, courtesy Marcos Araujo Sobrinho.

.. change:: :tags: bug :tickets: 27

    Fixed bug where the "name" parameter for :class:`.CacheRegion` was
    ignored entirely.  Courtesy Wichert Akkerman.

.. changelog:: :version: 0.4.3 :released: Thu Apr 4 2013

.. change:: :tags: bug

    Added support for the ``cache_timeout`` Mako argument to the Mako plugin,
    which will pass the value to the ``expiration_time`` argument of
    :meth:`.CacheRegion.get_or_create`.

.. change:: :tags: feature :pullreq: 13

    :meth:`.CacheRegion.get_or_create` and
    :meth:`.CacheRegion.cache_on_arguments` now accept a new argument
    ``should_cache_fn``, which receives the value returned by the "creator"
    and then returns True or False, where True means "cache plus return",
    False means "return the value but don't cache it."

.. changelog:: :version: 0.4.2 :released: Sat Jan 19 2013

.. change:: :tags: feature :pullreq: 10

    An "async creator" function can be specified to :class:`.CacheRegion`
    which allows the "creation" function to be called asynchronously or be
    substituted for another asynchronous creation scheme.  Courtesy Ralph
    Bean.

.. changelog:: :version: 0.4.1 :released: Sat Dec 15 2012

.. change:: :tags: feature :pullreq: 9

    The function decorated by :meth:`.CacheRegion.cache_on_arguments` now
    includes a ``set()`` method, in addition to the existing ``invalidate()``
    method.  Like ``invalidate()``, it accepts a set of function arguments,
    but additionally accepts as the first positional argument a new value to
    place in the cache, to take the place of that key.  Courtesy Antoine
    Bertin.

..
change:: :tags: bug :tickets: 15 Fixed bug in DBM backend whereby if an error occurred during the "write" operation, the file lock, if enabled, would not be released, thereby deadlocking the app. .. change:: :tags: bug :tickets: 12 The :func:`.util.function_key_generator` used by the function decorator no longer coerces non-unicode arguments into a Python unicode object on Python 2.x; this causes failures on backends such as DBM which on Python 2.x apparently require bytestrings. The key_mangler is still needed if actual unicode arguments are being used by the decorated function, however. .. change:: :tags: feature Redis backend now accepts optional "url" argument, will be passed to the new ``StrictRedis.from_url()`` method to determine connection info. Courtesy Jon Rosebaugh. .. change:: :tags: feature Redis backend now accepts optional "password" argument. Courtesy Jon Rosebaugh. .. change:: :tags: feature DBM backend has "fallback" when calling dbm.get() to instead use dictionary access + KeyError, in the case that the "gdbm" backend is used which does not include .get(). Courtesy Jon Rosebaugh. .. changelog:: :version: 0.4.0 :released: Tue Oct 30 2012 .. change:: :tags: bug :tickets: 1 Using dogpile.core 0.4.0 now, fixes a critical bug whereby dogpile pileup could occur on first value get across multiple processes, due to reliance upon a non-shared creation time. This is a dogpile.core issue. .. change:: :tags: bug :tickets: Fixed missing __future__ with_statement directive in region.py. .. changelog:: :version: 0.3.1 :released: Tue Sep 25 2012 .. change:: :tags: bug :tickets: Fixed the mako_cache plugin which was not yet covered, and wasn't implementing the mako plugin API correctly; fixed docs as well. Courtesy Ben Hayden. .. change:: :tags: bug :tickets: Fixed setup so that the tests/* directory isn't yanked into the install. Courtesy Ben Hayden. .. changelog:: :version: 0.3.0 :released: Thu Jun 14 2012 .. change:: :tags: feature :tickets: get() method now checks expiration time by default. Use ignore_expiration=True to bypass this. .. change:: :tags: feature :tickets: 7 Added new invalidate() method. Sets the current timestamp as a minimum value that all retrieved values must be created after. Is honored by the get_or_create() and get() methods. .. change:: :tags: bug :tickets: 8 Fixed bug whereby region.get() didn't work if the value wasn't present. .. changelog:: :version: 0.2.4 :released: .. change:: :tags: :tickets: Fixed py3k issue with config string coerce, courtesy Alexander Fedorov .. changelog:: :version: 0.2.3 :released: Wed May 16 2012 .. change:: :tags: :tickets: 3 support "min_compress_len" and "memcached_expire_time" with python-memcached backend. Tests courtesy Justin Azoff .. change:: :tags: :tickets: 4 Add support for coercion of string config values to Python objects - ints, "false", "true", "None". .. change:: :tags: :tickets: 5 Added support to DBM file lock to allow reentrant access per key within a single thread, so that even though the DBM backend locks for the whole file, a creation function that calls upon a different key in the cache can still proceed. .. change:: :tags: :tickets: Fixed DBM glitch where multiple readers could be serialized. .. change:: :tags: :tickets: Adjust bmemcached backend to work with newly-repaired bmemcached calling API (see bmemcached ef206ed4473fec3b639e). .. changelog:: :version: 0.2.2 :released: Thu Apr 19 2012 .. change:: :tags: :tickets: add Redis backend, courtesy Ollie Rutherfurd .. 
changelog:: :version: 0.2.1 :released: Sun Apr 15 2012

.. change:: :tags: :tickets:

    move tests into tests/cache namespace

.. change:: :tags: :tickets:

    py3k compatibility is in-place now, no 2to3 needed.

.. changelog:: :version: 0.2.0 :released: Sat Apr 14 2012

.. change:: :tags: :tickets:

    Based on dogpile.core now, to get the package namespace thing worked out.

.. changelog:: :version: 0.1.1 :released: Tue Apr 10 2012

.. change:: :tags: :tickets:

    Fixed the configure_from_config() method of region and backend which
    wasn't working.  Courtesy Christian Klinger.

.. changelog:: :version: 0.1.0 :released: Sun Apr 08 2012

.. change:: :tags: :tickets:

    Initial release.

.. change:: :tags: :tickets:

    Includes a pylibmc backend and a plain dictionary backend.

dogpile.cache-0.9.0/docs/_sources/core_usage.rst.txt

============
dogpile Core
============

``dogpile`` provides a locking interface around a "value creation" and
"value retrieval" pair of functions.

.. versionchanged:: 0.6.0

    The ``dogpile`` package encapsulates the functionality that was
    previously provided by the separate ``dogpile.core`` package.

The primary interface is the :class:`.Lock` object, which provides for the
invocation of the creation function by only one thread and/or process at a
time, deferring all other threads/processes to the "value retrieval" function
until the single creation thread is completed.

Do I Need to Learn the dogpile Core API Directly?
=================================================

It's anticipated that most users of ``dogpile`` will be using it indirectly
via the ``dogpile.cache`` caching front-end.  If you fall into this category,
then the short answer is no.

Using the core ``dogpile`` APIs described here directly implies you're
building your own resource-usage system outside, or in addition to, the one
``dogpile.cache`` provides.

Rudimentary Usage
==================

The primary API dogpile provides is the :class:`.Lock` object.  This object
allows for functions that provide mutexing, value creation, as well as value
retrieval.  An example usage is as follows::

    from dogpile import Lock, NeedRegenerationException
    import threading
    import time

    # store a reference to a "resource", some
    # object that is expensive to create.
    the_resource = [None]

    def some_creation_function():
        # call a value creation function
        value = create_some_resource()

        # get creationtime using time.time()
        creationtime = time.time()

        # keep track of the value and creation time in the "cache"
        the_resource[0] = tup = (value, creationtime)

        # return the tuple of (value, creationtime)
        return tup

    def retrieve_resource():
        # function that retrieves the resource and
        # creation time.

        # if no resource, then raise NeedRegenerationException
        if the_resource[0] is None:
            raise NeedRegenerationException()

        # else return the tuple of (value, creationtime)
        return the_resource[0]

    # a mutex, which needs here to be shared across all invocations
    # of this particular creation function
    mutex = threading.Lock()

    with Lock(mutex, some_creation_function, retrieve_resource, 3600) as value:
        # some function that uses
        # the resource.  Won't reach
        # here until some_creation_function()
        # has completed at least once.
        value.do_something()

Above, ``some_creation_function()`` will be called when :class:`.Lock` is
first invoked as a context manager.  The value returned by this function is
then passed into the ``with`` block, where it can be used by application
code.
Concurrent threads which call :class:`.Lock` during this initial period will be blocked until ``some_creation_function()`` completes. Once the creation function has completed successfully the first time, new calls to :class:`.Lock` will call ``retrieve_resource()`` in order to get the current cached value as well as its creation time; if the creation time is older than the current time minus an expiration time of 3600, then ``some_creation_function()`` will be called again, but only by one thread/process, using the given mutex object as a source of synchronization. Concurrent threads/processes which call :class:`.Lock` during this period will fall through, and not be blocked; instead, the "stale" value just returned by ``retrieve_resource()`` will continue to be returned until the creation function has finished. The :class:`.Lock` API is designed to work with simple cache backends like Memcached. It addresses such issues as: * Values can disappear from the cache at any time, before our expiration time is reached. The :class:`.NeedRegenerationException` class is used to alert the :class:`.Lock` object that a value needs regeneration ahead of the usual expiration time. * There's no function in a Memcached-like system to "check" for a key without actually retrieving it. The usage of the ``retrieve_resource()`` function allows that we check for an existing key and also return the existing value, if any, at the same time, without the need for two separate round trips. * The "creation" function used by :class:`.Lock` is expected to store the newly created value in the cache, as well as to return it. This is also more efficient than using two separate round trips to separately store, and re-retrieve, the object. .. _caching_decorator: Example: Using dogpile directly for Caching =========================================== The following example approximates Beaker's "cache decoration" function, to decorate any function and store the value in Memcached. Note that normally, **we'd just use dogpile.cache here**, however for the purposes of example, we'll illustrate how the :class:`.Lock` object is used directly. We create a Python decorator function called ``cached()`` which will provide caching for the output of a single function. 
It's given the "key" which we'd like to use in Memcached, and internally it
makes usage of :class:`.Lock`, along with a thread based mutex (we'll see a
distributed mutex in the next section)::

    import pylibmc
    import threading
    import time
    from dogpile import Lock, NeedRegenerationException

    mc_pool = pylibmc.ThreadMappedPool(pylibmc.Client("localhost"))

    def cached(key, expiration_time):
        """A decorator that will cache the return value of a function
        in memcached given a key."""

        mutex = threading.Lock()

        def get_value():
            with mc_pool.reserve() as mc:
                value_plus_time = mc.get(key)
                if value_plus_time is None:
                    raise NeedRegenerationException()
            # return a tuple (value, createdtime)
            return value_plus_time

        def decorate(fn):
            def gen_cached():
                value = fn()
                with mc_pool.reserve() as mc:
                    # create a tuple (value, createdtime)
                    value_plus_time = (value, time.time())
                    mc.put(key, value_plus_time)
                return value_plus_time

            def invoke():
                with Lock(mutex, gen_cached, get_value, expiration_time) as value:
                    return value
            return invoke

        return decorate

Using the above, we can decorate any function as::

    @cached("some key", 3600)
    def generate_my_expensive_value():
        return slow_database.lookup("stuff")

The :class:`.Lock` object will ensure that only one thread at a time performs
``slow_database.lookup()``, and only every 3600 seconds, unless Memcached has
removed the value, in which case it will be called again as needed.

In particular, dogpile.core's system allows us to call the memcached get()
function at most once per access, instead of Beaker's system which calls it
twice, and doesn't make us call get() when we just created the value.

For the mutex object, we keep a ``threading.Lock`` object that's local to the
decorated function, rather than using a global lock.  This localizes the
in-process locking to be local to this one decorated function.  In the next
section, we'll see the usage of a cross-process lock that accomplishes this
differently.

Using a File or Distributed Lock with Dogpile
==============================================

The examples thus far use a ``threading.Lock()`` object for synchronization.
If our application uses multiple processes, we will want to coordinate
creation operations not just on threads, but on some mutex that other
processes can access.

In this example we'll use a file-based lock as provided by the ``lockfile``
package, which uses a unix-symlink concept to provide a filesystem-level lock
(which also has been made threadsafe).  Another strategy may base itself
directly off the Unix ``os.flock()`` call, or use an NFS-safe file lock like
``flufl.lock``, and still another approach is to lock against a cache server,
using a recipe such as that described at "Using Memcached as a Distributed
Locking Service".

What all of these locking schemes have in common is that unlike the Python
``threading.Lock`` object, they all need access to an actual key which acts
as the symbol that all processes will coordinate upon.  So here, we will also
need to create the "mutex" which we pass to :class:`.Lock` using the ``key``
argument::

    import lockfile
    import os
    from hashlib import sha1

    # ... other imports and setup from the previous example

    def cached(key, expiration_time):
        """A decorator that will cache the return value of a function
        in memcached given a key."""

        lock_path = os.path.join("/tmp", "%s.lock" % sha1(key).hexdigest())

        # ... get_value() from the previous example goes here

        def decorate(fn):
            # ... gen_cached() from the previous example goes here

            def invoke():
                # create an ad-hoc FileLock
                mutex = lockfile.FileLock(lock_path)
                with Lock(mutex, gen_cached, get_value, expiration_time) as value:
                    return value
            return invoke

        return decorate

For a given key "some_key", we generate a hex digest of the key, then use
``lockfile.FileLock()`` to create a lock against the file
``/tmp/53def077a4264bd3183d4eb21b1f56f883e1b572.lock``.  Any number of
:class:`.Lock` objects in various processes will now coordinate with each
other, using this common filename as the "baton" against which creation of a
new value proceeds.

Unlike when we used ``threading.Lock``, the file lock is ultimately locking
on a file, so multiple instances of ``FileLock()`` will all coordinate on
that same file - it's often the case that file locks that rely upon
``flock()`` require non-threaded usage, so a unique filesystem lock per
thread is often a good idea in any case.

dogpile.cache-0.9.0/docs/_sources/front.rst.txt

============
Front Matter
============

Information about the dogpile.cache project.

Project Homepage
================

dogpile.cache is hosted on GitHub at
https://github.com/sqlalchemy/dogpile.cache.

Releases and project status are available on Pypi at
https://pypi.python.org/pypi/dogpile.cache.

The most recent published version of this documentation should be at
https://dogpilecache.sqlalchemy.org.

Installation
============

Install released versions of dogpile.cache from the Python package index with
pip or a similar tool::

    pip install dogpile.cache

Bugs
====

Bugs and feature enhancements to dogpile.cache should be reported on the
`GitHub issue tracker
<https://github.com/sqlalchemy/dogpile.cache/issues/>`_.

dogpile.cache-0.9.0/docs/_sources/index.rst.txt

==========================================
Welcome to dogpile.cache's documentation!
==========================================

Dogpile consists of two subsystems, one building on top of the other.

``dogpile`` provides the concept of a "dogpile lock", a control structure
which allows a single thread of execution to be selected as the "creator" of
some resource, while allowing other threads of execution to refer to the
previous version of this resource as the creation proceeds; if there is no
previous version, then those threads block until the object is available.

``dogpile.cache`` is a caching API which provides a generic interface to
caching backends of any variety, and additionally provides API hooks which
integrate these cache backends with the locking mechanism of ``dogpile``.

New backends are very easy to create and use; users are encouraged to adapt
the provided backends for their own needs, as high volume caching requires
lots of tweaks and adjustments specific to an application and its
environment.

.. toctree::
    :maxdepth: 2

    front
    usage
    recipes
    core_usage
    api
    changelog

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

dogpile.cache-0.9.0/docs/_sources/recipes.rst.txt

Recipes
=======

Invalidating a group of related keys
-------------------------------------

This recipe presents a way to track the cache keys related to a particular
region, for the purposes of invalidating a series of keys that relate to a
particular id.
Three cached functions, ``user_fn_one()``, ``user_fn_two()``,
``user_fn_three()`` each perform a different function based on a ``user_id``
integer value.  The region applied to cache them uses a custom key generator
which tracks each cache key generated, pulling out the integer "id" and
replacing with a template.

When all three functions have been called, the key generator is now aware of
these three keys: ``user_fn_one_%d``, ``user_fn_two_%d``, and
``user_fn_three_%d``.  The ``invalidate_user_id()`` function then knows that
for a particular ``user_id``, it needs to hit all three of those keys in
order to invalidate everything having to do with that id.

::

    from dogpile.cache import make_region
    from itertools import count

    user_keys = set()

    def my_key_generator(namespace, fn):
        fname = fn.__name__

        def generate_key(*arg):
            # generate a key template:
            # "fname_%d_arg1_arg2_arg3..."
            key_template = fname + "_" + \
                "%d" + \
                "_".join(str(s) for s in arg[1:])

            # store key template
            user_keys.add(key_template)

            # return cache key
            user_id = arg[0]
            return key_template % user_id

        return generate_key

    def invalidate_user_id(region, user_id):
        for key in user_keys:
            region.delete(key % user_id)

    region = make_region(
        function_key_generator=my_key_generator
    ).configure(
        "dogpile.cache.memory"
    )

    counter = count()

    @region.cache_on_arguments()
    def user_fn_one(user_id):
        return "user fn one: %d, %d" % (next(counter), user_id)

    @region.cache_on_arguments()
    def user_fn_two(user_id):
        return "user fn two: %d, %d" % (next(counter), user_id)

    @region.cache_on_arguments()
    def user_fn_three(user_id):
        return "user fn three: %d, %d" % (next(counter), user_id)

    print(user_fn_one(5))
    print(user_fn_two(5))
    print(user_fn_three(7))
    print(user_fn_two(7))

    invalidate_user_id(region, 5)
    print("invalidated:")
    print(user_fn_one(5))
    print(user_fn_two(5))
    print(user_fn_three(7))
    print(user_fn_two(7))

Asynchronous Data Updates with ORM Events
-----------------------------------------

This recipe presents one technique of optimistically pushing new data into
the cache when an update is sent to a database.

Using SQLAlchemy for database querying, suppose a simple cache-decorated
function returns the results of a database query::

    @region.cache_on_arguments()
    def get_some_data(argument):
        # query database to get data
        data = Session().query(DBClass).filter(DBClass.argument == argument).all()
        return data

We would like this particular function to be re-queried when the data has
changed.  We could call ``get_some_data.invalidate(argument, hard=False)`` at
the point at which the data changes, however this only leads to the
invalidation of the old value; a new value is not generated until the next
call, and also means at least one client has to block while the new value is
generated.

We could also call ``get_some_data.refresh(argument)``, which would perform
the data refresh at that moment, but then the writer is delayed by the
re-query.

A third variant is to instead offload the work of refreshing for this query
into a background thread or process.  This can be achieved using a system
such as the :paramref:`.CacheRegion.async_creation_runner`.  However, an
expedient approach for smaller use cases is to link cache refresh operations
to the ORM session's commit, as below::

    from threading import Thread

    from sqlalchemy import event
    from sqlalchemy.orm import Session

    def cache_refresh(session, refresher, *args, **kwargs):
        """
        Refresh the function's cache data in a new thread.

        Starts refreshing only after the session was committed so all
        database data is available.
        """
        assert isinstance(session, Session), \
            "Need a session, not a sessionmaker or scoped_session"

        @event.listens_for(session, "after_commit")
        def do_refresh(session):
            t = Thread(target=refresher, args=args, kwargs=kwargs)
            t.daemon = True
            t.start()

Within a sequence of data persistence, ``cache_refresh`` can be called given
a particular SQLAlchemy ``Session`` and a callable to do the work::

    def add_new_data(session, argument):
        # add some data
        session.add(something_new(argument))

        # add a hook to refresh after the Session is committed.
        cache_refresh(session, get_some_data.refresh, argument)

Note that the event to refresh the data is associated with the ``Session``
being used for persistence; however, the actual refresh operation is called
with a **different** ``Session``, typically one that is local to the refresh
operation, either through a thread-local registry or via direct
instantiation.

Prefixing all keys in Redis
---------------------------

If you use a redis instance as backend that contains other keys besides the
ones set by dogpile.cache, it is a good idea to uniquely prefix all
dogpile.cache keys, to avoid potential collisions with keys set by your own
code.  This can easily be done using a key mangler function::

    from dogpile.cache import make_region

    region = make_region(
        key_mangler=lambda key: "myapp:dogpile:" + key
    )

Encoding/Decoding data into another format
------------------------------------------

.. sidebar:: A Note on Data Encoding

    Under the hood, dogpile.cache wraps cached data in an instance of
    ``dogpile.cache.api.CachedValue`` and then pickles that data for storage
    along with some bookkeeping metadata.  If you implement a ProxyBackend
    to encode/decode data, that transformation will happen on the
    pre-pickled data; dogpile does not store the data 'raw' and will still
    pass a pickled payload to the backend.  This behavior can negate the
    hopeful improvements of some encoding schemes.

Since dogpile is managing cached data, you may be concerned with the size of
your payloads.  A possible method of helping minimize payloads is to use a
ProxyBackend to recode the data on-the-fly or otherwise transform data as it
enters or leaves persistent storage.

In the example below, we define 2 classes to implement msgpack encoding.
Msgpack (http://msgpack.org/) is a serialization format that works
exceptionally well with json-like data and can serialize nested dicts into a
much smaller payload than Python's own pickle.

``_EncodedProxy`` is our base class for building data encoders, and inherits
from dogpile's own ``ProxyBackend``.  You could just use one class.  This
class passes 4 of the main ``key/value`` functions into a configurable
decoder and encoder.

The ``MsgpackProxy`` class simply inherits from ``_EncodedProxy`` and
implements the necessary ``value_decode`` and ``value_encode`` functions.
Encoded ProxyBackend Example::

    from dogpile.cache.api import CachedValue, NO_VALUE
    from dogpile.cache.proxy import ProxyBackend
    import msgpack

    class _EncodedProxy(ProxyBackend):
        """base class for building value-mangling proxies"""

        def value_decode(self, value):
            raise NotImplementedError("override me")

        def value_encode(self, value):
            raise NotImplementedError("override me")

        def set(self, k, v):
            v = self.value_encode(v)
            self.proxied.set(k, v)

        def get(self, key):
            v = self.proxied.get(key)
            return self.value_decode(v)

        def set_multi(self, mapping):
            """encode to a new dict to preserve unencoded values in-place
            when called by `get_or_create_multi`
            """
            mapping_set = {}
            for (k, v) in mapping.items():
                mapping_set[k] = self.value_encode(v)
            return self.proxied.set_multi(mapping_set)

        def get_multi(self, keys):
            results = self.proxied.get_multi(keys)
            translated = []
            for record in results:
                try:
                    translated.append(self.value_decode(record))
                except Exception as e:
                    raise
            return translated

    class MsgpackProxy(_EncodedProxy):
        """custom decode/encode for value mangling"""

        def value_decode(self, v):
            if not v or v is NO_VALUE:
                return NO_VALUE
            # you probably want to specify a custom decoder via `object_hook`
            v = msgpack.unpackb(v, encoding="utf-8")
            return CachedValue(*v)

        def value_encode(self, v):
            # you probably want to specify a custom encoder via `default`
            v = msgpack.packb(v, use_bin_type=True)
            return v

    # extend our region configuration from above with a 'wrap'
    region = make_region().configure(
        'dogpile.cache.pylibmc',
        expiration_time = 3600,
        arguments = {
            'url': ["127.0.0.1"],
        },
        wrap = [MsgpackProxy, ]
    )

dogpile.cache-0.9.0/docs/_sources/usage.rst.txt

============
Usage Guide
============

Overview
========

At the time of this writing, popular key/value servers include `Memcached
<https://memcached.org>`_, `Redis <https://redis.io>`_ and many others.
While these tools all have different usage focuses, they all have in common
that the storage model is based on the retrieval of a value based on a key;
as such, they are all potentially suitable for caching, particularly
Memcached which is first and foremost designed for caching.

With a caching system in mind, dogpile.cache provides an interface to a
particular Python API targeted at that system.

A dogpile.cache configuration consists of the following components:

* A *region*, which is an instance of :class:`.CacheRegion`, and defines the
  configuration details for a particular cache backend.  The
  :class:`.CacheRegion` can be considered the "front end" used by
  applications.
* A *backend*, which is an instance of :class:`.CacheBackend`, describing
  how values are stored and retrieved from a backend.  This interface
  specifies only :meth:`~.CacheBackend.get`, :meth:`~.CacheBackend.set` and
  :meth:`~.CacheBackend.delete`.  The actual kind of :class:`.CacheBackend`
  in use for a particular :class:`.CacheRegion` is determined by the
  underlying Python API being used to talk to the cache, such as Pylibmc.
  The :class:`.CacheBackend` is instantiated behind the scenes and not
  directly accessed by applications under normal circumstances.
* Value generation functions.  These are user-defined functions that
  generate new values to be placed in the cache.  While dogpile.cache offers
  the usual "set" approach of placing data into the cache, the usual mode of
  usage is to only instruct it to "get" a value, passing it a *creation
  function* which will be used to generate a new value if and only if one is
  needed.
This "get-or-create" pattern is the entire key to the "Dogpile" system, which coordinates a single value creation operation among many concurrent get operations for a particular key, eliminating the issue of an expired value being redundantly re-generated by many workers simultaneously. Rudimentary Usage ================= dogpile.cache includes a Pylibmc backend. A basic configuration looks like:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.pylibmc', expiration_time = 3600, arguments = { 'url': ["127.0.0.1"], } ) @region.cache_on_arguments() def load_user_info(user_id): return some_database.lookup_user_by_id(user_id) .. sidebar:: pylibmc In this section, we're illustrating Memcached usage using the `pylibmc `_ backend, which is a high performing Python library for Memcached. It can be compared to the `python-memcached `_ client, which is also an excellent product. Pylibmc is written against Memcached's native API so is markedly faster, though might be considered to have rougher edges. The API is actually a bit more verbose to allow for correct multithreaded usage. Above, we create a :class:`.CacheRegion` using the :func:`.make_region` function, then apply the backend configuration via the :meth:`.CacheRegion.configure` method, which returns the region. The name of the backend is the only argument required by :meth:`.CacheRegion.configure` itself, in this case ``dogpile.cache.pylibmc``. However, in this specific case, the ``pylibmc`` backend also requires that the URL of the memcached server be passed within the ``arguments`` dictionary. The configuration is separated into two sections. Upon construction via :func:`.make_region`, the :class:`.CacheRegion` object is available, typically at module import time, for usage in decorating functions. Additional configuration details passed to :meth:`.CacheRegion.configure` are typically loaded from a configuration file and therefore not necessarily available until runtime, hence the two-step configurational process. Key arguments passed to :meth:`.CacheRegion.configure` include *expiration_time*, which is the expiration time passed to the Dogpile lock, and *arguments*, which are arguments used directly by the backend - in this case we are using arguments that are passed directly to the pylibmc module. Region Configuration ==================== The :func:`.make_region` function currently calls the :class:`.CacheRegion` constructor directly. .. autoclass:: dogpile.cache.region.CacheRegion :noindex: One you have a :class:`.CacheRegion`, the :meth:`.CacheRegion.cache_on_arguments` method can be used to decorate functions, but the cache itself can't be used until :meth:`.CacheRegion.configure` is called. The interface for that method is as follows: .. automethod:: dogpile.cache.region.CacheRegion.configure :noindex: The :class:`.CacheRegion` can also be configured from a dictionary, using the :meth:`.CacheRegion.configure_from_config` method: .. automethod:: dogpile.cache.region.CacheRegion.configure_from_config :noindex: Using a Region ============== The :class:`.CacheRegion` object is our front-end interface to a cache. It includes the following methods: .. automethod:: dogpile.cache.region.CacheRegion.get :noindex: .. automethod:: dogpile.cache.region.CacheRegion.get_or_create :noindex: .. automethod:: dogpile.cache.region.CacheRegion.set :noindex: .. automethod:: dogpile.cache.region.CacheRegion.delete :noindex: .. automethod:: dogpile.cache.region.CacheRegion.cache_on_arguments :noindex: .. 
.. _creating_backends:

Creating Backends
=================

Backends are located using the setuptools entrypoint system.  To make life
easier for writers of ad-hoc backends, a helper function is included which
registers any backend in the same way as if it were part of the existing
sys.path.

For example, to create a backend called ``DictionaryBackend``, we subclass
:class:`.CacheBackend`::

    from dogpile.cache.api import CacheBackend, NO_VALUE

    class DictionaryBackend(CacheBackend):
        def __init__(self, arguments):
            self.cache = {}

        def get(self, key):
            return self.cache.get(key, NO_VALUE)

        def set(self, key, value):
            self.cache[key] = value

        def delete(self, key):
            # use a default so that deleting an absent key is a no-op
            self.cache.pop(key, None)

Then make sure the class is available underneath the entrypoint
``dogpile.cache``.  If we did this in a ``setup.py`` file, it would be in
``setup()`` as::

    entry_points="""
      [dogpile.cache]
      dictionary = mypackage.mybackend:DictionaryBackend
      """

Alternatively, if we want to register the plugin in the same process space
without bothering to install anything, we can use ``register_backend``::

    from dogpile.cache import register_backend

    register_backend("dictionary", "mypackage.mybackend", "DictionaryBackend")

Our new backend would be usable in a region like this::

    from dogpile.cache import make_region

    region = make_region("myregion")

    region.configure("dictionary")

    region.set("somekey", "somevalue")

The values which the backend receives here are instances of
``CachedValue``.  This is a tuple subclass of length two, of the form::

    (payload, metadata)

Where "payload" is the thing being cached, and "metadata" is information we
store in the cache - a dictionary which currently has just the "creation
time" and a "version identifier" as key/values.  If the cache backend
requires serialization, pickle or similar can be used on the tuple - the
"metadata" portion will always be a small and easily serializable Python
structure.

.. _changing_backend_behavior:

Changing Backend Behavior
=========================

The :class:`.ProxyBackend` is a decorator class provided to easily augment
existing backend behavior without having to extend the original class.
Using a decorator class is also advantageous as it allows us to share the
altered behavior between different backends.

Proxies are added to the :class:`.CacheRegion` object using the
:meth:`.CacheRegion.configure` method.  Only the overridden methods need
to be specified and the real backend can be accessed with the
``self.proxied`` object from inside the :class:`.ProxyBackend`.

For example, a simple class to log all calls to ``.set()`` would look like
this::

    from dogpile.cache.proxy import ProxyBackend

    import logging
    log = logging.getLogger(__name__)

    class LoggingProxy(ProxyBackend):
        def set(self, key, value):
            log.debug('Setting Cache Key: %s' % key)
            self.proxied.set(key, value)

:class:`.ProxyBackend` can be configured to optionally take arguments (as
long as the :meth:`.ProxyBackend.__init__` method is called properly,
either directly or via ``super()``).  In the example below, the
``RetryDeleteProxy`` class accepts a ``retry_count`` parameter on
initialization.
In the event of an exception on delete(), it will retry this many times
before returning::

    from dogpile.cache.proxy import ProxyBackend

    class RetryDeleteProxy(ProxyBackend):
        def __init__(self, retry_count=5):
            super(RetryDeleteProxy, self).__init__()
            self.retry_count = retry_count

        def delete(self, key):
            retries = self.retry_count
            while retries > 0:
                retries -= 1
                try:
                    self.proxied.delete(key)
                    return
                except Exception:
                    # swallow the error and try again; after retry_count
                    # attempts, give up silently
                    pass

The ``wrap`` parameter of the :meth:`.CacheRegion.configure` method
accepts a list which can contain any combination of instantiated proxy
objects as well as uninstantiated proxy classes.  Putting the two examples
above together would look like this::

    from dogpile.cache import make_region

    retry_proxy = RetryDeleteProxy(5)

    region = make_region().configure(
        'dogpile.cache.pylibmc',
        expiration_time = 3600,
        arguments = {
            'url':["127.0.0.1"],
        },
        wrap = [ LoggingProxy, retry_proxy ]
    )

In the above example, the ``LoggingProxy`` object would be instantiated by
the :class:`.CacheRegion` and applied to wrap requests on behalf of the
``retry_proxy`` instance; that proxy in turn wraps requests on behalf of
the original dogpile.cache.pylibmc backend.

.. versionadded:: 0.4.4  Added support for the :class:`.ProxyBackend`
   class.

Configuring Logging
====================

.. versionadded:: 0.9.0

:class:`.CacheRegion` includes logging facilities that will emit debug log
messages when key cache events occur, including when keys are regenerated
as well as when hard invalidations occur.  Using the `Python logging
<https://docs.python.org/3/library/logging.html>`_ module, set the log
level of the ``dogpile.cache`` logger to ``logging.DEBUG``::

    logging.basicConfig()
    logging.getLogger("dogpile.cache").setLevel(logging.DEBUG)

Debug logging will indicate time spent regenerating keys as well as when
keys are missing::

    DEBUG:dogpile.cache.region:No value present for key: '__main__:load_user_info|2'
    DEBUG:dogpile.cache.region:No value present for key: '__main__:load_user_info|1'
    DEBUG:dogpile.cache.region:Cache value generated in 0.501 seconds for keys: ['__main__:load_user_info|2', '__main__:load_user_info|3', '__main__:load_user_info|4', '__main__:load_user_info|5']
    DEBUG:dogpile.cache.region:Hard invalidation detected for key: '__main__:load_user_info|3'
    DEBUG:dogpile.cache.region:Hard invalidation detected for key: '__main__:load_user_info|2'
dogpile.cache-0.9.0/docs/_static/0000775000175000017500000000000013555610710017654 5ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/_static/basic.css0000664000175000017500000002761313555610710021460 0ustar classicclassic00000000000000/* * basic.css * ~~~~~~~~~ * * Sphinx stylesheet -- basic theme. * * :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details.
* */ /* -- main layout ----------------------------------------------------------- */ div.clearer { clear: both; } /* -- relbar ---------------------------------------------------------------- */ div.related { width: 100%; font-size: 90%; } div.related h3 { display: none; } div.related ul { margin: 0; padding: 0 0 0 10px; list-style: none; } div.related li { display: inline; } div.related li.right { float: right; margin-right: 5px; } /* -- sidebar --------------------------------------------------------------- */ div.sphinxsidebarwrapper { padding: 10px 5px 0 10px; } div.sphinxsidebar { float: left; width: 230px; margin-left: -100%; font-size: 90%; word-wrap: break-word; overflow-wrap : break-word; } div.sphinxsidebar ul { list-style: none; } div.sphinxsidebar ul ul, div.sphinxsidebar ul.want-points { margin-left: 20px; list-style: square; } div.sphinxsidebar ul ul { margin-top: 0; margin-bottom: 0; } div.sphinxsidebar form { margin-top: 10px; } div.sphinxsidebar input { border: 1px solid #98dbcc; font-family: sans-serif; font-size: 1em; } div.sphinxsidebar #searchbox form.search { overflow: hidden; } div.sphinxsidebar #searchbox input[type="text"] { float: left; width: 80%; padding: 0.25em; box-sizing: border-box; } div.sphinxsidebar #searchbox input[type="submit"] { float: left; width: 20%; border-left: none; padding: 0.25em; box-sizing: border-box; } img { border: 0; max-width: 100%; } /* -- search page ----------------------------------------------------------- */ ul.search { margin: 10px 0 0 20px; padding: 0; } ul.search li { padding: 5px 0 5px 20px; background-image: url(file.png); background-repeat: no-repeat; background-position: 0 7px; } ul.search li a { font-weight: bold; } ul.search li div.context { color: #888; margin: 2px 0 0 30px; text-align: left; } ul.keywordmatches li.goodmatch a { font-weight: bold; } /* -- index page ------------------------------------------------------------ */ table.contentstable { width: 90%; margin-left: auto; margin-right: auto; } table.contentstable p.biglink { line-height: 150%; } a.biglink { font-size: 1.3em; } span.linkdescr { font-style: italic; padding-top: 5px; font-size: 90%; } /* -- general index --------------------------------------------------------- */ table.indextable { width: 100%; } table.indextable td { text-align: left; vertical-align: top; } table.indextable ul { margin-top: 0; margin-bottom: 0; list-style-type: none; } table.indextable > tbody > tr > td > ul { padding-left: 0em; } table.indextable tr.pcap { height: 10px; } table.indextable tr.cap { margin-top: 10px; background-color: #f2f2f2; } img.toggler { margin-right: 3px; margin-top: 3px; cursor: pointer; } div.modindex-jumpbox { border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 1em 0 1em 0; padding: 0.4em; } div.genindex-jumpbox { border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 1em 0 1em 0; padding: 0.4em; } /* -- domain module index --------------------------------------------------- */ table.modindextable td { padding: 2px; border-collapse: collapse; } /* -- general body styles --------------------------------------------------- */ div.body { min-width: 450px; max-width: 800px; } div.body p, div.body dd, div.body li, div.body blockquote { -moz-hyphens: auto; -ms-hyphens: auto; -webkit-hyphens: auto; hyphens: auto; } a.headerlink { visibility: hidden; } a.brackets:before, span.brackets > a:before{ content: "["; } a.brackets:after, span.brackets > a:after { content: "]"; } h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > 
a.headerlink, h4:hover > a.headerlink, h5:hover > a.headerlink, h6:hover > a.headerlink, dt:hover > a.headerlink, caption:hover > a.headerlink, p.caption:hover > a.headerlink, div.code-block-caption:hover > a.headerlink { visibility: visible; } div.body p.caption { text-align: inherit; } div.body td { text-align: left; } .first { margin-top: 0 !important; } p.rubric { margin-top: 30px; font-weight: bold; } img.align-left, .figure.align-left, object.align-left { clear: left; float: left; margin-right: 1em; } img.align-right, .figure.align-right, object.align-right { clear: right; float: right; margin-left: 1em; } img.align-center, .figure.align-center, object.align-center { display: block; margin-left: auto; margin-right: auto; } img.align-default, .figure.align-default { display: block; margin-left: auto; margin-right: auto; } .align-left { text-align: left; } .align-center { text-align: center; } .align-default { text-align: center; } .align-right { text-align: right; } /* -- sidebars -------------------------------------------------------------- */ div.sidebar { margin: 0 0 0.5em 1em; border: 1px solid #ddb; padding: 7px 7px 0 7px; background-color: #ffe; width: 40%; float: right; } p.sidebar-title { font-weight: bold; } /* -- topics ---------------------------------------------------------------- */ div.topic { border: 1px solid #ccc; padding: 7px 7px 0 7px; margin: 10px 0 10px 0; } p.topic-title { font-size: 1.1em; font-weight: bold; margin-top: 10px; } /* -- admonitions ----------------------------------------------------------- */ div.admonition { margin-top: 10px; margin-bottom: 10px; padding: 7px; } div.admonition dt { font-weight: bold; } div.admonition dl { margin-bottom: 0; } p.admonition-title { margin: 0px 10px 5px 0px; font-weight: bold; } div.body p.centered { text-align: center; margin-top: 25px; } /* -- tables ---------------------------------------------------------------- */ table.docutils { border: 0; border-collapse: collapse; } table.align-center { margin-left: auto; margin-right: auto; } table.align-default { margin-left: auto; margin-right: auto; } table caption span.caption-number { font-style: italic; } table caption span.caption-text { } table.docutils td, table.docutils th { padding: 1px 8px 1px 5px; border-top: 0; border-left: 0; border-right: 0; border-bottom: 1px solid #aaa; } table.footnote td, table.footnote th { border: 0 !important; } th { text-align: left; padding-right: 5px; } table.citation { border-left: solid 1px gray; margin-left: 1px; } table.citation td { border-bottom: none; } th > p:first-child, td > p:first-child { margin-top: 0px; } th > p:last-child, td > p:last-child { margin-bottom: 0px; } /* -- figures --------------------------------------------------------------- */ div.figure { margin: 0.5em; padding: 0.5em; } div.figure p.caption { padding: 0.3em; } div.figure p.caption span.caption-number { font-style: italic; } div.figure p.caption span.caption-text { } /* -- field list styles ----------------------------------------------------- */ table.field-list td, table.field-list th { border: 0 !important; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } .field-name { -moz-hyphens: manual; -ms-hyphens: manual; -webkit-hyphens: manual; hyphens: manual; } /* -- hlist styles ---------------------------------------------------------- */ table.hlist td { vertical-align: top; } /* -- other body styles ----------------------------------------------------- */ ol.arabic { list-style: decimal; } ol.loweralpha { list-style: 
lower-alpha; } ol.upperalpha { list-style: upper-alpha; } ol.lowerroman { list-style: lower-roman; } ol.upperroman { list-style: upper-roman; } li > p:first-child { margin-top: 0px; } li > p:last-child { margin-bottom: 0px; } dl.footnote > dt, dl.citation > dt { float: left; } dl.footnote > dd, dl.citation > dd { margin-bottom: 0em; } dl.footnote > dd:after, dl.citation > dd:after { content: ""; clear: both; } dl.field-list { display: grid; grid-template-columns: fit-content(30%) auto; } dl.field-list > dt { font-weight: bold; word-break: break-word; padding-left: 0.5em; padding-right: 5px; } dl.field-list > dt:after { content: ":"; } dl.field-list > dd { padding-left: 0.5em; margin-top: 0em; margin-left: 0em; margin-bottom: 0em; } dl { margin-bottom: 15px; } dd > p:first-child { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, span.highlighted { background-color: #fbe54e; } rect.highlighted { fill: #fbe54e; } dl.glossary dt { font-weight: bold; font-size: 1.1em; } .optional { font-size: 1.3em; } .sig-paren { font-size: larger; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa; } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } .guilabel, .menuselection { font-family: sans-serif; } .accelerator { text-decoration: underline; } .classifier { font-style: oblique; } .classifier:before { font-style: normal; margin: 0.5em; content: ":"; } abbr, acronym { border-bottom: dotted 1px; cursor: help; } /* -- code displays --------------------------------------------------------- */ pre { overflow: auto; overflow-y: hidden; /* fixes display issues on Chrome browsers */ } span.pre { -moz-hyphens: none; -ms-hyphens: none; -webkit-hyphens: none; hyphens: none; } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } div.code-block-caption { padding: 2px 5px; font-size: small; } div.code-block-caption code { background-color: transparent; } div.code-block-caption + div > div.highlight > pre { margin-top: 0; } div.code-block-caption span.caption-number { padding: 0.1em 0.3em; font-style: italic; } div.code-block-caption span.caption-text { } div.literal-block-wrapper { padding: 1em 1em 0; } div.literal-block-wrapper div.highlight { margin: 0; } code.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; } code.descclassname { background-color: transparent; } code.xref, a code { background-color: transparent; font-weight: bold; } h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { background-color: transparent; } .viewcode-link { float: right; } .viewcode-back { float: right; font-family: sans-serif; } div.viewcode-block:target { margin: -1px -10px; padding: 0 10px; } /* -- math display ---------------------------------------------------------- */ img.math { vertical-align: middle; } div.body div.math p { text-align: center; } span.eqno { float: right; } span.eqno a.headerlink { position: relative; left: 0px; z-index: 1; } div.math:hover a.headerlink { visibility: visible; } /* -- printout stylesheet --------------------------------------------------- */ @media print { div.document, div.documentwrapper, div.bodywrapper { margin: 0 !important; width: 
100%; } div.sphinxsidebar, div.related, div.footer, #top-link { display: none; } }dogpile.cache-0.9.0/docs/_static/changelog.css0000664000175000017500000000014113555610710022311 0ustar classicclassic00000000000000a.changeset-link { visibility: hidden; } li:hover a.changeset-link { visibility: visible; } dogpile.cache-0.9.0/docs/_static/doctools.js0000664000175000017500000002206613555610710022046 0ustar classicclassic00000000000000/* * doctools.js * ~~~~~~~~~~~ * * Sphinx JavaScript utilities for all documentation. * * :copyright: Copyright 2007-2019 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ /** * select a different prefix for underscore */ $u = _.noConflict(); /** * make the code below compatible with browsers without * an installed firebug like debugger if (!window.console || !console.firebug) { var names = ["log", "debug", "info", "warn", "error", "assert", "dir", "dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace", "profile", "profileEnd"]; window.console = {}; for (var i = 0; i < names.length; ++i) window.console[names[i]] = function() {}; } */ /** * small helper function to urldecode strings */ jQuery.urldecode = function(x) { return decodeURIComponent(x).replace(/\+/g, ' '); }; /** * small helper function to urlencode strings */ jQuery.urlencode = encodeURIComponent; /** * This function returns the parsed url parameters of the * current request. Multiple values per key are supported, * it will always return arrays of strings for the value parts. */ jQuery.getQueryParameters = function(s) { if (typeof s === 'undefined') s = document.location.search; var parts = s.substr(s.indexOf('?') + 1).split('&'); var result = {}; for (var i = 0; i < parts.length; i++) { var tmp = parts[i].split('=', 2); var key = jQuery.urldecode(tmp[0]); var value = jQuery.urldecode(tmp[1]); if (key in result) result[key].push(value); else result[key] = [value]; } return result; }; /** * highlight a given string on a jquery object by wrapping it in * span elements with the given class name. 
*/ jQuery.fn.highlightText = function(text, className) { function highlight(node, addItems) { if (node.nodeType === 3) { var val = node.nodeValue; var pos = val.toLowerCase().indexOf(text); if (pos >= 0 && !jQuery(node.parentNode).hasClass(className) && !jQuery(node.parentNode).hasClass("nohighlight")) { var span; var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); if (isInSVG) { span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); } else { span = document.createElement("span"); span.className = className; } span.appendChild(document.createTextNode(val.substr(pos, text.length))); node.parentNode.insertBefore(span, node.parentNode.insertBefore( document.createTextNode(val.substr(pos + text.length)), node.nextSibling)); node.nodeValue = val.substr(0, pos); if (isInSVG) { var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); var bbox = node.parentElement.getBBox(); rect.x.baseVal.value = bbox.x; rect.y.baseVal.value = bbox.y; rect.width.baseVal.value = bbox.width; rect.height.baseVal.value = bbox.height; rect.setAttribute('class', className); addItems.push({ "parent": node.parentNode, "target": rect}); } } } else if (!jQuery(node).is("button, select, textarea")) { jQuery.each(node.childNodes, function() { highlight(this, addItems); }); } } var addItems = []; var result = this.each(function() { highlight(this, addItems); }); for (var i = 0; i < addItems.length; ++i) { jQuery(addItems[i].parent).before(addItems[i].target); } return result; }; /* * backward compatibility for jQuery.browser * This will be supported until firefox bug is fixed. */ if (!jQuery.browser) { jQuery.uaMatch = function(ua) { ua = ua.toLowerCase(); var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || /(webkit)[ \/]([\w.]+)/.exec(ua) || /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || /(msie) ([\w.]+)/.exec(ua) || ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || []; return { browser: match[ 1 ] || "", version: match[ 2 ] || "0" }; }; jQuery.browser = {}; jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; } /** * Small JavaScript module for the documentation. */ var Documentation = { init : function() { this.fixFirefoxAnchorBug(); this.highlightSearchWords(); this.initIndexTable(); if (DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) { this.initOnKeyListeners(); } }, /** * i18n support */ TRANSLATIONS : {}, PLURAL_EXPR : function(n) { return n === 1 ? 0 : 1; }, LOCALE : 'unknown', // gettext and ngettext don't access this so that the functions // can safely bound to a different name (_ = Documentation.gettext) gettext : function(string) { var translated = Documentation.TRANSLATIONS[string]; if (typeof translated === 'undefined') return string; return (typeof translated === 'string') ? translated : translated[0]; }, ngettext : function(singular, plural, n) { var translated = Documentation.TRANSLATIONS[singular]; if (typeof translated === 'undefined') return (n == 1) ? singular : plural; return translated[Documentation.PLURALEXPR(n)]; }, addTranslations : function(catalog) { for (var key in catalog.messages) this.TRANSLATIONS[key] = catalog.messages[key]; this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')'); this.LOCALE = catalog.locale; }, /** * add context elements like header anchor links */ addContextElements : function() { $('div[id] > :header:first').each(function() { $('\u00B6'). attr('href', '#' + this.id). attr('title', _('Permalink to this headline')). 
appendTo(this); }); $('dt[id]').each(function() { $('\u00B6'). attr('href', '#' + this.id). attr('title', _('Permalink to this definition')). appendTo(this); }); }, /** * workaround a firefox stupidity * see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075 */ fixFirefoxAnchorBug : function() { if (document.location.hash && $.browser.mozilla) window.setTimeout(function() { document.location.href += ''; }, 10); }, /** * highlight the search words provided in the url in the text */ highlightSearchWords : function() { var params = $.getQueryParameters(); var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : []; if (terms.length) { var body = $('div.body'); if (!body.length) { body = $('body'); } window.setTimeout(function() { $.each(terms, function() { body.highlightText(this.toLowerCase(), 'highlighted'); }); }, 10); $('') .appendTo($('#searchbox')); } }, /** * init the domain index toggle buttons */ initIndexTable : function() { var togglers = $('img.toggler').click(function() { var src = $(this).attr('src'); var idnum = $(this).attr('id').substr(7); $('tr.cg-' + idnum).toggle(); if (src.substr(-9) === 'minus.png') $(this).attr('src', src.substr(0, src.length-9) + 'plus.png'); else $(this).attr('src', src.substr(0, src.length-8) + 'minus.png'); }).css('display', ''); if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) { togglers.click(); } }, /** * helper function to hide the search marks again */ hideSearchWords : function() { $('#searchbox .highlight-link').fadeOut(300); $('span.highlighted').removeClass('highlighted'); }, /** * make the url absolute */ makeURL : function(relativeURL) { return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL; }, /** * get the current relative url */ getCurrentURL : function() { var path = document.location.pathname; var parts = path.split(/\//); $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() { if (this === '..') parts.pop(); }); var url = parts.join('/'); return path.substring(url.lastIndexOf('/') + 1, path.length - 1); }, initOnKeyListeners: function() { $(document).keyup(function(event) { var activeElementType = document.activeElement.tagName; // don't navigate when in search box or textarea if (activeElementType !== 'TEXTAREA' && activeElementType !== 'INPUT' && activeElementType !== 'SELECT') { switch (event.keyCode) { case 37: // left var prevHref = $('link[rel="prev"]').prop('href'); if (prevHref) { window.location.href = prevHref; return false; } case 39: // right var nextHref = $('link[rel="next"]').prop('href'); if (nextHref) { window.location.href = nextHref; return false; } } } }); } }; // quick alias for translations _ = Documentation.gettext; $(document).ready(function() { Documentation.init(); }); dogpile.cache-0.9.0/docs/_static/documentation_options.js0000664000175000017500000000046413555610710024642 0ustar classicclassic00000000000000var DOCUMENTATION_OPTIONS = { URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), VERSION: '0.9.0', LANGUAGE: 'None', COLLAPSE_INDEX: false, FILE_SUFFIX: '.html', HAS_SOURCE: true, SOURCELINK_SUFFIX: '.txt', NAVIGATION_WITH_KEYS: false };dogpile.cache-0.9.0/docs/_static/file.png0000664000175000017500000000043613555610710021304 0ustar classicclassic00000000000000PNG  IHDRaIDATxR){l ۶f=@ :3~箄rX$AX-D ~ lj(P%8<<9:: PO&$ l~X&EW^4wQ}^ͣ i0/H/@F)Dzq+j[SU5h/oY G&Lfs|{3%U+S`AFIENDB`dogpile.cache-0.9.0/docs/_static/jquery-3.4.1.js0000664000175000017500000104345413555610710022205 0ustar classicclassic00000000000000/*! 
* jQuery JavaScript Library v3.4.1 * https://jquery.com/ * * Includes Sizzle.js * https://sizzlejs.com/ * * Copyright JS Foundation and other contributors * Released under the MIT license * https://jquery.org/license * * Date: 2019-05-01T21:04Z */ ( function( global, factory ) { "use strict"; if ( typeof module === "object" && typeof module.exports === "object" ) { // For CommonJS and CommonJS-like environments where a proper `window` // is present, execute the factory and get jQuery. // For environments that do not have a `window` with a `document` // (such as Node.js), expose a factory as module.exports. // This accentuates the need for the creation of a real `window`. // e.g. var jQuery = require("jquery")(window); // See ticket #14549 for more info. module.exports = global.document ? factory( global, true ) : function( w ) { if ( !w.document ) { throw new Error( "jQuery requires a window with a document" ); } return factory( w ); }; } else { factory( global ); } // Pass this if window is not defined yet } )( typeof window !== "undefined" ? window : this, function( window, noGlobal ) { // Edge <= 12 - 13+, Firefox <=18 - 45+, IE 10 - 11, Safari 5.1 - 9+, iOS 6 - 9.1 // throw exceptions when non-strict code (e.g., ASP.NET 4.5) accesses strict mode // arguments.callee.caller (trac-13335). But as of jQuery 3.0 (2016), strict mode should be common // enough that all such attempts are guarded in a try block. "use strict"; var arr = []; var document = window.document; var getProto = Object.getPrototypeOf; var slice = arr.slice; var concat = arr.concat; var push = arr.push; var indexOf = arr.indexOf; var class2type = {}; var toString = class2type.toString; var hasOwn = class2type.hasOwnProperty; var fnToString = hasOwn.toString; var ObjectFunctionString = fnToString.call( Object ); var support = {}; var isFunction = function isFunction( obj ) { // Support: Chrome <=57, Firefox <=52 // In some browsers, typeof returns "function" for HTML elements // (i.e., `typeof document.createElement( "object" ) === "function"`). // We don't want to classify *any* DOM node as a function. return typeof obj === "function" && typeof obj.nodeType !== "number"; }; var isWindow = function isWindow( obj ) { return obj != null && obj === obj.window; }; var preservedScriptAttributes = { type: true, src: true, nonce: true, noModule: true }; function DOMEval( code, node, doc ) { doc = doc || document; var i, val, script = doc.createElement( "script" ); script.text = code; if ( node ) { for ( i in preservedScriptAttributes ) { // Support: Firefox 64+, Edge 18+ // Some browsers don't support the "nonce" property on scripts. // On the other hand, just using `getAttribute` is not enough as // the `nonce` attribute is reset to an empty string whenever it // becomes browsing-context connected. // See https://github.com/whatwg/html/issues/2369 // See https://html.spec.whatwg.org/#nonce-attributes // The `node.getAttribute` check was added for the sake of // `jQuery.globalEval` so that it can fake a nonce-containing node // via an object. val = node[ i ] || node.getAttribute && node.getAttribute( i ); if ( val ) { script.setAttribute( i, val ); } } } doc.head.appendChild( script ).parentNode.removeChild( script ); } function toType( obj ) { if ( obj == null ) { return obj + ""; } // Support: Android <=2.3 only (functionish RegExp) return typeof obj === "object" || typeof obj === "function" ? 
class2type[ toString.call( obj ) ] || "object" : typeof obj; } /* global Symbol */ // Defining this global in .eslintrc.json would create a danger of using the global // unguarded in another place, it seems safer to define global only for this module var version = "3.4.1", // Define a local copy of jQuery jQuery = function( selector, context ) { // The jQuery object is actually just the init constructor 'enhanced' // Need init if jQuery is called (just allow error to be thrown if not included) return new jQuery.fn.init( selector, context ); }, // Support: Android <=4.0 only // Make sure we trim BOM and NBSP rtrim = /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g; jQuery.fn = jQuery.prototype = { // The current version of jQuery being used jquery: version, constructor: jQuery, // The default length of a jQuery object is 0 length: 0, toArray: function() { return slice.call( this ); }, // Get the Nth element in the matched element set OR // Get the whole matched element set as a clean array get: function( num ) { // Return all the elements in a clean array if ( num == null ) { return slice.call( this ); } // Return just the one element from the set return num < 0 ? this[ num + this.length ] : this[ num ]; }, // Take an array of elements and push it onto the stack // (returning the new matched element set) pushStack: function( elems ) { // Build a new jQuery matched element set var ret = jQuery.merge( this.constructor(), elems ); // Add the old object onto the stack (as a reference) ret.prevObject = this; // Return the newly-formed element set return ret; }, // Execute a callback for every element in the matched set. each: function( callback ) { return jQuery.each( this, callback ); }, map: function( callback ) { return this.pushStack( jQuery.map( this, function( elem, i ) { return callback.call( elem, i, elem ); } ) ); }, slice: function() { return this.pushStack( slice.apply( this, arguments ) ); }, first: function() { return this.eq( 0 ); }, last: function() { return this.eq( -1 ); }, eq: function( i ) { var len = this.length, j = +i + ( i < 0 ? len : 0 ); return this.pushStack( j >= 0 && j < len ? [ this[ j ] ] : [] ); }, end: function() { return this.prevObject || this.constructor(); }, // For internal use only. // Behaves like an Array's method, not like a jQuery method. 
push: push, sort: arr.sort, splice: arr.splice }; jQuery.extend = jQuery.fn.extend = function() { var options, name, src, copy, copyIsArray, clone, target = arguments[ 0 ] || {}, i = 1, length = arguments.length, deep = false; // Handle a deep copy situation if ( typeof target === "boolean" ) { deep = target; // Skip the boolean and the target target = arguments[ i ] || {}; i++; } // Handle case when target is a string or something (possible in deep copy) if ( typeof target !== "object" && !isFunction( target ) ) { target = {}; } // Extend jQuery itself if only one argument is passed if ( i === length ) { target = this; i--; } for ( ; i < length; i++ ) { // Only deal with non-null/undefined values if ( ( options = arguments[ i ] ) != null ) { // Extend the base object for ( name in options ) { copy = options[ name ]; // Prevent Object.prototype pollution // Prevent never-ending loop if ( name === "__proto__" || target === copy ) { continue; } // Recurse if we're merging plain objects or arrays if ( deep && copy && ( jQuery.isPlainObject( copy ) || ( copyIsArray = Array.isArray( copy ) ) ) ) { src = target[ name ]; // Ensure proper type for the source value if ( copyIsArray && !Array.isArray( src ) ) { clone = []; } else if ( !copyIsArray && !jQuery.isPlainObject( src ) ) { clone = {}; } else { clone = src; } copyIsArray = false; // Never move original objects, clone them target[ name ] = jQuery.extend( deep, clone, copy ); // Don't bring in undefined values } else if ( copy !== undefined ) { target[ name ] = copy; } } } } // Return the modified object return target; }; jQuery.extend( { // Unique for each copy of jQuery on the page expando: "jQuery" + ( version + Math.random() ).replace( /\D/g, "" ), // Assume jQuery is ready without the ready module isReady: true, error: function( msg ) { throw new Error( msg ); }, noop: function() {}, isPlainObject: function( obj ) { var proto, Ctor; // Detect obvious negatives // Use toString instead of jQuery.type to catch host objects if ( !obj || toString.call( obj ) !== "[object Object]" ) { return false; } proto = getProto( obj ); // Objects with no prototype (e.g., `Object.create( null )`) are plain if ( !proto ) { return true; } // Objects with prototype are plain iff they were constructed by a global Object function Ctor = hasOwn.call( proto, "constructor" ) && proto.constructor; return typeof Ctor === "function" && fnToString.call( Ctor ) === ObjectFunctionString; }, isEmptyObject: function( obj ) { var name; for ( name in obj ) { return false; } return true; }, // Evaluates a script in a global context globalEval: function( code, options ) { DOMEval( code, { nonce: options && options.nonce } ); }, each: function( obj, callback ) { var length, i = 0; if ( isArrayLike( obj ) ) { length = obj.length; for ( ; i < length; i++ ) { if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { break; } } } else { for ( i in obj ) { if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { break; } } } return obj; }, // Support: Android <=4.0 only trim: function( text ) { return text == null ? "" : ( text + "" ).replace( rtrim, "" ); }, // results is for internal usage only makeArray: function( arr, results ) { var ret = results || []; if ( arr != null ) { if ( isArrayLike( Object( arr ) ) ) { jQuery.merge( ret, typeof arr === "string" ? [ arr ] : arr ); } else { push.call( ret, arr ); } } return ret; }, inArray: function( elem, arr, i ) { return arr == null ? 
-1 : indexOf.call( arr, elem, i ); }, // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit merge: function( first, second ) { var len = +second.length, j = 0, i = first.length; for ( ; j < len; j++ ) { first[ i++ ] = second[ j ]; } first.length = i; return first; }, grep: function( elems, callback, invert ) { var callbackInverse, matches = [], i = 0, length = elems.length, callbackExpect = !invert; // Go through the array, only saving the items // that pass the validator function for ( ; i < length; i++ ) { callbackInverse = !callback( elems[ i ], i ); if ( callbackInverse !== callbackExpect ) { matches.push( elems[ i ] ); } } return matches; }, // arg is for internal usage only map: function( elems, callback, arg ) { var length, value, i = 0, ret = []; // Go through the array, translating each of the items to their new values if ( isArrayLike( elems ) ) { length = elems.length; for ( ; i < length; i++ ) { value = callback( elems[ i ], i, arg ); if ( value != null ) { ret.push( value ); } } // Go through every key on the object, } else { for ( i in elems ) { value = callback( elems[ i ], i, arg ); if ( value != null ) { ret.push( value ); } } } // Flatten any nested arrays return concat.apply( [], ret ); }, // A global GUID counter for objects guid: 1, // jQuery.support is not used in Core but other projects attach their // properties to it so it needs to exist. support: support } ); if ( typeof Symbol === "function" ) { jQuery.fn[ Symbol.iterator ] = arr[ Symbol.iterator ]; } // Populate the class2type map jQuery.each( "Boolean Number String Function Array Date RegExp Object Error Symbol".split( " " ), function( i, name ) { class2type[ "[object " + name + "]" ] = name.toLowerCase(); } ); function isArrayLike( obj ) { // Support: real iOS 8.2 only (not reproducible in simulator) // `in` check used to prevent JIT error (gh-2145) // hasOwn isn't used here due to false negatives // regarding Nodelist length in IE var length = !!obj && "length" in obj && obj.length, type = toType( obj ); if ( isFunction( obj ) || isWindow( obj ) ) { return false; } return type === "array" || length === 0 || typeof length === "number" && length > 0 && ( length - 1 ) in obj; } var Sizzle = /*! 
* Sizzle CSS Selector Engine v2.3.4 * https://sizzlejs.com/ * * Copyright JS Foundation and other contributors * Released under the MIT license * https://js.foundation/ * * Date: 2019-04-08 */ (function( window ) { var i, support, Expr, getText, isXML, tokenize, compile, select, outermostContext, sortInput, hasDuplicate, // Local document vars setDocument, document, docElem, documentIsHTML, rbuggyQSA, rbuggyMatches, matches, contains, // Instance-specific data expando = "sizzle" + 1 * new Date(), preferredDoc = window.document, dirruns = 0, done = 0, classCache = createCache(), tokenCache = createCache(), compilerCache = createCache(), nonnativeSelectorCache = createCache(), sortOrder = function( a, b ) { if ( a === b ) { hasDuplicate = true; } return 0; }, // Instance methods hasOwn = ({}).hasOwnProperty, arr = [], pop = arr.pop, push_native = arr.push, push = arr.push, slice = arr.slice, // Use a stripped-down indexOf as it's faster than native // https://jsperf.com/thor-indexof-vs-for/5 indexOf = function( list, elem ) { var i = 0, len = list.length; for ( ; i < len; i++ ) { if ( list[i] === elem ) { return i; } } return -1; }, booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped", // Regular expressions // http://www.w3.org/TR/css3-selectors/#whitespace whitespace = "[\\x20\\t\\r\\n\\f]", // http://www.w3.org/TR/CSS21/syndata.html#value-def-identifier identifier = "(?:\\\\.|[\\w-]|[^\0-\\xa0])+", // Attribute selectors: http://www.w3.org/TR/selectors/#attribute-selectors attributes = "\\[" + whitespace + "*(" + identifier + ")(?:" + whitespace + // Operator (capture 2) "*([*^$|!~]?=)" + whitespace + // "Attribute values must be CSS identifiers [capture 5] or strings [capture 3 or capture 4]" "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + identifier + "))|)" + whitespace + "*\\]", pseudos = ":(" + identifier + ")(?:\\((" + // To reduce the number of selectors needing tokenize in the preFilter, prefer arguments: // 1. quoted (capture 3; capture 4 or capture 5) "('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|" + // 2. simple (capture 6) "((?:\\\\.|[^\\\\()[\\]]|" + attributes + ")*)|" + // 3. 
anything else (capture 2) ".*" + ")\\)|)", // Leading and non-escaped trailing whitespace, capturing some non-whitespace characters preceding the latter rwhitespace = new RegExp( whitespace + "+", "g" ), rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + whitespace + "+$", "g" ), rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + "*" ), rdescend = new RegExp( whitespace + "|>" ), rpseudo = new RegExp( pseudos ), ridentifier = new RegExp( "^" + identifier + "$" ), matchExpr = { "ID": new RegExp( "^#(" + identifier + ")" ), "CLASS": new RegExp( "^\\.(" + identifier + ")" ), "TAG": new RegExp( "^(" + identifier + "|[*])" ), "ATTR": new RegExp( "^" + attributes ), "PSEUDO": new RegExp( "^" + pseudos ), "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + whitespace + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + whitespace + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), // For use in libraries implementing .is() // We use this for POS matching in `select` "needsContext": new RegExp( "^" + whitespace + "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + whitespace + "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) }, rhtml = /HTML$/i, rinputs = /^(?:input|select|textarea|button)$/i, rheader = /^h\d$/i, rnative = /^[^{]+\{\s*\[native \w/, // Easily-parseable/retrievable ID or TAG or CLASS selectors rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, rsibling = /[+~]/, // CSS escapes // http://www.w3.org/TR/CSS21/syndata.html#escaped-characters runescape = new RegExp( "\\\\([\\da-f]{1,6}" + whitespace + "?|(" + whitespace + ")|.)", "ig" ), funescape = function( _, escaped, escapedWhitespace ) { var high = "0x" + escaped - 0x10000; // NaN means non-codepoint // Support: Firefox<24 // Workaround erroneous numeric interpretation of +"0x" return high !== high || escapedWhitespace ? escaped : high < 0 ? // BMP codepoint String.fromCharCode( high + 0x10000 ) : // Supplemental Plane codepoint (surrogate pair) String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); }, // CSS string/identifier serialization // https://drafts.csswg.org/cssom/#common-serializing-idioms rcssescape = /([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, fcssescape = function( ch, asCodePoint ) { if ( asCodePoint ) { // U+0000 NULL becomes U+FFFD REPLACEMENT CHARACTER if ( ch === "\0" ) { return "\uFFFD"; } // Control characters and (dependent upon position) numbers get escaped as code points return ch.slice( 0, -1 ) + "\\" + ch.charCodeAt( ch.length - 1 ).toString( 16 ) + " "; } // Other potentially-special ASCII characters get backslash-escaped return "\\" + ch; }, // Used for iframes // See setDocument() // Removing the function wrapper causes a "Permission Denied" // error in IE unloadHandler = function() { setDocument(); }, inDisabledFieldset = addCombinator( function( elem ) { return elem.disabled === true && elem.nodeName.toLowerCase() === "fieldset"; }, { dir: "parentNode", next: "legend" } ); // Optimize for push.apply( _, NodeList ) try { push.apply( (arr = slice.call( preferredDoc.childNodes )), preferredDoc.childNodes ); // Support: Android<4.0 // Detect silently failing push.apply arr[ preferredDoc.childNodes.length ].nodeType; } catch ( e ) { push = { apply: arr.length ? 
// Leverage slice if possible function( target, els ) { push_native.apply( target, slice.call(els) ); } : // Support: IE<9 // Otherwise append directly function( target, els ) { var j = target.length, i = 0; // Can't trust NodeList.length while ( (target[j++] = els[i++]) ) {} target.length = j - 1; } }; } function Sizzle( selector, context, results, seed ) { var m, i, elem, nid, match, groups, newSelector, newContext = context && context.ownerDocument, // nodeType defaults to 9, since context defaults to document nodeType = context ? context.nodeType : 9; results = results || []; // Return early from calls with invalid selector or context if ( typeof selector !== "string" || !selector || nodeType !== 1 && nodeType !== 9 && nodeType !== 11 ) { return results; } // Try to shortcut find operations (as opposed to filters) in HTML documents if ( !seed ) { if ( ( context ? context.ownerDocument || context : preferredDoc ) !== document ) { setDocument( context ); } context = context || document; if ( documentIsHTML ) { // If the selector is sufficiently simple, try using a "get*By*" DOM method // (excepting DocumentFragment context, where the methods don't exist) if ( nodeType !== 11 && (match = rquickExpr.exec( selector )) ) { // ID selector if ( (m = match[1]) ) { // Document context if ( nodeType === 9 ) { if ( (elem = context.getElementById( m )) ) { // Support: IE, Opera, Webkit // TODO: identify versions // getElementById can match elements by name instead of ID if ( elem.id === m ) { results.push( elem ); return results; } } else { return results; } // Element context } else { // Support: IE, Opera, Webkit // TODO: identify versions // getElementById can match elements by name instead of ID if ( newContext && (elem = newContext.getElementById( m )) && contains( context, elem ) && elem.id === m ) { results.push( elem ); return results; } } // Type selector } else if ( match[2] ) { push.apply( results, context.getElementsByTagName( selector ) ); return results; // Class selector } else if ( (m = match[3]) && support.getElementsByClassName && context.getElementsByClassName ) { push.apply( results, context.getElementsByClassName( m ) ); return results; } } // Take advantage of querySelectorAll if ( support.qsa && !nonnativeSelectorCache[ selector + " " ] && (!rbuggyQSA || !rbuggyQSA.test( selector )) && // Support: IE 8 only // Exclude object elements (nodeType !== 1 || context.nodeName.toLowerCase() !== "object") ) { newSelector = selector; newContext = context; // qSA considers elements outside a scoping root when evaluating child or // descendant combinators, which is not what we want. // In such cases, we work around the behavior by prefixing every selector in the // list with an ID selector referencing the scope context. // Thanks to Andrew Dupont for this technique. 
if ( nodeType === 1 && rdescend.test( selector ) ) { // Capture the context ID, setting it first if necessary if ( (nid = context.getAttribute( "id" )) ) { nid = nid.replace( rcssescape, fcssescape ); } else { context.setAttribute( "id", (nid = expando) ); } // Prefix every selector in the list groups = tokenize( selector ); i = groups.length; while ( i-- ) { groups[i] = "#" + nid + " " + toSelector( groups[i] ); } newSelector = groups.join( "," ); // Expand context for sibling selectors newContext = rsibling.test( selector ) && testContext( context.parentNode ) || context; } try { push.apply( results, newContext.querySelectorAll( newSelector ) ); return results; } catch ( qsaError ) { nonnativeSelectorCache( selector, true ); } finally { if ( nid === expando ) { context.removeAttribute( "id" ); } } } } } // All others return select( selector.replace( rtrim, "$1" ), context, results, seed ); } /** * Create key-value caches of limited size * @returns {function(string, object)} Returns the Object data after storing it on itself with * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) * deleting the oldest entry */ function createCache() { var keys = []; function cache( key, value ) { // Use (key + " ") to avoid collision with native prototype properties (see Issue #157) if ( keys.push( key + " " ) > Expr.cacheLength ) { // Only keep the most recent entries delete cache[ keys.shift() ]; } return (cache[ key + " " ] = value); } return cache; } /** * Mark a function for special use by Sizzle * @param {Function} fn The function to mark */ function markFunction( fn ) { fn[ expando ] = true; return fn; } /** * Support testing using an element * @param {Function} fn Passed the created element and returns a boolean result */ function assert( fn ) { var el = document.createElement("fieldset"); try { return !!fn( el ); } catch (e) { return false; } finally { // Remove from its parent by default if ( el.parentNode ) { el.parentNode.removeChild( el ); } // release memory in IE el = null; } } /** * Adds the same handler for all of the specified attrs * @param {String} attrs Pipe-separated list of attributes * @param {Function} handler The method that will be applied */ function addHandle( attrs, handler ) { var arr = attrs.split("|"), i = arr.length; while ( i-- ) { Expr.attrHandle[ arr[i] ] = handler; } } /** * Checks document order of two siblings * @param {Element} a * @param {Element} b * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b */ function siblingCheck( a, b ) { var cur = b && a, diff = cur && a.nodeType === 1 && b.nodeType === 1 && a.sourceIndex - b.sourceIndex; // Use IE sourceIndex if available on both nodes if ( diff ) { return diff; } // Check if b follows a if ( cur ) { while ( (cur = cur.nextSibling) ) { if ( cur === b ) { return -1; } } } return a ? 
1 : -1; } /** * Returns a function to use in pseudos for input types * @param {String} type */ function createInputPseudo( type ) { return function( elem ) { var name = elem.nodeName.toLowerCase(); return name === "input" && elem.type === type; }; } /** * Returns a function to use in pseudos for buttons * @param {String} type */ function createButtonPseudo( type ) { return function( elem ) { var name = elem.nodeName.toLowerCase(); return (name === "input" || name === "button") && elem.type === type; }; } /** * Returns a function to use in pseudos for :enabled/:disabled * @param {Boolean} disabled true for :disabled; false for :enabled */ function createDisabledPseudo( disabled ) { // Known :disabled false positives: fieldset[disabled] > legend:nth-of-type(n+2) :can-disable return function( elem ) { // Only certain elements can match :enabled or :disabled // https://html.spec.whatwg.org/multipage/scripting.html#selector-enabled // https://html.spec.whatwg.org/multipage/scripting.html#selector-disabled if ( "form" in elem ) { // Check for inherited disabledness on relevant non-disabled elements: // * listed form-associated elements in a disabled fieldset // https://html.spec.whatwg.org/multipage/forms.html#category-listed // https://html.spec.whatwg.org/multipage/forms.html#concept-fe-disabled // * option elements in a disabled optgroup // https://html.spec.whatwg.org/multipage/forms.html#concept-option-disabled // All such elements have a "form" property. if ( elem.parentNode && elem.disabled === false ) { // Option elements defer to a parent optgroup if present if ( "label" in elem ) { if ( "label" in elem.parentNode ) { return elem.parentNode.disabled === disabled; } else { return elem.disabled === disabled; } } // Support: IE 6 - 11 // Use the isDisabled shortcut property to check for disabled fieldset ancestors return elem.isDisabled === disabled || // Where there is no isDisabled, check manually /* jshint -W018 */ elem.isDisabled !== !disabled && inDisabledFieldset( elem ) === disabled; } return elem.disabled === disabled; // Try to winnow out elements that can't be disabled before trusting the disabled property. // Some victims get caught in our net (label, legend, menu, track), but it shouldn't // even exist on them, let alone have a boolean value. 
} else if ( "label" in elem ) { return elem.disabled === disabled; } // Remaining elements are neither :enabled nor :disabled return false; }; } /** * Returns a function to use in pseudos for positionals * @param {Function} fn */ function createPositionalPseudo( fn ) { return markFunction(function( argument ) { argument = +argument; return markFunction(function( seed, matches ) { var j, matchIndexes = fn( [], seed.length, argument ), i = matchIndexes.length; // Match elements found at the specified indexes while ( i-- ) { if ( seed[ (j = matchIndexes[i]) ] ) { seed[j] = !(matches[j] = seed[j]); } } }); }); } /** * Checks a node for validity as a Sizzle context * @param {Element|Object=} context * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value */ function testContext( context ) { return context && typeof context.getElementsByTagName !== "undefined" && context; } // Expose support vars for convenience support = Sizzle.support = {}; /** * Detects XML nodes * @param {Element|Object} elem An element or a document * @returns {Boolean} True iff elem is a non-HTML XML node */ isXML = Sizzle.isXML = function( elem ) { var namespace = elem.namespaceURI, docElem = (elem.ownerDocument || elem).documentElement; // Support: IE <=8 // Assume HTML when documentElement doesn't yet exist, such as inside loading iframes // https://bugs.jquery.com/ticket/4833 return !rhtml.test( namespace || docElem && docElem.nodeName || "HTML" ); }; /** * Sets document-related variables once based on the current document * @param {Element|Object} [doc] An element or document object to use to set the document * @returns {Object} Returns the current document */ setDocument = Sizzle.setDocument = function( node ) { var hasCompare, subWindow, doc = node ? 
node.ownerDocument || node : preferredDoc; // Return early if doc is invalid or already selected if ( doc === document || doc.nodeType !== 9 || !doc.documentElement ) { return document; } // Update global variables document = doc; docElem = document.documentElement; documentIsHTML = !isXML( document ); // Support: IE 9-11, Edge // Accessing iframe documents after unload throws "permission denied" errors (jQuery #13936) if ( preferredDoc !== document && (subWindow = document.defaultView) && subWindow.top !== subWindow ) { // Support: IE 11, Edge if ( subWindow.addEventListener ) { subWindow.addEventListener( "unload", unloadHandler, false ); // Support: IE 9 - 10 only } else if ( subWindow.attachEvent ) { subWindow.attachEvent( "onunload", unloadHandler ); } } /* Attributes ---------------------------------------------------------------------- */ // Support: IE<8 // Verify that getAttribute really returns attributes and not properties // (excepting IE8 booleans) support.attributes = assert(function( el ) { el.className = "i"; return !el.getAttribute("className"); }); /* getElement(s)By* ---------------------------------------------------------------------- */ // Check if getElementsByTagName("*") returns only elements support.getElementsByTagName = assert(function( el ) { el.appendChild( document.createComment("") ); return !el.getElementsByTagName("*").length; }); // Support: IE<9 support.getElementsByClassName = rnative.test( document.getElementsByClassName ); // Support: IE<10 // Check if getElementById returns elements by name // The broken getElementById methods don't pick up programmatically-set names, // so use a roundabout getElementsByName test support.getById = assert(function( el ) { docElem.appendChild( el ).id = expando; return !document.getElementsByName || !document.getElementsByName( expando ).length; }); // ID filter and find if ( support.getById ) { Expr.filter["ID"] = function( id ) { var attrId = id.replace( runescape, funescape ); return function( elem ) { return elem.getAttribute("id") === attrId; }; }; Expr.find["ID"] = function( id, context ) { if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { var elem = context.getElementById( id ); return elem ? [ elem ] : []; } }; } else { Expr.filter["ID"] = function( id ) { var attrId = id.replace( runescape, funescape ); return function( elem ) { var node = typeof elem.getAttributeNode !== "undefined" && elem.getAttributeNode("id"); return node && node.value === attrId; }; }; // Support: IE 6 - 7 only // getElementById is not reliable as a find shortcut Expr.find["ID"] = function( id, context ) { if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { var node, i, elems, elem = context.getElementById( id ); if ( elem ) { // Verify the id attribute node = elem.getAttributeNode("id"); if ( node && node.value === id ) { return [ elem ]; } // Fall back on getElementsByName elems = context.getElementsByName( id ); i = 0; while ( (elem = elems[i++]) ) { node = elem.getAttributeNode("id"); if ( node && node.value === id ) { return [ elem ]; } } } return []; } }; } // Tag Expr.find["TAG"] = support.getElementsByTagName ? 
function( tag, context ) {
		if ( typeof context.getElementsByTagName !== "undefined" ) {
			return context.getElementsByTagName( tag );

		// DocumentFragment nodes don't have gEBTN
		} else if ( support.qsa ) {
			return context.querySelectorAll( tag );
		}
	} :

	function( tag, context ) {
		var elem,
			tmp = [],
			i = 0,

			// By happy coincidence, a (broken) gEBTN appears on DocumentFragment nodes too
			results = context.getElementsByTagName( tag );

		// Filter out possible comments
		if ( tag === "*" ) {
			while ( (elem = results[i++]) ) {
				if ( elem.nodeType === 1 ) {
					tmp.push( elem );
				}
			}

			return tmp;
		}
		return results;
	};

// Class
Expr.find["CLASS"] = support.getElementsByClassName && function( className, context ) {
	if ( typeof context.getElementsByClassName !== "undefined" && documentIsHTML ) {
		return context.getElementsByClassName( className );
	}
};

/* QSA/matchesSelector
---------------------------------------------------------------------- */

// QSA and matchesSelector support

// matchesSelector(:active) reports false when true (IE9/Opera 11.5)
rbuggyMatches = [];

// qSa(:focus) reports false when true (Chrome 21)
// We allow this because of a bug in IE8/9 that throws an error
// whenever `document.activeElement` is accessed on an iframe
// So, we allow :focus to pass through QSA all the time to avoid the IE error
// See https://bugs.jquery.com/ticket/13378
rbuggyQSA = [];

if ( (support.qsa = rnative.test( document.querySelectorAll )) ) {
	// Build QSA regex
	// Regex strategy adopted from Diego Perini
	assert(function( el ) {
		// Select is set to empty string on purpose
		// This is to test IE's treatment of not explicitly
		// setting a boolean content attribute,
		// since its presence should be enough
		// https://bugs.jquery.com/ticket/12359
		docElem.appendChild( el ).innerHTML = "<a id='" + expando + "'></a>" +
			"<select id='" + expando + "-\r\\' msallowcapture=''>" +
			"<option selected=''></option></select>";

		// Support: IE8, Opera 11-12.16
		// Nothing should be selected when empty strings follow ^= or $= or *=
		// The test attribute must be unknown in Opera but "safe" for WinRT
		// https://msdn.microsoft.com/en-us/library/ie/hh465388.aspx#attribute_section
		if ( el.querySelectorAll("[msallowcapture^='']").length ) {
			rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" );
		}

		// Support: IE8
		// Boolean attributes and "value" are not treated correctly
		if ( !el.querySelectorAll("[selected]").length ) {
			rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" );
		}

		// Support: Chrome<29, Android<4.4, Safari<7.0+, iOS<7.0+, PhantomJS<1.9.8+
		if ( !el.querySelectorAll( "[id~=" + expando + "-]" ).length ) {
			rbuggyQSA.push("~=");
		}

		// Webkit/Opera - :checked should return selected option elements
		// http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked
		// IE8 throws error here and will not see later tests
		if ( !el.querySelectorAll(":checked").length ) {
			rbuggyQSA.push(":checked");
		}

		// Support: Safari 8+, iOS 8+
		// https://bugs.webkit.org/show_bug.cgi?id=136851
		// In-page `selector#id sibling-combinator selector` fails
		if ( !el.querySelectorAll( "a#" + expando + "+*" ).length ) {
			rbuggyQSA.push(".#.+[+~]");
		}
	});

	assert(function( el ) {
		el.innerHTML = "<a href='' disabled='disabled'></a>" +
			"<select disabled='disabled'><option/></select>";

		// Support: Windows 8 Native Apps
		// The type and name attributes are restricted during .innerHTML assignment
		var input = document.createElement("input");
		input.setAttribute( "type", "hidden" );
		el.appendChild( input ).setAttribute( "name", "D" );

		// Support: IE8
		// Enforce case-sensitivity of name attribute
		if ( el.querySelectorAll("[name=d]").length ) {
			rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" );
		}

		// FF 3.5 - :enabled/:disabled and hidden elements (hidden elements
are still enabled) // IE8 throws error here and will not see later tests if ( el.querySelectorAll(":enabled").length !== 2 ) { rbuggyQSA.push( ":enabled", ":disabled" ); } // Support: IE9-11+ // IE's :disabled selector does not pick up the children of disabled fieldsets docElem.appendChild( el ).disabled = true; if ( el.querySelectorAll(":disabled").length !== 2 ) { rbuggyQSA.push( ":enabled", ":disabled" ); } // Opera 10-11 does not throw on post-comma invalid pseudos el.querySelectorAll("*,:x"); rbuggyQSA.push(",.*:"); }); } if ( (support.matchesSelector = rnative.test( (matches = docElem.matches || docElem.webkitMatchesSelector || docElem.mozMatchesSelector || docElem.oMatchesSelector || docElem.msMatchesSelector) )) ) { assert(function( el ) { // Check to see if it's possible to do matchesSelector // on a disconnected node (IE 9) support.disconnectedMatch = matches.call( el, "*" ); // This should fail with an exception // Gecko does not error, returns false instead matches.call( el, "[s!='']:x" ); rbuggyMatches.push( "!=", pseudos ); }); } rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join("|") ); rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join("|") ); /* Contains ---------------------------------------------------------------------- */ hasCompare = rnative.test( docElem.compareDocumentPosition ); // Element contains another // Purposefully self-exclusive // As in, an element does not contain itself contains = hasCompare || rnative.test( docElem.contains ) ? function( a, b ) { var adown = a.nodeType === 9 ? a.documentElement : a, bup = b && b.parentNode; return a === bup || !!( bup && bup.nodeType === 1 && ( adown.contains ? adown.contains( bup ) : a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 )); } : function( a, b ) { if ( b ) { while ( (b = b.parentNode) ) { if ( b === a ) { return true; } } } return false; }; /* Sorting ---------------------------------------------------------------------- */ // Document order sorting sortOrder = hasCompare ? function( a, b ) { // Flag for duplicate removal if ( a === b ) { hasDuplicate = true; return 0; } // Sort on method existence if only one input has compareDocumentPosition var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; if ( compare ) { return compare; } // Calculate position if both inputs belong to the same document compare = ( a.ownerDocument || a ) === ( b.ownerDocument || b ) ? a.compareDocumentPosition( b ) : // Otherwise we know they are disconnected 1; // Disconnected nodes if ( compare & 1 || (!support.sortDetached && b.compareDocumentPosition( a ) === compare) ) { // Choose the first element that is related to our preferred document if ( a === document || a.ownerDocument === preferredDoc && contains(preferredDoc, a) ) { return -1; } if ( b === document || b.ownerDocument === preferredDoc && contains(preferredDoc, b) ) { return 1; } // Maintain original order return sortInput ? ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : 0; } return compare & 4 ? -1 : 1; } : function( a, b ) { // Exit early if the nodes are identical if ( a === b ) { hasDuplicate = true; return 0; } var cur, i = 0, aup = a.parentNode, bup = b.parentNode, ap = [ a ], bp = [ b ]; // Parentless nodes are either documents or disconnected if ( !aup || !bup ) { return a === document ? -1 : b === document ? 1 : aup ? -1 : bup ? 1 : sortInput ? 
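// For disconnected nodes, fall back to the original input order when it was retained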
( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : 0; // If the nodes are siblings, we can do a quick check } else if ( aup === bup ) { return siblingCheck( a, b ); } // Otherwise we need full lists of their ancestors for comparison cur = a; while ( (cur = cur.parentNode) ) { ap.unshift( cur ); } cur = b; while ( (cur = cur.parentNode) ) { bp.unshift( cur ); } // Walk down the tree looking for a discrepancy while ( ap[i] === bp[i] ) { i++; } return i ? // Do a sibling check if the nodes have a common ancestor siblingCheck( ap[i], bp[i] ) : // Otherwise nodes in our document sort first ap[i] === preferredDoc ? -1 : bp[i] === preferredDoc ? 1 : 0; }; return document; }; Sizzle.matches = function( expr, elements ) { return Sizzle( expr, null, null, elements ); }; Sizzle.matchesSelector = function( elem, expr ) { // Set document vars if needed if ( ( elem.ownerDocument || elem ) !== document ) { setDocument( elem ); } if ( support.matchesSelector && documentIsHTML && !nonnativeSelectorCache[ expr + " " ] && ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { try { var ret = matches.call( elem, expr ); // IE 9's matchesSelector returns false on disconnected nodes if ( ret || support.disconnectedMatch || // As well, disconnected nodes are said to be in a document // fragment in IE 9 elem.document && elem.document.nodeType !== 11 ) { return ret; } } catch (e) { nonnativeSelectorCache( expr, true ); } } return Sizzle( expr, document, null, [ elem ] ).length > 0; }; Sizzle.contains = function( context, elem ) { // Set document vars if needed if ( ( context.ownerDocument || context ) !== document ) { setDocument( context ); } return contains( context, elem ); }; Sizzle.attr = function( elem, name ) { // Set document vars if needed if ( ( elem.ownerDocument || elem ) !== document ) { setDocument( elem ); } var fn = Expr.attrHandle[ name.toLowerCase() ], // Don't get fooled by Object.prototype properties (jQuery #13807) val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? fn( elem, name, !documentIsHTML ) : undefined; return val !== undefined ? val : support.attributes || !documentIsHTML ? elem.getAttribute( name ) : (val = elem.getAttributeNode(name)) && val.specified ? 
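// A specified attribute node carries the real value; otherwise treat the attribute as absent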
val.value : null; }; Sizzle.escape = function( sel ) { return (sel + "").replace( rcssescape, fcssescape ); }; Sizzle.error = function( msg ) { throw new Error( "Syntax error, unrecognized expression: " + msg ); }; /** * Document sorting and removing duplicates * @param {ArrayLike} results */ Sizzle.uniqueSort = function( results ) { var elem, duplicates = [], j = 0, i = 0; // Unless we *know* we can detect duplicates, assume their presence hasDuplicate = !support.detectDuplicates; sortInput = !support.sortStable && results.slice( 0 ); results.sort( sortOrder ); if ( hasDuplicate ) { while ( (elem = results[i++]) ) { if ( elem === results[ i ] ) { j = duplicates.push( i ); } } while ( j-- ) { results.splice( duplicates[ j ], 1 ); } } // Clear input after sorting to release objects // See https://github.com/jquery/sizzle/pull/225 sortInput = null; return results; }; /** * Utility function for retrieving the text value of an array of DOM nodes * @param {Array|Element} elem */ getText = Sizzle.getText = function( elem ) { var node, ret = "", i = 0, nodeType = elem.nodeType; if ( !nodeType ) { // If no nodeType, this is expected to be an array while ( (node = elem[i++]) ) { // Do not traverse comment nodes ret += getText( node ); } } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { // Use textContent for elements // innerText usage removed for consistency of new lines (jQuery #11153) if ( typeof elem.textContent === "string" ) { return elem.textContent; } else { // Traverse its children for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { ret += getText( elem ); } } } else if ( nodeType === 3 || nodeType === 4 ) { return elem.nodeValue; } // Do not include comment or processing instruction nodes return ret; }; Expr = Sizzle.selectors = { // Can be adjusted by the user cacheLength: 50, createPseudo: markFunction, match: matchExpr, attrHandle: {}, find: {}, relative: { ">": { dir: "parentNode", first: true }, " ": { dir: "parentNode" }, "+": { dir: "previousSibling", first: true }, "~": { dir: "previousSibling" } }, preFilter: { "ATTR": function( match ) { match[1] = match[1].replace( runescape, funescape ); // Move the given value to match[3] whether quoted or unquoted match[3] = ( match[3] || match[4] || match[5] || "" ).replace( runescape, funescape ); if ( match[2] === "~=" ) { match[3] = " " + match[3] + " "; } return match.slice( 0, 4 ); }, "CHILD": function( match ) { /* matches from matchExpr["CHILD"] 1 type (only|nth|...) 2 what (child|of-type) 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) 4 xn-component of xn+y argument ([+-]?\d*n|) 5 sign of xn-component 6 x of xn-component 7 sign of y-component 8 y of y-component */ match[1] = match[1].toLowerCase(); if ( match[1].slice( 0, 3 ) === "nth" ) { // nth-* requires argument if ( !match[3] ) { Sizzle.error( match[0] ); } // numeric x and y parameters for Expr.filter.CHILD // remember that false/true cast respectively to 0/1 match[4] = +( match[4] ? 
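// Sign and coefficient of the "xn" component; a bare "n" implies a coefficient of 1
					// (e.g., "3n+1" yields 3, and "-n+2" yields -1)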
match[5] + (match[6] || 1) : 2 * ( match[3] === "even" || match[3] === "odd" ) ); match[5] = +( ( match[7] + match[8] ) || match[3] === "odd" ); // other types prohibit arguments } else if ( match[3] ) { Sizzle.error( match[0] ); } return match; }, "PSEUDO": function( match ) { var excess, unquoted = !match[6] && match[2]; if ( matchExpr["CHILD"].test( match[0] ) ) { return null; } // Accept quoted arguments as-is if ( match[3] ) { match[2] = match[4] || match[5] || ""; // Strip excess characters from unquoted arguments } else if ( unquoted && rpseudo.test( unquoted ) && // Get excess from tokenize (recursively) (excess = tokenize( unquoted, true )) && // advance to the next closing parenthesis (excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length) ) { // excess is a negative index match[0] = match[0].slice( 0, excess ); match[2] = unquoted.slice( 0, excess ); } // Return only captures needed by the pseudo filter method (type and argument) return match.slice( 0, 3 ); } }, filter: { "TAG": function( nodeNameSelector ) { var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); return nodeNameSelector === "*" ? function() { return true; } : function( elem ) { return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; }; }, "CLASS": function( className ) { var pattern = classCache[ className + " " ]; return pattern || (pattern = new RegExp( "(^|" + whitespace + ")" + className + "(" + whitespace + "|$)" )) && classCache( className, function( elem ) { return pattern.test( typeof elem.className === "string" && elem.className || typeof elem.getAttribute !== "undefined" && elem.getAttribute("class") || "" ); }); }, "ATTR": function( name, operator, check ) { return function( elem ) { var result = Sizzle.attr( elem, name ); if ( result == null ) { return operator === "!="; } if ( !operator ) { return true; } result += ""; return operator === "=" ? result === check : operator === "!=" ? result !== check : operator === "^=" ? check && result.indexOf( check ) === 0 : operator === "*=" ? check && result.indexOf( check ) > -1 : operator === "$=" ? check && result.slice( -check.length ) === check : operator === "~=" ? ( " " + result.replace( rwhitespace, " " ) + " " ).indexOf( check ) > -1 : operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : false; }; }, "CHILD": function( type, what, argument, first, last ) { var simple = type.slice( 0, 3 ) !== "nth", forward = type.slice( -4 ) !== "last", ofType = what === "of-type"; return first === 1 && last === 0 ? // Shortcut for :nth-*(n) function( elem ) { return !!elem.parentNode; } : function( elem, context, xml ) { var cache, uniqueCache, outerCache, node, nodeIndex, start, dir = simple !== forward ? "nextSibling" : "previousSibling", parent = elem.parentNode, name = ofType && elem.nodeName.toLowerCase(), useCache = !xml && !ofType, diff = false; if ( parent ) { // :(first|last|only)-(child|of-type) if ( simple ) { while ( dir ) { node = elem; while ( (node = node[ dir ]) ) { if ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) { return false; } } // Reverse direction for :only-* (if we haven't yet done so) start = dir = type === "only" && !start && "nextSibling"; } return true; } start = [ forward ? parent.firstChild : parent.lastChild ]; // non-xml :nth-child(...) 
stores cache data on `parent` if ( forward && useCache ) { // Seek `elem` from a previously-cached index // ...in a gzip-friendly way node = parent; outerCache = node[ expando ] || (node[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ node.uniqueID ] || (outerCache[ node.uniqueID ] = {}); cache = uniqueCache[ type ] || []; nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; diff = nodeIndex && cache[ 2 ]; node = nodeIndex && parent.childNodes[ nodeIndex ]; while ( (node = ++nodeIndex && node && node[ dir ] || // Fallback to seeking `elem` from the start (diff = nodeIndex = 0) || start.pop()) ) { // When found, cache indexes on `parent` and break if ( node.nodeType === 1 && ++diff && node === elem ) { uniqueCache[ type ] = [ dirruns, nodeIndex, diff ]; break; } } } else { // Use previously-cached element index if available if ( useCache ) { // ...in a gzip-friendly way node = elem; outerCache = node[ expando ] || (node[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ node.uniqueID ] || (outerCache[ node.uniqueID ] = {}); cache = uniqueCache[ type ] || []; nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; diff = nodeIndex; } // xml :nth-child(...) // or :nth-last-child(...) or :nth(-last)?-of-type(...) if ( diff === false ) { // Use the same loop as above to seek `elem` from the start while ( (node = ++nodeIndex && node && node[ dir ] || (diff = nodeIndex = 0) || start.pop()) ) { if ( ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) && ++diff ) { // Cache the index of each encountered element if ( useCache ) { outerCache = node[ expando ] || (node[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ node.uniqueID ] || (outerCache[ node.uniqueID ] = {}); uniqueCache[ type ] = [ dirruns, diff ]; } if ( node === elem ) { break; } } } } } // Incorporate the offset, then check against cycle size diff -= last; return diff === first || ( diff % first === 0 && diff / first >= 0 ); } }; }, "PSEUDO": function( pseudo, argument ) { // pseudo-class names are case-insensitive // http://www.w3.org/TR/selectors/#pseudo-classes // Prioritize by case sensitivity in case custom pseudos are added with uppercase letters // Remember that setFilters inherits from pseudos var args, fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || Sizzle.error( "unsupported pseudo: " + pseudo ); // The user may use createPseudo to indicate that // arguments are needed to create the filter function // just as Sizzle does if ( fn[ expando ] ) { return fn( argument ); } // But maintain support for old signatures if ( fn.length > 1 ) { args = [ pseudo, pseudo, "", argument ]; return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? markFunction(function( seed, matches ) { var idx, matched = fn( seed, argument ), i = matched.length; while ( i-- ) { idx = indexOf( seed, matched[i] ); seed[ idx ] = !( matches[ idx ] = matched[i] ); } }) : function( elem ) { return fn( elem, 0, args ); }; } return fn; } }, pseudos: { // Potentially complex pseudos "not": markFunction(function( selector ) { // Trim the selector passed to compile // to avoid treating leading and trailing // spaces as combinators var input = [], results = [], matcher = compile( selector.replace( rtrim, "$1" ) ); return matcher[ expando ] ? 
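// A complex :not() selector gets a set-level matcher that prunes the whole seed in one pass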
markFunction(function( seed, matches, context, xml ) { var elem, unmatched = matcher( seed, null, xml, [] ), i = seed.length; // Match elements unmatched by `matcher` while ( i-- ) { if ( (elem = unmatched[i]) ) { seed[i] = !(matches[i] = elem); } } }) : function( elem, context, xml ) { input[0] = elem; matcher( input, null, xml, results ); // Don't keep the element (issue #299) input[0] = null; return !results.pop(); }; }), "has": markFunction(function( selector ) { return function( elem ) { return Sizzle( selector, elem ).length > 0; }; }), "contains": markFunction(function( text ) { text = text.replace( runescape, funescape ); return function( elem ) { return ( elem.textContent || getText( elem ) ).indexOf( text ) > -1; }; }), // "Whether an element is represented by a :lang() selector // is based solely on the element's language value // being equal to the identifier C, // or beginning with the identifier C immediately followed by "-". // The matching of C against the element's language value is performed case-insensitively. // The identifier C does not have to be a valid language name." // http://www.w3.org/TR/selectors/#lang-pseudo "lang": markFunction( function( lang ) { // lang value must be a valid identifier if ( !ridentifier.test(lang || "") ) { Sizzle.error( "unsupported lang: " + lang ); } lang = lang.replace( runescape, funescape ).toLowerCase(); return function( elem ) { var elemLang; do { if ( (elemLang = documentIsHTML ? elem.lang : elem.getAttribute("xml:lang") || elem.getAttribute("lang")) ) { elemLang = elemLang.toLowerCase(); return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; } } while ( (elem = elem.parentNode) && elem.nodeType === 1 ); return false; }; }), // Miscellaneous "target": function( elem ) { var hash = window.location && window.location.hash; return hash && hash.slice( 1 ) === elem.id; }, "root": function( elem ) { return elem === docElem; }, "focus": function( elem ) { return elem === document.activeElement && (!document.hasFocus || document.hasFocus()) && !!(elem.type || elem.href || ~elem.tabIndex); }, // Boolean properties "enabled": createDisabledPseudo( false ), "disabled": createDisabledPseudo( true ), "checked": function( elem ) { // In CSS3, :checked should return both checked and selected elements // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked var nodeName = elem.nodeName.toLowerCase(); return (nodeName === "input" && !!elem.checked) || (nodeName === "option" && !!elem.selected); }, "selected": function( elem ) { // Accessing this property makes selected-by-default // options in Safari work properly if ( elem.parentNode ) { elem.parentNode.selectedIndex; } return elem.selected === true; }, // Contents "empty": function( elem ) { // http://www.w3.org/TR/selectors/#empty-pseudo // :empty is negated by element (1) or content nodes (text: 3; cdata: 4; entity ref: 5), // but not by others (comment: 8; processing instruction: 7; etc.) 
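// (e.g., <p><!-- comment --></p> is still :empty, while <p>text</p> is not)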
// nodeType < 6 works because attributes (2) do not appear as children for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { if ( elem.nodeType < 6 ) { return false; } } return true; }, "parent": function( elem ) { return !Expr.pseudos["empty"]( elem ); }, // Element/input types "header": function( elem ) { return rheader.test( elem.nodeName ); }, "input": function( elem ) { return rinputs.test( elem.nodeName ); }, "button": function( elem ) { var name = elem.nodeName.toLowerCase(); return name === "input" && elem.type === "button" || name === "button"; }, "text": function( elem ) { var attr; return elem.nodeName.toLowerCase() === "input" && elem.type === "text" && // Support: IE<8 // New HTML5 attribute values (e.g., "search") appear with elem.type === "text" ( (attr = elem.getAttribute("type")) == null || attr.toLowerCase() === "text" ); }, // Position-in-collection "first": createPositionalPseudo(function() { return [ 0 ]; }), "last": createPositionalPseudo(function( matchIndexes, length ) { return [ length - 1 ]; }), "eq": createPositionalPseudo(function( matchIndexes, length, argument ) { return [ argument < 0 ? argument + length : argument ]; }), "even": createPositionalPseudo(function( matchIndexes, length ) { var i = 0; for ( ; i < length; i += 2 ) { matchIndexes.push( i ); } return matchIndexes; }), "odd": createPositionalPseudo(function( matchIndexes, length ) { var i = 1; for ( ; i < length; i += 2 ) { matchIndexes.push( i ); } return matchIndexes; }), "lt": createPositionalPseudo(function( matchIndexes, length, argument ) { var i = argument < 0 ? argument + length : argument > length ? length : argument; for ( ; --i >= 0; ) { matchIndexes.push( i ); } return matchIndexes; }), "gt": createPositionalPseudo(function( matchIndexes, length, argument ) { var i = argument < 0 ? argument + length : argument; for ( ; ++i < length; ) { matchIndexes.push( i ); } return matchIndexes; }) } }; Expr.pseudos["nth"] = Expr.pseudos["eq"]; // Add button/input type pseudos for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { Expr.pseudos[ i ] = createInputPseudo( i ); } for ( i in { submit: true, reset: true } ) { Expr.pseudos[ i ] = createButtonPseudo( i ); } // Easy API for creating new setFilters function setFilters() {} setFilters.prototype = Expr.filters = Expr.pseudos; Expr.setFilters = new setFilters(); tokenize = Sizzle.tokenize = function( selector, parseOnly ) { var matched, match, tokens, type, soFar, groups, preFilters, cached = tokenCache[ selector + " " ]; if ( cached ) { return parseOnly ? 
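// A cached tokenization is always valid, so parseOnly can report zero invalid excess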
0 : cached.slice( 0 ); } soFar = selector; groups = []; preFilters = Expr.preFilter; while ( soFar ) { // Comma and first run if ( !matched || (match = rcomma.exec( soFar )) ) { if ( match ) { // Don't consume trailing commas as valid soFar = soFar.slice( match[0].length ) || soFar; } groups.push( (tokens = []) ); } matched = false; // Combinators if ( (match = rcombinators.exec( soFar )) ) { matched = match.shift(); tokens.push({ value: matched, // Cast descendant combinators to space type: match[0].replace( rtrim, " " ) }); soFar = soFar.slice( matched.length ); } // Filters for ( type in Expr.filter ) { if ( (match = matchExpr[ type ].exec( soFar )) && (!preFilters[ type ] || (match = preFilters[ type ]( match ))) ) { matched = match.shift(); tokens.push({ value: matched, type: type, matches: match }); soFar = soFar.slice( matched.length ); } } if ( !matched ) { break; } } // Return the length of the invalid excess // if we're just parsing // Otherwise, throw an error or return tokens return parseOnly ? soFar.length : soFar ? Sizzle.error( selector ) : // Cache the tokens tokenCache( selector, groups ).slice( 0 ); }; function toSelector( tokens ) { var i = 0, len = tokens.length, selector = ""; for ( ; i < len; i++ ) { selector += tokens[i].value; } return selector; } function addCombinator( matcher, combinator, base ) { var dir = combinator.dir, skip = combinator.next, key = skip || dir, checkNonElements = base && key === "parentNode", doneName = done++; return combinator.first ? // Check against closest ancestor/preceding element function( elem, context, xml ) { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { return matcher( elem, context, xml ); } } return false; } : // Check against all ancestor/preceding elements function( elem, context, xml ) { var oldCache, uniqueCache, outerCache, newCache = [ dirruns, doneName ]; // We can't set arbitrary data on XML nodes, so they don't benefit from combinator caching if ( xml ) { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { if ( matcher( elem, context, xml ) ) { return true; } } } } else { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { outerCache = elem[ expando ] || (elem[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ elem.uniqueID ] || (outerCache[ elem.uniqueID ] = {}); if ( skip && skip === elem.nodeName.toLowerCase() ) { elem = elem[ dir ] || elem; } else if ( (oldCache = uniqueCache[ key ]) && oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { // Assign to newCache so results back-propagate to previous elements return (newCache[ 2 ] = oldCache[ 2 ]); } else { // Reuse newcache so results back-propagate to previous elements uniqueCache[ key ] = newCache; // A match means we're done; a fail means we have to keep checking if ( (newCache[ 2 ] = matcher( elem, context, xml )) ) { return true; } } } } } return false; }; } function elementMatcher( matchers ) { return matchers.length > 1 ? 
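// Combine several matchers into one that requires every matcher to pass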
function( elem, context, xml ) { var i = matchers.length; while ( i-- ) { if ( !matchers[i]( elem, context, xml ) ) { return false; } } return true; } : matchers[0]; } function multipleContexts( selector, contexts, results ) { var i = 0, len = contexts.length; for ( ; i < len; i++ ) { Sizzle( selector, contexts[i], results ); } return results; } function condense( unmatched, map, filter, context, xml ) { var elem, newUnmatched = [], i = 0, len = unmatched.length, mapped = map != null; for ( ; i < len; i++ ) { if ( (elem = unmatched[i]) ) { if ( !filter || filter( elem, context, xml ) ) { newUnmatched.push( elem ); if ( mapped ) { map.push( i ); } } } } return newUnmatched; } function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { if ( postFilter && !postFilter[ expando ] ) { postFilter = setMatcher( postFilter ); } if ( postFinder && !postFinder[ expando ] ) { postFinder = setMatcher( postFinder, postSelector ); } return markFunction(function( seed, results, context, xml ) { var temp, i, elem, preMap = [], postMap = [], preexisting = results.length, // Get initial elements from seed or context elems = seed || multipleContexts( selector || "*", context.nodeType ? [ context ] : context, [] ), // Prefilter to get matcher input, preserving a map for seed-results synchronization matcherIn = preFilter && ( seed || !selector ) ? condense( elems, preMap, preFilter, context, xml ) : elems, matcherOut = matcher ? // If we have a postFinder, or filtered seed, or non-seed postFilter or preexisting results, postFinder || ( seed ? preFilter : preexisting || postFilter ) ? // ...intermediate processing is necessary [] : // ...otherwise use results directly results : matcherIn; // Find primary matches if ( matcher ) { matcher( matcherIn, matcherOut, context, xml ); } // Apply postFilter if ( postFilter ) { temp = condense( matcherOut, postMap ); postFilter( temp, [], context, xml ); // Un-match failing elements by moving them back to matcherIn i = temp.length; while ( i-- ) { if ( (elem = temp[i]) ) { matcherOut[ postMap[i] ] = !(matcherIn[ postMap[i] ] = elem); } } } if ( seed ) { if ( postFinder || preFilter ) { if ( postFinder ) { // Get the final matcherOut by condensing this intermediate into postFinder contexts temp = []; i = matcherOut.length; while ( i-- ) { if ( (elem = matcherOut[i]) ) { // Restore matcherIn since elem is not yet a final match temp.push( (matcherIn[i] = elem) ); } } postFinder( null, (matcherOut = []), temp, xml ); } // Move matched elements from seed to results to keep them synchronized i = matcherOut.length; while ( i-- ) { if ( (elem = matcherOut[i]) && (temp = postFinder ? indexOf( seed, elem ) : preMap[i]) > -1 ) { seed[temp] = !(results[temp] = elem); } } } // Add elements to results, through postFinder if defined } else { matcherOut = condense( matcherOut === results ? matcherOut.splice( preexisting, matcherOut.length ) : matcherOut ); if ( postFinder ) { postFinder( null, results, matcherOut, xml ); } else { push.apply( results, matcherOut ); } } }); } function matcherFromTokens( tokens ) { var checkContext, matcher, j, len = tokens.length, leadingRelative = Expr.relative[ tokens[0].type ], implicitRelative = leadingRelative || Expr.relative[" "], i = leadingRelative ? 
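// Skip a leading combinator token; otherwise start from the first token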
1 : 0,

		// The foundational matcher ensures that elements are reachable from top-level context(s)
		matchContext = addCombinator( function( elem ) {
			return elem === checkContext;
		}, implicitRelative, true ),
		matchAnyContext = addCombinator( function( elem ) {
			return indexOf( checkContext, elem ) > -1;
		}, implicitRelative, true ),
		matchers = [ function( elem, context, xml ) {
			var ret = ( !leadingRelative && ( xml || context !== outermostContext ) ) || (
				(checkContext = context).nodeType ?
					matchContext( elem, context, xml ) :
					matchAnyContext( elem, context, xml ) );

			// Avoid hanging onto element (issue #299)
			checkContext = null;
			return ret;
		} ];

	for ( ; i < len; i++ ) {
		if ( (matcher = Expr.relative[ tokens[i].type ]) ) {
			matchers = [ addCombinator(elementMatcher( matchers ), matcher) ];
		} else {
			matcher = Expr.filter[ tokens[i].type ].apply( null, tokens[i].matches );

			// Return special upon seeing a positional matcher
			if ( matcher[ expando ] ) {
				// Find the next relative operator (if any) for proper handling
				j = ++i;
				for ( ; j < len; j++ ) {
					if ( Expr.relative[ tokens[j].type ] ) {
						break;
					}
				}
				return setMatcher(
					i > 1 && elementMatcher( matchers ),
					i > 1 && toSelector(
						// If the preceding token was a descendant combinator, insert an implicit any-element `*`
						tokens.slice( 0, i - 1 ).concat({ value: tokens[ i - 2 ].type === " " ? "*" : "" })
					).replace( rtrim, "$1" ),
					matcher,
					i < j && matcherFromTokens( tokens.slice( i, j ) ),
					j < len && matcherFromTokens( (tokens = tokens.slice( j )) ),
					j < len && toSelector( tokens )
				);
			}
			matchers.push( matcher );
		}
	}

	return elementMatcher( matchers );
}

function matcherFromGroupMatchers( elementMatchers, setMatchers ) {
	var bySet = setMatchers.length > 0,
		byElement = elementMatchers.length > 0,
		superMatcher = function( seed, context, xml, results, outermost ) {
			var elem, j, matcher,
				matchedCount = 0,
				i = "0",
				unmatched = seed && [],
				setMatched = [],
				contextBackup = outermostContext,

				// We must always have either seed elements or outermost context
				elems = seed || byElement && Expr.find["TAG"]( "*", outermost ),

				// Use integer dirruns iff this is the outermost matcher
				dirrunsUnique = (dirruns += contextBackup == null ? 1 : Math.random() || 0.1),
				len = elems.length;

			if ( outermost ) {
				outermostContext = context === document || context || outermost;
			}

			// Add elements passing elementMatchers directly to results
			// Support: IE<9, Safari
			// Tolerate NodeList properties (IE: "length"; Safari: <number>) matching elements by id
			for ( ; i !== len && (elem = elems[i]) != null; i++ ) {
				if ( byElement && elem ) {
					j = 0;
					if ( !context && elem.ownerDocument !== document ) {
						setDocument( elem );
						xml = !documentIsHTML;
					}
					while ( (matcher = elementMatchers[j++]) ) {
						if ( matcher( elem, context || document, xml) ) {
							results.push( elem );
							break;
						}
					}
					if ( outermost ) {
						dirruns = dirrunsUnique;
					}
				}

				// Track unmatched elements for set filters
				if ( bySet ) {
					// They will have gone through all possible matchers
					if ( (elem = !matcher && elem) ) {
						matchedCount--;
					}

					// Lengthen the array for every element, matched or not
					if ( seed ) {
						unmatched.push( elem );
					}
				}
			}

			// `i` is now the count of elements visited above, and adding it to `matchedCount`
			// makes the latter nonnegative.
			matchedCount += i;

			// Apply set filters to unmatched elements
			// NOTE: This can be skipped if there are no unmatched elements (i.e., `matchedCount`
			// equals `i`), unless we didn't visit _any_ elements in the above loop because we have
			// no element matchers and no seed.
// Incrementing an initially-string "0" `i` allows `i` to remain a string only in that // case, which will result in a "00" `matchedCount` that differs from `i` but is also // numerically zero. if ( bySet && i !== matchedCount ) { j = 0; while ( (matcher = setMatchers[j++]) ) { matcher( unmatched, setMatched, context, xml ); } if ( seed ) { // Reintegrate element matches to eliminate the need for sorting if ( matchedCount > 0 ) { while ( i-- ) { if ( !(unmatched[i] || setMatched[i]) ) { setMatched[i] = pop.call( results ); } } } // Discard index placeholder values to get only actual matches setMatched = condense( setMatched ); } // Add matches to results push.apply( results, setMatched ); // Seedless set matches succeeding multiple successful matchers stipulate sorting if ( outermost && !seed && setMatched.length > 0 && ( matchedCount + setMatchers.length ) > 1 ) { Sizzle.uniqueSort( results ); } } // Override manipulation of globals by nested matchers if ( outermost ) { dirruns = dirrunsUnique; outermostContext = contextBackup; } return unmatched; }; return bySet ? markFunction( superMatcher ) : superMatcher; } compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { var i, setMatchers = [], elementMatchers = [], cached = compilerCache[ selector + " " ]; if ( !cached ) { // Generate a function of recursive functions that can be used to check each element if ( !match ) { match = tokenize( selector ); } i = match.length; while ( i-- ) { cached = matcherFromTokens( match[i] ); if ( cached[ expando ] ) { setMatchers.push( cached ); } else { elementMatchers.push( cached ); } } // Cache the compiled function cached = compilerCache( selector, matcherFromGroupMatchers( elementMatchers, setMatchers ) ); // Save selector and tokenization cached.selector = selector; } return cached; }; /** * A low-level selection function that works with Sizzle's compiled * selector functions * @param {String|Function} selector A selector or a pre-compiled * selector function built with Sizzle.compile * @param {Element} context * @param {Array} [results] * @param {Array} [seed] A set of elements to match against */ select = Sizzle.select = function( selector, context, results, seed ) { var i, tokens, token, type, find, compiled = typeof selector === "function" && selector, match = !seed && tokenize( (selector = compiled.selector || selector) ); results = results || []; // Try to minimize operations if there is only one selector in the list and no seed // (the latter of which guarantees us context) if ( match.length === 1 ) { // Reduce context if the leading compound selector is an ID tokens = match[0] = match[0].slice( 0 ); if ( tokens.length > 2 && (token = tokens[0]).type === "ID" && context.nodeType === 9 && documentIsHTML && Expr.relative[ tokens[1].type ] ) { context = ( Expr.find["ID"]( token.matches[0].replace(runescape, funescape), context ) || [] )[0]; if ( !context ) { return results; // Precompiled matchers will still verify ancestry, so step up a level } else if ( compiled ) { context = context.parentNode; } selector = selector.slice( tokens.shift().value.length ); } // Fetch a seed set for right-to-left matching i = matchExpr["needsContext"].test( selector ) ? 
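// Positional selectors need the full context, so skip the seed-harvesting loop below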
0 : tokens.length;
		while ( i-- ) {
			token = tokens[i];

			// Abort if we hit a combinator
			if ( Expr.relative[ (type = token.type) ] ) {
				break;
			}
			if ( (find = Expr.find[ type ]) ) {
				// Search, expanding context for leading sibling combinators
				if ( (seed = find(
					token.matches[0].replace( runescape, funescape ),
					rsibling.test( tokens[0].type ) && testContext( context.parentNode ) || context
				)) ) {

					// If seed is empty or no tokens remain, we can return early
					tokens.splice( i, 1 );
					selector = seed.length && toSelector( tokens );
					if ( !selector ) {
						push.apply( results, seed );
						return results;
					}

					break;
				}
			}
		}
	}

	// Compile and execute a filtering function if one is not provided
	// Provide `match` to avoid retokenization if we modified the selector above
	( compiled || compile( selector, match ) )(
		seed,
		context,
		!documentIsHTML,
		results,
		!context || rsibling.test( selector ) && testContext( context.parentNode ) || context
	);
	return results;
};

// One-time assignments

// Sort stability
support.sortStable = expando.split("").sort( sortOrder ).join("") === expando;

// Support: Chrome 14-35+
// Always assume duplicates if they aren't passed to the comparison function
support.detectDuplicates = !!hasDuplicate;

// Initialize against the default document
setDocument();

// Support: Webkit<537.32 - Safari 6.0.3/Chrome 25 (fixed in Chrome 27)
// Detached nodes confoundingly follow *each other*
support.sortDetached = assert(function( el ) {
	// Should return 1, but returns 4 (following)
	return el.compareDocumentPosition( document.createElement("fieldset") ) & 1;
});

// Support: IE<8
// Prevent attribute/property "interpolation"
// https://msdn.microsoft.com/en-us/library/ms536429%28VS.85%29.aspx
if ( !assert(function( el ) {
	el.innerHTML = "<a href='#'></a>";
	return el.firstChild.getAttribute("href") === "#" ;
}) ) {
	addHandle( "type|href|height|width", function( elem, name, isXML ) {
		if ( !isXML ) {
			return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 );
		}
	});
}

// Support: IE<9
// Use defaultValue in place of getAttribute("value")
if ( !support.attributes || !assert(function( el ) {
	el.innerHTML = "<input/>";
	el.firstChild.setAttribute( "value", "" );
	return el.firstChild.getAttribute( "value" ) === "";
}) ) {
	addHandle( "value", function( elem, name, isXML ) {
		if ( !isXML && elem.nodeName.toLowerCase() === "input" ) {
			return elem.defaultValue;
		}
	});
}

// Support: IE<9
// Use getAttributeNode to fetch booleans when getAttribute lies
if ( !assert(function( el ) {
	return el.getAttribute("disabled") == null;
}) ) {
	addHandle( booleans, function( elem, name, isXML ) {
		var val;
		if ( !isXML ) {
			return elem[ name ] === true ? name.toLowerCase() :
					(val = elem.getAttributeNode( name )) && val.specified ?
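// Trust the attribute node's value for boolean attributes here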
val.value : null; } }); } return Sizzle; })( window ); jQuery.find = Sizzle; jQuery.expr = Sizzle.selectors; // Deprecated jQuery.expr[ ":" ] = jQuery.expr.pseudos; jQuery.uniqueSort = jQuery.unique = Sizzle.uniqueSort; jQuery.text = Sizzle.getText; jQuery.isXMLDoc = Sizzle.isXML; jQuery.contains = Sizzle.contains; jQuery.escapeSelector = Sizzle.escape; var dir = function( elem, dir, until ) { var matched = [], truncate = until !== undefined; while ( ( elem = elem[ dir ] ) && elem.nodeType !== 9 ) { if ( elem.nodeType === 1 ) { if ( truncate && jQuery( elem ).is( until ) ) { break; } matched.push( elem ); } } return matched; }; var siblings = function( n, elem ) { var matched = []; for ( ; n; n = n.nextSibling ) { if ( n.nodeType === 1 && n !== elem ) { matched.push( n ); } } return matched; }; var rneedsContext = jQuery.expr.match.needsContext; function nodeName( elem, name ) { return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); }; var rsingleTag = ( /^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i ); // Implement the identical functionality for filter and not function winnow( elements, qualifier, not ) { if ( isFunction( qualifier ) ) { return jQuery.grep( elements, function( elem, i ) { return !!qualifier.call( elem, i, elem ) !== not; } ); } // Single element if ( qualifier.nodeType ) { return jQuery.grep( elements, function( elem ) { return ( elem === qualifier ) !== not; } ); } // Arraylike of elements (jQuery, arguments, Array) if ( typeof qualifier !== "string" ) { return jQuery.grep( elements, function( elem ) { return ( indexOf.call( qualifier, elem ) > -1 ) !== not; } ); } // Filtered directly for both simple and complex selectors return jQuery.filter( qualifier, elements, not ); } jQuery.filter = function( expr, elems, not ) { var elem = elems[ 0 ]; if ( not ) { expr = ":not(" + expr + ")"; } if ( elems.length === 1 && elem.nodeType === 1 ) { return jQuery.find.matchesSelector( elem, expr ) ? [ elem ] : []; } return jQuery.find.matches( expr, jQuery.grep( elems, function( elem ) { return elem.nodeType === 1; } ) ); }; jQuery.fn.extend( { find: function( selector ) { var i, ret, len = this.length, self = this; if ( typeof selector !== "string" ) { return this.pushStack( jQuery( selector ).filter( function() { for ( i = 0; i < len; i++ ) { if ( jQuery.contains( self[ i ], this ) ) { return true; } } } ) ); } ret = this.pushStack( [] ); for ( i = 0; i < len; i++ ) { jQuery.find( selector, self[ i ], ret ); } return len > 1 ? jQuery.uniqueSort( ret ) : ret; }, filter: function( selector ) { return this.pushStack( winnow( this, selector || [], false ) ); }, not: function( selector ) { return this.pushStack( winnow( this, selector || [], true ) ); }, is: function( selector ) { return !!winnow( this, // If this is a positional/relative selector, check membership in the returned set // so $("p:first").is("p:last") won't return true for a doc with two "p". typeof selector === "string" && rneedsContext.test( selector ) ? 
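// Resolve a positional selector against the document first, then test membership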
jQuery( selector ) :
				selector || [],
			false
		).length;
	}
} );

// Initialize a jQuery object
// A central reference to the root jQuery(document)
var rootjQuery,

	// A simple way to check for HTML strings
	// Prioritize #id over <tag> to avoid XSS via location.hash (#9521)
	// Strict HTML recognition (#11290: must start with <)
	// Shortcut simple #id case for speed
	rquickExpr = /^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/,

	init = jQuery.fn.init = function( selector, context, root ) {
		var match, elem;

		// HANDLE: $(""), $(null), $(undefined), $(false)
		if ( !selector ) {
			return this;
		}

		// Method init() accepts an alternate rootjQuery
		// so migrate can support jQuery.sub (gh-2101)
		root = root || rootjQuery;

		// Handle HTML strings
		if ( typeof selector === "string" ) {
			if ( selector[ 0 ] === "<" &&
				selector[ selector.length - 1 ] === ">" &&
				selector.length >= 3 ) {

				// Assume that strings that start and end with <> are HTML and skip the regex check
				match = [ null, selector, null ];

			} else {
				match = rquickExpr.exec( selector );
			}

			// Match html or make sure no context is specified for #id
			if ( match && ( match[ 1 ] || !context ) ) {

				// HANDLE: $(html) -> $(array)
				if ( match[ 1 ] ) {
					context = context instanceof jQuery ? context[ 0 ] : context;

					// Option to run scripts is true for back-compat
					// Intentionally let the error be thrown if parseHTML is not present
					jQuery.merge( this, jQuery.parseHTML(
						match[ 1 ],
						context && context.nodeType ? context.ownerDocument || context : document,
						true
					) );

					// HANDLE: $(html, props)
					if ( rsingleTag.test( match[ 1 ] ) && jQuery.isPlainObject( context ) ) {
						for ( match in context ) {

							// Properties of context are called as methods if possible
							if ( isFunction( this[ match ] ) ) {
								this[ match ]( context[ match ] );

							// ...and otherwise set as attributes
							} else {
								this.attr( match, context[ match ] );
							}
						}
					}

					return this;

				// HANDLE: $(#id)
				} else {
					elem = document.getElementById( match[ 2 ] );

					if ( elem ) {

						// Inject the element directly into the jQuery object
						this[ 0 ] = elem;
						this.length = 1;
					}
					return this;
				}

			// HANDLE: $(expr, $(...))
			} else if ( !context || context.jquery ) {
				return ( context || root ).find( selector );

			// HANDLE: $(expr, context)
			// (which is just equivalent to: $(context).find(expr))
			} else {
				return this.constructor( context ).find( selector );
			}

		// HANDLE: $(DOMElement)
		} else if ( selector.nodeType ) {
			this[ 0 ] = selector;
			this.length = 1;
			return this;

		// HANDLE: $(function)
		// Shortcut for document ready
		} else if ( isFunction( selector ) ) {
			return root.ready !== undefined ?
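// Queue the function on the ready list when the root object supports it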
root.ready( selector ) : // Execute immediately if ready is not present selector( jQuery ); } return jQuery.makeArray( selector, this ); }; // Give the init function the jQuery prototype for later instantiation init.prototype = jQuery.fn; // Initialize central reference rootjQuery = jQuery( document ); var rparentsprev = /^(?:parents|prev(?:Until|All))/, // Methods guaranteed to produce a unique set when starting from a unique set guaranteedUnique = { children: true, contents: true, next: true, prev: true }; jQuery.fn.extend( { has: function( target ) { var targets = jQuery( target, this ), l = targets.length; return this.filter( function() { var i = 0; for ( ; i < l; i++ ) { if ( jQuery.contains( this, targets[ i ] ) ) { return true; } } } ); }, closest: function( selectors, context ) { var cur, i = 0, l = this.length, matched = [], targets = typeof selectors !== "string" && jQuery( selectors ); // Positional selectors never match, since there's no _selection_ context if ( !rneedsContext.test( selectors ) ) { for ( ; i < l; i++ ) { for ( cur = this[ i ]; cur && cur !== context; cur = cur.parentNode ) { // Always skip document fragments if ( cur.nodeType < 11 && ( targets ? targets.index( cur ) > -1 : // Don't pass non-elements to Sizzle cur.nodeType === 1 && jQuery.find.matchesSelector( cur, selectors ) ) ) { matched.push( cur ); break; } } } } return this.pushStack( matched.length > 1 ? jQuery.uniqueSort( matched ) : matched ); }, // Determine the position of an element within the set index: function( elem ) { // No argument, return index in parent if ( !elem ) { return ( this[ 0 ] && this[ 0 ].parentNode ) ? this.first().prevAll().length : -1; } // Index in selector if ( typeof elem === "string" ) { return indexOf.call( jQuery( elem ), this[ 0 ] ); } // Locate the position of the desired element return indexOf.call( this, // If it receives a jQuery object, the first element is used elem.jquery ? elem[ 0 ] : elem ); }, add: function( selector, context ) { return this.pushStack( jQuery.uniqueSort( jQuery.merge( this.get(), jQuery( selector, context ) ) ) ); }, addBack: function( selector ) { return this.add( selector == null ? this.prevObject : this.prevObject.filter( selector ) ); } } ); function sibling( cur, dir ) { while ( ( cur = cur[ dir ] ) && cur.nodeType !== 1 ) {} return cur; } jQuery.each( { parent: function( elem ) { var parent = elem.parentNode; return parent && parent.nodeType !== 11 ? parent : null; }, parents: function( elem ) { return dir( elem, "parentNode" ); }, parentsUntil: function( elem, i, until ) { return dir( elem, "parentNode", until ); }, next: function( elem ) { return sibling( elem, "nextSibling" ); }, prev: function( elem ) { return sibling( elem, "previousSibling" ); }, nextAll: function( elem ) { return dir( elem, "nextSibling" ); }, prevAll: function( elem ) { return dir( elem, "previousSibling" ); }, nextUntil: function( elem, i, until ) { return dir( elem, "nextSibling", until ); }, prevUntil: function( elem, i, until ) { return dir( elem, "previousSibling", until ); }, siblings: function( elem ) { return siblings( ( elem.parentNode || {} ).firstChild, elem ); }, children: function( elem ) { return siblings( elem.firstChild ); }, contents: function( elem ) { if ( typeof elem.contentDocument !== "undefined" ) { return elem.contentDocument; } // Support: IE 9 - 11 only, iOS 7 only, Android Browser <=4.3 only // Treat the template element as a regular one in browsers that // don't support it. 
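// (i.e., return the children of the template's content fragment rather than nothing)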
if ( nodeName( elem, "template" ) ) { elem = elem.content || elem; } return jQuery.merge( [], elem.childNodes ); } }, function( name, fn ) { jQuery.fn[ name ] = function( until, selector ) { var matched = jQuery.map( this, fn, until ); if ( name.slice( -5 ) !== "Until" ) { selector = until; } if ( selector && typeof selector === "string" ) { matched = jQuery.filter( selector, matched ); } if ( this.length > 1 ) { // Remove duplicates if ( !guaranteedUnique[ name ] ) { jQuery.uniqueSort( matched ); } // Reverse order for parents* and prev-derivatives if ( rparentsprev.test( name ) ) { matched.reverse(); } } return this.pushStack( matched ); }; } ); var rnothtmlwhite = ( /[^\x20\t\r\n\f]+/g ); // Convert String-formatted options into Object-formatted ones function createOptions( options ) { var object = {}; jQuery.each( options.match( rnothtmlwhite ) || [], function( _, flag ) { object[ flag ] = true; } ); return object; } /* * Create a callback list using the following parameters: * * options: an optional list of space-separated options that will change how * the callback list behaves or a more traditional option object * * By default a callback list will act like an event callback list and can be * "fired" multiple times. * * Possible options: * * once: will ensure the callback list can only be fired once (like a Deferred) * * memory: will keep track of previous values and will call any callback added * after the list has been fired right away with the latest "memorized" * values (like a Deferred) * * unique: will ensure a callback can only be added once (no duplicate in the list) * * stopOnFalse: interrupt callings when a callback returns false * */ jQuery.Callbacks = function( options ) { // Convert options from String-formatted to Object-formatted if needed // (we check in cache first) options = typeof options === "string" ? 
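// e.g., "once memory" becomes { once: true, memory: true }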
createOptions( options ) : jQuery.extend( {}, options ); var // Flag to know if list is currently firing firing, // Last fire value for non-forgettable lists memory, // Flag to know if list was already fired fired, // Flag to prevent firing locked, // Actual callback list list = [], // Queue of execution data for repeatable lists queue = [], // Index of currently firing callback (modified by add/remove as needed) firingIndex = -1, // Fire callbacks fire = function() { // Enforce single-firing locked = locked || options.once; // Execute callbacks for all pending executions, // respecting firingIndex overrides and runtime changes fired = firing = true; for ( ; queue.length; firingIndex = -1 ) { memory = queue.shift(); while ( ++firingIndex < list.length ) { // Run callback and check for early termination if ( list[ firingIndex ].apply( memory[ 0 ], memory[ 1 ] ) === false && options.stopOnFalse ) { // Jump to end and forget the data so .add doesn't re-fire firingIndex = list.length; memory = false; } } } // Forget the data if we're done with it if ( !options.memory ) { memory = false; } firing = false; // Clean up if we're done firing for good if ( locked ) { // Keep an empty list if we have data for future add calls if ( memory ) { list = []; // Otherwise, this object is spent } else { list = ""; } } }, // Actual Callbacks object self = { // Add a callback or a collection of callbacks to the list add: function() { if ( list ) { // If we have memory from a past run, we should fire after adding if ( memory && !firing ) { firingIndex = list.length - 1; queue.push( memory ); } ( function add( args ) { jQuery.each( args, function( _, arg ) { if ( isFunction( arg ) ) { if ( !options.unique || !self.has( arg ) ) { list.push( arg ); } } else if ( arg && arg.length && toType( arg ) !== "string" ) { // Inspect recursively add( arg ); } } ); } )( arguments ); if ( memory && !firing ) { fire(); } } return this; }, // Remove a callback from the list remove: function() { jQuery.each( arguments, function( _, arg ) { var index; while ( ( index = jQuery.inArray( arg, list, index ) ) > -1 ) { list.splice( index, 1 ); // Handle firing indexes if ( index <= firingIndex ) { firingIndex--; } } } ); return this; }, // Check if a given callback is in the list. // If no argument is given, return whether or not list has callbacks attached. has: function( fn ) { return fn ? jQuery.inArray( fn, list ) > -1 : list.length > 0; }, // Remove all callbacks from the list empty: function() { if ( list ) { list = []; } return this; }, // Disable .fire and .add // Abort any current/pending executions // Clear all callbacks and values disable: function() { locked = queue = []; list = memory = ""; return this; }, disabled: function() { return !list; }, // Disable .fire // Also disable .add unless we have memory (since it would have no effect) // Abort any pending executions lock: function() { locked = queue = []; if ( !memory && !firing ) { list = memory = ""; } return this; }, locked: function() { return !!locked; }, // Call all callbacks with the given context and arguments fireWith: function( context, args ) { if ( !locked ) { args = args || []; args = [ context, args.slice ? 
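// Snapshot array-like arguments; pass anything else through as-is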
args.slice() : args ]; queue.push( args ); if ( !firing ) { fire(); } } return this; }, // Call all the callbacks with the given arguments fire: function() { self.fireWith( this, arguments ); return this; }, // To know if the callbacks have already been called at least once fired: function() { return !!fired; } }; return self; }; function Identity( v ) { return v; } function Thrower( ex ) { throw ex; } function adoptValue( value, resolve, reject, noValue ) { var method; try { // Check for promise aspect first to privilege synchronous behavior if ( value && isFunction( ( method = value.promise ) ) ) { method.call( value ).done( resolve ).fail( reject ); // Other thenables } else if ( value && isFunction( ( method = value.then ) ) ) { method.call( value, resolve, reject ); // Other non-thenables } else { // Control `resolve` arguments by letting Array#slice cast boolean `noValue` to integer: // * false: [ value ].slice( 0 ) => resolve( value ) // * true: [ value ].slice( 1 ) => resolve() resolve.apply( undefined, [ value ].slice( noValue ) ); } // For Promises/A+, convert exceptions into rejections // Since jQuery.when doesn't unwrap thenables, we can skip the extra checks appearing in // Deferred#then to conditionally suppress rejection. } catch ( value ) { // Support: Android 4.0 only // Strict mode functions invoked without .call/.apply get global-object context reject.apply( undefined, [ value ] ); } } jQuery.extend( { Deferred: function( func ) { var tuples = [ // action, add listener, callbacks, // ... .then handlers, argument index, [final state] [ "notify", "progress", jQuery.Callbacks( "memory" ), jQuery.Callbacks( "memory" ), 2 ], [ "resolve", "done", jQuery.Callbacks( "once memory" ), jQuery.Callbacks( "once memory" ), 0, "resolved" ], [ "reject", "fail", jQuery.Callbacks( "once memory" ), jQuery.Callbacks( "once memory" ), 1, "rejected" ] ], state = "pending", promise = { state: function() { return state; }, always: function() { deferred.done( arguments ).fail( arguments ); return this; }, "catch": function( fn ) { return promise.then( null, fn ); }, // Keep pipe for back-compat pipe: function( /* fnDone, fnFail, fnProgress */ ) { var fns = arguments; return jQuery.Deferred( function( newDefer ) { jQuery.each( tuples, function( i, tuple ) { // Map tuples (progress, done, fail) to arguments (done, fail, progress) var fn = isFunction( fns[ tuple[ 4 ] ] ) && fns[ tuple[ 4 ] ]; // deferred.progress(function() { bind to newDefer or newDefer.notify }) // deferred.done(function() { bind to newDefer or newDefer.resolve }) // deferred.fail(function() { bind to newDefer or newDefer.reject }) deferred[ tuple[ 1 ] ]( function() { var returned = fn && fn.apply( this, arguments ); if ( returned && isFunction( returned.promise ) ) { returned.promise() .progress( newDefer.notify ) .done( newDefer.resolve ) .fail( newDefer.reject ); } else { newDefer[ tuple[ 0 ] + "With" ]( this, fn ? 
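// Forward the handler's return value when a handler ran;
							// otherwise pass the original arguments straight through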
[ returned ] : arguments ); } } ); } ); fns = null; } ).promise(); }, then: function( onFulfilled, onRejected, onProgress ) { var maxDepth = 0; function resolve( depth, deferred, handler, special ) { return function() { var that = this, args = arguments, mightThrow = function() { var returned, then; // Support: Promises/A+ section 2.3.3.3.3 // https://promisesaplus.com/#point-59 // Ignore double-resolution attempts if ( depth < maxDepth ) { return; } returned = handler.apply( that, args ); // Support: Promises/A+ section 2.3.1 // https://promisesaplus.com/#point-48 if ( returned === deferred.promise() ) { throw new TypeError( "Thenable self-resolution" ); } // Support: Promises/A+ sections 2.3.3.1, 3.5 // https://promisesaplus.com/#point-54 // https://promisesaplus.com/#point-75 // Retrieve `then` only once then = returned && // Support: Promises/A+ section 2.3.4 // https://promisesaplus.com/#point-64 // Only check objects and functions for thenability ( typeof returned === "object" || typeof returned === "function" ) && returned.then; // Handle a returned thenable if ( isFunction( then ) ) { // Special processors (notify) just wait for resolution if ( special ) { then.call( returned, resolve( maxDepth, deferred, Identity, special ), resolve( maxDepth, deferred, Thrower, special ) ); // Normal processors (resolve) also hook into progress } else { // ...and disregard older resolution values maxDepth++; then.call( returned, resolve( maxDepth, deferred, Identity, special ), resolve( maxDepth, deferred, Thrower, special ), resolve( maxDepth, deferred, Identity, deferred.notifyWith ) ); } // Handle all other returned values } else { // Only substitute handlers pass on context // and multiple values (non-spec behavior) if ( handler !== Identity ) { that = undefined; args = [ returned ]; } // Process the value(s) // Default process is resolve ( special || deferred.resolveWith )( that, args ); } }, // Only normal processors (resolve) catch and reject exceptions process = special ? mightThrow : function() { try { mightThrow(); } catch ( e ) { if ( jQuery.Deferred.exceptionHook ) { jQuery.Deferred.exceptionHook( e, process.stackTrace ); } // Support: Promises/A+ section 2.3.3.3.4.1 // https://promisesaplus.com/#point-61 // Ignore post-resolution exceptions if ( depth + 1 >= maxDepth ) { // Only substitute handlers pass on context // and multiple values (non-spec behavior) if ( handler !== Thrower ) { that = undefined; args = [ e ]; } deferred.rejectWith( that, args ); } } }; // Support: Promises/A+ section 2.3.3.3.1 // https://promisesaplus.com/#point-57 // Re-resolve promises immediately to dodge false rejection from // subsequent errors if ( depth ) { process(); } else { // Call an optional hook to record the stack, in case of exception // since it's otherwise lost when execution goes async if ( jQuery.Deferred.getStackHook ) { process.stackTrace = jQuery.Deferred.getStackHook(); } window.setTimeout( process ); } }; } return jQuery.Deferred( function( newDefer ) { // progress_handlers.add( ... ) tuples[ 0 ][ 3 ].add( resolve( 0, newDefer, isFunction( onProgress ) ? onProgress : Identity, newDefer.notifyWith ) ); // fulfilled_handlers.add( ... ) tuples[ 1 ][ 3 ].add( resolve( 0, newDefer, isFunction( onFulfilled ) ? onFulfilled : Identity ) ); // rejected_handlers.add( ... ) tuples[ 2 ][ 3 ].add( resolve( 0, newDefer, isFunction( onRejected ) ? 
onRejected : Thrower ) ); } ).promise(); }, // Get a promise for this deferred // If obj is provided, the promise aspect is added to the object promise: function( obj ) { return obj != null ? jQuery.extend( obj, promise ) : promise; } }, deferred = {}; // Add list-specific methods jQuery.each( tuples, function( i, tuple ) { var list = tuple[ 2 ], stateString = tuple[ 5 ]; // promise.progress = list.add // promise.done = list.add // promise.fail = list.add promise[ tuple[ 1 ] ] = list.add; // Handle state if ( stateString ) { list.add( function() { // state = "resolved" (i.e., fulfilled) // state = "rejected" state = stateString; }, // rejected_callbacks.disable // fulfilled_callbacks.disable tuples[ 3 - i ][ 2 ].disable, // rejected_handlers.disable // fulfilled_handlers.disable tuples[ 3 - i ][ 3 ].disable, // progress_callbacks.lock tuples[ 0 ][ 2 ].lock, // progress_handlers.lock tuples[ 0 ][ 3 ].lock ); } // progress_handlers.fire // fulfilled_handlers.fire // rejected_handlers.fire list.add( tuple[ 3 ].fire ); // deferred.notify = function() { deferred.notifyWith(...) } // deferred.resolve = function() { deferred.resolveWith(...) } // deferred.reject = function() { deferred.rejectWith(...) } deferred[ tuple[ 0 ] ] = function() { deferred[ tuple[ 0 ] + "With" ]( this === deferred ? undefined : this, arguments ); return this; }; // deferred.notifyWith = list.fireWith // deferred.resolveWith = list.fireWith // deferred.rejectWith = list.fireWith deferred[ tuple[ 0 ] + "With" ] = list.fireWith; } ); // Make the deferred a promise promise.promise( deferred ); // Call given func if any if ( func ) { func.call( deferred, deferred ); } // All done! return deferred; }, // Deferred helper when: function( singleValue ) { var // count of uncompleted subordinates remaining = arguments.length, // count of unprocessed arguments i = remaining, // subordinate fulfillment data resolveContexts = Array( i ), resolveValues = slice.call( arguments ), // the master Deferred master = jQuery.Deferred(), // subordinate callback factory updateFunc = function( i ) { return function( value ) { resolveContexts[ i ] = this; resolveValues[ i ] = arguments.length > 1 ? slice.call( arguments ) : value; if ( !( --remaining ) ) { master.resolveWith( resolveContexts, resolveValues ); } }; }; // Single- and empty arguments are adopted like Promise.resolve if ( remaining <= 1 ) { adoptValue( singleValue, master.done( updateFunc( i ) ).resolve, master.reject, !remaining ); // Use .then() to unwrap secondary thenables (cf. gh-3000) if ( master.state() === "pending" || isFunction( resolveValues[ i ] && resolveValues[ i ].then ) ) { return master.then(); } } // Multiple arguments are aggregated like Promise.all array elements while ( i-- ) { adoptValue( resolveValues[ i ], updateFunc( i ), master.reject ); } return master.promise(); } } ); // These usually indicate a programmer mistake during development, // warn about them ASAP rather than swallowing them by default. 
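// [Editor's note] Before the error-name guard the comment above refers to,
// a minimal usage sketch of the Callbacks / Deferred / jQuery.when
// machinery defined above. Illustrative only -- the names below are
// hypothetical and the function is never invoked.
function exampleDeferredUsage() {

	// A "once memory" list fires a single time and replays its
	// arguments to callbacks added after the fact.
	var callbacks = jQuery.Callbacks( "once memory" );
	callbacks.fire( "payload" );
	callbacks.add( function( value ) {
		console.log( "late subscriber still sees:", value );
	} );

	// Deferreds expose the same semantics through promise methods.
	var dfd = jQuery.Deferred();
	dfd.done( function( value ) {
		console.log( "resolved with:", value );
	} );
	dfd.resolve( 42 );

	// jQuery.when aggregates multiple deferreds like Promise.all.
	jQuery.when( dfd, jQuery.Deferred().resolve( "two" ) ).then(
		function( a, b ) {
			console.log( a, b ); // 42 "two"
		}
	);
}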
var rerrorNames = /^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/; jQuery.Deferred.exceptionHook = function( error, stack ) { // Support: IE 8 - 9 only // Console exists when dev tools are open, which can happen at any time if ( window.console && window.console.warn && error && rerrorNames.test( error.name ) ) { window.console.warn( "jQuery.Deferred exception: " + error.message, error.stack, stack ); } }; jQuery.readyException = function( error ) { window.setTimeout( function() { throw error; } ); }; // The deferred used on DOM ready var readyList = jQuery.Deferred(); jQuery.fn.ready = function( fn ) { readyList .then( fn ) // Wrap jQuery.readyException in a function so that the lookup // happens at the time of error handling instead of callback // registration. .catch( function( error ) { jQuery.readyException( error ); } ); return this; }; jQuery.extend( { // Is the DOM ready to be used? Set to true once it occurs. isReady: false, // A counter to track how many items to wait for before // the ready event fires. See #6781 readyWait: 1, // Handle when the DOM is ready ready: function( wait ) { // Abort if there are pending holds or we're already ready if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) { return; } // Remember that the DOM is ready jQuery.isReady = true; // If a normal DOM Ready event fired, decrement, and wait if need be if ( wait !== true && --jQuery.readyWait > 0 ) { return; } // If there are functions bound, to execute readyList.resolveWith( document, [ jQuery ] ); } } ); jQuery.ready.then = readyList.then; // The ready event handler and self cleanup method function completed() { document.removeEventListener( "DOMContentLoaded", completed ); window.removeEventListener( "load", completed ); jQuery.ready(); } // Catch cases where $(document).ready() is called // after the browser event has already occurred. // Support: IE <=9 - 10 only // Older IE sometimes signals "interactive" too soon if ( document.readyState === "complete" || ( document.readyState !== "loading" && !document.documentElement.doScroll ) ) { // Handle it asynchronously to allow scripts the opportunity to delay ready window.setTimeout( jQuery.ready ); } else { // Use the handy event callback document.addEventListener( "DOMContentLoaded", completed ); // A fallback to window.onload, that will always work window.addEventListener( "load", completed ); } // Multifunctional method to get and set values of a collection // The value/s can optionally be executed if it's a function var access = function( elems, fn, key, value, chainable, emptyGet, raw ) { var i = 0, len = elems.length, bulk = key == null; // Sets many values if ( toType( key ) === "object" ) { chainable = true; for ( i in key ) { access( elems, fn, i, key[ i ], true, emptyGet, raw ); } // Sets one value } else if ( value !== undefined ) { chainable = true; if ( !isFunction( value ) ) { raw = true; } if ( bulk ) { // Bulk operations run against the entire set if ( raw ) { fn.call( elems, value ); fn = null; // ...except when executing function values } else { bulk = fn; fn = function( elem, key, value ) { return bulk.call( jQuery( elem ), value ); }; } } if ( fn ) { for ( ; i < len; i++ ) { fn( elems[ i ], key, raw ? value : value.call( elems[ i ], i, fn( elems[ i ], key ) ) ); } } } if ( chainable ) { return elems; } // Gets if ( bulk ) { return fn.call( elems ); } return len ? 
fn( elems[ 0 ], key ) : emptyGet; }; // Matches dashed string for camelizing var rmsPrefix = /^-ms-/, rdashAlpha = /-([a-z])/g; // Used by camelCase as callback to replace() function fcamelCase( all, letter ) { return letter.toUpperCase(); } // Convert dashed to camelCase; used by the css and data modules // Support: IE <=9 - 11, Edge 12 - 15 // Microsoft forgot to hump their vendor prefix (#9572) function camelCase( string ) { return string.replace( rmsPrefix, "ms-" ).replace( rdashAlpha, fcamelCase ); } var acceptData = function( owner ) { // Accepts only: // - Node // - Node.ELEMENT_NODE // - Node.DOCUMENT_NODE // - Object // - Any return owner.nodeType === 1 || owner.nodeType === 9 || !( +owner.nodeType ); }; function Data() { this.expando = jQuery.expando + Data.uid++; } Data.uid = 1; Data.prototype = { cache: function( owner ) { // Check if the owner object already has a cache var value = owner[ this.expando ]; // If not, create one if ( !value ) { value = {}; // We can accept data for non-element nodes in modern browsers, // but we should not, see #8335. // Always return an empty object. if ( acceptData( owner ) ) { // If it is a node unlikely to be stringify-ed or looped over // use plain assignment if ( owner.nodeType ) { owner[ this.expando ] = value; // Otherwise secure it in a non-enumerable property // configurable must be true to allow the property to be // deleted when data is removed } else { Object.defineProperty( owner, this.expando, { value: value, configurable: true } ); } } } return value; }, set: function( owner, data, value ) { var prop, cache = this.cache( owner ); // Handle: [ owner, key, value ] args // Always use camelCase key (gh-2257) if ( typeof data === "string" ) { cache[ camelCase( data ) ] = value; // Handle: [ owner, { properties } ] args } else { // Copy the properties one-by-one to the cache object for ( prop in data ) { cache[ camelCase( prop ) ] = data[ prop ]; } } return cache; }, get: function( owner, key ) { return key === undefined ? this.cache( owner ) : // Always use camelCase key (gh-2257) owner[ this.expando ] && owner[ this.expando ][ camelCase( key ) ]; }, access: function( owner, key, value ) { // In cases where either: // // 1. No key was specified // 2. A string key was specified, but no value provided // // Take the "read" path and allow the get method to determine // which value to return, respectively either: // // 1. The entire cache object // 2. The data stored at the key // if ( key === undefined || ( ( key && typeof key === "string" ) && value === undefined ) ) { return this.get( owner, key ); } // When the key is not a string, or both a key and value // are specified, set or extend (existing objects) with either: // // 1. An object of properties // 2. A key and value // this.set( owner, key, value ); // Since the "set" path can have two possible entry points // return the expected data based on which path was taken[*] return value !== undefined ? value : key; }, remove: function( owner, key ) { var i, cache = owner[ this.expando ]; if ( cache === undefined ) { return; } if ( key !== undefined ) { // Support array or space separated string of keys if ( Array.isArray( key ) ) { // If key is an array of keys... // We always set camelCase keys, so remove that. key = key.map( camelCase ); } else { key = camelCase( key ); // If a key with the spaces exists, use it. // Otherwise, create an array by matching non-whitespace key = key in cache ? 
[ key ] : ( key.match( rnothtmlwhite ) || [] ); } i = key.length; while ( i-- ) { delete cache[ key[ i ] ]; } } // Remove the expando if there's no more data if ( key === undefined || jQuery.isEmptyObject( cache ) ) { // Support: Chrome <=35 - 45 // Webkit & Blink performance suffers when deleting properties // from DOM nodes, so set to undefined instead // https://bugs.chromium.org/p/chromium/issues/detail?id=378607 (bug restricted) if ( owner.nodeType ) { owner[ this.expando ] = undefined; } else { delete owner[ this.expando ]; } } }, hasData: function( owner ) { var cache = owner[ this.expando ]; return cache !== undefined && !jQuery.isEmptyObject( cache ); } }; var dataPriv = new Data(); var dataUser = new Data(); // Implementation Summary // // 1. Enforce API surface and semantic compatibility with 1.9.x branch // 2. Improve the module's maintainability by reducing the storage // paths to a single mechanism. // 3. Use the same single mechanism to support "private" and "user" data. // 4. _Never_ expose "private" data to user code (TODO: Drop _data, _removeData) // 5. Avoid exposing implementation details on user objects (eg. expando properties) // 6. Provide a clear path for implementation upgrade to WeakMap in 2014 var rbrace = /^(?:\{[\w\W]*\}|\[[\w\W]*\])$/, rmultiDash = /[A-Z]/g; function getData( data ) { if ( data === "true" ) { return true; } if ( data === "false" ) { return false; } if ( data === "null" ) { return null; } // Only convert to a number if it doesn't change the string if ( data === +data + "" ) { return +data; } if ( rbrace.test( data ) ) { return JSON.parse( data ); } return data; } function dataAttr( elem, key, data ) { var name; // If nothing was found internally, try to fetch any // data from the HTML5 data-* attribute if ( data === undefined && elem.nodeType === 1 ) { name = "data-" + key.replace( rmultiDash, "-$&" ).toLowerCase(); data = elem.getAttribute( name ); if ( typeof data === "string" ) { try { data = getData( data ); } catch ( e ) {} // Make sure we set the data so it isn't changed later dataUser.set( elem, key, data ); } else { data = undefined; } } return data; } jQuery.extend( { hasData: function( elem ) { return dataUser.hasData( elem ) || dataPriv.hasData( elem ); }, data: function( elem, name, data ) { return dataUser.access( elem, name, data ); }, removeData: function( elem, name ) { dataUser.remove( elem, name ); }, // TODO: Now that all calls to _data and _removeData have been replaced // with direct calls to dataPriv methods, these can be deprecated. 
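// [Editor's note, illustrative] The two Data instances never mix:
// jQuery.data( elem, "role" ) reads dataUser (public data, also fed by
// HTML5 data-* attributes), while jQuery._data( elem, "events" ) reads
// dataPriv, where event handlers and queues live.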
_data: function( elem, name, data ) { return dataPriv.access( elem, name, data ); }, _removeData: function( elem, name ) { dataPriv.remove( elem, name ); } } ); jQuery.fn.extend( { data: function( key, value ) { var i, name, data, elem = this[ 0 ], attrs = elem && elem.attributes; // Gets all values if ( key === undefined ) { if ( this.length ) { data = dataUser.get( elem ); if ( elem.nodeType === 1 && !dataPriv.get( elem, "hasDataAttrs" ) ) { i = attrs.length; while ( i-- ) { // Support: IE 11 only // The attrs elements can be null (#14894) if ( attrs[ i ] ) { name = attrs[ i ].name; if ( name.indexOf( "data-" ) === 0 ) { name = camelCase( name.slice( 5 ) ); dataAttr( elem, name, data[ name ] ); } } } dataPriv.set( elem, "hasDataAttrs", true ); } } return data; } // Sets multiple values if ( typeof key === "object" ) { return this.each( function() { dataUser.set( this, key ); } ); } return access( this, function( value ) { var data; // The calling jQuery object (element matches) is not empty // (and therefore has an element appears at this[ 0 ]) and the // `value` parameter was not undefined. An empty jQuery object // will result in `undefined` for elem = this[ 0 ] which will // throw an exception if an attempt to read a data cache is made. if ( elem && value === undefined ) { // Attempt to get data from the cache // The key will always be camelCased in Data data = dataUser.get( elem, key ); if ( data !== undefined ) { return data; } // Attempt to "discover" the data in // HTML5 custom data-* attrs data = dataAttr( elem, key ); if ( data !== undefined ) { return data; } // We tried really hard, but the data doesn't exist. return; } // Set the data... this.each( function() { // We always store the camelCased key dataUser.set( this, key, value ); } ); }, null, value, arguments.length > 1, null, true ); }, removeData: function( key ) { return this.each( function() { dataUser.remove( this, key ); } ); } } ); jQuery.extend( { queue: function( elem, type, data ) { var queue; if ( elem ) { type = ( type || "fx" ) + "queue"; queue = dataPriv.get( elem, type ); // Speed up dequeue by getting out quickly if this is just a lookup if ( data ) { if ( !queue || Array.isArray( data ) ) { queue = dataPriv.access( elem, type, jQuery.makeArray( data ) ); } else { queue.push( data ); } } return queue || []; } }, dequeue: function( elem, type ) { type = type || "fx"; var queue = jQuery.queue( elem, type ), startLength = queue.length, fn = queue.shift(), hooks = jQuery._queueHooks( elem, type ), next = function() { jQuery.dequeue( elem, type ); }; // If the fx queue is dequeued, always remove the progress sentinel if ( fn === "inprogress" ) { fn = queue.shift(); startLength--; } if ( fn ) { // Add a progress sentinel to prevent the fx queue from being // automatically dequeued if ( type === "fx" ) { queue.unshift( "inprogress" ); } // Clear up the last queue stop function delete hooks.stop; fn.call( elem, next, hooks ); } if ( !startLength && hooks ) { hooks.empty.fire(); } }, // Not public - generate a queueHooks object, or return the current one _queueHooks: function( elem, type ) { var key = type + "queueHooks"; return dataPriv.get( elem, key ) || dataPriv.access( elem, key, { empty: jQuery.Callbacks( "once memory" ).add( function() { dataPriv.remove( elem, [ type + "queue", key ] ); } ) } ); } } ); jQuery.fn.extend( { queue: function( type, data ) { var setter = 2; if ( typeof type !== "string" ) { data = type; type = "fx"; setter--; } if ( arguments.length < setter ) { return jQuery.queue( this[ 0 ], 
type ); } return data === undefined ? this : this.each( function() { var queue = jQuery.queue( this, type, data ); // Ensure a hooks for this queue jQuery._queueHooks( this, type ); if ( type === "fx" && queue[ 0 ] !== "inprogress" ) { jQuery.dequeue( this, type ); } } ); }, dequeue: function( type ) { return this.each( function() { jQuery.dequeue( this, type ); } ); }, clearQueue: function( type ) { return this.queue( type || "fx", [] ); }, // Get a promise resolved when queues of a certain type // are emptied (fx is the type by default) promise: function( type, obj ) { var tmp, count = 1, defer = jQuery.Deferred(), elements = this, i = this.length, resolve = function() { if ( !( --count ) ) { defer.resolveWith( elements, [ elements ] ); } }; if ( typeof type !== "string" ) { obj = type; type = undefined; } type = type || "fx"; while ( i-- ) { tmp = dataPriv.get( elements[ i ], type + "queueHooks" ); if ( tmp && tmp.empty ) { count++; tmp.empty.add( resolve ); } } resolve(); return defer.promise( obj ); } } ); var pnum = ( /[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/ ).source; var rcssNum = new RegExp( "^(?:([+-])=|)(" + pnum + ")([a-z%]*)$", "i" ); var cssExpand = [ "Top", "Right", "Bottom", "Left" ]; var documentElement = document.documentElement; var isAttached = function( elem ) { return jQuery.contains( elem.ownerDocument, elem ); }, composed = { composed: true }; // Support: IE 9 - 11+, Edge 12 - 18+, iOS 10.0 - 10.2 only // Check attachment across shadow DOM boundaries when possible (gh-3504) // Support: iOS 10.0-10.2 only // Early iOS 10 versions support `attachShadow` but not `getRootNode`, // leading to errors. We need to check for `getRootNode`. if ( documentElement.getRootNode ) { isAttached = function( elem ) { return jQuery.contains( elem.ownerDocument, elem ) || elem.getRootNode( composed ) === elem.ownerDocument; }; } var isHiddenWithinTree = function( elem, el ) { // isHiddenWithinTree might be called from jQuery#filter function; // in that case, element will be second argument elem = el || elem; // Inline style trumps all return elem.style.display === "none" || elem.style.display === "" && // Otherwise, check computed style // Support: Firefox <=43 - 45 // Disconnected elements can have computed display: none, so first confirm that elem is // in the document. isAttached( elem ) && jQuery.css( elem, "display" ) === "none"; }; var swap = function( elem, options, callback, args ) { var ret, name, old = {}; // Remember the old values, and insert the new ones for ( name in options ) { old[ name ] = elem.style[ name ]; elem.style[ name ] = options[ name ]; } ret = callback.apply( elem, args || [] ); // Revert the old values for ( name in options ) { elem.style[ name ] = old[ name ]; } return ret; }; function adjustCSS( elem, prop, valueParts, tween ) { var adjusted, scale, maxIterations = 20, currentValue = tween ? function() { return tween.cur(); } : function() { return jQuery.css( elem, prop, "" ); }, initial = currentValue(), unit = valueParts && valueParts[ 3 ] || ( jQuery.cssNumber[ prop ] ? 
"" : "px" ), // Starting value computation is required for potential unit mismatches initialInUnit = elem.nodeType && ( jQuery.cssNumber[ prop ] || unit !== "px" && +initial ) && rcssNum.exec( jQuery.css( elem, prop ) ); if ( initialInUnit && initialInUnit[ 3 ] !== unit ) { // Support: Firefox <=54 // Halve the iteration target value to prevent interference from CSS upper bounds (gh-2144) initial = initial / 2; // Trust units reported by jQuery.css unit = unit || initialInUnit[ 3 ]; // Iteratively approximate from a nonzero starting point initialInUnit = +initial || 1; while ( maxIterations-- ) { // Evaluate and update our best guess (doubling guesses that zero out). // Finish if the scale equals or crosses 1 (making the old*new product non-positive). jQuery.style( elem, prop, initialInUnit + unit ); if ( ( 1 - scale ) * ( 1 - ( scale = currentValue() / initial || 0.5 ) ) <= 0 ) { maxIterations = 0; } initialInUnit = initialInUnit / scale; } initialInUnit = initialInUnit * 2; jQuery.style( elem, prop, initialInUnit + unit ); // Make sure we update the tween properties later on valueParts = valueParts || []; } if ( valueParts ) { initialInUnit = +initialInUnit || +initial || 0; // Apply relative offset (+=/-=) if specified adjusted = valueParts[ 1 ] ? initialInUnit + ( valueParts[ 1 ] + 1 ) * valueParts[ 2 ] : +valueParts[ 2 ]; if ( tween ) { tween.unit = unit; tween.start = initialInUnit; tween.end = adjusted; } } return adjusted; } var defaultDisplayMap = {}; function getDefaultDisplay( elem ) { var temp, doc = elem.ownerDocument, nodeName = elem.nodeName, display = defaultDisplayMap[ nodeName ]; if ( display ) { return display; } temp = doc.body.appendChild( doc.createElement( nodeName ) ); display = jQuery.css( temp, "display" ); temp.parentNode.removeChild( temp ); if ( display === "none" ) { display = "block"; } defaultDisplayMap[ nodeName ] = display; return display; } function showHide( elements, show ) { var display, elem, values = [], index = 0, length = elements.length; // Determine new display value for elements that need to change for ( ; index < length; index++ ) { elem = elements[ index ]; if ( !elem.style ) { continue; } display = elem.style.display; if ( show ) { // Since we force visibility upon cascade-hidden elements, an immediate (and slow) // check is required in this first loop unless we have a nonempty display value (either // inline or about-to-be-restored) if ( display === "none" ) { values[ index ] = dataPriv.get( elem, "display" ) || null; if ( !values[ index ] ) { elem.style.display = ""; } } if ( elem.style.display === "" && isHiddenWithinTree( elem ) ) { values[ index ] = getDefaultDisplay( elem ); } } else { if ( display !== "none" ) { values[ index ] = "none"; // Remember what we're overwriting dataPriv.set( elem, "display", display ); } } } // Set the display of the elements in a second loop to avoid constant reflow for ( index = 0; index < length; index++ ) { if ( values[ index ] != null ) { elements[ index ].style.display = values[ index ]; } } return elements; } jQuery.fn.extend( { show: function() { return showHide( this, true ); }, hide: function() { return showHide( this ); }, toggle: function( state ) { if ( typeof state === "boolean" ) { return state ? 
this.show() : this.hide(); } return this.each( function() { if ( isHiddenWithinTree( this ) ) { jQuery( this ).show(); } else { jQuery( this ).hide(); } } ); } } ); var rcheckableType = ( /^(?:checkbox|radio)$/i ); var rtagName = ( /<([a-z][^\/\0>\x20\t\r\n\f]*)/i ); var rscriptType = ( /^$|^module$|\/(?:java|ecma)script/i ); // We have to close these tags to support XHTML (#13200) var wrapMap = { // Support: IE <=9 only option: [ 1, "<select multiple='multiple'>", "</select>" ], // XHTML parsers do not magically insert elements in the // same way that tag soup parsers do. So we cannot shorten // this by omitting <tbody> or other required elements. thead: [ 1, "<table>", "</table>" ],
" ], col: [ 2, "", "
" ], tr: [ 2, "", "
" ], td: [ 3, "", "
" ], _default: [ 0, "", "" ] }; // Support: IE <=9 only wrapMap.optgroup = wrapMap.option; wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; wrapMap.th = wrapMap.td; function getAll( context, tag ) { // Support: IE <=9 - 11 only // Use typeof to avoid zero-argument method invocation on host objects (#15151) var ret; if ( typeof context.getElementsByTagName !== "undefined" ) { ret = context.getElementsByTagName( tag || "*" ); } else if ( typeof context.querySelectorAll !== "undefined" ) { ret = context.querySelectorAll( tag || "*" ); } else { ret = []; } if ( tag === undefined || tag && nodeName( context, tag ) ) { return jQuery.merge( [ context ], ret ); } return ret; } // Mark scripts as having already been evaluated function setGlobalEval( elems, refElements ) { var i = 0, l = elems.length; for ( ; i < l; i++ ) { dataPriv.set( elems[ i ], "globalEval", !refElements || dataPriv.get( refElements[ i ], "globalEval" ) ); } } var rhtml = /<|&#?\w+;/; function buildFragment( elems, context, scripts, selection, ignored ) { var elem, tmp, tag, wrap, attached, j, fragment = context.createDocumentFragment(), nodes = [], i = 0, l = elems.length; for ( ; i < l; i++ ) { elem = elems[ i ]; if ( elem || elem === 0 ) { // Add nodes directly if ( toType( elem ) === "object" ) { // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit jQuery.merge( nodes, elem.nodeType ? [ elem ] : elem ); // Convert non-html into a text node } else if ( !rhtml.test( elem ) ) { nodes.push( context.createTextNode( elem ) ); // Convert html into DOM nodes } else { tmp = tmp || fragment.appendChild( context.createElement( "div" ) ); // Deserialize a standard representation tag = ( rtagName.exec( elem ) || [ "", "" ] )[ 1 ].toLowerCase(); wrap = wrapMap[ tag ] || wrapMap._default; tmp.innerHTML = wrap[ 1 ] + jQuery.htmlPrefilter( elem ) + wrap[ 2 ]; // Descend through wrappers to the right content j = wrap[ 0 ]; while ( j-- ) { tmp = tmp.lastChild; } // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit jQuery.merge( nodes, tmp.childNodes ); // Remember the top-level container tmp = fragment.firstChild; // Ensure the created nodes are orphaned (#12392) tmp.textContent = ""; } } } // Remove wrapper from fragment fragment.textContent = ""; i = 0; while ( ( elem = nodes[ i++ ] ) ) { // Skip elements already in the context collection (trac-4087) if ( selection && jQuery.inArray( elem, selection ) > -1 ) { if ( ignored ) { ignored.push( elem ); } continue; } attached = isAttached( elem ); // Append to fragment tmp = getAll( fragment.appendChild( elem ), "script" ); // Preserve script evaluation history if ( attached ) { setGlobalEval( tmp ); } // Capture executables if ( scripts ) { j = 0; while ( ( elem = tmp[ j++ ] ) ) { if ( rscriptType.test( elem.type || "" ) ) { scripts.push( elem ); } } } } return fragment; } ( function() { var fragment = document.createDocumentFragment(), div = fragment.appendChild( document.createElement( "div" ) ), input = document.createElement( "input" ); // Support: Android 4.0 - 4.3 only // Check state lost if the name is set (#11217) // Support: Windows Web Apps (WWA) // `name` and `type` must use .setAttribute for WWA (#14901) input.setAttribute( "type", "radio" ); input.setAttribute( "checked", "checked" ); input.setAttribute( "name", "t" ); div.appendChild( input ); // Support: Android <=4.1 only // Older WebKit doesn't clone checked state correctly in fragments 
support.checkClone = div.cloneNode( true ).cloneNode( true ).lastChild.checked; // Support: IE <=11 only // Make sure textarea (and checkbox) defaultValue is properly cloned div.innerHTML = "<textarea>x</textarea>"; support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue; } )(); var rkeyEvent = /^key/, rmouseEvent = /^(?:mouse|pointer|contextmenu|drag|drop)|click/, rtypenamespace = /^([^.]*)(?:\.(.+)|)/; function returnTrue() { return true; } function returnFalse() { return false; } // Support: IE <=9 - 11+ // focus() and blur() are asynchronous, except when they are no-op. // So expect focus to be synchronous when the element is already active, // and blur to be synchronous when the element is not already active. // (focus and blur are always synchronous in other supported browsers, // this just defines when we can count on it). function expectSync( elem, type ) { return ( elem === safeActiveElement() ) === ( type === "focus" ); } // Support: IE <=9 only // Accessing document.activeElement can throw unexpectedly // https://bugs.jquery.com/ticket/13393 function safeActiveElement() { try { return document.activeElement; } catch ( err ) { } } function on( elem, types, selector, data, fn, one ) { var origFn, type; // Types can be a map of types/handlers if ( typeof types === "object" ) { // ( types-Object, selector, data ) if ( typeof selector !== "string" ) { // ( types-Object, data ) data = data || selector; selector = undefined; } for ( type in types ) { on( elem, type, selector, data, types[ type ], one ); } return elem; } if ( data == null && fn == null ) { // ( types, fn ) fn = selector; data = selector = undefined; } else if ( fn == null ) { if ( typeof selector === "string" ) { // ( types, selector, fn ) fn = data; data = undefined; } else { // ( types, data, fn ) fn = data; data = selector; selector = undefined; } } if ( fn === false ) { fn = returnFalse; } else if ( !fn ) { return elem; } if ( one === 1 ) { origFn = fn; fn = function( event ) { // Can use an empty set, since event contains the info jQuery().off( event ); return origFn.apply( this, arguments ); }; // Use same guid so caller can remove using origFn fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ ); } return elem.each( function() { jQuery.event.add( this, types, fn, data, selector ); } ); } /* * Helper functions for managing events -- not part of the public interface. * Props to Dean Edwards' addEvent library for many of the ideas.
*/ jQuery.event = { global: {}, add: function( elem, types, handler, data, selector ) { var handleObjIn, eventHandle, tmp, events, t, handleObj, special, handlers, type, namespaces, origType, elemData = dataPriv.get( elem ); // Don't attach events to noData or text/comment nodes (but allow plain objects) if ( !elemData ) { return; } // Caller can pass in an object of custom data in lieu of the handler if ( handler.handler ) { handleObjIn = handler; handler = handleObjIn.handler; selector = handleObjIn.selector; } // Ensure that invalid selectors throw exceptions at attach time // Evaluate against documentElement in case elem is a non-element node (e.g., document) if ( selector ) { jQuery.find.matchesSelector( documentElement, selector ); } // Make sure that the handler has a unique ID, used to find/remove it later if ( !handler.guid ) { handler.guid = jQuery.guid++; } // Init the element's event structure and main handler, if this is the first if ( !( events = elemData.events ) ) { events = elemData.events = {}; } if ( !( eventHandle = elemData.handle ) ) { eventHandle = elemData.handle = function( e ) { // Discard the second event of a jQuery.event.trigger() and // when an event is called after a page has unloaded return typeof jQuery !== "undefined" && jQuery.event.triggered !== e.type ? jQuery.event.dispatch.apply( elem, arguments ) : undefined; }; } // Handle multiple events separated by a space types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; t = types.length; while ( t-- ) { tmp = rtypenamespace.exec( types[ t ] ) || []; type = origType = tmp[ 1 ]; namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); // There *must* be a type, no attaching namespace-only handlers if ( !type ) { continue; } // If event changes its type, use the special event handlers for the changed type special = jQuery.event.special[ type ] || {}; // If selector defined, determine special event api type, otherwise given type type = ( selector ? special.delegateType : special.bindType ) || type; // Update special based on newly reset type special = jQuery.event.special[ type ] || {}; // handleObj is passed to all event handlers handleObj = jQuery.extend( { type: type, origType: origType, data: data, handler: handler, guid: handler.guid, selector: selector, needsContext: selector && jQuery.expr.match.needsContext.test( selector ), namespace: namespaces.join( "." 
) }, handleObjIn ); // Init the event handler queue if we're the first if ( !( handlers = events[ type ] ) ) { handlers = events[ type ] = []; handlers.delegateCount = 0; // Only use addEventListener if the special events handler returns false if ( !special.setup || special.setup.call( elem, data, namespaces, eventHandle ) === false ) { if ( elem.addEventListener ) { elem.addEventListener( type, eventHandle ); } } } if ( special.add ) { special.add.call( elem, handleObj ); if ( !handleObj.handler.guid ) { handleObj.handler.guid = handler.guid; } } // Add to the element's handler list, delegates in front if ( selector ) { handlers.splice( handlers.delegateCount++, 0, handleObj ); } else { handlers.push( handleObj ); } // Keep track of which events have ever been used, for event optimization jQuery.event.global[ type ] = true; } }, // Detach an event or set of events from an element remove: function( elem, types, handler, selector, mappedTypes ) { var j, origCount, tmp, events, t, handleObj, special, handlers, type, namespaces, origType, elemData = dataPriv.hasData( elem ) && dataPriv.get( elem ); if ( !elemData || !( events = elemData.events ) ) { return; } // Once for each type.namespace in types; type may be omitted types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; t = types.length; while ( t-- ) { tmp = rtypenamespace.exec( types[ t ] ) || []; type = origType = tmp[ 1 ]; namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); // Unbind all events (on this namespace, if provided) for the element if ( !type ) { for ( type in events ) { jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); } continue; } special = jQuery.event.special[ type ] || {}; type = ( selector ? special.delegateType : special.bindType ) || type; handlers = events[ type ] || []; tmp = tmp[ 2 ] && new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ); // Remove matching events origCount = j = handlers.length; while ( j-- ) { handleObj = handlers[ j ]; if ( ( mappedTypes || origType === handleObj.origType ) && ( !handler || handler.guid === handleObj.guid ) && ( !tmp || tmp.test( handleObj.namespace ) ) && ( !selector || selector === handleObj.selector || selector === "**" && handleObj.selector ) ) { handlers.splice( j, 1 ); if ( handleObj.selector ) { handlers.delegateCount--; } if ( special.remove ) { special.remove.call( elem, handleObj ); } } } // Remove generic event handler if we removed something and no more handlers exist // (avoids potential for endless recursion during removal of special event handlers) if ( origCount && !handlers.length ) { if ( !special.teardown || special.teardown.call( elem, namespaces, elemData.handle ) === false ) { jQuery.removeEvent( elem, type, elemData.handle ); } delete events[ type ]; } } // Remove data and the expando if it's no longer used if ( jQuery.isEmptyObject( events ) ) { dataPriv.remove( elem, "handle events" ); } }, dispatch: function( nativeEvent ) { // Make a writable jQuery.Event from the native event object var event = jQuery.event.fix( nativeEvent ); var i, j, ret, matched, handleObj, handlerQueue, args = new Array( arguments.length ), handlers = ( dataPriv.get( this, "events" ) || {} )[ event.type ] || [], special = jQuery.event.special[ event.type ] || {}; // Use the fix-ed jQuery.Event rather than the (read-only) native event args[ 0 ] = event; for ( i = 1; i < arguments.length; i++ ) { args[ i ] = arguments[ i ]; } event.delegateTarget = this; // Call the preDispatch hook for the mapped type, and let it bail if desired 
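// [Editor's note, illustrative] A special event's preDispatch may return
// false to cancel dispatch outright; compare beforeunload's postDispatch
// hook further down, which runs after all handlers have fired.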
if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { return; } // Determine handlers handlerQueue = jQuery.event.handlers.call( this, event, handlers ); // Run delegates first; they may want to stop propagation beneath us i = 0; while ( ( matched = handlerQueue[ i++ ] ) && !event.isPropagationStopped() ) { event.currentTarget = matched.elem; j = 0; while ( ( handleObj = matched.handlers[ j++ ] ) && !event.isImmediatePropagationStopped() ) { // If the event is namespaced, then each handler is only invoked if it is // specially universal or its namespaces are a superset of the event's. if ( !event.rnamespace || handleObj.namespace === false || event.rnamespace.test( handleObj.namespace ) ) { event.handleObj = handleObj; event.data = handleObj.data; ret = ( ( jQuery.event.special[ handleObj.origType ] || {} ).handle || handleObj.handler ).apply( matched.elem, args ); if ( ret !== undefined ) { if ( ( event.result = ret ) === false ) { event.preventDefault(); event.stopPropagation(); } } } } } // Call the postDispatch hook for the mapped type if ( special.postDispatch ) { special.postDispatch.call( this, event ); } return event.result; }, handlers: function( event, handlers ) { var i, handleObj, sel, matchedHandlers, matchedSelectors, handlerQueue = [], delegateCount = handlers.delegateCount, cur = event.target; // Find delegate handlers if ( delegateCount && // Support: IE <=9 // Black-hole SVG instance trees (trac-13180) cur.nodeType && // Support: Firefox <=42 // Suppress spec-violating clicks indicating a non-primary pointer button (trac-3861) // https://www.w3.org/TR/DOM-Level-3-Events/#event-type-click // Support: IE 11 only // ...but not arrow key "clicks" of radio inputs, which can have `button` -1 (gh-2343) !( event.type === "click" && event.button >= 1 ) ) { for ( ; cur !== this; cur = cur.parentNode || this ) { // Don't check non-elements (#13208) // Don't process clicks on disabled elements (#6911, #8165, #11382, #11764) if ( cur.nodeType === 1 && !( event.type === "click" && cur.disabled === true ) ) { matchedHandlers = []; matchedSelectors = {}; for ( i = 0; i < delegateCount; i++ ) { handleObj = handlers[ i ]; // Don't conflict with Object.prototype properties (#13203) sel = handleObj.selector + " "; if ( matchedSelectors[ sel ] === undefined ) { matchedSelectors[ sel ] = handleObj.needsContext ? jQuery( sel, this ).index( cur ) > -1 : jQuery.find( sel, this, null, [ cur ] ).length; } if ( matchedSelectors[ sel ] ) { matchedHandlers.push( handleObj ); } } if ( matchedHandlers.length ) { handlerQueue.push( { elem: cur, handlers: matchedHandlers } ); } } } } // Add the remaining (directly-bound) handlers cur = this; if ( delegateCount < handlers.length ) { handlerQueue.push( { elem: cur, handlers: handlers.slice( delegateCount ) } ); } return handlerQueue; }, addProp: function( name, hook ) { Object.defineProperty( jQuery.Event.prototype, name, { enumerable: true, configurable: true, get: isFunction( hook ) ? function() { if ( this.originalEvent ) { return hook( this.originalEvent ); } } : function() { if ( this.originalEvent ) { return this.originalEvent[ name ]; } }, set: function( value ) { Object.defineProperty( this, name, { enumerable: true, configurable: true, writable: true, value: value } ); } } ); }, fix: function( originalEvent ) { return originalEvent[ jQuery.expando ] ? 
originalEvent : new jQuery.Event( originalEvent ); }, special: { load: { // Prevent triggered image.load events from bubbling to window.load noBubble: true }, click: { // Utilize native event to ensure correct state for checkable inputs setup: function( data ) { // For mutual compressibility with _default, replace `this` access with a local var. // `|| data` is dead code meant only to preserve the variable through minification. var el = this || data; // Claim the first handler if ( rcheckableType.test( el.type ) && el.click && nodeName( el, "input" ) ) { // dataPriv.set( el, "click", ... ) leverageNative( el, "click", returnTrue ); } // Return false to allow normal processing in the caller return false; }, trigger: function( data ) { // For mutual compressibility with _default, replace `this` access with a local var. // `|| data` is dead code meant only to preserve the variable through minification. var el = this || data; // Force setup before triggering a click if ( rcheckableType.test( el.type ) && el.click && nodeName( el, "input" ) ) { leverageNative( el, "click" ); } // Return non-false to allow normal event-path propagation return true; }, // For cross-browser consistency, suppress native .click() on links // Also prevent it if we're currently inside a leveraged native-event stack _default: function( event ) { var target = event.target; return rcheckableType.test( target.type ) && target.click && nodeName( target, "input" ) && dataPriv.get( target, "click" ) || nodeName( target, "a" ); } }, beforeunload: { postDispatch: function( event ) { // Support: Firefox 20+ // Firefox doesn't alert if the returnValue field is not set. if ( event.result !== undefined && event.originalEvent ) { event.originalEvent.returnValue = event.result; } } } } }; // Ensure the presence of an event listener that handles manually-triggered // synthetic events by interrupting progress until reinvoked in response to // *native* events that it fires directly, ensuring that state changes have // already occurred before other listeners are invoked. function leverageNative( el, type, expectSync ) { // Missing expectSync indicates a trigger call, which must force setup through jQuery.event.add if ( !expectSync ) { if ( dataPriv.get( el, type ) === undefined ) { jQuery.event.add( el, type, returnTrue ); } return; } // Register the controller as a special universal handler for all event namespaces dataPriv.set( el, type, false ); jQuery.event.add( el, type, { namespace: false, handler: function( event ) { var notAsync, result, saved = dataPriv.get( this, type ); if ( ( event.isTrigger & 1 ) && this[ type ] ) { // Interrupt processing of the outer synthetic .trigger()ed event // Saved data should be false in such cases, but might be a leftover capture object // from an async native handler (gh-4350) if ( !saved.length ) { // Store arguments for use when handling the inner native event // There will always be at least one argument (an event object), so this array // will not be confused with a leftover capture object. 
saved = slice.call( arguments ); dataPriv.set( this, type, saved ); // Trigger the native event and capture its result // Support: IE <=9 - 11+ // focus() and blur() are asynchronous notAsync = expectSync( this, type ); this[ type ](); result = dataPriv.get( this, type ); if ( saved !== result || notAsync ) { dataPriv.set( this, type, false ); } else { result = {}; } if ( saved !== result ) { // Cancel the outer synthetic event event.stopImmediatePropagation(); event.preventDefault(); return result.value; } // If this is an inner synthetic event for an event with a bubbling surrogate // (focus or blur), assume that the surrogate already propagated from triggering the // native event and prevent that from happening again here. // This technically gets the ordering wrong w.r.t. to `.trigger()` (in which the // bubbling surrogate propagates *after* the non-bubbling base), but that seems // less bad than duplication. } else if ( ( jQuery.event.special[ type ] || {} ).delegateType ) { event.stopPropagation(); } // If this is a native event triggered above, everything is now in order // Fire an inner synthetic event with the original arguments } else if ( saved.length ) { // ...and capture the result dataPriv.set( this, type, { value: jQuery.event.trigger( // Support: IE <=9 - 11+ // Extend with the prototype to reset the above stopImmediatePropagation() jQuery.extend( saved[ 0 ], jQuery.Event.prototype ), saved.slice( 1 ), this ) } ); // Abort handling of the native event event.stopImmediatePropagation(); } } } ); } jQuery.removeEvent = function( elem, type, handle ) { // This "if" is needed for plain objects if ( elem.removeEventListener ) { elem.removeEventListener( type, handle ); } }; jQuery.Event = function( src, props ) { // Allow instantiation without the 'new' keyword if ( !( this instanceof jQuery.Event ) ) { return new jQuery.Event( src, props ); } // Event object if ( src && src.type ) { this.originalEvent = src; this.type = src.type; // Events bubbling up the document may have been marked as prevented // by a handler lower down the tree; reflect the correct value. this.isDefaultPrevented = src.defaultPrevented || src.defaultPrevented === undefined && // Support: Android <=2.3 only src.returnValue === false ? returnTrue : returnFalse; // Create target properties // Support: Safari <=6 - 7 only // Target should not be a text node (#504, #13143) this.target = ( src.target && src.target.nodeType === 3 ) ? 
src.target.parentNode : src.target; this.currentTarget = src.currentTarget; this.relatedTarget = src.relatedTarget; // Event type } else { this.type = src; } // Put explicitly provided properties onto the event object if ( props ) { jQuery.extend( this, props ); } // Create a timestamp if incoming event doesn't have one this.timeStamp = src && src.timeStamp || Date.now(); // Mark it as fixed this[ jQuery.expando ] = true; }; // jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding // https://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html jQuery.Event.prototype = { constructor: jQuery.Event, isDefaultPrevented: returnFalse, isPropagationStopped: returnFalse, isImmediatePropagationStopped: returnFalse, isSimulated: false, preventDefault: function() { var e = this.originalEvent; this.isDefaultPrevented = returnTrue; if ( e && !this.isSimulated ) { e.preventDefault(); } }, stopPropagation: function() { var e = this.originalEvent; this.isPropagationStopped = returnTrue; if ( e && !this.isSimulated ) { e.stopPropagation(); } }, stopImmediatePropagation: function() { var e = this.originalEvent; this.isImmediatePropagationStopped = returnTrue; if ( e && !this.isSimulated ) { e.stopImmediatePropagation(); } this.stopPropagation(); } }; // Includes all common event props including KeyEvent and MouseEvent specific props jQuery.each( { altKey: true, bubbles: true, cancelable: true, changedTouches: true, ctrlKey: true, detail: true, eventPhase: true, metaKey: true, pageX: true, pageY: true, shiftKey: true, view: true, "char": true, code: true, charCode: true, key: true, keyCode: true, button: true, buttons: true, clientX: true, clientY: true, offsetX: true, offsetY: true, pointerId: true, pointerType: true, screenX: true, screenY: true, targetTouches: true, toElement: true, touches: true, which: function( event ) { var button = event.button; // Add which for key events if ( event.which == null && rkeyEvent.test( event.type ) ) { return event.charCode != null ? event.charCode : event.keyCode; } // Add which for click: 1 === left; 2 === middle; 3 === right if ( !event.which && button !== undefined && rmouseEvent.test( event.type ) ) { if ( button & 1 ) { return 1; } if ( button & 2 ) { return 3; } if ( button & 4 ) { return 2; } return 0; } return event.which; } }, jQuery.event.addProp ); jQuery.each( { focus: "focusin", blur: "focusout" }, function( type, delegateType ) { jQuery.event.special[ type ] = { // Utilize native event if possible so blur/focus sequence is correct setup: function() { // Claim the first handler // dataPriv.set( this, "focus", ... ) // dataPriv.set( this, "blur", ... ) leverageNative( this, type, expectSync ); // Return false to allow normal processing in the caller return false; }, trigger: function() { // Force setup before trigger leverageNative( this, type ); // Return non-false to allow normal event-path propagation return true; }, delegateType: delegateType }; } ); // Create mouseenter/leave events using mouseover/out and event-time checks // so that event delegation works in jQuery. // Do the same for pointerenter/pointerleave and pointerover/pointerout // // Support: Safari 7 only // Safari sends mouseenter too often; see: // https://bugs.chromium.org/p/chromium/issues/detail?id=470258 // for the description of the bug (it existed in older Chrome versions as well). 
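// [Editor's note] An illustrative sketch (never invoked; selector and
// class names are hypothetical) of the delegated mouseenter handling that
// the special-event mapping below enables by riding the bubbling
// mouseover/mouseout events:
function exampleMouseenterDelegation() {
	jQuery( document ).on( "mouseenter", ".menu-item", function() {
		jQuery( this ).addClass( "is-hovered" );
	} );
	jQuery( document ).on( "mouseleave", ".menu-item", function() {
		jQuery( this ).removeClass( "is-hovered" );
	} );
}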
jQuery.each( { mouseenter: "mouseover", mouseleave: "mouseout", pointerenter: "pointerover", pointerleave: "pointerout" }, function( orig, fix ) { jQuery.event.special[ orig ] = { delegateType: fix, bindType: fix, handle: function( event ) { var ret, target = this, related = event.relatedTarget, handleObj = event.handleObj; // For mouseenter/leave call the handler if related is outside the target. // NB: No relatedTarget if the mouse left/entered the browser window if ( !related || ( related !== target && !jQuery.contains( target, related ) ) ) { event.type = handleObj.origType; ret = handleObj.handler.apply( this, arguments ); event.type = fix; } return ret; } }; } ); jQuery.fn.extend( { on: function( types, selector, data, fn ) { return on( this, types, selector, data, fn ); }, one: function( types, selector, data, fn ) { return on( this, types, selector, data, fn, 1 ); }, off: function( types, selector, fn ) { var handleObj, type; if ( types && types.preventDefault && types.handleObj ) { // ( event ) dispatched jQuery.Event handleObj = types.handleObj; jQuery( types.delegateTarget ).off( handleObj.namespace ? handleObj.origType + "." + handleObj.namespace : handleObj.origType, handleObj.selector, handleObj.handler ); return this; } if ( typeof types === "object" ) { // ( types-object [, selector] ) for ( type in types ) { this.off( type, selector, types[ type ] ); } return this; } if ( selector === false || typeof selector === "function" ) { // ( types [, fn] ) fn = selector; selector = undefined; } if ( fn === false ) { fn = returnFalse; } return this.each( function() { jQuery.event.remove( this, types, fn, selector ); } ); } } ); var /* eslint-disable max-len */ // See https://github.com/eslint/eslint/issues/3229 rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi, /* eslint-enable */ // Support: IE <=10 - 11, Edge 12 - 13 only // In IE/Edge using regex groups here causes severe slowdowns. // See https://connect.microsoft.com/IE/feedback/details/1736512/ rnoInnerhtml = /<script|<style|<link/i, // checked="checked" or checked rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i, rcleanScript = /^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g; // Prefer a tbody over its parent table for containing new rows function manipulationTarget( elem, content ) { if ( nodeName( elem, "table" ) && nodeName( content.nodeType !== 11 ? content : content.firstChild, "tr" ) ) { return jQuery( elem ).children( "tbody" )[ 0 ] || elem; } return elem; } // Replace/restore the type attribute of script elements for safe DOM manipulation function disableScript( elem ) { elem.type = ( elem.getAttribute( "type" ) !== null ) + "/" + elem.type; return elem; } function restoreScript( elem ) { if ( ( elem.type || "" ).slice( 0, 5 ) === "true/" ) { elem.type = elem.type.slice( 5 ); } else { elem.removeAttribute( "type" ); } return elem; } function cloneCopyEvent( src, dest ) { var i, l, type, pdataOld, pdataCur, udataOld, udataCur, events; if ( dest.nodeType !== 1 ) { return; } // 1. Copy private data: events, handlers, etc. if ( dataPriv.hasData( src ) ) { pdataOld = dataPriv.access( src ); pdataCur = dataPriv.set( dest, pdataOld ); events = pdataOld.events; if ( events ) { delete pdataCur.handle; pdataCur.events = {}; for ( type in events ) { for ( i = 0, l = events[ type ].length; i < l; i++ ) { jQuery.event.add( dest, type, events[ type ][ i ] ); } } } } // 2.
Copy user data if ( dataUser.hasData( src ) ) { udataOld = dataUser.access( src ); udataCur = jQuery.extend( {}, udataOld ); dataUser.set( dest, udataCur ); } } // Fix IE bugs, see support tests function fixInput( src, dest ) { var nodeName = dest.nodeName.toLowerCase(); // Fails to persist the checked state of a cloned checkbox or radio button. if ( nodeName === "input" && rcheckableType.test( src.type ) ) { dest.checked = src.checked; // Fails to return the selected option to the default selected state when cloning options } else if ( nodeName === "input" || nodeName === "textarea" ) { dest.defaultValue = src.defaultValue; } } function domManip( collection, args, callback, ignored ) { // Flatten any nested arrays args = concat.apply( [], args ); var fragment, first, scripts, hasScripts, node, doc, i = 0, l = collection.length, iNoClone = l - 1, value = args[ 0 ], valueIsFunction = isFunction( value ); // We can't cloneNode fragments that contain checked, in WebKit if ( valueIsFunction || ( l > 1 && typeof value === "string" && !support.checkClone && rchecked.test( value ) ) ) { return collection.each( function( index ) { var self = collection.eq( index ); if ( valueIsFunction ) { args[ 0 ] = value.call( this, index, self.html() ); } domManip( self, args, callback, ignored ); } ); } if ( l ) { fragment = buildFragment( args, collection[ 0 ].ownerDocument, false, collection, ignored ); first = fragment.firstChild; if ( fragment.childNodes.length === 1 ) { fragment = first; } // Require either new content or an interest in ignored elements to invoke the callback if ( first || ignored ) { scripts = jQuery.map( getAll( fragment, "script" ), disableScript ); hasScripts = scripts.length; // Use the original fragment for the last item // instead of the first because it can end up // being emptied incorrectly in certain situations (#8070). for ( ; i < l; i++ ) { node = fragment; if ( i !== iNoClone ) { node = jQuery.clone( node, true, true ); // Keep references to cloned scripts for later restoration if ( hasScripts ) { // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit jQuery.merge( scripts, getAll( node, "script" ) ); } } callback.call( collection[ i ], node, i ); } if ( hasScripts ) { doc = scripts[ scripts.length - 1 ].ownerDocument; // Reenable scripts jQuery.map( scripts, restoreScript ); // Evaluate executable scripts on first document insertion for ( i = 0; i < hasScripts; i++ ) { node = scripts[ i ]; if ( rscriptType.test( node.type || "" ) && !dataPriv.access( node, "globalEval" ) && jQuery.contains( doc, node ) ) { if ( node.src && ( node.type || "" ).toLowerCase() !== "module" ) { // Optional AJAX dependency, but won't run scripts if not present if ( jQuery._evalUrl && !node.noModule ) { jQuery._evalUrl( node.src, { nonce: node.nonce || node.getAttribute( "nonce" ) } ); } } else { DOMEval( node.textContent.replace( rcleanScript, "" ), node, doc ); } } } } } } return collection; } function remove( elem, selector, keepData ) { var node, nodes = selector ? 
jQuery.filter( selector, elem ) :
		elem,

		i = 0;

	for ( ; ( node = nodes[ i ] ) != null; i++ ) {
		if ( !keepData && node.nodeType === 1 ) {
			jQuery.cleanData( getAll( node ) );
		}

		if ( node.parentNode ) {
			if ( keepData && isAttached( node ) ) {
				setGlobalEval( getAll( node, "script" ) );
			}
			node.parentNode.removeChild( node );
		}
	}

	return elem;
}

jQuery.extend( {
	htmlPrefilter: function( html ) {
		return html.replace( rxhtmlTag, "<$1></$2>" );
	},

	clone: function( elem, dataAndEvents, deepDataAndEvents ) {
		var i, l, srcElements, destElements,
			clone = elem.cloneNode( true ),
			inPage = isAttached( elem );

		// Fix IE cloning issues
		if ( !support.noCloneChecked && ( elem.nodeType === 1 || elem.nodeType === 11 ) &&
				!jQuery.isXMLDoc( elem ) ) {

			// We eschew Sizzle here for performance reasons: https://jsperf.com/getall-vs-sizzle/2
			destElements = getAll( clone );
			srcElements = getAll( elem );

			for ( i = 0, l = srcElements.length; i < l; i++ ) {
				fixInput( srcElements[ i ], destElements[ i ] );
			}
		}

		// Copy the events from the original to the clone
		if ( dataAndEvents ) {
			if ( deepDataAndEvents ) {
				srcElements = srcElements || getAll( elem );
				destElements = destElements || getAll( clone );

				for ( i = 0, l = srcElements.length; i < l; i++ ) {
					cloneCopyEvent( srcElements[ i ], destElements[ i ] );
				}
			} else {
				cloneCopyEvent( elem, clone );
			}
		}

		// Preserve script evaluation history
		destElements = getAll( clone, "script" );
		if ( destElements.length > 0 ) {
			setGlobalEval( destElements, !inPage && getAll( elem, "script" ) );
		}

		// Return the cloned set
		return clone;
	},

	cleanData: function( elems ) {
		var data, elem, type,
			special = jQuery.event.special,
			i = 0;

		for ( ; ( elem = elems[ i ] ) !== undefined; i++ ) {
			if ( acceptData( elem ) ) {
				if ( ( data = elem[ dataPriv.expando ] ) ) {
					if ( data.events ) {
						for ( type in data.events ) {
							if ( special[ type ] ) {
								jQuery.event.remove( elem, type );

							// This is a shortcut to avoid jQuery.event.remove's overhead
							} else {
								jQuery.removeEvent( elem, type, data.handle );
							}
						}
					}

					// Support: Chrome <=35 - 45+
					// Assign undefined instead of using delete, see Data#remove
					elem[ dataPriv.expando ] = undefined;
				}
				if ( elem[ dataUser.expando ] ) {

					// Support: Chrome <=35 - 45+
					// Assign undefined instead of using delete, see Data#remove
					elem[ dataUser.expando ] = undefined;
				}
			}
		}
	}
} );

jQuery.fn.extend( {
	detach: function( selector ) {
		return remove( this, selector, true );
	},

	remove: function( selector ) {
		return remove( this, selector );
	},

	text: function( value ) {
		return access( this, function( value ) {
			return value === undefined ?
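// Illustrative usage of the getter/setter split implemented through access()
// here (a sketch; "#note" is a hypothetical selector):
//
//     $( "#note" ).text();           // getter: combined text of the whole set
//     $( "#note" ).text( "saved" );  // setter: assigns textContent, so any
//                                    // markup in the string is NOT parsed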
jQuery.text( this ) : this.empty().each( function() { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { this.textContent = value; } } ); }, null, value, arguments.length ); }, append: function() { return domManip( this, arguments, function( elem ) { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { var target = manipulationTarget( this, elem ); target.appendChild( elem ); } } ); }, prepend: function() { return domManip( this, arguments, function( elem ) { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { var target = manipulationTarget( this, elem ); target.insertBefore( elem, target.firstChild ); } } ); }, before: function() { return domManip( this, arguments, function( elem ) { if ( this.parentNode ) { this.parentNode.insertBefore( elem, this ); } } ); }, after: function() { return domManip( this, arguments, function( elem ) { if ( this.parentNode ) { this.parentNode.insertBefore( elem, this.nextSibling ); } } ); }, empty: function() { var elem, i = 0; for ( ; ( elem = this[ i ] ) != null; i++ ) { if ( elem.nodeType === 1 ) { // Prevent memory leaks jQuery.cleanData( getAll( elem, false ) ); // Remove any remaining nodes elem.textContent = ""; } } return this; }, clone: function( dataAndEvents, deepDataAndEvents ) { dataAndEvents = dataAndEvents == null ? false : dataAndEvents; deepDataAndEvents = deepDataAndEvents == null ? dataAndEvents : deepDataAndEvents; return this.map( function() { return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); } ); }, html: function( value ) { return access( this, function( value ) { var elem = this[ 0 ] || {}, i = 0, l = this.length; if ( value === undefined && elem.nodeType === 1 ) { return elem.innerHTML; } // See if we can take a shortcut and just use innerHTML if ( typeof value === "string" && !rnoInnerhtml.test( value ) && !wrapMap[ ( rtagName.exec( value ) || [ "", "" ] )[ 1 ].toLowerCase() ] ) { value = jQuery.htmlPrefilter( value ); try { for ( ; i < l; i++ ) { elem = this[ i ] || {}; // Remove element nodes and prevent memory leaks if ( elem.nodeType === 1 ) { jQuery.cleanData( getAll( elem, false ) ); elem.innerHTML = value; } } elem = 0; // If using innerHTML throws an exception, use the fallback method } catch ( e ) {} } if ( elem ) { this.empty().append( value ); } }, null, value, arguments.length ); }, replaceWith: function() { var ignored = []; // Make the changes, replacing each non-ignored context element with the new content return domManip( this, arguments, function( elem ) { var parent = this.parentNode; if ( jQuery.inArray( this, ignored ) < 0 ) { jQuery.cleanData( getAll( this ) ); if ( parent ) { parent.replaceChild( elem, this ); } } // Force callback invocation }, ignored ); } } ); jQuery.each( { appendTo: "append", prependTo: "prepend", insertBefore: "before", insertAfter: "after", replaceAll: "replaceWith" }, function( name, original ) { jQuery.fn[ name ] = function( selector ) { var elems, ret = [], insert = jQuery( selector ), last = insert.length - 1, i = 0; for ( ; i <= last; i++ ) { elems = i === last ? 
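// Illustrative note (a sketch; selectors are hypothetical): these methods
// reverse source and target, so
//
//     $( "<li>new</li>" ).appendTo( "ul.menu" );
//     // has the same effect as: $( "ul.menu" ).append( "<li>new</li>" );
//
// With several targets, every index except the last receives a clone, which
// is what the ternary around this comment implements.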
this : this.clone( true ); jQuery( insert[ i ] )[ original ]( elems ); // Support: Android <=4.0 only, PhantomJS 1 only // .get() because push.apply(_, arraylike) throws on ancient WebKit push.apply( ret, elems.get() ); } return this.pushStack( ret ); }; } ); var rnumnonpx = new RegExp( "^(" + pnum + ")(?!px)[a-z%]+$", "i" ); var getStyles = function( elem ) { // Support: IE <=11 only, Firefox <=30 (#15098, #14150) // IE throws on elements created in popups // FF meanwhile throws on frame elements through "defaultView.getComputedStyle" var view = elem.ownerDocument.defaultView; if ( !view || !view.opener ) { view = window; } return view.getComputedStyle( elem ); }; var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); ( function() { // Executing both pixelPosition & boxSizingReliable tests require only one layout // so they're executed at the same time to save the second computation. function computeStyleTests() { // This is a singleton, we need to execute it only once if ( !div ) { return; } container.style.cssText = "position:absolute;left:-11111px;width:60px;" + "margin-top:1px;padding:0;border:0"; div.style.cssText = "position:relative;display:block;box-sizing:border-box;overflow:scroll;" + "margin:auto;border:1px;padding:1px;" + "width:60%;top:1%"; documentElement.appendChild( container ).appendChild( div ); var divStyle = window.getComputedStyle( div ); pixelPositionVal = divStyle.top !== "1%"; // Support: Android 4.0 - 4.3 only, Firefox <=3 - 44 reliableMarginLeftVal = roundPixelMeasures( divStyle.marginLeft ) === 12; // Support: Android 4.0 - 4.3 only, Safari <=9.1 - 10.1, iOS <=7.0 - 9.3 // Some styles come back with percentage values, even though they shouldn't div.style.right = "60%"; pixelBoxStylesVal = roundPixelMeasures( divStyle.right ) === 36; // Support: IE 9 - 11 only // Detect misreporting of content dimensions for box-sizing:border-box elements boxSizingReliableVal = roundPixelMeasures( divStyle.width ) === 36; // Support: IE 9 only // Detect overflow:scroll screwiness (gh-3699) // Support: Chrome <=64 // Don't get tricked when zoom affects offsetWidth (gh-4029) div.style.position = "absolute"; scrollboxSizeVal = roundPixelMeasures( div.offsetWidth / 3 ) === 12; documentElement.removeChild( container ); // Nullify the div so it wouldn't be stored in the memory and // it will also be a sign that checks already performed div = null; } function roundPixelMeasures( measure ) { return Math.round( parseFloat( measure ) ); } var pixelPositionVal, boxSizingReliableVal, scrollboxSizeVal, pixelBoxStylesVal, reliableMarginLeftVal, container = document.createElement( "div" ), div = document.createElement( "div" ); // Finish early in limited (non-browser) environments if ( !div.style ) { return; } // Support: IE <=9 - 11 only // Style of cloned element affects source element cloned (#8908) div.style.backgroundClip = "content-box"; div.cloneNode( true ).style.backgroundClip = ""; support.clearCloneStyle = div.style.backgroundClip === "content-box"; jQuery.extend( support, { boxSizingReliable: function() { computeStyleTests(); return boxSizingReliableVal; }, pixelBoxStyles: function() { computeStyleTests(); return pixelBoxStylesVal; }, pixelPosition: function() { computeStyleTests(); return pixelPositionVal; }, reliableMarginLeft: function() { computeStyleTests(); return reliableMarginLeftVal; }, scrollboxSize: function() { computeStyleTests(); return scrollboxSizeVal; } } ); } )(); function curCSS( elem, name, computed ) { var width, minWidth, maxWidth, ret, // Support: Firefox 
51+ // Retrieving style before computed somehow // fixes an issue with getting wrong values // on detached elements style = elem.style; computed = computed || getStyles( elem ); // getPropertyValue is needed for: // .css('filter') (IE 9 only, #12537) // .css('--customProperty) (#3144) if ( computed ) { ret = computed.getPropertyValue( name ) || computed[ name ]; if ( ret === "" && !isAttached( elem ) ) { ret = jQuery.style( elem, name ); } // A tribute to the "awesome hack by Dean Edwards" // Android Browser returns percentage for some values, // but width seems to be reliably pixels. // This is against the CSSOM draft spec: // https://drafts.csswg.org/cssom/#resolved-values if ( !support.pixelBoxStyles() && rnumnonpx.test( ret ) && rboxStyle.test( name ) ) { // Remember the original values width = style.width; minWidth = style.minWidth; maxWidth = style.maxWidth; // Put in the new values to get a computed value out style.minWidth = style.maxWidth = style.width = ret; ret = computed.width; // Revert the changed values style.width = width; style.minWidth = minWidth; style.maxWidth = maxWidth; } } return ret !== undefined ? // Support: IE <=9 - 11 only // IE returns zIndex value as an integer. ret + "" : ret; } function addGetHookIf( conditionFn, hookFn ) { // Define the hook, we'll check on the first run if it's really needed. return { get: function() { if ( conditionFn() ) { // Hook not needed (or it's not possible to use it due // to missing dependency), remove it. delete this.get; return; } // Hook needed; redefine it so that the support test is not executed again. return ( this.get = hookFn ).apply( this, arguments ); } }; } var cssPrefixes = [ "Webkit", "Moz", "ms" ], emptyStyle = document.createElement( "div" ).style, vendorProps = {}; // Return a vendor-prefixed property or undefined function vendorPropName( name ) { // Check for vendor prefixed names var capName = name[ 0 ].toUpperCase() + name.slice( 1 ), i = cssPrefixes.length; while ( i-- ) { name = cssPrefixes[ i ] + capName; if ( name in emptyStyle ) { return name; } } } // Return a potentially-mapped jQuery.cssProps or vendor prefixed property function finalPropName( name ) { var final = jQuery.cssProps[ name ] || vendorProps[ name ]; if ( final ) { return final; } if ( name in emptyStyle ) { return name; } return vendorProps[ name ] = vendorPropName( name ) || name; } var // Swappable if display is none or starts with table // except "table", "table-cell", or "table-caption" // See here for display values: https://developer.mozilla.org/en-US/docs/CSS/display rdisplayswap = /^(none|table(?!-c[ea]).+)/, rcustomProp = /^--/, cssShow = { position: "absolute", visibility: "hidden", display: "block" }, cssNormalTransform = { letterSpacing: "0", fontWeight: "400" }; function setPositiveNumber( elem, value, subtract ) { // Any relative (+/-) values have already been // normalized at this point var matches = rcssNum.exec( value ); return matches ? // Guard against undefined "subtract", e.g., when used as in cssHooks Math.max( 0, matches[ 2 ] - ( subtract || 0 ) ) + ( matches[ 3 ] || "px" ) : value; } function boxModelAdjustment( elem, dimension, box, isBorderBox, styles, computedVal ) { var i = dimension === "width" ? 1 : 0, extra = 0, delta = 0; // Adjustment may not be necessary if ( box === ( isBorderBox ? 
"border" : "content" ) ) { return 0; } for ( ; i < 4; i += 2 ) { // Both box models exclude margin if ( box === "margin" ) { delta += jQuery.css( elem, box + cssExpand[ i ], true, styles ); } // If we get here with a content-box, we're seeking "padding" or "border" or "margin" if ( !isBorderBox ) { // Add padding delta += jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); // For "border" or "margin", add border if ( box !== "padding" ) { delta += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); // But still keep track of it otherwise } else { extra += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); } // If we get here with a border-box (content + padding + border), we're seeking "content" or // "padding" or "margin" } else { // For "content", subtract padding if ( box === "content" ) { delta -= jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); } // For "content" or "padding", subtract border if ( box !== "margin" ) { delta -= jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); } } } // Account for positive content-box scroll gutter when requested by providing computedVal if ( !isBorderBox && computedVal >= 0 ) { // offsetWidth/offsetHeight is a rounded sum of content, padding, scroll gutter, and border // Assuming integer scroll gutter, subtract the rest and round down delta += Math.max( 0, Math.ceil( elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - computedVal - delta - extra - 0.5 // If offsetWidth/offsetHeight is unknown, then we can't determine content-box scroll gutter // Use an explicit zero to avoid NaN (gh-3964) ) ) || 0; } return delta; } function getWidthOrHeight( elem, dimension, extra ) { // Start with computed style var styles = getStyles( elem ), // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-4322). // Fake content-box until we know it's needed to know the true value. boxSizingNeeded = !support.boxSizingReliable() || extra, isBorderBox = boxSizingNeeded && jQuery.css( elem, "boxSizing", false, styles ) === "border-box", valueIsBorderBox = isBorderBox, val = curCSS( elem, dimension, styles ), offsetProp = "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ); // Support: Firefox <=54 // Return a confounding non-pixel value or feign ignorance, as appropriate. if ( rnumnonpx.test( val ) ) { if ( !extra ) { return val; } val = "auto"; } // Fall back to offsetWidth/offsetHeight when value is "auto" // This happens for inline elements with no explicit setting (gh-3571) // Support: Android <=4.1 - 4.3 only // Also use offsetWidth/offsetHeight for misreported inline dimensions (gh-3602) // Support: IE 9-11 only // Also use offsetWidth/offsetHeight for when box sizing is unreliable // We use getClientRects() to check for hidden/disconnected. // In those cases, the computed value can be trusted to be border-box if ( ( !support.boxSizingReliable() && isBorderBox || val === "auto" || !parseFloat( val ) && jQuery.css( elem, "display", false, styles ) === "inline" ) && elem.getClientRects().length ) { isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box"; // Where available, offsetWidth/offsetHeight approximate border box dimensions. // Where not available (e.g., SVG), assume unreliable box-sizing and interpret the // retrieved value as a content box dimension. 
valueIsBorderBox = offsetProp in elem; if ( valueIsBorderBox ) { val = elem[ offsetProp ]; } } // Normalize "" and auto val = parseFloat( val ) || 0; // Adjust for the element's box model return ( val + boxModelAdjustment( elem, dimension, extra || ( isBorderBox ? "border" : "content" ), valueIsBorderBox, styles, // Provide the current computed size to request scroll gutter calculation (gh-3589) val ) ) + "px"; } jQuery.extend( { // Add in style property hooks for overriding the default // behavior of getting and setting a style property cssHooks: { opacity: { get: function( elem, computed ) { if ( computed ) { // We should always get a number back from opacity var ret = curCSS( elem, "opacity" ); return ret === "" ? "1" : ret; } } } }, // Don't automatically add "px" to these possibly-unitless properties cssNumber: { "animationIterationCount": true, "columnCount": true, "fillOpacity": true, "flexGrow": true, "flexShrink": true, "fontWeight": true, "gridArea": true, "gridColumn": true, "gridColumnEnd": true, "gridColumnStart": true, "gridRow": true, "gridRowEnd": true, "gridRowStart": true, "lineHeight": true, "opacity": true, "order": true, "orphans": true, "widows": true, "zIndex": true, "zoom": true }, // Add in properties whose names you wish to fix before // setting or getting the value cssProps: {}, // Get and set the style property on a DOM Node style: function( elem, name, value, extra ) { // Don't set styles on text and comment nodes if ( !elem || elem.nodeType === 3 || elem.nodeType === 8 || !elem.style ) { return; } // Make sure that we're working with the right name var ret, type, hooks, origName = camelCase( name ), isCustomProp = rcustomProp.test( name ), style = elem.style; // Make sure that we're working with the right name. We don't // want to query the value if it is a CSS custom property // since they are user-defined. if ( !isCustomProp ) { name = finalPropName( origName ); } // Gets hook for the prefixed version, then unprefixed version hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; // Check if we're setting a value if ( value !== undefined ) { type = typeof value; // Convert "+=" or "-=" to relative numbers (#7345) if ( type === "string" && ( ret = rcssNum.exec( value ) ) && ret[ 1 ] ) { value = adjustCSS( elem, name, ret ); // Fixes bug #9237 type = "number"; } // Make sure that null and NaN values aren't set (#7116) if ( value == null || value !== value ) { return; } // If a number was passed in, add the unit (except for certain CSS properties) // The isCustomProp check can be removed in jQuery 4.0 when we only auto-append // "px" to a few hardcoded values. if ( type === "number" && !isCustomProp ) { value += ret && ret[ 3 ] || ( jQuery.cssNumber[ origName ] ? 
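// Illustrative effect of the cssNumber table above (a sketch; the selector
// is hypothetical): plain numbers only gain a "px" suffix for properties
// that take a length:
//
//     $( "#box" ).css( "width", 100 );    // applied as "100px"
//     $( "#box" ).css( "zIndex", 10 );    // applied as 10, unitless
//     $( "#box" ).css( "opacity", 0.5 );  // applied as 0.5, unitless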
"" : "px" ); } // background-* props affect original clone's values if ( !support.clearCloneStyle && value === "" && name.indexOf( "background" ) === 0 ) { style[ name ] = "inherit"; } // If a hook was provided, use that value, otherwise just set the specified value if ( !hooks || !( "set" in hooks ) || ( value = hooks.set( elem, value, extra ) ) !== undefined ) { if ( isCustomProp ) { style.setProperty( name, value ); } else { style[ name ] = value; } } } else { // If a hook was provided get the non-computed value from there if ( hooks && "get" in hooks && ( ret = hooks.get( elem, false, extra ) ) !== undefined ) { return ret; } // Otherwise just get the value from the style object return style[ name ]; } }, css: function( elem, name, extra, styles ) { var val, num, hooks, origName = camelCase( name ), isCustomProp = rcustomProp.test( name ); // Make sure that we're working with the right name. We don't // want to modify the value if it is a CSS custom property // since they are user-defined. if ( !isCustomProp ) { name = finalPropName( origName ); } // Try prefixed name followed by the unprefixed name hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; // If a hook was provided get the computed value from there if ( hooks && "get" in hooks ) { val = hooks.get( elem, true, extra ); } // Otherwise, if a way to get the computed value exists, use that if ( val === undefined ) { val = curCSS( elem, name, styles ); } // Convert "normal" to computed value if ( val === "normal" && name in cssNormalTransform ) { val = cssNormalTransform[ name ]; } // Make numeric if forced or a qualifier was provided and val looks numeric if ( extra === "" || extra ) { num = parseFloat( val ); return extra === true || isFinite( num ) ? num || 0 : val; } return val; } } ); jQuery.each( [ "height", "width" ], function( i, dimension ) { jQuery.cssHooks[ dimension ] = { get: function( elem, computed, extra ) { if ( computed ) { // Certain elements can have dimension info if we invisibly show them // but it must have a current display style that would benefit return rdisplayswap.test( jQuery.css( elem, "display" ) ) && // Support: Safari 8+ // Table columns in Safari have non-zero offsetWidth & zero // getBoundingClientRect().width unless display is changed. // Support: IE <=11 only // Running getBoundingClientRect on a disconnected node // in IE throws an error. ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? swap( elem, cssShow, function() { return getWidthOrHeight( elem, dimension, extra ); } ) : getWidthOrHeight( elem, dimension, extra ); } }, set: function( elem, value, extra ) { var matches, styles = getStyles( elem ), // Only read styles.position if the test has a chance to fail // to avoid forcing a reflow. scrollboxSizeBuggy = !support.scrollboxSize() && styles.position === "absolute", // To avoid forcing a reflow, only fetch boxSizing if we need it (gh-3991) boxSizingNeeded = scrollboxSizeBuggy || extra, isBorderBox = boxSizingNeeded && jQuery.css( elem, "boxSizing", false, styles ) === "border-box", subtract = extra ? 
boxModelAdjustment( elem, dimension, extra, isBorderBox, styles ) : 0; // Account for unreliable border-box dimensions by comparing offset* to computed and // faking a content-box to get border and padding (gh-3699) if ( isBorderBox && scrollboxSizeBuggy ) { subtract -= Math.ceil( elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - parseFloat( styles[ dimension ] ) - boxModelAdjustment( elem, dimension, "border", false, styles ) - 0.5 ); } // Convert to pixels if value adjustment is needed if ( subtract && ( matches = rcssNum.exec( value ) ) && ( matches[ 3 ] || "px" ) !== "px" ) { elem.style[ dimension ] = value; value = jQuery.css( elem, dimension ); } return setPositiveNumber( elem, value, subtract ); } }; } ); jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, function( elem, computed ) { if ( computed ) { return ( parseFloat( curCSS( elem, "marginLeft" ) ) || elem.getBoundingClientRect().left - swap( elem, { marginLeft: 0 }, function() { return elem.getBoundingClientRect().left; } ) ) + "px"; } } ); // These hooks are used by animate to expand properties jQuery.each( { margin: "", padding: "", border: "Width" }, function( prefix, suffix ) { jQuery.cssHooks[ prefix + suffix ] = { expand: function( value ) { var i = 0, expanded = {}, // Assumes a single number if not a string parts = typeof value === "string" ? value.split( " " ) : [ value ]; for ( ; i < 4; i++ ) { expanded[ prefix + cssExpand[ i ] + suffix ] = parts[ i ] || parts[ i - 2 ] || parts[ 0 ]; } return expanded; } }; if ( prefix !== "margin" ) { jQuery.cssHooks[ prefix + suffix ].set = setPositiveNumber; } } ); jQuery.fn.extend( { css: function( name, value ) { return access( this, function( elem, name, value ) { var styles, len, map = {}, i = 0; if ( Array.isArray( name ) ) { styles = getStyles( elem ); len = name.length; for ( ; i < len; i++ ) { map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles ); } return map; } return value !== undefined ? jQuery.style( elem, name, value ) : jQuery.css( elem, name ); }, name, value, arguments.length > 1 ); } } ); function Tween( elem, options, prop, end, easing ) { return new Tween.prototype.init( elem, options, prop, end, easing ); } jQuery.Tween = Tween; Tween.prototype = { constructor: Tween, init: function( elem, options, prop, end, easing, unit ) { this.elem = elem; this.prop = prop; this.easing = easing || jQuery.easing._default; this.options = options; this.start = this.now = this.cur(); this.end = end; this.unit = unit || ( jQuery.cssNumber[ prop ] ? "" : "px" ); }, cur: function() { var hooks = Tween.propHooks[ this.prop ]; return hooks && hooks.get ? hooks.get( this ) : Tween.propHooks._default.get( this ); }, run: function( percent ) { var eased, hooks = Tween.propHooks[ this.prop ]; if ( this.options.duration ) { this.pos = eased = jQuery.easing[ this.easing ]( percent, this.options.duration * percent, 0, 1, this.options.duration ); } else { this.pos = eased = percent; } this.now = ( this.end - this.start ) * eased + this.start; if ( this.options.step ) { this.options.step.call( this.elem, this.now, this ); } if ( hooks && hooks.set ) { hooks.set( this ); } else { Tween.propHooks._default.set( this ); } return this; } }; Tween.prototype.init.prototype = Tween.prototype; Tween.propHooks = { _default: { get: function( tween ) { var result; // Use a property on the element directly when it is not a DOM element, // or when there is no matching style property that exists. 
if ( tween.elem.nodeType !== 1 || tween.elem[ tween.prop ] != null && tween.elem.style[ tween.prop ] == null ) { return tween.elem[ tween.prop ]; } // Passing an empty string as a 3rd parameter to .css will automatically // attempt a parseFloat and fallback to a string if the parse fails. // Simple values such as "10px" are parsed to Float; // complex values such as "rotate(1rad)" are returned as-is. result = jQuery.css( tween.elem, tween.prop, "" ); // Empty strings, null, undefined and "auto" are converted to 0. return !result || result === "auto" ? 0 : result; }, set: function( tween ) { // Use step hook for back compat. // Use cssHook if its there. // Use .style if available and use plain properties where available. if ( jQuery.fx.step[ tween.prop ] ) { jQuery.fx.step[ tween.prop ]( tween ); } else if ( tween.elem.nodeType === 1 && ( jQuery.cssHooks[ tween.prop ] || tween.elem.style[ finalPropName( tween.prop ) ] != null ) ) { jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); } else { tween.elem[ tween.prop ] = tween.now; } } } }; // Support: IE <=9 only // Panic based approach to setting things on disconnected nodes Tween.propHooks.scrollTop = Tween.propHooks.scrollLeft = { set: function( tween ) { if ( tween.elem.nodeType && tween.elem.parentNode ) { tween.elem[ tween.prop ] = tween.now; } } }; jQuery.easing = { linear: function( p ) { return p; }, swing: function( p ) { return 0.5 - Math.cos( p * Math.PI ) / 2; }, _default: "swing" }; jQuery.fx = Tween.prototype.init; // Back compat <1.8 extension point jQuery.fx.step = {}; var fxNow, inProgress, rfxtypes = /^(?:toggle|show|hide)$/, rrun = /queueHooks$/; function schedule() { if ( inProgress ) { if ( document.hidden === false && window.requestAnimationFrame ) { window.requestAnimationFrame( schedule ); } else { window.setTimeout( schedule, jQuery.fx.interval ); } jQuery.fx.tick(); } } // Animations created synchronously will run synchronously function createFxNow() { window.setTimeout( function() { fxNow = undefined; } ); return ( fxNow = Date.now() ); } // Generate parameters to create a standard animation function genFx( type, includeWidth ) { var which, i = 0, attrs = { height: type }; // If we include width, step value is 1 to do all cssExpand values, // otherwise step value is 2 to skip over Left and Right includeWidth = includeWidth ? 
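// Illustrative shape of the generated property map (a sketch):
//
//     genFx( "show", true )
//     // ~ { height: "show", width: "show", opacity: "show",
//     //     marginTop: "show", paddingTop: "show", ... all four sides }
//
//     genFx( "hide" )
//     // ~ { height: "hide", marginTop: "hide", paddingTop: "hide",
//     //     marginBottom: "hide", paddingBottom: "hide" }
//
// The step trick below (2 - includeWidth) is what skips Left/Right when
// width is excluded.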
1 : 0; for ( ; i < 4; i += 2 - includeWidth ) { which = cssExpand[ i ]; attrs[ "margin" + which ] = attrs[ "padding" + which ] = type; } if ( includeWidth ) { attrs.opacity = attrs.width = type; } return attrs; } function createTween( value, prop, animation ) { var tween, collection = ( Animation.tweeners[ prop ] || [] ).concat( Animation.tweeners[ "*" ] ), index = 0, length = collection.length; for ( ; index < length; index++ ) { if ( ( tween = collection[ index ].call( animation, prop, value ) ) ) { // We're done with this property return tween; } } } function defaultPrefilter( elem, props, opts ) { var prop, value, toggle, hooks, oldfire, propTween, restoreDisplay, display, isBox = "width" in props || "height" in props, anim = this, orig = {}, style = elem.style, hidden = elem.nodeType && isHiddenWithinTree( elem ), dataShow = dataPriv.get( elem, "fxshow" ); // Queue-skipping animations hijack the fx hooks if ( !opts.queue ) { hooks = jQuery._queueHooks( elem, "fx" ); if ( hooks.unqueued == null ) { hooks.unqueued = 0; oldfire = hooks.empty.fire; hooks.empty.fire = function() { if ( !hooks.unqueued ) { oldfire(); } }; } hooks.unqueued++; anim.always( function() { // Ensure the complete handler is called before this completes anim.always( function() { hooks.unqueued--; if ( !jQuery.queue( elem, "fx" ).length ) { hooks.empty.fire(); } } ); } ); } // Detect show/hide animations for ( prop in props ) { value = props[ prop ]; if ( rfxtypes.test( value ) ) { delete props[ prop ]; toggle = toggle || value === "toggle"; if ( value === ( hidden ? "hide" : "show" ) ) { // Pretend to be hidden if this is a "show" and // there is still data from a stopped show/hide if ( value === "show" && dataShow && dataShow[ prop ] !== undefined ) { hidden = true; // Ignore all other no-op show/hide data } else { continue; } } orig[ prop ] = dataShow && dataShow[ prop ] || jQuery.style( elem, prop ); } } // Bail out if this is a no-op like .hide().hide() propTween = !jQuery.isEmptyObject( props ); if ( !propTween && jQuery.isEmptyObject( orig ) ) { return; } // Restrict "overflow" and "display" styles during box animations if ( isBox && elem.nodeType === 1 ) { // Support: IE <=9 - 11, Edge 12 - 15 // Record all 3 overflow attributes because IE does not infer the shorthand // from identically-valued overflowX and overflowY and Edge just mirrors // the overflowX value there. opts.overflow = [ style.overflow, style.overflowX, style.overflowY ]; // Identify a display type, preferring old show/hide data over the CSS cascade restoreDisplay = dataShow && dataShow.display; if ( restoreDisplay == null ) { restoreDisplay = dataPriv.get( elem, "display" ); } display = jQuery.css( elem, "display" ); if ( display === "none" ) { if ( restoreDisplay ) { display = restoreDisplay; } else { // Get nonempty value(s) by temporarily forcing visibility showHide( [ elem ], true ); restoreDisplay = elem.style.display || restoreDisplay; display = jQuery.css( elem, "display" ); showHide( [ elem ] ); } } // Animate inline elements as inline-block if ( display === "inline" || display === "inline-block" && restoreDisplay != null ) { if ( jQuery.css( elem, "float" ) === "none" ) { // Restore the original display value at the end of pure show/hide animations if ( !propTween ) { anim.done( function() { style.display = restoreDisplay; } ); if ( restoreDisplay == null ) { display = style.display; restoreDisplay = display === "none" ? 
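// Illustrative note (a sketch; the element is hypothetical): inline boxes
// cannot animate width/height, so a pure show/hide such as
//
//     $( "span.badge" ).slideDown();
//
// runs with display forced to "inline-block", and the done() callback
// registered just above restores the original display afterwards.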
"" : display; } } style.display = "inline-block"; } } } if ( opts.overflow ) { style.overflow = "hidden"; anim.always( function() { style.overflow = opts.overflow[ 0 ]; style.overflowX = opts.overflow[ 1 ]; style.overflowY = opts.overflow[ 2 ]; } ); } // Implement show/hide animations propTween = false; for ( prop in orig ) { // General show/hide setup for this element animation if ( !propTween ) { if ( dataShow ) { if ( "hidden" in dataShow ) { hidden = dataShow.hidden; } } else { dataShow = dataPriv.access( elem, "fxshow", { display: restoreDisplay } ); } // Store hidden/visible for toggle so `.stop().toggle()` "reverses" if ( toggle ) { dataShow.hidden = !hidden; } // Show elements before animating them if ( hidden ) { showHide( [ elem ], true ); } /* eslint-disable no-loop-func */ anim.done( function() { /* eslint-enable no-loop-func */ // The final step of a "hide" animation is actually hiding the element if ( !hidden ) { showHide( [ elem ] ); } dataPriv.remove( elem, "fxshow" ); for ( prop in orig ) { jQuery.style( elem, prop, orig[ prop ] ); } } ); } // Per-property setup propTween = createTween( hidden ? dataShow[ prop ] : 0, prop, anim ); if ( !( prop in dataShow ) ) { dataShow[ prop ] = propTween.start; if ( hidden ) { propTween.end = propTween.start; propTween.start = 0; } } } } function propFilter( props, specialEasing ) { var index, name, easing, value, hooks; // camelCase, specialEasing and expand cssHook pass for ( index in props ) { name = camelCase( index ); easing = specialEasing[ name ]; value = props[ index ]; if ( Array.isArray( value ) ) { easing = value[ 1 ]; value = props[ index ] = value[ 0 ]; } if ( index !== name ) { props[ name ] = value; delete props[ index ]; } hooks = jQuery.cssHooks[ name ]; if ( hooks && "expand" in hooks ) { value = hooks.expand( value ); delete props[ name ]; // Not quite $.extend, this won't overwrite existing keys. 
// Reusing 'index' because we have the correct "name" for ( index in value ) { if ( !( index in props ) ) { props[ index ] = value[ index ]; specialEasing[ index ] = easing; } } } else { specialEasing[ name ] = easing; } } } function Animation( elem, properties, options ) { var result, stopped, index = 0, length = Animation.prefilters.length, deferred = jQuery.Deferred().always( function() { // Don't match elem in the :animated selector delete tick.elem; } ), tick = function() { if ( stopped ) { return false; } var currentTime = fxNow || createFxNow(), remaining = Math.max( 0, animation.startTime + animation.duration - currentTime ), // Support: Android 2.3 only // Archaic crash bug won't allow us to use `1 - ( 0.5 || 0 )` (#12497) temp = remaining / animation.duration || 0, percent = 1 - temp, index = 0, length = animation.tweens.length; for ( ; index < length; index++ ) { animation.tweens[ index ].run( percent ); } deferred.notifyWith( elem, [ animation, percent, remaining ] ); // If there's more to do, yield if ( percent < 1 && length ) { return remaining; } // If this was an empty animation, synthesize a final progress notification if ( !length ) { deferred.notifyWith( elem, [ animation, 1, 0 ] ); } // Resolve the animation and report its conclusion deferred.resolveWith( elem, [ animation ] ); return false; }, animation = deferred.promise( { elem: elem, props: jQuery.extend( {}, properties ), opts: jQuery.extend( true, { specialEasing: {}, easing: jQuery.easing._default }, options ), originalProperties: properties, originalOptions: options, startTime: fxNow || createFxNow(), duration: options.duration, tweens: [], createTween: function( prop, end ) { var tween = jQuery.Tween( elem, animation.opts, prop, end, animation.opts.specialEasing[ prop ] || animation.opts.easing ); animation.tweens.push( tween ); return tween; }, stop: function( gotoEnd ) { var index = 0, // If we are going to the end, we want to run all the tweens // otherwise we skip this part length = gotoEnd ? 
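// Illustrative calls (a sketch; the selector is hypothetical):
//
//     $( "#box" ).stop();              // reject: styles stay mid-flight
//     $( "#box" ).stop( true, true );  // clear the queue and run every
//                                      // remaining tween at percent 1
//
// The ternary around this comment runs the tweens only when gotoEnd is set.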
animation.tweens.length : 0; if ( stopped ) { return this; } stopped = true; for ( ; index < length; index++ ) { animation.tweens[ index ].run( 1 ); } // Resolve when we played the last frame; otherwise, reject if ( gotoEnd ) { deferred.notifyWith( elem, [ animation, 1, 0 ] ); deferred.resolveWith( elem, [ animation, gotoEnd ] ); } else { deferred.rejectWith( elem, [ animation, gotoEnd ] ); } return this; } } ), props = animation.props; propFilter( props, animation.opts.specialEasing ); for ( ; index < length; index++ ) { result = Animation.prefilters[ index ].call( animation, elem, props, animation.opts ); if ( result ) { if ( isFunction( result.stop ) ) { jQuery._queueHooks( animation.elem, animation.opts.queue ).stop = result.stop.bind( result ); } return result; } } jQuery.map( props, createTween, animation ); if ( isFunction( animation.opts.start ) ) { animation.opts.start.call( elem, animation ); } // Attach callbacks from options animation .progress( animation.opts.progress ) .done( animation.opts.done, animation.opts.complete ) .fail( animation.opts.fail ) .always( animation.opts.always ); jQuery.fx.timer( jQuery.extend( tick, { elem: elem, anim: animation, queue: animation.opts.queue } ) ); return animation; } jQuery.Animation = jQuery.extend( Animation, { tweeners: { "*": [ function( prop, value ) { var tween = this.createTween( prop, value ); adjustCSS( tween.elem, prop, rcssNum.exec( value ), tween ); return tween; } ] }, tweener: function( props, callback ) { if ( isFunction( props ) ) { callback = props; props = [ "*" ]; } else { props = props.match( rnothtmlwhite ); } var prop, index = 0, length = props.length; for ( ; index < length; index++ ) { prop = props[ index ]; Animation.tweeners[ prop ] = Animation.tweeners[ prop ] || []; Animation.tweeners[ prop ].unshift( callback ); } }, prefilters: [ defaultPrefilter ], prefilter: function( callback, prepend ) { if ( prepend ) { Animation.prefilters.unshift( callback ); } else { Animation.prefilters.push( callback ); } } } ); jQuery.speed = function( speed, easing, fn ) { var opt = speed && typeof speed === "object" ? 
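// Illustrative normalization performed by jQuery.speed (a sketch; "fn"
// stands for any completion callback):
//
//     jQuery.speed( "slow" )
//     // ~ { duration: 600, queue: "fx", complete: ... }
//     jQuery.speed( 250, "linear", fn )
//     // ~ { duration: 250, easing: "linear", complete: fn, queue: "fx" }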
jQuery.extend( {}, speed ) : { complete: fn || !fn && easing || isFunction( speed ) && speed, duration: speed, easing: fn && easing || easing && !isFunction( easing ) && easing }; // Go to the end state if fx are off if ( jQuery.fx.off ) { opt.duration = 0; } else { if ( typeof opt.duration !== "number" ) { if ( opt.duration in jQuery.fx.speeds ) { opt.duration = jQuery.fx.speeds[ opt.duration ]; } else { opt.duration = jQuery.fx.speeds._default; } } } // Normalize opt.queue - true/undefined/null -> "fx" if ( opt.queue == null || opt.queue === true ) { opt.queue = "fx"; } // Queueing opt.old = opt.complete; opt.complete = function() { if ( isFunction( opt.old ) ) { opt.old.call( this ); } if ( opt.queue ) { jQuery.dequeue( this, opt.queue ); } }; return opt; }; jQuery.fn.extend( { fadeTo: function( speed, to, easing, callback ) { // Show any hidden elements after setting opacity to 0 return this.filter( isHiddenWithinTree ).css( "opacity", 0 ).show() // Animate to the value specified .end().animate( { opacity: to }, speed, easing, callback ); }, animate: function( prop, speed, easing, callback ) { var empty = jQuery.isEmptyObject( prop ), optall = jQuery.speed( speed, easing, callback ), doAnimation = function() { // Operate on a copy of prop so per-property easing won't be lost var anim = Animation( this, jQuery.extend( {}, prop ), optall ); // Empty animations, or finishing resolves immediately if ( empty || dataPriv.get( this, "finish" ) ) { anim.stop( true ); } }; doAnimation.finish = doAnimation; return empty || optall.queue === false ? this.each( doAnimation ) : this.queue( optall.queue, doAnimation ); }, stop: function( type, clearQueue, gotoEnd ) { var stopQueue = function( hooks ) { var stop = hooks.stop; delete hooks.stop; stop( gotoEnd ); }; if ( typeof type !== "string" ) { gotoEnd = clearQueue; clearQueue = type; type = undefined; } if ( clearQueue && type !== false ) { this.queue( type || "fx", [] ); } return this.each( function() { var dequeue = true, index = type != null && type + "queueHooks", timers = jQuery.timers, data = dataPriv.get( this ); if ( index ) { if ( data[ index ] && data[ index ].stop ) { stopQueue( data[ index ] ); } } else { for ( index in data ) { if ( data[ index ] && data[ index ].stop && rrun.test( index ) ) { stopQueue( data[ index ] ); } } } for ( index = timers.length; index--; ) { if ( timers[ index ].elem === this && ( type == null || timers[ index ].queue === type ) ) { timers[ index ].anim.stop( gotoEnd ); dequeue = false; timers.splice( index, 1 ); } } // Start the next in the queue if the last step wasn't forced. // Timers currently will call their complete callbacks, which // will dequeue but only if they were gotoEnd. if ( dequeue || !gotoEnd ) { jQuery.dequeue( this, type ); } } ); }, finish: function( type ) { if ( type !== false ) { type = type || "fx"; } return this.each( function() { var index, data = dataPriv.get( this ), queue = data[ type + "queue" ], hooks = data[ type + "queueHooks" ], timers = jQuery.timers, length = queue ? 
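// Illustrative difference from .stop( true, true ) (a sketch; the selector
// is hypothetical): .finish() also fast-forwards the animations still
// waiting in the queue, not just the active one:
//
//     $( "#box" ).slideUp().fadeOut().finish();
//     // both effects jump straight to their end values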
queue.length : 0; // Enable finishing flag on private data data.finish = true; // Empty the queue first jQuery.queue( this, type, [] ); if ( hooks && hooks.stop ) { hooks.stop.call( this, true ); } // Look for any active animations, and finish them for ( index = timers.length; index--; ) { if ( timers[ index ].elem === this && timers[ index ].queue === type ) { timers[ index ].anim.stop( true ); timers.splice( index, 1 ); } } // Look for any animations in the old queue and finish them for ( index = 0; index < length; index++ ) { if ( queue[ index ] && queue[ index ].finish ) { queue[ index ].finish.call( this ); } } // Turn off finishing flag delete data.finish; } ); } } ); jQuery.each( [ "toggle", "show", "hide" ], function( i, name ) { var cssFn = jQuery.fn[ name ]; jQuery.fn[ name ] = function( speed, easing, callback ) { return speed == null || typeof speed === "boolean" ? cssFn.apply( this, arguments ) : this.animate( genFx( name, true ), speed, easing, callback ); }; } ); // Generate shortcuts for custom animations jQuery.each( { slideDown: genFx( "show" ), slideUp: genFx( "hide" ), slideToggle: genFx( "toggle" ), fadeIn: { opacity: "show" }, fadeOut: { opacity: "hide" }, fadeToggle: { opacity: "toggle" } }, function( name, props ) { jQuery.fn[ name ] = function( speed, easing, callback ) { return this.animate( props, speed, easing, callback ); }; } ); jQuery.timers = []; jQuery.fx.tick = function() { var timer, i = 0, timers = jQuery.timers; fxNow = Date.now(); for ( ; i < timers.length; i++ ) { timer = timers[ i ]; // Run the timer and safely remove it when done (allowing for external removal) if ( !timer() && timers[ i ] === timer ) { timers.splice( i--, 1 ); } } if ( !timers.length ) { jQuery.fx.stop(); } fxNow = undefined; }; jQuery.fx.timer = function( timer ) { jQuery.timers.push( timer ); jQuery.fx.start(); }; jQuery.fx.interval = 13; jQuery.fx.start = function() { if ( inProgress ) { return; } inProgress = true; schedule(); }; jQuery.fx.stop = function() { inProgress = null; }; jQuery.fx.speeds = { slow: 600, fast: 200, // Default speed _default: 400 }; // Based off of the plugin by Clint Helfers, with permission. // https://web.archive.org/web/20100324014747/http://blindsignals.com/index.php/2009/07/jquery-delay/ jQuery.fn.delay = function( time, type ) { time = jQuery.fx ? 
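// Illustrative usage (a sketch; the selector is hypothetical): .delay()
// accepts milliseconds or a named speed, and the hooks.stop assigned below
// lets .stop()/.finish() cancel the pending timeout:
//
//     $( "#banner" ).slideDown().delay( "slow" ).fadeOut();  // 600ms pause
//     $( "#banner" ).slideDown().delay( 800 ).fadeOut();     // 800ms pause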
jQuery.fx.speeds[ time ] || time : time; type = type || "fx"; return this.queue( type, function( next, hooks ) { var timeout = window.setTimeout( next, time ); hooks.stop = function() { window.clearTimeout( timeout ); }; } ); }; ( function() { var input = document.createElement( "input" ), select = document.createElement( "select" ), opt = select.appendChild( document.createElement( "option" ) ); input.type = "checkbox"; // Support: Android <=4.3 only // Default value for a checkbox should be "on" support.checkOn = input.value !== ""; // Support: IE <=11 only // Must access selectedIndex to make default options select support.optSelected = opt.selected; // Support: IE <=11 only // An input loses its value after becoming a radio input = document.createElement( "input" ); input.value = "t"; input.type = "radio"; support.radioValue = input.value === "t"; } )(); var boolHook, attrHandle = jQuery.expr.attrHandle; jQuery.fn.extend( { attr: function( name, value ) { return access( this, jQuery.attr, name, value, arguments.length > 1 ); }, removeAttr: function( name ) { return this.each( function() { jQuery.removeAttr( this, name ); } ); } } ); jQuery.extend( { attr: function( elem, name, value ) { var ret, hooks, nType = elem.nodeType; // Don't get/set attributes on text, comment and attribute nodes if ( nType === 3 || nType === 8 || nType === 2 ) { return; } // Fallback to prop when attributes are not supported if ( typeof elem.getAttribute === "undefined" ) { return jQuery.prop( elem, name, value ); } // Attribute hooks are determined by the lowercase version // Grab necessary hook if one is defined if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { hooks = jQuery.attrHooks[ name.toLowerCase() ] || ( jQuery.expr.match.bool.test( name ) ? boolHook : undefined ); } if ( value !== undefined ) { if ( value === null ) { jQuery.removeAttr( elem, name ); return; } if ( hooks && "set" in hooks && ( ret = hooks.set( elem, value, name ) ) !== undefined ) { return ret; } elem.setAttribute( name, value + "" ); return value; } if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { return ret; } ret = jQuery.find.attr( elem, name ); // Non-existent attributes return null, we normalize to undefined return ret == null ? 
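// Illustrative contrast with .prop() (a sketch; the checkbox selector is
// hypothetical): .attr() reflects markup, normalizing getAttribute's null
// to undefined, while .prop() reads the live DOM state:
//
//     $( "#opt" ).attr( "checked" );  // "checked" or undefined
//     $( "#opt" ).prop( "checked" );  // true or false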
undefined : ret; }, attrHooks: { type: { set: function( elem, value ) { if ( !support.radioValue && value === "radio" && nodeName( elem, "input" ) ) { var val = elem.value; elem.setAttribute( "type", value ); if ( val ) { elem.value = val; } return value; } } } }, removeAttr: function( elem, value ) { var name, i = 0, // Attribute names can contain non-HTML whitespace characters // https://html.spec.whatwg.org/multipage/syntax.html#attributes-2 attrNames = value && value.match( rnothtmlwhite ); if ( attrNames && elem.nodeType === 1 ) { while ( ( name = attrNames[ i++ ] ) ) { elem.removeAttribute( name ); } } } } ); // Hooks for boolean attributes boolHook = { set: function( elem, value, name ) { if ( value === false ) { // Remove boolean attributes when set to false jQuery.removeAttr( elem, name ); } else { elem.setAttribute( name, name ); } return name; } }; jQuery.each( jQuery.expr.match.bool.source.match( /\w+/g ), function( i, name ) { var getter = attrHandle[ name ] || jQuery.find.attr; attrHandle[ name ] = function( elem, name, isXML ) { var ret, handle, lowercaseName = name.toLowerCase(); if ( !isXML ) { // Avoid an infinite loop by temporarily removing this function from the getter handle = attrHandle[ lowercaseName ]; attrHandle[ lowercaseName ] = ret; ret = getter( elem, name, isXML ) != null ? lowercaseName : null; attrHandle[ lowercaseName ] = handle; } return ret; }; } ); var rfocusable = /^(?:input|select|textarea|button)$/i, rclickable = /^(?:a|area)$/i; jQuery.fn.extend( { prop: function( name, value ) { return access( this, jQuery.prop, name, value, arguments.length > 1 ); }, removeProp: function( name ) { return this.each( function() { delete this[ jQuery.propFix[ name ] || name ]; } ); } } ); jQuery.extend( { prop: function( elem, name, value ) { var ret, hooks, nType = elem.nodeType; // Don't get/set properties on text, comment and attribute nodes if ( nType === 3 || nType === 8 || nType === 2 ) { return; } if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { // Fix name and attach hooks name = jQuery.propFix[ name ] || name; hooks = jQuery.propHooks[ name ]; } if ( value !== undefined ) { if ( hooks && "set" in hooks && ( ret = hooks.set( elem, value, name ) ) !== undefined ) { return ret; } return ( elem[ name ] = value ); } if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { return ret; } return elem[ name ]; }, propHooks: { tabIndex: { get: function( elem ) { // Support: IE <=9 - 11 only // elem.tabIndex doesn't always return the // correct value when it hasn't been explicitly set // https://web.archive.org/web/20141116233347/http://fluidproject.org/blog/2008/01/09/getting-setting-and-removing-tabindex-values-with-javascript/ // Use proper attribute retrieval(#12072) var tabindex = jQuery.find.attr( elem, "tabindex" ); if ( tabindex ) { return parseInt( tabindex, 10 ); } if ( rfocusable.test( elem.nodeName ) || rclickable.test( elem.nodeName ) && elem.href ) { return 0; } return -1; } } }, propFix: { "for": "htmlFor", "class": "className" } } ); // Support: IE <=11 only // Accessing the selectedIndex property // forces the browser to respect setting selected // on the option // The getter ensures a default option is selected // when in an optgroup // eslint rule "no-unused-expressions" is disabled for this code // since it considers such accessions noop if ( !support.optSelected ) { jQuery.propHooks.selected = { get: function( elem ) { /* eslint no-unused-expressions: "off" */ var parent = elem.parentNode; if ( parent && parent.parentNode ) { 
parent.parentNode.selectedIndex; } return null; }, set: function( elem ) { /* eslint no-unused-expressions: "off" */ var parent = elem.parentNode; if ( parent ) { parent.selectedIndex; if ( parent.parentNode ) { parent.parentNode.selectedIndex; } } } }; } jQuery.each( [ "tabIndex", "readOnly", "maxLength", "cellSpacing", "cellPadding", "rowSpan", "colSpan", "useMap", "frameBorder", "contentEditable" ], function() { jQuery.propFix[ this.toLowerCase() ] = this; } ); // Strip and collapse whitespace according to HTML spec // https://infra.spec.whatwg.org/#strip-and-collapse-ascii-whitespace function stripAndCollapse( value ) { var tokens = value.match( rnothtmlwhite ) || []; return tokens.join( " " ); } function getClass( elem ) { return elem.getAttribute && elem.getAttribute( "class" ) || ""; } function classesToArray( value ) { if ( Array.isArray( value ) ) { return value; } if ( typeof value === "string" ) { return value.match( rnothtmlwhite ) || []; } return []; } jQuery.fn.extend( { addClass: function( value ) { var classes, elem, cur, curValue, clazz, j, finalValue, i = 0; if ( isFunction( value ) ) { return this.each( function( j ) { jQuery( this ).addClass( value.call( this, j, getClass( this ) ) ); } ); } classes = classesToArray( value ); if ( classes.length ) { while ( ( elem = this[ i++ ] ) ) { curValue = getClass( elem ); cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); if ( cur ) { j = 0; while ( ( clazz = classes[ j++ ] ) ) { if ( cur.indexOf( " " + clazz + " " ) < 0 ) { cur += clazz + " "; } } // Only assign if different to avoid unneeded rendering. finalValue = stripAndCollapse( cur ); if ( curValue !== finalValue ) { elem.setAttribute( "class", finalValue ); } } } } return this; }, removeClass: function( value ) { var classes, elem, cur, curValue, clazz, j, finalValue, i = 0; if ( isFunction( value ) ) { return this.each( function( j ) { jQuery( this ).removeClass( value.call( this, j, getClass( this ) ) ); } ); } if ( !arguments.length ) { return this.attr( "class", "" ); } classes = classesToArray( value ); if ( classes.length ) { while ( ( elem = this[ i++ ] ) ) { curValue = getClass( elem ); // This expression is here for better compressibility (see addClass) cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); if ( cur ) { j = 0; while ( ( clazz = classes[ j++ ] ) ) { // Remove *all* instances while ( cur.indexOf( " " + clazz + " " ) > -1 ) { cur = cur.replace( " " + clazz + " ", " " ); } } // Only assign if different to avoid unneeded rendering. finalValue = stripAndCollapse( cur ); if ( curValue !== finalValue ) { elem.setAttribute( "class", finalValue ); } } } } return this; }, toggleClass: function( value, stateVal ) { var type = typeof value, isValidValue = type === "string" || Array.isArray( value ); if ( typeof stateVal === "boolean" && isValidValue ) { return stateVal ? 
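// Illustrative calls (a sketch; selector and flag are hypothetical): with a
// boolean second argument toggleClass() reduces to addClass()/removeClass():
//
//     $( "#row" ).toggleClass( "active" );        // flip per element
//     $( "#row" ).toggleClass( "active", isOn );  // force on/off by isOn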
this.addClass( value ) : this.removeClass( value ); } if ( isFunction( value ) ) { return this.each( function( i ) { jQuery( this ).toggleClass( value.call( this, i, getClass( this ), stateVal ), stateVal ); } ); } return this.each( function() { var className, i, self, classNames; if ( isValidValue ) { // Toggle individual class names i = 0; self = jQuery( this ); classNames = classesToArray( value ); while ( ( className = classNames[ i++ ] ) ) { // Check each className given, space separated list if ( self.hasClass( className ) ) { self.removeClass( className ); } else { self.addClass( className ); } } // Toggle whole class name } else if ( value === undefined || type === "boolean" ) { className = getClass( this ); if ( className ) { // Store className if set dataPriv.set( this, "__className__", className ); } // If the element has a class name or if we're passed `false`, // then remove the whole classname (if there was one, the above saved it). // Otherwise bring back whatever was previously saved (if anything), // falling back to the empty string if nothing was stored. if ( this.setAttribute ) { this.setAttribute( "class", className || value === false ? "" : dataPriv.get( this, "__className__" ) || "" ); } } } ); }, hasClass: function( selector ) { var className, elem, i = 0; className = " " + selector + " "; while ( ( elem = this[ i++ ] ) ) { if ( elem.nodeType === 1 && ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { return true; } } return false; } } ); var rreturn = /\r/g; jQuery.fn.extend( { val: function( value ) { var hooks, ret, valueIsFunction, elem = this[ 0 ]; if ( !arguments.length ) { if ( elem ) { hooks = jQuery.valHooks[ elem.type ] || jQuery.valHooks[ elem.nodeName.toLowerCase() ]; if ( hooks && "get" in hooks && ( ret = hooks.get( elem, "value" ) ) !== undefined ) { return ret; } ret = elem.value; // Handle most common string cases if ( typeof ret === "string" ) { return ret.replace( rreturn, "" ); } // Handle cases where value is null/undef or number return ret == null ? "" : ret; } return; } valueIsFunction = isFunction( value ); return this.each( function( i ) { var val; if ( this.nodeType !== 1 ) { return; } if ( valueIsFunction ) { val = value.call( this, i, jQuery( this ).val() ); } else { val = value; } // Treat null/undefined as ""; convert numbers to string if ( val == null ) { val = ""; } else if ( typeof val === "number" ) { val += ""; } else if ( Array.isArray( val ) ) { val = jQuery.map( val, function( value ) { return value == null ? "" : value + ""; } ); } hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; // If set returns undefined, fall back to normal setting if ( !hooks || !( "set" in hooks ) || hooks.set( this, val, "value" ) === undefined ) { this.value = val; } } ); } } ); jQuery.extend( { valHooks: { option: { get: function( elem ) { var val = jQuery.find.attr( elem, "value" ); return val != null ? val : // Support: IE <=10 - 11 only // option.text throws exceptions (#14686, #14858) // Strip and collapse whitespace // https://html.spec.whatwg.org/#strip-and-collapse-whitespace stripAndCollapse( jQuery.text( elem ) ); } }, select: { get: function( elem ) { var value, option, i, options = elem.options, index = elem.selectedIndex, one = elem.type === "select-one", values = one ? null : [], max = one ? index + 1 : options.length; if ( index < 0 ) { i = max; } else { i = one ? 
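// Illustrative results (a sketch; the selects are hypothetical): .val()
// yields a string for select-one and an array for select-multiple, and the
// single case starts the loop at selectedIndex so only one option is read:
//
//     $( "select#size" ).val();  // e.g. "m"
//     $( "select#tags" ).val();  // e.g. [ "red", "blue" ], or [] if none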
index : 0; } // Loop through all the selected options for ( ; i < max; i++ ) { option = options[ i ]; // Support: IE <=9 only // IE8-9 doesn't update selected after form reset (#2551) if ( ( option.selected || i === index ) && // Don't return options that are disabled or in a disabled optgroup !option.disabled && ( !option.parentNode.disabled || !nodeName( option.parentNode, "optgroup" ) ) ) { // Get the specific value for the option value = jQuery( option ).val(); // We don't need an array for one selects if ( one ) { return value; } // Multi-Selects return an array values.push( value ); } } return values; }, set: function( elem, value ) { var optionSet, option, options = elem.options, values = jQuery.makeArray( value ), i = options.length; while ( i-- ) { option = options[ i ]; /* eslint-disable no-cond-assign */ if ( option.selected = jQuery.inArray( jQuery.valHooks.option.get( option ), values ) > -1 ) { optionSet = true; } /* eslint-enable no-cond-assign */ } // Force browsers to behave consistently when non-matching value is set if ( !optionSet ) { elem.selectedIndex = -1; } return values; } } } } ); // Radios and checkboxes getter/setter jQuery.each( [ "radio", "checkbox" ], function() { jQuery.valHooks[ this ] = { set: function( elem, value ) { if ( Array.isArray( value ) ) { return ( elem.checked = jQuery.inArray( jQuery( elem ).val(), value ) > -1 ); } } }; if ( !support.checkOn ) { jQuery.valHooks[ this ].get = function( elem ) { return elem.getAttribute( "value" ) === null ? "on" : elem.value; }; } } ); // Return jQuery for attributes-only inclusion support.focusin = "onfocusin" in window; var rfocusMorph = /^(?:focusinfocus|focusoutblur)$/, stopPropagationCallback = function( e ) { e.stopPropagation(); }; jQuery.extend( jQuery.event, { trigger: function( event, data, elem, onlyHandlers ) { var i, cur, tmp, bubbleType, ontype, handle, special, lastElement, eventPath = [ elem || document ], type = hasOwn.call( event, "type" ) ? event.type : event, namespaces = hasOwn.call( event, "namespace" ) ? event.namespace.split( "." ) : []; cur = lastElement = tmp = elem = elem || document; // Don't do events on text and comment nodes if ( elem.nodeType === 3 || elem.nodeType === 8 ) { return; } // focus/blur morphs to focusin/out; ensure we're not firing them right now if ( rfocusMorph.test( type + jQuery.event.triggered ) ) { return; } if ( type.indexOf( "." ) > -1 ) { // Namespaced trigger; create a regexp to match event type in handle() namespaces = type.split( "." ); type = namespaces.shift(); namespaces.sort(); } ontype = type.indexOf( ":" ) < 0 && "on" + type; // Caller can pass in a jQuery.Event object, Object, or just an event type string event = event[ jQuery.expando ] ? event : new jQuery.Event( type, typeof event === "object" && event ); // Trigger bitmask: & 1 for native handlers; & 2 for jQuery (always true) event.isTrigger = onlyHandlers ? 2 : 3; event.namespace = namespaces.join( "." ); event.rnamespace = event.namespace ? new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ) : null; // Clean up the event in case it is being reused event.result = undefined; if ( !event.target ) { event.target = elem; } // Clone any incoming data and prepend the event, creating the handler arg list data = data == null ? 
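// Illustrative round trip (a sketch; the event name and selector are
// hypothetical): extra trigger data is prepended with the event object, so
// handlers receive ( event, ...data ):
//
//     $( "#app" ).on( "statuschange", function( e, state, when ) { /* ... */ } );
//     $( "#app" ).trigger( "statuschange", [ "ok", Date.now() ] );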
[ event ] : jQuery.makeArray( data, [ event ] ); // Allow special events to draw outside the lines special = jQuery.event.special[ type ] || {}; if ( !onlyHandlers && special.trigger && special.trigger.apply( elem, data ) === false ) { return; } // Determine event propagation path in advance, per W3C events spec (#9951) // Bubble up to document, then to window; watch for a global ownerDocument var (#9724) if ( !onlyHandlers && !special.noBubble && !isWindow( elem ) ) { bubbleType = special.delegateType || type; if ( !rfocusMorph.test( bubbleType + type ) ) { cur = cur.parentNode; } for ( ; cur; cur = cur.parentNode ) { eventPath.push( cur ); tmp = cur; } // Only add window if we got to document (e.g., not plain obj or detached DOM) if ( tmp === ( elem.ownerDocument || document ) ) { eventPath.push( tmp.defaultView || tmp.parentWindow || window ); } } // Fire handlers on the event path i = 0; while ( ( cur = eventPath[ i++ ] ) && !event.isPropagationStopped() ) { lastElement = cur; event.type = i > 1 ? bubbleType : special.bindType || type; // jQuery handler handle = ( dataPriv.get( cur, "events" ) || {} )[ event.type ] && dataPriv.get( cur, "handle" ); if ( handle ) { handle.apply( cur, data ); } // Native handler handle = ontype && cur[ ontype ]; if ( handle && handle.apply && acceptData( cur ) ) { event.result = handle.apply( cur, data ); if ( event.result === false ) { event.preventDefault(); } } } event.type = type; // If nobody prevented the default action, do it now if ( !onlyHandlers && !event.isDefaultPrevented() ) { if ( ( !special._default || special._default.apply( eventPath.pop(), data ) === false ) && acceptData( elem ) ) { // Call a native DOM method on the target with the same name as the event. // Don't do default actions on window, that's where global variables be (#6170) if ( ontype && isFunction( elem[ type ] ) && !isWindow( elem ) ) { // Don't re-trigger an onFOO event when we call its FOO() method tmp = elem[ ontype ]; if ( tmp ) { elem[ ontype ] = null; } // Prevent re-triggering of the same event, since we already bubbled it above jQuery.event.triggered = type; if ( event.isPropagationStopped() ) { lastElement.addEventListener( type, stopPropagationCallback ); } elem[ type ](); if ( event.isPropagationStopped() ) { lastElement.removeEventListener( type, stopPropagationCallback ); } jQuery.event.triggered = undefined; if ( tmp ) { elem[ ontype ] = tmp; } } } } return event.result; }, // Piggyback on a donor event to simulate a different one // Used only for `focus(in | out)` events simulate: function( type, elem, event ) { var e = jQuery.extend( new jQuery.Event(), event, { type: type, isSimulated: true } ); jQuery.event.trigger( e, null, elem ); } } ); jQuery.fn.extend( { trigger: function( type, data ) { return this.each( function() { jQuery.event.trigger( type, data, this ); } ); }, triggerHandler: function( type, data ) { var elem = this[ 0 ]; if ( elem ) { return jQuery.event.trigger( type, data, elem, true ); } } } ); // Support: Firefox <=44 // Firefox doesn't have focus(in | out) events // Related ticket - https://bugzilla.mozilla.org/show_bug.cgi?id=687787 // // Support: Chrome <=48 - 49, Safari <=9.0 - 9.1 // focus(in | out) events fire after focus & blur events, // which is spec violation - http://www.w3.org/TR/DOM-Level-3-Events/#events-focusevent-event-order // Related ticket - https://bugs.chromium.org/p/chromium/issues/detail?id=449857 if ( !support.focusin ) { jQuery.each( { focus: "focusin", blur: "focusout" }, function( orig, fix ) { // Attach a 
single capturing handler on the document while someone wants focusin/focusout var handler = function( event ) { jQuery.event.simulate( fix, event.target, jQuery.event.fix( event ) ); }; jQuery.event.special[ fix ] = { setup: function() { var doc = this.ownerDocument || this, attaches = dataPriv.access( doc, fix ); if ( !attaches ) { doc.addEventListener( orig, handler, true ); } dataPriv.access( doc, fix, ( attaches || 0 ) + 1 ); }, teardown: function() { var doc = this.ownerDocument || this, attaches = dataPriv.access( doc, fix ) - 1; if ( !attaches ) { doc.removeEventListener( orig, handler, true ); dataPriv.remove( doc, fix ); } else { dataPriv.access( doc, fix, attaches ); } } }; } ); } var location = window.location; var nonce = Date.now(); var rquery = ( /\?/ ); // Cross-browser xml parsing jQuery.parseXML = function( data ) { var xml; if ( !data || typeof data !== "string" ) { return null; } // Support: IE 9 - 11 only // IE throws on parseFromString with invalid input. try { xml = ( new window.DOMParser() ).parseFromString( data, "text/xml" ); } catch ( e ) { xml = undefined; } if ( !xml || xml.getElementsByTagName( "parsererror" ).length ) { jQuery.error( "Invalid XML: " + data ); } return xml; }; var rbracket = /\[\]$/, rCRLF = /\r?\n/g, rsubmitterTypes = /^(?:submit|button|image|reset|file)$/i, rsubmittable = /^(?:input|select|textarea|keygen)/i; function buildParams( prefix, obj, traditional, add ) { var name; if ( Array.isArray( obj ) ) { // Serialize array item. jQuery.each( obj, function( i, v ) { if ( traditional || rbracket.test( prefix ) ) { // Treat each array item as a scalar. add( prefix, v ); } else { // Item is non-scalar (array or object), encode its numeric index. buildParams( prefix + "[" + ( typeof v === "object" && v != null ? i : "" ) + "]", v, traditional, add ); } } ); } else if ( !traditional && toType( obj ) === "object" ) { // Serialize object item. for ( name in obj ) { buildParams( prefix + "[" + name + "]", obj[ name ], traditional, add ); } } else { // Serialize scalar item. add( prefix, obj ); } } // Serialize an array of form elements or a set of // key/values into a query string jQuery.param = function( a, traditional ) { var prefix, s = [], add = function( key, valueOrFunction ) { // If value is a function, invoke it and use its return value var value = isFunction( valueOrFunction ) ? valueOrFunction() : valueOrFunction; s[ s.length ] = encodeURIComponent( key ) + "=" + encodeURIComponent( value == null ? "" : value ); }; if ( a == null ) { return ""; } // If an array was passed in, assume that it is an array of form elements. if ( Array.isArray( a ) || ( a.jquery && !jQuery.isPlainObject( a ) ) ) { // Serialize the form elements jQuery.each( a, function() { add( this.name, this.value ); } ); } else { // If traditional, encode the "old" way (the way 1.3.2 or older // did it), otherwise encode params recursively. for ( prefix in a ) { buildParams( prefix, a[ prefix ], traditional, add ); } } // Return the resulting serialization return s.join( "&" ); }; jQuery.fn.extend( { serialize: function() { return jQuery.param( this.serializeArray() ); }, serializeArray: function() { return this.map( function() { // Can add propHook for "elements" to filter or add form elements var elements = jQuery.prop( this, "elements" ); return elements ? 
jQuery.makeArray( elements ) : this; } ) .filter( function() { var type = this.type; // Use .is( ":disabled" ) so that fieldset[disabled] works return this.name && !jQuery( this ).is( ":disabled" ) && rsubmittable.test( this.nodeName ) && !rsubmitterTypes.test( type ) && ( this.checked || !rcheckableType.test( type ) ); } ) .map( function( i, elem ) { var val = jQuery( this ).val(); if ( val == null ) { return null; } if ( Array.isArray( val ) ) { return jQuery.map( val, function( val ) { return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; } ); } return { name: elem.name, value: val.replace( rCRLF, "\r\n" ) }; } ).get(); } } ); var r20 = /%20/g, rhash = /#.*$/, rantiCache = /([?&])_=[^&]*/, rheaders = /^(.*?):[ \t]*([^\r\n]*)$/mg, // #7653, #8125, #8152: local protocol detection rlocalProtocol = /^(?:about|app|app-storage|.+-extension|file|res|widget):$/, rnoContent = /^(?:GET|HEAD)$/, rprotocol = /^\/\//, /* Prefilters * 1) They are useful to introduce custom dataTypes (see ajax/jsonp.js for an example) * 2) These are called: * - BEFORE asking for a transport * - AFTER param serialization (s.data is a string if s.processData is true) * 3) key is the dataType * 4) the catchall symbol "*" can be used * 5) execution will start with transport dataType and THEN continue down to "*" if needed */ prefilters = {}, /* Transports bindings * 1) key is the dataType * 2) the catchall symbol "*" can be used * 3) selection will start with transport dataType and THEN go to "*" if needed */ transports = {}, // Avoid comment-prolog char sequence (#10098); must appease lint and evade compression allTypes = "*/".concat( "*" ), // Anchor tag for parsing the document origin originAnchor = document.createElement( "a" ); originAnchor.href = location.href; // Base "constructor" for jQuery.ajaxPrefilter and jQuery.ajaxTransport function addToPrefiltersOrTransports( structure ) { // dataTypeExpression is optional and defaults to "*" return function( dataTypeExpression, func ) { if ( typeof dataTypeExpression !== "string" ) { func = dataTypeExpression; dataTypeExpression = "*"; } var dataType, i = 0, dataTypes = dataTypeExpression.toLowerCase().match( rnothtmlwhite ) || []; if ( isFunction( func ) ) { // For each dataType in the dataTypeExpression while ( ( dataType = dataTypes[ i++ ] ) ) { // Prepend if requested if ( dataType[ 0 ] === "+" ) { dataType = dataType.slice( 1 ) || "*"; ( structure[ dataType ] = structure[ dataType ] || [] ).unshift( func ); // Otherwise append } else { ( structure[ dataType ] = structure[ dataType ] || [] ).push( func ); } } } }; } // Base inspection function for prefilters and transports function inspectPrefiltersOrTransports( structure, options, originalOptions, jqXHR ) { var inspected = {}, seekingTransport = ( structure === transports ); function inspect( dataType ) { var selected; inspected[ dataType ] = true; jQuery.each( structure[ dataType ] || [], function( _, prefilterOrFactory ) { var dataTypeOrTransport = prefilterOrFactory( options, originalOptions, jqXHR ); if ( typeof dataTypeOrTransport === "string" && !seekingTransport && !inspected[ dataTypeOrTransport ] ) { options.dataTypes.unshift( dataTypeOrTransport ); inspect( dataTypeOrTransport ); return false; } else if ( seekingTransport ) { return !( selected = dataTypeOrTransport ); } } ); return selected; } return inspect( options.dataTypes[ 0 ] ) || !inspected[ "*" ] && inspect( "*" ); } // A special extend for ajax options // that takes "flat" options (not to be deep extended) // Fixes #9887 function 
ajaxExtend( target, src ) { var key, deep, flatOptions = jQuery.ajaxSettings.flatOptions || {}; for ( key in src ) { if ( src[ key ] !== undefined ) { ( flatOptions[ key ] ? target : ( deep || ( deep = {} ) ) )[ key ] = src[ key ]; } } if ( deep ) { jQuery.extend( true, target, deep ); } return target; } /* Handles responses to an ajax request: * - finds the right dataType (mediates between content-type and expected dataType) * - returns the corresponding response */ function ajaxHandleResponses( s, jqXHR, responses ) { var ct, type, finalDataType, firstDataType, contents = s.contents, dataTypes = s.dataTypes; // Remove auto dataType and get content-type in the process while ( dataTypes[ 0 ] === "*" ) { dataTypes.shift(); if ( ct === undefined ) { ct = s.mimeType || jqXHR.getResponseHeader( "Content-Type" ); } } // Check if we're dealing with a known content-type if ( ct ) { for ( type in contents ) { if ( contents[ type ] && contents[ type ].test( ct ) ) { dataTypes.unshift( type ); break; } } } // Check to see if we have a response for the expected dataType if ( dataTypes[ 0 ] in responses ) { finalDataType = dataTypes[ 0 ]; } else { // Try convertible dataTypes for ( type in responses ) { if ( !dataTypes[ 0 ] || s.converters[ type + " " + dataTypes[ 0 ] ] ) { finalDataType = type; break; } if ( !firstDataType ) { firstDataType = type; } } // Or just use first one finalDataType = finalDataType || firstDataType; } // If we found a dataType // We add the dataType to the list if needed // and return the corresponding response if ( finalDataType ) { if ( finalDataType !== dataTypes[ 0 ] ) { dataTypes.unshift( finalDataType ); } return responses[ finalDataType ]; } } /* Chain conversions given the request and the original response * Also sets the responseXXX fields on the jqXHR instance */ function ajaxConvert( s, response, jqXHR, isSuccess ) { var conv2, current, conv, tmp, prev, converters = {}, // Work with a copy of dataTypes in case we need to modify it for conversion dataTypes = s.dataTypes.slice(); // Create converters map with lowercased keys if ( dataTypes[ 1 ] ) { for ( conv in s.converters ) { converters[ conv.toLowerCase() ] = s.converters[ conv ]; } } current = dataTypes.shift(); // Convert to each sequential dataType while ( current ) { if ( s.responseFields[ current ] ) { jqXHR[ s.responseFields[ current ] ] = response; } // Apply the dataFilter if provided if ( !prev && isSuccess && s.dataFilter ) { response = s.dataFilter( response, s.dataType ); } prev = current; current = dataTypes.shift(); if ( current ) { // There's only work to do if current dataType is non-auto if ( current === "*" ) { current = prev; // Convert response if prev dataType is non-auto and differs from current } else if ( prev !== "*" && prev !== current ) { // Seek a direct converter conv = converters[ prev + " " + current ] || converters[ "* " + current ]; // If none found, seek a pair if ( !conv ) { for ( conv2 in converters ) { // If conv2 outputs current tmp = conv2.split( " " ); if ( tmp[ 1 ] === current ) { // If prev can be converted to accepted input conv = converters[ prev + " " + tmp[ 0 ] ] || converters[ "* " + tmp[ 0 ] ]; if ( conv ) { // Condense equivalence converters if ( conv === true ) { conv = converters[ conv2 ]; // Otherwise, insert the intermediate dataType } else if ( converters[ conv2 ] !== true ) { current = tmp[ 0 ]; dataTypes.unshift( tmp[ 1 ] ); } break; } } } } // Apply converter (if not an equivalence) if ( conv !== true ) { // Unless errors are allowed to bubble, catch and 
return them if ( conv && s.throws ) { response = conv( response ); } else { try { response = conv( response ); } catch ( e ) { return { state: "parsererror", error: conv ? e : "No conversion from " + prev + " to " + current }; } } } } } } return { state: "success", data: response }; } jQuery.extend( { // Counter for holding the number of active queries active: 0, // Last-Modified header cache for next request lastModified: {}, etag: {}, ajaxSettings: { url: location.href, type: "GET", isLocal: rlocalProtocol.test( location.protocol ), global: true, processData: true, async: true, contentType: "application/x-www-form-urlencoded; charset=UTF-8", /* timeout: 0, data: null, dataType: null, username: null, password: null, cache: null, throws: false, traditional: false, headers: {}, */ accepts: { "*": allTypes, text: "text/plain", html: "text/html", xml: "application/xml, text/xml", json: "application/json, text/javascript" }, contents: { xml: /\bxml\b/, html: /\bhtml/, json: /\bjson\b/ }, responseFields: { xml: "responseXML", text: "responseText", json: "responseJSON" }, // Data converters // Keys separate source (or catchall "*") and destination types with a single space converters: { // Convert anything to text "* text": String, // Text to html (true = no transformation) "text html": true, // Evaluate text as a json expression "text json": JSON.parse, // Parse text as xml "text xml": jQuery.parseXML }, // For options that shouldn't be deep extended: // you can add your own custom options here if // and when you create one that shouldn't be // deep extended (see ajaxExtend) flatOptions: { url: true, context: true } }, // Creates a full fledged settings object into target // with both ajaxSettings and settings fields. // If target is omitted, writes into ajaxSettings. ajaxSetup: function( target, settings ) { return settings ? // Building a settings object ajaxExtend( ajaxExtend( target, jQuery.ajaxSettings ), settings ) : // Extending ajaxSettings ajaxExtend( jQuery.ajaxSettings, target ); }, ajaxPrefilter: addToPrefiltersOrTransports( prefilters ), ajaxTransport: addToPrefiltersOrTransports( transports ), // Main method ajax: function( url, options ) { // If url is an object, simulate pre-1.5 signature if ( typeof url === "object" ) { options = url; url = undefined; } // Force options to be an object options = options || {}; var transport, // URL without anti-cache param cacheURL, // Response headers responseHeadersString, responseHeaders, // timeout handle timeoutTimer, // Url cleanup var urlAnchor, // Request state (becomes false upon send and true upon completion) completed, // To know if global events are to be dispatched fireGlobals, // Loop variable i, // uncached part of the url uncached, // Create the final options object s = jQuery.ajaxSetup( {}, options ), // Callbacks context callbackContext = s.context || s, // Context for global events is callbackContext if it is a DOM node or jQuery collection globalEventContext = s.context && ( callbackContext.nodeType || callbackContext.jquery ) ? 
jQuery( callbackContext ) : jQuery.event, // Deferreds deferred = jQuery.Deferred(), completeDeferred = jQuery.Callbacks( "once memory" ), // Status-dependent callbacks statusCode = s.statusCode || {}, // Headers (they are sent all at once) requestHeaders = {}, requestHeadersNames = {}, // Default abort message strAbort = "canceled", // Fake xhr jqXHR = { readyState: 0, // Builds headers hashtable if needed getResponseHeader: function( key ) { var match; if ( completed ) { if ( !responseHeaders ) { responseHeaders = {}; while ( ( match = rheaders.exec( responseHeadersString ) ) ) { responseHeaders[ match[ 1 ].toLowerCase() + " " ] = ( responseHeaders[ match[ 1 ].toLowerCase() + " " ] || [] ) .concat( match[ 2 ] ); } } match = responseHeaders[ key.toLowerCase() + " " ]; } return match == null ? null : match.join( ", " ); }, // Raw string getAllResponseHeaders: function() { return completed ? responseHeadersString : null; }, // Caches the header setRequestHeader: function( name, value ) { if ( completed == null ) { name = requestHeadersNames[ name.toLowerCase() ] = requestHeadersNames[ name.toLowerCase() ] || name; requestHeaders[ name ] = value; } return this; }, // Overrides response content-type header overrideMimeType: function( type ) { if ( completed == null ) { s.mimeType = type; } return this; }, // Status-dependent callbacks statusCode: function( map ) { var code; if ( map ) { if ( completed ) { // Execute the appropriate callbacks jqXHR.always( map[ jqXHR.status ] ); } else { // Lazy-add the new callbacks in a way that preserves old ones for ( code in map ) { statusCode[ code ] = [ statusCode[ code ], map[ code ] ]; } } } return this; }, // Cancel the request abort: function( statusText ) { var finalText = statusText || strAbort; if ( transport ) { transport.abort( finalText ); } done( 0, finalText ); return this; } }; // Attach deferreds deferred.promise( jqXHR ); // Add protocol if not provided (prefilters might expect it) // Handle falsy url in the settings object (#10093: consistency with old signature) // We also use the url parameter if available s.url = ( ( url || s.url || location.href ) + "" ) .replace( rprotocol, location.protocol + "//" ); // Alias method option to type as per ticket #12004 s.type = options.method || options.type || s.method || s.type; // Extract dataTypes list s.dataTypes = ( s.dataType || "*" ).toLowerCase().match( rnothtmlwhite ) || [ "" ]; // A cross-domain request is in order when the origin doesn't match the current origin. if ( s.crossDomain == null ) { urlAnchor = document.createElement( "a" ); // Support: IE <=8 - 11, Edge 12 - 15 // IE throws exception on accessing the href property if url is malformed, // e.g. 
http://example.com:80x/ try { urlAnchor.href = s.url; // Support: IE <=8 - 11 only // Anchor's host property isn't correctly set when s.url is relative urlAnchor.href = urlAnchor.href; s.crossDomain = originAnchor.protocol + "//" + originAnchor.host !== urlAnchor.protocol + "//" + urlAnchor.host; } catch ( e ) { // If there is an error parsing the URL, assume it is crossDomain, // it can be rejected by the transport if it is invalid s.crossDomain = true; } } // Convert data if not already a string if ( s.data && s.processData && typeof s.data !== "string" ) { s.data = jQuery.param( s.data, s.traditional ); } // Apply prefilters inspectPrefiltersOrTransports( prefilters, s, options, jqXHR ); // If request was aborted inside a prefilter, stop there if ( completed ) { return jqXHR; } // We can fire global events as of now if asked to // Don't fire events if jQuery.event is undefined in an AMD-usage scenario (#15118) fireGlobals = jQuery.event && s.global; // Watch for a new set of requests if ( fireGlobals && jQuery.active++ === 0 ) { jQuery.event.trigger( "ajaxStart" ); } // Uppercase the type s.type = s.type.toUpperCase(); // Determine if request has content s.hasContent = !rnoContent.test( s.type ); // Save the URL in case we're toying with the If-Modified-Since // and/or If-None-Match header later on // Remove hash to simplify url manipulation cacheURL = s.url.replace( rhash, "" ); // More options handling for requests with no content if ( !s.hasContent ) { // Remember the hash so we can put it back uncached = s.url.slice( cacheURL.length ); // If data is available and should be processed, append data to url if ( s.data && ( s.processData || typeof s.data === "string" ) ) { cacheURL += ( rquery.test( cacheURL ) ? "&" : "?" ) + s.data; // #9682: remove data so that it's not used in an eventual retry delete s.data; } // Add or update anti-cache param if needed if ( s.cache === false ) { cacheURL = cacheURL.replace( rantiCache, "$1" ); uncached = ( rquery.test( cacheURL ) ? "&" : "?" ) + "_=" + ( nonce++ ) + uncached; } // Put hash and anti-cache on the URL that will be requested (gh-1732) s.url = cacheURL + uncached; // Change '%20' to '+' if this is encoded form body content (gh-2658) } else if ( s.data && s.processData && ( s.contentType || "" ).indexOf( "application/x-www-form-urlencoded" ) === 0 ) { s.data = s.data.replace( r20, "+" ); } // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. if ( s.ifModified ) { if ( jQuery.lastModified[ cacheURL ] ) { jqXHR.setRequestHeader( "If-Modified-Since", jQuery.lastModified[ cacheURL ] ); } if ( jQuery.etag[ cacheURL ] ) { jqXHR.setRequestHeader( "If-None-Match", jQuery.etag[ cacheURL ] ); } } // Set the correct header, if data is being sent if ( s.data && s.hasContent && s.contentType !== false || options.contentType ) { jqXHR.setRequestHeader( "Content-Type", s.contentType ); } // Set the Accepts header for the server, depending on the dataType jqXHR.setRequestHeader( "Accept", s.dataTypes[ 0 ] && s.accepts[ s.dataTypes[ 0 ] ] ? s.accepts[ s.dataTypes[ 0 ] ] + ( s.dataTypes[ 0 ] !== "*" ? 
", " + allTypes + "; q=0.01" : "" ) : s.accepts[ "*" ] ); // Check for headers option for ( i in s.headers ) { jqXHR.setRequestHeader( i, s.headers[ i ] ); } // Allow custom headers/mimetypes and early abort if ( s.beforeSend && ( s.beforeSend.call( callbackContext, jqXHR, s ) === false || completed ) ) { // Abort if not done already and return return jqXHR.abort(); } // Aborting is no longer a cancellation strAbort = "abort"; // Install callbacks on deferreds completeDeferred.add( s.complete ); jqXHR.done( s.success ); jqXHR.fail( s.error ); // Get transport transport = inspectPrefiltersOrTransports( transports, s, options, jqXHR ); // If no transport, we auto-abort if ( !transport ) { done( -1, "No Transport" ); } else { jqXHR.readyState = 1; // Send global event if ( fireGlobals ) { globalEventContext.trigger( "ajaxSend", [ jqXHR, s ] ); } // If request was aborted inside ajaxSend, stop there if ( completed ) { return jqXHR; } // Timeout if ( s.async && s.timeout > 0 ) { timeoutTimer = window.setTimeout( function() { jqXHR.abort( "timeout" ); }, s.timeout ); } try { completed = false; transport.send( requestHeaders, done ); } catch ( e ) { // Rethrow post-completion exceptions if ( completed ) { throw e; } // Propagate others as results done( -1, e ); } } // Callback for when everything is done function done( status, nativeStatusText, responses, headers ) { var isSuccess, success, error, response, modified, statusText = nativeStatusText; // Ignore repeat invocations if ( completed ) { return; } completed = true; // Clear timeout if it exists if ( timeoutTimer ) { window.clearTimeout( timeoutTimer ); } // Dereference transport for early garbage collection // (no matter how long the jqXHR object will be used) transport = undefined; // Cache response headers responseHeadersString = headers || ""; // Set readyState jqXHR.readyState = status > 0 ? 4 : 0; // Determine if successful isSuccess = status >= 200 && status < 300 || status === 304; // Get response data if ( responses ) { response = ajaxHandleResponses( s, jqXHR, responses ); } // Convert no matter what (that way responseXXX fields are always set) response = ajaxConvert( s, response, jqXHR, isSuccess ); // If successful, handle type chaining if ( isSuccess ) { // Set the If-Modified-Since and/or If-None-Match header, if in ifModified mode. if ( s.ifModified ) { modified = jqXHR.getResponseHeader( "Last-Modified" ); if ( modified ) { jQuery.lastModified[ cacheURL ] = modified; } modified = jqXHR.getResponseHeader( "etag" ); if ( modified ) { jQuery.etag[ cacheURL ] = modified; } } // if no content if ( status === 204 || s.type === "HEAD" ) { statusText = "nocontent"; // if not modified } else if ( status === 304 ) { statusText = "notmodified"; // If we have data, let's convert it } else { statusText = response.state; success = response.data; error = response.error; isSuccess = !error; } } else { // Extract error from statusText and normalize for non-aborts error = statusText; if ( status || !statusText ) { statusText = "error"; if ( status < 0 ) { status = 0; } } } // Set data for the fake xhr object jqXHR.status = status; jqXHR.statusText = ( nativeStatusText || statusText ) + ""; // Success/Error if ( isSuccess ) { deferred.resolveWith( callbackContext, [ success, statusText, jqXHR ] ); } else { deferred.rejectWith( callbackContext, [ jqXHR, statusText, error ] ); } // Status-dependent callbacks jqXHR.statusCode( statusCode ); statusCode = undefined; if ( fireGlobals ) { globalEventContext.trigger( isSuccess ? 
"ajaxSuccess" : "ajaxError", [ jqXHR, s, isSuccess ? success : error ] ); } // Complete completeDeferred.fireWith( callbackContext, [ jqXHR, statusText ] ); if ( fireGlobals ) { globalEventContext.trigger( "ajaxComplete", [ jqXHR, s ] ); // Handle the global AJAX counter if ( !( --jQuery.active ) ) { jQuery.event.trigger( "ajaxStop" ); } } } return jqXHR; }, getJSON: function( url, data, callback ) { return jQuery.get( url, data, callback, "json" ); }, getScript: function( url, callback ) { return jQuery.get( url, undefined, callback, "script" ); } } ); jQuery.each( [ "get", "post" ], function( i, method ) { jQuery[ method ] = function( url, data, callback, type ) { // Shift arguments if data argument was omitted if ( isFunction( data ) ) { type = type || callback; callback = data; data = undefined; } // The url can be an options object (which then must have .url) return jQuery.ajax( jQuery.extend( { url: url, type: method, dataType: type, data: data, success: callback }, jQuery.isPlainObject( url ) && url ) ); }; } ); jQuery._evalUrl = function( url, options ) { return jQuery.ajax( { url: url, // Make this explicit, since user can override this through ajaxSetup (#11264) type: "GET", dataType: "script", cache: true, async: false, global: false, // Only evaluate the response if it is successful (gh-4126) // dataFilter is not invoked for failure responses, so using it instead // of the default converter is kludgy but it works. converters: { "text script": function() {} }, dataFilter: function( response ) { jQuery.globalEval( response, options ); } } ); }; jQuery.fn.extend( { wrapAll: function( html ) { var wrap; if ( this[ 0 ] ) { if ( isFunction( html ) ) { html = html.call( this[ 0 ] ); } // The elements to wrap the target around wrap = jQuery( html, this[ 0 ].ownerDocument ).eq( 0 ).clone( true ); if ( this[ 0 ].parentNode ) { wrap.insertBefore( this[ 0 ] ); } wrap.map( function() { var elem = this; while ( elem.firstElementChild ) { elem = elem.firstElementChild; } return elem; } ).append( this ); } return this; }, wrapInner: function( html ) { if ( isFunction( html ) ) { return this.each( function( i ) { jQuery( this ).wrapInner( html.call( this, i ) ); } ); } return this.each( function() { var self = jQuery( this ), contents = self.contents(); if ( contents.length ) { contents.wrapAll( html ); } else { self.append( html ); } } ); }, wrap: function( html ) { var htmlIsFunction = isFunction( html ); return this.each( function( i ) { jQuery( this ).wrapAll( htmlIsFunction ? 
html.call( this, i ) : html ); } ); }, unwrap: function( selector ) { this.parent( selector ).not( "body" ).each( function() { jQuery( this ).replaceWith( this.childNodes ); } ); return this; } } ); jQuery.expr.pseudos.hidden = function( elem ) { return !jQuery.expr.pseudos.visible( elem ); }; jQuery.expr.pseudos.visible = function( elem ) { return !!( elem.offsetWidth || elem.offsetHeight || elem.getClientRects().length ); }; jQuery.ajaxSettings.xhr = function() { try { return new window.XMLHttpRequest(); } catch ( e ) {} }; var xhrSuccessStatus = { // File protocol always yields status code 0, assume 200 0: 200, // Support: IE <=9 only // #1450: sometimes IE returns 1223 when it should be 204 1223: 204 }, xhrSupported = jQuery.ajaxSettings.xhr(); support.cors = !!xhrSupported && ( "withCredentials" in xhrSupported ); support.ajax = xhrSupported = !!xhrSupported; jQuery.ajaxTransport( function( options ) { var callback, errorCallback; // Cross domain only allowed if supported through XMLHttpRequest if ( support.cors || xhrSupported && !options.crossDomain ) { return { send: function( headers, complete ) { var i, xhr = options.xhr(); xhr.open( options.type, options.url, options.async, options.username, options.password ); // Apply custom fields if provided if ( options.xhrFields ) { for ( i in options.xhrFields ) { xhr[ i ] = options.xhrFields[ i ]; } } // Override mime type if needed if ( options.mimeType && xhr.overrideMimeType ) { xhr.overrideMimeType( options.mimeType ); } // X-Requested-With header // For cross-domain requests, seeing as conditions for a preflight are // akin to a jigsaw puzzle, we simply never set it to be sure. // (it can always be set on a per-request basis or even using ajaxSetup) // For same-domain requests, won't change header if already provided. if ( !options.crossDomain && !headers[ "X-Requested-With" ] ) { headers[ "X-Requested-With" ] = "XMLHttpRequest"; } // Set headers for ( i in headers ) { xhr.setRequestHeader( i, headers[ i ] ); } // Callback callback = function( type ) { return function() { if ( callback ) { callback = errorCallback = xhr.onload = xhr.onerror = xhr.onabort = xhr.ontimeout = xhr.onreadystatechange = null; if ( type === "abort" ) { xhr.abort(); } else if ( type === "error" ) { // Support: IE <=9 only // On a manual native abort, IE9 throws // errors on any property access that is not readyState if ( typeof xhr.status !== "number" ) { complete( 0, "error" ); } else { complete( // File: protocol always yields status 0; see #8605, #14207 xhr.status, xhr.statusText ); } } else { complete( xhrSuccessStatus[ xhr.status ] || xhr.status, xhr.statusText, // Support: IE <=9 only // IE9 has no XHR2 but throws on binary (trac-11426) // For XHR2 non-text, let the caller handle it (gh-2498) ( xhr.responseType || "text" ) !== "text" || typeof xhr.responseText !== "string" ? 
{ binary: xhr.response } : { text: xhr.responseText }, xhr.getAllResponseHeaders() ); } } }; }; // Listen to events xhr.onload = callback(); errorCallback = xhr.onerror = xhr.ontimeout = callback( "error" ); // Support: IE 9 only // Use onreadystatechange to replace onabort // to handle uncaught aborts if ( xhr.onabort !== undefined ) { xhr.onabort = errorCallback; } else { xhr.onreadystatechange = function() { // Check readyState before timeout as it changes if ( xhr.readyState === 4 ) { // Allow onerror to be called first, // but that will not handle a native abort // Also, save errorCallback to a variable // as xhr.onerror cannot be accessed window.setTimeout( function() { if ( callback ) { errorCallback(); } } ); } }; } // Create the abort callback callback = callback( "abort" ); try { // Do send the request (this may raise an exception) xhr.send( options.hasContent && options.data || null ); } catch ( e ) { // #14683: Only rethrow if this hasn't been notified as an error yet if ( callback ) { throw e; } } }, abort: function() { if ( callback ) { callback(); } } }; } } ); // Prevent auto-execution of scripts when no explicit dataType was provided (See gh-2432) jQuery.ajaxPrefilter( function( s ) { if ( s.crossDomain ) { s.contents.script = false; } } ); // Install script dataType jQuery.ajaxSetup( { accepts: { script: "text/javascript, application/javascript, " + "application/ecmascript, application/x-ecmascript" }, contents: { script: /\b(?:java|ecma)script\b/ }, converters: { "text script": function( text ) { jQuery.globalEval( text ); return text; } } } ); // Handle cache's special case and crossDomain jQuery.ajaxPrefilter( "script", function( s ) { if ( s.cache === undefined ) { s.cache = false; } if ( s.crossDomain ) { s.type = "GET"; } } ); // Bind script tag hack transport jQuery.ajaxTransport( "script", function( s ) { // This transport only deals with cross domain or forced-by-attrs requests if ( s.crossDomain || s.scriptAttrs ) { var script, callback; return { send: function( _, complete ) { script = jQuery( "

API

Region

class dogpile.cache.region.CacheRegion(name=None, function_key_generator=<function function_key_generator>, function_multi_key_generator=<function function_multi_key_generator>, key_mangler=None, async_creation_runner=None)

A front end to a particular cache backend.

Parameters
  • name – Optional, a string name for the region. This isn’t used internally but can be accessed via the .name parameter, helpful for configuring a region from a config file.

  • function_key_generator

    Optional. A function that will produce a “cache key” given a data creation function and arguments, when using the CacheRegion.cache_on_arguments() method. The structure of this function should be two levels: given the data creation function, return a new function that generates the key based on the given arguments. Such as:

    def my_key_generator(namespace, fn, **kw):
        fname = fn.__name__
        def generate_key(*arg):
            return namespace + "_" + fname + "_" + "_".join(str(s) for s in arg)
        return generate_key
    
    
    region = make_region(
        function_key_generator = my_key_generator
    ).configure(
        "dogpile.cache.dbm",
        expiration_time=300,
        arguments={
            "filename":"file.dbm"
        }
    )
    

    The namespace is that passed to CacheRegion.cache_on_arguments(). It’s not consulted outside this function, so in fact can be of any form. For example, it can be passed as a tuple, used to specify arguments to pluck from **kw:

    def my_key_generator(namespace, fn, **kw):
        def generate_key(*arg, **kw):
            return ":".join(
                    [kw[k] for k in namespace] +
                    [str(x) for x in arg]
                )
        return generate_key
    

    Where the decorator might be used as:

    @my_region.cache_on_arguments(namespace=('x', 'y'))
    def my_function(a, b, **kw):
        return my_data()
    

    See also

    function_key_generator() - default key generator

    kwarg_function_key_generator() - optional key generator that also uses keyword arguments

  • function_multi_key_generator

    Optional. Similar to the function_key_generator parameter, but used by CacheRegion.cache_multi_on_arguments(). The generated function should return a list of keys. For example:

    def my_multi_key_generator(namespace, fn, **kw):
        namespace = fn.__name__ + (namespace or '')
    
        def generate_keys(*args):
            return [namespace + ':' + str(a) for a in args]
    
        return generate_keys
    

  • key_mangler – Function which will be used on all incoming keys before passing to the backend. Defaults to None, in which case the key mangling function recommended by the cache backend will be used. A typical mangler is the SHA1 mangler found at sha1_mangle_key(), which coerces keys into a SHA1 hash so that the string length is fixed. To disable all key mangling, set to False. Another typical mangler is the built-in Python function str, which can be used to convert non-string or Unicode keys to bytestrings; this is needed when using a backend such as bsddb or dbm under Python 2.x in conjunction with Unicode keys. A configuration sketch appears after this parameter list.

  • async_creation_runner

    A callable that, when specified, will be passed to and called by dogpile.lock when there is a stale value present in the cache. It will be passed the mutex and is responsible for releasing that mutex when finished. This can be used to defer the computation of expensive creator functions to later points in the future by way of, for example, a background thread, a long-running queue, or a task manager system like Celery.

    For a specific example using async_creation_runner, new values can be created in a background thread like so:

    import threading
    
    def async_creation_runner(cache, somekey, creator, mutex):
        ''' Used by dogpile.core:Lock when appropriate  '''
        def runner():
            try:
                value = creator()
                cache.set(somekey, value)
            finally:
                mutex.release()
    
        thread = threading.Thread(target=runner)
        thread.start()
    
    
    region = make_region(
        async_creation_runner=async_creation_runner,
    ).configure(
        'dogpile.cache.memcached',
        expiration_time=5,
        arguments={
            'url': '127.0.0.1:11211',
            'distributed_lock': True,
        }
    )
    

    Remember that the first request for a key with no associated value will always block; async_creator will not be invoked. However, subsequent requests for cached-but-expired values will still return promptly. They will be refreshed by whatever asynchronous means the provided async_creation_runner callable implements.

    By default the async_creation_runner is disabled and is set to None.

    New in version 0.4.2: added the async_creation_runner feature.
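
As a sketch of the key_mangler option described above, keys can be hashed with the sha1_mangle_key utility shipped in dogpile.cache.util (the dbm backend and file name are illustrative):

from dogpile.cache import make_region
from dogpile.cache.util import sha1_mangle_key

# Every incoming key is coerced into a fixed-length SHA1 hex digest
# before it reaches the backend.
region = make_region(key_mangler=sha1_mangle_key).configure(
    "dogpile.cache.dbm",
    expiration_time=300,
    arguments={"filename": "file.dbm"}
)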

property actual_backend

Return the ultimate backend underneath any proxies.

The backend might be the result of one or more proxy.wrap applications. If so, derive the actual underlying backend.

New in version 0.6.6.
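
For example, given a region whose backend was wrapped with one or more proxies via the wrap argument of configure() (a minimal sketch; the region itself is assumed):

# Traverses any ProxyBackend chain and returns the innermost
# CacheBackend instance.
backend = region.actual_backend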

cache_multi_on_arguments(namespace=None, expiration_time=None, should_cache_fn=None, asdict=False, to_str=<class 'str'>, function_multi_key_generator=None)

A function decorator that will cache multiple return values from the function using a sequence of keys derived from the function itself and the arguments passed to it.

This method is the “multiple key” analogue to the CacheRegion.cache_on_arguments() method.

Example:

@someregion.cache_multi_on_arguments()
def generate_something(*keys):
    return [
        somedatabase.query(key)
        for key in keys
    ]

The decorated function can be called normally. The decorator will produce a list of cache keys using a mechanism similar to that of CacheRegion.cache_on_arguments(), combining the name of the function with the optional namespace and with the string form of each key. It will then consult the cache using the same mechanism as that of CacheRegion.get_multi() to retrieve all current values; the originally passed keys corresponding to those values which aren’t present or need regeneration will be assembled into a new argument list, and the decorated function is then called with that subset of arguments.

The returned result is a list:

result = generate_something("key1", "key2", "key3")

The decorator internally makes use of the CacheRegion.get_or_create_multi() method to access the cache and conditionally call the function. See that method for additional behavioral details.

Unlike the CacheRegion.cache_on_arguments() method, CacheRegion.cache_multi_on_arguments() works only with a single function signature, one which takes a simple list of keys as arguments.

Like CacheRegion.cache_on_arguments(), the decorated function is also provided with a set() method, which here accepts a mapping of keys and values to set in the cache:

generate_something.set({"k1": "value1",
                        "k2": "value2", "k3": "value3"})

…an invalidate() method, which has the effect of deleting the given sequence of keys using the same mechanism as that of CacheRegion.delete_multi():

generate_something.invalidate("k1", "k2", "k3")

…a refresh() method, which will call the creation function, cache the new values, and return them:

values = generate_something.refresh("k1", "k2", "k3")

…and a get() method, which will return values based on the given arguments:

values = generate_something.get("k1", "k2", "k3")

New in version 0.5.3: Added get() method to decorated function.

Parameters passed to CacheRegion.cache_multi_on_arguments() have the same meaning as those passed to CacheRegion.cache_on_arguments().

Parameters
  • namespace – optional string argument which will be established as part of each cache key.

  • expiration_time – if not None, will override the normal expiration time. May be passed as an integer or a callable.

  • should_cache_fn – passed to CacheRegion.get_or_create_multi(). This function is given a value as returned by the creator, and only if it returns True will that value be placed in the cache.

  • asdict

    if True, the decorated function should return its result as a dictionary of keys->values, and the final result of calling the decorated function will also be a dictionary. If left at its default value of False, the decorated function should return its result as a list of values, and the final result of calling the decorated function will also be a list.

    When asdict==True, if the dictionary returned by the decorated function is missing keys, those keys will not be cached; see the sketch at the end of this method’s description.

  • to_str – callable, will be called on each function argument in order to convert to a string. Defaults to str(). If the function accepts non-ascii unicode arguments on Python 2.x, the unicode() builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache.

New in version 0.5.0.

Parameters
  • function_multi_key_generator

    a function that will produce a list of keys. This function will supersede the one configured on the CacheRegion itself.

New in version 0.5.5.
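
As a sketch of the asdict behavior noted above, reusing the generate_something / somedatabase names from the earlier example:

@someregion.cache_multi_on_arguments(asdict=True)
def generate_something(*keys):
    # Return a mapping of key -> value; any keys absent from the
    # mapping are simply not cached.
    return {key: somedatabase.query(key) for key in keys}

result = generate_something("key1", "key2")  # result is a dict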

cache_on_arguments(namespace=None, expiration_time=None, should_cache_fn=None, to_str=<class 'str'>, function_key_generator=None)

A function decorator that will cache the return value of the function using a key derived from the function itself and its arguments.

The decorator internally makes use of the CacheRegion.get_or_create() method to access the cache and conditionally call the function. See that method for additional behavioral details.

E.g.:

@someregion.cache_on_arguments()
def generate_something(x, y):
    return somedatabase.query(x, y)

The decorated function can then be called normally, where data will be pulled from the cache region unless a new value is needed:

result = generate_something(5, 6)

The function is also given an attribute invalidate(), which provides for invalidation of the value. Pass to invalidate() the same arguments you’d pass to the function itself to represent a particular value:

generate_something.invalidate(5, 6)

Another attribute set() is added to provide extra caching possibilities relative to the function. This is a convenience method for CacheRegion.set() which will store a given value directly without calling the decorated function. The value to be cached is passed as the first argument, and the arguments which would normally be passed to the function should follow:

generate_something.set(3, 5, 6)

The above example is equivalent to calling generate_something(5, 6), if the function were to produce the value 3 as the value to be cached.

New in version 0.4.1: Added set() method to decorated function.

Similar to set() is refresh(). This method will invoke the decorated function, place the newly produced value into the cache, and return that value:

newvalue = generate_something.refresh(5, 6)

New in version 0.5.0: Added refresh() method to decorated function.

original(), on the other hand, will invoke the decorated function without any caching:

newvalue = generate_something.original(5, 6)

New in version 0.6.0: Added original() method to decorated function.

Lastly, the get() method returns either the value cached for the given key, or the token NO_VALUE if no such key exists:

value = generate_something.get(5, 6)

New in version 0.5.3: Added get() method to decorated function.

The default key generation will use the name of the function, the module name for the function, the arguments passed, as well as an optional “namespace” parameter in order to generate a cache key.

Given a function one inside the module myapp.tools:

@region.cache_on_arguments(namespace="foo")
def one(a, b):
    return a + b

Above, calling one(3, 4) will produce a cache key as follows:

myapp.tools:one|foo|3 4

The key generator will ignore an initial argument of self or cls, making the decorator suitable (with caveats) for use with instance or class methods. Given the example:

class MyClass(object):
    @region.cache_on_arguments(namespace="foo")
    def one(self, a, b):
        return a + b

Above, calling MyClass().one(3, 4) will again produce the same cache key of myapp.tools:one|foo|3 4 - the name self is skipped.

The namespace parameter is optional, and is used normally to disambiguate two functions of the same name within the same module, as can occur when decorating instance or class methods as below:

class MyClass(object):
    @region.cache_on_arguments(namespace='MC')
    def somemethod(self, x, y):
        ""

class MyOtherClass(object):
    @region.cache_on_arguments(namespace='MOC')
    def somemethod(self, x, y):
        ""

Above, the namespace parameter disambiguates between somemethod on MyClass and MyOtherClass. Python class declaration mechanics otherwise prevent the decorator from having awareness of the MyClass and MyOtherClass names, as the function is received by the decorator before it becomes an instance method.

The function key generation can be entirely replaced on a per-region basis using the function_key_generator argument present on make_region() and CacheRegion. It defaults to function_key_generator().

Parameters
  • namespace – optional string argument which will be established as part of the cache key. This may be needed to disambiguate functions of the same name within the same source file, such as those associated with classes - note that the decorator itself can’t see the parent class on a function as the class is being declared.

  • expiration_time

    if not None, will override the normal expiration time.

    May be specified as a callable, taking no arguments, that returns a value to be used as the expiration_time. This callable will be called whenever the decorated function itself is called, in caching or retrieving. Thus, this can be used to determine a dynamic expiration time for the cached function result. Example use cases include “cache the result until the end of the day, week or time period” and “cache until a certain date or time passes”. A sketch appears after this parameter list.

    Changed in version 0.5.0: expiration_time may be passed as a callable to CacheRegion.cache_on_arguments().

  • should_cache_fn

    passed to CacheRegion.get_or_create().

    New in version 0.4.3.

  • to_str

    callable, will be called on each function argument in order to convert to a string. Defaults to str(). If the function accepts non-ascii unicode arguments on Python 2.x, the unicode() builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache.

    New in version 0.5.0.

  • function_key_generator

    a function that will produce a “cache key”. This function will supersede the one configured on the CacheRegion itself.

    New in version 0.5.5.
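
As a sketch of the callable expiration_time described above; the end-of-day computation is illustrative, and region / somedatabase are the stand-ins from the earlier examples:

import datetime

def seconds_until_midnight():
    # Called each time the decorated function runs, so the cached
    # value always expires at the end of the current day.
    now = datetime.datetime.now()
    tomorrow = datetime.datetime.combine(
        now.date() + datetime.timedelta(days=1), datetime.time.min
    )
    return int((tomorrow - now).total_seconds())

@region.cache_on_arguments(expiration_time=seconds_until_midnight)
def generate_something(x, y):
    return somedatabase.query(x, y)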

configure(backend, expiration_time=None, arguments=None, _config_argument_dict=None, _config_prefix=None, wrap=None, replace_existing_backend=False, region_invalidator=None)

Configure a CacheRegion.

The CacheRegion itself is returned.

Parameters
  • backend – Required. This is the name of the CacheBackend to use, and is resolved by loading the class from the dogpile.cache entrypoint.

  • expiration_time

    Optional. The expiration time passed to the dogpile system. May be passed as an integer number of seconds, or as a datetime.timedelta value.

    The CacheRegion.get_or_create() method as well as the CacheRegion.cache_on_arguments() decorator (though note: not the CacheRegion.get() method) will call upon the value creation function after this time period has passed since the last generation.

  • arguments – Optional. The structure here is passed directly to the constructor of the CacheBackend in use, though is typically a dictionary.

  • wrap

    Optional. A list of ProxyBackend classes and/or instances, each of which will be applied in a chain to ultimately wrap the original backend, so that custom functionality augmentation can be applied.

    New in version 0.5.0.

  • replace_existing_backend

    if True, the existing cache backend will be replaced. Without this flag, an exception is raised if a backend is already configured.

    New in version 0.5.7.

  • region_invalidator

    Optional. Override default invalidation strategy with custom implementation of RegionInvalidationStrategy.

    New in version 0.6.2.
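
A sketch tying these parameters together; LoggingProxy is a hypothetical ProxyBackend subclass, not part of dogpile.cache:

from dogpile.cache import make_region
from dogpile.cache.proxy import ProxyBackend

class LoggingProxy(ProxyBackend):
    # Hypothetical proxy: report each set() before delegating to
    # the wrapped backend via self.proxied.
    def set(self, key, value):
        print("setting %s" % key)
        self.proxied.set(key, value)

region = make_region().configure(
    "dogpile.cache.dbm",
    expiration_time=300,
    arguments={"filename": "file.dbm"},
    wrap=[LoggingProxy]
)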

configure_from_config(config_dict, prefix)

Configure from a configuration dictionary and a prefix.

Example:

local_region = make_region()
memcached_region = make_region()

# regions are ready to use for function
# decorators, but not yet for actual caching

# later, when config is available
myconfig = {
    "cache.local.backend":"dogpile.cache.dbm",
    "cache.local.arguments.filename":"/path/to/dbmfile.dbm",
    "cache.memcached.backend":"dogpile.cache.pylibmc",
    "cache.memcached.arguments.url":"127.0.0.1, 10.0.0.1",
}
local_region.configure_from_config(myconfig, "cache.local.")
memcached_region.configure_from_config(myconfig,
                                    "cache.memcached.")

delete(key)

Remove a value from the cache.

This operation is idempotent (it can be called multiple times, or on a non-existent key, safely).

delete_multi(keys)

Remove multiple values from the cache.

This operation is idempotent (it can be called multiple times, or on a non-existent key, safely).

New in version 0.5.0.
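
A minimal sketch of both methods, assuming a configured region; the key names are illustrative:

region.delete("user:42")
region.delete_multi(["user:42", "user:43"])  # safe even if keys are absent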

get(key, expiration_time=None, ignore_expiration=False)

Return a value from the cache, based on the given key.

If the value is not present, the method returns the token NO_VALUE. NO_VALUE evaluates to False, but is separate from None to distinguish between a cached value of None.

By default, the configured expiration time of the CacheRegion, or alternatively the expiration time supplied by the expiration_time argument, is tested against the creation time of the retrieved value versus the current time (as reported by time.time()). If stale, the cached value is ignored and the NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the expiration time check.

Changed in version 0.3.0: CacheRegion.get() now checks the value’s creation time against the expiration time, rather than returning the value unconditionally.

The method also interprets the cached value in terms of the current “invalidation” time as set by the invalidate() method. If a value is present, but its creation time is older than the current invalidation time, the NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the invalidation time check.

New in version 0.3.0: Support for the CacheRegion.invalidate() method.
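
For example, distinguishing a missing or expired key from a cached value of None (a minimal sketch; NO_VALUE is importable from dogpile.cache.api):

from dogpile.cache.api import NO_VALUE

value = region.get("some key")
if value is NO_VALUE:
    # key absent, expired, or invalidated
    ...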

Parameters
  • key – Key to be retrieved. While it’s typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present.

  • expiration_time

    Optional expiration time value which will supersede that configured on the CacheRegion itself.

    Note

    The CacheRegion.get.expiration_time argument is not persisted in the cache and is relevant only to this specific cache retrieval operation, relative to the creation time stored with the existing cached value. Subsequent calls to CacheRegion.get() are not affected by this value.

    New in version 0.3.0.

  • ignore_expiration

    if True, the value is returned from the cache if present, regardless of configured expiration times or whether or not invalidate() was called.

    New in version 0.3.0.

get_multi(keys, expiration_time=None, ignore_expiration=False)

Return multiple values from the cache, based on the given keys.

Returns values as a list matching the keys given.

E.g.:

values = region.get_multi(["one", "two", "three"])

To convert values to a dictionary, use zip():

keys = ["one", "two", "three"]
values = region.get_multi(keys)
dictionary = dict(zip(keys, values))

Keys which aren’t present in the list are returned as the NO_VALUE token. NO_VALUE evaluates to False, but is separate from None to distinguish between a cached value of None.

By default, the configured expiration time of the CacheRegion, or alternatively the expiration time supplied by the expiration_time argument, is tested against the creation time of the retrieved value versus the current time (as reported by time.time()). If stale, the cached value is ignored and the NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the expiration time check.

New in version 0.5.0.

get_or_create(key, creator, expiration_time=None, should_cache_fn=None, creator_args=None)

Return a cached value based on the given key.

If the value does not exist or is considered to be expired based on its creation time, the given creation function may or may not be used to recreate the value and persist the newly generated value in the cache.

Whether or not the function is used depends on whether the dogpile lock can be acquired; if it can’t, a different thread or process is already running a creation function for this key against the cache. When the dogpile lock cannot be acquired and no previous value is available, the method will block until the lock is released and a new value is available. If a previous value is available, that value is returned immediately without blocking.

If the invalidate() method has been called, and the retrieved value’s timestamp is older than the invalidation timestamp, the value is unconditionally prevented from being returned. The method will attempt to acquire the dogpile lock to generate a new value, or will wait until the lock is released to return the new value.

Changed in version 0.3.0: The value is unconditionally regenerated if the creation time is older than the last call to invalidate().
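
A minimal sketch of direct usage; create_value and its arguments are illustrative:

def create_value(name):
    return "the value for %s" % name

value = region.get_or_create(
    "some key",
    create_value,
    creator_args=(("some name",), {}),
    expiration_time=300
)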

Parameters
  • key – Key to be retrieved. While it’s typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present.

  • creator – function which creates a new value.

  • creator_args

    optional tuple of (args, kwargs) that will be passed to the creator function if present.

    New in version 0.7.0.

  • expiration_time

    optional expiration time which will override the expiration time already configured on this CacheRegion if not None. To set no expiration, use the value -1.

    Note

    The CacheRegion.get_or_create.expiration_time argument is not persisted in the cache and is relevant only to this specific cache retrieval operation, relative to the creation time stored with the existing cached value. Subsequent calls to CacheRegion.get_or_create() are not affected by this value.

  • should_cache_fn

    optional callable function which will receive the value returned by the “creator”, and will then return True or False, indicating if the value should actually be cached or not. If it returns False, the value is still returned, but isn’t cached. E.g.:

    def dont_cache_none(value):
        return value is not None
    
    value = region.get_or_create("some key",
                        create_value,
                        should_cache_fn=dont_cache_none)
    

    Above, the function returns the value of create_value() if the cache is invalid; however, if the return value is None, it won’t be cached.

    New in version 0.4.3.

See also

CacheRegion.get()

CacheRegion.cache_on_arguments() - applies get_or_create() to any function using a decorator.

CacheRegion.get_or_create_multi() - multiple key/value version

get_or_create_multi(keys, creator, expiration_time=None, should_cache_fn=None)

Return a sequence of cached values based on a sequence of keys.

The behavior for generation of values based on keys corresponds to that of Region.get_or_create(), with the exception that the creator() function may be asked to generate any subset of the given keys. The list of keys to be generated is passed to creator(), and creator() should return the generated values as a sequence corresponding to the order of the keys.

The method uses the same approach as Region.get_multi() and Region.set_multi() to get and set values from the backend.

If you are using a CacheBackend or ProxyBackend that modifies values, take note that this function invokes .set_multi() for newly generated values using the same values it returns to the calling function. A correct implementation of .set_multi() will not modify values in-place on the submitted mapping dict.
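
As a sketch, a conforming creator which loads whichever subset of keys the region requests; the keys are assumed here to arrive as positional arguments, and expensive_lookup is a hypothetical loader:

def load_values(*keys_to_create):
    # return one value per requested key, in the same order
    return [expensive_lookup(key) for key in keys_to_create]

values = region.get_or_create_multi(
    ["one", "two", "three"],
    load_values,
)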

Parameters
  • keys – Sequence of keys to be retrieved.

  • creator – function which accepts a sequence of keys and returns a sequence of new values.

  • expiration_time – optional expiration time which will override the expiration time already configured on this CacheRegion if not None. To set no expiration, use the value -1.

  • should_cache_fn – optional callable function which will receive each value returned by the “creator”, and will then return True or False, indicating if the value should actually be cached or not. If it returns False, the value is still returned, but isn’t cached.

New in version 0.5.0.

invalidate(hard=True)

Invalidate this CacheRegion.

The default invalidation system works by setting a current timestamp (using time.time()) representing the “minimum creation time” for a value. Any retrieved value whose creation time is prior to this timestamp is considered to be stale. It does not affect the data in the cache in any way, and is local to this instance of CacheRegion.

Warning

The CacheRegion.invalidate() method’s default mode of operation is to set a timestamp local to this CacheRegion in this Python process only. It does not impact other Python processes or regions as the timestamp is only stored locally in memory. To implement invalidation where the timestamp is stored in the cache or similar so that all Python processes can be affected by an invalidation timestamp, implement a custom RegionInvalidationStrategy.

Once set, the invalidation time is honored by the CacheRegion.get_or_create(), CacheRegion.get_or_create_multi() and CacheRegion.get() methods.

The method supports both “hard” and “soft” invalidation options. With “hard” invalidation, CacheRegion.get_or_create() will force an immediate regeneration of the value which all getters will wait for. With “soft” invalidation, subsequent getters will return the “old” value until the new one is available.

Usage of “soft” invalidation requires that the region or the method is given a non-None expiration time.
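
For example (assuming the region was configured with a non-None expiration time):

# hard invalidation: the next get_or_create() call regenerates
# immediately, and all getters wait for the new value
region.invalidate()

# soft invalidation: getters continue returning the old value
# until a new one has been generated
region.invalidate(hard=False)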

New in version 0.3.0.

Parameters

hard

if True, cache values will all require immediate regeneration; dogpile logic won’t be used. If False, the creation time of existing values will be pushed back to before the expiration time, so that the old value is returned while a regeneration is invoked.

New in version 0.5.1.

property is_configured

Return True if the backend has been configured via the CacheRegion.configure() method already.

New in version 0.5.1.

set(key, value)

Place a new value in the cache under the given key.

set_multi(mapping)

Place new values in the cache under the given keys.

New in version 0.5.0.
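
For example (keys and values are illustrative):

region.set("one", 1)
region.set_multi({"two": 2, "three": 3})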

wrap(proxy)

Takes a ProxyBackend instance or class and wraps the attached backend.
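
For illustration, a sketch that attaches a hypothetical logging proxy to an already-configured region:

from dogpile.cache.proxy import ProxyBackend

class LoggingProxy(ProxyBackend):
    def set(self, key, value):
        print("setting key %r" % key)   # illustrative side effect
        self.proxied.set(key, value)

region.wrap(LoggingProxy)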

class dogpile.cache.region.DefaultInvalidationStrategy
invalidate(hard=True)

Region invalidation.

CacheRegion propagated call. The default invalidation system works by setting a current timestamp (using time.time()) to consider all older timestamps effectively invalidated.

is_hard_invalidated(timestamp)

Check timestamp to determine if it was hard invalidated.

Returns

Boolean. True if timestamp is older than the last region invalidation time and region is invalidated in hard mode.

is_invalidated(timestamp)

Check timestamp to determine if it was invalidated.

Returns

Boolean. True if timestamp is older than the last region invalidation time.

is_soft_invalidated(timestamp)

Check timestamp to determine if it was soft invalidated.

Returns

Boolean. True if timestamp is older than the last region invalidation time and region is invalidated in soft mode.

was_hard_invalidated()

Indicate the region was invalidated in hard mode.

Returns

Boolean. True if region was invalidated in hard mode.

was_soft_invalidated()

Indicate the region was invalidated in soft mode.

Returns

Boolean. True if region was invalidated in soft mode.

class dogpile.cache.region.RegionInvalidationStrategy

Region invalidation strategy interface.

Implement this interface and pass implementation instance to CacheRegion.configure() to override default region invalidation.

Example:

class CustomInvalidationStrategy(RegionInvalidationStrategy):

    def __init__(self):
        self._soft_invalidated = None
        self._hard_invalidated = None

    def invalidate(self, hard=None):
        if hard:
            self._soft_invalidated = None
            self._hard_invalidated = time.time()
        else:
            self._soft_invalidated = time.time()
            self._hard_invalidated = None

    def is_invalidated(self, timestamp):
        return ((self._soft_invalidated and
                 timestamp < self._soft_invalidated) or
                (self._hard_invalidated and
                 timestamp < self._hard_invalidated))

    def was_hard_invalidated(self):
        return bool(self._hard_invalidated)

    def is_hard_invalidated(self, timestamp):
        return (self._hard_invalidated and
                timestamp < self._hard_invalidated)

    def was_soft_invalidated(self):
        return bool(self._soft_invalidated)

    def is_soft_invalidated(self, timestamp):
        return (self._soft_invalidated and
                timestamp < self._soft_invalidated)

The custom implementation is injected into a CacheRegion at configure time using the CacheRegion.configure.region_invalidator parameter:

region = CacheRegion()

region = region.configure(region_invalidator=CustomInvalidationStrategy())  # noqa

Invalidation strategies that wish to have access to the CacheRegion itself should construct the invalidator given the region as an argument:

class MyInvalidator(RegionInvalidationStrategy):
    def __init__(self, region):
        self.region = region
        # ...

    # ...

region = CacheRegion()
region = region.configure(region_invalidator=MyInvalidator(region))

New in version 0.6.2.

invalidate(hard=True)

Region invalidation.

CacheRegion propagated call. The default invalidation system works by setting a current timestamp (using time.time()) to consider all older timestamps effectively invalidated.

is_hard_invalidated(timestamp)

Check timestamp to determine if it was hard invalidated.

Returns

Boolean. True if timestamp is older than the last region invalidation time and region is invalidated in hard mode.

is_invalidated(timestamp)

Check timestamp to determine if it was invalidated.

Returns

Boolean. True if timestamp is older than the last region invalidation time.

is_soft_invalidated(timestamp)

Check timestamp to determine if it was soft invalidated.

Returns

Boolean. True if timestamp is older than the last region invalidation time and region is invalidated in soft mode.

was_hard_invalidated()

Indicate the region was invalidated in hard mode.

Returns

Boolean. True if region was invalidated in hard mode.

was_soft_invalidated()

Indicate the region was invalidated in soft mode.

Returns

Boolean. True if region was invalidated in soft mode.

dogpile.cache.region.make_region(*arg, **kw)

Instantiate a new CacheRegion.

Currently, make_region() is a passthrough to CacheRegion. See that class for constructor arguments.

dogpile.cache.region.value_version = 1

An integer placed in the CachedValue so that new versions of dogpile.cache can detect cached values from a previous, backwards-incompatible version.

Backend API

See the section Creating Backends for details on how to register new backends or Changing Backend Behavior for details on how to alter the behavior of existing backends.

class dogpile.cache.api.CacheBackend(arguments)

Base class for backend implementations.
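
For illustration, a minimal sketch of a dictionary-based backend; registering it so that it may be named in CacheRegion.configure() is covered in the Creating Backends section:

from dogpile.cache.api import CacheBackend, NO_VALUE

class SimpleDictBackend(CacheBackend):
    def __init__(self, arguments):
        self._store = {}

    def get(self, key):
        return self._store.get(key, NO_VALUE)

    def get_multi(self, keys):
        return [self.get(key) for key in keys]

    def set(self, key, value):
        self._store[key] = value

    def set_multi(self, mapping):
        self._store.update(mapping)

    def delete(self, key):
        self._store.pop(key, None)

    def delete_multi(self, keys):
        for key in keys:
            self.delete(key)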

delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

get_mutex(key)

Return an optional mutexing object for the given key.

This object need only provide an acquire() and release() method.

May return None, in which case the dogpile lock will use a regular threading.Lock object to mutex concurrent threads for value creation. The default implementation returns None.

Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.

A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.
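
As a sketch of the throttling variant described above, a backend might keep a fixed pool of plain thread mutexes and select one by a modulus of the key’s hash (illustrative only; SimpleDictBackend is the sketch shown earlier):

import threading

class ThrottledBackend(SimpleDictBackend):
    # at most eight value-creation operations may proceed at once
    _mutexes = [threading.Lock() for _ in range(8)]

    def get_mutex(self, key):
        return self._mutexes[hash(key) % len(self._mutexes)]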

key_mangler = None

Key mangling function.

May be None, or otherwise declared as an ordinary instance method.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

class dogpile.cache.api.CachedValue

Represent a value stored in the cache.

CachedValue is a two-tuple of (payload, metadata), where metadata is dogpile.cache’s tracking information (currently the creation time). The metadata and tuple structure is pickleable, if the backend requires serialization.
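
For illustration, constructing and unpacking a value directly; the exact metadata keys (“ct” for creation time, “v” for value_version) are an internal detail and are shown here only as an assumption:

import time

from dogpile.cache.api import CachedValue

cv = CachedValue("some payload", {"ct": time.time(), "v": 1})

payload, metadata = cv        # behaves as a two-tuple
assert cv.payload == "some payload"
assert "ct" in cv.metadata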

property metadata

Named accessor for the dogpile.cache metadata dictionary.

property payload

Named accessor for the payload.

dogpile.cache.api.NO_VALUE = <dogpile.cache.api.NoValue object>

Value returned from get() that describes a key not present.

class dogpile.cache.api.NoValue

Describe a missing cache value.

The NO_VALUE module global should be used.

Backends

Memory Backends

Provides simple dictionary-based backends.

The two backends are MemoryBackend and MemoryPickleBackend; the latter applies a serialization step to cached values while the former places the value as given into the dictionary.

class dogpile.cache.backends.memory.MemoryBackend(arguments)

A backend that uses a plain dictionary.

There is no size management, and values which are placed into the dictionary will remain until explicitly removed. Note that Dogpile’s expiration of items is based on timestamps and does not remove them from the cache.

E.g.:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memory'
)

To use a Python dictionary of your choosing, it can be passed in with the cache_dict argument:

my_dictionary = {}
region = make_region().configure(
    'dogpile.cache.memory',
    arguments={
        "cache_dict":my_dictionary
    }
)
delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

class dogpile.cache.backends.memory.MemoryPickleBackend(arguments)

A backend that uses a plain dictionary, but serializes objects on MemoryBackend.set() and deserializes them on MemoryBackend.get().

E.g.:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memory_pickle'
)

The usage of pickle to serialize cached values means that the object placed in the cache is a copy of the original, so that subsequent changes to the given object aren’t reflected in the cached value; the backend thus behaves the same way as other backends which make use of serialization.

The serialization is performed via pickle, and incurs the same performance hit in doing so as that of other backends; in this way the MemoryPickleBackend performance is somewhere in between that of the pure MemoryBackend and the remote server oriented backends such as that of Memcached or Redis.

Pickle behavior here is the same as that of the Redis backend, using either cPickle or pickle and specifying HIGHEST_PROTOCOL upon serialization.

New in version 0.5.3.

Memcached Backends

Provides backends for talking to memcached.

class dogpile.cache.backends.memcached.GenericMemcachedBackend(arguments)

Base class for memcached backends.

This base class accepts a number of parameters common to all backends.

Parameters
  • url – the string URL to connect to. Can be a single string or a list of strings. This is the only argument that’s required.

  • distributed_lock – boolean, when True, will use a memcached-lock as the dogpile lock (see MemcachedLock). Use this when multiple processes will be talking to the same memcached instance. When left at False, dogpile will coordinate on a regular threading mutex.

  • lock_timeout

    integer, number of seconds after acquiring a lock that memcached should expire it. This argument is only valid when distributed_lock is True.

    New in version 0.5.7.

  • memcached_expire_time

    integer, when present will be passed as the time parameter to pylibmc.Client.set. This is used to set the memcached expiry time for a value.

    Note

    This parameter is different from Dogpile’s own expiration_time, which is the number of seconds after which Dogpile will consider the value to be expired. When Dogpile considers a value to be expired, it continues to use the value until generation of a new value is complete, when using CacheRegion.get_or_create(). Therefore, if you are setting memcached_expire_time, you’ll want to make sure it is greater than expiration_time by at least enough seconds for new values to be generated, else the value won’t be available during a regeneration, forcing all threads to wait for a regeneration each time a value expires.

The GenericMemcachedBackend uses a threading.local() object to store individual client objects per thread, as most modern memcached clients do not appear to be inherently threadsafe.

In particular, threading.local() has the advantage over pylibmc’s built-in thread pool in that it automatically discards objects associated with a particular thread when that thread ends.

property client

Return the memcached client.

This uses a threading.local by default as it appears most modern memcached libs aren’t inherently threadsafe.

delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

get_mutex(key)

Return an optional mutexing object for the given key.

This object need only provide an acquire() and release() method.

May return None, in which case the dogpile lock will use a regular threading.Lock object to mutex concurrent threads for value creation. The default implementation returns None.

Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.

A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_arguments = {}

Additional arguments which will be passed to the set() method.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

class dogpile.cache.backends.memcached.MemcachedBackend(arguments)

A backend using the standard Python-memcached library.

Example:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.memcached',
    expiration_time = 3600,
    arguments = {
        'url':"127.0.0.1:11211"
    }
)
class dogpile.cache.backends.memcached.PylibmcBackend(arguments)

A backend for the pylibmc memcached client.

A configuration illustrating several of the optional arguments described in the pylibmc documentation:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.pylibmc',
    expiration_time = 3600,
    arguments = {
        'url':["127.0.0.1"],
        'binary':True,
        'behaviors':{"tcp_nodelay": True,"ketama":True}
    }
)

Arguments accepted here include those of GenericMemcachedBackend, as well as those below.

Parameters
  • binary – sets the binary flag understood by pylibmc.Client.

  • behaviors – a dictionary which will be passed to pylibmc.Client as the behaviors parameter.

  • min_compress_len – Integer, will be passed as the min_compress_len parameter to the pylibmc.Client.set method.

class dogpile.cache.backends.memcached.BMemcachedBackend(arguments)

A backend for the python-binary-memcached memcached client.

This is a pure Python memcached client which includes the ability to authenticate with a memcached server using SASL.

A typical configuration using username/password:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.bmemcached',
    expiration_time = 3600,
    arguments = {
        'url':["127.0.0.1"],
        'username':'scott',
        'password':'tiger'
    }
)

Arguments which can be passed to the arguments dictionary include:

Parameters
  • username – optional username, will be used for SASL authentication.

  • password – optional password, will be used for SASL authentication.

delete_multi(keys)

The python-binary-memcached API does not implement delete_multi.

class dogpile.cache.backends.memcached.MemcachedLock(client_fn, key, timeout=0)

Simple distributed lock using memcached.

This is an adaptation of the lock featured at http://amix.dk/blog/post/19386

Redis Backends

Provides backends for talking to Redis.

class dogpile.cache.backends.redis.RedisBackend(arguments)

A Redis backend, using the redis-py library.

Example configuration:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.redis',
    arguments = {
        'host': 'localhost',
        'port': 6379,
        'db': 0,
        'redis_expiration_time': 60*60*2,   # 2 hours
        'distributed_lock': True
        }
)

Arguments accepted in the arguments dictionary:

Parameters
  • url

    string. If provided, will override separate host/port/db params. The format is that accepted by StrictRedis.from_url().

    New in version 0.4.1.

  • host – string, default is localhost.

  • password

    string, default is no password.

    New in version 0.4.1.

  • port – integer, default is 6379.

  • db – integer, default is 0.

  • redis_expiration_time – integer, number of seconds after setting a value that Redis should expire it. This should be larger than dogpile’s cache expiration. By default no expiration is set.

  • distributed_lock – boolean, when True, will use a redis-lock as the dogpile lock. Use this when multiple processes will be talking to the same redis instance. When left at False, dogpile will coordinate on a regular threading mutex.

  • lock_timeout

    integer, number of seconds after acquiring a lock that Redis should expire it. This argument is only valid when distributed_lock is True.

    New in version 0.5.0.

  • socket_timeout

    float, seconds for socket timeout. Default is None (no timeout).

    New in version 0.5.4.

  • lock_sleep

    integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when distributed_lock is True.

    New in version 0.5.0.

  • connection_pool

    redis.ConnectionPool object. If provided, this object supersedes other connection arguments passed to the redis.StrictRedis instance, including url and/or host as well as socket_timeout, and will be passed to redis.StrictRedis as the source of connectivity.

    New in version 0.5.4.

delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

get_mutex(key)

Return an optional mutexing object for the given key.

This object need only provide an acquire() and release() method.

May return None, in which case the dogpile lock will use a regular threading.Lock object to mutex concurrent threads for value creation. The default implementation returns None.

Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.

A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

File Backends

Provides backends that deal with local filesystem access.

class dogpile.cache.backends.file.DBMBackend(arguments)

A file-backend using a dbm file to store keys.

Basic usage:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.dbm',
    expiration_time = 3600,
    arguments = {
        "filename":"/path/to/cachefile.dbm"
    }
)

DBM access is provided using the Python anydbm module, which selects a platform-specific dbm module to use. This may be made more configurable in a future release.

Note that different dbm modules have different behaviors. Some dbm implementations handle their own locking, while others don’t. The DBMBackend uses a read/write lockfile by default, which is compatible even with those DBM implementations for which this is unnecessary, though the behavior can be disabled.

The DBM backend by default makes use of two lockfiles. One is in order to protect the DBM file itself from concurrent writes, the other is to coordinate value creation (i.e. the dogpile lock). By default, these lockfiles use the flock() system call for locking; this is only available on Unix platforms. An alternative lock implementation, such as one which is based on threads or uses a third-party system such as portalocker, can be dropped in using the lock_factory argument in conjunction with the AbstractFileLock base class.

Currently, the dogpile lock is against the entire DBM file, not per key. This means there can only be one “creator” job running at a time per dbm file.

A future improvement might be to have the dogpile lock using a filename that’s based on a modulus of the key. Locking on a filename that uniquely corresponds to the key is problematic, since it’s not generally safe to delete lockfiles as the application runs, implying an unlimited number of key-based files would need to be created and never deleted.

Parameters to the arguments dictionary are below.

Parameters
  • filename – path of the filename in which to create the DBM file. Note that some dbm backends will change this name to have additional suffixes.

  • rw_lockfile – the name of the file to use for read/write locking. If omitted, a default name is used by appending the suffix “.rw.lock” to the DBM filename. If False, then no lock is used.

  • dogpile_lockfile – the name of the file to use for value creation, i.e. the dogpile lock. If omitted, a default name is used by appending the suffix “.dogpile.lock” to the DBM filename. If False, then dogpile.cache uses the default dogpile lock, a plain thread-based mutex.

  • lock_factory

    a function or class which provides for a read/write lock. Defaults to FileLock. Custom implementations need to implement context-manager based read() and write() functions - the AbstractFileLock class is provided as a base class which provides these methods based on individual read/write lock functions. E.g. to replace the lock with the dogpile.core ReadWriteMutex:

    from dogpile.core.readwrite_lock import ReadWriteMutex
    from dogpile.cache.backends.file import AbstractFileLock
    
    class MutexLock(AbstractFileLock):
        def __init__(self, filename):
            self.mutex = ReadWriteMutex()
    
        def acquire_read_lock(self, wait):
            ret = self.mutex.acquire_read_lock(wait)
            return wait or ret
    
        def acquire_write_lock(self, wait):
            ret = self.mutex.acquire_write_lock(wait)
            return wait or ret
    
        def release_read_lock(self):
            return self.mutex.release_read_lock()
    
        def release_write_lock(self):
            return self.mutex.release_write_lock()
    
    from dogpile.cache import make_region
    
    region = make_region().configure(
        "dogpile.cache.dbm",
        expiration_time=300,
        arguments={
            "filename": "file.dbm",
            "lock_factory": MutexLock
        }
    )
    

    While the included FileLock uses fcntl.flock(), a Windows-compatible implementation can be built using a library such as portalocker.

    New in version 0.5.2.

delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

get_mutex(key)

Return an optional mutexing object for the given key.

This object need only provide an acquire() and release() method.

May return None, in which case the dogpile lock will use a regular threading.Lock object to mutex concurrent threads for value creation. The default implementation returns None.

Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.

A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

class dogpile.cache.backends.file.FileLock(filename)

Use lockfiles to coordinate read/write access to a file.

Only works on Unix systems, using fcntl.flock().

acquire_read_lock(wait)

Acquire a ‘reader’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

acquire_write_lock(wait)

Acquire a ‘write’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

property is_open

Optional method.

release_read_lock()

Release a ‘reader’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

release_write_lock()

Release a ‘writer’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

class dogpile.cache.backends.file.AbstractFileLock(filename)

Coordinate read/write access to a file.

Typically this is a file-based lock, but it doesn’t necessarily have to be.

The default implementation here is FileLock.

Implementations should provide the following methods:

* __init__()
* acquire_read_lock()
* acquire_write_lock()
* release_read_lock()
* release_write_lock()

The __init__() method accepts a single argument “filename”, which may be used as the “lock file”, for those implementations that use a lock file.

Note that multithreaded environments must provide a thread-safe version of this lock. The recommended approach for file- descriptor-based locks is to use a Python threading.local() so that a unique file descriptor is held per thread. See the source code of FileLock for an implementation example.

acquire(wait=True)

Acquire the “write” lock.

This is a direct call to AbstractFileLock.acquire_write_lock().

acquire_read_lock(wait)

Acquire a ‘reader’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

acquire_write_lock(wait)

Acquire a ‘write’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

property is_open

Optional method.

read()

Provide a context manager for the “read” lock.

This method makes use of AbstractFileLock.acquire_read_lock() and AbstractFileLock.release_read_lock()

release()

Release the “write” lock.

This is a direct call to AbstractFileLock.release_write_lock().

release_read_lock()

Release a ‘reader’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

release_write_lock()

Release a ‘writer’ lock.

Raises NotImplementedError by default, must be implemented by subclasses.

write()

Provide a context manager for the “write” lock.

This method makes use of AbstractFileLock.acquire_write_lock() and AbstractFileLock.release_write_lock()
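
For illustration, a sketch of the context-manager usage, assuming a FileLock (or other AbstractFileLock implementation) pointed at a hypothetical lock file:

from dogpile.cache.backends.file import FileLock

lock = FileLock("/path/to/lockfile")

with lock.write():
    pass  # exclusive access while writing

with lock.read():
    pass  # shared access while reading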

Proxy Backends

Provides a utility and a decorator class that allow for modifying the behavior of different backends without altering the class itself or having to extend the base backend.

New in version 0.5.0: Added support for the ProxyBackend class.

class dogpile.cache.proxy.ProxyBackend(*args, **kwargs)

A decorator class for altering the functionality of backends.

Basic usage:

from dogpile.cache import make_region
from dogpile.cache.proxy import ProxyBackend

class MyFirstProxy(ProxyBackend):
    def get(self, key):
        # ... custom code goes here ...
        return self.proxied.get(key)

    def set(self, key, value):
        # ... custom code goes here ...
        self.proxied.set(key, value)

class MySecondProxy(ProxyBackend):
    def get(self, key):
        # ... custom code goes here ...
        return self.proxied.get(key)


region = make_region().configure(
    'dogpile.cache.dbm',
    expiration_time = 3600,
    arguments = {
        "filename":"/path/to/cachefile.dbm"
    },
    wrap = [ MyFirstProxy, MySecondProxy ]
)

Classes that extend ProxyBackend can be stacked together. The .proxied property will always point to either the concrete backend instance or the next proxy in the chain that a method can be delegated towards.

New in version 0.5.0.

delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

get_mutex(key)

Return an optional mutexing object for the given key.

This object need only provide an acquire() and release() method.

May return None, in which case the dogpile lock will use a regular threading.Lock object to mutex concurrent threads for value creation. The default implementation returns None.

Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.

A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

wrap(backend)

Take a backend as an argument and set up the self.proxied property. Return an object that can be used as a backend by a CacheRegion object.

Null Backend

The Null backend does not do any caching at all. It can be used to test behavior without caching, or as a means of disabling caching for a region that is otherwise used normally.

New in version 0.5.4.

class dogpile.cache.backends.null.NullBackend(arguments)

A “null” backend that effectively disables all cache operations.

Basic usage:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.null'
)
delete(key)

Delete a value from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

delete_multi(keys)

Delete multiple values from the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any.

The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists.

New in version 0.5.0.

get(key)

Retrieve a value from the cache.

The returned value should be an instance of CachedValue, or NO_VALUE if not present.

get_multi(keys)

Retrieve multiple values from the cache.

The returned value should be a list, corresponding to the list of keys given.

New in version 0.5.0.

get_mutex(key)

Return an optional mutexing object for the given key.

This object need only provide an acquire() and release() method.

May return None, in which case the dogpile lock will use a regular threading.Lock object to mutex concurrent threads for value creation. The default implementation returns None.

Different backends may want to provide various kinds of “mutex” objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend.

A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key’s hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time.

set(key, value)

Set a value in the cache.

The key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

set_multi(mapping)

Set multiple values in the cache.

mapping is a dict in which the key will be whatever was passed to the registry, processed by the “key mangling” function, if any. The value will always be an instance of CachedValue.

When implementing a new CacheBackend or customizing via ProxyBackend, be aware that when this method is invoked by Region.get_or_create_multi(), the mapping values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so ‘in-place’ on the mapping dict – that will have the undesirable effect of modifying the returned values as well.

New in version 0.5.0.

Exceptions

Exception classes for dogpile.cache.

exception dogpile.cache.exception.DogpileCacheException

Base Exception for dogpile.cache exceptions to inherit from.

exception dogpile.cache.exception.PluginNotFound

The specified plugin could not be found.

New in version 0.6.4.

exception dogpile.cache.exception.RegionAlreadyConfigured

CacheRegion instance is already configured.

exception dogpile.cache.exception.RegionNotConfigured

CacheRegion instance has not been configured.

exception dogpile.cache.exception.ValidationError

Error validating a value or option.

Plugins

Mako Integration

dogpile.cache includes a Mako plugin that replaces Beaker as the cache backend. Set up a Mako template lookup using the “dogpile.cache” cache implementation and a region dictionary:

from dogpile.cache import make_region
from mako.lookup import TemplateLookup

my_regions = {
    "local":make_region().configure(
                "dogpile.cache.dbm",
                expiration_time=360,
                arguments={"filename":"file.dbm"}
            ),
    "memcached":make_region().configure(
                "dogpile.cache.pylibmc",
                expiration_time=3600,
                arguments={"url":["127.0.0.1"]}
            )
}

mako_lookup = TemplateLookup(
    directories=["/myapp/templates"],
    cache_impl="dogpile.cache",
    cache_args={
        'regions':my_regions
    }
)

To use the above configuration in a template, use the cached=True argument on any Mako tag which accepts it, in conjunction with the name of the desired region as the cache_region argument:

<%def name="mysection()" cached="True" cache_region="memcached">
    some content that's cached
</%def>
class dogpile.cache.plugins.mako_cache.MakoPlugin(cache)

A Mako CacheImpl which talks to dogpile.cache.

get(key, **kw)

Retrieve a value from the cache.

Parameters
  • key – the value’s key.

  • **kw – cache configuration arguments.

get_or_create(key, creation_function, **kw)

Retrieve a value from the cache, using the given creation function to generate a new value.

This function must return a value, either from the cache, or via the given creation function. If the creation function is called, the newly created value should be populated into the cache under the given key before being returned.

Parameters
  • key – the value’s key.

  • creation_function – function that when called generates a new value.

  • **kw – cache configuration arguments.

invalidate(key, **kw)

Invalidate a value in the cache.

Parameters
  • key – the value’s key.

  • **kw – cache configuration arguments.

Utilities

dogpile.cache.util.function_key_generator(namespace, fn, to_str=<class 'str'>)

Return a function that generates a string key, based on a given function as well as arguments to the returned function itself.

This is used by CacheRegion.cache_on_arguments() to generate a cache key from a decorated function.

An alternate function may be used by specifying the CacheRegion.function_key_generator argument for CacheRegion.
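
For illustration, a sketch of an alternate generator that skips the first positional argument (e.g. self on methods); the key format here loosely mirrors the default but is not intended to match it exactly:

from dogpile.cache import make_region

def skip_first_arg_key_generator(namespace, fn, to_str=str):
    prefix = fn.__module__ + ":" + fn.__name__
    if namespace is not None:
        prefix += "|" + namespace

    def generate_key(*args):
        # ignore args[0], e.g. ``self`` on a method
        return prefix + "|" + " ".join(to_str(a) for a in args[1:])

    return generate_key

region = make_region(
    function_key_generator=skip_first_arg_key_generator
)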

See also

kwarg_function_key_generator() - similar function that also takes keyword arguments into account

dogpile.cache.util.kwarg_function_key_generator(namespace, fn, to_str=<class 'str'>)

Return a function that generates a string key, based on a given function as well as arguments to the returned function itself.

For kwargs passed in, a dict of all argument names (keys) and argument values is built, including default arguments from the argspec; the entries are then alphabetized before the key is generated.

New in version 0.6.2.

See also

function_key_generator() - default key generation function

dogpile.cache.util.sha1_mangle_key(key)

A SHA1 key mangler.

dogpile.cache.util.length_conditional_mangler(length, mangler)

A key mangler that mangles the key if its length exceeds a certain threshold.
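
For illustration, a sketch combining the two utilities as a region-wide key mangler, so that only keys longer than a threshold (e.g. approaching memcached’s key-length limit) are SHA1-hashed:

from dogpile.cache import make_region
from dogpile.cache.util import length_conditional_mangler, sha1_mangle_key

mangler = length_conditional_mangler(250, sha1_mangle_key)

region = make_region(key_mangler=mangler).configure(
    "dogpile.cache.memory"
)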

dogpile Core

class dogpile.Lock(mutex, creator, value_and_created_fn, expiretime, async_creator=None)

Dogpile lock class.

Provides an interface around an arbitrary mutex that allows one thread/process to be elected as the creator of a new value, while other threads/processes continue to return the previous version of that value.

Parameters
  • mutex – A mutex object that provides acquire() and release() methods.

  • creator – Callable which returns a tuple of the form (new_value, creation_time). “new_value” should be a newly generated value representing completed state. “creation_time” should be a floating point time value which is relative to Python’s time.time() call, representing the time at which the value was created. This time value should be associated with the created value.

  • value_and_created_fn – Callable which returns a tuple of the form (existing_value, creation_time). This basically should return what the last local call to the creator() callable has returned, i.e. the value and the creation time, which would be assumed here to be from a cache. If the value is not available, the NeedRegenerationException exception should be thrown.

  • expiretime – Expiration time in seconds. Set to None for never expires. This timestamp is compared to the creation_time result and time.time() to determine if the value returned by value_and_created_fn is “expired”.

  • async_creator – A callable. If specified, this callable will be passed the mutex as an argument and is responsible for releasing the mutex after it finishes some asynchronous value creation. The intent is for this to be used to defer invocation of the creator callable until some later time.
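
For illustration, a minimal sketch wiring a Lock to a plain in-memory slot; all names here are illustrative:

import threading
import time

from dogpile import Lock, NeedRegenerationException

slot = {}
mutex = threading.Lock()

def value_and_created_fn():
    if "value" not in slot:
        raise NeedRegenerationException()
    return slot["value"]          # (value, creation_time) tuple

def creator():
    value, created = "new value", time.time()
    slot["value"] = (value, created)
    return value, created

# within the block, ``value`` is the freshly created value or a
# still-valid previous one
with Lock(mutex, creator, value_and_created_fn, 60) as value:
    print(value)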

class dogpile.NeedRegenerationException

An exception that, when raised in the ‘with’ block, forces the ‘has_value’ flag to False and incurs a regeneration of the value.

class dogpile.util.ReadWriteMutex

A mutex which allows multiple readers, single writer.

ReadWriteMutex uses a Python threading.Condition to provide this functionality across threads within a process.

The Beaker package also contained a file-lock based version of this concept, so that readers/writers could be synchronized across processes with a common filesystem. A future Dogpile release may include this additional class at some point.

acquire_read_lock(wait=True)

Acquire the ‘read’ lock.

acquire_write_lock(wait=True)

Acquire the ‘write’ lock.

release_read_lock()

Release the ‘read’ lock.

release_write_lock()

Release the ‘write’ lock.
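
For example, a sketch of reader/writer coordination:

from dogpile.util import ReadWriteMutex

rw_mutex = ReadWriteMutex()

def reader():
    rw_mutex.acquire_read_lock()
    try:
        pass  # multiple readers may hold the lock concurrently
    finally:
        rw_mutex.release_read_lock()

def writer():
    rw_mutex.acquire_write_lock()
    try:
        pass  # exclusive access
    finally:
        rw_mutex.release_write_lock()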

class dogpile.util.NameRegistry(creator)

Generates and returns an object, keeping it as a singleton for a given identifier for as long as it is strongly referenced.

e.g.:

class MyFoo(object):
    "some important object."
    def __init__(self, identifier):
        self.identifier = identifier

registry = NameRegistry(MyFoo)

# thread 1:
my_foo = registry.get("foo1")

# thread 2
my_foo = registry.get("foo1")

Above, my_foo in both thread #1 and #2 will be the same object. The constructor for MyFoo will be called once, passing the identifier foo1 as the argument.

When thread 1 and thread 2 both complete or otherwise delete references to my_foo, the object is removed from the NameRegistry as a result of Python garbage collection.

Parameters

creator – A function that will create a new value, given the identifier passed to the NameRegistry.get() method.

get(identifier, *args, **kw)

Get and possibly create the value.

Parameters
  • identifier – Hash key for the value. If the creation function is called, this identifier will also be passed to the creation function.

  • *args, **kw – Additional arguments which will also be passed to the creation function if it is called.

dogpile.cache-0.9.0/docs/build/0000775000175000017500000000000013555610710017325 5ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/build/Makefile0000664000175000017500000000645513555610667021012 0ustar classicclassic00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = output # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dist-html same as html, but places files in /doc" @echo " dirhtml to make HTML files named index.html in directories" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dist-html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html cp -R $(BUILDDIR)/html/* ../ rm -fr $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in ../." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Alembic.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Alembic.qhc" latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
dogpile.cache-0.9.0/docs/build/_static/0000775000175000017500000000000013555610710020753 5ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/build/_static/nature_override.css0000664000175000017500000000052713555610667024701 0ustar classicclassic00000000000000@import url("nature.css"); @import url("site_custom_css.css"); .versionadded, .versionchanged, .deprecated { background-color: #FFFFCC; border: 1px solid #FFFF66; margin-bottom: 10px; margin-top: 10px; padding: 7px; } .versionadded > p > span, .versionchanged > p > span, .deprecated > p > span{ font-style: italic; } dogpile.cache-0.9.0/docs/build/_static/site_custom_css.css0000664000175000017500000000000013555610667024674 0ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/build/_templates/0000775000175000017500000000000013555610710021462 5ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/build/_templates/site_custom_sidebars.html0000664000175000017500000000000013555610667026563 0ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/build/api.rst0000664000175000017500000000251513555610667020646 0ustar classicclassic00000000000000=== API === Region ====== .. automodule:: dogpile.cache.region :members: Backend API ============= See the section :ref:`creating_backends` for details on how to register new backends or :ref:`changing_backend_behavior` for details on how to alter the behavior of existing backends. .. automodule:: dogpile.cache.api :members: Backends ========== .. automodule:: dogpile.cache.backends.memory :members: .. automodule:: dogpile.cache.backends.memcached :members: .. automodule:: dogpile.cache.backends.redis :members: .. automodule:: dogpile.cache.backends.file :members: .. automodule:: dogpile.cache.proxy :members: .. automodule:: dogpile.cache.backends.null :members: Exceptions ========== .. automodule:: dogpile.cache.exception :members: Plugins ======== .. automodule:: dogpile.cache.plugins.mako_cache :members: Utilities ========= .. currentmodule:: dogpile.cache.util .. autofunction:: function_key_generator .. autofunction:: kwarg_function_key_generator .. autofunction:: sha1_mangle_key .. autofunction:: length_conditional_mangler dogpile Core ============ .. autoclass:: dogpile.Lock :members: .. autoclass:: dogpile.NeedRegenerationException :members: .. autoclass:: dogpile.util.ReadWriteMutex :members: .. autoclass:: dogpile.util.NameRegistry :members: dogpile.cache-0.9.0/docs/build/builder.py0000664000175000017500000000041213555610667021335 0ustar classicclassic00000000000000 def autodoc_skip_member(app, what, name, obj, skip, options): if what == 'class' and skip and name in ('__init__',) and obj.__doc__: return False else: return skip def setup(app): app.connect('autodoc-skip-member', autodoc_skip_member) dogpile.cache-0.9.0/docs/build/changelog.rst0000600000175000017500000007063713555610703022013 0ustar classicclassic00000000000000============== Changelog ============== .. changelog:: :version: 0.9.0 :released: Mon Oct 28 2019 .. change:: :tags: feature Added logging facililities into :class:`.CacheRegion`, to indicate key events such as cache keys missing or regeneration of values. As these can be very high volume log messages, ``logging.DEBUG`` is used as the log level for the events. Pull request courtesy Stéphane Brunner. .. changelog:: :version: 0.8.0 :released: Fri Sep 20 2019 .. change:: :tags: bug, setup :tickets: 157 Removed the "python setup.py test" feature in favor of a straight run of "tox". 

        Per PyPA / pytest developers, "setup.py" commands are in general
        headed towards deprecation in favor of tox.  The tox.ini script has
        been updated such that running "tox" with no arguments will perform a
        single run of the test suite against the default installed Python
        interpreter.

        .. seealso::

            https://github.com/pypa/setuptools/issues/1684

            https://github.com/pytest-dev/pytest/issues/5534

    .. change::
        :tags: bug, py3k
        :tickets: 154

        Replaced the Python compatibility routines for ``getfullargspec()``
        with a fully vendored version from Python 3.3.  Originally, Python
        was emitting deprecation warnings for this function in Python 3.8
        alphas.  While this change was reverted, it was observed that Python
        3 implementations for ``getfullargspec()`` are an order of magnitude
        slower as of the 3.4 series where it was rewritten against
        ``Signature``.  While Python plans to improve upon this situation,
        SQLAlchemy projects for now are using a simple replacement to avoid
        any future issues.

    .. change::
        :tags: bug, installation
        :tickets: 160

        Pinned minimum version of the Python decorator module at 4.0.0 (July,
        2015) as previous versions don't provide the API that dogpile is
        using.

    .. change::
        :tags: bug, py3k
        :tickets: 159

        Fixed the :func:`.sha1_mangle_key` key mangler to coerce incoming
        Unicode objects into bytes as is required by the Py3k version of this
        function.

.. changelog::
    :version: 0.7.1
    :released: Tue Dec 11 2018

    .. change::
        :tags: bug, region
        :tickets: 139

        Fixed regression in 0.7.0 caused by :ticket:`136` where the assumed
        arguments for the :paramref:`.CacheRegion.async_creation_runner`
        expanded to include the new
        :paramref:`.CacheRegion.get_or_create.creator_args` parameter, as it
        was not tested that the async runner would be implicitly called with
        these arguments when the :meth:`.CacheRegion.cache_on_arguments`
        decorator was used.  The exact signature of ``async_creation_runner``
        is now restored to have the same arguments in all cases.

.. changelog::
    :version: 0.7.0
    :released: Mon Dec 10 2018

    .. change::
        :tags: bug
        :tickets: 137

        The ``decorator`` module is now used when creating function
        decorators within :meth:`.CacheRegion.cache_on_arguments` and
        :meth:`.CacheRegion.cache_multi_on_arguments` so that function
        signatures are preserved.  Pull request courtesy ankitpatel96.

        Additionally adds a small performance enhancement, which is to avoid
        internally creating a ``@wraps()`` decorator for the creator function
        on every get operation, by allowing the arguments to the creator be
        passed separately to :meth:`.CacheRegion.get_or_create`.

    .. change::
        :tags: bug, py3k
        :tickets: 129

        Fixed all Python 3.x deprecation warnings including
        ``inspect.getargspec()``.

.. changelog::
    :version: 0.6.8
    :released: Sat Nov 24 2018

    .. change::
        :tags: change

        Project hosting has moved to GitHub, under the SQLAlchemy
        organization at https://github.com/sqlalchemy/dogpile.cache

.. changelog::
    :version: 0.6.7
    :released: Thu Jul 26 2018

    .. change::
        :tags: bug
        :tickets: 128

        Fixed issue in the :meth:`.CacheRegion.get_or_create_multi` method
        which was erroneously considering the cached value as the timestamp
        field if the :meth:`.CacheRegion.invalidate` method had been used,
        usually causing a ``TypeError`` to occur, or in less frequent cases
        an invalid result for whether or not the cached value was invalid,
        leading to excessive caching or regeneration.  The issue was a
        regression caused by an implementation issue in the pluggable
        invalidation feature added in :ticket:`38`.

.. changelog::
    :version: 0.6.6
    :released: Wed Jun 27 2018

    .. change::
        :tags: feature
        :tickets: 123

        Added method :attr:`.CacheRegion.actual_backend` which calculates and
        caches the actual backend for the region, which may be abstracted by
        the use of one or more :class:`.ProxyBackend` subclasses.

    .. change::
        :tags: bug
        :tickets: 122

        Fixed a condition in the :class:`.Lock` where the "get" function
        could be called a second time unnecessarily, when returning an
        existing, expired value from the cache.

.. changelog::
    :version: 0.6.5
    :released: Mon Mar 5 2018

    .. change::
        :tags: bug
        :tickets: 119

        Fixed import issue for Python 3.7 where several variables were named
        ``async``, leading to syntax errors.  Pull request courtesy Brian
        Sheldon.

.. changelog::
    :version: 0.6.4
    :released: Mon Jun 26, 2017

    .. change::
        :tags: bug

        The method :meth:`.Region.get_or_create_multi` will no longer pass an
        empty dictionary to the cache backend if no values are ultimately to
        be stored, based on the use of the
        :paramref:`.Region.get_or_create_multi.should_cache_fn` function.
        This empty dictionary is unnecessary and can cause API problems for
        backends like that of Redis.  Pull request courtesy Tobias Sauerwein.

    .. change::
        :tags: bug

        The :attr:`.api.NO_VALUE` constant now has a fixed ``__repr__()``
        output, so that scenarios where this constant's string value ends up
        being used as a cache key do not create multiple values.  Pull
        request courtesy Paul Brown.

    .. change::
        :tags: bug

        A new exception class :class:`.exception.PluginNotFound` is now
        raised when a particular cache plugin class cannot be located either
        as a setuptools entrypoint or as a registered backend.  Previously, a
        plain ``Exception`` was thrown.  Pull request courtesy Jamie Lennox.

.. changelog::
    :version: 0.6.3
    :released: Thu May 18, 2017

    .. change::
        :tags: feature

        Added ``replace_existing_backend`` to
        :meth:`.CacheRegion.configure_from_config`.  Pull request courtesy
        Daniel Kraus.

.. changelog::
    :version: 0.6.2
    :released: Tue Aug 16 2016

    .. change::
        :tags: feature
        :tickets: 38

        Added a new system to allow custom plugins specific to the issue of
        "invalidate the entire region", using a new base class
        :class:`.RegionInvalidationStrategy`.  As there are many potential
        strategies to this (special backend function, storing special keys,
        etc.), the mechanism for both soft and hard invalidation is now
        customizable.  New approaches to region invalidation can be
        contributed as documented recipes.  Pull request courtesy Alexander
        Makarov.

    .. change::
        :tags: feature
        :tickets: 43

        Added a new cache key generator
        :func:`.kwarg_function_key_generator`, which takes keyword arguments
        as well as positional arguments into account when forming the cache
        key.

    .. change::
        :tags: bug

        Restored some more util symbols that users may have been relying upon
        (although these were not necessarily intended as user-facing):
        ``dogpile.cache.util.coerce_string_conf``,
        ``dogpile.cache.util.KeyReentrantMutex``,
        ``dogpile.cache.util.memoized_property``,
        ``dogpile.cache.util.PluginLoader``,
        ``dogpile.cache.util.to_list``.

.. changelog::
    :version: 0.6.1
    :released: Mon Jun 6 2016

    .. change::
        :tags: bug
        :tickets: 99

        Fixed imports for ``dogpile.core`` restoring ``ReadWriteMutex`` and
        ``NameRegistry`` into the base namespace, in addition to
        ``dogpile.core.nameregistry`` and ``dogpile.core.readwrite_lock``.

.. changelog::
    :version: 0.6.0
    :released: Mon Jun 6 2016

    .. change::
        :tags: feature
        :tickets: 91

        The ``dogpile.core`` library has been rolled in as part of the
        ``dogpile.cache`` distribution.  The configuration of the ``dogpile``
        name as a namespace package is also removed from ``dogpile.cache``.

        In order to allow existing installations of ``dogpile.core`` as a
        separate package to remain unaffected, the ``.core`` package has been
        retired within ``dogpile.cache`` directly; the :class:`.Lock` class
        is now available directly as ``dogpile.Lock`` and the additional
        ``dogpile.core`` constructs are under the ``dogpile.util`` namespace.

        Additionally, the long-deprecated ``dogpile.core.Dogpile`` and
        ``dogpile.core.SyncReaderDogpile`` classes have been removed.

    .. change::
        :tags: bug

        The Redis backend now creates a copy of the "arguments" dictionary
        passed to it, before popping values out of it.  This prevents the
        given dictionary from losing its keys.

    .. change::
        :tags: bug
        :tickets: 97

        Fixed bug in "null" backend where :class:`.NullLock` did not accept a
        flag for the :meth:`.NullLock.acquire` method, nor did it return a
        boolean value for "success".

.. changelog::
    :version: 0.5.7
    :released: Mon Oct 19 2015

    .. change::
        :tags: feature
        :pullreq: 37
        :tickets: 54

        Added new parameter
        :paramref:`.GenericMemcachedBackend.lock_timeout`, which, used in
        conjunction with
        :paramref:`.GenericMemcachedBackend.distributed_lock`, specifies the
        timeout used when communicating to the ``.add()`` method of the
        memcached client.  Pull request courtesy Frits Stegmann and Morgan
        Fainberg.

    .. change::
        :tags: feature
        :pullreq: 35
        :tickets: 65

        Added a new flag
        :paramref:`.CacheRegion.configure.replace_existing_backend`, which
        allows a region to have a new backend replace an existing one.  Pull
        request courtesy hbccbh.

    .. change::
        :tags: feature, tests
        :pullreq: 33

        Test suite now runs using py.test.  Pull request courtesy John
        Anderson.

    .. change::
        :tags: bug, redis
        :tickets: 74

        Repaired the :meth:`.CacheRegion.get_multi` method when used with a
        zero-length list against the redis backend.

.. changelog::
    :version: 0.5.6
    :released: Mon Feb 2 2015

    .. change::
        :tags: feature
        :pullreq: 30

        Changed the pickle protocol for the file/DBM backend to
        ``pickle.HIGHEST_PROTOCOL`` when producing new pickles, to match that
        of the redis and memorypickle backends.  Pull request courtesy
        anentropic.

.. changelog::
    :version: 0.5.5
    :released: Wed Jan 21 2015

    .. change::
        :tags: feature
        :pullreq: 26

        Added new arguments
        :paramref:`.CacheRegion.cache_on_arguments.function_key_generator`
        and
        :paramref:`.CacheRegion.cache_multi_on_arguments.function_multi_key_generator`
        which serve as per-decorator replacements for the region-wide
        :paramref:`.CacheRegion.function_key_generator` and
        :paramref:`.CacheRegion.function_multi_key_generator` parameters,
        respectively, so that custom key production schemes can be applied on
        a per-function basis within one region.  Pull request courtesy
        Hongbin Lu.

    .. change::
        :tags: bug
        :tickets: 71
        :pullreq: 25

        Fixed bug where sending -1 for the
        :paramref:`.CacheRegion.get_or_create.expiration_time` parameter to
        :meth:`.CacheRegion.get_or_create` or
        :meth:`.CacheRegion.get_or_create_multi` would fail to honor the
        setting as "no expiration time".  Pull request courtesy Hongbin Lu.

    .. change::
        :tags: bug
        :tickets: 41
        :pullreq: 28

        The ``wrap`` argument is now propagated when calling
        :meth:`.CacheRegion.configure_from_config`.  Pull request courtesy
        Jonathan Vanasco.

    .. change::
        :tags: bug

        Fixed tests under py.test, which were importing a symbol from pytest
        itself, ``is_unittest``, which has been removed.

.. changelog::
    :version: 0.5.4
    :released: Sat Jun 14 2014

    .. change::
        :tags: feature
        :pullreq: 18

        Added new :class:`.NullBackend`, for testing and cache-disabling
        purposes.  Pull request courtesy Wichert Akkerman.

    .. change::
        :tags: bug
        :pullreq: 19

        Added missing Mako test dependency to setup.py.  Pull request
        courtesy Wichert Akkerman.

    .. change::
        :tags: bug
        :tickets: 58
        :pullreq: 20

        Fixed bug where calling :meth:`.CacheRegion.get_multi` or
        :meth:`.CacheRegion.set_multi` with an empty list would cause
        failures with some backends.  Pull request courtesy Wichert Akkerman.

    .. change::
        :tags: feature
        :pullreq: 17

        Added new :paramref:`.RedisBackend.connection_pool` option on the
        Redis backend; this can be passed a ``redis.ConnectionPool`` instance
        directly.  Pull request courtesy Masayuko.

    .. change::
        :tags: feature
        :pullreq: 16

        Added new :paramref:`.RedisBackend.socket_timeout` option on the
        Redis backend.  Pull request courtesy Saulius Menkevičius.

    .. change::
        :tags: feature

        Added support for tests to run via py.test.

    .. change::
        :tags: bug
        :pullreq: 15

        Repaired the entry point for Mako templates; the name of the
        entrypoint itself was wrong vs. what was in the docs, but beyond that
        the entrypoint would load the wrong module name.  Pull request
        courtesy zoomorph.

    .. change::
        :tags: bug
        :tickets: 57
        :pullreq: 13

        The :func:`.coerce_string_conf` function, which is used by
        :meth:`.Region.configure_from_config`, will now recognize floating
        point values when parsing conf strings and deliver them as such; this
        supports non-integer values such as Redis ``lock_sleep``.  Pullreq
        courtesy Jeff Dairiki.

.. changelog::
    :version: 0.5.3
    :released: Wed Jan 8 2014

    .. change::
        :tags: bug
        :pullreq: 10

        Fixed bug where the key_mangler would get in the way of usage of the
        async_creation_runner feature within the
        :meth:`.Region.get_or_create` method, by sending in the mangled key
        instead of the original key.  The "mangled" key is only supposed to
        be exposed within the backend storage, not the creation function
        which sends the key back into the :meth:`.Region.set`, which does the
        mangling itself.  Pull request courtesy Ryan Kolak.

    .. change::
        :tags: bug, py3k

        Fixed bug where the :meth:`.Region.get_multi` method wasn't calling
        the backend correctly in Py3K (e.g. was passing a destructive
        ``map()`` object) which would cause this method to fail on the
        memcached backend.

    .. change::
        :tags: feature
        :tickets: 55

        Added a ``get()`` method to complement the ``set()``,
        ``invalidate()`` and ``refresh()`` methods established on functions
        decorated by :meth:`.CacheRegion.cache_on_arguments` and
        :meth:`.CacheRegion.cache_multi_on_arguments`.  Pullreq courtesy Eric
        Hanchrow.

    .. change::
        :tags: feature
        :tickets: 51
        :pullreq: 11

        Added a new variant on :class:`.MemoryBackend`,
        :class:`.MemoryPickleBackend`.  This backend applies
        ``pickle.dumps()`` and ``pickle.loads()`` to cached values upon set
        and get, so that similar copy-on-cache behavior as that of other
        backends is employed, guarding cached values against subsequent
        in-memory state changes.  Pullreq courtesy Jonathan Vanasco.

    .. change::
        :tags: bug
        :pullreq: 9

        Fixed a format call in the redis backend which would otherwise fail
        on Python 2.6; courtesy Jeff Dairiki.

.. changelog::
    :version: 0.5.2
    :released: Fri Nov 15 2013

    .. change::
        :tags: bug

        Fixes to routines on Windows, ensuring that the default unit tests
        pass, and an adjustment to the "soft expiration" feature to ensure
        the expiration works given Windows ``time.time()`` behavior.

    .. change::
        :tags: bug

        Added py2.6 compatibility for the unsupported ``total_seconds()``
        call in region.py.

    .. change::
        :tags: feature
        :tickets: 44

        Added a new argument ``lock_factory`` to the :class:`.DBMBackend`
        implementation.
        This allows for drop-in replacement of the default :class:`.FileLock`
        backend, which builds on ``fcntl.flock()`` and only supports Unix
        platforms.  A new abstract base :class:`.AbstractFileLock` has been
        added to provide a common base for custom lock implementations.  The
        documentation points to an example thread-based rw lock which is now
        tested on Windows.

.. changelog::
    :version: 0.5.1
    :released: Thu Oct 10 2013

    .. change::
        :tags: feature
        :tickets: 38

        The :meth:`.CacheRegion.invalidate` method now supports an option
        ``hard=True|False``.  A "hard" invalidation, equivalent to the
        existing functionality of :meth:`.CacheRegion.invalidate`, means
        :meth:`.CacheRegion.get_or_create` will not return the "old" value at
        all, forcing all getters to regenerate or wait for a regeneration.
        "soft" invalidation means that getters can continue to return the old
        value until a new one is generated.

    .. change::
        :tags: feature
        :tickets: 40

        New dogpile-specific exception classes have been added, so that
        issues like "region already configured", "region unconfigured", raise
        dogpile-specific exceptions.  Other exception classes have been made
        more specific.  Also added new accessor
        :attr:`.CacheRegion.is_configured`.  Pullreq courtesy Morgan
        Fainberg.

    .. change::
        :tags: bug

        Erroneously missed when the same change was made for ``set()`` in
        0.5.0, the Redis backend now uses ``pickle.HIGHEST_PROTOCOL`` for the
        ``set_multi()`` method as well when producing pickles.  Courtesy
        Łukasz Fidosz.

    .. change::
        :tags: bug, redis, py3k
        :tickets: 39

        Fixed an errant ``u''`` causing incompatibility in Python 3.2 in the
        Redis backend, courtesy Jimmey Mabey.

    .. change::
        :tags: bug

        The :func:`.util.coerce_string_conf` method now correctly coerces
        negative integers and those with a leading + sign.  This previously
        prevented configuring a :class:`.CacheRegion` with an
        ``expiration_time`` of ``'-1'``.  Courtesy David Beitey.

    .. change::
        :tags: bug

        The ``refresh()`` method on
        :meth:`.CacheRegion.cache_multi_on_arguments` now supports the
        ``asdict`` flag.

.. changelog::
    :version: 0.5.0
    :released: Fri Jun 21 2013

    .. change::
        :tags: misc

        Source repository has been moved to git.

    .. change::
        :tags: bug

        The Redis backend now uses ``pickle.HIGHEST_PROTOCOL`` when producing
        pickles.  Courtesy Lx Yu.

    .. change::
        :tags: bug

        :meth:`.CacheRegion.cache_on_arguments` now has a new argument
        ``to_str``, which defaults to ``str()``.  Can be replaced with
        ``unicode()`` or other functions to support caching of functions that
        accept non-unicode arguments.  Initial patch courtesy Lx Yu.

    .. change::
        :tags: feature

        Now using the ``Lock`` included with the Python ``redis`` backend,
        which adds ``lock_timeout`` and ``lock_sleep`` arguments to the
        :class:`.RedisBackend`.

    .. change::
        :tags: feature
        :tickets: 33, 35

        Added new methods :meth:`.CacheRegion.get_or_create_multi` and
        :meth:`.CacheRegion.cache_multi_on_arguments`, which make use of the
        :meth:`.CacheRegion.get_multi` and similar functions to store and
        retrieve multiple keys at once while maintaining dogpile semantics
        for each.

    .. change::
        :tags: feature
        :tickets: 36

        Added a method ``refresh()`` to functions decorated by
        :meth:`.CacheRegion.cache_on_arguments` and
        :meth:`.CacheRegion.cache_multi_on_arguments`, to complement
        ``invalidate()`` and ``set()``.

    .. change::
        :tags: feature
        :tickets: 13

        :meth:`.CacheRegion.configure` accepts an optional
        ``datetime.timedelta`` object for the ``expiration_time`` argument as
        well as an integer, courtesy Jack Lutz.

    .. change::
        :tags: feature
        :tickets: 20

        The ``expiration_time`` argument passed to
        :meth:`.CacheRegion.cache_on_arguments` may be a callable, to return
        a dynamic timeout value.  Courtesy David Beitey.

    .. change::
        :tags: feature
        :tickets: 26

        Added support for simple augmentation of existing backends using the
        :class:`.ProxyBackend` class.  Thanks to Tim Hanus for the great
        effort with development, testing, and documentation.

    .. change::
        :tags: feature
        :pullreq: 14

        Full support for multivalue get/set/delete added, using
        :meth:`.CacheRegion.get_multi`, :meth:`.CacheRegion.set_multi`,
        :meth:`.CacheRegion.delete_multi`, courtesy Marcos Araujo Sobrinho.

    .. change::
        :tags: bug
        :tickets: 27

        Fixed bug where the "name" parameter for :class:`.CacheRegion` was
        ignored entirely.  Courtesy Wichert Akkerman.

.. changelog::
    :version: 0.4.3
    :released: Thu Apr 4 2013

    .. change::
        :tags: bug

        Added support for the ``cache_timeout`` Mako argument to the Mako
        plugin, which will pass the value to the ``expiration_time`` argument
        of :meth:`.CacheRegion.get_or_create`.

    .. change::
        :tags: feature
        :pullreq: 13

        :meth:`.CacheRegion.get_or_create` and
        :meth:`.CacheRegion.cache_on_arguments` now accept a new argument
        ``should_cache_fn``, which receives the value returned by the
        "creator" and then returns True or False, where True means "cache
        plus return", False means "return the value but don't cache it."

.. changelog::
    :version: 0.4.2
    :released: Sat Jan 19 2013

    .. change::
        :tags: feature
        :pullreq: 10

        An "async creator" function can be specified to :class:`.CacheRegion`
        which allows the "creation" function to be called asynchronously or
        be substituted for another asynchronous creation scheme.  Courtesy
        Ralph Bean.

.. changelog::
    :version: 0.4.1
    :released: Sat Dec 15 2012

    .. change::
        :tags: feature
        :pullreq: 9

        The function decorated by :meth:`.CacheRegion.cache_on_arguments` now
        includes a ``set()`` method, in addition to the existing
        ``invalidate()`` method.  Like ``invalidate()``, it accepts a set of
        function arguments, but additionally accepts as the first positional
        argument a new value to place in the cache, to take the place of that
        key.  Courtesy Antoine Bertin.

    .. change::
        :tags: bug
        :tickets: 15

        Fixed bug in DBM backend whereby if an error occurred during the
        "write" operation, the file lock, if enabled, would not be released,
        thereby deadlocking the app.

    .. change::
        :tags: bug
        :tickets: 12

        The :func:`.util.function_key_generator` used by the function
        decorator no longer coerces non-unicode arguments into a Python
        unicode object on Python 2.x; this causes failures on backends such
        as DBM which on Python 2.x apparently require bytestrings.  The
        key_mangler is still needed if actual unicode arguments are being
        used by the decorated function, however.

    .. change::
        :tags: feature

        Redis backend now accepts optional "url" argument, will be passed to
        the new ``StrictRedis.from_url()`` method to determine connection
        info.  Courtesy Jon Rosebaugh.

    .. change::
        :tags: feature

        Redis backend now accepts optional "password" argument.  Courtesy Jon
        Rosebaugh.

    .. change::
        :tags: feature

        DBM backend has "fallback" when calling dbm.get() to instead use
        dictionary access + KeyError, in the case that the "gdbm" backend is
        used which does not include .get().  Courtesy Jon Rosebaugh.

.. changelog::
    :version: 0.4.0
    :released: Tue Oct 30 2012

    .. change::
        :tags: bug
        :tickets: 1

        Using dogpile.core 0.4.0 now, fixes a critical bug whereby dogpile
        pileup could occur on first value get across multiple processes, due
        to reliance upon a non-shared creation time.
This is a dogpile.core issue. .. change:: :tags: bug :tickets: Fixed missing __future__ with_statement directive in region.py. .. changelog:: :version: 0.3.1 :released: Tue Sep 25 2012 .. change:: :tags: bug :tickets: Fixed the mako_cache plugin which was not yet covered, and wasn't implementing the mako plugin API correctly; fixed docs as well. Courtesy Ben Hayden. .. change:: :tags: bug :tickets: Fixed setup so that the tests/* directory isn't yanked into the install. Courtesy Ben Hayden. .. changelog:: :version: 0.3.0 :released: Thu Jun 14 2012 .. change:: :tags: feature :tickets: get() method now checks expiration time by default. Use ignore_expiration=True to bypass this. .. change:: :tags: feature :tickets: 7 Added new invalidate() method. Sets the current timestamp as a minimum value that all retrieved values must be created after. Is honored by the get_or_create() and get() methods. .. change:: :tags: bug :tickets: 8 Fixed bug whereby region.get() didn't work if the value wasn't present. .. changelog:: :version: 0.2.4 :released: .. change:: :tags: :tickets: Fixed py3k issue with config string coerce, courtesy Alexander Fedorov .. changelog:: :version: 0.2.3 :released: Wed May 16 2012 .. change:: :tags: :tickets: 3 support "min_compress_len" and "memcached_expire_time" with python-memcached backend. Tests courtesy Justin Azoff .. change:: :tags: :tickets: 4 Add support for coercion of string config values to Python objects - ints, "false", "true", "None". .. change:: :tags: :tickets: 5 Added support to DBM file lock to allow reentrant access per key within a single thread, so that even though the DBM backend locks for the whole file, a creation function that calls upon a different key in the cache can still proceed. .. change:: :tags: :tickets: Fixed DBM glitch where multiple readers could be serialized. .. change:: :tags: :tickets: Adjust bmemcached backend to work with newly-repaired bmemcached calling API (see bmemcached ef206ed4473fec3b639e). .. changelog:: :version: 0.2.2 :released: Thu Apr 19 2012 .. change:: :tags: :tickets: add Redis backend, courtesy Ollie Rutherfurd .. changelog:: :version: 0.2.1 :released: Sun Apr 15 2012 .. change:: :tags: :tickets: move tests into tests/cache namespace .. change:: :tags: :tickets: py3k compatibility is in-place now, no 2to3 needed. .. changelog:: :version: 0.2.0 :released: Sat Apr 14 2012 .. change:: :tags: :tickets: Based on dogpile.core now, to get the package namespace thing worked out. .. changelog:: :version: 0.1.1 :released: Tue Apr 10 2012 .. change:: :tags: :tickets: Fixed the configure_from_config() method of region and backend which wasn't working. Courtesy Christian Klinger. .. changelog:: :version: 0.1.0 :released: Sun Apr 08 2012 .. change:: :tags: :tickets: Initial release. .. change:: :tags: :tickets: Includes a pylibmc backend and a plain dictionary backend. dogpile.cache-0.9.0/docs/build/conf.py0000664000175000017500000001635113555610703020634 0ustar classicclassic00000000000000# -*- coding: utf-8 -*- # # Dogpile.cache documentation build configuration file, created by # sphinx-quickstart on Sat May 1 12:47:55 2010. # # This file is execfile()d with the current directory set to its containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. 
import os import sys # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.append(os.path.abspath('.')) # If your extensions are in another directory, add it here. If the directory # is relative to the documentation root, use os.path.abspath to make it # absolute, like shown here. sys.path.insert(0, os.path.abspath("../../")) if True: import dogpile # -- General configuration ----------------------------------------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = [ "sphinx.ext.autodoc", "sphinx.ext.intersphinx", "changelog", "sphinx_paramlinks", ] changelog_sections = ["feature", "bug"] changelog_render_ticket = ( "https://github.com/sqlalchemy/dogpile.cache/issues/%s" ) changelog_render_pullreq = ( "https://github.com/sqlalchemy/dogpile.cache/pull-request/%s" ) changelog_render_changeset = ( "https://github.com/sqlalchemy/dogpile.cache/commit/%s" ) # Add any paths that contain templates here, relative to this directory. templates_path = ["_templates"] # The suffix of source filenames. source_suffix = ".rst" # The encoding of source files. # source_encoding = 'utf-8' # The master toctree document. master_doc = "index" # General information about the project. project = u"dogpile.cache" copyright = u"2011-2019 Mike Bayer" # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. version = dogpile.__version__ # The full version, including alpha/beta/rc tags. release = "0.9.0" # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. # unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = [] # The reST default role (used for this markup: `text`) to use for all documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = "sphinx" # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # -- Options for HTML output --------------------------------------------------- # The theme to use for HTML and HTML Help pages. Major themes that come with # Sphinx are currently 'default' and 'sphinxdoc'. html_theme = "nature" html_style = "nature_override.css" # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. 
# html_theme_options = {}

# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []

# The name for this set of Sphinx documents.  If None, it defaults to
# "<project> v<release> documentation".
# html_title = None

# A shorter title for the navigation bar.  Default is the same as html_title.
# html_short_title = None

# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None

# The name of an image file (within the static path) to use as favicon of the
# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]

# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'

# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True

# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}

# Custom sidebar templates, maps document names to template names.
html_sidebars = {
    "**": [
        "site_custom_sidebars.html",
        "localtoc.html",
        "searchbox.html",
        "relations.html",
    ]
}

# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}

# If false, no module index is generated.
# html_use_modindex = True

# If false, no index is generated.
# html_use_index = True

# If true, the index is split into individual pages for each letter.
# html_split_index = False

# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True

# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it.  The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''

# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = ''

# Output file base name for HTML help builder.
htmlhelp_basename = "dogpile.cachedoc"

# -- Options for LaTeX output ------------------------------------------------

# The paper size ('letter' or 'a4').
# latex_paper_size = 'letter'

# The font size ('10pt', '11pt' or '12pt').
# latex_font_size = '10pt'

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
    (
        "index",
        "dogpile.cache.tex",
        u"Dogpile.Cache Documentation",
        u"Mike Bayer",
        "manual",
    )
]

# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None

# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False

# Additional stuff for the LaTeX preamble.
# latex_preamble = ''

# Documents to append as an appendix to all manuals.
# latex_appendices = []

# If false, no module index is generated.
# latex_use_modindex = True

# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'python': ('http://docs.python.org/3.2', None)}
dogpile.cache-0.9.0/docs/build/core_usage.rst0000664000175000017500000002375613555610667022213 0ustar classicclassic00000000000000============
dogpile Core
============

``dogpile`` provides a locking interface around a "value creation" and
"value retrieval" pair of functions.

.. versionchanged:: 0.6.0  The ``dogpile`` package encapsulates the
   functionality that was previously provided by the separate
   ``dogpile.core`` package.

The primary interface is the :class:`.Lock` object, which provides for the
invocation of the creation function by only one thread and/or process at a
time, deferring all other threads/processes to the "value retrieval" function
until the single creation thread is completed.

Do I Need to Learn the dogpile Core API Directly?
=================================================

It's anticipated that most users of ``dogpile`` will be using it indirectly
via the ``dogpile.cache`` caching front-end.  If you fall into this category,
then the short answer is no.

Using the core ``dogpile`` APIs described here directly implies you're
building your own resource-usage system outside, or in addition to, the one
``dogpile.cache`` provides.

Rudimentary Usage
=================

The primary API dogpile provides is the :class:`.Lock` object.  This object
allows for functions that provide mutexing, value creation, as well as value
retrieval.  An example usage is as follows::

    from dogpile import Lock, NeedRegenerationException
    import threading
    import time

    # store a reference to a "resource", some
    # object that is expensive to create.
    the_resource = [None]

    def some_creation_function():
        # call a value creation function
        value = create_some_resource()

        # get creationtime using time.time()
        creationtime = time.time()

        # keep track of the value and creation time in the "cache"
        the_resource[0] = tup = (value, creationtime)

        # return the tuple of (value, creationtime)
        return tup

    def retrieve_resource():
        # function that retrieves the resource and
        # creation time.

        # if no resource, then raise NeedRegenerationException
        if the_resource[0] is None:
            raise NeedRegenerationException()

        # else return the tuple of (value, creationtime)
        return the_resource[0]

    # a mutex, which here needs to be shared across all invocations
    # of this particular creation function
    mutex = threading.Lock()

    with Lock(mutex, some_creation_function, retrieve_resource, 3600) as value:
        # some function that uses
        # the resource.  Won't reach
        # here until some_creation_function()
        # has completed at least once.
        value.do_something()

Above, ``some_creation_function()`` will be called when :class:`.Lock` is
first invoked as a context manager.  The value returned by this function is
then passed into the ``with`` block, where it can be used by application
code.  Concurrent threads which call :class:`.Lock` during this initial
period will be blocked until ``some_creation_function()`` completes.

Once the creation function has completed successfully the first time, new
calls to :class:`.Lock` will call ``retrieve_resource()`` in order to get the
current cached value as well as its creation time; if the creation time is
older than the current time minus an expiration time of 3600, then
``some_creation_function()`` will be called again, but only by one
thread/process, using the given mutex object as a source of synchronization.
Concurrent threads/processes which call :class:`.Lock` during this period
will fall through, and not be blocked; instead, the "stale" value just
returned by ``retrieve_resource()`` will continue to be returned until the
creation function has finished.

The :class:`.Lock` API is designed to work with simple cache backends like
Memcached.  It addresses such issues as:

* Values can disappear from the cache at any time, before our expiration
  time is reached.  The :class:`.NeedRegenerationException` class is used to
  alert the :class:`.Lock` object that a value needs regeneration ahead of
  the usual expiration time.

* There's no function in a Memcached-like system to "check" for a key without
  actually retrieving it.  The usage of the ``retrieve_resource()`` function
  allows us to check for an existing key and also return the existing value,
  if any, at the same time, without the need for two separate round trips.

* The "creation" function used by :class:`.Lock` is expected to store the
  newly created value in the cache, as well as to return it.  This is also
  more efficient than using two separate round trips to separately store,
  and re-retrieve, the object.

.. _caching_decorator:

Example: Using dogpile directly for Caching
===========================================

The following example approximates Beaker's "cache decoration" function, to
decorate any function and store the value in Memcached.  Note that normally,
**we'd just use dogpile.cache here**, however for the purposes of example,
we'll illustrate how the :class:`.Lock` object is used directly.

We create a Python decorator function called ``cached()`` which will provide
caching for the output of a single function.  It's given the "key" which we'd
like to use in Memcached, and internally it makes usage of :class:`.Lock`,
along with a thread based mutex (we'll see a distributed mutex in the next
section)::

    import pylibmc
    import threading
    import time
    from dogpile import Lock, NeedRegenerationException

    mc_pool = pylibmc.ThreadMappedPool(pylibmc.Client("localhost"))

    def cached(key, expiration_time):
        """A decorator that will cache the return value of a function
        in memcached given a key."""

        mutex = threading.Lock()

        def get_value():
            with mc_pool.reserve() as mc:
                value_plus_time = mc.get(key)
                if value_plus_time is None:
                    raise NeedRegenerationException()
            # return a tuple (value, createdtime)
            return value_plus_time

        def decorate(fn):
            def gen_cached():
                value = fn()
                with mc_pool.reserve() as mc:
                    # create a tuple (value, createdtime)
                    value_plus_time = (value, time.time())
                    # memcached clients store values with set()
                    mc.set(key, value_plus_time)
                return value_plus_time

            def invoke():
                with Lock(mutex, gen_cached, get_value, expiration_time) as value:
                    return value

            return invoke

        return decorate

Using the above, we can decorate any function as::

    @cached("some key", 3600)
    def generate_my_expensive_value():
        return slow_database.lookup("stuff")

The :class:`.Lock` object will ensure that only one thread at a time performs
``slow_database.lookup()``, and only every 3600 seconds, unless Memcached has
removed the value, in which case it will be called again as needed.

In particular, dogpile.core's system allows us to call the memcached get()
function at most once per access, unlike Beaker's system, which calls it
twice; nor does it make us call get() when we just created the value.

For the mutex object, we keep a ``threading.Lock`` object that's local to the
decorated function, rather than using a global lock.  This localizes the
in-process locking to be local to this one decorated function.
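
To see the coordination in action, below is a hypothetical demonstration (a
sketch only - it assumes a running Memcached instance along with the
``cached()`` decorator and the imports from the example above).  Several
threads invoke the decorated function at once; only one of them should run
the slow creation function, with the rest blocking until the value is
available::

    @cached("demo_key", 3600)
    def expensive_value():
        # stands in for a slow database or service call
        time.sleep(2)
        return "the value"

    results = []

    def worker():
        results.append(expensive_value())

    threads = [threading.Thread(target=worker) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # all five threads receive the same value; the two-second creation
    # function ran only once
    assert results == ["the value"] * 5
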
In the next section, we'll see the usage of a cross-process lock that
accomplishes this differently.

Using a File or Distributed Lock with Dogpile
=============================================

The examples thus far use a ``threading.Lock()`` object for synchronization.
If our application uses multiple processes, we will want to coordinate
creation operations not just on threads, but on some mutex that other
processes can access.

In this example we'll use a file-based lock as provided by the `lockfile
<https://pypi.python.org/pypi/lockfile>`_ package, which uses a unix-symlink
concept to provide a filesystem-level lock (which also has been made
threadsafe).  Another strategy may base itself directly off the Unix
``os.flock()`` call, or use an NFS-safe file lock like `flufl.lock
<https://pypi.python.org/pypi/flufl.lock>`_, and still another approach is to
lock against a cache server, using a recipe such as that described at "Using
Memcached as a Distributed Locking Service".

What all of these locking schemes have in common is that unlike the Python
``threading.Lock`` object, they all need access to an actual key which acts
as the symbol that all processes will coordinate upon.  So here, we will also
need to create the "mutex" which we pass to :class:`.Lock` using the ``key``
argument::

    import lockfile
    import os
    from hashlib import sha1

    # ... other imports and setup from the previous example

    def cached(key, expiration_time):
        """A decorator that will cache the return value of a function
        in memcached given a key."""

        # sha1() requires bytes, so encode the key first
        lock_path = os.path.join(
            "/tmp", "%s.lock" % sha1(key.encode("utf-8")).hexdigest()
        )

        # ... get_value() from the previous example goes here

        def decorate(fn):
            # ... gen_cached() from the previous example goes here

            def invoke():
                # create an ad-hoc FileLock
                mutex = lockfile.FileLock(lock_path)

                with Lock(mutex, gen_cached, get_value, expiration_time) as value:
                    return value

            return invoke

        return decorate

For a given key "some_key", we generate a hex digest of the key, then use
``lockfile.FileLock()`` to create a lock against the file
``/tmp/53def077a4264bd3183d4eb21b1f56f883e1b572.lock``.  Any number of
:class:`.Lock` objects in various processes will now coordinate with each
other, using this common filename as the "baton" against which creation of a
new value proceeds.

Unlike when we used ``threading.Lock``, the file lock is ultimately locking
on a file, so multiple instances of ``FileLock()`` will all coordinate on
that same file - it's often the case that file locks that rely upon
``flock()`` require non-threaded usage, so a unique filesystem lock per
thread is often a good idea in any case.
dogpile.cache-0.9.0/docs/build/front.rst0000664000175000017500000000140113555610667021216 0ustar classicclassic00000000000000============
Front Matter
============

Information about the dogpile.cache project.

Project Homepage
================

dogpile.cache is hosted on GitHub at
https://github.com/sqlalchemy/dogpile.cache.

Releases and project status are available on PyPI at
https://pypi.python.org/pypi/dogpile.cache.

The most recent published version of this documentation should be at
https://dogpilecache.sqlalchemy.org.

Installation
============

Install released versions of dogpile.cache from the Python package index with
`pip <https://pypi.python.org/pypi/pip>`_ or a similar tool::

    pip install dogpile.cache

Bugs
====

Bugs and feature enhancements to dogpile.cache should be reported on the
`GitHub issue tracker
<https://github.com/sqlalchemy/dogpile.cache/issues>`_.
dogpile.cache-0.9.0/docs/build/index.rst0000664000175000017500000000227713555610667021205 0ustar classicclassic00000000000000==========================================
Welcome to dogpile.cache's documentation!
==========================================

Dogpile consists of two subsystems, one building on top of the other.

``dogpile`` provides the concept of a "dogpile lock", a control structure
which allows a single thread of execution to be selected as the "creator" of
some resource, while allowing other threads of execution to refer to the
previous version of this resource as the creation proceeds; if there is no
previous version, then those threads block until the object is available.

``dogpile.cache`` is a caching API which provides a generic interface to
caching backends of any variety, and additionally provides API hooks which
integrate these cache backends with the locking mechanism of ``dogpile``.

New backends are very easy to create and use; users are encouraged to adapt
the provided backends for their own needs, as high volume caching requires
lots of tweaks and adjustments specific to an application and its
environment.

.. toctree::
    :maxdepth: 2

    front
    usage
    recipes
    core_usage
    api
    changelog

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
dogpile.cache-0.9.0/docs/build/recipes.rst0000664000175000017500000002212013555610667021521 0ustar classicclassic00000000000000Recipes
=======

Invalidating a group of related keys
------------------------------------

This recipe presents a way to track the cache keys related to a particular
region, for the purposes of invalidating a series of keys that relate to a
particular id.

Three cached functions, ``user_fn_one()``, ``user_fn_two()``, and
``user_fn_three()``, each perform a different function based on a ``user_id``
integer value.  The region applied to cache them uses a custom key generator
which tracks each cache key generated, pulling out the integer "id" and
replacing it with a template.

When all three functions have been called, the key generator is now aware of
these three keys:  ``user_fn_one_%d``, ``user_fn_two_%d``, and
``user_fn_three_%d``.  The ``invalidate_user_id()`` function then knows that
for a particular ``user_id``, it needs to hit all three of those keys in
order to invalidate everything having to do with that id.

::

    from dogpile.cache import make_region
    from itertools import count

    user_keys = set()

    def my_key_generator(namespace, fn):
        fname = fn.__name__

        def generate_key(*arg):
            # generate a key template:
            # "fname_%d_arg1_arg2_arg3..."
            key_template = "_".join(
                [fname, "%d"] + [str(s) for s in arg[1:]]
            )

            # store key template
            user_keys.add(key_template)

            # return cache key
            user_id = arg[0]
            return key_template % user_id

        return generate_key

    def invalidate_user_id(region, user_id):
        for key in user_keys:
            region.delete(key % user_id)

    region = make_region(
        function_key_generator=my_key_generator
    ).configure(
        "dogpile.cache.memory"
    )

    counter = count()

    @region.cache_on_arguments()
    def user_fn_one(user_id):
        return "user fn one: %d, %d" % (next(counter), user_id)

    @region.cache_on_arguments()
    def user_fn_two(user_id):
        return "user fn two: %d, %d" % (next(counter), user_id)

    @region.cache_on_arguments()
    def user_fn_three(user_id):
        return "user fn three: %d, %d" % (next(counter), user_id)

    print(user_fn_one(5))
    print(user_fn_two(5))
    print(user_fn_three(7))
    print(user_fn_two(7))

    invalidate_user_id(region, 5)
    print("invalidated:")
    print(user_fn_one(5))
    print(user_fn_two(5))
    print(user_fn_three(7))
    print(user_fn_two(7))

Asynchronous Data Updates with ORM Events
-----------------------------------------

This recipe presents one technique of optimistically pushing new data into
the cache when an update is sent to a database.

Using SQLAlchemy for database querying, suppose a simple cache-decorated
function returns the results of a database query::

    @region.cache_on_arguments()
    def get_some_data(argument):
        # query database to get data
        data = Session().query(DBClass).filter(DBClass.argument == argument).all()
        return data

We would like this particular function to be re-queried when the data has
changed.  We could call ``get_some_data.invalidate(argument, hard=False)`` at
the point at which the data changes, however this only leads to the
invalidation of the old value; a new value is not generated until the next
call, and also means at least one client has to block while the new value is
generated.

We could also call ``get_some_data.refresh(argument)``, which would perform
the data refresh at that moment, but then the writer is delayed by the
re-query.

A third variant is to instead offload the work of refreshing for this query
into a background thread or process.  This can be achieved using a system
such as the :paramref:`.CacheRegion.async_creation_runner`.  However, an
expedient approach for smaller use cases is to link cache refresh operations
to the ORM session's commit, as below::

    from threading import Thread

    from sqlalchemy import event
    from sqlalchemy.orm import Session

    def cache_refresh(session, refresher, *args, **kwargs):
        """
        Refresh the function's cached data in a new thread.

        Starts refreshing only after the session was committed so all
        database data is available.
        """
        assert isinstance(session, Session), \
            "Need a session, not a sessionmaker or scoped_session"

        @event.listens_for(session, "after_commit")
        def do_refresh(session):
            t = Thread(target=refresher, args=args, kwargs=kwargs)
            t.daemon = True
            t.start()

Within a sequence of data persistence, ``cache_refresh`` can be called given
a particular SQLAlchemy ``Session`` and a callable to do the work::

    def add_new_data(session, argument):
        # add some data
        session.add(something_new(argument))

        # add a hook to refresh after the Session is committed.
        cache_refresh(session, get_some_data.refresh, argument)

Note that the event to refresh the data is associated with the ``Session``
being used for persistence; however, the actual refresh operation is called
with a **different** ``Session``, typically one that is local to the refresh
operation, either through a thread-local registry or via direct
instantiation.

Prefixing all keys in Redis
---------------------------

If you use a Redis instance as a backend that contains other keys besides
the ones set by dogpile.cache, it is a good idea to uniquely prefix all
dogpile.cache keys, to avoid potential collisions with keys set by your own
code.  This can easily be done using a key mangler function::

    from dogpile.cache import make_region

    region = make_region(
        key_mangler=lambda key: "myapp:dogpile:" + key
    )

Encoding/Decoding data into another format
------------------------------------------

.. sidebar:: A Note on Data Encoding

    Under the hood, dogpile.cache wraps cached data in an instance of
    ``dogpile.cache.api.CachedValue`` and then pickles that data for storage
    along with some bookkeeping metadata.  If you implement a ProxyBackend
    to encode/decode data, that transformation will happen on the
    pre-pickled data - dogpile does not store the data 'raw' and will still
    pass a pickled payload to the backend.  This behavior can negate the
    hoped-for improvements of some encoding schemes.

Since dogpile is managing cached data, you may be concerned with the size of
your payloads.
A possible method of helping minimize payloads is to use a ProxyBackend to
recode the data on-the-fly or otherwise transform data as it enters or
leaves persistent storage.

In the example below, we define two classes to implement msgpack encoding.
Msgpack (http://msgpack.org/) is a serialization format that works
exceptionally well with json-like data and can serialize nested dicts into a
much smaller payload than Python's own pickle.

``_EncodedProxy`` is our base class for building data encoders, and inherits
from dogpile's own `ProxyBackend`.  You could just use one class.  This
class passes four of the main `key/value` functions into a configurable
decoder and encoder.

The ``MsgpackProxy`` class simply inherits from ``_EncodedProxy`` and
implements the necessary ``value_decode`` and ``value_encode`` functions.

Encoded ProxyBackend Example::

    from dogpile.cache.api import CachedValue, NO_VALUE
    from dogpile.cache.proxy import ProxyBackend
    import msgpack

    class _EncodedProxy(ProxyBackend):
        """base class for building value-mangling proxies"""

        def value_decode(self, value):
            raise NotImplementedError("override me")

        def value_encode(self, value):
            raise NotImplementedError("override me")

        def set(self, k, v):
            v = self.value_encode(v)
            self.proxied.set(k, v)

        def get(self, key):
            v = self.proxied.get(key)
            return self.value_decode(v)

        def set_multi(self, mapping):
            """encode to a new dict to preserve unencoded values
            in-place when called by `get_or_create_multi`
            """
            mapping_set = {}
            for (k, v) in mapping.items():
                mapping_set[k] = self.value_encode(v)
            return self.proxied.set_multi(mapping_set)

        def get_multi(self, keys):
            results = self.proxied.get_multi(keys)
            return [self.value_decode(record) for record in results]

    class MsgpackProxy(_EncodedProxy):
        """custom decode/encode for value mangling"""

        def value_decode(self, v):
            if not v or v is NO_VALUE:
                return NO_VALUE
            # you probably want to specify a custom decoder via `object_hook`
            v = msgpack.unpackb(v, raw=False)
            return CachedValue(*v)

        def value_encode(self, v):
            # you probably want to specify a custom encoder via `default`
            return msgpack.packb(v, use_bin_type=True)

    # extend our region configuration from above with a 'wrap'
    region = make_region().configure(
        'dogpile.cache.pylibmc',
        expiration_time = 3600,
        arguments = {
            'url': ["127.0.0.1"],
        },
        wrap = [MsgpackProxy, ]
    )
dogpile.cache-0.9.0/docs/build/requirements.txt0000664000175000017500000000025013555610667022621 0ustar classicclassic00000000000000git+https://github.com/sqlalchemyorg/changelog.git#egg=changelog
git+https://github.com/sqlalchemyorg/sphinx-paramlinks.git#egg=sphinx-paramlinks
sphinx
mako
decorator
dogpile.cache-0.9.0/docs/build/unreleased/0000775000175000017500000000000013555610710021454 5ustar classicclassic00000000000000dogpile.cache-0.9.0/docs/build/unreleased/README.txt0000664000175000017500000000063213555610667023166 0ustar classicclassic00000000000000Individual per-changelog files go here in .rst format, which are pulled in
by changelog (version 0.4.0 or higher) to be rendered into the changelog.rst
file.

At release time, the files here are removed and written directly into the
changelog.

The rationale is that multiple changes being merged into gerrit don't
produce conflicts.  Note that gerrit does not support custom merge handlers
unlike git itself.
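
For illustration only, a hypothetical file such as unreleased/123.rst (the
number typically referring to the ticket being resolved) would contain a
single entry using the same directive format as changelog.rst, e.g.:

    .. change::
        :tags: bug
        :tickets: 123

        Description of the fix, written in the same style as the entries
        in changelog.rst.
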
dogpile.cache-0.9.0/docs/build/usage.rst0000664000175000017500000002655113555610667021177 0ustar classicclassic00000000000000===========
Usage Guide
===========

Overview
========

At the time of this writing, popular key/value servers include `Memcached
<http://memcached.org>`_, `Redis <http://redis.io>`_ and many others.  While
these tools all have different usage focuses, they all share a storage model
based on the retrieval of a value by key; as such, they are all potentially
suitable for caching, particularly Memcached, which is first and foremost
designed for caching.

With a caching system in mind, dogpile.cache provides an interface to a
particular Python API targeted at that system.

A dogpile.cache configuration consists of the following components:

* A *region*, which is an instance of :class:`.CacheRegion`, and defines the
  configuration details for a particular cache backend.  The
  :class:`.CacheRegion` can be considered the "front end" used by
  applications.

* A *backend*, which is an instance of :class:`.CacheBackend`, describing
  how values are stored and retrieved from a backend.  This interface
  specifies only :meth:`~.CacheBackend.get`, :meth:`~.CacheBackend.set` and
  :meth:`~.CacheBackend.delete`.  The actual kind of :class:`.CacheBackend`
  in use for a particular :class:`.CacheRegion` is determined by the
  underlying Python API being used to talk to the cache, such as pylibmc.
  The :class:`.CacheBackend` is instantiated behind the scenes and not
  directly accessed by applications under normal circumstances.

* Value generation functions.  These are user-defined functions that
  generate new values to be placed in the cache.  While dogpile.cache offers
  the usual "set" approach of placing data into the cache, the usual mode of
  usage is to only instruct it to "get" a value, passing it a *creation
  function* which will be used to generate a new value if and only if one is
  needed.  This "get-or-create" pattern is the entire key to the "Dogpile"
  system, which coordinates a single value creation operation among many
  concurrent get operations for a particular key, eliminating the issue of
  an expired value being redundantly re-generated by many workers
  simultaneously.

Rudimentary Usage
=================

dogpile.cache includes a Pylibmc backend.  A basic configuration looks
like::

    from dogpile.cache import make_region

    region = make_region().configure(
        'dogpile.cache.pylibmc',
        expiration_time = 3600,
        arguments = {
            'url': ["127.0.0.1"],
        }
    )

    @region.cache_on_arguments()
    def load_user_info(user_id):
        return some_database.lookup_user_by_id(user_id)

.. sidebar:: pylibmc

    In this section, we're illustrating Memcached usage using the `pylibmc
    <http://sendapatch.se/projects/pylibmc/>`_ backend, which is a high
    performing Python library for Memcached.  It can be compared to the
    `python-memcached <https://pypi.python.org/pypi/python-memcached>`_
    client, which is also an excellent product.  Pylibmc is written against
    Memcached's native API so is markedly faster, though might be considered
    to have rougher edges.  The API is actually a bit more verbose to allow
    for correct multithreaded usage.

Above, we create a :class:`.CacheRegion` using the :func:`.make_region`
function, then apply the backend configuration via the
:meth:`.CacheRegion.configure` method, which returns the region.  The name
of the backend is the only argument required by :meth:`.CacheRegion.configure`
itself, in this case ``dogpile.cache.pylibmc``.  However, in this specific
case, the ``pylibmc`` backend also requires that the URL of the memcached
server be passed within the ``arguments`` dictionary.

The configuration is separated into two sections.
Upon construction via :func:`.make_region`, the :class:`.CacheRegion` object
is available, typically at module import time, for usage in decorating
functions.  Additional configuration details passed to
:meth:`.CacheRegion.configure` are typically loaded from a configuration
file and therefore not necessarily available until runtime, hence the
two-step configuration process.

Key arguments passed to :meth:`.CacheRegion.configure` include
*expiration_time*, which is the expiration time passed to the Dogpile lock,
and *arguments*, which are arguments used directly by the backend - in this
case we are using arguments that are passed directly to the pylibmc module.

Region Configuration
====================

The :func:`.make_region` function currently calls the :class:`.CacheRegion`
constructor directly.

.. autoclass:: dogpile.cache.region.CacheRegion
    :noindex:

Once you have a :class:`.CacheRegion`, the
:meth:`.CacheRegion.cache_on_arguments` method can be used to decorate
functions, but the cache itself can't be used until
:meth:`.CacheRegion.configure` is called.  The interface for that method is
as follows:

.. automethod:: dogpile.cache.region.CacheRegion.configure
    :noindex:

The :class:`.CacheRegion` can also be configured from a dictionary, using
the :meth:`.CacheRegion.configure_from_config` method:

.. automethod:: dogpile.cache.region.CacheRegion.configure_from_config
    :noindex:

Using a Region
==============

The :class:`.CacheRegion` object is our front-end interface to a cache.  It
includes the following methods:

.. automethod:: dogpile.cache.region.CacheRegion.get
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.get_or_create
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.set
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.delete
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.cache_on_arguments
    :noindex:

.. _creating_backends:

Creating Backends
=================

Backends are located using the setuptools entrypoint system.  To make life
easier for writers of ad-hoc backends, a helper function is included which
registers any backend in the same way as if it were part of the existing
sys.path.

For example, to create a backend called ``DictionaryBackend``, we subclass
:class:`.CacheBackend`::

    from dogpile.cache.api import CacheBackend, NO_VALUE

    class DictionaryBackend(CacheBackend):
        def __init__(self, arguments):
            self.cache = {}

        def get(self, key):
            return self.cache.get(key, NO_VALUE)

        def set(self, key, value):
            self.cache[key] = value

        def delete(self, key):
            self.cache.pop(key)

Then make sure the class is available underneath the entrypoint
``dogpile.cache``.  If we did this in a ``setup.py`` file, it would be in
``setup()`` as::

    entry_points="""
      [dogpile.cache]
      dictionary = mypackage.mybackend:DictionaryBackend
      """

Alternatively, if we want to register the plugin in the same process space
without bothering to install anything, we can use ``register_backend``::

    from dogpile.cache import register_backend

    register_backend("dictionary", "mypackage.mybackend", "DictionaryBackend")

Our new backend would be usable in a region like this::

    from dogpile.cache import make_region

    region = make_region("myregion")

    region.configure("dictionary")

    region.set("somekey", "somevalue")
    data = region.get("somekey")

The values we receive for the backend here are instances of ``CachedValue``.
Using a Region
==============

The :class:`.CacheRegion` object is our front-end interface to a cache.  It includes the following methods:

.. automethod:: dogpile.cache.region.CacheRegion.get
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.get_or_create
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.set
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.delete
    :noindex:

.. automethod:: dogpile.cache.region.CacheRegion.cache_on_arguments
    :noindex:

.. _creating_backends:

Creating Backends
=================

Backends are located using the setuptools entrypoint system.  To make life easier for writers of ad-hoc backends, a helper function is included which registers any backend in the same way as if it were part of the existing sys.path.

For example, to create a backend called ``DictionaryBackend``, we subclass :class:`.CacheBackend`::

    from dogpile.cache.api import CacheBackend, NO_VALUE

    class DictionaryBackend(CacheBackend):
        def __init__(self, arguments):
            self.cache = {}

        def get(self, key):
            return self.cache.get(key, NO_VALUE)

        def set(self, key, value):
            self.cache[key] = value

        def delete(self, key):
            self.cache.pop(key)

Then make sure the class is available underneath the entrypoint ``dogpile.cache``.  If we did this in a ``setup.py`` file, it would be in ``setup()`` as::

    entry_points="""
      [dogpile.cache]
      dictionary = mypackage.mybackend:DictionaryBackend
      """

Alternatively, if we want to register the plugin in the same process space without bothering to install anything, we can use ``register_backend``::

    from dogpile.cache import register_backend

    register_backend("dictionary", "mypackage.mybackend", "DictionaryBackend")

Our new backend would be usable in a region like this::

    from dogpile.cache import make_region

    region = make_region("myregion")

    region.configure("dictionary")

    region.set("somekey", "somevalue")
    data = region.get("somekey")

The values we receive for the backend here are instances of ``CachedValue``.  This is a tuple subclass of length two, of the form::

    (payload, metadata)

Where "payload" is the thing being cached, and "metadata" is information we store in the cache - a dictionary which currently has just the "creation time" and a "version identifier" as key/values.  If the cache backend requires serialization, pickle or similar can be used on the tuple - the "metadata" portion will always be a small and easily serializable Python structure.

.. _changing_backend_behavior:

Changing Backend Behavior
=========================

The :class:`.ProxyBackend` is a decorator class provided to easily augment existing backend behavior without having to extend the original class.  Using a decorator class is also advantageous as it allows us to share the altered behavior between different backends.

Proxies are added to the :class:`.CacheRegion` object using the :meth:`.CacheRegion.configure` method.  Only the overridden methods need to be specified and the real backend can be accessed with the ``self.proxied`` object from inside the :class:`.ProxyBackend`.

For example, a simple class to log all calls to ``.set()`` would look like this::

    from dogpile.cache.proxy import ProxyBackend

    import logging
    log = logging.getLogger(__name__)

    class LoggingProxy(ProxyBackend):
        def set(self, key, value):
            log.debug('Setting Cache Key: %s' % key)
            self.proxied.set(key, value)

:class:`.ProxyBackend` can be configured to optionally take arguments (as long as the :meth:`.ProxyBackend.__init__` method is called properly, either directly or via ``super()``).  In the example below, the ``RetryDeleteProxy`` class accepts a ``retry_count`` parameter on initialization.  In the event of an exception on delete(), it will retry this many times before returning::

    from dogpile.cache.proxy import ProxyBackend

    class RetryDeleteProxy(ProxyBackend):
        def __init__(self, retry_count=5):
            super(RetryDeleteProxy, self).__init__()
            self.retry_count = retry_count

        def delete(self, key):
            retries = self.retry_count
            while retries > 0:
                retries -= 1
                try:
                    self.proxied.delete(key)
                    return
                except Exception:
                    pass

The ``wrap`` parameter of the :meth:`.CacheRegion.configure` method accepts a list which can contain any combination of instantiated proxy objects as well as uninstantiated proxy classes.  Putting the two examples above together would look like this::

    from dogpile.cache import make_region

    retry_proxy = RetryDeleteProxy(5)

    region = make_region().configure(
        'dogpile.cache.pylibmc',
        expiration_time = 3600,
        arguments = {
            'url':["127.0.0.1"],
        },
        wrap = [ LoggingProxy, retry_proxy ]
    )

In the above example, the ``LoggingProxy`` object would be instantiated by the :class:`.CacheRegion` and applied to wrap requests on behalf of the ``retry_proxy`` instance; that proxy in turn wraps requests on behalf of the original dogpile.cache.pylibmc backend.

.. versionadded:: 0.4.4
    Added support for the :class:`.ProxyBackend` class.
Configuring Logging
====================

.. versionadded:: 0.9.0

:class:`.CacheRegion` includes logging facilities that will emit debug log messages when key cache events occur, including when keys are regenerated as well as when hard invalidations occur.

Using the Python ``logging`` module, set the log level of the ``dogpile.cache`` logger to ``logging.DEBUG``::

    import logging

    logging.basicConfig()
    logging.getLogger("dogpile.cache").setLevel(logging.DEBUG)

Debug logging will indicate time spent regenerating keys as well as when keys are missing::

    DEBUG:dogpile.cache.region:No value present for key: '__main__:load_user_info|2'
    DEBUG:dogpile.cache.region:No value present for key: '__main__:load_user_info|1'
    DEBUG:dogpile.cache.region:Cache value generated in 0.501 seconds for keys: ['__main__:load_user_info|2', '__main__:load_user_info|3', '__main__:load_user_info|4', '__main__:load_user_info|5']
    DEBUG:dogpile.cache.region:Hard invalidation detected for key: '__main__:load_user_info|3'
    DEBUG:dogpile.cache.region:Hard invalidation detected for key: '__main__:load_user_info|2'
dogpile.cache-0.9.0/docs/changelog.html0000664000175000017500000030020313555610710021041 0ustar classicclassic00000000000000 Changelog — dogpile.cache 0.9.0 documentation

Changelog

0.9.0

Released: Mon Oct 28 2019

feature

  • [feature]

    Added logging facilities into CacheRegion, to indicate key events such as cache keys missing or regeneration of values. As these can be very high volume log messages, logging.DEBUG is used as the log level for the events. Pull request courtesy Stéphane Brunner.

0.8.0

Released: Fri Sep 20 2019

bug

  • [bug] [setup]

    Removed the “python setup.py test” feature in favor of a straight run of “tox”. Per PyPA / pytest developers, “setup.py” commands are in general headed towards deprecation in favor of tox. The tox.ini script has been updated such that running “tox” with no arguments will perform a single run of the test suite against the default installed Python interpreter.

    References: #157

  • [bug] [py3k]

    Replaced the Python compatibility routines for getfullargspec() with a fully vendored version from Python 3.3. Originally, Python was emitting deprecation warnings for this function in Python 3.8 alphas. While this change was reverted, it was observed that Python 3 implementations for getfullargspec() are an order of magnitude slower as of the 3.4 series where it was rewritten against Signature. While Python plans to improve upon this situation, SQLAlchemy projects for now are using a simple replacement to avoid any future issues.

    References: #154

  • [bug] [installation]

    Pinned minimum version of Python decorator module at 4.0.0 (July, 2015) as previous versions don’t provide the API that dogpile is using.

    References: #160

  • [bug] [py3k]

    Fixed the sha1_mangle_key() key mangler to coerce incoming Unicode objects into bytes as is required by the Py3k version of this function.

    References: #159

0.7.1

Released: Tue Dec 11 2018

bug

0.7.0

Released: Mon Dec 10 2018

bug

  • [bug]

    The decorator module is now used when creating function decorators within CacheRegion.cache_on_arguments() and CacheRegion.cache_multi_on_arguments() so that function signatures are preserved. Pull request courtesy ankitpatel96.

    Additionally adds a small performance enhancement which is to avoid internally creating a @wraps() decorator for the creator function on every get operation, by allowing the arguments to the creator be passed separately to CacheRegion.get_or_create().

    References: #137

  • [bug] [py3k]

    Fixed all Python 3.x deprecation warnings including inspect.getargspec().

    References: #129

0.6.8

Released: Sat Nov 24 2018

0.6.7

Released: Thu Jul 26 2018

bug

  • [bug]

    Fixed issue in the CacheRegion.get_or_create_multi() method which was erroneously considering the cached value as the timestamp field if the CacheRegion.invalidate() method had been used, usually causing a TypeError to occur, or in less frequent cases an invalid result for whether or not the cached value was invalid, leading to excessive caching or regeneration. The issue was a regression caused by an implementation issue in the pluggable invalidation feature added in #38.

    References: #128

0.6.6

Released: Wed Jun 27 2018

feature

  • [feature]

    Added method CacheRegion.actual_backend which calculates and caches the actual backend for the region, which may be abstracted by the use of one or more ProxyBackend subclasses.

    References: #123

bug

  • [bug]

    Fixed a condition in the Lock where the “get” function could be called a second time unnecessarily, when returning an existing, expired value from the cache.

    References: #122

0.6.5

Released: Mon Mar 5 2018

bug

  • [bug]

    Fixed import issue for Python 3.7 where several variables were named “async”, leading to syntax errors. Pull request courtesy Brian Sheldon.

    References: #119

0.6.4

Released: Mon Jun 26, 2017

bug

  • [bug]

    The method Region.get_or_create_multi() will no longer pass an empty dictionary to the cache backend if no values are ultimately to be stored, based on the use of the Region.get_or_create_multi.should_cache_fn function. This empty dictionary is unnecessary and can cause API problems for backends like that of Redis. Pull request courtesy Tobias Sauerwein.

  • [bug]

    The api.NO_VALUE constant now has a fixed __repr__() output, so that scenarios where this constant’s string value ends up being used as a cache key do not create multiple values. Pull request courtesy Paul Brown.

  • [bug]

    A new exception class exception.PluginNotFound is now raised when a particular cache plugin class cannot be located either as a setuptools entrypoint or as a registered backend. Previously, a plain Exception was thrown. Pull request courtesy Jamie Lennox.

0.6.3

Released: Thu May 18, 2017

feature

0.6.2

Released: Tue Aug 16 2016

feature

  • [feature]

    Added a new system to allow custom plugins specific to the issue of “invalidate the entire region”, using a new base class RegionInvalidationStrategy. As there are many potential strategies to this (special backend function, storing special keys, etc.) the mechanism for both soft and hard invalidation is now customizable. New approaches to region invalidation can be contributed as documented recipes. Pull request courtesy Alexander Makarov.

    References: #38

  • [feature]

    Added a new cache key generator kwarg_function_key_generator(), which takes keyword arguments as well as positional arguments into account when forming the cache key.

    References: #43

bug

  • [bug]

    Restored some more util symbols that users may have been relying upon (although these were not necessarily intended as user-facing): dogpile.cache.util.coerce_string_conf, dogpile.cache.util.KeyReentrantMutex, dogpile.cache.util.memoized_property, dogpile.cache.util.PluginLoader, dogpile.cache.util.to_list.

0.6.1

Released: Mon Jun 6 2016

bug

  • [bug]

    Fixed imports for dogpile.core restoring ReadWriteMutex and NameRegistry into the base namespace, in addition to dogpile.core.nameregistry and dogpile.core.readwrite_lock.

    References: #99

0.6.0

Released: Mon Jun 6 2016

feature

  • [feature]

    The dogpile.core library has been rolled in as part of the dogpile.cache distribution. The configuration of the dogpile name as a namespace package is also removed from dogpile.cache. In order to allow existing installations of dogpile.core as a separate package to remain unaffected, the .core package has been retired within dogpile.cache directly; the Lock class is now available directly as dogpile.Lock and the additional dogpile.core constructs are under the dogpile.util namespace.

    Additionally, the long-deprecated dogpile.core.Dogpile and dogpile.core.SyncReaderDogpile classes have been removed.

    References: #91

bug

  • [bug]

    The Redis backend now creates a copy of the “arguments” dictionary passed to it, before popping values out of it. This prevents the given dictionary from losing its keys.

  • [bug]

    Fixed bug in “null” backend where NullLock did not accept a flag for the NullLock.acquire() method, nor did it return a boolean value for “success”.

    References: #97

0.5.7

Released: Mon Oct 19 2015

feature

bug

  • [bug] [redis]

    Repaired the CacheRegion.get_multi() method when used with a list of zero length against the redis backend.

    References: #74

0.5.6

Released: Mon Feb 2 2015

feature

  • [feature]

    Changed the pickle protocol for the file/DBM backend to pickle.HIGHEST_PROTOCOL when producing new pickles, to match that of the redis and memorypickle backends. Pull request courtesy anentropic.

    References: pull request 30

0.5.5

Released: Wed Jan 21 2015

feature

bug

0.5.4

Released: Sat Jun 14 2014

feature

bug

  • [bug]

    Added missing Mako test dependency to setup.py. Pull request courtesy Wichert Akkerman.

    References: pull request 19

  • [bug]

    Fixed bug where calling CacheRegion.get_multi() or CacheRegion.set_multi() with an empty list would cause failures based on backend. Pull request courtesy Wichert Akkerman.

    References: #58, pull request 20

  • [bug]

    Repaired the entry point for Mako templates; the name of the entrypoint itself was wrong vs. what was in the docs, but beyond that the entrypoint would load the wrong module name. Pull request courtesy zoomorph.

    References: pull request 15

  • [bug]

    The coerce_string_conf() function, which is used by Region.configure_from_config(), will now recognize floating point values when parsing conf strings and deliver them as such; this supports non-integer values such as Redis lock_sleep. Pullreq courtesy Jeff Dairiki.

    References: #57, pull request 13

0.5.3

Released: Wed Jan 8 2014

feature

bug

  • [bug]

    Fixed bug where the key_mangler would get in the way of usage of the async_creation_runner feature within the Region.get_or_create() method, by sending in the mangled key instead of the original key. The “mangled” key is only supposed to be exposed within the backend storage, not the creation function which sends the key back into the Region.set(), which does the mangling itself. Pull request courtesy Ryan Kolak.

    References: pull request 10

  • [bug] [py3k]

    Fixed bug where the Region.get_multi() method wasn’t calling the backend correctly in Py3K (e.g. was passing a destructive map() object) which would cause this method to fail on the memcached backend.

  • [bug]

    Fixed a format call in the redis backend which would otherwise fail on Python 2.6; courtesy Jeff Dairiki.

    References: pull request 9

0.5.2

Released: Fri Nov 15 2013

feature

  • [feature]

    Added a new argument lock_factory to the DBMBackend implementation. This allows for drop-in replacement of the default FileLock backend, which builds on os.flock() and only supports Unix platforms. A new abstract base AbstractFileLock has been added to provide a common base for custom lock implementations. The documentation points to an example thread-based rw lock which is now tested on Windows.

    References: #44

bug

  • [bug]

    Fixes to routines on Windows, including that default unit tests pass, and an adjustment to the “soft expiration” feature to ensure the expiration works given Windows time.time() behavior.

  • [bug]

    Added py2.6 compatibility for unsupported total_seconds() call in region.py

0.5.1

Released: Thu Oct 10 2013

feature

  • [feature]

    The CacheRegion.invalidate() method now supports an option hard=True|False. A “hard” invalidation, equivalent to the existing functionality of CacheRegion.invalidate(), means CacheRegion.get_or_create() will not return the “old” value at all, forcing all getters to regenerate or wait for a regeneration. “soft” invalidation means that getters can continue to return the old value until a new one is generated.

    References: #38

  • [feature]

    New dogpile-specific exception classes have been added, so that issues like “region already configured”, “region unconfigured”, raise dogpile-specific exceptions. Other exception classes have been made more specific. Also added new accessor CacheRegion.is_configured. Pullreq courtesy Morgan Fainberg.

    References: #40

bug

  • [bug]

    Erroneously missed when the same change was made for set() in 0.5.0, the Redis backend now uses pickle.HIGHEST_PROTOCOL for the set_multi() method as well when producing pickles. Courtesy Łukasz Fidosz.

  • [bug] [py3k] [redis]

    Fixed an errant u'' causing incompatibility in Python3.2 in the Redis backend, courtesy Jimmey Mabey.

    References: #39

  • [bug]

    The util.coerce_string_conf() method now correctly coerces negative integers and those with a leading + sign. This previously prevented configuring a CacheRegion with an expiration_time of '-1'. Courtesy David Beitey.

  • [bug]

    The refresh() method on CacheRegion.cache_multi_on_arguments() now supports the asdict flag.

0.5.0

Released: Fri Jun 21 2013

feature

bug

  • [bug]

    The Redis backend now uses pickle.HIGHEST_PROTOCOL when producing pickles. Courtesy Lx Yu.

  • [bug]

    CacheRegion.cache_on_arguments() now has a new argument to_str, defaults to str(). Can be replaced with unicode() or other functions to support caching of functions that accept non-unicode arguments. Initial patch courtesy Lx Yu.

  • [bug]

    Fixed bug where the “name” parameter for CacheRegion was ignored entirely. Courtesy Wichert Akkerman.

    References: #27

misc

  • [misc]

    Source repository has been moved to git.

0.4.3

Released: Thu Apr 4 2013

feature

bug

  • [bug]

    Added support for the cache_timeout Mako argument to the Mako plugin, which will pass the value to the expiration_time argument of CacheRegion.get_or_create().

0.4.2

Released: Sat Jan 19 2013

feature

  • [feature]

    An “async creator” function can be specified to CacheRegion which allows the “creation” function to be called asynchronously or be substituted for another asynchronous creation scheme. Courtesy Ralph Bean.

    References: pull request 10

0.4.1

Released: Sat Dec 15 2012

feature

  • [feature]

    The function decorated by CacheRegion.cache_on_arguments() now includes a set() method, in addition to the existing invalidate() method. Like invalidate(), it accepts a set of function arguments, but additionally accepts as the first positional argument a new value to place in the cache, to take the place of that key. Courtesy Antoine Bertin.

    References: pull request 9

  • [feature]

    Redis backend now accepts optional “url” argument, will be passed to the new StrictRedis.from_url() method to determine connection info. Courtesy Jon Rosebaugh.

  • [feature]

    Redis backend now accepts optional “password” argument. Courtesy Jon Rosebaugh.

  • [feature]

    DBM backend has “fallback” when calling dbm.get() to instead use dictionary access + KeyError, in the case that the “gdbm” backend is used which does not include .get(). Courtesy Jon Rosebaugh.

bug

  • [bug]

    Fixed bug in DBM backend whereby if an error occurred during the “write” operation, the file lock, if enabled, would not be released, thereby deadlocking the app.

    References: #15

  • [bug]

    The util.function_key_generator() used by the function decorator no longer coerces non-unicode arguments into a Python unicode object on Python 2.x; this causes failures on backends such as DBM which on Python 2.x apparently require bytestrings. The key_mangler is still needed if actual unicode arguments are being used by the decorated function, however.

    References: #12

0.4.0

Released: Tue Oct 30 2012

bug

  • [bug]

    Using dogpile.core 0.4.0 now, fixes a critical bug whereby dogpile pileup could occur on first value get across multiple processes, due to reliance upon a non-shared creation time. This is a dogpile.core issue.

    References: #1

  • [bug]

    Fixed missing __future__ with_statement directive in region.py.

0.3.1

Released: Tue Sep 25 2012

bug

  • [bug]

    Fixed the mako_cache plugin which was not yet covered, and wasn’t implementing the mako plugin API correctly; fixed docs as well. Courtesy Ben Hayden.

  • [bug]

    Fixed setup so that the tests/* directory isn’t yanked into the install. Courtesy Ben Hayden.

0.3.0

Released: Thu Jun 14 2012

feature

  • [feature]

    get() method now checks expiration time by default. Use ignore_expiration=True to bypass this.

  • [feature]

    Added new invalidate() method. Sets the current timestamp as a minimum value that all retrieved values must be created after. Is honored by the get_or_create() and get() methods.

    References: #7

bug

  • [bug]

    Fixed bug whereby region.get() didn’t work if the value wasn’t present.

    References: #8

0.2.4

no release date
  • Fixed py3k issue with config string coerce, courtesy Alexander Fedorov

0.2.3

Released: Wed May 16 2012
  • support “min_compress_len” and “memcached_expire_time” with python-memcached backend. Tests courtesy Justin Azoff

    References: #3

  • Add support for coercion of string config values to Python objects - ints, “false”, “true”, “None”.

    References: #4

  • Added support to DBM file lock to allow reentrant access per key within a single thread, so that even though the DBM backend locks for the whole file, a creation function that calls upon a different key in the cache can still proceed.

    References: #5

  • Fixed DBM glitch where multiple readers could be serialized.

  • Adjust bmemcached backend to work with newly-repaired bmemcached calling API (see bmemcached ef206ed4473fec3b639e).

0.2.2

Released: Thu Apr 19 2012
  • add Redis backend, courtesy Ollie Rutherfurd

0.2.1

Released: Sun Apr 15 2012
  • move tests into tests/cache namespace

  • py3k compatibility is in-place now, no 2to3 needed.

0.2.0

Released: Sat Apr 14 2012
  • Based on dogpile.core now, to get the package namespace thing worked out.

0.1.1

Released: Tue Apr 10 2012
  • Fixed the configure_from_config() method of region and backend which wasn’t working. Courtesy Christian Klinger.

0.1.0

Released: Sun Apr 08 2012
  • Initial release.

  • Includes a pylibmc backend and a plain dictionary backend.

dogpile.cache-0.9.0/docs/core_usage.html0000664000175000017500000006703713555610710021245 0ustar classicclassic00000000000000 dogpile Core — dogpile.cache 0.9.0 documentation

dogpile Core

dogpile provides a locking interface around a “value creation” and “value retrieval” pair of functions.

Changed in version 0.6.0: The dogpile package encapsulates the functionality that was previously provided by the separate dogpile.core package.

The primary interface is the Lock object, which provides for the invocation of the creation function by only one thread and/or process at a time, deferring all other threads/processes to the “value retrieval” function until the single creation thread is completed.

Do I Need to Learn the dogpile Core API Directly?

It’s anticipated that most users of dogpile will be using it indirectly via the dogpile.cache caching front-end. If you fall into this category, then the short answer is no.

Using the core dogpile APIs described here directly implies you’re building your own resource-usage system outside, or in addition to, the one dogpile.cache provides.

Rudimentary Usage

The primary API dogpile provides is the Lock object. This object allows for functions that provide mutexing, value creation, as well as value retrieval.

An example usage is as follows:

from dogpile import Lock, NeedRegenerationException
import threading
import time

# store a reference to a "resource", some
# object that is expensive to create.
the_resource = [None]

def some_creation_function():
    # call a value creation function
    value = create_some_resource()

    # get creationtime using time.time()
    creationtime = time.time()

    # keep track of the value and creation time in the "cache"
    the_resource[0] = tup = (value, creationtime)

    # return the tuple of (value, creationtime)
    return tup

def retrieve_resource():
    # function that retrieves the resource and
    # creation time.

    # if no resource, then raise NeedRegenerationException
    if the_resource[0] is None:
        raise NeedRegenerationException()

    # else return the tuple of (value, creationtime)
    return the_resource[0]

# a mutex, which needs here to be shared across all invocations
# of this particular creation function
mutex = threading.Lock()

with Lock(mutex, some_creation_function, retrieve_resource, 3600) as value:
      # some function that uses
      # the resource.  Won't reach
      # here until some_creation_function()
      # has completed at least once.
      value.do_something()

Above, some_creation_function() will be called when Lock is first invoked as a context manager. The value returned by this function is then passed into the with block, where it can be used by application code. Concurrent threads which call Lock during this initial period will be blocked until some_creation_function() completes.

Once the creation function has completed successfully the first time, new calls to Lock will call retrieve_resource() in order to get the current cached value as well as its creation time; if the creation time is older than the current time minus an expiration time of 3600, then some_creation_function() will be called again, but only by one thread/process, using the given mutex object as a source of synchronization. Concurrent threads/processes which call Lock during this period will fall through, and not be blocked; instead, the “stale” value just returned by retrieve_resource() will continue to be returned until the creation function has finished.

The Lock API is designed to work with simple cache backends like Memcached. It addresses such issues as:

  • Values can disappear from the cache at any time, before our expiration time is reached. The NeedRegenerationException class is used to alert the Lock object that a value needs regeneration ahead of the usual expiration time.

  • There’s no function in a Memcached-like system to “check” for a key without actually retrieving it. The usage of the retrieve_resource() function allows us to check for an existing key and also return the existing value, if any, at the same time, without the need for two separate round trips.

  • The “creation” function used by Lock is expected to store the newly created value in the cache, as well as to return it. This is also more efficient than using two separate round trips to separately store, and re-retrieve, the object.

Example: Using dogpile directly for Caching

The following example approximates Beaker’s “cache decoration” function, to decorate any function and store the value in Memcached. Note that normally, we’d just use dogpile.cache here, however for the purposes of example, we’ll illustrate how the Lock object is used directly.

We create a Python decorator function called cached() which will provide caching for the output of a single function. It’s given the “key” which we’d like to use in Memcached, and internally it makes usage of Lock, along with a thread based mutex (we’ll see a distributed mutex in the next section):

import pylibmc
import threading
import time
from dogpile import Lock, NeedRegenerationException

mc_pool = pylibmc.ThreadMappedPool(pylibmc.Client("localhost"))

def cached(key, expiration_time):
    """A decorator that will cache the return value of a function
    in memcached given a key."""

    mutex = threading.Lock()

    def get_value():
        with mc_pool.reserve() as mc:
            value_plus_time = mc.get(key)
            if value_plus_time is None:
                raise NeedRegenerationException()
            # return a tuple (value, createdtime)
            return value_plus_time

    def decorate(fn):
        def gen_cached():
            value = fn()
            with mc_pool.reserve() as mc:
                # create a tuple (value, createdtime)
                value_plus_time = (value, time.time())
                mc.set(key, value_plus_time)
            return value_plus_time

        def invoke():
            with Lock(mutex, gen_cached, get_value, expiration_time) as value:
                return value
        return invoke

    return decorate

Using the above, we can decorate any function as:

@cached("some key", 3600)
def generate_my_expensive_value():
    return slow_database.lookup("stuff")

The Lock object will ensure that only one thread at a time performs slow_database.lookup(), and only every 3600 seconds, unless Memcached has removed the value, in which case it will be called again as needed.

In particular, dogpile.core’s system allows us to call the memcached get() function at most once per access, instead of Beaker’s system which calls it twice, and doesn’t make us call get() when we just created the value.

For the mutex object, we keep a threading.Lock object that’s local to the decorated function, rather than using a global lock. This localizes the in-process locking to be local to this one decorated function. In the next section, we’ll see the usage of a cross-process lock that accomplishes this differently.

Using a File or Distributed Lock with Dogpile

The examples thus far use a threading.Lock() object for synchronization. If our application uses multiple processes, we will want to coordinate creation operations not just on threads, but on some mutex that other processes can access.

In this example we’ll use a file-based lock as provided by the lockfile package, which uses a unix-symlink concept to provide a filesystem-level lock (which also has been made threadsafe). Another strategy may base itself directly off the Unix os.flock() call, or use an NFS-safe file lock like flufl.lock, and still another approach is to lock against a cache server, using a recipe such as that described at Using Memcached as a Distributed Locking Service.

What all of these locking schemes have in common is that unlike the Python threading.Lock object, they all need access to an actual key which acts as the symbol that all processes will coordinate upon. So here, we will also need to create the “mutex” which we pass to Lock using the key argument:

import lockfile
import os
from hashlib import sha1

# ... other imports and setup from the previous example

def cached(key, expiration_time):
    """A decorator that will cache the return value of a function
    in memcached given a key."""

    # encode the key so that sha1() receives bytes under Python 3
    lock_path = os.path.join("/tmp", "%s.lock" % sha1(key.encode("utf-8")).hexdigest())

    # ... get_value() from the previous example goes here

    def decorate(fn):
        # ... gen_cached() from the previous example goes here

        def invoke():
            # create an ad-hoc FileLock
            mutex = lockfile.FileLock(lock_path)

            with Lock(mutex, gen_cached, get_value, expiration_time) as value:
                return value
        return invoke

    return decorate

For a given key “some_key”, we generate a hex digest of the key, then use lockfile.FileLock() to create a lock against the file /tmp/53def077a4264bd3183d4eb21b1f56f883e1b572.lock. Any number of Lock objects in various processes will now coordinate with each other, using this common filename as the “baton” against which creation of a new value proceeds.

Unlike when we used threading.Lock, the file lock is ultimately locking on a file, so multiple instances of FileLock() will all coordinate on that same file - it’s often the case that file locks that rely upon flock() require non-threaded usage, so a unique filesystem lock per thread is often a good idea in any case.

dogpile.cache-0.9.0/docs/front.html0000664000175000017500000001415113555610710020246 0ustar classicclassic00000000000000 Front Matter — dogpile.cache 0.9.0 documentation

Front Matter

Information about the dogpile.cache project.

Project Homepage

dogpile.cache is hosted on GitHub at https://github.com/sqlalchemy/dogpile.cache.

Releases and project status are available on PyPI at https://pypi.python.org/pypi/dogpile.cache.

The most recent published version of this documentation should be at https://dogpilecache.sqlalchemy.org.

Installation

Install released versions of dogpile.cache from the Python package index with pip or a similar tool:

pip install dogpile.cache

Bugs

Bugs and feature enhancements to dogpile.cache should be reported on the GitHub issue tracker.

dogpile.cache-0.9.0/docs/genindex.html0000664000175000017500000012751513555610710020730 0ustar classicclassic00000000000000 Index — dogpile.cache 0.9.0 documentation

Index


dogpile.cache-0.9.0/docs/index.html0000664000175000017500000003137713555610710020236 0ustar classicclassic00000000000000 Welcome to dogpile.cache’s documentation! — dogpile.cache 0.9.0 documentation

Welcome to dogpile.cache’s documentation!

Dogpile consists of two subsystems, one building on top of the other.

dogpile provides the concept of a “dogpile lock”, a control structure which allows a single thread of execution to be selected as the “creator” of some resource, while allowing other threads of execution to refer to the previous version of this resource as the creation proceeds; if there is no previous version, then those threads block until the object is available.

dogpile.cache is a caching API which provides a generic interface to caching backends of any variety, and additionally provides API hooks which integrate these cache backends with the locking mechanism of dogpile.

New backends are very easy to create and use; users are encouraged to adapt the provided backends for their own needs, as high volume caching requires lots of tweaks and adjustments specific to an application and its environment.


dogpile.cache-0.9.0/docs/py-modindex.html0000664000175000017500000001327013555610710021354 0ustar classicclassic00000000000000 Python Module Index — dogpile.cache 0.9.0 documentation dogpile.cache-0.9.0/docs/recipes.html0000664000175000017500000007661613555610710020566 0ustar classicclassic00000000000000 Recipes — dogpile.cache 0.9.0 documentation

Recipes

Asynchronous Data Updates with ORM Events

This recipe presents one technique of optimistically pushing new data into the cache when an update is sent to a database.

Using SQLAlchemy for database querying, suppose a simple cache-decorated function returns the results of a database query:

@region.cache_on_arguments()
def get_some_data(argument):
    # query database to get data
    data = Session().query(DBClass).filter(DBClass.argument == argument).all()
    return data

We would like this particular function to be re-queried when the data has changed. We could call get_some_data.invalidate(argument, hard=False) at the point at which the data changes, however this only leads to the invalidation of the old value; a new value is not generated until the next call, and also means at least one client has to block while the new value is generated. We could also call get_some_data.refresh(argument), which would perform the data refresh at that moment, but then the writer is delayed by the re-query.
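For illustration, those two in-prose options look like this (a sketch; "argument" stands for whatever value get_some_data() was originally called with):

# soft-invalidate the cached value; the next caller regenerates it
get_some_data.invalidate(argument, hard=False)

# or regenerate immediately, blocking the writer until done
get_some_data.refresh(argument)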

A third variant is to instead offload the work of refreshing for this query into a background thread or process. This can be achieved using a system such as the CacheRegion.async_creation_runner. However, an expedient approach for smaller use cases is to link cache refresh operations to the ORM session’s commit, as below:

from threading import Thread

from sqlalchemy import event
from sqlalchemy.orm import Session

def cache_refresh(session, refresher, *args, **kwargs):
    """
    Refresh the functions cache data in a new thread. Starts refreshing only
    after the session was committed so all database data is available.
    """
    assert isinstance(session, Session), \
        "Need a session, not a sessionmaker or scoped_session"

    @event.listens_for(session, "after_commit")
    def do_refresh(session):
        t = Thread(target=refresher, args=args, kwargs=kwargs)
        t.daemon = True
        t.start()

Within a sequence of data persistence, cache_refresh can be called given a particular SQLAlchemy Session and a callable to do the work:

def add_new_data(session, argument):
    # add some data
    session.add(something_new(argument))

    # add a hook to refresh after the Session is committed.
    cache_refresh(session, get_some_data.refresh, argument)

Note that the event to refresh the data is associated with the Session being used for persistence; however, the actual refresh operation is called with a different Session, typically one that is local to the refresh operation, either through a thread-local registry or via direct instantiation.
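A usage sketch tying the pieces together (the Session construction and the literal argument are placeholder details):

session = Session()
add_new_data(session, "some argument")

# the refresh thread is started only after this commit completes,
# via the "after_commit" event registered by cache_refresh()
session.commit()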

Prefixing all keys in Redis

If you use a Redis instance as a backend that contains other keys besides the ones set by dogpile.cache, it is a good idea to uniquely prefix all dogpile.cache keys, to avoid potential collisions with keys set by your own code. This can easily be done using a key mangler function:

from dogpile.cache import make_region

region = make_region(
  key_mangler=lambda key: "myapp:dogpile:" + key
)
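If keys may be long, or contain characters that a backend does not accept, the prefix can be combined with a hashing mangler. A possible variant (a sketch, not part of the original recipe) using the bundled dogpile.cache.util.sha1_mangle_key() helper, which returns the SHA-1 hex digest of the key:

from dogpile.cache import make_region
from dogpile.cache.util import sha1_mangle_key

# hash each key before prefixing, yielding short, backend-safe keys
region = make_region(
  key_mangler=lambda key: "myapp:dogpile:" + sha1_mangle_key(key)
)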

Encoding/Decoding data into another format

Since dogpile is managing cached data, you may be concerned with the size of your payloads. A possible method of helping minimize payloads is to use a ProxyBackend to recode the data on-the-fly or otherwise transform data as it enters or leaves persistent storage.

In the example below, we define two classes to implement msgpack encoding. Msgpack (http://msgpack.org/) is a serialization format that works exceptionally well with JSON-like data and can serialize nested dicts into a much smaller payload than Python’s own pickle. _EncodedProxy is our base class for building data encoders, and inherits from dogpile’s own ProxyBackend. You could just use one class. This class passes four of the main key/value functions into a configurable decoder and encoder. The MsgpackProxy class simply inherits from _EncodedProxy and implements the necessary value_decode and value_encode functions.

Encoded ProxyBackend Example:

from dogpile.cache.api import CachedValue, NO_VALUE
from dogpile.cache.proxy import ProxyBackend
import msgpack

class _EncodedProxy(ProxyBackend):
    """base class for building value-mangling proxies"""

    def value_decode(self, value):
        raise NotImplementedError("override me")

    def value_encode(self, value):
        raise NotImplementedError("override me")

    def set(self, k, v):
        v = self.value_encode(v)
        self.proxied.set(k, v)

    def get(self, key):
        v = self.proxied.get(key)
        return self.value_decode(v)

    def set_multi(self, mapping):
        """encode to a new dict to preserve unencoded values in-place when
           called by `get_or_create_multi`
           """
        mapping_set = {}
        for (k, v) in mapping.items():
            mapping_set[k] = self.value_encode(v)
        return self.proxied.set_multi(mapping_set)

    def get_multi(self, keys):
        results = self.proxied.get_multi(keys)
        translated = []
        for record in results:
            try:
                translated.append(self.value_decode(record))
            except Exception as e:
                raise
        return translated


class MsgpackProxy(_EncodedProxy):
    """custom decode/encode for value mangling"""

    def value_decode(self, v):
        if not v or v is NO_VALUE:
            return NO_VALUE
        # you probably want to specify a custom decoder via `object_hook`
        v = msgpack.unpackb(v, encoding="utf-8")
        return CachedValue(*v)

    def value_encode(self, v):
        # you probably want to specify a custom encoder via `default`
        v = msgpack.packb(v, use_bin_type=True)
        return v

# extend our region configuration from above with a 'wrap'
region = make_region().configure(
    'dogpile.cache.pylibmc',
    expiration_time = 3600,
    arguments = {
        'url': ["127.0.0.1"],
    },
    wrap = [MsgpackProxy, ]
)
dogpile.cache-0.9.0/docs/search.html0000664000175000017500000000703013555610710020361 0ustar classicclassic00000000000000 Search — dogpile.cache 0.9.0 documentation

Search


dogpile.cache-0.9.0/docs/searchindex.js0000664000175000017500000006435513555610710021076 0ustar classicclassic00000000000000Search.setIndex({docnames:["api","changelog","core_usage","front","index","recipes","usage"],envversion:{"sphinx.domains.c":1,"sphinx.domains.changeset":1,"sphinx.domains.citation":1,"sphinx.domains.cpp":1,"sphinx.domains.javascript":1,"sphinx.domains.math":2,"sphinx.domains.python":1,"sphinx.domains.rst":1,"sphinx.domains.std":1,"sphinx.ext.intersphinx":1,sphinx:56},filenames:["api.rst","changelog.rst","core_usage.rst","front.rst","index.rst","recipes.rst","usage.rst"],objects:{"dogpile.Lock.params":{async_creator:[0,1,1,""],creator:[0,1,1,""],expiretime:[0,1,1,""],mutex:[0,1,1,""],value_and_created_fn:[0,1,1,""]},"dogpile.cache":{api:[0,2,0,"-"],exception:[0,2,0,"-"],proxy:[0,2,0,"-"],region:[0,2,0,"-"]},"dogpile.cache.api":{CacheBackend:[0,0,1,""],CachedValue:[0,0,1,""],NO_VALUE:[0,5,1,""],NoValue:[0,0,1,""]},"dogpile.cache.api.CacheBackend":{"delete":[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_mutex:[0,3,1,""],key_mangler:[0,4,1,""],set:[0,3,1,""],set_multi:[0,3,1,""]},"dogpile.cache.api.CachedValue":{metadata:[0,3,1,""],payload:[0,3,1,""]},"dogpile.cache.backends":{"null":[0,2,0,"-"],file:[0,2,0,"-"],memcached:[0,2,0,"-"],memory:[0,2,0,"-"],redis:[0,2,0,"-"]},"dogpile.cache.backends.file":{AbstractFileLock:[0,0,1,""],DBMBackend:[0,0,1,""],FileLock:[0,0,1,""]},"dogpile.cache.backends.file.AbstractFileLock":{acquire:[0,3,1,""],acquire_read_lock:[0,3,1,""],acquire_write_lock:[0,3,1,""],is_open:[0,3,1,""],read:[0,3,1,""],release:[0,3,1,""],release_read_lock:[0,3,1,""],release_write_lock:[0,3,1,""],write:[0,3,1,""]},"dogpile.cache.backends.file.DBMBackend":{"delete":[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_mutex:[0,3,1,""],set:[0,3,1,""],set_multi:[0,3,1,""]},"dogpile.cache.backends.file.DBMBackend.params":{dogpile_lockfile:[0,1,1,""],filename:[0,1,1,""],lock_factory:[0,1,1,""],rw_lockfile:[0,1,1,""]},"dogpile.cache.backends.file.FileLock":{acquire_read_lock:[0,3,1,""],acquire_write_lock:[0,3,1,""],is_open:[0,3,1,""],release_read_lock:[0,3,1,""],release_write_lock:[0,3,1,""]},"dogpile.cache.backends.memcached":{BMemcachedBackend:[0,0,1,""],GenericMemcachedBackend:[0,0,1,""],MemcachedBackend:[0,0,1,""],MemcachedLock:[0,0,1,""],PylibmcBackend:[0,0,1,""]},"dogpile.cache.backends.memcached.BMemcachedBackend":{delete_multi:[0,3,1,""]},"dogpile.cache.backends.memcached.BMemcachedBackend.params":{password:[0,1,1,""],username:[0,1,1,""]},"dogpile.cache.backends.memcached.GenericMemcachedBackend":{"delete":[0,3,1,""],client:[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_mutex:[0,3,1,""],set:[0,3,1,""],set_arguments:[0,4,1,""],set_multi:[0,3,1,""]},"dogpile.cache.backends.memcached.GenericMemcachedBackend.params":{distributed_lock:[0,1,1,""],lock_timeout:[0,1,1,""],memcached_expire_time:[0,1,1,""],url:[0,1,1,""]},"dogpile.cache.backends.memcached.PylibmcBackend.params":{behaviors:[0,1,1,""],binary:[0,1,1,""],min_compress_len:[0,1,1,""]},"dogpile.cache.backends.memory":{MemoryBackend:[0,0,1,""],MemoryPickleBackend:[0,0,1,""]},"dogpile.cache.backends.memory.MemoryBackend":{"delete":[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],set:[0,3,1,""],set_multi:[0,3,1,""]},"dogpile.cache.backends.null":{NullBackend:[0,0,1,""]},"dogpile.cache.backends.null.NullBackend":{"delete":[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_mutex:[0,3,1,""],set:[0
,3,1,""],set_multi:[0,3,1,""]},"dogpile.cache.backends.redis":{RedisBackend:[0,0,1,""]},"dogpile.cache.backends.redis.RedisBackend":{"delete":[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_mutex:[0,3,1,""],set:[0,3,1,""],set_multi:[0,3,1,""]},"dogpile.cache.backends.redis.RedisBackend.params":{connection_pool:[0,1,1,""],db:[0,1,1,""],distributed_lock:[0,1,1,""],host:[0,1,1,""],lock_sleep:[0,1,1,""],lock_timeout:[0,1,1,""],password:[0,1,1,""],port:[0,1,1,""],redis_expiration_time:[0,1,1,""],socket_timeout:[0,1,1,""],url:[0,1,1,""]},"dogpile.cache.exception":{DogpileCacheException:[0,6,1,""],PluginNotFound:[0,6,1,""],RegionAlreadyConfigured:[0,6,1,""],RegionNotConfigured:[0,6,1,""],ValidationError:[0,6,1,""]},"dogpile.cache.plugins":{mako_cache:[0,2,0,"-"]},"dogpile.cache.plugins.mako_cache":{MakoPlugin:[0,0,1,""]},"dogpile.cache.plugins.mako_cache.MakoPlugin":{get:[0,3,1,""],get_or_create:[0,3,1,""],invalidate:[0,3,1,""]},"dogpile.cache.plugins.mako_cache.MakoPlugin.get.params":{"**kw":[0,1,1,""],key:[0,1,1,""]},"dogpile.cache.plugins.mako_cache.MakoPlugin.get_or_create.params":{"**kw":[0,1,1,""],creation_function:[0,1,1,""],key:[0,1,1,""]},"dogpile.cache.plugins.mako_cache.MakoPlugin.invalidate.params":{"**kw":[0,1,1,""],key:[0,1,1,""]},"dogpile.cache.proxy":{ProxyBackend:[0,0,1,""]},"dogpile.cache.proxy.ProxyBackend":{"delete":[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_mutex:[0,3,1,""],set:[0,3,1,""],set_multi:[0,3,1,""],wrap:[0,3,1,""]},"dogpile.cache.region":{CacheRegion:[0,0,1,""],DefaultInvalidationStrategy:[0,0,1,""],RegionInvalidationStrategy:[0,0,1,""],make_region:[0,7,1,""],value_version:[0,5,1,""]},"dogpile.cache.region.CacheRegion":{"delete":[0,3,1,""],actual_backend:[0,3,1,""],cache_multi_on_arguments:[0,3,1,""],cache_on_arguments:[0,3,1,""],configure:[0,3,1,""],configure_from_config:[0,3,1,""],delete_multi:[0,3,1,""],get:[0,3,1,""],get_multi:[0,3,1,""],get_or_create:[0,3,1,""],get_or_create_multi:[0,3,1,""],invalidate:[0,3,1,""],is_configured:[0,3,1,""],set:[0,3,1,""],set_multi:[0,3,1,""],wrap:[0,3,1,""]},"dogpile.cache.region.CacheRegion.cache_multi_on_arguments.params":{asdict:[0,1,1,""],expiration_time:[0,1,1,""],function_multi_key_generator:[0,1,1,""],namespace:[0,1,1,""],should_cache_fn:[0,1,1,""],to_str:[0,1,1,""]},"dogpile.cache.region.CacheRegion.cache_on_arguments.params":{expiration_time:[6,1,1,""],function_key_generator:[6,1,1,""],namespace:[6,1,1,""],should_cache_fn:[6,1,1,""],to_str:[6,1,1,""]},"dogpile.cache.region.CacheRegion.configure.params":{arguments:[6,1,1,""],backend:[6,1,1,""],expiration_time:[6,1,1,""],region_invalidator:[6,1,1,""],replace_existing_backend:[6,1,1,""],wrap:[6,1,1,""]},"dogpile.cache.region.CacheRegion.get.params":{expiration_time:[6,1,1,""],ignore_expiration:[6,1,1,""],key:[6,1,1,""]},"dogpile.cache.region.CacheRegion.get_or_create.params":{creator:[6,1,1,""],creator_args:[6,1,1,""],expiration_time:[6,1,1,""],key:[6,1,1,""],should_cache_fn:[6,1,1,""]},"dogpile.cache.region.CacheRegion.get_or_create_multi.params":{creator:[0,1,1,""],expiration_time:[0,1,1,""],keys:[0,1,1,""],should_cache_fn:[0,1,1,""]},"dogpile.cache.region.CacheRegion.invalidate.params":{hard:[0,1,1,""]},"dogpile.cache.region.CacheRegion.params":{async_creation_runner:[6,1,1,""],function_key_generator:[6,1,1,""],function_multi_key_generator:[6,1,1,""],key_mangler:[6,1,1,""],name:[6,1,1,""]},"dogpile.cache.region.DefaultInvalidationStrategy":{invalidate:[0,3,1,""],is_hard_invalidated:[0,3,1,""],is_invalidated:[0,3,
1,""],is_soft_invalidated:[0,3,1,""],was_hard_invalidated:[0,3,1,""],was_soft_invalidated:[0,3,1,""]},"dogpile.cache.region.RegionInvalidationStrategy":{invalidate:[0,3,1,""],is_hard_invalidated:[0,3,1,""],is_invalidated:[0,3,1,""],is_soft_invalidated:[0,3,1,""],was_hard_invalidated:[0,3,1,""],was_soft_invalidated:[0,3,1,""]},"dogpile.cache.util":{function_key_generator:[0,7,1,""],kwarg_function_key_generator:[0,7,1,""],length_conditional_mangler:[0,7,1,""],sha1_mangle_key:[0,7,1,""]},"dogpile.util":{NameRegistry:[0,0,1,""],ReadWriteMutex:[0,0,1,""]},"dogpile.util.NameRegistry":{get:[0,3,1,""]},"dogpile.util.NameRegistry.get.params":{"**kw":[0,1,1,""],identifier:[0,1,1,""]},"dogpile.util.NameRegistry.params":{creator:[0,1,1,""]},"dogpile.util.ReadWriteMutex":{acquire_read_lock:[0,3,1,""],acquire_write_lock:[0,3,1,""],release_read_lock:[0,3,1,""],release_write_lock:[0,3,1,""]},dogpile:{Lock:[0,0,1,""],NeedRegenerationException:[0,0,1,""]}},objnames:{"0":["py","class","Python class"],"1":["py","parameter","Python parameter"],"2":["py","module","Python module"],"3":["py","method","Python method"],"4":["py","attribute","Python attribute"],"5":["py","data","Python data"],"6":["py","exception","Python exception"],"7":["py","function","Python function"]},objtypes:{"0":"py:class","1":"py:parameter","2":"py:module","3":"py:method","4":"py:attribute","5":"py:data","6":"py:exception","7":"py:function"},terms:{"2to3":1,"53def077a4264bd3183d4eb21b1f56f883e1b572":2,"\u0142ukasz":1,"abstract":1,"boolean":[0,1],"byte":1,"case":[0,1,2,5,6],"class":[0,1,2,5,6],"default":[0,1,5,6],"final":[0,6],"float":[0,1],"function":[0,1,2,5,6],"import":[0,1,2,5,6],"int":1,"long":[0,1,6],"menkevi\u010diu":1,"new":[0,1,2,4,5,6],"null":1,"return":[0,1,2,5,6],"short":2,"st\u00e9phane":1,"super":6,"true":[0,1,5,6],"try":[0,5,6],"while":[0,1,4,5,6],Added:[0,1,6],For:[0,2,6],NFS:2,One:[0,6],Such:[0,6],The:[0,1,2,3,5,6],Then:6,There:[0,2],These:6,Use:[0,1],Used:[0,6],Using:[1,4,5],With:[0,6],__future__:1,__init__:[0,6],__main__:6,__name__:[0,5,6],__repr__:1,_config_argument_dict:[0,6],_config_prefix:[0,6],_encodedproxi:5,_hard_invalid:0,_soft_invalid:0,abil:0,about:3,abov:[0,2,5,6],abstractfilelock:[0,1],accept:[0,1,6],access:[0,1,2,6],accessor:[0,1],accomplish:2,account:[0,1],acheiv:5,acquir:[0,1,6],acquire_read_lock:0,acquire_write_lock:0,across:[0,1,2],act:2,actual:[0,1,2,5,6],actual_backend:[0,1],adapt:[0,4],add:[1,5],add_new_data:5,added:[0,1,6],addit:[0,1,2,6],addition:[1,4],address:2,adjust:[1,4],advantag:0,adventag:6,affect:[0,6],after:[0,1,5,6],after_commit:5,again:[0,2,6],against:[0,1,2,6],ahead:2,akkerman:1,alert:2,alexand:1,all:[0,1,2,4,6],allow:[0,1,2,4,6],along:[2,5],alpha:1,alphabet:0,alreadi:[0,1,6],also:[0,1,2,5,6],alter:[0,6],altern:[0,6],although:1,alwai:[0,6],amix:0,among:6,analogu:0,anderson:1,anentrop:1,ani:[0,1,2,4,6],ankitpatel96:1,anoth:[0,1,2,4,6],answer:2,anticip:2,antoin:1,anydbm:0,anyth:6,api:[1,4,5,6],app:1,appar:1,appear:0,append:[0,5],appli:[0,1,5,6],applic:[0,2,4,6],approach:[0,1,2,5,6],appropri:[0,6],approxim:2,apr:1,araujo:1,arbitrari:0,aren:0,arg:[0,5,6],argnam:0,argspec:0,argument:[0,1,2,5,6],argvalu:0,around:[0,2],ascii:[0,6],asdict:[0,1],ask:0,assembl:0,assert:5,associ:[0,5,6],assum:[0,1],async:1,async_cr:[0,6],async_creation_runn:[0,1,5,6],asynchron:[0,1,4,6],attach:0,attempt:[0,6],attribut:[0,6],aug:1,augment:[0,1,6],authent:0,automat:0,avail:[0,1,3,4,5,6],avoid:[1,5],awar:[0,5,6],azoff:1,back:[0,1],backend:[1,2,4,5],background:[0,5,6],backward:0,base:[0,1,2,5,6],basi:[0,1,6],basic:[0,6],b
asicconfig:6,baton:2,beaker:[0,2],bean:1,becom:[0,6],been:[0,1,2,5,6],befor:[0,1,2,6],behalf:6,behav:0,behavior:[0,1,4,5],behind:6,being:[0,1,5,6],beitei:1,below:[0,5,6],ben:1,bertin:1,besid:5,best:0,between:[0,6],beyond:1,binari:0,bit:6,block:[0,2,4,5,6],blog:0,bmemcach:[0,1],bmemcachedbackend:0,bookkeep:5,bool:0,both:[0,1],bother:6,brian:1,brown:1,brunner:1,bsddb:[0,6],bug:4,build:[0,1,2,4,5],built:[0,6],builtin:[0,6],bypass:[0,1,6],bytestr:[0,1,6],cach:[0,1,3,5,6],cache_arg:0,cache_dict:0,cache_impl:0,cache_multi_on_argu:[0,1,6],cache_on_argu:[0,1,5,6],cache_refresh:5,cache_region:0,cache_timeout:1,cachebackend:[0,6],cachedvalu:[0,5,6],cachefil:0,cacheimpl:0,cacheregion:[0,1,5,6],calcul:1,call:[0,1,2,5,6],callabl:[0,1,5,6],caller:0,can:[0,1,2,5,6],cannot:[0,1,6],categori:2,caus:1,caveat:[0,6],celeri:[0,6],certain:[0,6],chain:[0,6],chang:[0,1,2,4,5],changelog:4,check:[0,1,2,6],choos:0,christian:1,circumst:6,client:[0,1,2,5,6],client_fn:0,cls:[0,6],code:[0,2,5],coerc:[0,1,6],coerce_string_conf:1,coercion:1,collect:0,collis:5,com:[1,3],combin:[0,6],command:1,commit:5,common:[0,1,2,6],commun:1,compar:[0,6],compat:[0,1],compatbl:1,complement:1,complet:[0,2],compon:6,comput:[0,6],concept:[0,2,4],concern:5,concret:0,concurr:[0,2,6],condit:[0,1],condition:[0,6],conf:1,config:[0,1,6],config_dict:[0,6],configur:[0,1,4,5],configure_from_config:[0,1,6],conjunct:[0,1,6],connect:[0,1],connection_pool:[0,1],connectionpool:[0,1],consid:[0,1,6],consist:[4,6],constant:1,construct:[0,1,6],constructor:[0,6],consult:[0,6],contain:[0,5,6],content:0,context:[0,2],continu:[0,1,2],contribut:1,control:4,conveni:[0,6],convert:[0,6],coordin:[0,2,6],copi:[0,1],core:[1,4,6],correct:[0,6],correctli:1,correspond:0,could:[0,1,5],count:5,counter:5,courtesi:1,cover:1,cpickl:0,creat:[0,1,2,4],create_some_resourc:2,create_valu:[0,6],createdtim:2,creation:[0,1,2,4,6],creation_funct:0,creation_tim:0,creationtim:2,creator:[0,1,4,6],creator_arg:[0,1,6],critic:1,cross:2,current:[0,1,2,6],custom:[0,1,5,6],custominvalidationstrategi:0,customiz:1,cutom:0,d_arg1_arg2_arg3:5,daemon:5,dai:[0,6],dairiki:1,daniel:1,data:[0,4,6],databas:5,date:[0,1,6],datetim:[0,1,6],david:1,dbclass:5,dbm:[0,1,6],dbmbackend:[0,1],dbmfile:[0,6],deadlock:1,deal:0,debug:[1,6],dec:1,declar:[0,6],decod:4,decor:[0,1,2,5,6],def:[0,2,5,6],defaultinvalidationstrategi:0,defer:[0,2,6],defin:[5,6],delai:5,deleg:0,delet:[0,1,5,6],delete_multi:[0,1],deliv:1,depend:[0,1,6],deprec:1,deriv:[0,6],describ:[0,2,6],descriptor:0,deseri:0,design:[2,6],desir:0,destruct:1,detail:[0,6],detect:[0,6],determin:[0,1,6],dev:1,develop:1,dict:[0,5],dictionari:[0,1,6],dictionarybackend:6,did:[1,6],didn:1,differ:[0,1,2,5,6],digest:2,direct:[0,1,5],directli:[0,1,4,6],directori:[0,1],disabl:[0,1,6],disambigu:[0,6],disappear:2,discard:0,distinguish:[0,6],distribut:[0,1,4],distributed_lock:[0,1,6],do_refresh:5,do_someth:2,doc:1,document:[0,1,3],doe:[0,1,5,6],doesn:[0,2],dogpil:[1,3,5,6],dogpile_lockfil:0,dogpilecach:3,dogpilecacheexcept:0,doing:0,don:[0,1],done:5,dont_cache_non:[0,6],down:[0,6],drop:[0,1],due:1,dump:1,dure:[0,1,2],dynam:[0,1,6],each:[0,1,2,5,6],easi:4,easier:6,easili:[5,6],edg:6,ef206ed4473fec3b639:1,effect:0,effici:2,effort:1,either:[0,1,5,6],elect:0,elimin:6,els:[0,2],emit:[1,6],emploi:1,empti:1,enabl:1,encapsul:2,encod:4,encourag:4,end:[0,1,2,6],enhanc:[1,3],enough:0,ensur:[1,2],enter:5,entir:[0,1,6],entri:1,entry_point:6,entrypoint:[0,1,6],environ:[0,4],equival:[0,1,6],eric:1,errant:1,erron:1,error:[0,1],establish:[0,1,6],etc:[0,1],evalu:[0,6],even:[0,1],event:[1,4,6],eve
dogpile.cache-0.9.0/docs/usage.html0000664000175000017500000027374613555610710020233 0ustar classicclassic00000000000000 Usage Guide — dogpile.cache 0.9.0 documentation

Usage Guide

Overview

At the time of this writing, popular key/value servers include Memcached, Redis and many others. While these tools all have different usage focuses, they all have in common that the storage model is based on the retrieval of a value based on a key; as such, they are all potentially suitable for caching, particularly Memcached which is first and foremost designed for caching.

With a caching system in mind, dogpile.cache provides an interface to a particular Python API targeted at that system.

A dogpile.cache configuration consists of the following components:

  • A region, which is an instance of CacheRegion, and defines the configuration details for a particular cache backend. The CacheRegion can be considered the “front end” used by applications.

  • A backend, which is an instance of CacheBackend, describing how values are stored and retrieved from a backend. This interface specifies only get(), set() and delete(). The actual kind of CacheBackend in use for a particular CacheRegion is determined by the underlying Python API being used to talk to the cache, such as Pylibmc. The CacheBackend is instantiated behind the scenes and not directly accessed by applications under normal circumstances.

  • Value generation functions. These are user-defined functions that generate new values to be placed in the cache. While dogpile.cache offers the usual “set” approach of placing data into the cache, the usual mode of usage is to only instruct it to “get” a value, passing it a creation function which will be used to generate a new value if and only if one is needed. This “get-or-create” pattern is the entire key to the “Dogpile” system, which coordinates a single value creation operation among many concurrent get operations for a particular key, eliminating the issue of an expired value being redundantly re-generated by many workers simultaneously. A minimal sketch of this pattern follows this list.
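
As referenced above, a minimal sketch of the get-or-create pattern, assuming an already-configured region and an illustrative create_value() function:

def create_value():
    # expensive operation; the dogpile lock ensures only one
    # concurrent caller runs this for a given key at a time
    return some_database.do_expensive_query()

value = region.get_or_create("some key", create_value)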

Rudimentary Usage

dogpile.cache includes a Pylibmc backend. A basic configuration looks like:

from dogpile.cache import make_region

region = make_region().configure(
    'dogpile.cache.pylibmc',
    expiration_time = 3600,
    arguments = {
        'url': ["127.0.0.1"],
    }
)

@region.cache_on_arguments()
def load_user_info(user_id):
    return some_database.lookup_user_by_id(user_id)

Above, we create a CacheRegion using the make_region() function, then apply the backend configuration via the CacheRegion.configure() method, which returns the region. The name of the backend is the only argument required by CacheRegion.configure() itself, in this case dogpile.cache.pylibmc. However, in this specific case, the pylibmc backend also requires that the URL of the memcached server be passed within the arguments dictionary.

The configuration is separated into two sections. Upon construction via make_region(), the CacheRegion object is available, typically at module import time, for usage in decorating functions. Additional configuration details passed to CacheRegion.configure() are typically loaded from a configuration file and therefore not necessarily available until runtime, hence the two-step configuration process.

Key arguments passed to CacheRegion.configure() include expiration_time, which is the expiration time passed to the Dogpile lock, and arguments, which are arguments used directly by the backend - in this case we are using arguments that are passed directly to the pylibmc module.

Region Configuration

The make_region() function currently calls the CacheRegion constructor directly.

class dogpile.cache.region.CacheRegion(name=None, function_key_generator=<function function_key_generator>, function_multi_key_generator=<function function_multi_key_generator>, key_mangler=None, async_creation_runner=None)

A front end to a particular cache backend.

Parameters
  • name – Optional, a string name for the region. This isn’t used internally but can be accessed via the .name parameter, helpful for configuring a region from a config file.

  • function_key_generator

    Optional. A function that will produce a “cache key” given a data creation function and arguments, when using the CacheRegion.cache_on_arguments() method. The structure of this function should be two levels: given the data creation function, return a new function that generates the key based on the given arguments. Such as:

    def my_key_generator(namespace, fn, **kw):
        fname = fn.__name__
        def generate_key(*arg):
        return namespace + "_" + fname + "_" + "_".join(str(s) for s in arg)
        return generate_key
    
    
    region = make_region(
        function_key_generator = my_key_generator
    ).configure(
        "dogpile.cache.dbm",
        expiration_time=300,
        arguments={
            "filename":"file.dbm"
        }
    )
    

    The namespace is that passed to CacheRegion.cache_on_arguments(). It’s not consulted outside this function, so in fact can be of any form. For example, it can be passed as a tuple, used to specify arguments to pluck from **kw:

    def my_key_generator(namespace, fn):
        def generate_key(*arg, **kw):
            return ":".join(
                    [kw[k] for k in namespace] +
                    [str(x) for x in arg]
                )
        return generate_key
    

    Where the decorator might be used as:

    @my_region.cache_on_arguments(namespace=('x', 'y'))
    def my_function(a, b, **kw):
        return my_data()
    

    See also

    function_key_generator() - default key generator

    kwarg_function_key_generator() - optional gen that also uses keyword arguments

  • function_multi_key_generator

    Optional. Similar to function_key_generator parameter, but it’s used in CacheRegion.cache_multi_on_arguments(). Generated function should return list of keys. For example:

    def my_multi_key_generator(namespace, fn, **kw):
        namespace = fn.__name__ + (namespace or '')
    
        def generate_keys(*args):
            return [namespace + ':' + str(a) for a in args]
    
        return generate_keys
    

  • key_mangler – Function which will be used on all incoming keys before passing to the backend. Defaults to None, in which case the key mangling function recommended by the cache backend will be used. A typical mangler is the SHA1 mangler found at sha1_mangle_key() which coerces keys into a SHA1 hash, so that the string length is fixed. To disable all key mangling, set to False. Another typical mangler is the built-in Python function str, which can be used to convert non-string or Unicode keys to bytestrings, which is needed when using a backend such as bsddb or dbm under Python 2.x in conjunction with Unicode keys. A short sketch of a custom mangler appears after this parameter list.

  • async_creation_runner

    A callable that, when specified, will be passed to and called by dogpile.lock when there is a stale value present in the cache. It will be passed the mutex and is responsible for releasing that mutex when finished. This can be used to defer the computation of expensive creator functions to later points in the future by way of, for example, a background thread, a long-running queue, or a task manager system like Celery.

    For a specific example using async_creation_runner, new values can be created in a background thread like so:

    import threading
    
    def async_creation_runner(cache, somekey, creator, mutex):
        ''' Used by dogpile.core:Lock when appropriate  '''
        def runner():
            try:
                value = creator()
                cache.set(somekey, value)
            finally:
                mutex.release()
    
        thread = threading.Thread(target=runner)
        thread.start()
    
    
    region = make_region(
        async_creation_runner=async_creation_runner,
    ).configure(
        'dogpile.cache.memcached',
        expiration_time=5,
        arguments={
            'url': '127.0.0.1:11211',
            'distributed_lock': True,
        }
    )
    

    Remember that the first request for a key with no associated value will always block; async_creator will not be invoked. However, subsequent requests for cached-but-expired values will still return promptly. They will be refreshed by whatever asynchronous means the provided async_creation_runner callable implements.

    By default the async_creation_runner is disabled and is set to None.

    New in version 0.4.2: added the async_creation_runner feature.
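
As a short sketch of the key_mangler parameter above, a custom mangler simply maps an incoming key to the key actually sent to the backend. The choice of SHA-256 here and the assumption that keys arrive as strings are both illustrative:

import hashlib

from dogpile.cache import make_region

def sha256_mangle_key(key):
    # hash incoming keys so their length is fixed and backend-safe
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

region = make_region(key_mangler=sha256_mangle_key)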

Once you have a CacheRegion, the CacheRegion.cache_on_arguments() method can be used to decorate functions, but the cache itself can’t be used until CacheRegion.configure() is called. The interface for that method is as follows:

CacheRegion.configure(backend, expiration_time=None, arguments=None, _config_argument_dict=None, _config_prefix=None, wrap=None, replace_existing_backend=False, region_invalidator=None)

Configure a CacheRegion.

The CacheRegion itself is returned.

Parameters
  • backend – Required. This is the name of the CacheBackend to use, and is resolved by loading the class from the dogpile.cache entrypoint.

  • expiration_time

    Optional. The expiration time passed to the dogpile system. May be passed as an integer number of seconds, or as a datetime.timedelta value (a short example follows this parameter list).

    The CacheRegion.get_or_create() method as well as the CacheRegion.cache_on_arguments() decorator (though note: not the CacheRegion.get() method) will call upon the value creation function after this time period has passed since the last generation.

  • arguments – Optional. The structure here is passed directly to the constructor of the CacheBackend in use, though is typically a dictionary.

  • wrap

    Optional. A list of ProxyBackend classes and/or instances, each of which will be applied in a chain to ultimately wrap the original backend, so that custom functionality augmentation can be applied.

    New in version 0.5.0.

  • replace_existing_backend

    if True, the existing cache backend will be replaced. Without this flag, an exception is raised if a backend is already configured.

    New in version 0.5.7.

  • region_invalidator

    Optional. Override default invalidation strategy with custom implementation of RegionInvalidationStrategy.

    New in version 0.6.2.
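
As a brief illustration of the expiration_time parameter above, a datetime.timedelta may stand in for an integer number of seconds; the memory backend here is only an example choice:

import datetime

from dogpile.cache import make_region

region = make_region().configure(
    "dogpile.cache.memory",
    expiration_time=datetime.timedelta(hours=1),
)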

The CacheRegion can also be configured from a dictionary, using the CacheRegion.configure_from_config() method:

CacheRegion.configure_from_config(config_dict, prefix)

Configure from a configuration dictionary and a prefix.

Example:

local_region = make_region()
memcached_region = make_region()

# regions are ready to use for function
# decorators, but not yet for actual caching

# later, when config is available
myconfig = {
    "cache.local.backend":"dogpile.cache.dbm",
    "cache.local.arguments.filename":"/path/to/dbmfile.dbm",
    "cache.memcached.backend":"dogpile.cache.pylibmc",
    "cache.memcached.arguments.url":"127.0.0.1, 10.0.0.1",
}
local_region.configure_from_config(myconfig, "cache.local.")
memcached_region.configure_from_config(myconfig,
                                    "cache.memcached.")

Using a Region

The CacheRegion object is our front-end interface to a cache. It includes the following methods:

CacheRegion.get(key, expiration_time=None, ignore_expiration=False)

Return a value from the cache, based on the given key.

If the value is not present, the method returns the token NO_VALUE. NO_VALUE evaluates to False, but is distinct from None, so that a cache miss can be distinguished from a cached value of None.
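
A quick sketch of the typical miss check, assuming a configured region and an illustrative compute_value() helper:

from dogpile.cache.api import NO_VALUE

value = region.get("some key")
if value is NO_VALUE:
    # cache miss or expired value; fall back to computing it
    value = compute_value()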

By default, the configured expiration time of the CacheRegion, or alternatively the expiration time supplied by the expiration_time argument, is tested against the creation time of the retrieved value versus the current time (as reported by time.time()). If stale, the cached value is ignored and the NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the expiration time check.

Changed in version 0.3.0: CacheRegion.get() now checks the value’s creation time against the expiration time, rather than returning the value unconditionally.

The method also interprets the cached value in terms of the current “invalidation” time as set by the invalidate() method. If a value is present, but its creation time is older than the current invalidation time, the NO_VALUE token is returned. Passing the flag ignore_expiration=True bypasses the invalidation time check.

New in version 0.3.0: Support for the CacheRegion.invalidate() method.
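
A brief sketch of that interaction; values created before the call to invalidate() read as missing afterward:

region.set("some key", "some value")
region.invalidate()  # values created before this point are now invalid
value = region.get("some key")  # returns the NO_VALUE token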

Parameters
  • key – Key to be retrieved. While it’s typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present.

  • expiration_time

    Optional expiration time value which will supersede that configured on the CacheRegion itself.

    Note

    The CacheRegion.get.expiration_time argument is not persisted in the cache and is relevant only to this specific cache retrieval operation, relative to the creation time stored with the existing cached value. Subsequent calls to CacheRegion.get() are not affected by this value.

    New in version 0.3.0.

  • ignore_expiration

    if True, the value is returned from the cache if present, regardless of configured expiration times or whether or not invalidate() was called.

    New in version 0.3.0.

CacheRegion.get_or_create(key, creator, expiration_time=None, should_cache_fn=None, creator_args=None)

Return a cached value based on the given key.

If the value does not exist or is considered to be expired based on its creation time, the given creation function may or may not be used to recreate the value and persist the newly generated value in the cache.

Whether or not the function is used depends on whether the dogpile lock can be acquired. If it can’t, it means a different thread or process is already running a creation function for this key against the cache. When the dogpile lock cannot be acquired, the method will block if no previous value is available, until the lock is released and a new value is available. If a previous value is available, that value is returned immediately without blocking.

If the invalidate() method has been called, and the retrieved value’s timestamp is older than the invalidation timestamp, the value is unconditionally prevented from being returned. The method will attempt to acquire the dogpile lock to generate a new value, or will wait until the lock is released to return the new value.

Changed in version 0.3.0: The value is unconditionally regenerated if the creation time is older than the last call to invalidate().

Parameters
  • key – Key to be retrieved. While it’s typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present.

  • creator – function which creates a new value.

  • creator_args

    optional tuple of (args, kwargs) that will be passed to the creator function if present (a short sketch follows this parameter list).

    New in version 0.7.0.

  • expiration_time

    optional expiration time which will override the expiration time already configured on this CacheRegion if not None. To set no expiration, use the value -1.

    Note

    The CacheRegion.get_or_create.expiration_time argument is not persisted in the cache and is relevant only to this specific cache retrieval operation, relative to the creation time stored with the existing cached value. Subsequent calls to CacheRegion.get_or_create() are not affected by this value.

  • should_cache_fn

    optional callable function which will receive the value returned by the “creator”, and will then return True or False, indicating if the value should actually be cached or not. If it returns False, the value is still returned, but isn’t cached. E.g.:

    def dont_cache_none(value):
        return value is not None
    
    value = region.get_or_create("some key",
                        create_value,
                        should_cache_fn=dont_cache_none)
    

    Above, the function returns the value of create_value() if the cache is invalid; however, if the return value is None, it won’t be cached.

    New in version 0.4.3.
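
As referenced in the creator_args parameter above, a minimal sketch using a hypothetical load_widget() creator:

def load_widget(widget_id, include_stale=False):
    # hypothetical expensive lookup
    return {"id": widget_id, "stale_ok": include_stale}

value = region.get_or_create(
    "widget_5",
    load_widget,
    creator_args=((5,), {"include_stale": True}),
)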

See also

CacheRegion.get()

CacheRegion.cache_on_arguments() - applies get_or_create() to any function using a decorator.

CacheRegion.get_or_create_multi() - multiple key/value version

CacheRegion.set(key, value)

Place a new value in the cache under the given key.

CacheRegion.delete(key)

Remove a value from the cache.

This operation is idempotent (can be called multiple times, or on a non-existent key, safely).
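
A compact round trip through the three basic methods above; the key and value here are arbitrary:

region.set("user:5", {"name": "ed"})

value = region.get("user:5")  # returns the dict placed above

region.delete("user:5")
region.delete("user:5")  # idempotent; deleting a missing key is safe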

CacheRegion.cache_on_arguments(namespace=None, expiration_time=None, should_cache_fn=None, to_str=<class 'str'>, function_key_generator=None)

A function decorator that will cache the return value of the function using a key derived from the function itself and its arguments.

The decorator internally makes use of the CacheRegion.get_or_create() method to access the cache and conditionally call the function. See that method for additional behavioral details.

E.g.:

@someregion.cache_on_arguments()
def generate_something(x, y):
    return somedatabase.query(x, y)

The decorated function can then be called normally, where data will be pulled from the cache region unless a new value is needed:

result = generate_something(5, 6)

The function is also given an attribute invalidate(), which provides for invalidation of the value. Pass to invalidate() the same arguments you’d pass to the function itself to represent a particular value:

generate_something.invalidate(5, 6)

Another attribute set() is added to provide extra caching possibilities relative to the function. This is a convenience method for CacheRegion.set() which will store a given value directly without calling the decorated function. The value to be cached is passed as the first argument, and the arguments which would normally be passed to the function should follow:

generate_something.set(3, 5, 6)

The above example is equivalent to calling generate_something(5, 6), if the function were to produce the value 3 as the value to be cached.

New in version 0.4.1: Added set() method to decorated function.

Similar to set() is refresh(). This attribute will invoke the decorated function and populate a new value into the cache with the new value, as well as returning that value:

newvalue = generate_something.refresh(5, 6)

New in version 0.5.0: Added refresh() method to decorated function.

original(), on the other hand, will invoke the decorated function without any caching:

newvalue = generate_something.original(5, 6)

New in version 0.6.0: Added original() method to decorated function.

Lastly, the get() method returns either the value cached for the given key, or the token NO_VALUE if no such key exists:

value = generate_something.get(5, 6)

New in version 0.5.3: Added get() method to decorated function.

The default key generation will use the name of the function, the module name for the function, the arguments passed, as well as an optional “namespace” parameter in order to generate a cache key.

Given a function one inside the module myapp.tools:

@region.cache_on_arguments(namespace="foo")
def one(a, b):
    return a + b

Above, calling one(3, 4) will produce a cache key as follows:

myapp.tools:one|foo|3 4

The key generator will ignore an initial argument of self or cls, making the decorator suitable (with caveats) for use with instance or class methods. Given the example:

class MyClass(object):
    @region.cache_on_arguments(namespace="foo")
    def one(self, a, b):
        return a + b

The cache key above for MyClass().one(3, 4) will again produce the same cache key of myapp.tools:one|foo|3 4 - the name self is skipped.

The namespace parameter is optional, and is used normally to disambiguate two functions of the same name within the same module, as can occur when decorating instance or class methods as below:

class MyClass(object):
    @region.cache_on_arguments(namespace='MC')
    def somemethod(self, x, y):
        ""

class MyOtherClass(object):
    @region.cache_on_arguments(namespace='MOC')
    def somemethod(self, x, y):
        ""

Above, the namespace parameter disambiguates between somemethod on MyClass and MyOtherClass. Python class declaration mechanics otherwise prevent the decorator from having awareness of the MyClass and MyOtherClass names, as the function is received by the decorator before it becomes an instance method.

The function key generation can be entirely replaced on a per-region basis using the function_key_generator argument present on make_region() and CacheRegion. It defaults to function_key_generator().

Parameters
  • namespace – optional string argument which will be established as part of the cache key. This may be needed to disambiguate functions of the same name within the same source file, such as those associated with classes - note that the decorator itself can’t see the parent class on a function as the class is being declared.

  • expiration_time

    if not None, will override the normal expiration time.

    May be specified as a callable, taking no arguments, that returns a value to be used as the expiration_time. This callable will be called whenever the decorated function itself is called, in caching or retrieving. Thus, this can be used to determine a dynamic expiration time for the cached function result. Example use cases include “cache the result until the end of the day, week or time period” and “cache until a certain date or time passes”. A sketch of such a callable appears after this parameter list.

    Changed in version 0.5.0: expiration_time may be passed as a callable to CacheRegion.cache_on_arguments().

  • should_cache_fn

    passed to CacheRegion.get_or_create().

    New in version 0.4.3.

  • to_str

    callable, will be called on each function argument in order to convert to a string. Defaults to str(). If the function accepts non-ascii unicode arguments on Python 2.x, the unicode() builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache.

    New in version 0.5.0.

  • function_key_generator

    a function that will produce a “cache key”. This function will supersede the one configured on the CacheRegion itself.

    New in version 0.5.5.
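
As a sketch of the callable form of expiration_time described above, assuming “expire at the next midnight” semantics; build_report() is a hypothetical expensive computation:

import datetime

def seconds_until_midnight():
    # recomputed on every call to the decorated function
    now = datetime.datetime.now()
    tomorrow = (now + datetime.timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return int((tomorrow - now).total_seconds())

@region.cache_on_arguments(expiration_time=seconds_until_midnight)
def daily_report(day):
    return build_report(day)  # hypothetical expensive computation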

Creating Backends

Backends are located using the setuptools entrypoint system. To make life easier for writers of ad-hoc backends, a helper function is included which registers any backend in the same way as if it were part of the existing sys.path.

For example, to create a backend called DictionaryBackend, we subclass CacheBackend:

from dogpile.cache.api import CacheBackend, NO_VALUE

class DictionaryBackend(CacheBackend):
    def __init__(self, arguments):
        self.cache = {}

    def get(self, key):
        return self.cache.get(key, NO_VALUE)

    def set(self, key, value):
        self.cache[key] = value

    def delete(self, key):
        self.cache.pop(key)

Then make sure the class is available underneath the entrypoint dogpile.cache. If we did this in a setup.py file, it would be in setup() as:

entry_points="""
  [dogpile.cache]
  dictionary = mypackage.mybackend:DictionaryBackend
  """

Alternatively, if we want to register the plugin in the same process space without bothering to install anything, we can use register_backend:

from dogpile.cache import register_backend

register_backend("dictionary", "mypackage.mybackend", "DictionaryBackend")

Our new backend would be usable in a region like this:

from dogpile.cache import make_region

region = make_region("myregion")

region.configure("dictionary")

region.set("somekey", "somevalue")

The values the backend receives are instances of CachedValue. This is a tuple subclass of length two, of the form:

(payload, metadata)

Where “payload” is the thing being cached, and “metadata” is information we store in the cache - a dictionary which currently has just the “creation time” and a “version identifier” as key/values. If the cache backend requires serialization, pickle or similar can be used on the tuple - the “metadata” portion will always be a small and easily serializable Python structure.
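
To illustrate, a CachedValue survives a pickle round trip intact. The metadata keys shown here (“ct” for creation time, “v” for a version identifier) reflect the current internal format and are an assumption that backends shouldn’t depend upon:

import pickle
import time

from dogpile.cache.api import CachedValue

cv = CachedValue("some payload", {"ct": time.time(), "v": 1})
restored = pickle.loads(pickle.dumps(cv))

assert restored.payload == "some payload"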

Changing Backend Behavior

The ProxyBackend is a decorator class provided to easily augment existing backend behavior without having to extend the original class. Using a decorator class is also advantageous as it allows us to share the altered behavior between different backends.

Proxies are added to the CacheRegion object using the CacheRegion.configure() method. Only the overridden methods need to be specified and the real backend can be accessed with the self.proxied object from inside the ProxyBackend.

For example, a simple class to log all calls to .set() would look like this:

from dogpile.cache.proxy import ProxyBackend

import logging
log = logging.getLogger(__name__)

class LoggingProxy(ProxyBackend):
    def set(self, key, value):
        log.debug('Setting Cache Key: %s', key)
        self.proxied.set(key, value)

ProxyBackend can be configured to optionally take arguments (as long as the ProxyBackend.__init__() method is called properly, either directly or via super()). In the example below, the RetryDeleteProxy class accepts a retry_count parameter on initialization. In the event of an exception on delete(), it will retry this many times before returning:

from dogpile.cache.proxy import ProxyBackend

class RetryDeleteProxy(ProxyBackend):
    def __init__(self, retry_count=5):
        super(RetryDeleteProxy, self).__init__()
        self.retry_count = retry_count

    def delete(self, key):
        retries = self.retry_count
        while retries > 0:
            retries -= 1
            try:
                self.proxied.delete(key)
                return

            except Exception:
                # swallow the failure and retry; after retry_count
                # attempts the method gives up silently
                pass

The wrap parameter of the CacheRegion.configure() accepts a list which can contain any combination of instantiated proxy objects as well as uninstantiated proxy classes. Putting the two examples above together would look like this:

from dogpile.cache import make_region

retry_proxy = RetryDeleteProxy(5)

region = make_region().configure(
    'dogpile.cache.pylibmc',
    expiration_time = 3600,
    arguments = {
        'url':["127.0.0.1"],
    },
    wrap = [ LoggingProxy, retry_proxy ]
)

In the above example, the LoggingProxy object would be instantiated by the CacheRegion and applied to wrap requests on behalf of the retry_proxy instance; that proxy in turn wraps requests on behalf of the original dogpile.cache.pylibmc backend.

New in version 0.4.4: Added support for the ProxyBackend class.

Configuring Logging

New in version 0.9.0.

CacheRegion includes logging facilities that will emit debug log messages when key cache events occur, including when keys are regenerated as well as when hard invalidations occur. Using the Python logging module, set the log level of the dogpile.cache logger to logging.DEBUG:

import logging

logging.basicConfig()
logging.getLogger("dogpile.cache").setLevel(logging.DEBUG)

Debug logging will indicate time spent regenerating keys as well as when keys are missing:

DEBUG:dogpile.cache.region:No value present for key: '__main__:load_user_info|2'
DEBUG:dogpile.cache.region:No value present for key: '__main__:load_user_info|1'
DEBUG:dogpile.cache.region:Cache value generated in 0.501 seconds for keys: ['__main__:load_user_info|2', '__main__:load_user_info|3', '__main__:load_user_info|4', '__main__:load_user_info|5']
DEBUG:dogpile.cache.region:Hard invalidation detected for key: '__main__:load_user_info|3'
DEBUG:dogpile.cache.region:Hard invalidation detected for key: '__main__:load_user_info|2'
dogpile.cache-0.9.0/dogpile/0000775000175000017500000000000013555610710016721 5ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile/__init__.py0000664000175000017500000000015213555610667021043 0ustar classicclassic00000000000000__version__ = "0.9.0" from .lock import Lock # noqa from .lock import NeedRegenerationException # noqa dogpile.cache-0.9.0/dogpile/cache/0000775000175000017500000000000013555610710017764 5ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile/cache/__init__.py0000664000175000017500000000026413555610667022112 0ustar classicclassic00000000000000from .region import CacheRegion # noqa from .region import make_region # noqa from .region import register_backend # noqa from .. import __version__ # noqa # backwards compat dogpile.cache-0.9.0/dogpile/cache/api.py0000664000175000017500000001432113555610667021123 0ustar classicclassic00000000000000import operator from ..util.compat import py3k class NoValue(object): """Describe a missing cache value. The :attr:`.NO_VALUE` module global should be used. """ @property def payload(self): return self def __repr__(self): """Ensure __repr__ is a consistent value in case NoValue is used to fill another cache key. """ return "" if py3k: def __bool__(self): # pragma NO COVERAGE return False else: def __nonzero__(self): # pragma NO COVERAGE return False NO_VALUE = NoValue() """Value returned from ``get()`` that describes a key not present.""" class CachedValue(tuple): """Represent a value stored in the cache. :class:`.CachedValue` is a two-tuple of ``(payload, metadata)``, where ``metadata`` is dogpile.cache's tracking information ( currently the creation time). The metadata and tuple structure is pickleable, if the backend requires serialization. """ payload = property(operator.itemgetter(0)) """Named accessor for the payload.""" metadata = property(operator.itemgetter(1)) """Named accessor for the dogpile.cache metadata dictionary.""" def __new__(cls, payload, metadata): return tuple.__new__(cls, (payload, metadata)) def __reduce__(self): return CachedValue, (self.payload, self.metadata) class CacheBackend(object): """Base class for backend implementations.""" key_mangler = None """Key mangling function. May be None, or otherwise declared as an ordinary instance method. """ def __init__(self, arguments): # pragma NO COVERAGE """Construct a new :class:`.CacheBackend`. Subclasses should override this to handle the given arguments. :param arguments: The ``arguments`` parameter passed to :func:`.make_registry`. """ raise NotImplementedError() @classmethod def from_config_dict(cls, config_dict, prefix): prefix_len = len(prefix) return cls( dict( (key[prefix_len:], config_dict[key]) for key in config_dict if key.startswith(prefix) ) ) def has_lock_timeout(self): return False def get_mutex(self, key): """Return an optional mutexing object for the given key. This object need only provide an ``acquire()`` and ``release()`` method. May return ``None``, in which case the dogpile lock will use a regular ``threading.Lock`` object to mutex concurrent threads for value creation. The default implementation returns ``None``. Different backends may want to provide various kinds of "mutex" objects, such as those which link to lock files, distributed mutexes, memcached semaphores, etc. Whatever kind of system is best suited for the scope and behavior of the caching backend. 
A mutex that takes the key into account will allow multiple regenerate operations across keys to proceed simultaneously, while a mutex that does not will serialize regenerate operations to just one at a time across all keys in the region. The latter approach, or a variant that involves a modulus of the given key's hash value, can be used as a means of throttling the total number of value recreation operations that may proceed at one time. """ return None def get(self, key): # pragma NO COVERAGE """Retrieve a value from the cache. The returned value should be an instance of :class:`.CachedValue`, or ``NO_VALUE`` if not present. """ raise NotImplementedError() def get_multi(self, keys): # pragma NO COVERAGE """Retrieve multiple values from the cache. The returned value should be a list, corresponding to the list of keys given. .. versionadded:: 0.5.0 """ raise NotImplementedError() def set(self, key, value): # pragma NO COVERAGE """Set a value in the cache. The key will be whatever was passed to the registry, processed by the "key mangling" function, if any. The value will always be an instance of :class:`.CachedValue`. """ raise NotImplementedError() def set_multi(self, mapping): # pragma NO COVERAGE """Set multiple values in the cache. ``mapping`` is a dict in which the key will be whatever was passed to the registry, processed by the "key mangling" function, if any. The value will always be an instance of :class:`.CachedValue`. When implementing a new :class:`.CacheBackend` or cutomizing via :class:`.ProxyBackend`, be aware that when this method is invoked by :meth:`.Region.get_or_create_multi`, the ``mapping`` values are the same ones returned to the upstream caller. If the subclass alters the values in any way, it must not do so 'in-place' on the ``mapping`` dict -- that will have the undesirable effect of modifying the returned values as well. .. versionadded:: 0.5.0 """ raise NotImplementedError() def delete(self, key): # pragma NO COVERAGE """Delete a value from the cache. The key will be whatever was passed to the registry, processed by the "key mangling" function, if any. The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists. """ raise NotImplementedError() def delete_multi(self, keys): # pragma NO COVERAGE """Delete multiple values from the cache. The key will be whatever was passed to the registry, processed by the "key mangling" function, if any. The behavior here should be idempotent, that is, can be called any number of times regardless of whether or not the key exists. .. 
versionadded:: 0.5.0 """ raise NotImplementedError() dogpile.cache-0.9.0/dogpile/cache/backends/0000775000175000017500000000000013555610710021536 5ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile/cache/backends/__init__.py0000664000175000017500000000170013555610667023660 0ustar classicclassic00000000000000from ...util import PluginLoader _backend_loader = PluginLoader("dogpile.cache") register_backend = _backend_loader.register register_backend( "dogpile.cache.null", "dogpile.cache.backends.null", "NullBackend" ) register_backend( "dogpile.cache.dbm", "dogpile.cache.backends.file", "DBMBackend" ) register_backend( "dogpile.cache.pylibmc", "dogpile.cache.backends.memcached", "PylibmcBackend", ) register_backend( "dogpile.cache.bmemcached", "dogpile.cache.backends.memcached", "BMemcachedBackend", ) register_backend( "dogpile.cache.memcached", "dogpile.cache.backends.memcached", "MemcachedBackend", ) register_backend( "dogpile.cache.memory", "dogpile.cache.backends.memory", "MemoryBackend" ) register_backend( "dogpile.cache.memory_pickle", "dogpile.cache.backends.memory", "MemoryPickleBackend", ) register_backend( "dogpile.cache.redis", "dogpile.cache.backends.redis", "RedisBackend" ) dogpile.cache-0.9.0/dogpile/cache/backends/file.py0000664000175000017500000003344013555610667023046 0ustar classicclassic00000000000000""" File Backends ------------------ Provides backends that deal with local filesystem access. """ from __future__ import with_statement from contextlib import contextmanager import os from ..api import CacheBackend from ..api import NO_VALUE from ... import util from ...util import compat __all__ = ["DBMBackend", "FileLock", "AbstractFileLock"] class DBMBackend(CacheBackend): """A file-backend using a dbm file to store keys. Basic usage:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.dbm', expiration_time = 3600, arguments = { "filename":"/path/to/cachefile.dbm" } ) DBM access is provided using the Python ``anydbm`` module, which selects a platform-specific dbm module to use. This may be made to be more configurable in a future release. Note that different dbm modules have different behaviors. Some dbm implementations handle their own locking, while others don't. The :class:`.DBMBackend` uses a read/write lockfile by default, which is compatible even with those DBM implementations for which this is unnecessary, though the behavior can be disabled. The DBM backend by default makes use of two lockfiles. One is in order to protect the DBM file itself from concurrent writes, the other is to coordinate value creation (i.e. the dogpile lock). By default, these lockfiles use the ``flock()`` system call for locking; this is **only available on Unix platforms**. An alternative lock implementation, such as one which is based on threads or uses a third-party system such as `portalocker `_, can be dropped in using the ``lock_factory`` argument in conjunction with the :class:`.AbstractFileLock` base class. Currently, the dogpile lock is against the entire DBM file, not per key. This means there can only be one "creator" job running at a time per dbm file. A future improvement might be to have the dogpile lock using a filename that's based on a modulus of the key. Locking on a filename that uniquely corresponds to the key is problematic, since it's not generally safe to delete lockfiles as the application runs, implying an unlimited number of key-based files would need to be created and never deleted. 
Parameters to the ``arguments`` dictionary are below. :param filename: path of the filename in which to create the DBM file. Note that some dbm backends will change this name to have additional suffixes. :param rw_lockfile: the name of the file to use for read/write locking. If omitted, a default name is used by appending the suffix ".rw.lock" to the DBM filename. If False, then no lock is used. :param dogpile_lockfile: the name of the file to use for value creation, i.e. the dogpile lock. If omitted, a default name is used by appending the suffix ".dogpile.lock" to the DBM filename. If False, then dogpile.cache uses the default dogpile lock, a plain thread-based mutex. :param lock_factory: a function or class which provides for a read/write lock. Defaults to :class:`.FileLock`. Custom implementations need to implement context-manager based ``read()`` and ``write()`` functions - the :class:`.AbstractFileLock` class is provided as a base class which provides these methods based on individual read/write lock functions. E.g. to replace the lock with the dogpile.core :class:`.ReadWriteMutex`:: from dogpile.core.readwrite_lock import ReadWriteMutex from dogpile.cache.backends.file import AbstractFileLock class MutexLock(AbstractFileLock): def __init__(self, filename): self.mutex = ReadWriteMutex() def acquire_read_lock(self, wait): ret = self.mutex.acquire_read_lock(wait) return wait or ret def acquire_write_lock(self, wait): ret = self.mutex.acquire_write_lock(wait) return wait or ret def release_read_lock(self): return self.mutex.release_read_lock() def release_write_lock(self): return self.mutex.release_write_lock() from dogpile.cache import make_region region = make_region().configure( "dogpile.cache.dbm", expiration_time=300, arguments={ "filename": "file.dbm", "lock_factory": MutexLock } ) While the included :class:`.FileLock` uses ``os.flock()``, a windows-compatible implementation can be built using a library such as `portalocker `_. .. versionadded:: 0.5.2 """ def __init__(self, arguments): self.filename = os.path.abspath( os.path.normpath(arguments["filename"]) ) dir_, filename = os.path.split(self.filename) self.lock_factory = arguments.get("lock_factory", FileLock) self._rw_lock = self._init_lock( arguments.get("rw_lockfile"), ".rw.lock", dir_, filename ) self._dogpile_lock = self._init_lock( arguments.get("dogpile_lockfile"), ".dogpile.lock", dir_, filename, util.KeyReentrantMutex.factory, ) # TODO: make this configurable if compat.py3k: import dbm else: import anydbm as dbm self.dbmmodule = dbm self._init_dbm_file() def _init_lock(self, argument, suffix, basedir, basefile, wrapper=None): if argument is None: lock = self.lock_factory(os.path.join(basedir, basefile + suffix)) elif argument is not False: lock = self.lock_factory( os.path.abspath(os.path.normpath(argument)) ) else: return None if wrapper: lock = wrapper(lock) return lock def _init_dbm_file(self): exists = os.access(self.filename, os.F_OK) if not exists: for ext in ("db", "dat", "pag", "dir"): if os.access(self.filename + os.extsep + ext, os.F_OK): exists = True break if not exists: fh = self.dbmmodule.open(self.filename, "c") fh.close() def get_mutex(self, key): # using one dogpile for the whole file. Other ways # to do this might be using a set of files keyed to a # hash/modulus of the key. 
the issue is it's never # really safe to delete a lockfile as this can # break other processes trying to get at the file # at the same time - so handling unlimited keys # can't imply unlimited filenames if self._dogpile_lock: return self._dogpile_lock(key) else: return None @contextmanager def _use_rw_lock(self, write): if self._rw_lock is None: yield elif write: with self._rw_lock.write(): yield else: with self._rw_lock.read(): yield @contextmanager def _dbm_file(self, write): with self._use_rw_lock(write): dbm = self.dbmmodule.open(self.filename, "w" if write else "r") yield dbm dbm.close() def get(self, key): with self._dbm_file(False) as dbm: if hasattr(dbm, "get"): value = dbm.get(key, NO_VALUE) else: # gdbm objects lack a .get method try: value = dbm[key] except KeyError: value = NO_VALUE if value is not NO_VALUE: value = compat.pickle.loads(value) return value def get_multi(self, keys): return [self.get(key) for key in keys] def set(self, key, value): with self._dbm_file(True) as dbm: dbm[key] = compat.pickle.dumps( value, compat.pickle.HIGHEST_PROTOCOL ) def set_multi(self, mapping): with self._dbm_file(True) as dbm: for key, value in mapping.items(): dbm[key] = compat.pickle.dumps( value, compat.pickle.HIGHEST_PROTOCOL ) def delete(self, key): with self._dbm_file(True) as dbm: try: del dbm[key] except KeyError: pass def delete_multi(self, keys): with self._dbm_file(True) as dbm: for key in keys: try: del dbm[key] except KeyError: pass class AbstractFileLock(object): """Coordinate read/write access to a file. typically is a file-based lock but doesn't necessarily have to be. The default implementation here is :class:`.FileLock`. Implementations should provide the following methods:: * __init__() * acquire_read_lock() * acquire_write_lock() * release_read_lock() * release_write_lock() The ``__init__()`` method accepts a single argument "filename", which may be used as the "lock file", for those implementations that use a lock file. Note that multithreaded environments must provide a thread-safe version of this lock. The recommended approach for file- descriptor-based locks is to use a Python ``threading.local()`` so that a unique file descriptor is held per thread. See the source code of :class:`.FileLock` for an implementation example. """ def __init__(self, filename): """Constructor, is given the filename of a potential lockfile. The usage of this filename is optional and no file is created by default. Raises ``NotImplementedError`` by default, must be implemented by subclasses. """ raise NotImplementedError() def acquire(self, wait=True): """Acquire the "write" lock. This is a direct call to :meth:`.AbstractFileLock.acquire_write_lock`. """ return self.acquire_write_lock(wait) def release(self): """Release the "write" lock. This is a direct call to :meth:`.AbstractFileLock.release_write_lock`. """ self.release_write_lock() @contextmanager def read(self): """Provide a context manager for the "read" lock. This method makes use of :meth:`.AbstractFileLock.acquire_read_lock` and :meth:`.AbstractFileLock.release_read_lock` """ self.acquire_read_lock(True) try: yield finally: self.release_read_lock() @contextmanager def write(self): """Provide a context manager for the "write" lock. 
This method makes use of :meth:`.AbstractFileLock.acquire_write_lock` and :meth:`.AbstractFileLock.release_write_lock` """ self.acquire_write_lock(True) try: yield finally: self.release_write_lock() @property def is_open(self): """optional method.""" raise NotImplementedError() def acquire_read_lock(self, wait): """Acquire a 'reader' lock. Raises ``NotImplementedError`` by default, must be implemented by subclasses. """ raise NotImplementedError() def acquire_write_lock(self, wait): """Acquire a 'write' lock. Raises ``NotImplementedError`` by default, must be implemented by subclasses. """ raise NotImplementedError() def release_read_lock(self): """Release a 'reader' lock. Raises ``NotImplementedError`` by default, must be implemented by subclasses. """ raise NotImplementedError() def release_write_lock(self): """Release a 'writer' lock. Raises ``NotImplementedError`` by default, must be implemented by subclasses. """ raise NotImplementedError() class FileLock(AbstractFileLock): """Use lockfiles to coordinate read/write access to a file. Only works on Unix systems, using `fcntl.flock() `_. """ def __init__(self, filename): self._filedescriptor = compat.threading.local() self.filename = filename @util.memoized_property def _module(self): import fcntl return fcntl @property def is_open(self): return hasattr(self._filedescriptor, "fileno") def acquire_read_lock(self, wait): return self._acquire(wait, os.O_RDONLY, self._module.LOCK_SH) def acquire_write_lock(self, wait): return self._acquire(wait, os.O_WRONLY, self._module.LOCK_EX) def release_read_lock(self): self._release() def release_write_lock(self): self._release() def _acquire(self, wait, wrflag, lockflag): wrflag |= os.O_CREAT fileno = os.open(self.filename, wrflag) try: if not wait: lockflag |= self._module.LOCK_NB self._module.flock(fileno, lockflag) except IOError: os.close(fileno) if not wait: # this is typically # "[Errno 35] Resource temporarily unavailable", # because of LOCK_NB return False else: raise else: self._filedescriptor.fileno = fileno return True def _release(self): try: fileno = self._filedescriptor.fileno except AttributeError: return else: self._module.flock(fileno, self._module.LOCK_UN) os.close(fileno) del self._filedescriptor.fileno dogpile.cache-0.9.0/dogpile/cache/backends/memcached.py0000664000175000017500000002475613555610667024047 0ustar classicclassic00000000000000""" Memcached Backends ------------------ Provides backends for talking to `memcached `_. """ import random import time from ..api import CacheBackend from ..api import NO_VALUE from ... import util from ...util import compat __all__ = ( "GenericMemcachedBackend", "MemcachedBackend", "PylibmcBackend", "BMemcachedBackend", "MemcachedLock", ) class MemcachedLock(object): """Simple distributed lock using memcached. This is an adaptation of the lock featured at http://amix.dk/blog/post/19386 """ def __init__(self, client_fn, key, timeout=0): self.client_fn = client_fn self.key = "_lock" + key self.timeout = timeout def acquire(self, wait=True): client = self.client_fn() i = 0 while True: if client.add(self.key, 1, self.timeout): return True elif not wait: return False else: sleep_time = (((i + 1) * random.random()) + 2 ** i) / 2.5 time.sleep(sleep_time) if i < 15: i += 1 def release(self): client = self.client_fn() client.delete(self.key) class GenericMemcachedBackend(CacheBackend): """Base class for memcached backends. This base class accepts a number of paramters common to all backends. :param url: the string URL to connect to. 
Can be a single string or a list of strings. This is the only argument that's required. :param distributed_lock: boolean, when True, will use a memcached-lock as the dogpile lock (see :class:`.MemcachedLock`). Use this when multiple processes will be talking to the same memcached instance. When left at False, dogpile will coordinate on a regular threading mutex. :param lock_timeout: integer, number of seconds after acquiring a lock that memcached should expire it. This argument is only valid when ``distributed_lock`` is ``True``. .. versionadded:: 0.5.7 :param memcached_expire_time: integer, when present will be passed as the ``time`` parameter to ``pylibmc.Client.set``. This is used to set the memcached expiry time for a value. .. note:: This parameter is **different** from Dogpile's own ``expiration_time``, which is the number of seconds after which Dogpile will consider the value to be expired. When Dogpile considers a value to be expired, it **continues to use the value** until generation of a new value is complete, when using :meth:`.CacheRegion.get_or_create`. Therefore, if you are setting ``memcached_expire_time``, you'll want to make sure it is greater than ``expiration_time`` by at least enough seconds for new values to be generated, else the value won't be available during a regeneration, forcing all threads to wait for a regeneration each time a value expires. The :class:`.GenericMemachedBackend` uses a ``threading.local()`` object to store individual client objects per thread, as most modern memcached clients do not appear to be inherently threadsafe. In particular, ``threading.local()`` has the advantage over pylibmc's built-in thread pool in that it automatically discards objects associated with a particular thread when that thread ends. """ set_arguments = {} """Additional arguments which will be passed to the :meth:`set` method.""" def __init__(self, arguments): self._imports() # using a plain threading.local here. threading.local # automatically deletes the __dict__ when a thread ends, # so the idea is that this is superior to pylibmc's # own ThreadMappedPool which doesn't handle this # automatically. self.url = util.to_list(arguments["url"]) self.distributed_lock = arguments.get("distributed_lock", False) self.lock_timeout = arguments.get("lock_timeout", 0) self.memcached_expire_time = arguments.get("memcached_expire_time", 0) def has_lock_timeout(self): return self.lock_timeout != 0 def _imports(self): """client library imports go here.""" raise NotImplementedError() def _create_client(self): """Creation of a Client instance goes here.""" raise NotImplementedError() @util.memoized_property def _clients(self): backend = self class ClientPool(compat.threading.local): def __init__(self): self.memcached = backend._create_client() return ClientPool() @property def client(self): """Return the memcached client. This uses a threading.local by default as it appears most modern memcached libs aren't inherently threadsafe. 
""" return self._clients.memcached def get_mutex(self, key): if self.distributed_lock: return MemcachedLock( lambda: self.client, key, timeout=self.lock_timeout ) else: return None def get(self, key): value = self.client.get(key) if value is None: return NO_VALUE else: return value def get_multi(self, keys): values = self.client.get_multi(keys) return [NO_VALUE if key not in values else values[key] for key in keys] def set(self, key, value): self.client.set(key, value, **self.set_arguments) def set_multi(self, mapping): self.client.set_multi(mapping, **self.set_arguments) def delete(self, key): self.client.delete(key) def delete_multi(self, keys): self.client.delete_multi(keys) class MemcacheArgs(object): """Mixin which provides support for the 'time' argument to set(), 'min_compress_len' to other methods. """ def __init__(self, arguments): self.min_compress_len = arguments.get("min_compress_len", 0) self.set_arguments = {} if "memcached_expire_time" in arguments: self.set_arguments["time"] = arguments["memcached_expire_time"] if "min_compress_len" in arguments: self.set_arguments["min_compress_len"] = arguments[ "min_compress_len" ] super(MemcacheArgs, self).__init__(arguments) pylibmc = None class PylibmcBackend(MemcacheArgs, GenericMemcachedBackend): """A backend for the `pylibmc `_ memcached client. A configuration illustrating several of the optional arguments described in the pylibmc documentation:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.pylibmc', expiration_time = 3600, arguments = { 'url':["127.0.0.1"], 'binary':True, 'behaviors':{"tcp_nodelay": True,"ketama":True} } ) Arguments accepted here include those of :class:`.GenericMemcachedBackend`, as well as those below. :param binary: sets the ``binary`` flag understood by ``pylibmc.Client``. :param behaviors: a dictionary which will be passed to ``pylibmc.Client`` as the ``behaviors`` parameter. :param min_compress_len: Integer, will be passed as the ``min_compress_len`` parameter to the ``pylibmc.Client.set`` method. """ def __init__(self, arguments): self.binary = arguments.get("binary", False) self.behaviors = arguments.get("behaviors", {}) super(PylibmcBackend, self).__init__(arguments) def _imports(self): global pylibmc import pylibmc # noqa def _create_client(self): return pylibmc.Client( self.url, binary=self.binary, behaviors=self.behaviors ) memcache = None class MemcachedBackend(MemcacheArgs, GenericMemcachedBackend): """A backend using the standard `Python-memcached `_ library. Example:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.memcached', expiration_time = 3600, arguments = { 'url':"127.0.0.1:11211" } ) """ def _imports(self): global memcache import memcache # noqa def _create_client(self): return memcache.Client(self.url) bmemcached = None class BMemcachedBackend(GenericMemcachedBackend): """A backend for the `python-binary-memcached `_ memcached client. This is a pure Python memcached client which includes the ability to authenticate with a memcached server using SASL. A typical configuration using username/password:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.bmemcached', expiration_time = 3600, arguments = { 'url':["127.0.0.1"], 'username':'scott', 'password':'tiger' } ) Arguments which can be passed to the ``arguments`` dictionary include: :param username: optional username, will be used for SASL authentication. 
:param password: optional password, will be used for SASL authentication. """ def __init__(self, arguments): self.username = arguments.get("username", None) self.password = arguments.get("password", None) super(BMemcachedBackend, self).__init__(arguments) def _imports(self): global bmemcached import bmemcached class RepairBMemcachedAPI(bmemcached.Client): """Repairs BMemcached's non-standard method signatures, which were fixed in BMemcached ef206ed4473fec3b639e. """ def add(self, key, value, timeout=0): try: return super(RepairBMemcachedAPI, self).add( key, value, timeout ) except ValueError: return False self.Client = RepairBMemcachedAPI def _create_client(self): return self.Client( self.url, username=self.username, password=self.password ) def delete_multi(self, keys): """python-binary-memcached api does not implement delete_multi""" for key in keys: self.delete(key) dogpile.cache-0.9.0/dogpile/cache/backends/memory.py0000664000175000017500000000702013555610667023432 0ustar classicclassic00000000000000""" Memory Backends --------------- Provides simple dictionary-based backends. The two backends are :class:`.MemoryBackend` and :class:`.MemoryPickleBackend`; the latter applies a serialization step to cached values while the former places the value as given into the dictionary. """ from ..api import CacheBackend from ..api import NO_VALUE from ...util.compat import pickle class MemoryBackend(CacheBackend): """A backend that uses a plain dictionary. There is no size management, and values which are placed into the dictionary will remain until explicitly removed. Note that Dogpile's expiration of items is based on timestamps and does not remove them from the cache. E.g.:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.memory' ) To use a Python dictionary of your choosing, it can be passed in with the ``cache_dict`` argument:: my_dictionary = {} region = make_region().configure( 'dogpile.cache.memory', arguments={ "cache_dict":my_dictionary } ) """ pickle_values = False def __init__(self, arguments): self._cache = arguments.pop("cache_dict", {}) def get(self, key): value = self._cache.get(key, NO_VALUE) if value is not NO_VALUE and self.pickle_values: value = pickle.loads(value) return value def get_multi(self, keys): ret = [self._cache.get(key, NO_VALUE) for key in keys] if self.pickle_values: ret = [ pickle.loads(value) if value is not NO_VALUE else value for value in ret ] return ret def set(self, key, value): if self.pickle_values: value = pickle.dumps(value, pickle.HIGHEST_PROTOCOL) self._cache[key] = value def set_multi(self, mapping): pickle_values = self.pickle_values for key, value in mapping.items(): if pickle_values: value = pickle.dumps(value, pickle.HIGHEST_PROTOCOL) self._cache[key] = value def delete(self, key): self._cache.pop(key, None) def delete_multi(self, keys): for key in keys: self._cache.pop(key, None) class MemoryPickleBackend(MemoryBackend): """A backend that uses a plain dictionary, but serializes objects on :meth:`.MemoryBackend.set` and deserializes on :meth:`.MemoryBackend.get`. E.g.:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.memory_pickle' ) The usage of pickle to serialize cached values allows an object as placed in the cache to be a copy of the original given object, so that any subsequent changes to the given object aren't reflected in the cached value, thus making the backend behave the same way as other backends which make use of serialization.
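For example, a minimal sketch of this copy-on-set behavior (the key and value are illustrative only)::

    from dogpile.cache import make_region

    region = make_region().configure('dogpile.cache.memory_pickle')

    data = {"a": 1}
    region.set("mykey", data)
    data["a"] = 2           # mutate the original object

    region.get("mykey")     # still returns {"a": 1}; the cached
                            # value is an independent pickled copy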
The serialization is performed via pickle, and incurs the same performance hit in doing so as that of other backends; in this way the :class:`.MemoryPickleBackend` performance is somewhere in between that of the pure :class:`.MemoryBackend` and the remote server oriented backends such as that of Memcached or Redis. Pickle behavior here is the same as that of the Redis backend, using either ``cPickle`` or ``pickle`` and specifying ``HIGHEST_PROTOCOL`` upon serialize. .. versionadded:: 0.5.3 """ pickle_values = True dogpile.cache-0.9.0/dogpile/cache/backends/null.py0000664000175000017500000000214313555610667023075 0ustar classicclassic00000000000000""" Null Backend ------------- The Null backend does not do any caching at all. It can be used to test behavior without caching, or as a means of disabling caching for a region that is otherwise used normally. .. versionadded:: 0.5.4 """ from ..api import CacheBackend from ..api import NO_VALUE __all__ = ["NullBackend"] class NullLock(object): def acquire(self, wait=True): return True def release(self): pass class NullBackend(CacheBackend): """A "null" backend that effectively disables all cache operations. Basic usage:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.null' ) """ def __init__(self, arguments): pass def get_mutex(self, key): return NullLock() def get(self, key): return NO_VALUE def get_multi(self, keys): return [NO_VALUE for k in keys] def set(self, key, value): pass def set_multi(self, mapping): pass def delete(self, key): pass def delete_multi(self, keys): pass dogpile.cache-0.9.0/dogpile/cache/backends/redis.py0000664000175000017500000001341413555610667023234 0ustar classicclassic00000000000000""" Redis Backends ------------------ Provides backends for talking to `Redis `_. """ from __future__ import absolute_import from ..api import CacheBackend from ..api import NO_VALUE from ...util.compat import pickle from ...util.compat import u redis = None __all__ = ("RedisBackend",) class RedisBackend(CacheBackend): """A `Redis `_ backend, using the `redis-py `_ backend. Example configuration:: from dogpile.cache import make_region region = make_region().configure( 'dogpile.cache.redis', arguments = { 'host': 'localhost', 'port': 6379, 'db': 0, 'redis_expiration_time': 60*60*2, # 2 hours 'distributed_lock': True } ) Arguments accepted in the arguments dictionary: :param url: string. If provided, will override separate host/port/db params. The format is that accepted by ``StrictRedis.from_url()``. .. versionadded:: 0.4.1 :param host: string, default is ``localhost``. :param password: string, default is no password. .. versionadded:: 0.4.1 :param port: integer, default is ``6379``. :param db: integer, default is ``0``. :param redis_expiration_time: integer, number of seconds after setting a value that Redis should expire it. This should be larger than dogpile's cache expiration. By default no expiration is set. :param distributed_lock: boolean, when True, will use a redis-lock as the dogpile lock. Use this when multiple processes will be talking to the same redis instance. When left at False, dogpile will coordinate on a regular threading mutex. :param lock_timeout: integer, number of seconds after acquiring a lock that Redis should expire it. This argument is only valid when ``distributed_lock`` is ``True``. .. versionadded:: 0.5.0 :param socket_timeout: float, seconds for socket timeout. Default is None (no timeout). .. 
versionadded:: 0.5.4 :param lock_sleep: integer, number of seconds to sleep when failed to acquire a lock. This argument is only valid when ``distributed_lock`` is ``True``. .. versionadded:: 0.5.0 :param connection_pool: ``redis.ConnectionPool`` object. If provided, this object supersedes other connection arguments passed to the ``redis.StrictRedis`` instance, including url and/or host as well as socket_timeout, and will be passed to ``redis.StrictRedis`` as the source of connectivity. .. versionadded:: 0.5.4 """ def __init__(self, arguments): arguments = arguments.copy() self._imports() self.url = arguments.pop("url", None) self.host = arguments.pop("host", "localhost") self.password = arguments.pop("password", None) self.port = arguments.pop("port", 6379) self.db = arguments.pop("db", 0) self.distributed_lock = arguments.get("distributed_lock", False) self.socket_timeout = arguments.pop("socket_timeout", None) self.lock_timeout = arguments.get("lock_timeout", None) self.lock_sleep = arguments.get("lock_sleep", 0.1) self.redis_expiration_time = arguments.pop("redis_expiration_time", 0) self.connection_pool = arguments.get("connection_pool", None) self.client = self._create_client() def _imports(self): # defer imports until backend is used global redis import redis # noqa def _create_client(self): if self.connection_pool is not None: # the connection pool already has all other connection # options present within, so here we disregard socket_timeout # and others. return redis.StrictRedis(connection_pool=self.connection_pool) args = {} if self.socket_timeout: args["socket_timeout"] = self.socket_timeout if self.url is not None: args.update(url=self.url) return redis.StrictRedis.from_url(**args) else: args.update( host=self.host, password=self.password, port=self.port, db=self.db, ) return redis.StrictRedis(**args) def get_mutex(self, key): if self.distributed_lock: return self.client.lock( u("_lock{0}").format(key), self.lock_timeout, self.lock_sleep ) else: return None def get(self, key): value = self.client.get(key) if value is None: return NO_VALUE return pickle.loads(value) def get_multi(self, keys): if not keys: return [] values = self.client.mget(keys) return [pickle.loads(v) if v is not None else NO_VALUE for v in values] def set(self, key, value): if self.redis_expiration_time: self.client.setex( key, self.redis_expiration_time, pickle.dumps(value, pickle.HIGHEST_PROTOCOL), ) else: self.client.set(key, pickle.dumps(value, pickle.HIGHEST_PROTOCOL)) def set_multi(self, mapping): mapping = dict( (k, pickle.dumps(v, pickle.HIGHEST_PROTOCOL)) for k, v in mapping.items() ) if not self.redis_expiration_time: self.client.mset(mapping) else: pipe = self.client.pipeline() for key, value in mapping.items(): pipe.setex(key, self.redis_expiration_time, value) pipe.execute() def delete(self, key): self.client.delete(key) def delete_multi(self, keys): self.client.delete(*keys) dogpile.cache-0.9.0/dogpile/cache/exception.py0000664000175000017500000000113113555610667022343 0ustar classicclassic00000000000000"""Exception classes for dogpile.cache.""" class DogpileCacheException(Exception): """Base Exception for dogpile.cache exceptions to inherit from.""" class RegionAlreadyConfigured(DogpileCacheException): """CacheRegion instance is already configured.""" class RegionNotConfigured(DogpileCacheException): """CacheRegion instance has not been configured.""" class ValidationError(DogpileCacheException): """Error validating a value or option.""" class PluginNotFound(DogpileCacheException): """The 
specified plugin could not be found. .. versionadded:: 0.6.4 """ dogpile.cache-0.9.0/dogpile/cache/plugins/0000775000175000017500000000000013555610710021445 5ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile/cache/plugins/__init__.py0000664000175000017500000000000013555610667023557 0ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile/cache/plugins/mako_cache.py0000664000175000017500000000562413555610667024113 0ustar classicclassic00000000000000""" Mako Integration ---------------- dogpile.cache includes a `Mako `_ plugin that replaces `Beaker `_ as the cache backend. Setup a Mako template lookup using the "dogpile.cache" cache implementation and a region dictionary:: from dogpile.cache import make_region from mako.lookup import TemplateLookup my_regions = { "local":make_region().configure( "dogpile.cache.dbm", expiration_time=360, arguments={"filename":"file.dbm"} ), "memcached":make_region().configure( "dogpile.cache.pylibmc", expiration_time=3600, arguments={"url":["127.0.0.1"]} ) } mako_lookup = TemplateLookup( directories=["/myapp/templates"], cache_impl="dogpile.cache", cache_args={ 'regions':my_regions } ) To use the above configuration in a template, use the ``cached=True`` argument on any Mako tag which accepts it, in conjunction with the name of the desired region as the ``cache_region`` argument:: <%def name="mysection()" cached="True" cache_region="memcached"> some content that's cached """ from mako.cache import CacheImpl class MakoPlugin(CacheImpl): """A Mako ``CacheImpl`` which talks to dogpile.cache.""" def __init__(self, cache): super(MakoPlugin, self).__init__(cache) try: self.regions = self.cache.template.cache_args["regions"] except KeyError: raise KeyError( "'cache_regions' argument is required on the " "Mako Lookup or Template object for usage " "with the dogpile.cache plugin." ) def _get_region(self, **kw): try: region = kw["region"] except KeyError: raise KeyError( "'cache_region' argument must be specified with 'cache=True'" "within templates for usage with the dogpile.cache plugin." ) try: return self.regions[region] except KeyError: raise KeyError("No such region '%s'" % region) def get_and_replace(self, key, creation_function, **kw): expiration_time = kw.pop("timeout", None) return self._get_region(**kw).get_or_create( key, creation_function, expiration_time=expiration_time ) def get_or_create(self, key, creation_function, **kw): return self.get_and_replace(key, creation_function, **kw) def put(self, key, value, **kw): self._get_region(**kw).put(key, value) def get(self, key, **kw): expiration_time = kw.pop("timeout", None) return self._get_region(**kw).get(key, expiration_time=expiration_time) def invalidate(self, key, **kw): self._get_region(**kw).delete(key) dogpile.cache-0.9.0/dogpile/cache/proxy.py0000664000175000017500000000505113555610667021533 0ustar classicclassic00000000000000""" Proxy Backends ------------------ Provides a utility and a decorator class that allow for modifying the behavior of different backends without altering the class itself or having to extend the base backend. .. versionadded:: 0.5.0 Added support for the :class:`.ProxyBackend` class. """ from .api import CacheBackend class ProxyBackend(CacheBackend): """A decorator class for altering the functionality of backends. Basic usage:: from dogpile.cache import make_region from dogpile.cache.proxy import ProxyBackend class MyFirstProxy(ProxyBackend): def get(self, key): # ... custom code goes here ... return self.proxied.get(key) def set(self, key, value): # ... 
custom code goes here ... self.proxied.set(key, value) class MySecondProxy(ProxyBackend): def get(self, key): # ... custom code goes here ... return self.proxied.get(key) region = make_region().configure( 'dogpile.cache.dbm', expiration_time = 3600, arguments = { "filename":"/path/to/cachefile.dbm" }, wrap = [ MyFirstProxy, MySecondProxy ] ) Classes that extend :class:`.ProxyBackend` can be stacked together. The ``.proxied`` property will always point to either the concrete backend instance or the next proxy in the chain that a method can be delegated towards. .. versionadded:: 0.5.0 """ def __init__(self, *args, **kwargs): self.proxied = None def wrap(self, backend): """ Take a backend as an argument and set up the ``self.proxied`` property. Return an object that can be used as a backend by a :class:`.CacheRegion` object. """ assert isinstance(backend, CacheBackend) or isinstance( backend, ProxyBackend ) self.proxied = backend return self # # Delegate any functions that are not already overridden to # the proxied backend # def get(self, key): return self.proxied.get(key) def set(self, key, value): self.proxied.set(key, value) def delete(self, key): self.proxied.delete(key) def get_multi(self, keys): return self.proxied.get_multi(keys) def set_multi(self, mapping): self.proxied.set_multi(mapping) def delete_multi(self, keys): self.proxied.delete_multi(keys) def get_mutex(self, key): return self.proxied.get_mutex(key) dogpile.cache-0.9.0/dogpile/cache/region.py0000664000175000017500000016156013555610667021635 0ustar classicclassic00000000000000from __future__ import with_statement import contextlib import datetime from functools import partial from functools import wraps import logging from numbers import Number import threading import time from decorator import decorate from . import exception from .api import CachedValue from .api import NO_VALUE from .backends import _backend_loader from .backends import register_backend # noqa from .proxy import ProxyBackend from .util import function_key_generator from .util import function_multi_key_generator from .util import repr_obj from .. import Lock from .. import NeedRegenerationException from ..util import coerce_string_conf from ..util import compat from ..util import memoized_property from ..util import NameRegistry from ..util import PluginLoader value_version = 1 """An integer placed in the :class:`.CachedValue` so that new versions of dogpile.cache can detect cached values from a previous, backwards-incompatible version. """ log = logging.getLogger(__name__) class RegionInvalidationStrategy(object): """Region invalidation strategy interface Implement this interface and pass an implementation instance to :meth:`.CacheRegion.configure` to override default region invalidation.
Example:: class CustomInvalidationStrategy(RegionInvalidationStrategy): def __init__(self): self._soft_invalidated = None self._hard_invalidated = None def invalidate(self, hard=None): if hard: self._soft_invalidated = None self._hard_invalidated = time.time() else: self._soft_invalidated = time.time() self._hard_invalidated = None def is_invalidated(self, timestamp): return ((self._soft_invalidated and timestamp < self._soft_invalidated) or (self._hard_invalidated and timestamp < self._hard_invalidated)) def was_hard_invalidated(self): return bool(self._hard_invalidated) def is_hard_invalidated(self, timestamp): return (self._hard_invalidated and timestamp < self._hard_invalidated) def was_soft_invalidated(self): return bool(self._soft_invalidated) def is_soft_invalidated(self, timestamp): return (self._soft_invalidated and timestamp < self._soft_invalidated) The custom implementation is injected into a :class:`.CacheRegion` at configure time using the :paramref:`.CacheRegion.configure.region_invalidator` parameter:: region = CacheRegion() region = region.configure(region_invalidator=CustomInvalidationStrategy()) # noqa Invalidation strategies that wish to have access to the :class:`.CacheRegion` itself should construct the invalidator given the region as an argument:: class MyInvalidator(RegionInvalidationStrategy): def __init__(self, region): self.region = region # ... # ... region = CacheRegion() region = region.configure(region_invalidator=MyInvalidator(region)) .. versionadded:: 0.6.2 .. seealso:: :paramref:`.CacheRegion.configure.region_invalidator` """ def invalidate(self, hard=True): """Region invalidation. :class:`.CacheRegion` propagated call. The default invalidation system works by setting a current timestamp (using ``time.time()``) to consider all older timestamps effectively invalidated. """ raise NotImplementedError() def is_hard_invalidated(self, timestamp): """Check timestamp to determine if it was hard invalidated. :return: Boolean. True if ``timestamp`` is older than the last region invalidation time and region is invalidated in hard mode. """ raise NotImplementedError() def is_soft_invalidated(self, timestamp): """Check timestamp to determine if it was soft invalidated. :return: Boolean. True if ``timestamp`` is older than the last region invalidation time and region is invalidated in soft mode. """ raise NotImplementedError() def is_invalidated(self, timestamp): """Check timestamp to determine if it was invalidated. :return: Boolean. True if ``timestamp`` is older than the last region invalidation time. """ raise NotImplementedError() def was_soft_invalidated(self): """Indicate the region was invalidated in soft mode. :return: Boolean. True if region was invalidated in soft mode. """ raise NotImplementedError() def was_hard_invalidated(self): """Indicate the region was invalidated in hard mode. :return: Boolean. True if region was invalidated in hard mode. 
""" raise NotImplementedError() class DefaultInvalidationStrategy(RegionInvalidationStrategy): def __init__(self): self._is_hard_invalidated = None self._invalidated = None def invalidate(self, hard=True): self._is_hard_invalidated = bool(hard) self._invalidated = time.time() def is_invalidated(self, timestamp): return self._invalidated is not None and timestamp < self._invalidated def was_hard_invalidated(self): return self._is_hard_invalidated is True def is_hard_invalidated(self, timestamp): return self.was_hard_invalidated() and self.is_invalidated(timestamp) def was_soft_invalidated(self): return self._is_hard_invalidated is False def is_soft_invalidated(self, timestamp): return self.was_soft_invalidated() and self.is_invalidated(timestamp) class CacheRegion(object): r"""A front end to a particular cache backend. :param name: Optional, a string name for the region. This isn't used internally but can be accessed via the ``.name`` parameter, helpful for configuring a region from a config file. :param function_key_generator: Optional. A function that will produce a "cache key" given a data creation function and arguments, when using the :meth:`.CacheRegion.cache_on_arguments` method. The structure of this function should be two levels: given the data creation function, return a new function that generates the key based on the given arguments. Such as:: def my_key_generator(namespace, fn, **kw): fname = fn.__name__ def generate_key(*arg): return namespace + "_" + fname + "_".join(str(s) for s in arg) return generate_key region = make_region( function_key_generator = my_key_generator ).configure( "dogpile.cache.dbm", expiration_time=300, arguments={ "filename":"file.dbm" } ) The ``namespace`` is that passed to :meth:`.CacheRegion.cache_on_arguments`. It's not consulted outside this function, so in fact can be of any form. For example, it can be passed as a tuple, used to specify arguments to pluck from \**kw:: def my_key_generator(namespace, fn): def generate_key(*arg, **kw): return ":".join( [kw[k] for k in namespace] + [str(x) for x in arg] ) return generate_key Where the decorator might be used as:: @my_region.cache_on_arguments(namespace=('x', 'y')) def my_function(a, b, **kw): return my_data() .. seealso:: :func:`.function_key_generator` - default key generator :func:`.kwarg_function_key_generator` - optional gen that also uses keyword arguments :param function_multi_key_generator: Optional. Similar to ``function_key_generator`` parameter, but it's used in :meth:`.CacheRegion.cache_multi_on_arguments`. Generated function should return list of keys. For example:: def my_multi_key_generator(namespace, fn, **kw): namespace = fn.__name__ + (namespace or '') def generate_keys(*args): return [namespace + ':' + str(a) for a in args] return generate_keys :param key_mangler: Function which will be used on all incoming keys before passing to the backend. Defaults to ``None``, in which case the key mangling function recommended by the cache backend will be used. A typical mangler is the SHA1 mangler found at :func:`.sha1_mangle_key` which coerces keys into a SHA1 hash, so that the string length is fixed. To disable all key mangling, set to ``False``. Another typical mangler is the built-in Python function ``str``, which can be used to convert non-string or Unicode keys to bytestrings, which is needed when using a backend such as bsddb or dbm under Python 2.x in conjunction with Unicode keys. 
:param async_creation_runner: A callable that, when specified, will be passed to and called by dogpile.lock when there is a stale value present in the cache. It will be passed the mutex and is responsible for releasing that mutex when finished. This can be used to defer the computation of expensive creator functions to later points in the future by way of, for example, a background thread, a long-running queue, or a task manager system like Celery. For a specific example using async_creation_runner, new values can be created in a background thread like so:: import threading def async_creation_runner(cache, somekey, creator, mutex): ''' Used by dogpile.core:Lock when appropriate ''' def runner(): try: value = creator() cache.set(somekey, value) finally: mutex.release() thread = threading.Thread(target=runner) thread.start() region = make_region( async_creation_runner=async_creation_runner, ).configure( 'dogpile.cache.memcached', expiration_time=5, arguments={ 'url': '127.0.0.1:11211', 'distributed_lock': True, } ) Remember that the first request for a key with no associated value will always block; async_creator will not be invoked. However, subsequent requests for cached-but-expired values will still return promptly. They will be refreshed by whatever asynchronous means the provided async_creation_runner callable implements. By default the async_creation_runner is disabled and is set to ``None``. .. versionadded:: 0.4.2 added the async_creation_runner feature. """ def __init__( self, name=None, function_key_generator=function_key_generator, function_multi_key_generator=function_multi_key_generator, key_mangler=None, async_creation_runner=None, ): """Construct a new :class:`.CacheRegion`.""" self.name = name self.function_key_generator = function_key_generator self.function_multi_key_generator = function_multi_key_generator self.key_mangler = self._user_defined_key_mangler = key_mangler self.async_creation_runner = async_creation_runner self.region_invalidator = DefaultInvalidationStrategy() def configure( self, backend, expiration_time=None, arguments=None, _config_argument_dict=None, _config_prefix=None, wrap=None, replace_existing_backend=False, region_invalidator=None, ): """Configure a :class:`.CacheRegion`. The :class:`.CacheRegion` itself is returned. :param backend: Required. This is the name of the :class:`.CacheBackend` to use, and is resolved by loading the class from the ``dogpile.cache`` entrypoint. :param expiration_time: Optional. The expiration time passed to the dogpile system. May be passed as an integer number of seconds, or as a ``datetime.timedelta`` value. .. versionadded:: 0.5.0 ``expiration_time`` may be optionally passed as a ``datetime.timedelta`` value. The :meth:`.CacheRegion.get_or_create` method as well as the :meth:`.CacheRegion.cache_on_arguments` decorator (though note: **not** the :meth:`.CacheRegion.get` method) will call upon the value creation function after this time period has passed since the last generation. :param arguments: Optional. The structure here is passed directly to the constructor of the :class:`.CacheBackend` in use, though is typically a dictionary. :param wrap: Optional. A list of :class:`.ProxyBackend` classes and/or instances, each of which will be applied in a chain to ultimately wrap the original backend, so that custom functionality augmentation can be applied. .. versionadded:: 0.5.0 .. seealso:: :ref:`changing_backend_behavior` :param replace_existing_backend: if True, the existing cache backend will be replaced.
Without this flag, an exception is raised if a backend is already configured. .. versionadded:: 0.5.7 :param region_invalidator: Optional. Override default invalidation strategy with custom implementation of :class:`.RegionInvalidationStrategy`. .. versionadded:: 0.6.2 """ if "backend" in self.__dict__ and not replace_existing_backend: raise exception.RegionAlreadyConfigured( "This region is already " "configured with backend: %s. " "Specify replace_existing_backend=True to replace." % self.backend ) try: backend_cls = _backend_loader.load(backend) except PluginLoader.NotFound: raise exception.PluginNotFound( "Couldn't find cache plugin to load: %s" % backend ) if _config_argument_dict: self.backend = backend_cls.from_config_dict( _config_argument_dict, _config_prefix ) else: self.backend = backend_cls(arguments or {}) if not expiration_time or isinstance(expiration_time, Number): self.expiration_time = expiration_time elif isinstance(expiration_time, datetime.timedelta): self.expiration_time = int( compat.timedelta_total_seconds(expiration_time) ) else: raise exception.ValidationError( "expiration_time is not a number or timedelta." ) if not self._user_defined_key_mangler: self.key_mangler = self.backend.key_mangler self._lock_registry = NameRegistry(self._create_mutex) if getattr(wrap, "__iter__", False): for wrapper in reversed(wrap): self.wrap(wrapper) if region_invalidator: self.region_invalidator = region_invalidator return self def wrap(self, proxy): """ Takes a ProxyBackend instance or class and wraps the attached backend. """ # if we were passed a type rather than an instance then # initialize it. if type(proxy) == type: proxy = proxy() if not issubclass(type(proxy), ProxyBackend): raise TypeError( "Type %s is not a valid ProxyBackend" % type(proxy) ) self.backend = proxy.wrap(self.backend) def _mutex(self, key): return self._lock_registry.get(key) class _LockWrapper(object): """weakref-capable wrapper for threading.Lock""" def __init__(self): self.lock = threading.Lock() def acquire(self, wait=True): return self.lock.acquire(wait) def release(self): self.lock.release() def _create_mutex(self, key): mutex = self.backend.get_mutex(key) if mutex is not None: return mutex else: return self._LockWrapper() # cached value _actual_backend = None @property def actual_backend(self): """Return the ultimate backend underneath any proxies. The backend might be the result of one or more ``proxy.wrap`` applications. If so, derive the actual underlying backend. .. versionadded:: 0.6.6 """ if self._actual_backend is None: _backend = self.backend while hasattr(_backend, "proxied"): _backend = _backend.proxied self._actual_backend = _backend return self._actual_backend def invalidate(self, hard=True): """Invalidate this :class:`.CacheRegion`. The default invalidation system works by setting a current timestamp (using ``time.time()``) representing the "minimum creation time" for a value. Any retrieved value whose creation time is prior to this timestamp is considered to be stale. It does not affect the data in the cache in any way, and is **local to this instance of :class:`.CacheRegion`.** .. warning:: The :meth:`.CacheRegion.invalidate` method's default mode of operation is to set a timestamp **local to this CacheRegion in this Python process only**. It does not impact other Python processes or regions as the timestamp is **only stored locally in memory**. 
To implement invalidation where the timestamp is stored in the cache or similar so that all Python processes can be affected by an invalidation timestamp, implement a custom :class:`.RegionInvalidationStrategy`. Once set, the invalidation time is honored by the :meth:`.CacheRegion.get_or_create`, :meth:`.CacheRegion.get_or_create_multi` and :meth:`.CacheRegion.get` methods. The method supports both "hard" and "soft" invalidation options. With "hard" invalidation, :meth:`.CacheRegion.get_or_create` will force an immediate regeneration of the value which all getters will wait for. With "soft" invalidation, subsequent getters will return the "old" value until the new one is available. Usage of "soft" invalidation requires that the region or the method is given a non-None expiration time. .. versionadded:: 0.3.0 :param hard: if True, cache values will all require immediate regeneration; dogpile logic won't be used. If False, the creation time of existing values will be pushed back before the expiration time so that a return+regen will be invoked. .. versionadded:: 0.5.1 """ self.region_invalidator.invalidate(hard) def configure_from_config(self, config_dict, prefix): """Configure from a configuration dictionary and a prefix. Example:: local_region = make_region() memcached_region = make_region() # regions are ready to use for function # decorators, but not yet for actual caching # later, when config is available myconfig = { "cache.local.backend":"dogpile.cache.dbm", "cache.local.arguments.filename":"/path/to/dbmfile.dbm", "cache.memcached.backend":"dogpile.cache.pylibmc", "cache.memcached.arguments.url":"127.0.0.1, 10.0.0.1", } local_region.configure_from_config(myconfig, "cache.local.") memcached_region.configure_from_config(myconfig, "cache.memcached.") """ config_dict = coerce_string_conf(config_dict) return self.configure( config_dict["%sbackend" % prefix], expiration_time=config_dict.get( "%sexpiration_time" % prefix, None ), _config_argument_dict=config_dict, _config_prefix="%sarguments." % prefix, wrap=config_dict.get("%swrap" % prefix, None), replace_existing_backend=config_dict.get( "%sreplace_existing_backend" % prefix, False ), ) @memoized_property def backend(self): raise exception.RegionNotConfigured( "No backend is configured on this region." ) @property def is_configured(self): """Return True if the backend has been configured via the :meth:`.CacheRegion.configure` method already. .. versionadded:: 0.5.1 """ return "backend" in self.__dict__ def get(self, key, expiration_time=None, ignore_expiration=False): """Return a value from the cache, based on the given key. If the value is not present, the method returns the token ``NO_VALUE``. ``NO_VALUE`` evaluates to False, but is separate from ``None`` to distinguish between a cached value of ``None``. By default, the configured expiration time of the :class:`.CacheRegion`, or alternatively the expiration time supplied by the ``expiration_time`` argument, is tested against the creation time of the retrieved value versus the current time (as reported by ``time.time()``). If stale, the cached value is ignored and the ``NO_VALUE`` token is returned. Passing the flag ``ignore_expiration=True`` bypasses the expiration time check. .. versionchanged:: 0.3.0 :meth:`.CacheRegion.get` now checks the value's creation time against the expiration time, rather than returning the value unconditionally. The method also interprets the cached value in terms of the current "invalidation" time as set by the :meth:`.invalidate` method. 
If a value is present, but its creation time is older than the current invalidation time, the ``NO_VALUE`` token is returned. Passing the flag ``ignore_expiration=True`` bypasses the invalidation time check. .. versionadded:: 0.3.0 Support for the :meth:`.CacheRegion.invalidate` method. :param key: Key to be retrieved. While it's typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present. :param expiration_time: Optional expiration time value which will supersede that configured on the :class:`.CacheRegion` itself. .. note:: The :paramref:`.CacheRegion.get.expiration_time` argument is **not persisted in the cache** and is relevant only to **this specific cache retrieval operation**, relative to the creation time stored with the existing cached value. Subsequent calls to :meth:`.CacheRegion.get` are **not** affected by this value. .. versionadded:: 0.3.0 :param ignore_expiration: if ``True``, the value is returned from the cache if present, regardless of configured expiration times or whether or not :meth:`.invalidate` was called. .. versionadded:: 0.3.0 .. seealso:: :meth:`.CacheRegion.get_multi` :meth:`.CacheRegion.get_or_create` :meth:`.CacheRegion.set` :meth:`.CacheRegion.delete` """ if self.key_mangler: key = self.key_mangler(key) value = self.backend.get(key) value = self._unexpired_value_fn(expiration_time, ignore_expiration)( value ) return value.payload def _unexpired_value_fn(self, expiration_time, ignore_expiration): if ignore_expiration: return lambda value: value else: if expiration_time is None: expiration_time = self.expiration_time current_time = time.time() def value_fn(value): if value is NO_VALUE: return value elif ( expiration_time is not None and current_time - value.metadata["ct"] > expiration_time ): return NO_VALUE elif self.region_invalidator.is_invalidated( value.metadata["ct"] ): return NO_VALUE else: return value return value_fn def get_multi(self, keys, expiration_time=None, ignore_expiration=False): """Return multiple values from the cache, based on the given keys. Returns values as a list matching the keys given. E.g.:: values = region.get_multi(["one", "two", "three"]) To convert values to a dictionary, use ``zip()``:: keys = ["one", "two", "three"] values = region.get_multi(keys) dictionary = dict(zip(keys, values)) Keys which aren't present in the list are returned as the ``NO_VALUE`` token. ``NO_VALUE`` evaluates to False, but is separate from ``None`` to distinguish between a cached value of ``None``. By default, the configured expiration time of the :class:`.CacheRegion`, or alternatively the expiration time supplied by the ``expiration_time`` argument, is tested against the creation time of the retrieved value versus the current time (as reported by ``time.time()``). If stale, the cached value is ignored and the ``NO_VALUE`` token is returned. Passing the flag ``ignore_expiration=True`` bypasses the expiration time check. .. 
versionadded:: 0.5.0 """ if not keys: return [] if self.key_mangler: keys = list(map(lambda key: self.key_mangler(key), keys)) backend_values = self.backend.get_multi(keys) _unexpired_value_fn = self._unexpired_value_fn( expiration_time, ignore_expiration ) return [ value.payload if value is not NO_VALUE else value for value in ( _unexpired_value_fn(value) for value in backend_values ) ] @contextlib.contextmanager def _log_time(self, keys): start_time = time.time() yield seconds = time.time() - start_time log.debug( "Cache value generated in %(seconds).3f seconds for key(s): " "%(keys)r", {"seconds": seconds, "keys": repr_obj(keys)}, ) def _is_cache_miss(self, value, orig_key): if value is NO_VALUE: log.debug("No value present for key: %r", orig_key) elif value.metadata["v"] != value_version: log.debug("Dogpile version update for key: %r", orig_key) elif self.region_invalidator.is_hard_invalidated(value.metadata["ct"]): log.debug("Hard invalidation detected for key: %r", orig_key) else: return False return True def get_or_create( self, key, creator, expiration_time=None, should_cache_fn=None, creator_args=None, ): """Return a cached value based on the given key. If the value does not exist or is considered to be expired based on its creation time, the given creation function may or may not be used to recreate the value and persist the newly generated value in the cache. Whether or not the function is used depends on if the *dogpile lock* can be acquired or not. If it can't, it means a different thread or process is already running a creation function for this key against the cache. When the dogpile lock cannot be acquired, the method will block if no previous value is available, until the lock is released and a new value is available. If a previous value is available, that value is returned immediately without blocking. If the :meth:`.invalidate` method has been called, and the retrieved value's timestamp is older than the invalidation timestamp, the value is unconditionally prevented from being returned. The method will attempt to acquire the dogpile lock to generate a new value, or will wait until the lock is released to return the new value. .. versionchanged:: 0.3.0 The value is unconditionally regenerated if the creation time is older than the last call to :meth:`.invalidate`. :param key: Key to be retrieved. While it's typical for a key to be a string, it is ultimately passed directly down to the cache backend, before being optionally processed by the key_mangler function, so can be of any type recognized by the backend or by the key_mangler function, if present. :param creator: function which creates a new value. :param creator_args: optional tuple of (args, kwargs) that will be passed to the creator function if present. .. versionadded:: 0.7.0 :param expiration_time: optional expiration time which will override the expiration time already configured on this :class:`.CacheRegion` if not None. To set no expiration, use the value -1. .. note:: The :paramref:`.CacheRegion.get_or_create.expiration_time` argument is **not persisted in the cache** and is relevant only to **this specific cache retrieval operation**, relative to the creation time stored with the existing cached value. Subsequent calls to :meth:`.CacheRegion.get_or_create` are **not** affected by this value. :param should_cache_fn: optional callable function which will receive the value returned by the "creator", and will then return True or False, indicating if the value should actually be cached or not.
If it returns False, the value is still returned, but isn't cached. E.g.:: def dont_cache_none(value): return value is not None value = region.get_or_create("some key", create_value, should_cache_fn=dont_cache_none) Above, the function returns the value of create_value() if the cache is invalid; however, if the return value is None, it won't be cached. .. versionadded:: 0.4.3 .. seealso:: :meth:`.CacheRegion.get` :meth:`.CacheRegion.cache_on_arguments` - applies :meth:`.get_or_create` to any function using a decorator. :meth:`.CacheRegion.get_or_create_multi` - multiple key/value version """ orig_key = key if self.key_mangler: key = self.key_mangler(key) def get_value(): value = self.backend.get(key) if self._is_cache_miss(value, orig_key): raise NeedRegenerationException() ct = value.metadata["ct"] if self.region_invalidator.is_soft_invalidated(ct): ct = time.time() - expiration_time - 0.0001 return value.payload, ct def gen_value(): with self._log_time(orig_key): if creator_args: created_value = creator( *creator_args[0], **creator_args[1] ) else: created_value = creator() value = self._value(created_value) if not should_cache_fn or should_cache_fn(created_value): self.backend.set(key, value) return value.payload, value.metadata["ct"] if expiration_time is None: expiration_time = self.expiration_time if ( expiration_time is None and self.region_invalidator.was_soft_invalidated() ): raise exception.DogpileCacheException( "Non-None expiration time required " "for soft invalidation" ) if expiration_time == -1: expiration_time = None if self.async_creation_runner: def async_creator(mutex): if creator_args: @wraps(creator) def go(): return creator(*creator_args[0], **creator_args[1]) else: go = creator return self.async_creation_runner(self, orig_key, go, mutex) else: async_creator = None with Lock( self._mutex(key), gen_value, get_value, expiration_time, async_creator, ) as value: return value def get_or_create_multi( self, keys, creator, expiration_time=None, should_cache_fn=None ): """Return a sequence of cached values based on a sequence of keys. The behavior for generation of values based on keys corresponds to that of :meth:`.Region.get_or_create`, with the exception that the ``creator()`` function may be asked to generate any subset of the given keys. The list of keys to be generated is passed to ``creator()``, and ``creator()`` should return the generated values as a sequence corresponding to the order of the keys. The method uses the same approach as :meth:`.Region.get_multi` and :meth:`.Region.set_multi` to get and set values from the backend. If you are using a :class:`.CacheBackend` or :class:`.ProxyBackend` that modifies values, take note this function invokes ``.set_multi()`` for newly generated values using the same values it returns to the calling function. A correct implementation of ``.set_multi()`` will not modify values in-place on the submitted ``mapping`` dict. :param keys: Sequence of keys to be retrieved. :param creator: function which accepts a sequence of keys and returns a sequence of new values. :param expiration_time: optional expiration time which will override the expiration time already configured on this :class:`.CacheRegion` if not None. To set no expiration, use the value -1. :param should_cache_fn: optional callable function which will receive each value returned by the "creator", and will then return True or False, indicating if the value should actually be cached or not. If it returns False, the value is still returned, but isn't cached. ..
versionadded:: 0.5.0 .. seealso:: :meth:`.CacheRegion.cache_multi_on_arguments` :meth:`.CacheRegion.get_or_create` """ def get_value(key): value = values.get(key, NO_VALUE) if self._is_cache_miss(value, orig_key): # dogpile.core understands a 0 here as # "the value is not available", e.g. # _has_value() will return False. return value.payload, 0 else: ct = value.metadata["ct"] if self.region_invalidator.is_soft_invalidated(ct): ct = time.time() - expiration_time - 0.0001 return value.payload, ct def gen_value(): raise NotImplementedError() def async_creator(key, mutex): mutexes[key] = mutex if expiration_time is None: expiration_time = self.expiration_time if ( expiration_time is None and self.region_invalidator.was_soft_invalidated() ): raise exception.DogpileCacheException( "Non-None expiration time required " "for soft invalidation" ) if expiration_time == -1: expiration_time = None mutexes = {} sorted_unique_keys = sorted(set(keys)) if self.key_mangler: mangled_keys = [self.key_mangler(k) for k in sorted_unique_keys] else: mangled_keys = sorted_unique_keys orig_to_mangled = dict(zip(sorted_unique_keys, mangled_keys)) values = dict(zip(mangled_keys, self.backend.get_multi(mangled_keys))) for orig_key, mangled_key in orig_to_mangled.items(): with Lock( self._mutex(mangled_key), gen_value, lambda: get_value(mangled_key), expiration_time, async_creator=lambda mutex: async_creator(orig_key, mutex), ): pass try: if mutexes: # sort the keys, the idea is to prevent deadlocks. # though haven't been able to simulate one anyway. keys_to_get = sorted(mutexes) with self._log_time(keys_to_get): new_values = creator(*keys_to_get) values_w_created = dict( (orig_to_mangled[k], self._value(v)) for k, v in zip(keys_to_get, new_values) ) if not should_cache_fn: self.backend.set_multi(values_w_created) else: values_to_cache = dict( (k, v) for k, v in values_w_created.items() if should_cache_fn(v[0]) ) if values_to_cache: self.backend.set_multi(values_to_cache) values.update(values_w_created) return [values[orig_to_mangled[k]].payload for k in keys] finally: for mutex in mutexes.values(): mutex.release() def _value(self, value): """Return a :class:`.CachedValue` given a value.""" return CachedValue(value, {"ct": time.time(), "v": value_version}) def set(self, key, value): """Place a new value in the cache under the given key.""" if self.key_mangler: key = self.key_mangler(key) self.backend.set(key, self._value(value)) def set_multi(self, mapping): """Place new values in the cache under the given keys. .. versionadded:: 0.5.0 """ if not mapping: return if self.key_mangler: mapping = dict( (self.key_mangler(k), self._value(v)) for k, v in mapping.items() ) else: mapping = dict((k, self._value(v)) for k, v in mapping.items()) self.backend.set_multi(mapping) def delete(self, key): """Remove a value from the cache. This operation is idempotent (can be called multiple times, or on a non-existent key, safely) """ if self.key_mangler: key = self.key_mangler(key) self.backend.delete(key) def delete_multi(self, keys): """Remove multiple values from the cache. This operation is idempotent (can be called multiple times, or on a non-existent key, safely) .. 
versionadded:: 0.5.0 """ if self.key_mangler: keys = list(map(lambda key: self.key_mangler(key), keys)) self.backend.delete_multi(keys) def cache_on_arguments( self, namespace=None, expiration_time=None, should_cache_fn=None, to_str=compat.string_type, function_key_generator=None, ): """A function decorator that will cache the return value of the function using a key derived from the function itself and its arguments. The decorator internally makes use of the :meth:`.CacheRegion.get_or_create` method to access the cache and conditionally call the function. See that method for additional behavioral details. E.g.:: @someregion.cache_on_arguments() def generate_something(x, y): return somedatabase.query(x, y) The decorated function can then be called normally, where data will be pulled from the cache region unless a new value is needed:: result = generate_something(5, 6) The function is also given an attribute ``invalidate()``, which provides for invalidation of the value. Pass to ``invalidate()`` the same arguments you'd pass to the function itself to represent a particular value:: generate_something.invalidate(5, 6) Another attribute ``set()`` is added to provide extra caching possibilities relative to the function. This is a convenience method for :meth:`.CacheRegion.set` which will store a given value directly without calling the decorated function. The value to be cached is passed as the first argument, and the arguments which would normally be passed to the function should follow:: generate_something.set(3, 5, 6) The above example is equivalent to calling ``generate_something(5, 6)``, if the function were to produce the value ``3`` as the value to be cached. .. versionadded:: 0.4.1 Added ``set()`` method to decorated function. Similar to ``set()`` is ``refresh()``. This attribute will invoke the decorated function, populate the cache with the new value, and return that value:: newvalue = generate_something.refresh(5, 6) .. versionadded:: 0.5.0 Added ``refresh()`` method to decorated function. ``original()``, on the other hand, will invoke the decorated function without any caching:: newvalue = generate_something.original(5, 6) .. versionadded:: 0.6.0 Added ``original()`` method to decorated function. Lastly, the ``get()`` method returns either the value cached for the given key, or the token ``NO_VALUE`` if no such key exists:: value = generate_something.get(5, 6) .. versionadded:: 0.5.3 Added ``get()`` method to decorated function. The default key generation will use the name of the function, the module name for the function, the arguments passed, as well as an optional "namespace" parameter in order to generate a cache key. Given a function ``one`` inside the module ``myapp.tools``:: @region.cache_on_arguments(namespace="foo") def one(a, b): return a + b Above, calling ``one(3, 4)`` will produce a cache key as follows:: myapp.tools:one|foo|3 4 The key generator will ignore an initial argument of ``self`` or ``cls``, making the decorator suitable (with caveats) for use with instance or class methods. Given the example:: class MyClass(object): @region.cache_on_arguments(namespace="foo") def one(self, a, b): return a + b The cache key above for ``MyClass().one(3, 4)`` will again produce the same cache key of ``myapp.tools:one|foo|3 4`` - the name ``self`` is skipped.
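One consequence of this, sketched below with hypothetical values, is that all instances of ``MyClass`` share the same cached value for a given set of arguments::

    m1, m2 = MyClass(), MyClass()
    m1.one(3, 4)   # generates and caches under "myapp.tools:one|foo|3 4"
    m2.one(3, 4)   # same cache key; returns the cached value
                   # without invoking one() again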
The ``namespace`` parameter is optional, and is used normally to disambiguate two functions of the same name within the same module, as can occur when decorating instance or class methods as below:: class MyClass(object): @region.cache_on_arguments(namespace='MC') def somemethod(self, x, y): "" class MyOtherClass(object): @region.cache_on_arguments(namespace='MOC') def somemethod(self, x, y): "" Above, the ``namespace`` parameter disambiguates between ``somemethod`` on ``MyClass`` and ``MyOtherClass``. Python class declaration mechanics otherwise prevent the decorator from having awareness of the ``MyClass`` and ``MyOtherClass`` names, as the function is received by the decorator before it becomes an instance method. The function key generation can be entirely replaced on a per-region basis using the ``function_key_generator`` argument present on :func:`.make_region` and :class:`.CacheRegion`. It defaults to :func:`.function_key_generator`. :param namespace: optional string argument which will be established as part of the cache key. This may be needed to disambiguate functions of the same name within the same source file, such as those associated with classes - note that the decorator itself can't see the parent class on a function as the class is being declared. :param expiration_time: if not None, will override the normal expiration time. May be specified as a callable, taking no arguments, that returns a value to be used as the ``expiration_time``. This callable will be called whenever the decorated function itself is called, in caching or retrieving. Thus, this can be used to determine a *dynamic* expiration time for the cached function result. Example use cases include "cache the result until the end of the day, week or time period" and "cache until a certain date or time passes". .. versionchanged:: 0.5.0 ``expiration_time`` may be passed as a callable to :meth:`.CacheRegion.cache_on_arguments`. :param should_cache_fn: passed to :meth:`.CacheRegion.get_or_create`. .. versionadded:: 0.4.3 :param to_str: callable, will be called on each function argument in order to convert to a string. Defaults to ``str()``. If the function accepts non-ascii unicode arguments on Python 2.x, the ``unicode()`` builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache. .. versionadded:: 0.5.0 :param function_key_generator: a function that will produce a "cache key". This function will supersede the one configured on the :class:`.CacheRegion` itself. .. versionadded:: 0.5.5 ..
seealso:: :meth:`.CacheRegion.cache_multi_on_arguments` :meth:`.CacheRegion.get_or_create` """ expiration_time_is_callable = compat.callable(expiration_time) if function_key_generator is None: function_key_generator = self.function_key_generator def get_or_create_for_user_func(key_generator, user_func, *arg, **kw): key = key_generator(*arg, **kw) timeout = ( expiration_time() if expiration_time_is_callable else expiration_time ) return self.get_or_create( key, user_func, timeout, should_cache_fn, (arg, kw) ) def cache_decorator(user_func): if to_str is compat.string_type: # backwards compatible key_generator = function_key_generator(namespace, user_func) else: key_generator = function_key_generator( namespace, user_func, to_str=to_str ) def refresh(*arg, **kw): """ Like invalidate, but regenerates the value instead """ key = key_generator(*arg, **kw) value = user_func(*arg, **kw) self.set(key, value) return value def invalidate(*arg, **kw): key = key_generator(*arg, **kw) self.delete(key) def set_(value, *arg, **kw): key = key_generator(*arg, **kw) self.set(key, value) def get(*arg, **kw): key = key_generator(*arg, **kw) return self.get(key) user_func.set = set_ user_func.invalidate = invalidate user_func.get = get user_func.refresh = refresh user_func.original = user_func # Use `decorate` to preserve the signature of :param:`user_func`. return decorate( user_func, partial(get_or_create_for_user_func, key_generator) ) return cache_decorator def cache_multi_on_arguments( self, namespace=None, expiration_time=None, should_cache_fn=None, asdict=False, to_str=compat.string_type, function_multi_key_generator=None, ): """A function decorator that will cache multiple return values from the function using a sequence of keys derived from the function itself and the arguments passed to it. This method is the "multiple key" analogue to the :meth:`.CacheRegion.cache_on_arguments` method. Example:: @someregion.cache_multi_on_arguments() def generate_something(*keys): return [ somedatabase.query(key) for key in keys ] The decorated function can be called normally. The decorator will produce a list of cache keys using a mechanism similar to that of :meth:`.CacheRegion.cache_on_arguments`, combining the name of the function with the optional namespace and with the string form of each key. It will then consult the cache using the same mechanism as that of :meth:`.CacheRegion.get_multi` to retrieve all current values; the originally passed keys corresponding to those values which aren't generated or need regeneration will be assembled into a new argument list, and the decorated function is then called with that subset of arguments. The returned result is a list:: result = generate_something("key1", "key2", "key3") The decorator internally makes use of the :meth:`.CacheRegion.get_or_create_multi` method to access the cache and conditionally call the function. See that method for additional behavioral details. Unlike the :meth:`.CacheRegion.cache_on_arguments` method, :meth:`.CacheRegion.cache_multi_on_arguments` works only with a single function signature, one which takes a simple list of keys as arguments. 
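To illustrate the "subset of arguments" behavior described above, a sketch of two successive calls (the keys and the expiration scenario are hypothetical)::

    # cold cache: the decorated function is invoked with
    # ("k1", "k2", "k3") and all three values are cached
    generate_something("k1", "k2", "k3")

    # if "k2" later expires or is invalidated, the next call
    # invokes the function with just ("k2",); "k1" and "k3" are
    # served from the cache
    generate_something("k1", "k2", "k3")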
Like :meth:`.CacheRegion.cache_on_arguments`, the decorated function is also provided with a ``set()`` method, which here accepts a mapping of keys and values to set in the cache:: generate_something.set({"k1": "value1", "k2": "value2", "k3": "value3"}) ...an ``invalidate()`` method, which has the effect of deleting the given sequence of keys using the same mechanism as that of :meth:`.CacheRegion.delete_multi`:: generate_something.invalidate("k1", "k2", "k3") ...a ``refresh()`` method, which will call the creation function, cache the new values, and return them:: values = generate_something.refresh("k1", "k2", "k3") ...and a ``get()`` method, which will return values based on the given arguments:: values = generate_something.get("k1", "k2", "k3") .. versionadded:: 0.5.3 Added ``get()`` method to decorated function. Parameters passed to :meth:`.CacheRegion.cache_multi_on_arguments` have the same meaning as those passed to :meth:`.CacheRegion.cache_on_arguments`. :param namespace: optional string argument which will be established as part of each cache key. :param expiration_time: if not None, will override the normal expiration time. May be passed as an integer or a callable. :param should_cache_fn: passed to :meth:`.CacheRegion.get_or_create_multi`. This function is given a value as returned by the creator, and only if it returns True will that value be placed in the cache. :param asdict: if ``True``, the decorated function should return its result as a dictionary of keys->values, and the final result of calling the decorated function will also be a dictionary. If left at its default value of ``False``, the decorated function should return its result as a list of values, and the final result of calling the decorated function will also be a list. When ``asdict==True``, if the dictionary returned by the decorated function is missing keys, those keys will not be cached. :param to_str: callable, will be called on each function argument in order to convert to a string. Defaults to ``str()``. If the function accepts non-ascii unicode arguments on Python 2.x, the ``unicode()`` builtin can be substituted, but note this will produce unicode cache keys which may require key mangling before reaching the cache. .. versionadded:: 0.5.0 :param function_multi_key_generator: a function that will produce a list of keys. This function will supersede the one configured on the :class:`.CacheRegion` itself. .. versionadded:: 0.5.5 ..
seealso:: :meth:`.CacheRegion.cache_on_arguments` :meth:`.CacheRegion.get_or_create_multi` """ expiration_time_is_callable = compat.callable(expiration_time) if function_multi_key_generator is None: function_multi_key_generator = self.function_multi_key_generator def get_or_create_for_user_func(key_generator, user_func, *arg, **kw): cache_keys = arg keys = key_generator(*arg, **kw) key_lookup = dict(zip(keys, cache_keys)) @wraps(user_func) def creator(*keys_to_create): return user_func(*[key_lookup[k] for k in keys_to_create]) timeout = ( expiration_time() if expiration_time_is_callable else expiration_time ) if asdict: def dict_create(*keys): d_values = creator(*keys) return [ d_values.get(key_lookup[k], NO_VALUE) for k in keys ] def wrap_cache_fn(value): if value is NO_VALUE: return False elif not should_cache_fn: return True else: return should_cache_fn(value) result = self.get_or_create_multi( keys, dict_create, timeout, wrap_cache_fn ) result = dict( (k, v) for k, v in zip(cache_keys, result) if v is not NO_VALUE ) else: result = self.get_or_create_multi( keys, creator, timeout, should_cache_fn ) return result def cache_decorator(user_func): key_generator = function_multi_key_generator( namespace, user_func, to_str=to_str ) def invalidate(*arg): keys = key_generator(*arg) self.delete_multi(keys) def set_(mapping): keys = list(mapping) gen_keys = key_generator(*keys) self.set_multi( dict( (gen_key, mapping[key]) for gen_key, key in zip(gen_keys, keys) ) ) def get(*arg): keys = key_generator(*arg) return self.get_multi(keys) def refresh(*arg): keys = key_generator(*arg) values = user_func(*arg) if asdict: self.set_multi(dict(zip(keys, [values[a] for a in arg]))) return values else: self.set_multi(dict(zip(keys, values))) return values user_func.set = set_ user_func.invalidate = invalidate user_func.refresh = refresh user_func.get = get # Use `decorate` to preserve the signature of :param:`user_func`. return decorate( user_func, partial(get_or_create_for_user_func, key_generator) ) return cache_decorator def make_region(*arg, **kw): """Instantiate a new :class:`.CacheRegion`. Currently, :func:`.make_region` is a passthrough to :class:`.CacheRegion`. See that class for constructor arguments. """ return CacheRegion(*arg, **kw) dogpile.cache-0.9.0/dogpile/cache/util.py0000664000175000017500000001255413555610667021335 0ustar classicclassic00000000000000from hashlib import sha1 from ..util import compat from ..util import langhelpers def function_key_generator(namespace, fn, to_str=compat.string_type): """Return a function that generates a string key, based on a given function as well as arguments to the returned function itself. This is used by :meth:`.CacheRegion.cache_on_arguments` to generate a cache key from a decorated function. An alternate function may be used by specifying the :paramref:`.CacheRegion.function_key_generator` argument for :class:`.CacheRegion`. .. seealso:: :func:`.kwarg_function_key_generator` - similar function that also takes keyword arguments into account """ if namespace is None: namespace = "%s:%s" % (fn.__module__, fn.__name__) else: namespace = "%s:%s|%s" % (fn.__module__, fn.__name__, namespace) args = compat.inspect_getargspec(fn) has_self = args[0] and args[0][0] in ("self", "cls") def generate_key(*args, **kw): if kw: raise ValueError( "dogpile.cache's default key creation " "function does not accept keyword arguments." 
) if has_self: args = args[1:] return namespace + "|" + " ".join(map(to_str, args)) return generate_key def function_multi_key_generator(namespace, fn, to_str=compat.string_type): if namespace is None: namespace = "%s:%s" % (fn.__module__, fn.__name__) else: namespace = "%s:%s|%s" % (fn.__module__, fn.__name__, namespace) args = compat.inspect_getargspec(fn) has_self = args[0] and args[0][0] in ("self", "cls") def generate_keys(*args, **kw): if kw: raise ValueError( "dogpile.cache's default key creation " "function does not accept keyword arguments." ) if has_self: args = args[1:] return [namespace + "|" + key for key in map(to_str, args)] return generate_keys def kwarg_function_key_generator(namespace, fn, to_str=compat.string_type): """Return a function that generates a string key, based on a given function as well as arguments to the returned function itself. For kwargs passed in, we will build a dict of all argname (key) argvalue (values) including default args from the argspec and then alphabetize the list before generating the key. .. versionadded:: 0.6.2 .. seealso:: :func:`.function_key_generator` - default key generation function """ if namespace is None: namespace = "%s:%s" % (fn.__module__, fn.__name__) else: namespace = "%s:%s|%s" % (fn.__module__, fn.__name__, namespace) argspec = compat.inspect_getargspec(fn) default_list = list(argspec.defaults or []) # Reverse the list, as we want to compare the argspec by negative index, # meaning default_list[0] should be args[-1], which works well with # enumerate() default_list.reverse() # use idx*-1 to create the correct right-lookup index. args_with_defaults = dict( (argspec.args[(idx * -1)], default) for idx, default in enumerate(default_list, 1) ) if argspec.args and argspec.args[0] in ("self", "cls"): arg_index_start = 1 else: arg_index_start = 0 def generate_key(*args, **kwargs): as_kwargs = dict( [ (argspec.args[idx], arg) for idx, arg in enumerate( args[arg_index_start:], arg_index_start ) ] ) as_kwargs.update(kwargs) for arg, val in args_with_defaults.items(): if arg not in as_kwargs: as_kwargs[arg] = val argument_values = [as_kwargs[key] for key in sorted(as_kwargs.keys())] return namespace + "|" + " ".join(map(to_str, argument_values)) return generate_key def sha1_mangle_key(key): """a SHA1 key mangler.""" if isinstance(key, compat.text_type): key = key.encode("utf-8") return sha1(key).hexdigest() def length_conditional_mangler(length, mangler): """a key mangler that mangles if the length of the key is past a certain threshold. """ def mangle(key): if len(key) >= length: return mangler(key) else: return key return mangle # in the 0.6 release these functions were moved to the dogpile.util namespace. # They are linked here to maintain compatibility with older versions. coerce_string_conf = langhelpers.coerce_string_conf KeyReentrantMutex = langhelpers.KeyReentrantMutex memoized_property = langhelpers.memoized_property PluginLoader = langhelpers.PluginLoader to_list = langhelpers.to_list class repr_obj(object): __slots__ = ("value", "max_chars") def __init__(self, value, max_chars=300): self.value = value self.max_chars = max_chars def __eq__(self, other): return other.value == self.value def __repr__(self): rep = repr(self.value) lenrep = len(rep) if lenrep > self.max_chars: segment_length = self.max_chars // 2 rep = ( rep[0:segment_length] + ( " ... (%d characters truncated) ... 
" % (lenrep - self.max_chars) ) + rep[-segment_length:] ) return rep dogpile.cache-0.9.0/dogpile/core.py0000664000175000017500000000106513555610667020240 0ustar classicclassic00000000000000"""Compatibility namespace for those using dogpile.core. As of dogpile.cache 0.6.0, dogpile.core as a separate package is no longer used by dogpile.cache. Note that this namespace will not take effect if an actual dogpile.core installation is present. """ from . import __version__ # noqa from .lock import Lock # noqa from .lock import NeedRegenerationException # noqa from .util import nameregistry # noqa from .util import readwrite_lock # noqa from .util.nameregistry import NameRegistry # noqa from .util.readwrite_lock import ReadWriteMutex # noqa dogpile.cache-0.9.0/dogpile/lock.py0000664000175000017500000001572313555610667020246 0ustar classicclassic00000000000000import logging import time log = logging.getLogger(__name__) class NeedRegenerationException(Exception): """An exception that when raised in the 'with' block, forces the 'has_value' flag to False and incurs a regeneration of the value. """ NOT_REGENERATED = object() class Lock(object): """Dogpile lock class. Provides an interface around an arbitrary mutex that allows one thread/process to be elected as the creator of a new value, while other threads/processes continue to return the previous version of that value. :param mutex: A mutex object that provides ``acquire()`` and ``release()`` methods. :param creator: Callable which returns a tuple of the form (new_value, creation_time). "new_value" should be a newly generated value representing completed state. "creation_time" should be a floating point time value which is relative to Python's ``time.time()`` call, representing the time at which the value was created. This time value should be associated with the created value. :param value_and_created_fn: Callable which returns a tuple of the form (existing_value, creation_time). This basically should return what the last local call to the ``creator()`` callable has returned, i.e. the value and the creation time, which would be assumed here to be from a cache. If the value is not available, the :class:`.NeedRegenerationException` exception should be thrown. :param expiretime: Expiration time in seconds. Set to ``None`` for never expires. This timestamp is compared to the creation_time result and ``time.time()`` to determine if the value returned by value_and_created_fn is "expired". :param async_creator: A callable. If specified, this callable will be passed the mutex as an argument and is responsible for releasing the mutex after it finishes some asynchronous value creation. The intent is for this to be used to defer invocation of the creator callable until some later time. 
""" def __init__( self, mutex, creator, value_and_created_fn, expiretime, async_creator=None, ): self.mutex = mutex self.creator = creator self.value_and_created_fn = value_and_created_fn self.expiretime = expiretime self.async_creator = async_creator def _is_expired(self, createdtime): """Return true if the expiration time is reached, or no value is available.""" return not self._has_value(createdtime) or ( self.expiretime is not None and time.time() - createdtime > self.expiretime ) def _has_value(self, createdtime): """Return true if the creation function has proceeded at least once.""" return createdtime > 0 def _enter(self): value_fn = self.value_and_created_fn try: value = value_fn() value, createdtime = value except NeedRegenerationException: log.debug("NeedRegenerationException") value = NOT_REGENERATED createdtime = -1 generated = self._enter_create(value, createdtime) if generated is not NOT_REGENERATED: generated, createdtime = generated return generated elif value is NOT_REGENERATED: # we called upon the creator, and it said that it # didn't regenerate. this typically means another # thread is running the creation function, and that the # cache should still have a value. However, # we don't have a value at all, which is unusual since we just # checked for it, so check again (TODO: is this a real codepath?) try: value, createdtime = value_fn() return value except NeedRegenerationException: raise Exception( "Generation function should " "have just been called by a concurrent " "thread." ) else: return value def _enter_create(self, value, createdtime): if not self._is_expired(createdtime): return NOT_REGENERATED _async = False if self._has_value(createdtime): has_value = True if not self.mutex.acquire(False): log.debug( "creation function in progress elsewhere, returning" ) return NOT_REGENERATED else: has_value = False log.debug("no value, waiting for create lock") self.mutex.acquire() try: log.debug("value creation lock %r acquired" % self.mutex) if not has_value: # we entered without a value, or at least with "creationtime == # 0". Run the "getter" function again, to see if another # thread has already generated the value while we waited on the # mutex, or if the caller is otherwise telling us there is a # value already which allows us to use async regeneration. (the # latter is used by the multi-key routine). try: value, createdtime = self.value_and_created_fn() except NeedRegenerationException: # nope, nobody created the value, we're it. # we must create it right now pass else: has_value = True # caller is telling us there is a value and that we can # use async creation if it is expired. if not self._is_expired(createdtime): # it's not expired, return it log.debug("Concurrent thread created the value") return value, createdtime # otherwise it's expired, call creator again if has_value and self.async_creator: # we have a value we can return, safe to use async_creator log.debug("Passing creation lock to async runner") # so...run it! 
self.async_creator(self.mutex) _async = True # and return the expired value for now return value, createdtime # it's expired, and it's our turn to create it synchronously, *or*, # there's no value at all, and we have to create it synchronously log.debug( "Calling creation function for %s value", "not-yet-present" if not has_value else "previously expired", ) return self.creator() finally: if not _async: self.mutex.release() log.debug("Released creation lock") def __enter__(self): return self._enter() def __exit__(self, type_, value, traceback): pass dogpile.cache-0.9.0/dogpile/util/0000775000175000017500000000000013555610710017676 5ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile/util/__init__.py0000664000175000017500000000052313555610667022022 0ustar classicclassic00000000000000from .langhelpers import coerce_string_conf # noqa from .langhelpers import KeyReentrantMutex # noqa from .langhelpers import memoized_property # noqa from .langhelpers import PluginLoader # noqa from .langhelpers import to_list # noqa from .nameregistry import NameRegistry # noqa from .readwrite_lock import ReadWriteMutex # noqa dogpile.cache-0.9.0/dogpile/util/compat.py0000664000175000017500000000603113555610667021546 0ustar classicclassic00000000000000import collections import inspect import sys py2k = sys.version_info < (3, 0) py3k = sys.version_info >= (3, 0) py32 = sys.version_info >= (3, 2) py27 = sys.version_info >= (2, 7) jython = sys.platform.startswith("java") win32 = sys.platform.startswith("win") try: import threading except ImportError: import dummy_threading as threading # noqa FullArgSpec = collections.namedtuple( "FullArgSpec", [ "args", "varargs", "varkw", "defaults", "kwonlyargs", "kwonlydefaults", "annotations", ], ) ArgSpec = collections.namedtuple( "ArgSpec", ["args", "varargs", "keywords", "defaults"] ) def inspect_getfullargspec(func): """Fully vendored version of getfullargspec from Python 3.3.""" if inspect.ismethod(func): func = func.__func__ if not inspect.isfunction(func): raise TypeError("{!r} is not a Python function".format(func)) co = func.__code__ if not inspect.iscode(co): raise TypeError("{!r} is not a code object".format(co)) nargs = co.co_argcount names = co.co_varnames nkwargs = co.co_kwonlyargcount if py3k else 0 args = list(names[:nargs]) kwonlyargs = list(names[nargs : nargs + nkwargs]) nargs += nkwargs varargs = None if co.co_flags & inspect.CO_VARARGS: varargs = co.co_varnames[nargs] nargs = nargs + 1 varkw = None if co.co_flags & inspect.CO_VARKEYWORDS: varkw = co.co_varnames[nargs] return FullArgSpec( args, varargs, varkw, func.__defaults__, kwonlyargs, func.__kwdefaults__ if py3k else None, func.__annotations__ if py3k else {}, ) def inspect_getargspec(func): return ArgSpec(*inspect_getfullargspec(func)[0:4]) if py3k: # pragma: no cover string_types = (str,) text_type = str string_type = str if py32: callable = callable # noqa else: def callable(fn): # noqa return hasattr(fn, "__call__") def u(s): return s def ue(s): return s import configparser import io import _thread as thread else: # Using noqa below due to tox -e pep8 which uses # python3.7 as the default interpreter string_types = (basestring,) # noqa text_type = unicode # noqa string_type = str def u(s): return unicode(s, "utf-8") # noqa def ue(s): return unicode(s, "unicode_escape") # noqa import ConfigParser as configparser # noqa import StringIO as io # noqa callable = callable # noqa import thread # noqa if py3k or jython: import pickle else: import cPickle as pickle # noqa if py3k: def
read_config_file(config, fileobj): return config.read_file(fileobj) else: def read_config_file(config, fileobj): return config.readfp(fileobj) def timedelta_total_seconds(td): if py27: return td.total_seconds() else: return ( td.microseconds + (td.seconds + td.days * 24 * 3600) * 1e6 ) / 1e6 dogpile.cache-0.9.0/dogpile/util/langhelpers.py0000664000175000017500000000700213555610667022566 0ustar classicclassic00000000000000import collections import re from . import compat def coerce_string_conf(d): result = {} for k, v in d.items(): if not isinstance(v, compat.string_types): result[k] = v continue v = v.strip() if re.match(r"^[-+]?\d+$", v): result[k] = int(v) elif re.match(r"^[-+]?(?:\d+(?:\.\d*)?|\.\d+)(?:[eE][-+]?\d+)?$", v): result[k] = float(v) elif v.lower() in ("false", "true"): result[k] = v.lower() == "true" elif v == "None": result[k] = None else: result[k] = v return result class PluginLoader(object): def __init__(self, group): self.group = group self.impls = {} def load(self, name): if name in self.impls: return self.impls[name]() else: # pragma NO COVERAGE import pkg_resources for impl in pkg_resources.iter_entry_points(self.group, name): self.impls[name] = impl.load return impl.load() else: raise self.NotFound( "Can't load plugin %s %s" % (self.group, name) ) def register(self, name, modulepath, objname): def load(): mod = __import__(modulepath, fromlist=[objname]) return getattr(mod, objname) self.impls[name] = load class NotFound(Exception): """The specified plugin could not be found.""" class memoized_property(object): """A read-only @property that is only evaluated once.""" def __init__(self, fget, doc=None): self.fget = fget self.__doc__ = doc or fget.__doc__ self.__name__ = fget.__name__ def __get__(self, obj, cls): if obj is None: return self obj.__dict__[self.__name__] = result = self.fget(obj) return result def to_list(x, default=None): """Coerce to a list.""" if x is None: return default if not isinstance(x, (list, tuple)): return [x] else: return x class KeyReentrantMutex(object): def __init__(self, key, mutex, keys): self.key = key self.mutex = mutex self.keys = keys @classmethod def factory(cls, mutex): # this collection holds zero or one # thread idents as the key; a set of # keynames held as the value. keystore = collections.defaultdict(set) def fac(key): return KeyReentrantMutex(key, mutex, keystore) return fac def acquire(self, wait=True): current_thread = compat.threading.current_thread().ident keys = self.keys.get(current_thread) if keys is not None and self.key not in keys: # current lockholder, new key. add it in keys.add(self.key) return True elif self.mutex.acquire(wait=wait): # after acquire, create new set and add our key self.keys[current_thread].add(self.key) return True else: return False def release(self): current_thread = compat.threading.current_thread().ident keys = self.keys.get(current_thread) assert keys is not None, "this thread didn't do the acquire" assert self.key in keys, "No acquire held for key '%s'" % self.key keys.remove(self.key) if not keys: # when list of keys empty, remove # the thread ident and unlock. del self.keys[current_thread] self.mutex.release() dogpile.cache-0.9.0/dogpile/util/nameregistry.py0000664000175000017500000000520613555610667022777 0ustar classicclassic00000000000000import weakref from .compat import threading class NameRegistry(object): """Generates and returns an object, keeping it as a singleton for a certain identifier for as long as it's strongly referenced. e.g.:: class MyFoo(object): "some important object."
def __init__(self, identifier): self.identifier = identifier registry = NameRegistry(MyFoo) # thread 1: my_foo = registry.get("foo1") # thread 2 my_foo = registry.get("foo1") Above, ``my_foo`` in both thread #1 and #2 will be *the same object*. The constructor for ``MyFoo`` will be called once, passing the identifier ``foo1`` as the argument. When thread 1 and thread 2 both complete or otherwise delete references to ``my_foo``, the object is *removed* from the :class:`.NameRegistry` as a result of Python garbage collection. :param creator: A function that will create a new value, given the identifier passed to the :meth:`.NameRegistry.get` method. """ _locks = weakref.WeakValueDictionary() _mutex = threading.RLock() def __init__(self, creator): """Create a new :class:`.NameRegistry`. """ self._values = weakref.WeakValueDictionary() self._mutex = threading.RLock() self.creator = creator def get(self, identifier, *args, **kw): r"""Get and possibly create the value. :param identifier: Hash key for the value. If the creation function is called, this identifier will also be passed to the creation function. :param \*args, \**kw: Additional arguments which will also be passed to the creation function if it is called. """ try: if identifier in self._values: return self._values[identifier] else: return self._sync_get(identifier, *args, **kw) except KeyError: return self._sync_get(identifier, *args, **kw) def _sync_get(self, identifier, *args, **kw): self._mutex.acquire() try: try: if identifier in self._values: return self._values[identifier] else: self._values[identifier] = value = self.creator( identifier, *args, **kw ) return value except KeyError: self._values[identifier] = value = self.creator( identifier, *args, **kw ) return value finally: self._mutex.release() dogpile.cache-0.9.0/dogpile/util/readwrite_lock.py0000664000175000017500000001067513555610667023272 0ustar classicclassic00000000000000import logging from .compat import threading log = logging.getLogger(__name__) class LockError(Exception): pass class ReadWriteMutex(object): """A mutex which allows multiple readers, single writer. :class:`.ReadWriteMutex` uses a Python ``threading.Condition`` to provide this functionality across threads within a process. The Beaker package also contained a file-lock based version of this concept, so that readers/writers could be synchronized across processes with a common filesystem. A future Dogpile release may include this additional class at some point. """ def __init__(self): # counts how many asynchronous methods are executing self.async_ = 0 # pointer to thread that is the current sync operation self.current_sync_operation = None # condition object to lock on self.condition = threading.Condition(threading.Lock()) def acquire_read_lock(self, wait=True): """Acquire the 'read' lock.""" self.condition.acquire() try: # see if a synchronous operation is waiting to start # or is already running, in which case we wait (or just # give up and return) if wait: while self.current_sync_operation is not None: self.condition.wait() else: if self.current_sync_operation is not None: return False self.async_ += 1 log.debug("%s acquired read lock", self) finally: self.condition.release() if not wait: return True def release_read_lock(self): """Release the 'read' lock.""" self.condition.acquire() try: self.async_ -= 1 # check if we are the last asynchronous reader thread # out the door. if self.async_ == 0: # yes. 
so if a sync operation is waiting, notifyAll to wake # it up if self.current_sync_operation is not None: self.condition.notifyAll() elif self.async_ < 0: raise LockError( "Synchronizer error - too many " "release_read_locks called" ) log.debug("%s released read lock", self) finally: self.condition.release() def acquire_write_lock(self, wait=True): """Acquire the 'write' lock.""" self.condition.acquire() try: # here, we are not a synchronous reader, and after returning, # assuming waiting or immediate availability, we will be. if wait: # if another sync is working, wait while self.current_sync_operation is not None: self.condition.wait() else: # if another sync is working, # we dont want to wait, so forget it if self.current_sync_operation is not None: return False # establish ourselves as the current sync # this indicates to other read/write operations # that they should wait until this is None again self.current_sync_operation = threading.currentThread() # now wait again for asyncs to finish if self.async_ > 0: if wait: # wait self.condition.wait() else: # we dont want to wait, so forget it self.current_sync_operation = None return False log.debug("%s acquired write lock", self) finally: self.condition.release() if not wait: return True def release_write_lock(self): """Release the 'write' lock.""" self.condition.acquire() try: if self.current_sync_operation is not threading.currentThread(): raise LockError( "Synchronizer error - current thread doesn't " "have the write lock" ) # reset the current sync operation so # another can get it self.current_sync_operation = None # tell everyone to get ready self.condition.notifyAll() log.debug("%s released write lock", self) finally: # everyone go !! self.condition.release() dogpile.cache-0.9.0/dogpile.cache.egg-info/0000775000175000017500000000000013555610710021455 5ustar classicclassic00000000000000dogpile.cache-0.9.0/dogpile.cache.egg-info/PKG-INFO0000664000175000017500000001046713555610710022562 0ustar classicclassic00000000000000Metadata-Version: 1.1 Name: dogpile.cache Version: 0.9.0 Summary: A caching front-end based on the Dogpile lock. Home-page: https://github.com/sqlalchemy/dogpile.cache Author: Mike Bayer Author-email: mike_mp@zzzcomputing.com License: BSD Description: dogpile ======= Dogpile consists of two subsystems, one building on top of the other. ``dogpile`` provides the concept of a "dogpile lock", a control structure which allows a single thread of execution to be selected as the "creator" of some resource, while allowing other threads of execution to refer to the previous version of this resource as the creation proceeds; if there is no previous version, then those threads block until the object is available. ``dogpile.cache`` is a caching API which provides a generic interface to caching backends of any variety, and additionally provides API hooks which integrate these cache backends with the locking mechanism of ``dogpile``. Overall, dogpile.cache is intended as a replacement to the `Beaker `_ caching system, the internals of which are written by the same author. All the ideas of Beaker which "work" are re- implemented in dogpile.cache in a more efficient and succinct manner, and all the cruft (Beaker's internals were first written in 2005) relegated to the trash heap. Documentation ------------- See dogpile.cache's full documentation at `dogpile.cache documentation `_. The sections below provide a brief synopsis of the ``dogpile`` packages. 
Features -------- * A succinct API which encourages up-front configuration of pre-defined "regions", each one defining a set of caching characteristics including storage backend, configuration options, and default expiration time. * A standard get/set/delete API as well as a function decorator API is provided. * The mechanics of key generation are fully customizable. The function decorator API features a pluggable "key generator" to customize how cache keys are made to correspond to function calls, and an optional "key mangler" feature provides for pluggable mangling of keys (such as encoding, SHA-1 hashing) as desired for each region. * The dogpile lock, first developed as the core engine behind the Beaker caching system, here vastly simplified, improved, and better tested. Some key performance issues that were intrinsic to Beaker's architecture, particularly that values would frequently be "double-fetched" from the cache, have been fixed. * Backends implement their own version of a "distributed" lock, where the "distribution" matches the backend's storage system. For example, the memcached backends allow all clients to coordinate creation of values using memcached itself. The dbm file backend uses a lockfile alongside the dbm file. New backends, such as a Redis-based backend, can provide their own locking mechanism appropriate to the storage engine. * Writing new backends or hacking on the existing backends is intended to be routine - all that's needed are basic get/set/delete methods. A distributed lock tailored towards the backend is an optional addition, else dogpile uses a regular thread mutex. New backends can be registered with dogpile.cache directly or made available via setuptools entry points. * Included backends feature three memcached backends (python-memcached, pylibmc, bmemcached), a Redis backend, a backend based on Python's anydbm, and a plain dictionary backend. * Space for third party plugins, including one which provides the dogpile.cache engine to Mako templates. 
Keywords: caching Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: BSD License Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 dogpile.cache-0.9.0/dogpile.cache.egg-info/SOURCES.txt0000664000175000017500000000534013555610710023343 0ustar classicclassic00000000000000LICENSE MANIFEST.in README.rst hash_port.py log_tests.ini setup.cfg setup.py tox.ini docs/api.html docs/changelog.html docs/core_usage.html docs/front.html docs/genindex.html docs/index.html docs/py-modindex.html docs/recipes.html docs/search.html docs/searchindex.js docs/usage.html docs/_sources/api.rst.txt docs/_sources/changelog.rst.txt docs/_sources/core_usage.rst.txt docs/_sources/front.rst.txt docs/_sources/index.rst.txt docs/_sources/recipes.rst.txt docs/_sources/usage.rst.txt docs/_static/basic.css docs/_static/changelog.css docs/_static/doctools.js docs/_static/documentation_options.js docs/_static/file.png docs/_static/jquery-3.4.1.js docs/_static/jquery.js docs/_static/language_data.js docs/_static/minus.png docs/_static/nature.css docs/_static/nature_override.css docs/_static/plus.png docs/_static/pygments.css docs/_static/searchtools.js docs/_static/site_custom_css.css docs/_static/sphinx_paramlinks.css docs/_static/underscore-1.3.1.js docs/_static/underscore.js docs/build/Makefile docs/build/api.rst docs/build/builder.py docs/build/changelog.rst docs/build/conf.py docs/build/core_usage.rst docs/build/front.rst docs/build/index.rst docs/build/recipes.rst docs/build/requirements.txt docs/build/usage.rst docs/build/_static/nature_override.css docs/build/_static/site_custom_css.css docs/build/_templates/site_custom_sidebars.html docs/build/unreleased/README.txt dogpile/__init__.py dogpile/core.py dogpile/lock.py dogpile.cache.egg-info/PKG-INFO dogpile.cache.egg-info/SOURCES.txt dogpile.cache.egg-info/dependency_links.txt dogpile.cache.egg-info/entry_points.txt dogpile.cache.egg-info/not-zip-safe dogpile.cache.egg-info/requires.txt dogpile.cache.egg-info/top_level.txt dogpile/cache/__init__.py dogpile/cache/api.py dogpile/cache/exception.py dogpile/cache/proxy.py dogpile/cache/region.py dogpile/cache/util.py dogpile/cache/backends/__init__.py dogpile/cache/backends/file.py dogpile/cache/backends/memcached.py dogpile/cache/backends/memory.py dogpile/cache/backends/null.py dogpile/cache/backends/redis.py dogpile/cache/plugins/__init__.py dogpile/cache/plugins/mako_cache.py dogpile/util/__init__.py dogpile/util/compat.py dogpile/util/langhelpers.py dogpile/util/nameregistry.py dogpile/util/readwrite_lock.py tests/__init__.py tests/conftest.py tests/test_backgrounding.py tests/test_lock.py tests/test_utils.py tests/cache/__init__.py tests/cache/_fixtures.py tests/cache/test_dbm_backend.py tests/cache/test_decorator.py tests/cache/test_mako.py tests/cache/test_memcached_backend.py tests/cache/test_memory_backend.py tests/cache/test_null_backend.py tests/cache/test_redis_backend.py tests/cache/test_region.py tests/cache/plugins/__init__.py tests/cache/plugins/test_mako_cache.py tests/util/__init__.py tests/util/test_nameregistry.pydogpile.cache-0.9.0/dogpile.cache.egg-info/dependency_links.txt0000664000175000017500000000000113555610710025523 0ustar classicclassic00000000000000 dogpile.cache-0.9.0/dogpile.cache.egg-info/entry_points.txt0000664000175000017500000000012613555610710024752 0ustar classicclassic00000000000000 [mako.cache] dogpile.cache = 
dogpile.cache.plugins.mako_cache:MakoPlugin dogpile.cache-0.9.0/dogpile.cache.egg-info/not-zip-safe0000664000175000017500000000000113555610710023703 0ustar classicclassic00000000000000 dogpile.cache-0.9.0/dogpile.cache.egg-info/requires.txt0000664000175000017500000000002113555610710024046 0ustar classicclassic00000000000000decorator>=4.0.0 dogpile.cache-0.9.0/dogpile.cache.egg-info/top_level.txt0000664000175000017500000000001013555610710024176 0ustar classicclassic00000000000000dogpile dogpile.cache-0.9.0/hash_port.py0000664000175000017500000000150413555610667017652 0ustar classicclassic00000000000000""" Helper script which provides an integer number from a given range based on a hash of current directory name. This is used in continuous integration as a helper to provide ports to assign to services like Redis, Memcached when they are run on a per-test basis. E.g. in a Jenkins job, one could put as the run command:: export TOX_DOGPILE_PORT=`python hash_port.py 10000 34000` tox -r -e ${pyv}-${backend} So you'd get one TOX_DOGPILE_PORT for the script in /var/lib/jenkins-workspace/py27-redis, another TOX_DOGPILE_PORT for the script in /var/lib/jenkins-workspace/py34-memcached. tox calls the pifpaf tool to run redis/ memcached local to that build and has it listen on this port. """ import os import sys start, end = int(sys.argv[1]), int(sys.argv[2]) dir_ = os.getcwd() print(hash(dir_) % (end - start) + start) dogpile.cache-0.9.0/log_tests.ini0000664000175000017500000000100113555610667020005 0ustar classicclassic00000000000000[loggers] keys = root, dogpilecore, tests [handlers] keys = console [formatters] keys = generic [logger_root] level = CRITICAL handlers = console [logger_dogpilecore] level = DEBUG qualname = dogpile.core handlers = [logger_tests] level = DEBUG qualname = tests handlers = [handler_console] class = StreamHandler args = (sys.stderr,) level = NOTSET formatter = generic [formatter_generic] format = %(asctime)s,%(msecs)03d %(levelname)-5.5s [%(name)s] [%(thread)s] %(message)s datefmt = %Y-%m-%d %H:%M:%S dogpile.cache-0.9.0/setup.cfg0000664000175000017500000000103013555610710017113 0ustar classicclassic00000000000000[egg_info] tag_build = tag_date = 0 [upload_docs] upload-dir = docs/build/output/html [wheel] universal = 1 [upload] sign = 1 identity = C4DAFEE1 [flake8] enable-extensions = G ignore = A003, D, E203,E305,E711,E712,E721,E722,E741, N801,N802,N806, RST304,RST303,RST299,RST399, W503,W504 exclude = .venv,.git,.tox,dist,docs/*,*egg,build import-order-style = google application-import-names = dogpile,tests [tool:pytest] addopts = --tb native -v -r fxX -p no:logging python_files = tests/*test_*.py filterwarnings = error dogpile.cache-0.9.0/setup.py0000664000175000017500000000322613555610667017026 0ustar classicclassic00000000000000import os import re import sys from setuptools import find_packages from setuptools import setup from setuptools.command.test import test as TestCommand class UseTox(TestCommand): RED = 31 RESET_SEQ = "\033[0m" BOLD_SEQ = "\033[1m" COLOR_SEQ = "\033[1;%dm" def run_tests(self): sys.stderr.write( "%s%spython setup.py test is deprecated by pypa. 
Please invoke " "'tox' with no arguments for a basic test run.\n%s" % (self.COLOR_SEQ % self.RED, self.BOLD_SEQ, self.RESET_SEQ) ) sys.exit(1) v = open(os.path.join(os.path.dirname(__file__), "dogpile", "__init__.py")) VERSION = ( re.compile(r""".*__version__ = ["'](.*?)["']""", re.S) .match(v.read()) .group(1) ) v.close() readme = os.path.join(os.path.dirname(__file__), "README.rst") setup( name="dogpile.cache", version=VERSION, description="A caching front-end based on the Dogpile lock.", long_description=open(readme).read(), classifiers=[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "License :: OSI Approved :: BSD License", "Programming Language :: Python", "Programming Language :: Python :: 3", ], keywords="caching", author="Mike Bayer", author_email="mike_mp@zzzcomputing.com", url="https://github.com/sqlalchemy/dogpile.cache", license="BSD", packages=find_packages(".", exclude=["tests*"]), entry_points=""" [mako.cache] dogpile.cache = dogpile.cache.plugins.mako_cache:MakoPlugin """, zip_safe=False, install_requires=["decorator>=4.0.0"], cmdclass={"test": UseTox}, ) dogpile.cache-0.9.0/tests/0000775000175000017500000000000013555610710016440 5ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/__init__.py0000664000175000017500000000000013555610667020552 0ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/cache/0000775000175000017500000000000013555610710017503 5ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/cache/__init__.py0000664000175000017500000000225313555610667021621 0ustar classicclassic00000000000000from functools import wraps import re import time import pytest from dogpile.util import compat from dogpile.util.compat import configparser # noqa from dogpile.util.compat import io # noqa def eq_(a, b, msg=None): """Assert a == b, with repr messaging on failure.""" assert a == b, msg or "%r != %r" % (a, b) def is_(a, b, msg=None): """Assert a is b, with repr messaging on failure.""" assert a is b, msg or "%r is not %r" % (a, b) def ne_(a, b, msg=None): """Assert a != b, with repr messaging on failure.""" assert a != b, msg or "%r == %r" % (a, b) def assert_raises_message(except_cls, msg, callable_, *args, **kwargs): try: callable_(*args, **kwargs) assert False, "Callable did not raise an exception" except except_cls as e: assert re.search(msg, str(e)), "%r !~ %s" % (msg, e) def winsleep(): # sleep for an amount of time # sufficient for windows time.time() # to change if compat.win32: time.sleep(0.001) def requires_py3k(fn): @wraps(fn) def wrap(*arg, **kw): if compat.py2k: pytest.skip("Python 3 required") return fn(*arg, **kw) return wrap dogpile.cache-0.9.0/tests/cache/_fixtures.py0000664000175000017500000003076213555610667022064 0ustar classicclassic00000000000000import collections import itertools import random from threading import Lock from threading import Thread import time from unittest import TestCase import pytest from dogpile.cache import CacheRegion from dogpile.cache import register_backend from dogpile.cache.api import CacheBackend from dogpile.cache.api import NO_VALUE from dogpile.cache.region import _backend_loader from . import assert_raises_message from . 
import eq_ class _GenericBackendFixture(object): @classmethod def setup_class(cls): backend_cls = _backend_loader.load(cls.backend) try: arguments = cls.config_args.get("arguments", {}) backend = backend_cls(arguments) except ImportError: pytest.skip("Backend %s not installed" % cls.backend) cls._check_backend_available(backend) def tearDown(self): if self._region_inst: for key in self._keys: self._region_inst.delete(key) self._keys.clear() elif self._backend_inst: self._backend_inst.delete("some_key") @classmethod def _check_backend_available(cls, backend): pass region_args = {} config_args = {} _region_inst = None _backend_inst = None _keys = set() def _region(self, backend=None, region_args={}, config_args={}): _region_args = self.region_args.copy() _region_args.update(**region_args) _config_args = self.config_args.copy() _config_args.update(config_args) def _store_keys(key): if existing_key_mangler: key = existing_key_mangler(key) self._keys.add(key) return key self._region_inst = reg = CacheRegion(**_region_args) existing_key_mangler = self._region_inst.key_mangler self._region_inst.key_mangler = _store_keys self._region_inst._user_defined_key_mangler = _store_keys reg.configure(backend or self.backend, **_config_args) return reg def _backend(self): backend_cls = _backend_loader.load(self.backend) _config_args = self.config_args.copy() arguments = _config_args.get("arguments", {}) self._backend_inst = backend_cls(arguments) return self._backend_inst class _GenericBackendTest(_GenericBackendFixture, TestCase): def test_backend_get_nothing(self): backend = self._backend() eq_(backend.get("some_key"), NO_VALUE) def test_backend_delete_nothing(self): backend = self._backend() backend.delete("some_key") def test_backend_set_get_value(self): backend = self._backend() backend.set("some_key", "some value") eq_(backend.get("some_key"), "some value") def test_backend_delete(self): backend = self._backend() backend.set("some_key", "some value") backend.delete("some_key") eq_(backend.get("some_key"), NO_VALUE) def test_region_set_get_value(self): reg = self._region() reg.set("some key", "some value") eq_(reg.get("some key"), "some value") def test_region_set_multiple_values(self): reg = self._region() values = {"key1": "value1", "key2": "value2", "key3": "value3"} reg.set_multi(values) eq_(values["key1"], reg.get("key1")) eq_(values["key2"], reg.get("key2")) eq_(values["key3"], reg.get("key3")) def test_region_get_zero_multiple_values(self): reg = self._region() eq_(reg.get_multi([]), []) def test_region_set_zero_multiple_values(self): reg = self._region() reg.set_multi({}) def test_region_set_zero_multiple_values_w_decorator(self): reg = self._region() values = reg.get_or_create_multi([], lambda: 0) eq_(values, []) def test_region_get_or_create_multi_w_should_cache_none(self): reg = self._region() values = reg.get_or_create_multi( ["key1", "key2", "key3"], lambda *k: [None, None, None], should_cache_fn=lambda v: v is not None, ) eq_(values, [None, None, None]) def test_region_get_multiple_values(self): reg = self._region() key1 = "value1" key2 = "value2" key3 = "value3" reg.set("key1", key1) reg.set("key2", key2) reg.set("key3", key3) values = reg.get_multi(["key1", "key2", "key3"]) eq_([key1, key2, key3], values) def test_region_get_nothing_multiple(self): reg = self._region() reg.delete_multi(["key1", "key2", "key3", "key4", "key5"]) values = {"key1": "value1", "key3": "value3", "key5": "value5"} reg.set_multi(values) reg_values = reg.get_multi( ["key1", "key2", "key3", "key4", "key5", "key6"] 
) eq_( reg_values, ["value1", NO_VALUE, "value3", NO_VALUE, "value5", NO_VALUE], ) def test_region_get_empty_multiple(self): reg = self._region() reg_values = reg.get_multi([]) eq_(reg_values, []) def test_region_delete_multiple(self): reg = self._region() values = {"key1": "value1", "key2": "value2", "key3": "value3"} reg.set_multi(values) reg.delete_multi(["key2", "key10"]) eq_(values["key1"], reg.get("key1")) eq_(NO_VALUE, reg.get("key2")) eq_(values["key3"], reg.get("key3")) eq_(NO_VALUE, reg.get("key10")) def test_region_set_get_nothing(self): reg = self._region() reg.delete_multi(["some key"]) eq_(reg.get("some key"), NO_VALUE) def test_region_creator(self): reg = self._region() def creator(): return "some value" eq_(reg.get_or_create("some key", creator), "some value") def test_threaded_dogpile(self): # run a basic dogpile concurrency test. # note the concurrency of dogpile itself # is intensively tested as part of dogpile. reg = self._region(config_args={"expiration_time": 0.25}) lock = Lock() canary = [] def creator(): ack = lock.acquire(False) canary.append(ack) time.sleep(0.25) if ack: lock.release() return "some value" def f(): for x in range(5): reg.get_or_create("some key", creator) time.sleep(0.5) threads = [Thread(target=f) for i in range(10)] for t in threads: t.start() for t in threads: t.join() assert len(canary) > 2 if not reg.backend.has_lock_timeout(): assert False not in canary else: assert False in canary def test_threaded_get_multi(self): reg = self._region(config_args={"expiration_time": 0.25}) locks = dict((str(i), Lock()) for i in range(11)) canary = collections.defaultdict(list) def creator(*keys): assert keys ack = [locks[key].acquire(False) for key in keys] # print( # ("%s " % thread.get_ident()) + \ # ", ".join(sorted("%s=%s" % (key, acq) # for acq, key in zip(ack, keys))) # ) for acq, key in zip(ack, keys): canary[key].append(acq) time.sleep(0.5) for acq, key in zip(ack, keys): if acq: locks[key].release() return ["some value %s" % k for k in keys] def f(): for x in range(5): reg.get_or_create_multi( [ str(random.randint(1, 10)) for i in range(random.randint(1, 5)) ], creator, ) time.sleep(0.5) f() return threads = [Thread(target=f) for i in range(5)] for t in threads: t.start() for t in threads: t.join() assert sum([len(v) for v in canary.values()]) > 10 for l in canary.values(): assert False not in l def test_region_delete(self): reg = self._region() reg.set("some key", "some value") reg.delete("some key") reg.delete("some key") eq_(reg.get("some key"), NO_VALUE) def test_region_expire(self): reg = self._region(config_args={"expiration_time": 0.25}) counter = itertools.count(1) def creator(): return "some value %d" % next(counter) eq_(reg.get_or_create("some key", creator), "some value 1") time.sleep(0.4) eq_(reg.get("some key", ignore_expiration=True), "some value 1") eq_(reg.get_or_create("some key", creator), "some value 2") eq_(reg.get("some key"), "some value 2") def test_decorated_fn_functionality(self): # test for any quirks in the fn decoration that interact # with the backend. 
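# (the key-generation mechanics of the decorator itself are exercised in
# tests/cache/test_decorator.py; this test only verifies the round trip
# through this backend's get/set/delete)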
reg = self._region() counter = itertools.count(1) @reg.cache_on_arguments() def my_function(x, y): return next(counter) + x + y # Start with a clean slate my_function.invalidate(3, 4) my_function.invalidate(5, 6) my_function.invalidate(4, 3) eq_(my_function(3, 4), 8) eq_(my_function(5, 6), 13) eq_(my_function(3, 4), 8) eq_(my_function(4, 3), 10) my_function.invalidate(4, 3) eq_(my_function(4, 3), 11) def test_exploding_value_fn(self): reg = self._region() def boom(): raise Exception("boom") assert_raises_message( Exception, "boom", reg.get_or_create, "some_key", boom ) class _GenericMutexTest(_GenericBackendFixture, TestCase): def test_mutex(self): backend = self._backend() mutex = backend.get_mutex("foo") ac = mutex.acquire() assert ac ac2 = mutex.acquire(False) assert not ac2 mutex.release() ac3 = mutex.acquire() assert ac3 mutex.release() def test_mutex_threaded(self): backend = self._backend() backend.get_mutex("foo") lock = Lock() canary = [] def f(): for x in range(5): mutex = backend.get_mutex("foo") mutex.acquire() for y in range(5): ack = lock.acquire(False) canary.append(ack) time.sleep(0.002) if ack: lock.release() mutex.release() time.sleep(0.02) threads = [Thread(target=f) for i in range(5)] for t in threads: t.start() for t in threads: t.join() assert False not in canary def test_mutex_reentrant_across_keys(self): backend = self._backend() for x in range(3): m1 = backend.get_mutex("foo") m2 = backend.get_mutex("bar") try: m1.acquire() assert m2.acquire(False) assert not m2.acquire(False) m2.release() assert m2.acquire(False) assert not m2.acquire(False) m2.release() finally: m1.release() def test_reentrant_dogpile(self): reg = self._region() def create_foo(): return "foo" + reg.get_or_create("bar", create_bar) def create_bar(): return "bar" eq_(reg.get_or_create("foo", create_foo), "foobar") eq_(reg.get_or_create("foo", create_foo), "foobar") class MockMutex(object): def __init__(self, key): self.key = key def acquire(self, blocking=True): return True def release(self): return class MockBackend(CacheBackend): def __init__(self, arguments): self.arguments = arguments self._cache = {} def get_mutex(self, key): return MockMutex(key) def get(self, key): try: return self._cache[key] except KeyError: return NO_VALUE def get_multi(self, keys): return [self.get(key) for key in keys] def set(self, key, value): self._cache[key] = value def set_multi(self, mapping): for key, value in mapping.items(): self.set(key, value) def delete(self, key): self._cache.pop(key, None) def delete_multi(self, keys): for key in keys: self.delete(key) register_backend("mock", __name__, "MockBackend") dogpile.cache-0.9.0/tests/cache/plugins/0000775000175000017500000000000013555610710021164 5ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/cache/plugins/__init__.py0000664000175000017500000000000013555610667023276 0ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/cache/plugins/test_mako_cache.py0000664000175000017500000000253613555610667024670 0ustar classicclassic00000000000000from unittest import TestCase from mako.cache import register_plugin from mako.template import Template import mock import pytest from .. 
import eq_ try: import mako # noqa except ImportError: raise pytest.skip("this test suite requires mako templates") register_plugin( "dogpile.cache", "dogpile.cache.plugins.mako_cache", "MakoPlugin" ) class TestMakoPlugin(TestCase): def _mock_fixture(self): reg = mock.MagicMock() reg.get_or_create.return_value = "hello world" my_regions = {"myregion": reg} return ( { "cache_impl": "dogpile.cache", "cache_args": {"regions": my_regions}, }, reg, ) def test_basic(self): kw, reg = self._mock_fixture() t = Template('<%page cached="True" cache_region="myregion"/>hi', **kw) t.render() eq_(reg.get_or_create.call_count, 1) def test_timeout(self): kw, reg = self._mock_fixture() t = Template( """ <%def name="mydef()" cached="True" cache_region="myregion" cache_timeout="20"> some content ${mydef()} """, **kw ) t.render() eq_(reg.get_or_create.call_args[1], {"expiration_time": 20}) dogpile.cache-0.9.0/tests/cache/test_dbm_backend.py0000664000175000017500000000500613555610667023341 0ustar classicclassic00000000000000import os import sys from dogpile.cache.backends.file import AbstractFileLock from dogpile.util.readwrite_lock import ReadWriteMutex from . import assert_raises_message from ._fixtures import _GenericBackendTest from ._fixtures import _GenericMutexTest try: import fcntl # noqa has_fcntl = True except ImportError: has_fcntl = False class MutexLock(AbstractFileLock): def __init__(self, filename): self.mutex = ReadWriteMutex() def acquire_read_lock(self, wait): ret = self.mutex.acquire_read_lock(wait) return wait or ret def acquire_write_lock(self, wait): ret = self.mutex.acquire_write_lock(wait) return wait or ret def release_read_lock(self): return self.mutex.release_read_lock() def release_write_lock(self): return self.mutex.release_write_lock() test_fname = "test_%s.db" % sys.hexversion if has_fcntl: class DBMBackendTest(_GenericBackendTest): backend = "dogpile.cache.dbm" config_args = {"arguments": {"filename": test_fname}} class DBMBackendConditionTest(_GenericBackendTest): backend = "dogpile.cache.dbm" config_args = { "arguments": {"filename": test_fname, "lock_factory": MutexLock} } class DBMBackendNoLockTest(_GenericBackendTest): backend = "dogpile.cache.dbm" config_args = { "arguments": { "filename": test_fname, "rw_lockfile": False, "dogpile_lockfile": False, } } class _DBMMutexTest(_GenericMutexTest): backend = "dogpile.cache.dbm" def test_release_assertion_thread(self): backend = self._backend() m1 = backend.get_mutex("foo") assert_raises_message( AssertionError, "this thread didn't do the acquire", m1.release ) def test_release_assertion_key(self): backend = self._backend() m1 = backend.get_mutex("foo") m2 = backend.get_mutex("bar") m1.acquire() try: assert_raises_message( AssertionError, "No acquire held for key 'bar'", m2.release ) finally: m1.release() if has_fcntl: class DBMMutexFileTest(_DBMMutexTest): config_args = {"arguments": {"filename": test_fname}} class DBMMutexConditionTest(_DBMMutexTest): config_args = { "arguments": {"filename": test_fname, "lock_factory": MutexLock} } def teardown(): for fname in os.listdir(os.curdir): if fname.startswith(test_fname): os.unlink(fname) dogpile.cache-0.9.0/tests/cache/test_decorator.py0000664000175000017500000004556213555610667023125 0ustar classicclassic00000000000000#! coding: utf-8 import itertools import time from unittest import TestCase from dogpile.cache import util from dogpile.cache.api import NO_VALUE from dogpile.util import compat from . import eq_ from . import requires_py3k from . 
import winsleep from ._fixtures import _GenericBackendFixture class DecoratorTest(_GenericBackendFixture, TestCase): backend = "dogpile.cache.memory" def _fixture( self, namespace=None, expiration_time=None, key_generator=None ): reg = self._region(config_args={"expiration_time": 0.25}) counter = itertools.count(1) @reg.cache_on_arguments( namespace=namespace, expiration_time=expiration_time, function_key_generator=key_generator, ) def go(a, b): val = next(counter) return val, a, b return go def _multi_fixture( self, namespace=None, expiration_time=None, key_generator=None ): reg = self._region(config_args={"expiration_time": 0.25}) counter = itertools.count(1) @reg.cache_multi_on_arguments( namespace=namespace, expiration_time=expiration_time, function_multi_key_generator=key_generator, ) def go(*args): val = next(counter) return ["%d %s" % (val, arg) for arg in args] return go def test_decorator(self): go = self._fixture() eq_(go(1, 2), (1, 1, 2)) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 2), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 2), (3, 1, 2)) def test_decorator_namespace(self): # TODO: test the namespace actually # working somehow... go = self._fixture(namespace="x") eq_(go(1, 2), (1, 1, 2)) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 2), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 2), (3, 1, 2)) def test_decorator_custom_expire(self): go = self._fixture(expiration_time=0.5) eq_(go(1, 2), (1, 1, 2)) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 2), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 2), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 2), (3, 1, 2)) def test_decorator_expire_callable(self): go = self._fixture(expiration_time=lambda: 0.5) eq_(go(1, 2), (1, 1, 2)) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 2), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 2), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 2), (3, 1, 2)) def test_decorator_expire_callable_zero(self): go = self._fixture(expiration_time=lambda: 0) eq_(go(1, 2), (1, 1, 2)) winsleep() eq_(go(1, 2), (2, 1, 2)) winsleep() eq_(go(1, 2), (3, 1, 2)) def test_explicit_expire(self): go = self._fixture(expiration_time=1) eq_(go(1, 2), (1, 1, 2)) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 2), (1, 1, 2)) go.invalidate(1, 2) eq_(go(1, 2), (3, 1, 2)) def test_explicit_set(self): go = self._fixture(expiration_time=1) eq_(go(1, 2), (1, 1, 2)) go.set(5, 1, 2) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 2), 5) go.invalidate(1, 2) eq_(go(1, 2), (3, 1, 2)) go.set(0, 1, 3) eq_(go(1, 3), 0) def test_explicit_get(self): go = self._fixture(expiration_time=1) eq_(go(1, 2), (1, 1, 2)) eq_(go.get(1, 2), (1, 1, 2)) eq_(go.get(2, 1), NO_VALUE) eq_(go(2, 1), (2, 2, 1)) eq_(go.get(2, 1), (2, 2, 1)) def test_explicit_get_multi(self): go = self._multi_fixture(expiration_time=1) eq_(go(1, 2), ["1 1", "1 2"]) eq_(go.get(1, 2), ["1 1", "1 2"]) eq_(go.get(3, 1), [NO_VALUE, "1 1"]) eq_(go(3, 1), ["2 3", "1 1"]) eq_(go.get(3, 1), ["2 3", "1 1"]) def test_explicit_set_multi(self): go = self._multi_fixture(expiration_time=1) eq_(go(1, 2), ["1 1", "1 2"]) eq_(go(1, 2), ["1 1", "1 2"]) go.set({1: "1 5", 2: "1 6"}) eq_(go(1, 2), ["1 5", "1 6"]) def test_explicit_refresh(self): go = self._fixture(expiration_time=1) eq_(go(1, 2), (1, 1, 2)) eq_(go.refresh(1, 2), (2, 1, 2)) eq_(go(1, 2), (2, 1, 2)) eq_(go(1, 2), (2, 1, 2)) eq_(go.refresh(1, 2), (3, 1, 2)) eq_(go(1, 2), (3, 1, 2)) def test_explicit_refresh_multi(self): go = self._multi_fixture(expiration_time=1) eq_(go(1, 2), ["1 1", "1 2"]) eq_(go(1, 2), ["1 1", "1 2"]) eq_(go.refresh(1, 2), ["2 1", "2 2"]) eq_(go(1, 2), ["2 1", "2 2"]) eq_(go(1, 2), ["2 1", "2 2"]) def test_decorator_key_generator(self): 
def my_key_generator(namespace, fn, **kw): fname = fn.__name__ def generate_key_with_first_argument(*args): return fname + "_" + str(args[0]) return generate_key_with_first_argument go = self._fixture(key_generator=my_key_generator) eq_(go(1, 2), (1, 1, 2)) eq_(go(3, 4), (2, 3, 4)) eq_(go(1, 3), (1, 1, 2)) time.sleep(0.3) eq_(go(1, 3), (3, 1, 3)) def test_decorator_key_generator_multi(self): def my_key_generator(namespace, fn, **kw): fname = fn.__name__ def generate_key_with_reversed_order(*args): return [fname + "_" + str(a) for a in args][::-1] return generate_key_with_reversed_order go = self._multi_fixture(key_generator=my_key_generator) eq_(go(1, 2), ["1 1", "1 2"]) eq_(go.get(1, 2), ["1 1", "1 2"]) eq_(go.get(3, 1), ["1 2", NO_VALUE]) eq_(go(3, 1), ["1 2", "2 1"]) eq_(go.get(3, 1), ["1 2", "2 1"]) class KeyGenerationTest(TestCase): def _keygen_decorator(self, namespace=None, **kw): canary = [] def decorate(fn): canary.append(util.function_key_generator(namespace, fn, **kw)) return fn return decorate, canary def _multi_keygen_decorator(self, namespace=None, **kw): canary = [] def decorate(fn): canary.append( util.function_multi_key_generator(namespace, fn, **kw) ) return fn return decorate, canary def _kwarg_keygen_decorator(self, namespace=None, **kw): canary = [] def decorate(fn): canary.append( util.kwarg_function_key_generator(namespace, fn, **kw) ) return fn return decorate, canary def test_default_keygen_kwargs_raises_value_error(self): decorate, canary = self._keygen_decorator() @decorate def one(a, b): pass gen = canary[0] self.assertRaises(ValueError, gen, 1, b=2) def test_kwarg_keygen_fn(self): decorate, canary = self._kwarg_keygen_decorator() @decorate def one(a, b): pass gen = canary[0] result_key = "tests.cache.test_decorator:one|1 2" eq_(gen(1, 2), result_key) eq_(gen(1, b=2), result_key) eq_(gen(a=1, b=2), result_key) eq_(gen(b=2, a=1), result_key) def test_kwarg_keygen_fn_with_defaults_and_positional(self): decorate, canary = self._kwarg_keygen_decorator() @decorate def one(a, b=None): pass gen = canary[0] result_key = "tests.cache.test_decorator:one|1 2" eq_(gen(1, 2), result_key) eq_(gen(1, b=2), result_key) eq_(gen(a=1, b=2), result_key) eq_(gen(b=2, a=1), result_key) eq_(gen(a=1), "tests.cache.test_decorator:one|1 None") def test_kwarg_keygen_fn_all_defaults(self): decorate, canary = self._kwarg_keygen_decorator() @decorate def one(a=True, b=None): pass gen = canary[0] result_key = "tests.cache.test_decorator:one|1 2" eq_(gen(1, 2), result_key) eq_(gen(1, b=2), result_key) eq_(gen(a=1, b=2), result_key) eq_(gen(b=2, a=1), result_key) eq_(gen(a=1), "tests.cache.test_decorator:one|1 None") eq_(gen(1), "tests.cache.test_decorator:one|1 None") eq_(gen(), "tests.cache.test_decorator:one|True None") eq_(gen(b=2), "tests.cache.test_decorator:one|True 2") def test_keygen_fn(self): decorate, canary = self._keygen_decorator() @decorate def one(a, b): pass gen = canary[0] eq_(gen(1, 2), "tests.cache.test_decorator:one|1 2") eq_(gen(None, 5), "tests.cache.test_decorator:one|None 5") def test_multi_keygen_fn(self): decorate, canary = self._multi_keygen_decorator() @decorate def one(a, b): pass gen = canary[0] eq_( gen(1, 2), [ "tests.cache.test_decorator:one|1", "tests.cache.test_decorator:one|2", ], ) def test_keygen_fn_namespace(self): decorate, canary = self._keygen_decorator("mynamespace") @decorate def one(a, b): pass gen = canary[0] eq_(gen(1, 2), "tests.cache.test_decorator:one|mynamespace|1 2") eq_(gen(None, 5), 
"tests.cache.test_decorator:one|mynamespace|None 5") def test_kwarg_keygen_fn_namespace(self): decorate, canary = self._kwarg_keygen_decorator("mynamespace") @decorate def one(a, b): pass gen = canary[0] eq_(gen(1, 2), "tests.cache.test_decorator:one|mynamespace|1 2") eq_(gen(None, 5), "tests.cache.test_decorator:one|mynamespace|None 5") def test_key_isnt_unicode_bydefault(self): decorate, canary = self._keygen_decorator("mynamespace") @decorate def one(a, b): pass gen = canary[0] assert isinstance(gen("foo"), str) def test_kwarg_kwgen_key_isnt_unicode_bydefault(self): decorate, canary = self._kwarg_keygen_decorator("mynamespace") @decorate def one(a, b): pass gen = canary[0] assert isinstance(gen("foo"), str) def test_unicode_key(self): decorate, canary = self._keygen_decorator( "mynamespace", to_str=compat.text_type ) @decorate def one(a, b): pass gen = canary[0] eq_( gen(compat.u("méil"), compat.u("drôle")), compat.ue( "tests.cache.test_decorator:" "one|mynamespace|m\xe9il dr\xf4le" ), ) def test_unicode_key_kwarg_generator(self): decorate, canary = self._kwarg_keygen_decorator( "mynamespace", to_str=compat.text_type ) @decorate def one(a, b): pass gen = canary[0] eq_( gen(compat.u("méil"), compat.u("drôle")), compat.ue( "tests.cache.test_decorator:" "one|mynamespace|m\xe9il dr\xf4le" ), ) def test_unicode_key_multi(self): decorate, canary = self._multi_keygen_decorator( "mynamespace", to_str=compat.text_type ) @decorate def one(a, b): pass gen = canary[0] eq_( gen(compat.u("méil"), compat.u("drôle")), [ compat.ue( "tests.cache.test_decorator:one|mynamespace|m\xe9il" ), compat.ue( "tests.cache.test_decorator:one|mynamespace|dr\xf4le" ), ], ) @requires_py3k def test_unicode_key_by_default(self): decorate, canary = self._keygen_decorator( "mynamespace", to_str=compat.text_type ) @decorate def one(a, b): pass gen = canary[0] assert isinstance(gen("méil"), str) eq_( gen("méil", "drôle"), "tests.cache.test_decorator:" "one|mynamespace|m\xe9il dr\xf4le", ) @requires_py3k def test_unicode_key_by_default_kwarg_generator(self): decorate, canary = self._kwarg_keygen_decorator( "mynamespace", to_str=compat.text_type ) @decorate def one(a, b): pass gen = canary[0] assert isinstance(gen("méil"), str) eq_( gen("méil", "drôle"), "tests.cache.test_decorator:" "one|mynamespace|m\xe9il dr\xf4le", ) def test_sha1_key_mangler(self): decorate, canary = self._keygen_decorator() @decorate def one(a, b): pass gen = canary[0] key = gen(1, 2) eq_( util.sha1_mangle_key(key), "aead490a8ace2d69a00160f1fd8fd8a16552c24f", ) def test_sha1_key_mangler_unicode_py2k(self): eq_( util.sha1_mangle_key(u"some_key"), "53def077a4264bd3183d4eb21b1f56f883e1b572", ) def test_sha1_key_mangler_bytes_py3k(self): eq_( util.sha1_mangle_key(b"some_key"), "53def077a4264bd3183d4eb21b1f56f883e1b572", ) class CacheDecoratorTest(_GenericBackendFixture, TestCase): backend = "mock" def test_cache_arg(self): reg = self._region() counter = itertools.count(1) @reg.cache_on_arguments() def generate(x, y): return next(counter) + x + y eq_(generate(1, 2), 4) eq_(generate(2, 1), 5) eq_(generate(1, 2), 4) generate.invalidate(1, 2) eq_(generate(1, 2), 6) def test_original_fn_set(self): reg = self._region(backend="dogpile.cache.memory") counter = itertools.count(1) def generate(x, y): return next(counter) + x + y decorated = reg.cache_on_arguments()(generate) eq_(decorated.original, generate) def test_reentrant_call(self): reg = self._region(backend="dogpile.cache.memory") counter = itertools.count(1) # if these two classes get the same namespace, # 
you get a reentrant deadlock. class Foo(object): @classmethod @reg.cache_on_arguments(namespace="foo") def generate(cls, x, y): return next(counter) + x + y class Bar(object): @classmethod @reg.cache_on_arguments(namespace="bar") def generate(cls, x, y): return Foo.generate(x, y) eq_(Bar.generate(1, 2), 4) def test_multi(self): reg = self._region() counter = itertools.count(1) @reg.cache_multi_on_arguments() def generate(*args): return ["%d %d" % (arg, next(counter)) for arg in args] eq_(generate(2, 8, 10), ["2 2", "8 3", "10 1"]) eq_(generate(2, 9, 10), ["2 2", "9 4", "10 1"]) generate.invalidate(2) eq_(generate(2, 7, 10), ["2 5", "7 6", "10 1"]) generate.set({7: 18, 10: 15}) eq_(generate(2, 7, 10), ["2 5", 18, 15]) def test_multi_asdict(self): reg = self._region() counter = itertools.count(1) @reg.cache_multi_on_arguments(asdict=True) def generate(*args): return dict( [(arg, "%d %d" % (arg, next(counter))) for arg in args] ) eq_(generate(2, 8, 10), {2: "2 2", 8: "8 3", 10: "10 1"}) eq_(generate(2, 9, 10), {2: "2 2", 9: "9 4", 10: "10 1"}) generate.invalidate(2) eq_(generate(2, 7, 10), {2: "2 5", 7: "7 6", 10: "10 1"}) generate.set({7: 18, 10: 15}) eq_(generate(2, 7, 10), {2: "2 5", 7: 18, 10: 15}) eq_(generate.refresh(2, 7), {2: "2 7", 7: "7 8"}) eq_(generate(2, 7, 10), {2: "2 7", 10: 15, 7: "7 8"}) def test_multi_asdict_keys_missing(self): reg = self._region() counter = itertools.count(1) @reg.cache_multi_on_arguments(asdict=True) def generate(*args): return dict( [ (arg, "%d %d" % (arg, next(counter))) for arg in args if arg != 10 ] ) eq_(generate(2, 8, 10), {2: "2 1", 8: "8 2"}) eq_(generate(2, 9, 10), {2: "2 1", 9: "9 3"}) assert reg.get(10) is NO_VALUE generate.invalidate(2) eq_(generate(2, 7, 10), {2: "2 4", 7: "7 5"}) generate.set({7: 18, 10: 15}) eq_(generate(2, 7, 10), {2: "2 4", 7: 18, 10: 15}) def test_multi_asdict_keys_missing_existing_cache_fn(self): reg = self._region() counter = itertools.count(1) @reg.cache_multi_on_arguments( asdict=True, should_cache_fn=lambda v: not v.startswith("8 ") ) def generate(*args): return dict( [ (arg, "%d %d" % (arg, next(counter))) for arg in args if arg != 10 ] ) eq_(generate(2, 8, 10), {2: "2 1", 8: "8 2"}) eq_(generate(2, 8, 10), {2: "2 1", 8: "8 3"}) eq_(generate(2, 8, 10), {2: "2 1", 8: "8 4"}) eq_(generate(2, 9, 10), {2: "2 1", 9: "9 5"}) assert reg.get(10) is NO_VALUE generate.invalidate(2) eq_(generate(2, 7, 10), {2: "2 6", 7: "7 7"}) generate.set({7: 18, 10: 15}) eq_(generate(2, 7, 10), {2: "2 6", 7: 18, 10: 15}) def test_multi_namespace(self): reg = self._region() counter = itertools.count(1) @reg.cache_multi_on_arguments(namespace="foo") def generate(*args): return ["%d %d" % (arg, next(counter)) for arg in args] eq_(generate(2, 8, 10), ["2 2", "8 3", "10 1"]) eq_(generate(2, 9, 10), ["2 2", "9 4", "10 1"]) eq_( sorted(list(reg.backend._cache)), [ "tests.cache.test_decorator:generate|foo|10", "tests.cache.test_decorator:generate|foo|2", "tests.cache.test_decorator:generate|foo|8", "tests.cache.test_decorator:generate|foo|9", ], ) generate.invalidate(2) eq_(generate(2, 7, 10), ["2 5", "7 6", "10 1"]) generate.set({7: 18, 10: 15}) eq_(generate(2, 7, 10), ["2 5", 18, 15]) def test_cache_preserve_sig(self): reg = self._region() def func(a, b, c=True, *args, **kwargs): return None signature = compat.inspect_getargspec(func) cached_func = reg.cache_on_arguments()(func) cached_signature = compat.inspect_getargspec(cached_func) self.assertEqual(signature, cached_signature) def test_cache_multi_preserve_sig(self): reg = self._region() def 
func(a, b, c=True, *args, **kwargs): return None, None signature = compat.inspect_getargspec(func) cached_func = reg.cache_multi_on_arguments()(func) cached_signature = compat.inspect_getargspec(cached_func) self.assertEqual(signature, cached_signature) dogpile.cache-0.9.0/tests/cache/test_mako.py0000664000175000017500000000065413555610667022063 0ustar classicclassic00000000000000from unittest import TestCase class MakoTest(TestCase): """ Test entry point for Mako """ def test_entry_point(self): import pkg_resources # if the entrypoint isn't there, just pass, as the tests can be run # without any setuptools install for impl in pkg_resources.iter_entry_points( "mako.cache", "dogpile.cache" ): impl.load() dogpile.cache-0.9.0/tests/cache/test_memcached_backend.py0000664000175000017500000001774613555610667024523 0ustar classicclassic00000000000000import os from threading import Thread import time from unittest import TestCase import weakref import pytest from dogpile.cache.backends.memcached import GenericMemcachedBackend from dogpile.cache.backends.memcached import MemcachedBackend from dogpile.cache.backends.memcached import PylibmcBackend from . import eq_ from ._fixtures import _GenericBackendTest from ._fixtures import _GenericMutexTest MEMCACHED_PORT = os.getenv("DOGPILE_MEMCACHED_PORT", "11211") MEMCACHED_URL = "127.0.0.1:%s" % MEMCACHED_PORT expect_memcached_running = bool(os.getenv("DOGPILE_MEMCACHED_PORT")) LOCK_TIMEOUT = 1 class _TestMemcachedConn(object): @classmethod def _check_backend_available(cls, backend): try: client = backend._create_client() client.set("x", "y") assert client.get("x") == "y" except Exception: if not expect_memcached_running: pytest.skip( "memcached is not running or " "otherwise not functioning correctly" ) else: raise class _NonDistributedMemcachedTest(_TestMemcachedConn, _GenericBackendTest): region_args = {"key_mangler": lambda x: x.replace(" ", "_")} config_args = {"arguments": {"url": MEMCACHED_URL}} class _DistributedMemcachedWithTimeoutTest( _TestMemcachedConn, _GenericBackendTest ): region_args = {"key_mangler": lambda x: x.replace(" ", "_")} config_args = { "arguments": { "url": MEMCACHED_URL, "distributed_lock": True, "lock_timeout": LOCK_TIMEOUT, } } class _DistributedMemcachedTest(_TestMemcachedConn, _GenericBackendTest): region_args = {"key_mangler": lambda x: x.replace(" ", "_")} config_args = { "arguments": {"url": MEMCACHED_URL, "distributed_lock": True} } class _DistributedMemcachedMutexTest(_TestMemcachedConn, _GenericMutexTest): config_args = { "arguments": {"url": MEMCACHED_URL, "distributed_lock": True} } class _DistributedMemcachedMutexWithTimeoutTest( _TestMemcachedConn, _GenericMutexTest ): config_args = { "arguments": { "url": MEMCACHED_URL, "distributed_lock": True, "lock_timeout": LOCK_TIMEOUT, } } class PylibmcTest(_NonDistributedMemcachedTest): backend = "dogpile.cache.pylibmc" class PylibmcDistributedTest(_DistributedMemcachedTest): backend = "dogpile.cache.pylibmc" class PylibmcDistributedMutexTest(_DistributedMemcachedMutexTest): backend = "dogpile.cache.pylibmc" class BMemcachedSkips(object): def test_threaded_dogpile(self): pytest.skip("bmemcached is too unreliable here") def test_threaded_get_multi(self): pytest.skip("bmemcached is too unreliable here") def test_mutex_threaded_dogpile(self): pytest.skip("bmemcached is too unreliable here") def test_mutex_threaded(self): pytest.skip("bmemcached is too unreliable here") class BMemcachedTest(BMemcachedSkips, _NonDistributedMemcachedTest): backend = "dogpile.cache.bmemcached" 
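# The bmemcached variants below follow the same pattern: each test class
# pairs the skip mixin with one of the generic fixtures and names its
# backend, so the whole get/set/mutex suite runs against every
# configuration.  A new client could be exercised the same way -- a
# minimal sketch, assuming a backend had been registered under the
# hypothetical name "dogpile.cache.someclient":
#
#     class SomeClientTest(_NonDistributedMemcachedTest):
#         backend = "dogpile.cache.someclient"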
class BMemcachedDistributedWithTimeoutTest( BMemcachedSkips, _DistributedMemcachedWithTimeoutTest ): backend = "dogpile.cache.bmemcached" class BMemcachedDistributedTest(BMemcachedSkips, _DistributedMemcachedTest): backend = "dogpile.cache.bmemcached" class BMemcachedDistributedMutexTest( BMemcachedSkips, _DistributedMemcachedMutexTest ): backend = "dogpile.cache.bmemcached" class BMemcachedDistributedMutexWithTimeoutTest( BMemcachedSkips, _DistributedMemcachedMutexWithTimeoutTest ): backend = "dogpile.cache.bmemcached" class MemcachedTest(_NonDistributedMemcachedTest): backend = "dogpile.cache.memcached" class MemcachedDistributedTest(_DistributedMemcachedTest): backend = "dogpile.cache.memcached" class MemcachedDistributedMutexTest(_DistributedMemcachedMutexTest): backend = "dogpile.cache.memcached" class MockGenericMemcachedBackend(GenericMemcachedBackend): def _imports(self): pass def _create_client(self): return MockClient(self.url) class MockMemcacheBackend(MemcachedBackend): def _imports(self): pass def _create_client(self): return MockClient(self.url) class MockPylibmcBackend(PylibmcBackend): def _imports(self): pass def _create_client(self): return MockClient( self.url, binary=self.binary, behaviors=self.behaviors ) class MockClient(object): clients = set() def __init__(self, *arg, **kw): self.arg = arg self.kw = kw self.canary = [] self._cache = {} self.clients.add(weakref.ref(self, MockClient._remove)) @classmethod def _remove(cls, ref): cls.clients.remove(ref) @classmethod def number_of_clients(cls): return len(cls.clients) def get(self, key): return self._cache.get(key) def set(self, key, value, **kw): self.canary.append(kw) self._cache[key] = value def delete(self, key): self._cache.pop(key, None) class PylibmcArgsTest(TestCase): def test_binary_flag(self): backend = MockPylibmcBackend(arguments={"url": "foo", "binary": True}) eq_(backend._create_client().kw["binary"], True) def test_url_list(self): backend = MockPylibmcBackend(arguments={"url": ["a", "b", "c"]}) eq_(backend._create_client().arg[0], ["a", "b", "c"]) def test_url_scalar(self): backend = MockPylibmcBackend(arguments={"url": "foo"}) eq_(backend._create_client().arg[0], ["foo"]) def test_behaviors(self): backend = MockPylibmcBackend( arguments={"url": "foo", "behaviors": {"q": "p"}} ) eq_(backend._create_client().kw["behaviors"], {"q": "p"}) def test_set_time(self): backend = MockPylibmcBackend( arguments={"url": "foo", "memcached_expire_time": 20} ) backend.set("foo", "bar") eq_(backend._clients.memcached.canary, [{"time": 20}]) def test_set_min_compress_len(self): backend = MockPylibmcBackend( arguments={"url": "foo", "min_compress_len": 20} ) backend.set("foo", "bar") eq_(backend._clients.memcached.canary, [{"min_compress_len": 20}]) def test_no_set_args(self): backend = MockPylibmcBackend(arguments={"url": "foo"}) backend.set("foo", "bar") eq_(backend._clients.memcached.canary, [{}]) class MemcachedArgstest(TestCase): def test_set_time(self): backend = MockMemcacheBackend( arguments={"url": "foo", "memcached_expire_time": 20} ) backend.set("foo", "bar") eq_(backend._clients.memcached.canary, [{"time": 20}]) def test_set_min_compress_len(self): backend = MockMemcacheBackend( arguments={"url": "foo", "min_compress_len": 20} ) backend.set("foo", "bar") eq_(backend._clients.memcached.canary, [{"min_compress_len": 20}]) class LocalThreadTest(TestCase): def setUp(self): import gc gc.collect() eq_(MockClient.number_of_clients(), 0) def test_client_cleanup_1(self): self._test_client_cleanup(1) def 
test_client_cleanup_3(self): self._test_client_cleanup(3) def test_client_cleanup_10(self): self._test_client_cleanup(10) def _test_client_cleanup(self, count): backend = MockGenericMemcachedBackend(arguments={"url": "foo"}) canary = [] flag = [False] def f(delay): backend._clients.memcached canary.append(MockClient.number_of_clients()) while not flag[0]: time.sleep(0.02) threads = [Thread(target=f, args=(count - i,)) for i in range(count)] for t in threads: t.start() flag[0] = True for t in threads: t.join() eq_(canary, [i + 1 for i in range(count)]) import gc gc.collect() eq_(MockClient.number_of_clients(), 0) dogpile.cache-0.9.0/tests/cache/test_memory_backend.py0000664000175000017500000000034213555610667024105 0ustar classicclassic00000000000000from ._fixtures import _GenericBackendTest class MemoryBackendTest(_GenericBackendTest): backend = "dogpile.cache.memory" class MemoryPickleBackendTest(_GenericBackendTest): backend = "dogpile.cache.memory_pickle" dogpile.cache-0.9.0/tests/cache/test_null_backend.py0000664000175000017500000000361613555610667023556 0ustar classicclassic00000000000000import itertools from unittest import TestCase from dogpile.cache.api import NO_VALUE from . import eq_ from ._fixtures import _GenericBackendFixture class NullBackendTest(_GenericBackendFixture, TestCase): backend = "dogpile.cache.null" def test_get(self): reg = self._region() eq_(reg.get("some key"), NO_VALUE) def test_set(self): reg = self._region() reg.set("some key", "some value") eq_(reg.get("some key"), NO_VALUE) def test_delete(self): reg = self._region() reg.delete("some key") eq_(reg.get("some key"), NO_VALUE) def test_get_multi(self): reg = self._region() eq_(reg.get_multi(["a", "b", "c"]), [NO_VALUE, NO_VALUE, NO_VALUE]) def test_set_multi(self): reg = self._region() reg.set_multi({"a": 1, "b": 2, "c": 3}) eq_(reg.get_multi(["a", "b", "c"]), [NO_VALUE, NO_VALUE, NO_VALUE]) def test_delete_multi(self): reg = self._region() reg.delete_multi(["a", "b", "c"]) eq_(reg.get_multi(["a", "b", "c"]), [NO_VALUE, NO_VALUE, NO_VALUE]) def test_decorator(self): reg = self._region() counter = itertools.count(1) @reg.cache_on_arguments() def go(a, b): val = next(counter) return val, a, b eq_(go(1, 2), (1, 1, 2)) eq_(go(1, 2), (2, 1, 2)) eq_(go(1, 3), (3, 1, 3)) def test_mutex(self): backend = self._backend() mutex = backend.get_mutex("foo") ac = mutex.acquire() assert ac mutex.release() ac2 = mutex.acquire(False) assert ac2 mutex.release() def test_mutex_doesnt_actually_lock(self): backend = self._backend() mutex = backend.get_mutex("foo") ac = mutex.acquire() assert ac ac2 = mutex.acquire(False) assert ac2 mutex.release() dogpile.cache-0.9.0/tests/cache/test_redis_backend.py0000664000175000017500000000732313555610667023711 0ustar classicclassic00000000000000import os from unittest import TestCase from mock import Mock from mock import patch import pytest from dogpile.cache.region import _backend_loader from ._fixtures import _GenericBackendTest from ._fixtures import _GenericMutexTest REDIS_HOST = "127.0.0.1" REDIS_PORT = int(os.getenv("DOGPILE_REDIS_PORT", "6379")) expect_redis_running = os.getenv("DOGPILE_REDIS_PORT") is not None class _TestRedisConn(object): @classmethod def _check_backend_available(cls, backend): try: client = backend._create_client() client.set("x", "y") # on py3k it appears to return b"y" assert client.get("x").decode("ascii") == "y" client.delete("x") except Exception: if not expect_redis_running: pytest.skip( "redis is not running or " "otherwise not functioning correctly" ) 
else: raise class RedisTest(_TestRedisConn, _GenericBackendTest): backend = "dogpile.cache.redis" config_args = { "arguments": { "host": REDIS_HOST, "port": REDIS_PORT, "db": 0, "foo": "barf", } } class RedisDistributedMutexTest(_TestRedisConn, _GenericMutexTest): backend = "dogpile.cache.redis" config_args = { "arguments": { "host": REDIS_HOST, "port": REDIS_PORT, "db": 0, "distributed_lock": True, } } @patch("redis.StrictRedis", autospec=True) class RedisConnectionTest(TestCase): backend = "dogpile.cache.redis" @classmethod def setup_class(cls): cls.backend_cls = _backend_loader.load(cls.backend) try: cls.backend_cls({}) except ImportError: pytest.skip("Backend %s not installed" % cls.backend) def _test_helper(self, mock_obj, expected_args, connection_args=None): if connection_args is None: connection_args = expected_args self.backend_cls(connection_args) mock_obj.assert_called_once_with(**expected_args) def test_connect_with_defaults(self, MockStrictRedis): # The defaults, used if keys are missing from the arguments dict. arguments = { "host": "localhost", "password": None, "port": 6379, "db": 0, } self._test_helper(MockStrictRedis, arguments, {}) def test_connect_with_basics(self, MockStrictRedis): arguments = { "host": "127.0.0.1", "password": None, "port": 6379, "db": 0, } self._test_helper(MockStrictRedis, arguments) def test_connect_with_password(self, MockStrictRedis): arguments = { "host": "127.0.0.1", "password": "some password", "port": 6379, "db": 0, } self._test_helper(MockStrictRedis, arguments) def test_connect_with_socket_timeout(self, MockStrictRedis): arguments = { "host": "127.0.0.1", "port": 6379, "socket_timeout": 0.5, "password": None, "db": 0, } self._test_helper(MockStrictRedis, arguments) def test_connect_with_connection_pool(self, MockStrictRedis): pool = Mock() arguments = {"connection_pool": pool, "socket_timeout": 0.5} expected_args = {"connection_pool": pool} self._test_helper( MockStrictRedis, expected_args, connection_args=arguments ) def test_connect_with_url(self, MockStrictRedis): arguments = {"url": "redis://redis:password@127.0.0.1:6379/0"} self._test_helper(MockStrictRedis.from_url, arguments) dogpile.cache-0.9.0/tests/cache/test_region.py0000664000175000017500000007330113555610667022416 0ustar classicclassic00000000000000from collections import defaultdict import datetime import itertools import time from unittest import TestCase import mock from dogpile.cache import CacheRegion from dogpile.cache import exception from dogpile.cache import make_region from dogpile.cache import util from dogpile.cache.api import CacheBackend from dogpile.cache.api import CachedValue from dogpile.cache.api import NO_VALUE from dogpile.cache.proxy import ProxyBackend from dogpile.cache.region import _backend_loader from dogpile.cache.region import RegionInvalidationStrategy from dogpile.cache.region import value_version from dogpile.util import compat from . import assert_raises_message from . import configparser from . import eq_ from . import io from . import is_ from ._fixtures import MockBackend def key_mangler(key): return "HI!" 
+ key class APITest(TestCase): def test_no_value_str(self): eq_(str(NO_VALUE), "") class RegionTest(TestCase): def _region(self, init_args={}, config_args={}, backend="mock"): reg = CacheRegion(**init_args) reg.configure(backend, **config_args) return reg def test_set_name(self): my_region = make_region(name="my-name") eq_(my_region.name, "my-name") def test_instance_from_dict(self): my_conf = { "cache.example.backend": "mock", "cache.example.expiration_time": 600, "cache.example.arguments.url": "127.0.0.1", } my_region = make_region() my_region.configure_from_config(my_conf, "cache.example.") eq_(my_region.expiration_time, 600) assert isinstance(my_region.backend, MockBackend) is True eq_(my_region.backend.arguments, {"url": "127.0.0.1"}) def test_instance_from_config_string(self): my_conf = ( "[xyz]\n" "cache.example.backend=mock\n" "cache.example.expiration_time=600\n" "cache.example.arguments.url=127.0.0.1\n" "cache.example.arguments.dogpile_lockfile=false\n" "cache.example.arguments.xyz=None\n" ) my_region = make_region() config = configparser.ConfigParser() compat.read_config_file(config, io.StringIO(my_conf)) my_region.configure_from_config( dict(config.items("xyz")), "cache.example." ) eq_(my_region.expiration_time, 600) assert isinstance(my_region.backend, MockBackend) is True eq_( my_region.backend.arguments, {"url": "127.0.0.1", "dogpile_lockfile": False, "xyz": None}, ) def test_datetime_expiration_time(self): my_region = make_region() my_region.configure( backend="mock", expiration_time=datetime.timedelta(days=1, hours=8) ) eq_(my_region.expiration_time, 32 * 60 * 60) def test_reject_invalid_expiration_time(self): my_region = make_region() assert_raises_message( exception.ValidationError, "expiration_time is not a number or timedelta.", my_region.configure, "mock", "one hour", ) def test_key_mangler_argument(self): reg = self._region(init_args={"key_mangler": key_mangler}) assert reg.key_mangler is key_mangler reg = self._region() assert reg.key_mangler is None MockBackend.key_mangler = lambda self, k: "foo" reg = self._region() eq_(reg.key_mangler("bar"), "foo") MockBackend.key_mangler = None def test_key_mangler_impl(self): reg = self._region(init_args={"key_mangler": key_mangler}) reg.set("some key", "some value") eq_(list(reg.backend._cache), ["HI!some key"]) eq_(reg.get("some key"), "some value") eq_( reg.get_or_create("some key", lambda: "some new value"), "some value", ) reg.delete("some key") eq_(reg.get("some key"), NO_VALUE) def test_dupe_config(self): reg = CacheRegion() reg.configure("mock") assert_raises_message( exception.RegionAlreadyConfigured, "This region is already configured", reg.configure, "mock", ) eq_(reg.is_configured, True) def test_replace_backend_config(self): reg = CacheRegion() reg.configure("dogpile.cache.null") eq_(reg.is_configured, True) null_backend = _backend_loader.load("dogpile.cache.null") assert reg.key_mangler is null_backend.key_mangler reg.configure("mock", replace_existing_backend=True) eq_(reg.is_configured, True) assert isinstance(reg.backend, MockBackend) assert reg.key_mangler is MockBackend.key_mangler def test_replace_backend_config_with_custom_key_mangler(self): reg = CacheRegion(key_mangler=key_mangler) reg.configure("dogpile.cache.null") eq_(reg.is_configured, True) assert reg.key_mangler is key_mangler reg.configure("mock", replace_existing_backend=True) eq_(reg.is_configured, True) assert reg.key_mangler is key_mangler def test_no_config(self): reg = CacheRegion() assert_raises_message( exception.RegionNotConfigured, "No 
backend is configured on this region.", getattr, reg, "backend", ) eq_(reg.is_configured, False) def test_invalid_backend(self): reg = CacheRegion() assert_raises_message( exception.PluginNotFound, "Couldn't find cache plugin to load: unknown", reg.configure, "unknown", ) eq_(reg.is_configured, False) def test_set_get_value(self): reg = self._region() reg.set("some key", "some value") eq_(reg.get("some key"), "some value") def test_set_get_nothing(self): reg = self._region() eq_(reg.get("some key"), NO_VALUE) eq_(reg.get("some key", expiration_time=10), NO_VALUE) reg.invalidate() eq_(reg.get("some key"), NO_VALUE) def test_creator(self): reg = self._region() def creator(): return "some value" eq_(reg.get_or_create("some key", creator), "some value") def test_multi_creator(self): reg = self._region() def creator(*keys): return ["some value %s" % key for key in keys] eq_( reg.get_or_create_multi(["k3", "k2", "k5"], creator), ["some value k3", "some value k2", "some value k5"], ) def test_remove(self): reg = self._region() reg.set("some key", "some value") reg.delete("some key") reg.delete("some key") eq_(reg.get("some key"), NO_VALUE) def test_expire(self): reg = self._region(config_args={"expiration_time": 1}) counter = itertools.count(1) def creator(): return "some value %d" % next(counter) eq_(reg.get_or_create("some key", creator), "some value 1") time.sleep(2) is_(reg.get("some key"), NO_VALUE) eq_(reg.get("some key", ignore_expiration=True), "some value 1") eq_( reg.get_or_create("some key", creator, expiration_time=-1), "some value 1", ) eq_(reg.get_or_create("some key", creator), "some value 2") eq_(reg.get("some key"), "some value 2") def test_expire_multi(self): reg = self._region(config_args={"expiration_time": 1}) counter = itertools.count(1) def creator(*keys): return ["some value %s %d" % (key, next(counter)) for key in keys] eq_( reg.get_or_create_multi(["k3", "k2", "k5"], creator), ["some value k3 2", "some value k2 1", "some value k5 3"], ) time.sleep(2) is_(reg.get("k2"), NO_VALUE) eq_(reg.get("k2", ignore_expiration=True), "some value k2 1") eq_( reg.get_or_create_multi(["k3", "k2"], creator, expiration_time=-1), ["some value k3 2", "some value k2 1"], ) eq_( reg.get_or_create_multi(["k3", "k2"], creator), ["some value k3 5", "some value k2 4"], ) eq_(reg.get("k2"), "some value k2 4") def test_expire_on_get(self): reg = self._region(config_args={"expiration_time": 0.5}) reg.set("some key", "some value") eq_(reg.get("some key"), "some value") time.sleep(1) is_(reg.get("some key"), NO_VALUE) def test_ignore_expire_on_get(self): reg = self._region(config_args={"expiration_time": 0.5}) reg.set("some key", "some value") eq_(reg.get("some key"), "some value") time.sleep(1) eq_(reg.get("some key", ignore_expiration=True), "some value") def test_override_expire_on_get(self): reg = self._region(config_args={"expiration_time": 0.5}) reg.set("some key", "some value") eq_(reg.get("some key"), "some value") time.sleep(1) eq_(reg.get("some key", expiration_time=5), "some value") is_(reg.get("some key"), NO_VALUE) def test_expire_override(self): reg = self._region(config_args={"expiration_time": 5}) counter = itertools.count(1) def creator(): return "some value %d" % next(counter) eq_( reg.get_or_create("some key", creator, expiration_time=1), "some value 1", ) time.sleep(2) eq_(reg.get("some key"), "some value 1") eq_( reg.get_or_create("some key", creator, expiration_time=1), "some value 2", ) eq_(reg.get("some key"), "some value 2") def test_hard_invalidate_get(self): reg = 
self._region() reg.set("some key", "some value") time.sleep(0.1) reg.invalidate() is_(reg.get("some key"), NO_VALUE) def test_hard_invalidate_get_or_create(self): reg = self._region() counter = itertools.count(1) def creator(): return "some value %d" % next(counter) eq_(reg.get_or_create("some key", creator), "some value 1") time.sleep(0.1) reg.invalidate() eq_(reg.get_or_create("some key", creator), "some value 2") eq_(reg.get_or_create("some key", creator), "some value 2") reg.invalidate() eq_(reg.get_or_create("some key", creator), "some value 3") eq_(reg.get_or_create("some key", creator), "some value 3") def test_hard_invalidate_get_or_create_multi(self): reg = self._region() counter = itertools.count(1) def creator(*keys): return ["some value %s %d" % (k, next(counter)) for k in keys] eq_( reg.get_or_create_multi(["k1", "k2"], creator), ["some value k1 1", "some value k2 2"], ) time.sleep(0.1) reg.invalidate() eq_( reg.get_or_create_multi(["k1", "k2"], creator), ["some value k1 3", "some value k2 4"], ) eq_( reg.get_or_create_multi(["k1", "k2"], creator), ["some value k1 3", "some value k2 4"], ) reg.invalidate() eq_( reg.get_or_create_multi(["k1", "k2"], creator), ["some value k1 5", "some value k2 6"], ) eq_( reg.get_or_create_multi(["k1", "k2"], creator), ["some value k1 5", "some value k2 6"], ) def test_soft_invalidate_get(self): reg = self._region(config_args={"expiration_time": 1}) reg.set("some key", "some value") time.sleep(0.1) reg.invalidate(hard=False) is_(reg.get("some key"), NO_VALUE) def test_soft_invalidate_get_or_create(self): reg = self._region(config_args={"expiration_time": 1}) counter = itertools.count(1) def creator(): return "some value %d" % next(counter) eq_(reg.get_or_create("some key", creator), "some value 1") time.sleep(0.1) reg.invalidate(hard=False) eq_(reg.get_or_create("some key", creator), "some value 2") def test_soft_invalidate_get_or_create_multi(self): reg = self._region(config_args={"expiration_time": 5}) values = [1, 2, 3] def creator(*keys): v = values.pop(0) return [v for k in keys] ret = reg.get_or_create_multi([1, 2], creator) eq_(ret, [1, 1]) time.sleep(0.1) reg.invalidate(hard=False) ret = reg.get_or_create_multi([1, 2], creator) eq_(ret, [2, 2]) def test_soft_invalidate_requires_expire_time_get(self): reg = self._region() reg.invalidate(hard=False) assert_raises_message( exception.DogpileCacheException, "Non-None expiration time required for soft invalidation", reg.get_or_create, "some key", lambda: "x", ) def test_soft_invalidate_requires_expire_time_get_multi(self): reg = self._region() reg.invalidate(hard=False) assert_raises_message( exception.DogpileCacheException, "Non-None expiration time required for soft invalidation", reg.get_or_create_multi, ["k1", "k2"], lambda k: "x", ) def test_should_cache_fn(self): reg = self._region() values = [1, 2, 3] def creator(): return values.pop(0) should_cache_fn = lambda val: val in (1, 3) # noqa ret = reg.get_or_create( "some key", creator, should_cache_fn=should_cache_fn ) eq_(ret, 1) eq_(reg.backend._cache["some key"][0], 1) time.sleep(0.1) reg.invalidate() ret = reg.get_or_create( "some key", creator, should_cache_fn=should_cache_fn ) eq_(ret, 2) eq_(reg.backend._cache["some key"][0], 1) reg.invalidate() ret = reg.get_or_create( "some key", creator, should_cache_fn=should_cache_fn ) eq_(ret, 3) eq_(reg.backend._cache["some key"][0], 3) def test_should_cache_fn_multi(self): reg = self._region() values = [1, 2, 3] def creator(*keys): v = values.pop(0) return [v for k in keys] should_cache_fn = 
lambda val: val in (1, 3) # noqa ret = reg.get_or_create_multi( [1, 2], creator, should_cache_fn=should_cache_fn ) eq_(ret, [1, 1]) eq_(reg.backend._cache[1][0], 1) time.sleep(0.1) reg.invalidate() ret = reg.get_or_create_multi( [1, 2], creator, should_cache_fn=should_cache_fn ) eq_(ret, [2, 2]) eq_(reg.backend._cache[1][0], 1) time.sleep(0.1) reg.invalidate() ret = reg.get_or_create_multi( [1, 2], creator, should_cache_fn=should_cache_fn ) eq_(ret, [3, 3]) eq_(reg.backend._cache[1][0], 3) def test_should_set_multiple_values(self): reg = self._region() values = {"key1": "value1", "key2": "value2", "key3": "value3"} reg.set_multi(values) eq_(values["key1"], reg.get("key1")) eq_(values["key2"], reg.get("key2")) eq_(values["key3"], reg.get("key3")) def test_should_get_multiple_values(self): reg = self._region() values = {"key1": "value1", "key2": "value2", "key3": "value3"} reg.set_multi(values) reg_values = reg.get_multi(["key1", "key2", "key3"]) eq_(reg_values, ["value1", "value2", "value3"]) def test_should_delete_multiple_values(self): reg = self._region() values = {"key1": "value1", "key2": "value2", "key3": "value3"} reg.set_multi(values) reg.delete_multi(["key2", "key1000"]) eq_(values["key1"], reg.get("key1")) eq_(NO_VALUE, reg.get("key2")) eq_(values["key3"], reg.get("key3")) class ProxyRegionTest(RegionTest): """ This is exactly the same as the region test above, but it goes through a dummy proxy. The purpose of this is to make sure the tests still run successfully even when there is a proxy. """ class MockProxy(ProxyBackend): @property def _cache(self): return self.proxied._cache def _region(self, init_args={}, config_args={}, backend="mock"): reg = CacheRegion(**init_args) config_args["wrap"] = [ProxyRegionTest.MockProxy] reg.configure(backend, **config_args) return reg class CustomInvalidationStrategyTest(RegionTest): """Try region tests with a custom invalidation strategy. This is exactly the same as the region test above, but it uses a custom invalidation strategy. The purpose of this is to make sure the tests still run successfully even when a custom invalidation strategy is in place. 
""" class CustomInvalidationStrategy(RegionInvalidationStrategy): def __init__(self): self._soft_invalidated = None self._hard_invalidated = None def invalidate(self, hard=None): if hard: self._soft_invalidated = None self._hard_invalidated = time.time() else: self._soft_invalidated = time.time() self._hard_invalidated = None def is_invalidated(self, timestamp): return ( self._soft_invalidated and timestamp < self._soft_invalidated ) or ( self._hard_invalidated and timestamp < self._hard_invalidated ) def was_hard_invalidated(self): return bool(self._hard_invalidated) def is_hard_invalidated(self, timestamp): return ( self._hard_invalidated and timestamp < self._hard_invalidated ) def was_soft_invalidated(self): return bool(self._soft_invalidated) def is_soft_invalidated(self, timestamp): return ( self._soft_invalidated and timestamp < self._soft_invalidated ) def _region(self, init_args={}, config_args={}, backend="mock"): reg = CacheRegion(**init_args) invalidator = self.CustomInvalidationStrategy() reg.configure(backend, region_invalidator=invalidator, **config_args) return reg class TestProxyValue(object): def __init__(self, value): self.value = value class AsyncCreatorTest(TestCase): def _fixture(self): def async_creation_runner(cache, somekey, creator, mutex): try: value = creator() cache.set(somekey, value) finally: mutex.release() return mock.Mock(side_effect=async_creation_runner) def test_get_or_create(self): acr = self._fixture() reg = CacheRegion(async_creation_runner=acr) reg.configure("mock", expiration_time=0.2) def some_value(): return "some value" def some_new_value(): return "some new value" eq_(reg.get_or_create("some key", some_value), "some value") time.sleep(0.5) eq_(reg.get_or_create("some key", some_new_value), "some value") eq_(reg.get_or_create("some key", some_new_value), "some new value") eq_( acr.mock_calls, [ mock.call( reg, "some key", some_new_value, reg._mutex("some key") ) ], ) def test_fn_decorator(self): acr = self._fixture() reg = CacheRegion(async_creation_runner=acr) reg.configure("mock", expiration_time=5) canary = mock.Mock() @reg.cache_on_arguments() def go(x, y): canary(x, y) return x + y eq_(go(1, 2), 3) eq_(go(1, 2), 3) eq_(canary.mock_calls, [mock.call(1, 2)]) eq_(go(3, 4), 7) eq_(canary.mock_calls, [mock.call(1, 2), mock.call(3, 4)]) reg.invalidate(hard=False) eq_(go(1, 2), 3) eq_( canary.mock_calls, [mock.call(1, 2), mock.call(3, 4), mock.call(1, 2)], ) eq_( acr.mock_calls, [ mock.call( reg, "tests.cache.test_region:go|1 2", mock.ANY, reg._mutex("tests.cache.test_region:go|1 2"), ) ], ) def test_fn_decorator_with_kw(self): acr = self._fixture() reg = CacheRegion(async_creation_runner=acr) reg.configure("mock", expiration_time=5) @reg.cache_on_arguments() def go(x, **kw): return x test_value = TestProxyValue("Decorator Test") self.assertRaises(ValueError, go, x=1, foo=test_value) @reg.cache_on_arguments() def go2(x): return x # keywords that match positional names can be passed result = go2(x=test_value) self.assertTrue(isinstance(result, TestProxyValue)) class ProxyBackendTest(TestCase): class GetCounterProxy(ProxyBackend): counter = 0 def get(self, key): ProxyBackendTest.GetCounterProxy.counter += 1 return self.proxied.get(key) class SetCounterProxy(ProxyBackend): counter = 0 def set(self, key, value): ProxyBackendTest.SetCounterProxy.counter += 1 return self.proxied.set(key, value) class UsedKeysProxy(ProxyBackend): """ Keep a counter of hose often we set a particular key""" def __init__(self, *args, **kwargs): 
super(ProxyBackendTest.UsedKeysProxy, self).__init__( *args, **kwargs ) self._key_count = defaultdict(lambda: 0) def setcount(self, key): return self._key_count[key] def set(self, key, value): self._key_count[key] += 1 self.proxied.set(key, value) class NeverSetProxy(ProxyBackend): """ A totally contrived example of a Proxy that we pass arguments to. Never set a key that matches never_set """ def __init__(self, never_set, *args, **kwargs): super(ProxyBackendTest.NeverSetProxy, self).__init__( *args, **kwargs ) self.never_set = never_set self._key_count = defaultdict(lambda: 0) def set(self, key, value): if key != self.never_set: self.proxied.set(key, value) class CanModifyCachedValueProxy(ProxyBackend): def get(self, key): value = ProxyBackend.get(self, key) assert isinstance(value, CachedValue) return value def set(self, key, value): assert isinstance(value, CachedValue) ProxyBackend.set(self, key, value) def _region(self, init_args={}, config_args={}, backend="mock"): reg = CacheRegion(**init_args) reg.configure(backend, **config_args) return reg def test_cachedvalue_passed(self): reg = self._region( config_args={"wrap": [ProxyBackendTest.CanModifyCachedValueProxy]} ) reg.set("some key", "some value") eq_(reg.get("some key"), "some value") def test_counter_proxies(self): # count up the gets and sets and make sure they are passed through # to the backend properly. Test that methods not overridden # continue to work reg = self._region( config_args={ "wrap": [ ProxyBackendTest.GetCounterProxy, ProxyBackendTest.SetCounterProxy, ] } ) ProxyBackendTest.GetCounterProxy.counter = 0 ProxyBackendTest.SetCounterProxy.counter = 0 # set a range of values in the cache for i in range(10): reg.set(i, i) eq_(ProxyBackendTest.GetCounterProxy.counter, 0) eq_(ProxyBackendTest.SetCounterProxy.counter, 10) # check that the range of values is still there for i in range(10): v = reg.get(i) eq_(v, i) eq_(ProxyBackendTest.GetCounterProxy.counter, 10) eq_(ProxyBackendTest.SetCounterProxy.counter, 10) # make sure the delete function(not overridden) still # executes properly for i in range(10): reg.delete(i) v = reg.get(i) is_(v, NO_VALUE) def test_instance_proxies(self): # Test that we can create an instance of a new proxy and # pass that to make_region instead of the class. 
The two instances # should not interfere with each other proxy_num = ProxyBackendTest.UsedKeysProxy(5) proxy_abc = ProxyBackendTest.UsedKeysProxy(5) reg_num = self._region(config_args={"wrap": [proxy_num]}) reg_abc = self._region(config_args={"wrap": [proxy_abc]}) for i in range(10): reg_num.set(i, True) reg_abc.set(chr(ord("a") + i), True) for i in range(5): reg_num.set(i, True) reg_abc.set(chr(ord("a") + i), True) # make sure proxy_num has the right counts per key eq_(proxy_num.setcount(1), 2) eq_(proxy_num.setcount(9), 1) eq_(proxy_num.setcount("a"), 0) # make sure proxy_abc has the right counts per key eq_(proxy_abc.setcount("a"), 2) eq_(proxy_abc.setcount("g"), 1) eq_(proxy_abc.setcount("9"), 0) def test_argument_proxies(self): # Test that we can pass an argument to Proxy on creation proxy = ProxyBackendTest.NeverSetProxy(5) reg = self._region(config_args={"wrap": [proxy]}) for i in range(10): reg.set(i, True) # make sure 1 was set, but 5 was not eq_(reg.get(5), NO_VALUE) eq_(reg.get(1), True) def test_actual_backend_proxied(self): # ensure that `reg.actual_backend` is the actual backend # also ensure that `reg.backend` is a proxied backend reg = self._region( config_args={ "wrap": [ ProxyBackendTest.GetCounterProxy, ProxyBackendTest.SetCounterProxy, ] } ) assert isinstance(reg.backend, ProxyBackend) assert isinstance(reg.actual_backend, CacheBackend) def test_actual_backend_noproxy(self): # ensure that `reg.actual_backend` is the actual backend # also ensure that `reg.backend` is NOT a proxied backend reg = self._region() assert isinstance(reg.backend, CacheBackend) assert isinstance(reg.actual_backend, CacheBackend) class LoggingTest(TestCase): def _region(self, init_args={}, config_args={}, backend="mock"): reg = CacheRegion(**init_args) reg.configure(backend, **config_args) return reg def test_log_time(self): reg = self._region() times = [50, 55, 60] def mock_time(): return times.pop(0) with mock.patch("dogpile.cache.region.log") as mock_log, mock.patch( "dogpile.cache.region.time", mock.Mock(time=mock_time) ): with reg._log_time(["foo", "bar", "bat"]): pass eq_( mock_log.mock_calls, [ mock.call.debug( "Cache value generated in %(seconds).3f " "seconds for key(s): %(keys)r", { "seconds": 5, "keys": util.repr_obj(["foo", "bar", "bat"]), }, ) ], ) def test_repr_obj_truncated(self): eq_( repr(util.repr_obj(["some_big_long_name" for i in range(200)])), "['some_big_long_name', 'some_big_long_name', " "'some_big_long_name', 'some_big_long_name', 'some_big_long_name'," " 'some_big_long_name', 'some_big_long_na ... " "(4100 characters truncated) ... 
me_big_long_name', " "'some_big_long_name', 'some_big_long_name', 'some_big_long_" "name', 'some_big_long_name', 'some_big_long_name', " "'some_big_long_name']", ) def test_log_is_cache_miss(self): reg = self._region() with mock.patch("dogpile.cache.region.log") as mock_log: is_(reg._is_cache_miss(NO_VALUE, "some key"), True) eq_( mock_log.mock_calls, [mock.call.debug("No value present for key: %r", "some key")], ) def test_log_is_value_version_miss(self): reg = self._region() inv = mock.Mock(is_hard_invalidated=lambda val: True) with mock.patch( "dogpile.cache.region.log" ) as mock_log, mock.patch.object(reg, "region_invalidator", inv): is_( reg._is_cache_miss( CachedValue( "some value", {"v": value_version - 5, "ct": 500} ), "some key", ), True, ) eq_( mock_log.mock_calls, [ mock.call.debug( "Dogpile version update for key: %r", "some key" ) ], ) def test_log_is_hard_invalidated(self): reg = self._region() inv = mock.Mock(is_hard_invalidated=lambda val: True) with mock.patch( "dogpile.cache.region.log" ) as mock_log, mock.patch.object(reg, "region_invalidator", inv): is_( reg._is_cache_miss( CachedValue("some value", {"v": value_version, "ct": 500}), "some key", ), True, ) eq_( mock_log.mock_calls, [ mock.call.debug( "Hard invalidation detected for key: %r", "some key" ) ], ) dogpile.cache-0.9.0/tests/conftest.py0000664000175000017500000000136713555610667020661 0ustar classicclassic00000000000000import logging import logging.config import sys from _pytest.unittest import UnitTestCase logging.config.fileConfig("log_tests.ini") def is_unittest(obj): """Is obj a subclass of unittest.TestCase? Lifted from older versions of py.test, as this seems to be removed. """ unittest = sys.modules.get("unittest") if unittest is None: return # nobody can have derived unittest.TestCase try: return issubclass(obj, unittest.TestCase) except KeyboardInterrupt: raise except Exception: return False def pytest_pycollect_makeitem(collector, name, obj): if is_unittest(obj) and not obj.__name__.startswith("_"): return UnitTestCase(name, parent=collector) else: return [] dogpile.cache-0.9.0/tests/test_backgrounding.py0000664000175000017500000000105013555610667022675 0ustar classicclassic00000000000000import threading import unittest import dogpile class TestAsyncRunner(unittest.TestCase): def test_async_release(self): self.called = False def runner(mutex): self.called = True mutex.release() mutex = threading.Lock() create = lambda: ("value", 1) # noqa get = lambda: ("value", 1) # noqa expiretime = 1 assert not self.called with dogpile.Lock(mutex, create, get, expiretime, runner) as _: assert self.called assert self.called dogpile.cache-0.9.0/tests/test_lock.py0000664000175000017500000002356313555610667021025 0ustar classicclassic00000000000000import contextlib import logging import math import threading import time from unittest import TestCase import mock from dogpile import Lock from dogpile import NeedRegenerationException from dogpile.util import ReadWriteMutex log = logging.getLogger(__name__) class ConcurrencyTest(TestCase): # expiretime, time to create, num usages, time spend using, delay btw usage _assertion_lock = threading.Lock() def test_quick(self): self._test_multi(10, 2, 0.5, 50, 0.05, 0.1) def test_slow(self): self._test_multi(10, 5, 2, 50, 0.1, 0.1) # TODO: this is a port from the legacy test_dogpile test. # sequence and calculations need to be revised. 
# def test_get_value_plus_created_slow_write(self): # self._test_multi( # 10, 2, .5, 50, .05, .1, # slow_write_time=2 # ) def test_return_while_in_progress(self): self._test_multi(10, 5, 2, 50, 1, 0.1) def test_get_value_plus_created_long_create(self): self._test_multi(10, 2, 2.5, 50, 0.05, 0.1) def test_get_value_plus_created_registry_unsafe_cache(self): self._test_multi( 10, 1, 0.6, 100, 0.05, 0.1, cache_expire_time="unsafe" ) def test_get_value_plus_created_registry_safe_cache_quick(self): self._test_multi(10, 2, 0.5, 50, 0.05, 0.1, cache_expire_time="safe") def test_get_value_plus_created_registry_safe_cache_slow(self): self._test_multi(10, 5, 2, 50, 0.1, 0.1, cache_expire_time="safe") def _assert_synchronized(self): acq = self._assertion_lock.acquire(False) assert acq, "Could not acquire" @contextlib.contextmanager def go(): try: yield {} except Exception: raise finally: self._assertion_lock.release() return go() def _assert_log(self, cond, msg, *args): if cond: log.debug(msg, *args) else: log.error("Assertion failed: " + msg, *args) assert False, msg % args def _test_multi( self, num_threads, expiretime, creation_time, num_usages, usage_time, delay_time, cache_expire_time=None, slow_write_time=None, ): mutex = threading.Lock() if slow_write_time: readwritelock = ReadWriteMutex() unsafe_cache = False if cache_expire_time: if cache_expire_time == "unsafe": unsafe_cache = True cache_expire_time = expiretime * 0.8 elif cache_expire_time == "safe": cache_expire_time = (expiretime + creation_time) * 1.1 else: assert False, cache_expire_time log.info("Cache expire time: %s", cache_expire_time) effective_expiretime = min(cache_expire_time, expiretime) else: effective_expiretime = expiretime effective_creation_time = creation_time max_stale = ( effective_expiretime + effective_creation_time + usage_time + delay_time ) * 1.1 the_resource = [] slow_waiters = [0] failures = [0] def create_resource(): with self._assert_synchronized(): log.debug( "creating resource, will take %f sec" % creation_time ) time.sleep(creation_time) if slow_write_time: readwritelock.acquire_write_lock() try: saved = list(the_resource) # clear out the resource dict so that # usage threads hitting it will # raise the_resource[:] = [] time.sleep(slow_write_time) the_resource[:] = saved finally: readwritelock.release_write_lock() the_resource.append(time.time()) value = the_resource[-1] log.debug("finished creating resource") return value, time.time() def get_value(): if not the_resource: raise NeedRegenerationException() if cache_expire_time: if time.time() - the_resource[-1] > cache_expire_time: # should never hit a cache invalidation # if we've set expiretime below the cache # expire time (assuming a cache which # honors this). 
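# reaching this branch when cache_expire_time >= expiretime would
# mean the backend dropped the value before the dogpile lock had a
# chance to regenerate it; _assert_log records that as a failure.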
self._assert_log( cache_expire_time < expiretime, "Cache expiration hit, cache " "expire time %s, expiretime %s", cache_expire_time, expiretime, ) raise NeedRegenerationException() if slow_write_time: readwritelock.acquire_read_lock() try: return the_resource[-1], the_resource[-1] finally: if slow_write_time: readwritelock.release_read_lock() def use_dogpile(): try: for i in range(num_usages): now = time.time() with Lock( mutex, create_resource, get_value, expiretime ) as value: waited = time.time() - now if waited > 0.01: slow_waiters[0] += 1 check_value(value, waited) time.sleep(usage_time) time.sleep(delay_time) except Exception: log.error("thread failed", exc_info=True) failures[0] += 1 def check_value(value, waited): assert value # time since the current resource was # created time_since_create = time.time() - value self._assert_log( time_since_create < max_stale, "Time since create %.4f max stale time %s, " "total waited %s", time_since_create, max_stale, slow_waiters[0], ) started_at = time.time() threads = [] for i in range(num_threads): t = threading.Thread(target=use_dogpile) t.start() threads.append(t) for t in threads: t.join() actual_run_time = time.time() - started_at # time spent starts with num usages * time per usage, with a 10% fudge expected_run_time = (num_usages * (usage_time + delay_time)) * 1.1 expected_generations = math.ceil( expected_run_time / effective_expiretime ) if unsafe_cache: expected_slow_waiters = expected_generations * num_threads else: expected_slow_waiters = expected_generations + num_threads - 1 if slow_write_time: expected_slow_waiters = num_threads * expected_generations # time spent also increments by one wait period in the beginning... expected_run_time += effective_creation_time # and a fudged version of the periodic waiting time anticipated # for a single thread... expected_run_time += ( expected_slow_waiters * effective_creation_time ) / num_threads expected_run_time *= 1.1 log.info("Test Summary") log.info( "num threads: %s; expiretime: %s; creation_time: %s; " "num_usages: %s; " "usage_time: %s; delay_time: %s", num_threads, expiretime, creation_time, num_usages, usage_time, delay_time, ) log.info( "cache expire time: %s; unsafe cache: %s", cache_expire_time, unsafe_cache, ) log.info( "Estimated run time %.2f actual run time %.2f", expected_run_time, actual_run_time, ) log.info( "Effective expiretime (min(cache_exp_time, exptime)) %s", effective_expiretime, ) log.info( "Expected slow waits %s, Total slow waits %s", expected_slow_waiters, slow_waiters[0], ) log.info( "Total generations %s Max generations expected %s" % (len(the_resource), expected_generations) ) assert not failures[0], "%s failures occurred" % failures[0] assert actual_run_time <= expected_run_time assert slow_waiters[0] <= expected_slow_waiters, ( "Number of slow waiters %s exceeds expected slow waiters %s" % (slow_waiters[0], expected_slow_waiters) ) assert len(the_resource) <= expected_generations, ( "Number of resource generations %d exceeded " "expected %d" % (len(the_resource), expected_generations) ) class RaceConditionTests(TestCase): def test_no_double_get_on_expired(self): mutex = threading.Lock() the_value = "the value" expiration_time = 10 created_time = 10 current_time = 22 # e.g. 
it's expired def creator(): return the_value, current_time def value_and_created_fn(): return the_value, created_time value_and_created_fn = mock.Mock(side_effect=value_and_created_fn) def time_mock(): return current_time with mock.patch("dogpile.lock.time.time", time_mock): with Lock( mutex, creator, value_and_created_fn, expiration_time ) as entered_value: self.assertEqual("the value", entered_value) self.assertEqual(value_and_created_fn.call_count, 1) dogpile.cache-0.9.0/tests/test_utils.py0000664000175000017500000000160313555610667021224 0ustar classicclassic00000000000000from unittest import TestCase from dogpile import util class UtilsTest(TestCase): """ Test the relevant utils functionality. """ def test_coerce_string_conf(self): settings = {"expiration_time": "-1"} coerced = util.coerce_string_conf(settings) self.assertEqual(coerced["expiration_time"], -1) settings = {"expiration_time": "+1"} coerced = util.coerce_string_conf(settings) self.assertEqual(coerced["expiration_time"], 1) self.assertEqual(type(coerced["expiration_time"]), int) settings = {"arguments.lock_sleep": "0.1"} coerced = util.coerce_string_conf(settings) self.assertEqual(coerced["arguments.lock_sleep"], 0.1) settings = {"arguments.lock_sleep": "-3.14e-10"} coerced = util.coerce_string_conf(settings) self.assertEqual(coerced["arguments.lock_sleep"], -3.14e-10) dogpile.cache-0.9.0/tests/util/0000775000175000017500000000000013555610710017415 5ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/util/__init__.py0000664000175000017500000000000013555610667021527 0ustar classicclassic00000000000000dogpile.cache-0.9.0/tests/util/test_nameregistry.py0000664000175000017500000000277713555610667023567 0ustar classicclassic00000000000000import logging import random import threading import time from unittest import TestCase from dogpile.util import NameRegistry log = logging.getLogger(__name__) class NameRegistryTest(TestCase): def test_name_registry(self): success = [True] num_operations = [0] def create(identifier): log.debug("Creator running for id: " + identifier) return threading.Lock() registry = NameRegistry(create) baton = {"beans": False, "means": False, "please": False} def do_something(name): for iteration in range(20): name = list(baton)[random.randint(0, 2)] lock = registry.get(name) lock.acquire() try: if baton[name]: success[0] = False log.debug("Baton is already populated") break baton[name] = True try: time.sleep(random.random() * 0.01) finally: num_operations[0] += 1 baton[name] = False finally: lock.release() log.debug("thread completed operations") threads = [] for id_ in range(1, 20): t = threading.Thread(target=do_something, args=("somename",)) t.start() threads.append(t) for t in threads: t.join() assert success[0] dogpile.cache-0.9.0/tox.ini0000664000175000017500000000251313555610667016625 0ustar classicclassic00000000000000[tox] envlist = py [testenv] cov_args=--cov=dogpile --cov-append --cov-report term --cov-report xml setenv= BASECOMMAND=python -m pytest {generic}: RUNTESTS=-k 'not test_dbm_backend and not test_memcached_backend and not test_redis_backend' {memcached}: PIFPAF=pifpaf --env-prefix DOGPILE run memcached --port {env:TOX_DOGPILE_PORT:11234} -- {memcached}: RUNTESTS=tests/cache/test_memcached_backend.py {redis}: PIFPAF=pifpaf --env-prefix DOGPILE run redis --port {env:TOX_DOGPILE_PORT:11234} -- {redis}: RUNTESTS=tests/cache/test_redis_backend.py {dbm}: RUNTESTS=tests/cache/test_dbm_backend.py {cov}: COVERAGE={[testenv]cov_args} deps= pytest mock Mako {memcached}: pylibmc # the py3k 
python-memcached fails for multiple # delete {py27-memcached}: python-memcached {memcached}: python-binary-memcached {memcached}: pifpaf {redis}: redis {redis}: pifpaf {cov}: pytest-cov commands= {env:PIFPAF:} {env:BASECOMMAND} {env:COVERAGE:} {env:RUNTESTS:} {posargs} sitepackages=False usedevelop=True # thanks to https://julien.danjou.info/the-best-flake8-extensions/ [testenv:pep8] basepython = python3 deps= flake8 flake8-import-order flake8-builtins flake8-docstrings flake8-rst-docstrings # used by flake8-rst-docstrings pygments commands = flake8 ./dogpile/ ./tests/ setup.py