python-aioredlock_0.7.3.orig/LICENSE.md0000644000000000000000000000206213042403735014530 0ustar00MIT License Copyright (c) 2016 Joan Vilà Cuñat Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
python-aioredlock_0.7.3.orig/PKG-INFO0000644000000000000000000001163414376445217014232 0ustar00Metadata-Version: 2.1 Name: aioredlock Version: 0.7.3 Summary: Asyncio implementation of Redis distributed locks Home-page: https://github.com/joanvila/aioredlock Author: Joan Vilà Cuñat Author-email: vila.joan94@gmail.com License: MIT Keywords: redis redlock distributed locks asyncio Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Requires-Python: >=3.6 Provides-Extra: cicd Provides-Extra: examples Provides-Extra: package Provides-Extra: test License-File: LICENSE.md aioredlock ========== .. image:: https://github.com/joanvila/aioredlock/workflows/Tests/badge.svg :target: https://travis-ci.org/joanvila/aioredlock .. image:: https://codecov.io/gh/joanvila/aioredlock/branch/master/graph/badge.svg :target: https://codecov.io/gh/joanvila/aioredlock .. image:: https://badge.fury.io/py/aioredlock.svg :target: https://pypi.python.org/pypi/aioredlock The asyncio redlock_ algorithm implementation. Redlock and asyncio ------------------- The redlock algorithm is a distributed lock implementation for Redis_. There are many implementations of it in several languages. This is the asyncio_ compatible implementation for Python 3.6+. Usage ----- ..
code-block:: python from aioredlock import Aioredlock, LockError, Sentinel # Define a list of connections to your Redis instances: redis_instances = [ ('localhost', 6379), {'host': 'localhost', 'port': 6379, 'db': 1}, 'redis://localhost:6379/2', Sentinel(('localhost', 26379), master='leader', db=3), Sentinel('redis://localhost:26379/4?master=leader&encoding=utf-8'), Sentinel('rediss://:password@localhost:26379/5?master=leader&encoding=utf-8&ssl_cert_reqs=CERT_NONE'), ] # Create a lock manager: lock_manager = Aioredlock(redis_instances) # Check whether the resource is acquired by any other redlock instance: assert not await lock_manager.is_locked("resource_name") # Try to acquire the lock: try: lock = await lock_manager.lock("resource_name", lock_timeout=10) except LockError: print('Lock not acquired') raise # Now the lock is acquired: assert lock.valid assert await lock_manager.is_locked("resource_name") # Extend the lifetime of the lock: await lock_manager.extend(lock, lock_timeout=10) # Raises LockError if the lock manager can not extend the lock lifetime # on more than half of the Redis instances. # Release the lock: await lock_manager.unlock(lock) # Raises LockError if the lock manager can not release the lock # on more than half of the Redis instances.
# The released lock becomes invalid: assert not lock.valid assert not await lock_manager.is_locked("resource_name") # Or you can use the lock as an async context manager: try: async with await lock_manager.lock("resource_name") as lock: assert lock.valid is True # Do your stuff having the lock await lock.extend() # alias for lock_manager.extend(lock) # Do more stuff having the lock assert lock.valid is False # lock will be released by context manager except LockError: print('Lock not acquired') raise # Clear the connections with Redis: await lock_manager.destroy() How it works ------------ The Aioredlock constructor accepts the following optional parameters: - ``redis_connections``: A list of connections (dictionary of host and port and kwargs for ``aioredis.create_redis_pool()``, or tuple ``(host, port)``, or string Redis URI) where the Redis instances are running. The default value is ``[{'host': 'localhost', 'port': 6379}]``. - ``retry_count``: An integer representing the maximum number of retries allowed to acquire the lock. The default value is ``3``. - ``retry_delay_min`` and ``retry_delay_max``: Float values representing the waiting time (in seconds) before the next retry attempt. The default values are ``0.1`` and ``0.3``, respectively. In order to acquire the lock, the ``lock`` function should be called. If the lock operation is successful, ``lock.valid`` will be true; if the lock is not acquired, a ``LockError`` is raised. From that moment, the lock is valid until the ``unlock`` function is called or the ``lock_timeout`` is reached. Call the ``extend`` function to reset the lifetime of the lock to the ``lock_timeout`` interval. Use the ``is_locked`` function to check if the resource is locked by another redlock instance. In order to clear all the connections with Redis, the lock_manager ``destroy`` method can be called. To-do ----- .. _redlock: https://redis.io/topics/distlock .. _Redis: https://redis.io ..
_asyncio: https://docs.python.org/3/library/asyncio.html python-aioredlock_0.7.3.orig/README.rst0000644000000000000000000001011313701134454014610 0ustar00aioredlock ========== .. image:: https://github.com/joanvila/aioredlock/workflows/Tests/badge.svg :target: https://travis-ci.org/joanvila/aioredlock .. image:: https://codecov.io/gh/joanvila/aioredlock/branch/master/graph/badge.svg :target: https://codecov.io/gh/joanvila/aioredlock .. image:: https://badge.fury.io/py/aioredlock.svg :target: https://pypi.python.org/pypi/aioredlock The asyncio redlock_ algorithm implementation. Redlock and asyncio ------------------- The redlock algorithm is a distributed lock implementation for Redis_. There are many implementations of it in several languages. This is the asyncio_ compatible implementation for Python 3.6+. Usage ----- .. code-block:: python from aioredlock import Aioredlock, LockError, Sentinel # Define a list of connections to your Redis instances: redis_instances = [ ('localhost', 6379), {'host': 'localhost', 'port': 6379, 'db': 1}, 'redis://localhost:6379/2', Sentinel(('localhost', 26379), master='leader', db=3), Sentinel('redis://localhost:26379/4?master=leader&encoding=utf-8'), Sentinel('rediss://:password@localhost:26379/5?master=leader&encoding=utf-8&ssl_cert_reqs=CERT_NONE'), ] # Create a lock manager: lock_manager = Aioredlock(redis_instances) # Check whether the resource is acquired by any other redlock instance: assert not await lock_manager.is_locked("resource_name") # Try to acquire the lock: try: lock = await lock_manager.lock("resource_name", lock_timeout=10) except LockError: print('Lock not acquired') raise # Now the lock is acquired: assert lock.valid assert await lock_manager.is_locked("resource_name") # Extend the lifetime of the lock: await lock_manager.extend(lock, lock_timeout=10) # Raises LockError if the lock manager can not extend the lock lifetime # on more than half of the Redis instances.
# Release the lock: await lock_manager.unlock(lock) # Raises LockError if the lock manager can not release the lock # on more than half of the Redis instances. # The released lock becomes invalid: assert not lock.valid assert not await lock_manager.is_locked("resource_name") # Or you can use the lock as an async context manager: try: async with await lock_manager.lock("resource_name") as lock: assert lock.valid is True # Do your stuff having the lock await lock.extend() # alias for lock_manager.extend(lock) # Do more stuff having the lock assert lock.valid is False # lock will be released by context manager except LockError: print('Lock not acquired') raise # Clear the connections with Redis: await lock_manager.destroy() How it works ------------ The Aioredlock constructor accepts the following optional parameters: - ``redis_connections``: A list of connections (dictionary of host and port and kwargs for ``aioredis.create_redis_pool()``, or tuple ``(host, port)``, or string Redis URI) where the Redis instances are running. The default value is ``[{'host': 'localhost', 'port': 6379}]``. - ``retry_count``: An integer representing the maximum number of retries allowed to acquire the lock. The default value is ``3``. - ``retry_delay_min`` and ``retry_delay_max``: Float values representing the waiting time (in seconds) before the next retry attempt. The default values are ``0.1`` and ``0.3``, respectively. In order to acquire the lock, the ``lock`` function should be called. If the lock operation is successful, ``lock.valid`` will be true; if the lock is not acquired, a ``LockError`` is raised. From that moment, the lock is valid until the ``unlock`` function is called or the ``lock_timeout`` is reached. Call the ``extend`` function to reset the lifetime of the lock to the ``lock_timeout`` interval. Use the ``is_locked`` function to check if the resource is locked by another redlock instance.
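For illustration only, the retry schedule described above can be sketched without a running Redis server. The helper below is not part of aioredlock; it is a hypothetical stand-alone sketch that mirrors the documented meaning of ``retry_count``, ``retry_delay_min`` and ``retry_delay_max`` (the first attempt happens immediately, and each retry is preceded by a uniformly random delay):

```python
import random

def retry_delays(retry_count=3, retry_delay_min=0.1, retry_delay_max=0.3):
    """Return the sleep intervals used between lock acquisition attempts.

    The first attempt happens immediately; every subsequent retry waits
    a random interval drawn uniformly from [retry_delay_min, retry_delay_max].
    """
    return [
        random.uniform(retry_delay_min, retry_delay_max)
        for _attempt in range(1, retry_count)
    ]

delays = retry_delays()
print(len(delays))                            # 2 waits separate the 3 default attempts
print(all(0.1 <= d <= 0.3 for d in delays))   # True
```

With the defaults, three attempts are therefore separated by two random waits of 0.1 to 0.3 seconds each.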
In order to clear all the connections with Redis, the lock_manager ``destroy`` method can be called. To-do ----- .. _redlock: https://redis.io/topics/distlock .. _Redis: https://redis.io .. _asyncio: https://docs.python.org/3/library/asyncio.html python-aioredlock_0.7.3.orig/aioredlock.egg-info/0000755000000000000000000000000014376445217016746 5ustar00python-aioredlock_0.7.3.orig/aioredlock/0000755000000000000000000000000013042403735015240 5ustar00python-aioredlock_0.7.3.orig/setup.cfg0000644000000000000000000000057314376445217014766 0ustar00[flake8] max-line-length = 120 exclude = .venv build/ [pep8] max-line-length = 120 [pycodestyle] max-line-length = 120 [aliases] test = pytest [bdist_wheel] universal = 0 [tool:pytest] asyncio_mode = auto addopts = -ra log_cli_level = critical junit_family = xunit testpaths = tests/ norecursedirs = .git .tox filterwarnings = error [egg_info] tag_build = tag_date = 0 python-aioredlock_0.7.3.orig/setup.py0000644000000000000000000000257114200503644014640 0ustar00from codecs import open from os import path from setuptools import find_packages, setup here = path.abspath(path.dirname(__file__)) with open(path.join(here, 'README.rst'), encoding='utf-8') as f: long_description = f.read() setup( name='aioredlock', version='0.7.3', description='Asyncio implementation of Redis distributed locks', long_description=long_description, url='https://github.com/joanvila/aioredlock', author='Joan Vilà Cuñat', author_email='vila.joan94@gmail.com', license='MIT', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'Topic :: Software Development :: Libraries :: Python Modules', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', ], keywords='redis redlock distributed locks asyncio', packages=find_packages(), python_requires='>=3.6',
install_requires=['aioredis<2.0.0', 'attrs >= 17.4.0'], extras_require={ 'test': ['pytest==6.1.0', 'pytest-asyncio', 'pytest-mock', 'pytest-cov', 'flake8'], 'cicd': ['codecov'], 'package': ['bump2version', 'twine', 'wheel'], 'examples': ['aiodocker'], }, ) python-aioredlock_0.7.3.orig/aioredlock.egg-info/PKG-INFO0000644000000000000000000001163414376445217020050 0ustar00Metadata-Version: 2.1 Name: aioredlock Version: 0.7.3 Summary: Asyncio implementation of Redis distributed locks Home-page: https://github.com/joanvila/aioredlock Author: Joan Vilà Cuñat Author-email: vila.joan94@gmail.com License: MIT Keywords: redis redlock distributed locks asyncio Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Requires-Python: >=3.6 Provides-Extra: cicd Provides-Extra: examples Provides-Extra: package Provides-Extra: test License-File: LICENSE.md aioredlock ========== .. image:: https://github.com/joanvila/aioredlock/workflows/Tests/badge.svg :target: https://travis-ci.org/joanvila/aioredlock .. image:: https://codecov.io/gh/joanvila/aioredlock/branch/master/graph/badge.svg :target: https://codecov.io/gh/joanvila/aioredlock .. image:: https://badge.fury.io/py/aioredlock.svg :target: https://pypi.python.org/pypi/aioredlock The asyncio redlock_ algorithm implementation. Redlock and asyncio ------------------- The redlock algorithm is a distributed lock implementation for Redis_. There are many implementations of it in several languages. This is the asyncio_ compatible implementation for Python 3.6+. Usage ----- ..
code-block:: python from aioredlock import Aioredlock, LockError, Sentinel # Define a list of connections to your Redis instances: redis_instances = [ ('localhost', 6379), {'host': 'localhost', 'port': 6379, 'db': 1}, 'redis://localhost:6379/2', Sentinel(('localhost', 26379), master='leader', db=3), Sentinel('redis://localhost:26379/4?master=leader&encoding=utf-8'), Sentinel('rediss://:password@localhost:26379/5?master=leader&encoding=utf-8&ssl_cert_reqs=CERT_NONE'), ] # Create a lock manager: lock_manager = Aioredlock(redis_instances) # Check whether the resource is acquired by any other redlock instance: assert not await lock_manager.is_locked("resource_name") # Try to acquire the lock: try: lock = await lock_manager.lock("resource_name", lock_timeout=10) except LockError: print('Lock not acquired') raise # Now the lock is acquired: assert lock.valid assert await lock_manager.is_locked("resource_name") # Extend the lifetime of the lock: await lock_manager.extend(lock, lock_timeout=10) # Raises LockError if the lock manager can not extend the lock lifetime # on more than half of the Redis instances. # Release the lock: await lock_manager.unlock(lock) # Raises LockError if the lock manager can not release the lock # on more than half of the Redis instances.
# The released lock becomes invalid: assert not lock.valid assert not await lock_manager.is_locked("resource_name") # Or you can use the lock as an async context manager: try: async with await lock_manager.lock("resource_name") as lock: assert lock.valid is True # Do your stuff having the lock await lock.extend() # alias for lock_manager.extend(lock) # Do more stuff having the lock assert lock.valid is False # lock will be released by context manager except LockError: print('Lock not acquired') raise # Clear the connections with Redis: await lock_manager.destroy() How it works ------------ The Aioredlock constructor accepts the following optional parameters: - ``redis_connections``: A list of connections (dictionary of host and port and kwargs for ``aioredis.create_redis_pool()``, or tuple ``(host, port)``, or string Redis URI) where the Redis instances are running. The default value is ``[{'host': 'localhost', 'port': 6379}]``. - ``retry_count``: An integer representing the maximum number of retries allowed to acquire the lock. The default value is ``3``. - ``retry_delay_min`` and ``retry_delay_max``: Float values representing the waiting time (in seconds) before the next retry attempt. The default values are ``0.1`` and ``0.3``, respectively. In order to acquire the lock, the ``lock`` function should be called. If the lock operation is successful, ``lock.valid`` will be true; if the lock is not acquired, a ``LockError`` is raised. From that moment, the lock is valid until the ``unlock`` function is called or the ``lock_timeout`` is reached. Call the ``extend`` function to reset the lifetime of the lock to the ``lock_timeout`` interval. Use the ``is_locked`` function to check if the resource is locked by another redlock instance. In order to clear all the connections with Redis, the lock_manager ``destroy`` method can be called. To-do ----- .. _redlock: https://redis.io/topics/distlock .. _Redis: https://redis.io ..
_asyncio: https://docs.python.org/3/library/asyncio.html python-aioredlock_0.7.3.orig/aioredlock.egg-info/SOURCES.txt0000644000000000000000000000055114376445217020633 0ustar00LICENSE.md README.rst setup.cfg setup.py aioredlock/__init__.py aioredlock/algorithm.py aioredlock/errors.py aioredlock/lock.py aioredlock/redis.py aioredlock/sentinel.py aioredlock/utility.py aioredlock.egg-info/PKG-INFO aioredlock.egg-info/SOURCES.txt aioredlock.egg-info/dependency_links.txt aioredlock.egg-info/requires.txt aioredlock.egg-info/top_level.txtpython-aioredlock_0.7.3.orig/aioredlock.egg-info/dependency_links.txt0000644000000000000000000000000114376445217023014 0ustar00 python-aioredlock_0.7.3.orig/aioredlock.egg-info/requires.txt0000644000000000000000000000025214376445217021345 0ustar00aioredis<2.0.0 attrs>=17.4.0 [cicd] codecov [examples] aiodocker [package] bump2version twine wheel [test] flake8 pytest-asyncio pytest-cov pytest-mock pytest==6.1.0 python-aioredlock_0.7.3.orig/aioredlock.egg-info/top_level.txt0000644000000000000000000000001314376445217021472 0ustar00aioredlock python-aioredlock_0.7.3.orig/aioredlock/__init__.py0000644000000000000000000000050314056425656017363 0ustar00from aioredlock.algorithm import Aioredlock from aioredlock.errors import LockError, LockAcquiringError, LockRuntimeError from aioredlock.lock import Lock from aioredlock.sentinel import Sentinel __all__ = ( 'Aioredlock', 'Lock', 'LockError', 'LockAcquiringError', 'LockRuntimeError', 'Sentinel' ) python-aioredlock_0.7.3.orig/aioredlock/algorithm.py0000644000000000000000000002276414200503467017613 0ustar00import asyncio import contextlib import logging import random import uuid import attr from aioredlock.errors import LockError from aioredlock.lock import Lock from aioredlock.redis import Redis from aioredlock.utility import clean_password @attr.s class Aioredlock: redis_connections = attr.ib( default=[{"host": "localhost", "port": 6379}], repr=clean_password ) retry_count = attr.ib(default=3, 
converter=int) retry_delay_min = attr.ib(default=0.1, converter=float) retry_delay_max = attr.ib(default=0.3, converter=float) internal_lock_timeout = attr.ib(default=10.0, converter=float) def __attrs_post_init__(self): self.redis = Redis(self.redis_connections) self._watchdogs = {} self._locks = {} @retry_count.validator def _validate_retry_count(self, attribute, value): """ Validate that retry_count is greater than or equal to 1 """ if value < 1: raise ValueError("Retry count must be greater or equal 1.") @internal_lock_timeout.validator def _validate_internal_lock_timeout(self, attribute, value): """ Validate that internal_lock_timeout is greater than 0 """ if value <= 0: raise ValueError("Internal lock_timeout must be greater than 0 seconds.") @retry_delay_min.validator @retry_delay_max.validator def _validate_retry_delay(self, attribute, value): """ Validate that retry_delay_min and retry_delay_max are greater than 0 """ if value <= 0: raise ValueError("Retry delay must be greater than 0 seconds.") @property def log(self): return logging.getLogger(__name__) async def _set_lock(self, resource, lock_identifier, lease_time): error = RuntimeError('Retry count less than one') # Proportional drift time to the length of the lock # See https://redis.io/topics/distlock#is-the-algorithm-asynchronous for more info drift = lease_time * 0.01 + 0.002 try: # global try/except to catch CancelledError for n in range(self.retry_count): self.log.debug('Acquiring lock "%s" try %d/%d', resource, n + 1, self.retry_count) if n != 0: delay = random.uniform(self.retry_delay_min, self.retry_delay_max) await asyncio.sleep(delay) try: elapsed_time = await self.redis.set_lock(resource, lock_identifier, lease_time) except LockError as exc: error = exc continue if lease_time - elapsed_time - drift <= 0: error = LockError('Lock timeout') self.log.debug('Timeout in acquiring the lock "%s"', resource) continue error = None break else: # break never reached raise error except (Exception,
asyncio.CancelledError): # cleanup in case of fault or cancellation will run in background async def cleanup(): self.log.debug('Cleaning up lock "%s"', resource) with contextlib.suppress(LockError): await self.redis.unset_lock(resource, lock_identifier) asyncio.ensure_future(cleanup()) raise async def _auto_extend(self, lock): """ Tries to reset the lock's lifetime to lock_timeout every 0.6*lock_timeout automatically In case of fault the LockError exception will be raised :param lock: :class:`aioredlock.Lock` :raises: LockError in case of fault """ await asyncio.sleep(0.6 * self.internal_lock_timeout) try: await self.extend(lock) except Exception: self.log.debug('Error in extending the lock "%s"', lock.resource) self._watchdogs[lock.resource] = asyncio.ensure_future(self._auto_extend(lock)) async def lock(self, resource, lock_timeout=None, lock_identifier=None): """ Tries to acquire the lock. If the lock is correctly acquired, the valid property of the returned lock is True. In case of fault the LockError exception will be raised :param resource str: The string identifier of the resource to lock :param lock_timeout int: Lock's lifetime :param lock_identifier str: identifier for the instance of the lock :return: :class:`aioredlock.Lock` :raises: LockError in case of fault """ lock_identifier = lock_identifier or str(uuid.uuid4()) if lock_timeout is not None and lock_timeout <= 0: raise ValueError("Lock timeout must be greater than 0 seconds.") lease_time = lock_timeout or self.internal_lock_timeout await self._set_lock(resource, lock_identifier, lease_time) lock = Lock(self, resource, lock_identifier, lock_timeout, valid=True) if lock_timeout is None: self._watchdogs[lock.resource] = asyncio.ensure_future(self._auto_extend(lock)) self._locks[resource] = lock return lock async def extend(self, lock, lock_timeout=None): """ Tries to reset the lock's lifetime to lock_timeout In case of fault the LockError exception will be raised :param lock: :class:`aioredlock.Lock` 
:param lock_timeout: extend the lock's lifetime to lock_timeout :raises: RuntimeError if lock is not valid :raises: LockError in case of fault """ self.log.debug('Extending lock "%s"', lock.resource) if not lock.valid: raise RuntimeError('Lock is not valid') if lock_timeout is not None and lock_timeout <= 0: raise ValueError("Lock timeout must be greater than 0 seconds.") new_lease_time = lock_timeout or lock.lock_timeout or self.internal_lock_timeout try: await self._set_lock(lock.resource, lock.id, new_lease_time) except Exception: with contextlib.suppress(LockError): await self.unlock(lock) raise async def unlock(self, lock): """ Release the lock and set its validity to False if the lock is successfully released. In case of fault the LockError exception will be raised :param lock: :class:`aioredlock.Lock` :raises: LockError in case of fault """ self.log.debug('Releasing lock "%s"', lock.resource) lock.valid = False if lock.resource in self._watchdogs: self._watchdogs[lock.resource].cancel() done, _ = await asyncio.wait([self._watchdogs[lock.resource]]) for fut in done: try: await fut except asyncio.CancelledError: pass except Exception: self.log.exception('Can not unlock "%s"', lock.resource) self._watchdogs.pop(lock.resource) await self.redis.unset_lock(lock.resource, lock.id) # raises LockError if can not unlock self._locks.pop(lock.resource, None) async def is_locked(self, resource_or_lock): """ Checks if the resource or the lock is locked by any redlock instance.
:param resource_or_lock: resource name or aioredlock.Lock instance :returns: True if locked else False """ if isinstance(resource_or_lock, Lock): resource = resource_or_lock.resource elif isinstance(resource_or_lock, str): resource = resource_or_lock else: raise TypeError( 'Argument should be either an aioredlock.Lock instance or a string, ' '%s is given.' % type(resource_or_lock) ) return await self.redis.is_locked(resource) async def destroy(self): """ Cancel all _watchdogs, unlock all locks and clear all the redis connections """ self.log.debug('Destroying %s', repr(self)) for resource, lock in self._locks.copy().items(): if lock.valid: try: await self.unlock(lock) except Exception: self.log.exception('Can not unlock "%s"', resource) self._locks.clear() self._watchdogs.clear() await self.redis.clear_connections() async def get_active_locks(self): """ Return all stored locks that are valid. .. note:: This function is only really useful in learning if there are no active locks. It is possible that by the time a lock is returned from this function it is no longer active. """ ret = [] for lock in self._locks.values(): if lock.valid is True and await lock.is_locked(): ret.append(lock) return ret async def get_lock(self, resource, lock_identifier): """ Recreate an aioredlock.Lock from the given params and the TTL from redis, which also effectively checks that the lock is still valid. :param resource: The string identifier of the resource to lock :param lock_identifier: The identifier of the lock :return: a new `aioredlock.Lock`.
""" ttl = await self.redis.get_lock_ttl(resource, lock_identifier) lock = Lock(self, resource, lock_identifier, ttl, valid=True) return lock python-aioredlock_0.7.3.orig/aioredlock/errors.py0000644000000000000000000000064114056425656017143 0ustar00class AioredlockError(Exception): """ Base exception for aioredlock """ class LockError(AioredlockError): """ Error in acquiring or releasing the lock """ class LockAcquiringError(LockError): """ Error in acquiring the lock during normal operation """ class LockRuntimeError(LockError): """ Error in acquiring or releasing the lock due to an unexpected event """ python-aioredlock_0.7.3.orig/aioredlock/lock.py0000644000000000000000000000107113701134424016537 0ustar00import attr @attr.s class Lock: lock_manager = attr.ib() resource = attr.ib() id = attr.ib() lock_timeout = attr.ib(default=10.0) valid = attr.ib(default=False) async def __aenter__(self): return self async def __aexit__(self, exc_type, exc, tb): await self.lock_manager.unlock(self) async def extend(self): await self.lock_manager.extend(self) async def release(self): await self.lock_manager.unlock(self) async def is_locked(self): return await self.lock_manager.is_locked(self) python-aioredlock_0.7.3.orig/aioredlock/redis.py0000644000000000000000000004130114065367330016725 0ustar00import asyncio import logging import re import time from distutils.version import StrictVersion from itertools import groupby import aioredis from aioredlock.errors import LockError, LockAcquiringError, LockRuntimeError from aioredlock.sentinel import Sentinel from aioredlock.utility import clean_password def all_equal(iterable): """Returns True if all the elements are equal to each other""" g = groupby(iterable) return next(g, True) and not next(g, False) def raise_error(results, default_message): errors = [e for e in results if isinstance(e, BaseException)] if any(type(e) is LockRuntimeError for e in errors): raise [e for e in errors if type(e) is LockRuntimeError][0] elif any(type(e) 
is LockAcquiringError for e in errors): raise [e for e in errors if type(e) is LockAcquiringError][0] else: raise LockError(default_message) from errors[0] class Instance: # KEYS[1] - lock resource key # ARGS[1] - lock unique identifier # ARGS[2] - expiration time in milliseconds SET_LOCK_SCRIPT = """ local identifier = redis.call('get', KEYS[1]) if not identifier or identifier == ARGV[1] then return redis.call("set", KEYS[1], ARGV[1], 'PX', ARGV[2]) else return redis.error_reply('ERROR') end""" # KEYS[1] - lock resource key # ARGS[1] - lock unique identifier UNSET_LOCK_SCRIPT = """ local identifier = redis.call('get', KEYS[1]) if not identifier then return redis.status_reply('OK') elseif identifier == ARGV[1] then return redis.call("del", KEYS[1]) else return redis.error_reply('ERROR') end""" # KEYS[1] - lock resource key GET_LOCK_TTL_SCRIPT = """ local identifier = redis.call('get', KEYS[1]) if not identifier then return redis.error_reply('ERROR') elseif identifier == ARGV[1] then return redis.call("TTL", KEYS[1]) else return redis.error_reply('ERROR') end""" def __init__(self, connection): """ Redis instance constructor Constructor takes single argument - a redis host address The address can be one of the following: * a dict - {'host': 'localhost', 'port': 6379, 'db': 0, 'password': 'pass'} all keys except host and port will be passed as kwargs to the aioredis.create_redis_pool(); * an aioredlock.redis.Sentinel object; * a Redis URI - "redis://host:6379/0?encoding=utf-8"; * a (host, port) tuple - ('localhost', 6379); * or a unix domain socket path string - "/path/to/redis.sock". * a redis connection pool. 
:param connection: redis host address (dict, tuple or str) """ self.connection = connection self._pool = None self._lock = asyncio.Lock() self.set_lock_script_sha1 = None self.unset_lock_script_sha1 = None self.get_lock_ttl_script_sha1 = None @property def log(self): return logging.getLogger(__name__) def __repr__(self): connection_details = clean_password(self.connection) return "<%s(connection='%s'>" % (self.__class__.__name__, connection_details) @staticmethod async def _create_redis_pool(*args, **kwargs): """ Adapter to support both aioredis-0.3.0 and aioredis-1.0.0 For aioredis-1.0.0 and later calls: aioredis.create_redis_pool(*args, **kwargs) For aioredis-0.3.0 calls: aioredis.create_pool(*args, **kwargs) """ if StrictVersion(aioredis.__version__) >= StrictVersion('1.0.0'): # pragma no cover return await aioredis.create_redis_pool(*args, **kwargs) else: # pragma no cover return await aioredis.create_pool(*args, **kwargs) async def _register_scripts(self, redis): tasks = [] for script in [ self.SET_LOCK_SCRIPT, self.UNSET_LOCK_SCRIPT, self.GET_LOCK_TTL_SCRIPT, ]: script = re.sub(r'^\s+', '', script, flags=re.M).strip() tasks.append(redis.script_load(script)) ( self.set_lock_script_sha1, self.unset_lock_script_sha1, self.get_lock_ttl_script_sha1, ) = (r.decode() if isinstance(r, bytes) else r for r in await asyncio.gather(*tasks)) async def connect(self): """ Get a connection for this instance """ address, redis_kwargs = (), {} if isinstance(self.connection, Sentinel): self._pool = await self.connection.get_master() elif isinstance(self.connection, dict): # a dict like {'host': 'localhost', 'port': 6379, # 'db': 0, 'password': 'pass'} kwargs = self.connection.copy() address = ( kwargs.pop('host', 'localhost'), kwargs.pop('port', 6379) ) redis_kwargs = kwargs elif isinstance(self.connection, aioredis.Redis): self._pool = self.connection else: # a tuple or list ('localhost', 6379) # a string "redis://host:6379/0?encoding=utf-8" or # a unix domain socket path
"/path/to/redis.sock" address = self.connection if self._pool is None: if 'minsize' not in redis_kwargs: redis_kwargs['minsize'] = 1 if 'maxsize' not in redis_kwargs: redis_kwargs['maxsize'] = 100 async with self._lock: if self._pool is None: self.log.debug('Connecting %s', repr(self)) self._pool = await self._create_redis_pool(address, **redis_kwargs) if self.set_lock_script_sha1 is None or self.unset_lock_script_sha1 is None: with await self._pool as redis: await self._register_scripts(redis) return await self._pool async def close(self): """ Closes connection and resets pool """ if self._pool is not None and not isinstance(self.connection, aioredis.Redis): self._pool.close() await self._pool.wait_closed() self._pool = None async def set_lock(self, resource, lock_identifier, lock_timeout, register_scripts=False): """ Lock this instance and set lock expiration time to lock_timeout :param resource: redis key to set :param lock_identifier: unique id of the lock :param lock_timeout: timeout for lock in seconds :raises: LockError if lock is not acquired """ lock_timeout_ms = int(lock_timeout * 1000) try: with await self.connect() as redis: if register_scripts is True: await self._register_scripts(redis) await redis.evalsha( self.set_lock_script_sha1, keys=[resource], args=[lock_identifier, lock_timeout_ms] ) except aioredis.errors.ReplyError as exc: # script fault if exc.args[0].startswith('NOSCRIPT'): return await self.set_lock(resource, lock_identifier, lock_timeout, register_scripts=True) self.log.debug('Can not set lock "%s" on %s', resource, repr(self)) raise LockAcquiringError('Can not set lock') from exc except (aioredis.errors.RedisError, OSError) as exc: self.log.error('Can not set lock "%s" on %s: %s', resource, repr(self), repr(exc)) raise LockRuntimeError('Can not set lock') from exc except asyncio.CancelledError: self.log.debug('Lock "%s" is cancelled on %s', resource, repr(self)) raise except Exception: self.log.exception('Can not set lock "%s" on %s',
                               resource, repr(self))
            raise
        else:
            self.log.debug('Lock "%s" is set on %s', resource, repr(self))

    async def get_lock_ttl(self, resource, lock_identifier, register_scripts=False):
        """
        Fetch the remaining TTL of the lock held on this instance
        :param resource: redis key to get
        :param lock_identifier: unique id of the lock to get
        :param register_scripts: register redis scripts; usually already done, so False.
        :raises: LockError if lock is not available
        """
        try:
            with await self.connect() as redis:
                if register_scripts is True:
                    await self._register_scripts(redis)
                ttl = await redis.evalsha(
                    self.get_lock_ttl_script_sha1,
                    keys=[resource],
                    args=[lock_identifier]
                )
        except aioredis.errors.ReplyError as exc:  # script fault
            if exc.args[0].startswith('NOSCRIPT'):
                return await self.get_lock_ttl(resource, lock_identifier,
                                               register_scripts=True)
            self.log.debug('Can not get lock "%s" on %s',
                           resource, repr(self))
            raise LockAcquiringError('Can not get lock') from exc
        except (aioredis.errors.RedisError, OSError) as exc:
            self.log.error('Can not get lock "%s" on %s: %s',
                           resource, repr(self), repr(exc))
            raise LockRuntimeError('Can not get lock') from exc
        except asyncio.CancelledError:
            self.log.debug('Lock "%s" is cancelled on %s',
                           resource, repr(self))
            raise
        except Exception:
            self.log.exception('Can not get lock "%s" on %s',
                               resource, repr(self))
            raise
        else:
            self.log.debug('Lock "%s" with TTL %s is on %s',
                           resource, ttl, repr(self))
            return ttl

    async def unset_lock(self, resource, lock_identifier, register_scripts=False):
        """
        Unlock this instance
        :param resource: redis key to set
        :param lock_identifier: unique id of lock
        :raises: LockError if the lock resource was acquired with a different lock_identifier
        """
        try:
            with await self.connect() as redis:
                if register_scripts is True:
                    await self._register_scripts(redis)
                await redis.evalsha(
                    self.unset_lock_script_sha1,
                    keys=[resource],
                    args=[lock_identifier]
                )
        except aioredis.errors.ReplyError as exc:  # script fault
            if \
                    exc.args[0].startswith('NOSCRIPT'):
                return await self.unset_lock(resource, lock_identifier,
                                             register_scripts=True)
            self.log.debug('Can not unset lock "%s" on %s',
                           resource, repr(self))
            raise LockAcquiringError('Can not unset lock') from exc
        except (aioredis.errors.RedisError, OSError) as exc:
            self.log.error('Can not unset lock "%s" on %s: %s',
                           resource, repr(self), repr(exc))
            raise LockRuntimeError('Can not unset lock') from exc
        except asyncio.CancelledError:
            self.log.debug('Lock "%s" unset is cancelled on %s',
                           resource, repr(self))
            raise
        except Exception:
            self.log.exception('Can not unset lock "%s" on %s',
                               resource, repr(self))
            raise
        else:
            self.log.debug('Lock "%s" is unset on %s', resource, repr(self))

    async def is_locked(self, resource):
        """
        Checks if the resource is locked by any redlock instance.
        :param resource: The resource string name to check
        :returns: True if locked else False
        """
        with await self.connect() as redis:
            lock_identifier = await redis.get(resource)
        if lock_identifier:
            return True
        else:
            return False


class Redis:

    def __init__(self, redis_connections):
        self.instances = []
        for connection in redis_connections:
            self.instances.append(Instance(connection))

    @property
    def log(self):
        return logging.getLogger(__name__)

    async def set_lock(self, resource, lock_identifier, lock_timeout=10.0):
        """
        Tries to set the lock to all the redis instances
        :param resource: The resource string name to lock
        :param lock_identifier: The id of the lock.
            A unique string
        :param lock_timeout: lock's lifetime
        :return float: The elapsed time it took to lock the instances,
            in seconds
        :raises: LockRuntimeError or LockAcquiringError or LockError if the lock
            has not been set on at least (N/2 + 1) instances
        """
        start_time = time.monotonic()

        successes = await asyncio.gather(*[
            i.set_lock(resource, lock_identifier, lock_timeout)
            for i in self.instances
        ], return_exceptions=True)
        successful_sets = sum(s is None for s in successes)

        elapsed_time = time.monotonic() - start_time
        locked = successful_sets >= int(len(self.instances) / 2) + 1

        self.log.debug('Lock "%s" is set on %d/%d instances in %s seconds',
                       resource, successful_sets, len(self.instances), elapsed_time)

        if not locked:
            raise_error(successes, 'Can not acquire the lock "%s"' % resource)

        return elapsed_time

    async def get_lock_ttl(self, resource, lock_identifier=None):
        """
        Tries to fetch the lock's TTL from all the redis instances
        :param resource: The resource string name to fetch
        :param lock_identifier: The id of the lock. A unique string
        :return float: The TTL of that lock reported by redis
        :raises: LockRuntimeError or LockAcquiringError or LockError if the lock
            has not been set on at least (N/2 + 1) instances
        """
        start_time = time.monotonic()

        successes = await asyncio.gather(*[
            i.get_lock_ttl(resource, lock_identifier)
            for i in self.instances
        ], return_exceptions=True)
        successful_list = [s for s in successes if not isinstance(s, Exception)]
        # should check if all the values are approx. the same with math.isclose...
        locked = len(successful_list) >= int(len(self.instances) / 2) + 1
        success = all_equal(successful_list) and locked

        elapsed_time = time.monotonic() - start_time

        self.log.debug('Lock "%s" is set on %d/%d instances in %s seconds',
                       resource, len(successful_list), len(self.instances), elapsed_time)

        if not success:
            raise_error(successes, 'Could not fetch the TTL for lock "%s"' % resource)

        return successful_list[0]

    async def unset_lock(self, resource, lock_identifier):
        """
        Tries to unset the lock on all the redis instances
        :param resource: The resource string name to unlock
        :param lock_identifier: The id of the lock. A unique string
        :return float: The elapsed time it took to unlock the instances,
            in seconds
        :raises: LockRuntimeError or LockAcquiringError or LockError if the lock
            has no matching identifier in more than (N/2 - 1) instances
        """
        if not self.instances:
            return 0.0

        start_time = time.monotonic()

        successes = await asyncio.gather(*[
            i.unset_lock(resource, lock_identifier)
            for i in self.instances
        ], return_exceptions=True)
        successful_removes = sum(s is None for s in successes)

        elapsed_time = time.monotonic() - start_time
        unlocked = successful_removes >= int(len(self.instances) / 2) + 1

        self.log.debug('Lock "%s" is unset on %d/%d instances in %s seconds',
                       resource, successful_removes, len(self.instances), elapsed_time)

        if not unlocked:
            raise_error(successes, 'Can not release the lock')

        return elapsed_time

    async def is_locked(self, resource):
        """
        Checks if the resource is locked by any redlock instance.
        :param resource: The resource string name to check
        :returns: True if locked else False
        """
        successes = await asyncio.gather(*[
            i.is_locked(resource) for i in self.instances
        ], return_exceptions=True)
        successful_sets = sum(s is True for s in successes)

        return successful_sets >= int(len(self.instances) / 2) + 1

    async def clear_connections(self):
        self.log.debug('Clearing connections')
        if self.instances:
            coros = []
            while self.instances:
                coros.append(self.instances.pop().close())
            await asyncio.gather(*coros)
python-aioredlock_0.7.3.orig/aioredlock/sentinel.py
import re
import ssl
import urllib.parse

import aioredis.sentinel


class SentinelConfigError(Exception):
    '''
    Exception raised if the configuration is not valid when instantiating a
    Sentinel object.
    '''


class Sentinel:

    def __init__(self, connection, master=None, password=None, db=None, ssl_context=None):
        '''
        The connection address can be one of the following:
         * a dict - {'host': 'localhost', 'port': 6379}
         * a Redis URI - "redis://host:6379/0?encoding=utf-8&master=mymaster";
         * a (host, port) tuple - ('localhost', 6379);
         * a unix domain socket path string - "/path/to/redis.sock";
         * or a redis connection pool.

        :param connection: The connection address can be one of the following:
         * a dict - {
               'host': 'localhost',
               'port': 26379,
               'password': 'insecure',
               'db': 0,
               'master': 'mymaster',
           }
         * a Redis URI - "redis://:insecure@host:26379/0?master=mymaster&encoding=utf-8";
         * a (host, port) tuple - ('localhost', 26379);
        :param master: The name of the master to connect to via the sentinel
        :param password: The password to use to connect to the redis master
        :param db: The db to use on the redis master
        :param ssl_context: The ssl context to assign to the redis connection.
            If ssl_context is ``True``, the default ssl context in python will
            be assigned, otherwise an ssl context must be provided.

        Explicitly specified parameters overwrite implicit options in the
        ``connection`` variable.
        For example, if 'master' is specified in the connection dictionary but
        also passed as the master kwarg, the master kwarg will be used instead.
        '''
        address, kwargs = (), {}
        if isinstance(connection, dict):
            kwargs.update(connection)
            address = [(kwargs.pop('host'), kwargs.pop('port', 26379))]
        elif isinstance(connection, str) and re.match(r'^rediss?://.*\:\d+/\d?\??.*$', connection):
            url = urllib.parse.urlparse(connection)
            query = {key: value[0] for key, value in urllib.parse.parse_qs(url.query).items()}
            address = [(url.hostname, url.port or 6379)]
            dbnum = url.path.strip('/')
            if url.scheme == 'rediss':
                kwargs['ssl'] = ssl.create_default_context()
                verify_mode = query.pop('ssl_cert_reqs', None)
                if verify_mode is not None and hasattr(ssl, verify_mode.upper()):
                    if verify_mode == 'CERT_NONE':
                        kwargs['ssl'].check_hostname = False
                    kwargs['ssl'].verify_mode = getattr(ssl, verify_mode.upper())
            kwargs['db'] = int(dbnum) if dbnum.isdigit() else 0
            kwargs['password'] = url.password
            kwargs.update(query)
        elif isinstance(connection, tuple):
            address = [connection]
        elif isinstance(connection, list):
            address = connection
        else:
            raise SentinelConfigError('Invalid Sentinel Configuration')

        if db is not None:
            kwargs['db'] = db
        if password is not None:
            kwargs['password'] = password
        if ssl_context is True:
            kwargs['ssl'] = ssl.create_default_context()
        elif ssl_context is not None:
            kwargs['ssl'] = ssl_context

        self.master = kwargs.pop('master', None)
        if master:
            self.master = master
        if self.master is None:
            raise SentinelConfigError('Master name required for sentinel to be configured')

        kwargs['minsize'] = 1 if 'minsize' not in kwargs else int(kwargs['minsize'])
        kwargs['maxsize'] = 100 if 'maxsize' not in kwargs else int(kwargs['maxsize'])

        self.connection = address
        self.redis_kwargs = kwargs

    async def get_sentinel(self):
        '''
        Retrieve sentinel object from aioredis.
        '''
        return await aioredis.sentinel.create_sentinel(
            sentinels=self.connection,
            **self.redis_kwargs,
        )

    async def get_master(self):
        '''
        Get a ``Redis`` instance for the specified ``master``.
        '''
        sentinel = await self.get_sentinel()
        return await sentinel.master_for(self.master)
python-aioredlock_0.7.3.orig/aioredlock/utility.py
import re

REDIS_DSN_PATTERN = r"(rediss?:\/\/)(:.+@)?(.*)"


def clean_password(details, cast=str):
    if isinstance(details, dict):
        details = {**details}
        if "password" in details:
            details["password"] = "*******"
    elif isinstance(details, list):
        details = [clean_password(x, cast=type(x)) for x in details]
    elif isinstance(details, str) and re.match(REDIS_DSN_PATTERN, details):
        details = re.sub(REDIS_DSN_PATTERN, "\\1:*******@\\3", details)
    return cast(details)
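The `clean_password` substitution above can be illustrated standalone. This is a minimal sketch using the same `REDIS_DSN_PATTERN` and replacement string; the sample DSN and its password are made-up values, not anything from the library:

```python
import re

# Same pattern and replacement as clean_password in utility.py above.
REDIS_DSN_PATTERN = r"(rediss?:\/\/)(:.+@)?(.*)"

# Hypothetical DSN with an embedded password:
dsn = "redis://:s3cret@localhost:6379/0"
masked = re.sub(REDIS_DSN_PATTERN, "\\1:*******@\\3", dsn)
print(masked)  # redis://:*******@localhost:6379/0
```

Group 1 keeps the scheme, group 2 (the `:password@` part) is discarded and replaced by the mask, and group 3 keeps the host and path.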
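The quorum arithmetic used throughout the `Redis` class in redis.py above (`successes >= int(N / 2) + 1`, i.e. a strict majority of instances) can be sketched in isolation. `quorum_reached` is a hypothetical helper for illustration only, not part of the library:

```python
def quorum_reached(successes, total):
    # Mirrors the majority check used by set_lock/unset_lock/is_locked above.
    return successes >= int(total / 2) + 1

print(quorum_reached(2, 3))  # True: 2 of 3 instances is a strict majority
print(quorum_reached(2, 4))  # False: quorum for 4 instances is 3
```

This is why redlock deployments typically use an odd number of independent Redis instances: with N = 3 the lock tolerates one failed instance, while N = 4 still only tolerates one.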