backoff-2.2.1/LICENSE

The MIT License (MIT)

Copyright (c) 2014 litl, LLC.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

backoff-2.2.1/README.rst

backoff
=======

.. image:: https://travis-ci.org/litl/backoff.svg
    :target: https://travis-ci.org/litl/backoff
.. image:: https://coveralls.io/repos/litl/backoff/badge.svg
    :target: https://coveralls.io/r/litl/backoff?branch=python-3
.. image:: https://github.com/litl/backoff/workflows/CodeQL/badge.svg
    :target: https://github.com/litl/backoff/actions/workflows/codeql-analysis.yml
.. image:: https://img.shields.io/pypi/v/backoff.svg
    :target: https://pypi.python.org/pypi/backoff
.. image:: https://img.shields.io/github/license/litl/backoff
    :target: https://github.com/litl/backoff/blob/master/LICENSE

**Function decoration for backoff and retry**

This module provides function decorators which can be used to wrap a
function such that it will be retried until some condition is met. It is
meant to be of use when accessing unreliable resources with the potential
for intermittent failures, e.g. network resources and external APIs.
Somewhat more generally, it may also be of use for dynamically polling
resources for externally generated content.

Decorators support both regular functions for synchronous code and
`asyncio `__'s coroutines for asynchronous code.

Examples
========

Since Kenneth Reitz's `requests `_ module has become a de facto standard
for synchronous HTTP clients in Python, the networking examples below are
written using it, but it is in no way required by the backoff module.

@backoff.on_exception
---------------------

The ``on_exception`` decorator is used to retry when a specified exception
is raised. Here's an example using exponential backoff when any
``requests`` exception is raised:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException)
    def get_url(url):
        return requests.get(url)

The decorator will also accept a tuple of exceptions for cases where
the same backoff behavior is desired for more than one exception type:
.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          (requests.exceptions.Timeout,
                           requests.exceptions.ConnectionError))
    def get_url(url):
        return requests.get(url)

**Give Up Conditions**

Optional keyword arguments can specify conditions under which to give up.

The keyword argument ``max_time`` specifies the maximum amount of total
time in seconds that can elapse before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=60)
    def get_url(url):
        return requests.get(url)

Keyword argument ``max_tries`` specifies the maximum number of calls to
make to the target function before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_tries=8,
                          jitter=None)
    def get_url(url):
        return requests.get(url)

In some cases the raised exception instance itself may need to be
inspected in order to determine if it is a retryable condition. The
``giveup`` keyword arg can be used to specify a function which accepts
the exception and returns a truthy value if the exception should not be
retried:

.. code-block:: python

    def fatal_code(e):
        return 400 <= e.response.status_code < 500

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=300,
                          giveup=fatal_code)
    def get_url(url):
        return requests.get(url)

By default, when a give up event occurs, the exception in question is
reraised, so code calling an `on_exception`-decorated function may still
need to do exception handling. This behavior can optionally be disabled
using the `raise_on_giveup` keyword argument.

In the code below, `requests.exceptions.RequestException` will not be
raised when giveup occurs. Note that the decorated function will return
`None` in this case, regardless of the logic in the `on_exception`
handler.

.. code-block:: python

    def fatal_code(e):
        return 400 <= e.response.status_code < 500

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=300,
                          raise_on_giveup=False,
                          giveup=fatal_code)
    def get_url(url):
        return requests.get(url)

This is useful for non-mission-critical code where you still wish to retry
the code inside of `backoff.on_exception` but wish to proceed with
execution even if all retries fail.

@backoff.on_predicate
---------------------

The ``on_predicate`` decorator is used to retry when a particular condition
is true of the return value of the target function. This may be useful when
polling a resource for externally generated content.

Here's an example which uses a Fibonacci sequence backoff when the return
value of the target function is the empty list:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)
    def poll_for_messages(queue):
        return queue.get()

Extra keyword arguments are passed when initializing the wait generator,
so the ``max_value`` param above is passed as a keyword arg when
initializing the fibo generator.

When not specified, the predicate param defaults to the falsey test, so
the above can more concisely be written:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    def poll_for_message(queue):
        return queue.get()

More simply, a function which continues polling every second until it gets
a non-falsey result could be defined like this:

.. code-block:: python

    @backoff.on_predicate(backoff.constant, jitter=None, interval=1)
    def poll_for_message(queue):
        return queue.get()

The jitter is disabled in order to keep the polling frequency fixed.
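The wait generator itself can also be user-defined. The ``linear_backoff``
generator below is a hypothetical example rather than part of backoff: like
the built-in generators, it starts with a bare ``yield`` so the library's
initializing ``send(None)`` call is absorbed before the first real wait
value is produced, and its ``step`` keyword argument is forwarded to it at
initialization in the same way ``max_value`` is forwarded to
``backoff.fibo`` above. A minimal sketch, under those assumptions:

.. code-block:: python

    import backoff

    def linear_backoff(start=1, step=2):
        # Hypothetical user-defined wait generator, not shipped with backoff.
        # The bare first yield absorbs the library's initializing send(None)
        # call, mirroring the built-in generators in backoff._wait_gen.
        yield
        value = start
        while True:
            yield value
            value += step

    # step=2 is forwarded to linear_backoff() when the generator is
    # initialized, just like max_value is forwarded to backoff.fibo above.
    @backoff.on_predicate(linear_backoff, max_tries=10, step=2)
    def poll_for_message(queue):
        return queue.get()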
@backoff.runtime
----------------

You can also use the ``backoff.runtime`` generator to make use of the
return value or thrown exception of the decorated method.

For example, to use the value in the ``Retry-After`` header of the
response:

.. code-block:: python

    @backoff.on_predicate(
        backoff.runtime,
        predicate=lambda r: r.status_code == 429,
        value=lambda r: int(r.headers.get("Retry-After")),
        jitter=None,
    )
    def get_url():
        return requests.get(url)

Jitter
------

A jitter algorithm can be supplied with the ``jitter`` keyword arg to
either of the backoff decorators. This argument should be a function
accepting the original unadulterated backoff value and returning its
jittered counterpart.

As of version 1.2, the default jitter function ``backoff.full_jitter``
implements the 'Full Jitter' algorithm as defined in the AWS Architecture
Blog's `Exponential Backoff And Jitter `_ post. Note that with this
algorithm, the time yielded by the wait generator is actually the
*maximum* amount of time to wait.

Previous versions of backoff defaulted to adding some random number of
milliseconds (up to 1s) to the raw sleep value. If desired, this behavior
is now available as ``backoff.random_jitter``.
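Any function with this unary signature can be supplied. The
``equal_jitter`` function below is an illustrative sketch, not something
shipped with backoff; it always sleeps at least half of the computed
backoff value and jitters only the remainder:

.. code-block:: python

    import random

    import backoff
    import requests

    def equal_jitter(value: float) -> float:
        # Hypothetical custom jitter: keep half of the computed backoff
        # value fixed and randomize only the other half.
        return value / 2 + random.uniform(0, value / 2)

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          jitter=equal_jitter,
                          max_time=60)
    def get_url(url):
        return requests.get(url)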
Using multiple decorators
-------------------------

The backoff decorators may also be combined to specify different backoff
behavior for different cases:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.HTTPError,
                          max_time=60)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.Timeout,
                          max_time=300)
    def poll_for_message(queue):
        return queue.get()

Runtime Configuration
---------------------

The decorator functions ``on_exception`` and ``on_predicate`` are
generally evaluated at import time. This is fine when the keyword args are
passed as constant values, but suppose we want to consult a dictionary
with configuration options that only become available at runtime. The
relevant values are not available at import time. Instead, decorator
functions can be passed callables which are evaluated at runtime to obtain
the value:

.. code-block:: python

    def lookup_max_time():
        # pretend we have a global reference to 'app' here
        # and that it has a dictionary-like 'config' property
        return app.config["BACKOFF_MAX_TIME"]

    @backoff.on_exception(backoff.expo,
                          ValueError,
                          max_time=lookup_max_time)

Event handlers
--------------

Both backoff decorators optionally accept event handler functions using
the keyword arguments ``on_success``, ``on_backoff``, and ``on_giveup``.
This may be useful in reporting statistics or performing other custom
logging.

Handlers must be callables with a unary signature accepting a dict
argument. This dict contains the details of the invocation. Valid keys
include:

* *target*: reference to the function or method being invoked
* *args*: positional arguments to func
* *kwargs*: keyword arguments to func
* *tries*: number of invocation tries so far
* *elapsed*: elapsed time in seconds so far
* *wait*: seconds to wait (``on_backoff`` handler only)
* *value*: value triggering backoff (``on_predicate`` decorator only)

A handler which prints the details of the backoff event could be
implemented like so:

.. code-block:: python

    def backoff_hdlr(details):
        print("Backing off {wait:0.1f} seconds after {tries} tries "
              "calling function {target} with args {args} and kwargs "
              "{kwargs}".format(**details))

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=backoff_hdlr)
    def get_url(url):
        return requests.get(url)

**Multiple handlers per event type**

In all cases, iterables of handler functions are also accepted, which are
called in turn. For example, you might provide a simple list of handler
functions as the value of the ``on_backoff`` keyword arg:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=[backoff_hdlr1, backoff_hdlr2])
    def get_url(url):
        return requests.get(url)

**Getting exception info**

In the case of the ``on_exception`` decorator, all ``on_backoff`` and
``on_giveup`` handlers are called from within the except block for the
exception being handled. Therefore exception info is available to the
handler functions via the Python standard library, specifically
``sys.exc_info()`` or the ``traceback`` module. The exception is also
available at the *exception* key in the `details` dict passed to the
handlers.

Asynchronous code
-----------------

Backoff supports asynchronous execution in Python 3.5 and above.

To use backoff in asynchronous code based on `asyncio `__ you simply need
to apply ``backoff.on_exception`` or ``backoff.on_predicate`` to
coroutines. You can also use coroutines for the ``on_success``,
``on_backoff``, and ``on_giveup`` event handlers, with the interface
otherwise being identical.

The following examples use the `aiohttp `__ asynchronous HTTP
client/server library.

.. code-block:: python

    @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)
    async def get_url(url):
        async with aiohttp.ClientSession(raise_for_status=True) as session:
            async with session.get(url) as response:
                return await response.text()

Logging configuration
---------------------

By default, backoff and retry attempts are logged to the 'backoff'
logger. This logger is configured with a NullHandler, so there will be
nothing output unless you configure a handler. Programmatically, this
might be accomplished with something as simple as:

.. code-block:: python

    logging.getLogger('backoff').addHandler(logging.StreamHandler())

The default logging level is INFO, which corresponds to logging anytime a
retry event occurs. If you would instead like to log only when a giveup
event occurs, set the logger level to ERROR.

.. code-block:: python

    logging.getLogger('backoff').setLevel(logging.ERROR)

It is also possible to specify an alternate logger with the ``logger``
keyword argument. If a string value is specified the logger will be
looked up by name.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger='my_logger')
    # ...

It is also supported to specify a Logger (or LoggerAdapter) object
directly.

.. code-block:: python

    my_logger = logging.getLogger('my_logger')
    my_handler = logging.StreamHandler()
    my_logger.addHandler(my_handler)
    my_logger.setLevel(logging.ERROR)

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger=my_logger)
    # ...

Default logging can be disabled altogether by specifying ``logger=None``.
In this case, if desired, alternative logging behavior can be defined
using custom event handlers.
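For instance, a minimal sketch of pairing ``logger=None`` with custom
handlers; the handler functions and their log messages here are
illustrative assumptions, and only the ``details`` keys come from the list
of valid keys above:

.. code-block:: python

    import logging

    import backoff
    import requests

    log = logging.getLogger(__name__)

    def log_backoff(details):
        # 'wait' is only present in on_backoff handlers.
        log.warning("Retrying %s in %.1fs (try %d)",
                    details['target'].__name__,
                    details['wait'],
                    details['tries'])

    def log_giveup(details):
        log.error("Giving up on %s after %d tries",
                  details['target'].__name__,
                  details['tries'])

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=300,
                          logger=None,
                          on_backoff=log_backoff,
                          on_giveup=log_giveup)
    def get_url(url):
        return requests.get(url)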
././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1664997297.021946 backoff-2.2.1/backoff/__init__.py0000644000000000000000000000160214317353661013551 0ustar00# coding:utf-8 """ Function decoration for backoff and retry This module provides function decorators which can be used to wrap a function such that it will be retried until some condition is met. It is meant to be of use when accessing unreliable resources with the potential for intermittent failures i.e. network resources and external APIs. Somewhat more generally, it may also be of use for dynamically polling resources for externally generated content. For examples and full documentation see the README at https://github.com/litl/backoff """ from backoff._decorator import on_exception, on_predicate from backoff._jitter import full_jitter, random_jitter from backoff._wait_gen import constant, expo, fibo, runtime __all__ = [ 'on_predicate', 'on_exception', 'constant', 'expo', 'fibo', 'runtime', 'full_jitter', 'random_jitter', ] __version__ = "2.2.1" ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1664979867.9116116 backoff-2.2.1/backoff/_async.py0000644000000000000000000001516714317311634013273 0ustar00# coding:utf-8 import datetime import functools import asyncio from datetime import timedelta from backoff._common import (_init_wait_gen, _maybe_call, _next_wait) def _ensure_coroutine(coro_or_func): if asyncio.iscoroutinefunction(coro_or_func): return coro_or_func else: @functools.wraps(coro_or_func) async def f(*args, **kwargs): return coro_or_func(*args, **kwargs) return f def _ensure_coroutines(coros_or_funcs): return [_ensure_coroutine(f) for f in coros_or_funcs] async def _call_handlers(handlers, *, target, args, kwargs, tries, elapsed, **extra): details = { 'target': target, 'args': args, 'kwargs': kwargs, 'tries': tries, 'elapsed': elapsed, } details.update(extra) for handler in handlers: await handler(details) def retry_predicate(target, wait_gen, predicate, *, max_tries, max_time, jitter, on_success, on_backoff, on_giveup, wait_gen_kwargs): on_success = _ensure_coroutines(on_success) on_backoff = _ensure_coroutines(on_backoff) on_giveup = _ensure_coroutines(on_giveup) # Easy to implement, please report if you need this. 
assert not asyncio.iscoroutinefunction(max_tries) assert not asyncio.iscoroutinefunction(jitter) assert asyncio.iscoroutinefunction(target) @functools.wraps(target) async def retry(*args, **kwargs): # update variables from outer function args max_tries_value = _maybe_call(max_tries) max_time_value = _maybe_call(max_time) tries = 0 start = datetime.datetime.now() wait = _init_wait_gen(wait_gen, wait_gen_kwargs) while True: tries += 1 elapsed = timedelta.total_seconds(datetime.datetime.now() - start) details = { "target": target, "args": args, "kwargs": kwargs, "tries": tries, "elapsed": elapsed, } ret = await target(*args, **kwargs) if predicate(ret): max_tries_exceeded = (tries == max_tries_value) max_time_exceeded = (max_time_value is not None and elapsed >= max_time_value) if max_tries_exceeded or max_time_exceeded: await _call_handlers(on_giveup, **details, value=ret) break try: seconds = _next_wait(wait, ret, jitter, elapsed, max_time_value) except StopIteration: await _call_handlers(on_giveup, **details, value=ret) break await _call_handlers(on_backoff, **details, value=ret, wait=seconds) # Note: there is no convenient way to pass explicit event # loop to decorator, so here we assume that either default # thread event loop is set and correct (it mostly is # by default), or Python >= 3.5.3 or Python >= 3.6 is used # where loop.get_event_loop() in coroutine guaranteed to # return correct value. # See for details: # # await asyncio.sleep(seconds) continue else: await _call_handlers(on_success, **details, value=ret) break return ret return retry def retry_exception(target, wait_gen, exception, *, max_tries, max_time, jitter, giveup, on_success, on_backoff, on_giveup, raise_on_giveup, wait_gen_kwargs): on_success = _ensure_coroutines(on_success) on_backoff = _ensure_coroutines(on_backoff) on_giveup = _ensure_coroutines(on_giveup) giveup = _ensure_coroutine(giveup) # Easy to implement, please report if you need this. assert not asyncio.iscoroutinefunction(max_tries) assert not asyncio.iscoroutinefunction(jitter) @functools.wraps(target) async def retry(*args, **kwargs): max_tries_value = _maybe_call(max_tries) max_time_value = _maybe_call(max_time) tries = 0 start = datetime.datetime.now() wait = _init_wait_gen(wait_gen, wait_gen_kwargs) while True: tries += 1 elapsed = timedelta.total_seconds(datetime.datetime.now() - start) details = { "target": target, "args": args, "kwargs": kwargs, "tries": tries, "elapsed": elapsed, } try: ret = await target(*args, **kwargs) except exception as e: giveup_result = await giveup(e) max_tries_exceeded = (tries == max_tries_value) max_time_exceeded = (max_time_value is not None and elapsed >= max_time_value) if giveup_result or max_tries_exceeded or max_time_exceeded: await _call_handlers(on_giveup, **details, exception=e) if raise_on_giveup: raise return None try: seconds = _next_wait(wait, e, jitter, elapsed, max_time_value) except StopIteration: await _call_handlers(on_giveup, **details, exception=e) raise e await _call_handlers(on_backoff, **details, wait=seconds, exception=e) # Note: there is no convenient way to pass explicit event # loop to decorator, so here we assume that either default # thread event loop is set and correct (it mostly is # by default), or Python >= 3.5.3 or Python >= 3.6 is used # where loop.get_event_loop() in coroutine guaranteed to # return correct value. 
# See for details: # # await asyncio.sleep(seconds) else: await _call_handlers(on_success, **details) return ret return retry ././@PaxHeader0000000000000000000000000000003200000000000010210 xustar0026 mtime=1650973768.63746 backoff-2.2.1/backoff/_common.py0000644000000000000000000000662614231756111013444 0ustar00# coding:utf-8 import functools import logging import sys import traceback import warnings # Use module-specific logger with a default null handler. _logger = logging.getLogger('backoff') _logger.addHandler(logging.NullHandler()) # pragma: no cover _logger.setLevel(logging.INFO) # Evaluate arg that can be either a fixed value or a callable. def _maybe_call(f, *args, **kwargs): if callable(f): try: return f(*args, **kwargs) except TypeError: return f else: return f def _init_wait_gen(wait_gen, wait_gen_kwargs): kwargs = {k: _maybe_call(v) for k, v in wait_gen_kwargs.items()} initialized = wait_gen(**kwargs) initialized.send(None) # Initialize with an empty send return initialized def _next_wait(wait, send_value, jitter, elapsed, max_time): value = wait.send(send_value) try: if jitter is not None: seconds = jitter(value) else: seconds = value except TypeError: warnings.warn( "Nullary jitter function signature is deprecated. Use " "unary signature accepting a wait value in seconds and " "returning a jittered version of it.", DeprecationWarning, stacklevel=2, ) seconds = value + jitter() # don't sleep longer than remaining allotted max_time if max_time is not None: seconds = min(seconds, max_time - elapsed) return seconds def _prepare_logger(logger): if isinstance(logger, str): logger = logging.getLogger(logger) return logger # Configure handler list with user specified handler and optionally # with a default handler bound to the specified logger. def _config_handlers( user_handlers, *, default_handler=None, logger=None, log_level=None ): handlers = [] if logger is not None: assert log_level is not None, "Log level is not specified" # bind the specified logger to the default log handler log_handler = functools.partial( default_handler, logger=logger, log_level=log_level ) handlers.append(log_handler) if user_handlers is None: return handlers # user specified handlers can either be an iterable of handlers # or a single handler. either way append them to the list. if hasattr(user_handlers, '__iter__'): # add all handlers in the iterable handlers += list(user_handlers) else: # append a single handler handlers.append(user_handlers) return handlers # Default backoff handler def _log_backoff(details, logger, log_level): msg = "Backing off %s(...) for %.1fs (%s)" log_args = [details['target'].__name__, details['wait']] exc_typ, exc, _ = sys.exc_info() if exc is not None: exc_fmt = traceback.format_exception_only(exc_typ, exc)[-1] log_args.append(exc_fmt.rstrip("\n")) else: log_args.append(details['value']) logger.log(log_level, msg, *log_args) # Default giveup handler def _log_giveup(details, logger, log_level): msg = "Giving up %s(...) 
after %d tries (%s)" log_args = [details['target'].__name__, details['tries']] exc_typ, exc, _ = sys.exc_info() if exc is not None: exc_fmt = traceback.format_exception_only(exc_typ, exc)[-1] log_args.append(exc_fmt.rstrip("\n")) else: log_args.append(details['value']) logger.log(log_level, msg, *log_args) ././@PaxHeader0000000000000000000000000000003300000000000010211 xustar0027 mtime=1664979901.400606 backoff-2.2.1/backoff/_decorator.py0000644000000000000000000002311414317311675014134 0ustar00# coding:utf-8 import asyncio import logging import operator from typing import Any, Callable, Iterable, Optional, Type, Union from backoff._common import ( _prepare_logger, _config_handlers, _log_backoff, _log_giveup ) from backoff._jitter import full_jitter from backoff import _async, _sync from backoff._typing import ( _CallableT, _Handler, _Jitterer, _MaybeCallable, _MaybeLogger, _MaybeSequence, _Predicate, _WaitGenerator, ) def on_predicate(wait_gen: _WaitGenerator, predicate: _Predicate[Any] = operator.not_, *, max_tries: Optional[_MaybeCallable[int]] = None, max_time: Optional[_MaybeCallable[float]] = None, jitter: Union[_Jitterer, None] = full_jitter, on_success: Union[_Handler, Iterable[_Handler], None] = None, on_backoff: Union[_Handler, Iterable[_Handler], None] = None, on_giveup: Union[_Handler, Iterable[_Handler], None] = None, logger: _MaybeLogger = 'backoff', backoff_log_level: int = logging.INFO, giveup_log_level: int = logging.ERROR, **wait_gen_kwargs: Any) -> Callable[[_CallableT], _CallableT]: """Returns decorator for backoff and retry triggered by predicate. Args: wait_gen: A generator yielding successive wait times in seconds. predicate: A function which when called on the return value of the target function will trigger backoff when considered truthily. If not specified, the default behavior is to backoff on falsey return values. max_tries: The maximum number of attempts to make before giving up. In the case of failure, the result of the last attempt will be returned. The default value of None means there is no limit to the number of tries. If a callable is passed, it will be evaluated at runtime and its return value used. max_time: The maximum total amount of time to try for before giving up. If this time expires, the result of the last attempt will be returned. If a callable is passed, it will be evaluated at runtime and its return value used. jitter: A function of the value yielded by wait_gen returning the actual time to wait. This distributes wait times stochastically in order to avoid timing collisions across concurrent clients. Wait times are jittered by default using the full_jitter function. Jittering may be disabled altogether by passing jitter=None. on_success: Callable (or iterable of callables) with a unary signature to be called in the event of success. The parameter is a dict containing details about the invocation. on_backoff: Callable (or iterable of callables) with a unary signature to be called in the event of a backoff. The parameter is a dict containing details about the invocation. on_giveup: Callable (or iterable of callables) with a unary signature to be called in the event that max_tries is exceeded. The parameter is a dict containing details about the invocation. logger: Name of logger or Logger object to log to. Defaults to 'backoff'. backoff_log_level: log level for the backoff event. Defaults to "INFO" giveup_log_level: log level for the give up event. 
Defaults to "ERROR" **wait_gen_kwargs: Any additional keyword args specified will be passed to wait_gen when it is initialized. Any callable args will first be evaluated and their return values passed. This is useful for runtime configuration. """ def decorate(target): nonlocal logger, on_success, on_backoff, on_giveup logger = _prepare_logger(logger) on_success = _config_handlers(on_success) on_backoff = _config_handlers( on_backoff, default_handler=_log_backoff, logger=logger, log_level=backoff_log_level ) on_giveup = _config_handlers( on_giveup, default_handler=_log_giveup, logger=logger, log_level=giveup_log_level ) if asyncio.iscoroutinefunction(target): retry = _async.retry_predicate else: retry = _sync.retry_predicate return retry( target, wait_gen, predicate, max_tries=max_tries, max_time=max_time, jitter=jitter, on_success=on_success, on_backoff=on_backoff, on_giveup=on_giveup, wait_gen_kwargs=wait_gen_kwargs ) # Return a function which decorates a target with a retry loop. return decorate def on_exception(wait_gen: _WaitGenerator, exception: _MaybeSequence[Type[Exception]], *, max_tries: Optional[_MaybeCallable[int]] = None, max_time: Optional[_MaybeCallable[float]] = None, jitter: Union[_Jitterer, None] = full_jitter, giveup: _Predicate[Exception] = lambda e: False, on_success: Union[_Handler, Iterable[_Handler], None] = None, on_backoff: Union[_Handler, Iterable[_Handler], None] = None, on_giveup: Union[_Handler, Iterable[_Handler], None] = None, raise_on_giveup: bool = True, logger: _MaybeLogger = 'backoff', backoff_log_level: int = logging.INFO, giveup_log_level: int = logging.ERROR, **wait_gen_kwargs: Any) -> Callable[[_CallableT], _CallableT]: """Returns decorator for backoff and retry triggered by exception. Args: wait_gen: A generator yielding successive wait times in seconds. exception: An exception type (or tuple of types) which triggers backoff. max_tries: The maximum number of attempts to make before giving up. Once exhausted, the exception will be allowed to escape. The default value of None means there is no limit to the number of tries. If a callable is passed, it will be evaluated at runtime and its return value used. max_time: The maximum total amount of time to try for before giving up. Once expired, the exception will be allowed to escape. If a callable is passed, it will be evaluated at runtime and its return value used. jitter: A function of the value yielded by wait_gen returning the actual time to wait. This distributes wait times stochastically in order to avoid timing collisions across concurrent clients. Wait times are jittered by default using the full_jitter function. Jittering may be disabled altogether by passing jitter=None. giveup: Function accepting an exception instance and returning whether or not to give up. Optional. The default is to always continue. on_success: Callable (or iterable of callables) with a unary signature to be called in the event of success. The parameter is a dict containing details about the invocation. on_backoff: Callable (or iterable of callables) with a unary signature to be called in the event of a backoff. The parameter is a dict containing details about the invocation. on_giveup: Callable (or iterable of callables) with a unary signature to be called in the event that max_tries is exceeded. The parameter is a dict containing details about the invocation. raise_on_giveup: Boolean indicating whether the registered exceptions should be raised on giveup. Defaults to `True` logger: Name or Logger object to log to. 
Defaults to 'backoff'. backoff_log_level: log level for the backoff event. Defaults to "INFO" giveup_log_level: log level for the give up event. Defaults to "ERROR" **wait_gen_kwargs: Any additional keyword args specified will be passed to wait_gen when it is initialized. Any callable args will first be evaluated and their return values passed. This is useful for runtime configuration. """ def decorate(target): nonlocal logger, on_success, on_backoff, on_giveup logger = _prepare_logger(logger) on_success = _config_handlers(on_success) on_backoff = _config_handlers( on_backoff, default_handler=_log_backoff, logger=logger, log_level=backoff_log_level, ) on_giveup = _config_handlers( on_giveup, default_handler=_log_giveup, logger=logger, log_level=giveup_log_level, ) if asyncio.iscoroutinefunction(target): retry = _async.retry_exception else: retry = _sync.retry_exception return retry( target, wait_gen, exception, max_tries=max_tries, max_time=max_time, jitter=jitter, giveup=giveup, on_success=on_success, on_backoff=on_backoff, on_giveup=on_giveup, raise_on_giveup=raise_on_giveup, wait_gen_kwargs=wait_gen_kwargs ) # Return a function which decorates a target with a retry loop. return decorate ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1650973768.6377773 backoff-2.2.1/backoff/_jitter.py0000644000000000000000000000141614231756111013445 0ustar00# coding:utf-8 import random def random_jitter(value: float) -> float: """Jitter the value a random number of milliseconds. This adds up to 1 second of additional time to the original value. Prior to backoff version 1.2 this was the default jitter behavior. Args: value: The unadulterated backoff value. """ return value + random.random() def full_jitter(value: float) -> float: """Jitter the value across the full range (0 to value). This corresponds to the "Full Jitter" algorithm specified in the AWS blog's post on the performance of various jitter algorithms. (http://www.awsarchitectureblog.com/2015/03/backoff.html) Args: value: The unadulterated backoff value. 
""" return random.uniform(0, value) ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1664979867.9135916 backoff-2.2.1/backoff/_sync.py0000644000000000000000000001016614317311634013124 0ustar00# coding:utf-8 import datetime import functools import time from datetime import timedelta from backoff._common import (_init_wait_gen, _maybe_call, _next_wait) def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed, **extra): details = { 'target': target, 'args': args, 'kwargs': kwargs, 'tries': tries, 'elapsed': elapsed, } details.update(extra) for hdlr in hdlrs: hdlr(details) def retry_predicate(target, wait_gen, predicate, *, max_tries, max_time, jitter, on_success, on_backoff, on_giveup, wait_gen_kwargs): @functools.wraps(target) def retry(*args, **kwargs): max_tries_value = _maybe_call(max_tries) max_time_value = _maybe_call(max_time) tries = 0 start = datetime.datetime.now() wait = _init_wait_gen(wait_gen, wait_gen_kwargs) while True: tries += 1 elapsed = timedelta.total_seconds(datetime.datetime.now() - start) details = { "target": target, "args": args, "kwargs": kwargs, "tries": tries, "elapsed": elapsed, } ret = target(*args, **kwargs) if predicate(ret): max_tries_exceeded = (tries == max_tries_value) max_time_exceeded = (max_time_value is not None and elapsed >= max_time_value) if max_tries_exceeded or max_time_exceeded: _call_handlers(on_giveup, **details, value=ret) break try: seconds = _next_wait(wait, ret, jitter, elapsed, max_time_value) except StopIteration: _call_handlers(on_giveup, **details) break _call_handlers(on_backoff, **details, value=ret, wait=seconds) time.sleep(seconds) continue else: _call_handlers(on_success, **details, value=ret) break return ret return retry def retry_exception(target, wait_gen, exception, *, max_tries, max_time, jitter, giveup, on_success, on_backoff, on_giveup, raise_on_giveup, wait_gen_kwargs): @functools.wraps(target) def retry(*args, **kwargs): max_tries_value = _maybe_call(max_tries) max_time_value = _maybe_call(max_time) tries = 0 start = datetime.datetime.now() wait = _init_wait_gen(wait_gen, wait_gen_kwargs) while True: tries += 1 elapsed = timedelta.total_seconds(datetime.datetime.now() - start) details = { "target": target, "args": args, "kwargs": kwargs, "tries": tries, "elapsed": elapsed, } try: ret = target(*args, **kwargs) except exception as e: max_tries_exceeded = (tries == max_tries_value) max_time_exceeded = (max_time_value is not None and elapsed >= max_time_value) if giveup(e) or max_tries_exceeded or max_time_exceeded: _call_handlers(on_giveup, **details, exception=e) if raise_on_giveup: raise return None try: seconds = _next_wait(wait, e, jitter, elapsed, max_time_value) except StopIteration: _call_handlers(on_giveup, **details, exception=e) raise e _call_handlers(on_backoff, **details, wait=seconds, exception=e) time.sleep(seconds) else: _call_handlers(on_success, **details) return ret return retry ././@PaxHeader0000000000000000000000000000003400000000000010212 xustar0028 mtime=1664979901.4016204 backoff-2.2.1/backoff/_typing.py0000644000000000000000000000246014317311675013465 0ustar00# coding:utf-8 import logging import sys from typing import (Any, Callable, Coroutine, Dict, Generator, Sequence, Tuple, TypeVar, Union) if sys.version_info >= (3, 8): # pragma: no cover from typing import TypedDict else: # pragma: no cover # use typing_extensions if installed but don't require it try: from typing_extensions import TypedDict except ImportError: class TypedDict(dict): def 
            __init_subclass__(cls, **kwargs: Any) -> None:
                return super().__init_subclass__()


class _Details(TypedDict):
    target: Callable[..., Any]
    args: Tuple[Any, ...]
    kwargs: Dict[str, Any]
    tries: int
    elapsed: float


class Details(_Details, total=False):
    wait: float  # present in the on_backoff handler case for either decorator
    value: Any  # present in the on_predicate decorator case


T = TypeVar("T")

_CallableT = TypeVar('_CallableT', bound=Callable[..., Any])
_Handler = Union[
    Callable[[Details], None],
    Callable[[Details], Coroutine[Any, Any, None]],
]
_Jitterer = Callable[[float], float]
_MaybeCallable = Union[T, Callable[[], T]]
_MaybeLogger = Union[str, logging.Logger, None]
_MaybeSequence = Union[T, Sequence[T]]
_Predicate = Callable[[T], bool]
_WaitGenerator = Callable[..., Generator[float, None, None]]

backoff-2.2.1/backoff/_wait_gen.py

# coding:utf-8
import itertools
from typing import Any, Callable, Generator, Iterable, Optional, Union


def expo(
    base: float = 2,
    factor: float = 1,
    max_value: Optional[float] = None
) -> Generator[float, Any, None]:
    """Generator for exponential decay.

    Args:
        base: The mathematical base of the exponentiation operation
        factor: Factor to multiply the exponentiation by.
        max_value: The maximum value to yield. Once the value in the
             true exponential sequence exceeds this, the value
             of max_value will forever after be yielded.
    """
    # Advance past initial .send() call
    yield  # type: ignore[misc]
    n = 0
    while True:
        a = factor * base ** n
        if max_value is None or a < max_value:
            yield a
            n += 1
        else:
            yield max_value


def fibo(max_value: Optional[int] = None) -> Generator[int, None, None]:
    """Generator for Fibonacci decay.

    Args:
        max_value: The maximum value to yield. Once the value in the
             true Fibonacci sequence exceeds this, the value
             of max_value will forever after be yielded.
    """
    # Advance past initial .send() call
    yield  # type: ignore[misc]
    a = 1
    b = 1
    while True:
        if max_value is None or a < max_value:
            yield a
            a, b = b, a + b
        else:
            yield max_value


def constant(
    interval: Union[int, Iterable[float]] = 1
) -> Generator[float, None, None]:
    """Generator for constant intervals.

    Args:
        interval: A constant value to yield or an iterable of
            such values.
    """
    # Advance past initial .send() call
    yield  # type: ignore[misc]

    try:
        itr = iter(interval)  # type: ignore
    except TypeError:
        itr = itertools.repeat(interval)  # type: ignore

    for val in itr:
        yield val


def runtime(
    *,
    value: Callable[[Any], float]
) -> Generator[float, None, None]:
    """Generator that is based on parsing the return value or thrown
    exception of the decorated method.

    Args:
        value: a callable which takes as input the decorated
            function's return value or thrown exception and
            determines how long to wait
    """
    ret_or_exc = yield  # type: ignore[misc]
    while True:
        ret_or_exc = yield value(ret_or_exc)

backoff-2.2.1/backoff/py.typed

backoff-2.2.1/backoff/types.py

# coding:utf-8
from ._typing import Details

__all__ = [
    'Details'
]

backoff-2.2.1/pyproject.toml

[tool.poetry]
name = "backoff"
version = "2.2.1"
description = "Function decoration for backoff and retry"
authors = ["Bob Green <rgreen@aquent.com>"]
readme = "README.rst"
repository = "https://github.com/litl/backoff"
license = "MIT"
keywords = ["retry", "backoff", "decorators"]
classifiers = ['Development Status :: 5 - Production/Stable',
               'Intended Audience :: Developers',
               'Programming Language :: Python',
               'License :: OSI Approved :: MIT License',
               'Natural Language :: English',
               'Operating System :: OS Independent',
               'Programming Language :: Python',
               'Programming Language :: Python :: 3',
               'Programming Language :: Python :: 3.7',
               'Programming Language :: Python :: 3.8',
               'Programming Language :: Python :: 3.9',
               'Programming Language :: Python :: 3.10',
               'Topic :: Internet :: WWW/HTTP',
               'Topic :: Software Development :: Libraries :: Python Modules',
               'Topic :: Utilities']
packages = [
    { include = "backoff" },
]

[tool.poetry.dependencies]
python = "^3.7"

[tool.poetry.dev-dependencies]
flake8 = "^4.0.1"
mypy = "^0.942"
pytest = "^7.1.2"
pytest-asyncio = "^0.18.3"
pytest-cov = "^3.0.0"
requests = "^2.26.0"
responses = "^0.20.0"
types-requests = "^2.27.20"

[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"

backoff-2.2.1/setup.py

# -*- coding: utf-8 -*-
from setuptools import setup

packages = \
['backoff']

package_data = \
{'': ['*']}

setup_kwargs = {
    'name': 'backoff',
    'version': '2.2.1',
    'description': 'Function decoration for backoff and retry',
image:: https://img.shields.io/github/license/litl/backoff\n :target: https://github.com/litl/backoff/blob/master/LICENSE\n\n**Function decoration for backoff and retry**\n\nThis module provides function decorators which can be used to wrap a\nfunction such that it will be retried until some condition is met. It\nis meant to be of use when accessing unreliable resources with the\npotential for intermittent failures i.e. network resources and external\nAPIs. Somewhat more generally, it may also be of use for dynamically\npolling resources for externally generated content.\n\nDecorators support both regular functions for synchronous code and\n`asyncio `__\'s coroutines\nfor asynchronous code.\n\nExamples\n========\n\nSince Kenneth Reitz\'s `requests `_ module\nhas become a defacto standard for synchronous HTTP clients in Python,\nnetworking examples below are written using it, but it is in no way required\nby the backoff module.\n\n@backoff.on_exception\n---------------------\n\nThe ``on_exception`` decorator is used to retry when a specified exception\nis raised. Here\'s an example using exponential backoff when any\n``requests`` exception is raised:\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException)\n def get_url(url):\n return requests.get(url)\n\nThe decorator will also accept a tuple of exceptions for cases where\nthe same backoff behavior is desired for more than one exception type:\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n (requests.exceptions.Timeout,\n requests.exceptions.ConnectionError))\n def get_url(url):\n return requests.get(url)\n\n**Give Up Conditions**\n\nOptional keyword arguments can specify conditions under which to give\nup.\n\nThe keyword argument ``max_time`` specifies the maximum amount\nof total time in seconds that can elapse before giving up.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_time=60)\n def get_url(url):\n return requests.get(url)\n\n\nKeyword argument ``max_tries`` specifies the maximum number of calls\nto make to the target function before giving up.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_tries=8,\n jitter=None)\n def get_url(url):\n return requests.get(url)\n\n\nIn some cases the raised exception instance itself may need to be\ninspected in order to determine if it is a retryable condition. The\n``giveup`` keyword arg can be used to specify a function which accepts\nthe exception and returns a truthy value if the exception should not\nbe retried:\n\n.. code-block:: python\n\n def fatal_code(e):\n return 400 <= e.response.status_code < 500\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_time=300,\n giveup=fatal_code)\n def get_url(url):\n return requests.get(url)\n\nBy default, when a give up event occurs, the exception in question is reraised\nand so code calling an `on_exception`-decorated function may still\nneed to do exception handling. This behavior can optionally be disabled\nusing the `raise_on_giveup` keyword argument.\n\nIn the code below, `requests.exceptions.RequestException` will not be raised\nwhen giveup occurs. Note that the decorated function will return `None` in this\ncase, regardless of the logic in the `on_exception` handler.\n\n.. 
code-block:: python\n\n def fatal_code(e):\n return 400 <= e.response.status_code < 500\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_time=300,\n raise_on_giveup=False,\n giveup=fatal_code)\n def get_url(url):\n return requests.get(url)\n\nThis is useful for non-mission critical code where you still wish to retry\nthe code inside of `backoff.on_exception` but wish to proceed with execution\neven if all retries fail.\n\n@backoff.on_predicate\n---------------------\n\nThe ``on_predicate`` decorator is used to retry when a particular\ncondition is true of the return value of the target function. This may\nbe useful when polling a resource for externally generated content.\n\nHere\'s an example which uses a fibonacci sequence backoff when the\nreturn value of the target function is the empty list:\n\n.. code-block:: python\n\n @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)\n def poll_for_messages(queue):\n return queue.get()\n\nExtra keyword arguments are passed when initializing the\nwait generator, so the ``max_value`` param above is passed as a keyword\narg when initializing the fibo generator.\n\nWhen not specified, the predicate param defaults to the falsey test,\nso the above can more concisely be written:\n\n.. code-block:: python\n\n @backoff.on_predicate(backoff.fibo, max_value=13)\n def poll_for_message(queue):\n return queue.get()\n\nMore simply, a function which continues polling every second until it\ngets a non-falsey result could be defined like like this:\n\n.. code-block:: python\n\n @backoff.on_predicate(backoff.constant, jitter=None, interval=1)\n def poll_for_message(queue):\n return queue.get()\n\nThe jitter is disabled in order to keep the polling frequency fixed. \n\n@backoff.runtime\n----------------\n\nYou can also use the ``backoff.runtime`` generator to make use of the\nreturn value or thrown exception of the decorated method.\n\nFor example, to use the value in the ``Retry-After`` header of the response:\n\n.. code-block:: python\n\n @backoff.on_predicate(\n backoff.runtime,\n predicate=lambda r: r.status_code == 429,\n value=lambda r: int(r.headers.get("Retry-After")),\n jitter=None,\n )\n def get_url():\n return requests.get(url)\n\nJitter\n------\n\nA jitter algorithm can be supplied with the ``jitter`` keyword arg to\neither of the backoff decorators. This argument should be a function\naccepting the original unadulterated backoff value and returning it\'s\njittered counterpart.\n\nAs of version 1.2, the default jitter function ``backoff.full_jitter``\nimplements the \'Full Jitter\' algorithm as defined in the AWS\nArchitecture Blog\'s `Exponential Backoff And Jitter\n`_ post.\nNote that with this algorithm, the time yielded by the wait generator\nis actually the *maximum* amount of time to wait.\n\nPrevious versions of backoff defaulted to adding some random number of\nmilliseconds (up to 1s) to the raw sleep value. If desired, this\nbehavior is now available as ``backoff.random_jitter``.\n\nUsing multiple decorators\n-------------------------\n\nThe backoff decorators may also be combined to specify different\nbackoff behavior for different cases:\n\n.. 
code-block:: python\n\n @backoff.on_predicate(backoff.fibo, max_value=13)\n @backoff.on_exception(backoff.expo,\n requests.exceptions.HTTPError,\n max_time=60)\n @backoff.on_exception(backoff.expo,\n requests.exceptions.Timeout,\n max_time=300)\n def poll_for_message(queue):\n return queue.get()\n\n\nRuntime Configuration\n---------------------\n\nThe decorator functions ``on_exception`` and ``on_predicate`` are\ngenerally evaluated at import time. This is fine when the keyword args\nare passed as constant values, but suppose we want to consult a\ndictionary with configuration options that only become available at\nruntime. The relevant values are not available at import time. Instead,\ndecorator functions can be passed callables which are evaluated at\nruntime to obtain the value:\n\n.. code-block:: python\n\n def lookup_max_time():\n # pretend we have a global reference to \'app\' here\n # and that it has a dictionary-like \'config\' property\n return app.config["BACKOFF_MAX_TIME"]\n\n @backoff.on_exception(backoff.expo,\n ValueError,\n max_time=lookup_max_time)\n\nEvent handlers\n--------------\n\nBoth backoff decorators optionally accept event handler functions\nusing the keyword arguments ``on_success``, ``on_backoff``, and ``on_giveup``.\nThis may be useful in reporting statistics or performing other custom\nlogging.\n\nHandlers must be callables with a unary signature accepting a dict\nargument. This dict contains the details of the invocation. Valid keys\ninclude:\n\n* *target*: reference to the function or method being invoked\n* *args*: positional arguments to func\n* *kwargs*: keyword arguments to func\n* *tries*: number of invocation tries so far\n* *elapsed*: elapsed time in seconds so far\n* *wait*: seconds to wait (``on_backoff`` handler only)\n* *value*: value triggering backoff (``on_predicate`` decorator only)\n\nA handler which prints the details of the backoff event could be\nimplemented like so:\n\n.. code-block:: python\n\n def backoff_hdlr(details):\n print ("Backing off {wait:0.1f} seconds after {tries} tries "\n "calling function {target} with args {args} and kwargs "\n "{kwargs}".format(**details))\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n on_backoff=backoff_hdlr)\n def get_url(url):\n return requests.get(url)\n\n**Multiple handlers per event type**\n\nIn all cases, iterables of handler functions are also accepted, which\nare called in turn. For example, you might provide a simple list of\nhandler functions as the value of the ``on_backoff`` keyword arg:\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n on_backoff=[backoff_hdlr1, backoff_hdlr2])\n def get_url(url):\n return requests.get(url)\n\n**Getting exception info**\n\nIn the case of the ``on_exception`` decorator, all ``on_backoff`` and\n``on_giveup`` handlers are called from within the except block for the\nexception being handled. Therefore exception info is available to the\nhandler functions via the python standard library, specifically\n``sys.exc_info()`` or the ``traceback`` module. 
The exception is also\navailable at the *exception* key in the `details` dict passed to the\nhandlers.\n\nAsynchronous code\n-----------------\n\nBackoff supports asynchronous execution in Python 3.5 and above.\n\nTo use backoff in asynchronous code based on\n`asyncio `__\nyou simply need to apply ``backoff.on_exception`` or ``backoff.on_predicate``\nto coroutines.\nYou can also use coroutines for the ``on_success``, ``on_backoff``, and\n``on_giveup`` event handlers, with the interface otherwise being identical.\n\nThe following examples use `aiohttp `__\nasynchronous HTTP client/server library.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)\n async def get_url(url):\n async with aiohttp.ClientSession(raise_for_status=True) as session:\n async with session.get(url) as response:\n return await response.text()\n\nLogging configuration\n---------------------\n\nBy default, backoff and retry attempts are logged to the \'backoff\'\nlogger. By default, this logger is configured with a NullHandler, so\nthere will be nothing output unless you configure a handler.\nProgrammatically, this might be accomplished with something as simple\nas:\n\n.. code-block:: python\n\n logging.getLogger(\'backoff\').addHandler(logging.StreamHandler())\n\nThe default logging level is INFO, which corresponds to logging\nanytime a retry event occurs. If you would instead like to log\nonly when a giveup event occurs, set the logger level to ERROR.\n\n.. code-block:: python\n\n logging.getLogger(\'backoff\').setLevel(logging.ERROR)\n\nIt is also possible to specify an alternate logger with the ``logger``\nkeyword argument. If a string value is specified the logger will be\nlooked up by name.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n\t\t\t logger=\'my_logger\')\n # ...\n\nIt is also supported to specify a Logger (or LoggerAdapter) object\ndirectly.\n\n.. code-block:: python\n\n my_logger = logging.getLogger(\'my_logger\')\n my_handler = logging.StreamHandler()\n my_logger.addHandler(my_handler)\n my_logger.setLevel(logging.ERROR)\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n\t\t\t logger=my_logger)\n # ...\n\nDefault logging can be disabled all together by specifying\n``logger=None``. 
In this case, if desired alternative logging behavior\ncould be defined by using custom event handlers.\n', 'author': 'Bob Green', 'author_email': 'rgreen@aquent.com', 'maintainer': 'None', 'maintainer_email': 'None', 'url': 'https://github.com/litl/backoff', 'packages': packages, 'package_data': package_data, 'python_requires': '>=3.7,<4.0', } setup(**setup_kwargs) backoff-2.2.1/PKG-INFO0000644000000000000000000003475300000000000011107 0ustar00Metadata-Version: 2.1 Name: backoff Version: 2.2.1 Summary: Function decoration for backoff and retry Home-page: https://github.com/litl/backoff License: MIT Keywords: retry,backoff,decorators Author: Bob Green Author-email: rgreen@aquent.com Requires-Python: >=3.7,<4.0 Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Natural Language :: English Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.10 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Topic :: Internet :: WWW/HTTP Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: Utilities Project-URL: Repository, https://github.com/litl/backoff Description-Content-Type: text/x-rst backoff ======= .. image:: https://travis-ci.org/litl/backoff.svg :target: https://travis-ci.org/litl/backoff .. image:: https://coveralls.io/repos/litl/backoff/badge.svg :target: https://coveralls.io/r/litl/backoff?branch=python-3 .. image:: https://github.com/litl/backoff/workflows/CodeQL/badge.svg :target: https://github.com/litl/backoff/actions/workflows/codeql-analysis.yml .. image:: https://img.shields.io/pypi/v/backoff.svg :target: https://pypi.python.org/pypi/backoff .. image:: https://img.shields.io/github/license/litl/backoff :target: https://github.com/litl/backoff/blob/master/LICENSE **Function decoration for backoff and retry** This module provides function decorators which can be used to wrap a function such that it will be retried until some condition is met. It is meant to be of use when accessing unreliable resources with the potential for intermittent failures i.e. network resources and external APIs. Somewhat more generally, it may also be of use for dynamically polling resources for externally generated content. Decorators support both regular functions for synchronous code and `asyncio `__'s coroutines for asynchronous code. Examples ======== Since Kenneth Reitz's `requests `_ module has become a defacto standard for synchronous HTTP clients in Python, networking examples below are written using it, but it is in no way required by the backoff module. @backoff.on_exception --------------------- The ``on_exception`` decorator is used to retry when a specified exception is raised. Here's an example using exponential backoff when any ``requests`` exception is raised: .. 
code-block:: python @backoff.on_exception(backoff.expo, requests.exceptions.RequestException) def get_url(url): return requests.get(url) The decorator will also accept a tuple of exceptions for cases where the same backoff behavior is desired for more than one exception type: .. code-block:: python @backoff.on_exception(backoff.expo, (requests.exceptions.Timeout, requests.exceptions.ConnectionError)) def get_url(url): return requests.get(url) **Give Up Conditions** Optional keyword arguments can specify conditions under which to give up. The keyword argument ``max_time`` specifies the maximum amount of total time in seconds that can elapse before giving up. .. code-block:: python @backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_time=60) def get_url(url): return requests.get(url) Keyword argument ``max_tries`` specifies the maximum number of calls to make to the target function before giving up. .. code-block:: python @backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=8, jitter=None) def get_url(url): return requests.get(url) In some cases the raised exception instance itself may need to be inspected in order to determine if it is a retryable condition. The ``giveup`` keyword arg can be used to specify a function which accepts the exception and returns a truthy value if the exception should not be retried: .. code-block:: python def fatal_code(e): return 400 <= e.response.status_code < 500 @backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_time=300, giveup=fatal_code) def get_url(url): return requests.get(url) By default, when a give up event occurs, the exception in question is reraised and so code calling an `on_exception`-decorated function may still need to do exception handling. This behavior can optionally be disabled using the `raise_on_giveup` keyword argument. In the code below, `requests.exceptions.RequestException` will not be raised when giveup occurs. Note that the decorated function will return `None` in this case, regardless of the logic in the `on_exception` handler. .. code-block:: python def fatal_code(e): return 400 <= e.response.status_code < 500 @backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_time=300, raise_on_giveup=False, giveup=fatal_code) def get_url(url): return requests.get(url) This is useful for non-mission critical code where you still wish to retry the code inside of `backoff.on_exception` but wish to proceed with execution even if all retries fail. @backoff.on_predicate --------------------- The ``on_predicate`` decorator is used to retry when a particular condition is true of the return value of the target function. This may be useful when polling a resource for externally generated content. Here's an example which uses a fibonacci sequence backoff when the return value of the target function is the empty list: .. code-block:: python @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13) def poll_for_messages(queue): return queue.get() Extra keyword arguments are passed when initializing the wait generator, so the ``max_value`` param above is passed as a keyword arg when initializing the fibo generator. When not specified, the predicate param defaults to the falsey test, so the above can more concisely be written: .. 
@backoff.runtime
----------------

You can also use the ``backoff.runtime`` generator to make use of the return
value or raised exception of the decorated method.

For example, to use the value in the ``Retry-After`` header of the response:

.. code-block:: python

    @backoff.on_predicate(
        backoff.runtime,
        predicate=lambda r: r.status_code == 429,
        value=lambda r: int(r.headers.get("Retry-After")),
        jitter=None,
    )
    def get_url(url):
        return requests.get(url)

Jitter
------

A jitter algorithm can be supplied with the ``jitter`` keyword arg to either
of the backoff decorators. This argument should be a function accepting the
original unadulterated backoff value and returning its jittered counterpart.

As of version 1.2, the default jitter function ``backoff.full_jitter``
implements the 'Full Jitter' algorithm as defined in the AWS Architecture
Blog's `Exponential Backoff And Jitter `_ post. Note that with this
algorithm, the time yielded by the wait generator is actually the *maximum*
amount of time to wait.

Previous versions of backoff defaulted to adding some random number of
milliseconds (up to 1s) to the raw sleep value. If desired, this behavior is
now available as ``backoff.random_jitter``.

Using multiple decorators
-------------------------

The backoff decorators may also be combined to specify different backoff
behavior for different cases:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.HTTPError,
                          max_time=60)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.Timeout,
                          max_time=300)
    def poll_for_message(queue):
        return queue.get()

Runtime Configuration
---------------------

The decorator functions ``on_exception`` and ``on_predicate`` are generally
evaluated at import time. This is fine when the keyword args are passed as
constant values, but suppose we want to consult a dictionary with
configuration options that only become available at runtime. The relevant
values are not available at import time. Instead, decorator functions can be
passed callables which are evaluated at runtime to obtain the value:

.. code-block:: python

    def lookup_max_time():
        # pretend we have a global reference to 'app' here
        # and that it has a dictionary-like 'config' property
        return app.config["BACKOFF_MAX_TIME"]

    @backoff.on_exception(backoff.expo,
                          ValueError,
                          max_time=lookup_max_time)
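The snippet above leaves the decorated function implicit. A self-contained
version of the same pattern might look like the following sketch, where
``SETTINGS`` is a hypothetical stand-in for whatever configuration source
becomes available at runtime:

.. code-block:: python

    import requests
    import backoff

    SETTINGS = {}  # hypothetical config dict, populated at runtime

    def lookup_max_time():
        # Evaluated when the decorated function is called, not at import
        # time, so the value can change after startup.
        return SETTINGS.get("BACKOFF_MAX_TIME", 60)

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=lookup_max_time)
    def get_url(url):
        return requests.get(url)

    SETTINGS["BACKOFF_MAX_TIME"] = 300  # e.g. after loading app config
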
Event handlers
--------------

Both backoff decorators optionally accept event handler functions using the
keyword arguments ``on_success``, ``on_backoff``, and ``on_giveup``. This may
be useful in reporting statistics or performing other custom logging.

Handlers must be callables with a unary signature accepting a dict argument.
This dict contains the details of the invocation. Valid keys include:

* *target*: reference to the function or method being invoked
* *args*: positional arguments to func
* *kwargs*: keyword arguments to func
* *tries*: number of invocation tries so far
* *elapsed*: elapsed time in seconds so far
* *wait*: seconds to wait (``on_backoff`` handler only)
* *value*: value triggering backoff (``on_predicate`` decorator only)

A handler which prints the details of the backoff event could be implemented
like so:

.. code-block:: python

    def backoff_hdlr(details):
        print("Backing off {wait:0.1f} seconds after {tries} tries "
              "calling function {target} with args {args} and kwargs "
              "{kwargs}".format(**details))

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=backoff_hdlr)
    def get_url(url):
        return requests.get(url)

**Multiple handlers per event type**

In all cases, iterables of handler functions are also accepted, which are
called in turn. For example, you might provide a simple list of handler
functions as the value of the ``on_backoff`` keyword arg:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=[backoff_hdlr1, backoff_hdlr2])
    def get_url(url):
        return requests.get(url)

**Getting exception info**

In the case of the ``on_exception`` decorator, all ``on_backoff`` and
``on_giveup`` handlers are called from within the except block for the
exception being handled. Therefore exception info is available to the handler
functions via the Python standard library, specifically ``sys.exc_info()`` or
the ``traceback`` module. The exception is also available at the *exception*
key in the `details` dict passed to the handlers.

Asynchronous code
-----------------

Backoff supports asynchronous execution in Python 3.5 and above.

To use backoff in asynchronous code based on `asyncio `__ you simply need to
apply ``backoff.on_exception`` or ``backoff.on_predicate`` to coroutines. You
can also use coroutines for the ``on_success``, ``on_backoff``, and
``on_giveup`` event handlers, with the interface otherwise being identical.

The following examples use the `aiohttp `__ asynchronous HTTP client/server
library.

.. code-block:: python

    @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)
    async def get_url(url):
        async with aiohttp.ClientSession(raise_for_status=True) as session:
            async with session.get(url) as response:
                return await response.text()
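Since coroutines are also accepted as event handlers, a sketch combining the
two might look like this (the ``log_backoff`` handler name is illustrative
only):

.. code-block:: python

    import aiohttp
    import backoff

    async def log_backoff(details):  # hypothetical coroutine handler
        print("Backing off {wait:0.1f}s after {tries} tries".format(**details))

    @backoff.on_exception(backoff.expo,
                          aiohttp.ClientError,
                          max_time=60,
                          on_backoff=log_backoff)
    async def get_url(url):
        async with aiohttp.ClientSession(raise_for_status=True) as session:
            async with session.get(url) as response:
                return await response.text()
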
Logging configuration
---------------------

By default, backoff and retry attempts are logged to the 'backoff' logger,
which is configured with a NullHandler, so there will be no output unless you
configure a handler. Programmatically, this might be accomplished with
something as simple as:

.. code-block:: python

    logging.getLogger('backoff').addHandler(logging.StreamHandler())

The default logging level is INFO, which corresponds to logging anytime a
retry event occurs. If you would instead like to log only when a giveup event
occurs, set the logger level to ERROR.

.. code-block:: python

    logging.getLogger('backoff').setLevel(logging.ERROR)

It is also possible to specify an alternate logger with the ``logger``
keyword argument. If a string value is specified the logger will be looked up
by name.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger='my_logger')
    # ...

It is also supported to specify a Logger (or LoggerAdapter) object directly.

.. code-block:: python

    my_logger = logging.getLogger('my_logger')
    my_handler = logging.StreamHandler()
    my_logger.addHandler(my_handler)
    my_logger.setLevel(logging.ERROR)

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger=my_logger)
    # ...

Default logging can be disabled altogether by specifying ``logger=None``. In
this case, if desired, alternative logging behavior could be defined by using
custom event handlers.
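As a sketch of that last point, default logging could be replaced entirely
with custom handlers along these lines (the ``myapp.retries`` logger name and
the handler functions are illustrative, not part of the library):

.. code-block:: python

    import logging
    import requests
    import backoff

    log = logging.getLogger("myapp.retries")  # hypothetical application logger

    def log_backoff(details):
        log.warning("Retrying %s (try %d, waiting %.1fs)",
                    details["target"].__name__, details["tries"], details["wait"])

    def log_giveup(details):
        log.error("Giving up on %s after %d tries",
                  details["target"].__name__, details["tries"])

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=60,
                          logger=None,               # disable default 'backoff' logging
                          on_backoff=log_backoff,
                          on_giveup=log_giveup)
    def get_url(url):
        return requests.get(url)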