--- backoff-1.11.1/LICENSE ---

The MIT License (MIT)

Copyright (c) 2014 litl, LLC.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

--- backoff-1.11.1/README.rst ---

backoff
=======

.. image:: https://travis-ci.org/litl/backoff.svg?branch=master
    :target: https://travis-ci.org/litl/backoff?branch=master
.. image:: https://coveralls.io/repos/litl/backoff/badge.svg?branch=master
    :target: https://coveralls.io/r/litl/backoff?branch=master
.. image:: https://img.shields.io/pypi/v/backoff.svg
    :target: https://pypi.python.org/pypi/backoff

**Function decoration for backoff and retry**

This module provides function decorators which can be used to wrap a
function such that it will be retried until some condition is met. It
is meant to be of use when accessing unreliable resources with the
potential for intermittent failures, i.e. network resources and
external APIs. Somewhat more generally, it may also be of use for
dynamically polling resources for externally generated content.

Decorators support both regular functions for synchronous code and
`asyncio <https://docs.python.org/3/library/asyncio.html>`_ coroutines
for asynchronous code.

Examples
========

Since Kenneth Reitz's `requests <https://requests.readthedocs.io>`_ module
has become a de facto standard for synchronous HTTP clients in Python,
the networking examples below are written using it, but it is in no way
required by the backoff module.

@backoff.on_exception
---------------------

The ``on_exception`` decorator is used to retry when a specified exception
is raised. Here's an example using exponential backoff when any
``requests`` exception is raised:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException)
    def get_url(url):
        return requests.get(url)

The decorator will also accept a tuple of exceptions for cases where
the same backoff behavior is desired for more than one exception type:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          (requests.exceptions.Timeout,
                           requests.exceptions.ConnectionError))
    def get_url(url):
        return requests.get(url)

**Give Up Conditions**

Optional keyword arguments can specify conditions under which to give
up.

The keyword argument ``max_time`` specifies the maximum amount
of total time in seconds that can elapse before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=60)
    def get_url(url):
        return requests.get(url)

Keyword argument ``max_tries`` specifies the maximum number of calls
to make to the target function before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_tries=8,
                          jitter=None)
    def get_url(url):
        return requests.get(url)

In some cases the raised exception instance itself may need to be
inspected in order to determine if it is a retryable condition. The
``giveup`` keyword arg can be used to specify a function which accepts
the exception and returns a truthy value if the exception should not
be retried:

.. code-block:: python

    def fatal_code(e):
        return 400 <= e.response.status_code < 500

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=300,
                          giveup=fatal_code)
    def get_url(url):
        return requests.get(url)

When a give up event occurs, the exception in question is reraised, so
code calling an ``on_exception``-decorated function may still need to do
exception handling.

@backoff.on_predicate
---------------------

The ``on_predicate`` decorator is used to retry when a particular
condition is true of the return value of the target function. This may
be useful when polling a resource for externally generated content.

Here's an example which uses a Fibonacci sequence backoff when the
return value of the target function is the empty list:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)
    def poll_for_messages(queue):
        return queue.get()

Extra keyword arguments are passed when initializing the wait
generator, so the ``max_value`` param above is passed as a keyword arg
when initializing the fibo generator.

When not specified, the predicate param defaults to the falsey test,
so the above can more concisely be written:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    def poll_for_message(queue):
        return queue.get()

More simply, a function which continues polling every second until it
gets a non-falsey result could be defined like this:

.. code-block:: python

    @backoff.on_predicate(backoff.constant, interval=1)
    def poll_for_message(queue):
        return queue.get()

Jitter
------

A jitter algorithm can be supplied with the ``jitter`` keyword arg to
either of the backoff decorators. This argument should be a function
accepting the original unadulterated backoff value and returning its
jittered counterpart.

As of version 1.2, the default jitter function ``backoff.full_jitter``
implements the 'Full Jitter' algorithm as defined in the AWS
Architecture Blog's `Exponential Backoff And Jitter
<http://www.awsarchitectureblog.com/2015/03/backoff.html>`_ post.
Note that with this algorithm, the time yielded by the wait generator
is actually the *maximum* amount of time to wait.

Previous versions of backoff defaulted to adding some random number of
milliseconds (up to 1s) to the raw sleep value. If desired, this
behavior is now available as ``backoff.random_jitter``.

Using multiple decorators
-------------------------

The backoff decorators may also be combined to specify different
backoff behavior for different cases:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.HTTPError,
                          max_time=60)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.Timeout,
                          max_time=300)
    def poll_for_message(queue):
        return queue.get()

Runtime Configuration
---------------------

The decorator functions ``on_exception`` and ``on_predicate`` are
generally evaluated at import time. This is fine when the keyword args
are passed as constant values, but suppose we want to consult a
dictionary with configuration options that only become available at
runtime. The relevant values are not available at import time.
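This pitfall is ordinary Python behavior: decorator arguments are evaluated
the moment the ``@`` line runs, before any of the application's startup code
has had a chance to populate the configuration. A minimal sketch makes the
failure mode concrete; note that ``CONFIG`` and the stand-in decorator here
are hypothetical illustrations, not part of backoff:

```python
CONFIG = {}  # hypothetical app config, populated only after startup


def on_exception_stub(max_time=None):
    # Stand-in for backoff.on_exception: just returns the function
    # unchanged, so the example runs without any dependencies.
    def wrap(fn):
        return fn
    return wrap


try:
    # CONFIG["BACKOFF_MAX_TIME"] is looked up at decoration time,
    # before the config has been loaded, so it raises KeyError.
    @on_exception_stub(max_time=CONFIG["BACKOFF_MAX_TIME"])
    def get_value():
        return 42
except KeyError:
    outcome = "decorator argument evaluated too early"

print(outcome)
```

The sections that follow show backoff's answer to this: passing a callable,
which defers the lookup until the decorated function is actually invoked.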
Instead, decorator functions can be passed callables which are
evaluated at runtime to obtain the value:

.. code-block:: python

    def lookup_max_time():
        # pretend we have a global reference to 'app' here
        # and that it has a dictionary-like 'config' property
        return app.config["BACKOFF_MAX_TIME"]

    @backoff.on_exception(backoff.expo,
                          ValueError,
                          max_time=lookup_max_time)

Event handlers
--------------

Both backoff decorators optionally accept event handler functions
using the keyword arguments ``on_success``, ``on_backoff``, and
``on_giveup``. This may be useful in reporting statistics or
performing other custom logging.

Handlers must be callables with a unary signature accepting a dict
argument. This dict contains the details of the invocation. Valid keys
include:

* *target*: reference to the function or method being invoked
* *args*: positional arguments to func
* *kwargs*: keyword arguments to func
* *tries*: number of invocation tries so far
* *elapsed*: elapsed time in seconds so far
* *wait*: seconds to wait (``on_backoff`` handler only)
* *value*: value triggering backoff (``on_predicate`` decorator only)

A handler which prints the details of the backoff event could be
implemented like so:

.. code-block:: python

    def backoff_hdlr(details):
        print("Backing off {wait:0.1f} seconds after {tries} tries "
              "calling function {target} with args {args} and kwargs "
              "{kwargs}".format(**details))

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=backoff_hdlr)
    def get_url(url):
        return requests.get(url)

**Multiple handlers per event type**

In all cases, iterables of handler functions are also accepted, which
are called in turn. For example, you might provide a simple list of
handler functions as the value of the ``on_backoff`` keyword arg:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=[backoff_hdlr1, backoff_hdlr2])
    def get_url(url):
        return requests.get(url)

**Getting exception info**

In the case of the ``on_exception`` decorator, all ``on_backoff`` and
``on_giveup`` handlers are called from within the except block for the
exception being handled. Therefore exception info is available to the
handler functions via the Python standard library, specifically
``sys.exc_info()`` or the ``traceback`` module.

Asynchronous code
-----------------

Backoff supports asynchronous execution in Python 3.5 and above.

To use backoff in asynchronous code based on
`asyncio <https://docs.python.org/3/library/asyncio.html>`_ you simply
need to apply ``backoff.on_exception`` or ``backoff.on_predicate`` to
coroutines. You can also use coroutines for the ``on_success``,
``on_backoff``, and ``on_giveup`` event handlers, with the interface
otherwise being identical.

The following examples use the
`aiohttp <https://aiohttp.readthedocs.io/>`_ asynchronous HTTP
client/server library.

.. code-block:: python

    @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)
    async def get_url(url):
        async with aiohttp.ClientSession(raise_for_status=True) as session:
            async with session.get(url) as response:
                return await response.text()

Logging configuration
---------------------

Backoff and retry attempts are logged to the 'backoff' logger, which
is configured with a NullHandler by default, so nothing will be output
unless you configure a handler. Programmatically, this might be
accomplished with something as simple as:

.. code-block:: python

    logging.getLogger('backoff').addHandler(logging.StreamHandler())

The default logging level is INFO, which corresponds to logging any
time a retry event occurs. If you would instead like to log only when
a giveup event occurs, set the logger level to ERROR.

.. code-block:: python

    logging.getLogger('backoff').setLevel(logging.ERROR)

It is also possible to specify an alternate logger with the ``logger``
keyword argument. If a string value is specified, the logger will be
looked up by name.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger='my_logger')
    # ...

A Logger (or LoggerAdapter) object may also be specified directly.

.. code-block:: python

    my_logger = logging.getLogger('my_logger')
    my_handler = logging.StreamHandler()
    my_logger.addHandler(my_handler)
    my_logger.setLevel(logging.ERROR)

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger=my_logger)
    # ...

Default logging can be disabled altogether by specifying
``logger=None``. In this case, alternative logging behavior can be
defined with custom event handlers if desired.

--- backoff-1.11.1/backoff/__init__.py ---

# coding:utf-8
"""
Function decoration for backoff and retry

This module provides function decorators which can be used to wrap a
function such that it will be retried until some condition is met. It
is meant to be of use when accessing unreliable resources with the
potential for intermittent failures, i.e. network resources and
external APIs. Somewhat more generally, it may also be of use for
dynamically polling resources for externally generated content.
For examples and full documentation see the README at
https://github.com/litl/backoff
"""

import sys
import warnings

from backoff._decorator import on_predicate, on_exception
from backoff._jitter import full_jitter, random_jitter
from backoff._wait_gen import constant, expo, fibo

__all__ = [
    'on_predicate',
    'on_exception',
    'constant',
    'expo',
    'fibo',
    'full_jitter',
    'random_jitter',
]

__version__ = '1.11.1'

if sys.version_info[0] < 3:
    warnings.warn(
        "Python 2.7 support is deprecated and will be dropped "
        "in the next release",
        DeprecationWarning,
    )  # pragma: no cover

--- backoff-1.11.1/backoff/_async.py ---

# coding:utf-8
import datetime
import functools
import asyncio  # Python 3.5 code and syntax is allowed in this file

from datetime import timedelta

from backoff._common import (_init_wait_gen, _maybe_call, _next_wait)


def _ensure_coroutine(coro_or_func):
    if asyncio.iscoroutinefunction(coro_or_func):
        return coro_or_func
    else:
        @functools.wraps(coro_or_func)
        async def f(*args, **kwargs):
            return coro_or_func(*args, **kwargs)
        return f


def _ensure_coroutines(coros_or_funcs):
    return [_ensure_coroutine(f) for f in coros_or_funcs]


async def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed,
                         **extra):
    details = {
        'target': target,
        'args': args,
        'kwargs': kwargs,
        'tries': tries,
        'elapsed': elapsed,
    }
    details.update(extra)
    for hdlr in hdlrs:
        await hdlr(details)


def retry_predicate(target, wait_gen, predicate,
                    max_tries, max_time, jitter,
                    on_success, on_backoff, on_giveup,
                    wait_gen_kwargs):
    on_success = _ensure_coroutines(on_success)
    on_backoff = _ensure_coroutines(on_backoff)
    on_giveup = _ensure_coroutines(on_giveup)

    # Easy to implement, please report if you need this.
    assert not asyncio.iscoroutinefunction(max_tries)
    assert not asyncio.iscoroutinefunction(jitter)
    assert asyncio.iscoroutinefunction(target)

    @functools.wraps(target)
    async def retry(*args, **kwargs):

        # change names because python 2.x doesn't have nonlocal
        max_tries_ = _maybe_call(max_tries)
        max_time_ = _maybe_call(max_time)

        tries = 0
        start = datetime.datetime.now()
        wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
        while True:
            tries += 1
            elapsed = timedelta.total_seconds(datetime.datetime.now() - start)
            details = (target, args, kwargs, tries, elapsed)

            ret = await target(*args, **kwargs)
            if predicate(ret):
                max_tries_exceeded = (tries == max_tries_)
                max_time_exceeded = (max_time_ is not None and
                                     elapsed >= max_time_)

                if max_tries_exceeded or max_time_exceeded:
                    await _call_handlers(on_giveup, *details, value=ret)
                    break

                try:
                    seconds = _next_wait(wait, jitter, elapsed, max_time_)
                except StopIteration:
                    await _call_handlers(on_giveup, *details, value=ret)
                    break

                await _call_handlers(on_backoff, *details, value=ret,
                                     wait=seconds)

                # Note: there is no convenient way to pass an explicit event
                # loop to the decorator, so here we assume that either the
                # default thread event loop is set and correct (it mostly is
                # by default), or Python >= 3.5.3 or Python >= 3.6 is used,
                # where loop.get_event_loop() in a coroutine is guaranteed
                # to return the correct value.
                # See for details:
                #
                #
                await asyncio.sleep(seconds)
                continue
            else:
                await _call_handlers(on_success, *details, value=ret)
                break

        return ret

    return retry


def retry_exception(target, wait_gen, exception,
                    max_tries, max_time, jitter, giveup,
                    on_success, on_backoff, on_giveup,
                    wait_gen_kwargs):
    on_success = _ensure_coroutines(on_success)
    on_backoff = _ensure_coroutines(on_backoff)
    on_giveup = _ensure_coroutines(on_giveup)
    giveup = _ensure_coroutine(giveup)

    # Easy to implement, please report if you need this.
    assert not asyncio.iscoroutinefunction(max_tries)
    assert not asyncio.iscoroutinefunction(jitter)

    @functools.wraps(target)
    async def retry(*args, **kwargs):

        # change names because python 2.x doesn't have nonlocal
        max_tries_ = _maybe_call(max_tries)
        max_time_ = _maybe_call(max_time)

        tries = 0
        start = datetime.datetime.now()
        wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
        while True:
            tries += 1
            elapsed = timedelta.total_seconds(datetime.datetime.now() - start)
            details = (target, args, kwargs, tries, elapsed)

            try:
                ret = await target(*args, **kwargs)
            except exception as e:
                giveup_result = await giveup(e)
                max_tries_exceeded = (tries == max_tries_)
                max_time_exceeded = (max_time_ is not None and
                                     elapsed >= max_time_)

                if giveup_result or max_tries_exceeded or max_time_exceeded:
                    await _call_handlers(on_giveup, *details)
                    raise

                try:
                    seconds = _next_wait(wait, jitter, elapsed, max_time_)
                except StopIteration:
                    await _call_handlers(on_giveup, *details)
                    raise e

                await _call_handlers(on_backoff, *details, wait=seconds)

                # Note: there is no convenient way to pass an explicit event
                # loop to the decorator, so here we assume that either the
                # default thread event loop is set and correct (it mostly is
                # by default), or Python >= 3.5.3 or Python >= 3.6 is used,
                # where loop.get_event_loop() in a coroutine is guaranteed
                # to return the correct value.
                # See for details:
                #
                #
                await asyncio.sleep(seconds)
            else:
                await _call_handlers(on_success, *details)
                return ret

    return retry

--- backoff-1.11.1/backoff/_common.py ---

# coding:utf-8
import functools
import logging
import sys
import traceback
import warnings


# python 2.7 -> 3.x compatibility for str and unicode
try:
    basestring
except NameError:  # pragma: python=3.5
    basestring = str


# Use module-specific logger with a default null handler.
_logger = logging.getLogger('backoff')
_logger.addHandler(logging.NullHandler())  # pragma: no cover
_logger.setLevel(logging.INFO)


# Evaluate arg that can be either a fixed value or a callable.
def _maybe_call(f, *args, **kwargs):
    return f(*args, **kwargs) if callable(f) else f


def _init_wait_gen(wait_gen, wait_gen_kwargs):
    kwargs = {k: _maybe_call(v) for k, v in wait_gen_kwargs.items()}
    return wait_gen(**kwargs)


def _next_wait(wait, jitter, elapsed, max_time):
    value = next(wait)
    try:
        if jitter is not None:
            seconds = jitter(value)
        else:
            seconds = value
    except TypeError:
        warnings.warn(
            "Nullary jitter function signature is deprecated. Use "
            "unary signature accepting a wait value in seconds and "
            "returning a jittered version of it.",
            DeprecationWarning,
            stacklevel=2,
        )
        seconds = value + jitter()

    # don't sleep longer than remaining allotted max_time
    if max_time is not None:
        seconds = min(seconds, max_time - elapsed)

    return seconds


def _prepare_logger(logger):
    if isinstance(logger, basestring):
        logger = logging.getLogger(logger)
    return logger


# Configure handler list with user specified handler and optionally
# with a default handler bound to the specified logger.
def _config_handlers(
        user_handlers, default_handler=None, logger=None, log_level=None
):
    handlers = []
    if logger is not None:
        assert log_level is not None, "Log level is not specified"
        # bind the specified logger to the default log handler
        log_handler = functools.partial(
            default_handler, logger=logger, log_level=log_level
        )
        handlers.append(log_handler)

    if user_handlers is None:
        return handlers

    # user specified handlers can either be an iterable of handlers
    # or a single handler. either way append them to the list.
    if hasattr(user_handlers, '__iter__'):
        # add all handlers in the iterable
        handlers += list(user_handlers)
    else:
        # append a single handler
        handlers.append(user_handlers)

    return handlers


# Default backoff handler
def _log_backoff(details, logger, log_level):
    msg = "Backing off %s(...) for %.1fs (%s)"
    log_args = [details['target'].__name__, details['wait']]

    exc_typ, exc, _ = sys.exc_info()
    if exc is not None:
        exc_fmt = traceback.format_exception_only(exc_typ, exc)[-1]
        log_args.append(exc_fmt.rstrip("\n"))
    else:
        log_args.append(details['value'])
    logger.log(log_level, msg, *log_args)


# Default giveup handler
def _log_giveup(details, logger, log_level):
    msg = "Giving up %s(...) after %d tries (%s)"
    log_args = [details['target'].__name__, details['tries']]

    exc_typ, exc, _ = sys.exc_info()
    if exc is not None:
        exc_fmt = traceback.format_exception_only(exc_typ, exc)[-1]
        log_args.append(exc_fmt.rstrip("\n"))
    else:
        log_args.append(details['value'])
    logger.log(log_level, msg, *log_args)

--- backoff-1.11.1/backoff/_decorator.py ---

# coding:utf-8
from __future__ import unicode_literals

import logging
import operator
import sys

from backoff._common import (
    _prepare_logger, _config_handlers, _log_backoff, _log_giveup
)
from backoff._jitter import full_jitter
from backoff import _sync


def on_predicate(wait_gen,
                 predicate=operator.not_,
                 max_tries=None,
                 max_time=None,
                 jitter=full_jitter,
                 on_success=None,
                 on_backoff=None,
                 on_giveup=None,
                 logger='backoff',
                 backoff_log_level=logging.INFO,
                 giveup_log_level=logging.ERROR,
                 **wait_gen_kwargs):
    """Returns decorator for backoff and retry triggered by predicate.

    Args:
        wait_gen: A generator yielding successive wait times in
            seconds.
        predicate: A function which, when called on the return value of
            the target function, will trigger backoff when the result
            is considered truthy. If not specified, the default
            behavior is to back off on falsey return values.
        max_tries: The maximum number of attempts to make before giving
            up. In the case of failure, the result of the last attempt
            will be returned. The default value of None means there is
            no limit to the number of tries.
            If a callable is passed, it will be evaluated at runtime
            and its return value used.
        max_time: The maximum total amount of time to try for before
            giving up. If this time expires, the result of the last
            attempt will be returned. If a callable is passed, it will
            be evaluated at runtime and its return value used.
        jitter: A function of the value yielded by wait_gen returning
            the actual time to wait. This distributes wait times
            stochastically in order to avoid timing collisions across
            concurrent clients. Wait times are jittered by default
            using the full_jitter function. Jittering may be disabled
            altogether by passing jitter=None.
        on_success: Callable (or iterable of callables) with a unary
            signature to be called in the event of success. The
            parameter is a dict containing details about the
            invocation.
        on_backoff: Callable (or iterable of callables) with a unary
            signature to be called in the event of a backoff. The
            parameter is a dict containing details about the
            invocation.
        on_giveup: Callable (or iterable of callables) with a unary
            signature to be called in the event that max_tries is
            exceeded. The parameter is a dict containing details about
            the invocation.
        logger: Name of logger or Logger object to log to. Defaults to
            'backoff'.
        backoff_log_level: log level for the backoff event. Defaults
            to "INFO".
        giveup_log_level: log level for the give up event. Defaults
            to "ERROR".
        **wait_gen_kwargs: Any additional keyword args specified will
            be passed to wait_gen when it is initialized. Any callable
            args will first be evaluated and their return values
            passed. This is useful for runtime configuration.
""" def decorate(target): # change names because python 2.x doesn't have nonlocal logger_ = _prepare_logger(logger) on_success_ = _config_handlers(on_success) on_backoff_ = _config_handlers( on_backoff, _log_backoff, logger_, backoff_log_level ) on_giveup_ = _config_handlers( on_giveup, _log_giveup, logger_, giveup_log_level ) retry = None if sys.version_info >= (3, 5): # pragma: python=3.5 import asyncio if asyncio.iscoroutinefunction(target): import backoff._async retry = backoff._async.retry_predicate if retry is None: retry = _sync.retry_predicate return retry(target, wait_gen, predicate, max_tries, max_time, jitter, on_success_, on_backoff_, on_giveup_, wait_gen_kwargs) # Return a function which decorates a target with a retry loop. return decorate def on_exception(wait_gen, exception, max_tries=None, max_time=None, jitter=full_jitter, giveup=lambda e: False, on_success=None, on_backoff=None, on_giveup=None, logger='backoff', backoff_log_level=logging.INFO, giveup_log_level=logging.ERROR, **wait_gen_kwargs): """Returns decorator for backoff and retry triggered by exception. Args: wait_gen: A generator yielding successive wait times in seconds. exception: An exception type (or tuple of types) which triggers backoff. max_tries: The maximum number of attempts to make before giving up. Once exhausted, the exception will be allowed to escape. The default value of None means there is no limit to the number of tries. If a callable is passed, it will be evaluated at runtime and its return value used. max_time: The maximum total amount of time to try for before giving up. Once expired, the exception will be allowed to escape. If a callable is passed, it will be evaluated at runtime and its return value used. jitter: A function of the value yielded by wait_gen returning the actual time to wait. This distributes wait times stochastically in order to avoid timing collisions across concurrent clients. Wait times are jittered by default using the full_jitter function. 
            Jittering may be disabled altogether by passing
            jitter=None.
        giveup: Function accepting an exception instance and returning
            whether or not to give up. Optional. The default is to
            always continue.
        on_success: Callable (or iterable of callables) with a unary
            signature to be called in the event of success. The
            parameter is a dict containing details about the
            invocation.
        on_backoff: Callable (or iterable of callables) with a unary
            signature to be called in the event of a backoff. The
            parameter is a dict containing details about the
            invocation.
        on_giveup: Callable (or iterable of callables) with a unary
            signature to be called in the event that max_tries is
            exceeded. The parameter is a dict containing details about
            the invocation.
        logger: Name or Logger object to log to. Defaults to
            'backoff'.
        backoff_log_level: log level for the backoff event. Defaults
            to "INFO".
        giveup_log_level: log level for the give up event. Defaults
            to "ERROR".
        **wait_gen_kwargs: Any additional keyword args specified will
            be passed to wait_gen when it is initialized. Any callable
            args will first be evaluated and their return values
            passed. This is useful for runtime configuration.
    """
    def decorate(target):
        # change names because python 2.x doesn't have nonlocal
        logger_ = _prepare_logger(logger)

        on_success_ = _config_handlers(on_success)
        on_backoff_ = _config_handlers(
            on_backoff, _log_backoff, logger_, backoff_log_level
        )
        on_giveup_ = _config_handlers(
            on_giveup, _log_giveup, logger_, giveup_log_level
        )

        retry = None
        if sys.version_info[:2] >= (3, 5):  # pragma: python=3.5
            import asyncio

            if asyncio.iscoroutinefunction(target):
                import backoff._async
                retry = backoff._async.retry_exception

        if retry is None:
            retry = _sync.retry_exception

        return retry(target, wait_gen, exception,
                     max_tries, max_time, jitter, giveup,
                     on_success_, on_backoff_, on_giveup_,
                     wait_gen_kwargs)

    # Return a function which decorates a target with a retry loop.
    return decorate

--- backoff-1.11.1/backoff/_jitter.py ---

# coding:utf-8
import random


def random_jitter(value):
    """Jitter the value a random number of milliseconds.

    This adds up to 1 second of additional time to the original value.
    Prior to backoff version 1.2 this was the default jitter behavior.

    Args:
        value: The unadulterated backoff value.
    """
    return value + random.random()


def full_jitter(value):
    """Jitter the value across the full range (0 to value).

    This corresponds to the "Full Jitter" algorithm specified in the
    AWS blog's post on the performance of various jitter algorithms.
    (http://www.awsarchitectureblog.com/2015/03/backoff.html)

    Args:
        value: The unadulterated backoff value.
    """
    return random.uniform(0, value)

--- backoff-1.11.1/backoff/_sync.py ---

# coding:utf-8
import datetime
import functools
import time

from datetime import timedelta

from backoff._common import (_init_wait_gen, _maybe_call, _next_wait)


def _call_handlers(hdlrs, target, args, kwargs, tries, elapsed, **extra):
    details = {
        'target': target,
        'args': args,
        'kwargs': kwargs,
        'tries': tries,
        'elapsed': elapsed,
    }
    details.update(extra)
    for hdlr in hdlrs:
        hdlr(details)


def retry_predicate(target, wait_gen, predicate,
                    max_tries, max_time, jitter,
                    on_success, on_backoff, on_giveup,
                    wait_gen_kwargs):

    @functools.wraps(target)
    def retry(*args, **kwargs):

        # change names because python 2.x doesn't have nonlocal
        max_tries_ = _maybe_call(max_tries)
        max_time_ = _maybe_call(max_time)

        tries = 0
        start = datetime.datetime.now()
        wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
        while True:
            tries += 1
            elapsed = timedelta.total_seconds(datetime.datetime.now() -
                                              start)
            details = (target, args, kwargs, tries, elapsed)

            ret = target(*args, **kwargs)
            if predicate(ret):
                max_tries_exceeded = (tries == max_tries_)
                max_time_exceeded = (max_time_ is not None and
                                     elapsed >= max_time_)

                if max_tries_exceeded or max_time_exceeded:
                    _call_handlers(on_giveup, *details, value=ret)
                    break

                try:
                    seconds = _next_wait(wait, jitter, elapsed, max_time_)
                except StopIteration:
                    _call_handlers(on_giveup, *details)
                    break

                _call_handlers(on_backoff, *details, value=ret,
                               wait=seconds)

                time.sleep(seconds)
                continue
            else:
                _call_handlers(on_success, *details, value=ret)
                break

        return ret

    return retry


def retry_exception(target, wait_gen, exception,
                    max_tries, max_time, jitter, giveup,
                    on_success, on_backoff, on_giveup,
                    wait_gen_kwargs):

    @functools.wraps(target)
    def retry(*args, **kwargs):

        # change names because python 2.x doesn't have nonlocal
        max_tries_ = _maybe_call(max_tries)
        max_time_ = _maybe_call(max_time)

        tries = 0
        start = datetime.datetime.now()
        wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
        while True:
            tries += 1
            elapsed = timedelta.total_seconds(datetime.datetime.now() -
                                              start)
            details = (target, args, kwargs, tries, elapsed)

            try:
                ret = target(*args, **kwargs)
            except exception as e:
                max_tries_exceeded = (tries == max_tries_)
                max_time_exceeded = (max_time_ is not None and
                                     elapsed >= max_time_)

                if giveup(e) or max_tries_exceeded or max_time_exceeded:
                    _call_handlers(on_giveup, *details)
                    raise

                try:
                    seconds = _next_wait(wait, jitter, elapsed, max_time_)
                except StopIteration:
                    _call_handlers(on_giveup, *details)
                    raise e

                _call_handlers(on_backoff, *details, wait=seconds)

                time.sleep(seconds)
            else:
                _call_handlers(on_success, *details)
                return ret

    return retry

--- backoff-1.11.1/backoff/_wait_gen.py ---

# coding:utf-8
import itertools


def expo(base=2, factor=1, max_value=None):
"""Generator for exponential decay. Args: base: The mathematical base of the exponentiation operation factor: Factor to multiply the exponentiation by. max_value: The maximum value to yield. Once the value in the true exponential sequence exceeds this, the value of max_value will forever after be yielded. """ n = 0 while True: a = factor * base ** n if max_value is None or a < max_value: yield a n += 1 else: yield max_value def fibo(max_value=None): """Generator for fibonaccial decay. Args: max_value: The maximum value to yield. Once the value in the true fibonacci sequence exceeds this, the value of max_value will forever after be yielded. """ a = 1 b = 1 while True: if max_value is None or a < max_value: yield a a, b = b, a + b else: yield max_value def constant(interval=1): """Generator for constant intervals. Args: interval: A constant value to yield or an iterable of such values. """ try: itr = iter(interval) except TypeError: itr = itertools.repeat(interval) for val in itr: yield val ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1626270735.5468836 backoff-1.11.1/pyproject.toml0000644000000000000000000000307600000000000014237 0ustar0000000000000000[tool.poetry] name = "backoff" version = "1.11.1" description = "Function decoration for backoff and retry" authors = ["Bob Green "] readme = "README.rst" repository = "https://github.com/litl/backoff" license = "MIT" keywords = ["retry", "backoff", "decorators"] classifiers = ['Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'Programming Language :: Python', 'License :: OSI Approved :: MIT License', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 
3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 'Topic :: Internet :: WWW/HTTP', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: Utilities'] packages = [ { include = "backoff" }, ] [tool.poetry.dependencies] python = "^2.7 || ^3.5" [tool.poetry.dev-dependencies] flake8 = "^3.6" pytest = "^4.0" pytest-cov = "^2.6" pytest-asyncio = {version = "^0.10.0",python = "^3.5"} autopep8 = "^1.5.7" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" ././@PaxHeader0000000000000000000000000000003400000000000011452 xustar000000000000000028 mtime=1626270911.0985248 backoff-1.11.1/setup.py0000644000000000000000000003057400000000000013040 0ustar0000000000000000# -*- coding: utf-8 -*- from setuptools import setup packages = \ ['backoff'] package_data = \ {'': ['*']} setup_kwargs = { 'name': 'backoff', 'version': '1.11.1', 'description': 'Function decoration for backoff and retry', 'long_description': 'backoff\n=======\n\n.. image:: https://travis-ci.org/litl/backoff.svg?branch=master\n :target: https://travis-ci.org/litl/backoff?branch=master\n.. image:: https://coveralls.io/repos/litl/backoff/badge.svg?branch=master\n :target: https://coveralls.io/r/litl/backoff?branch=master\n.. image:: https://img.shields.io/pypi/v/backoff.svg\n :target: https://pypi.python.org/pypi/backoff\n\n**Function decoration for backoff and retry**\n\nThis module provides function decorators which can be used to wrap a\nfunction such that it will be retried until some condition is met. It\nis meant to be of use when accessing unreliable resources with the\npotential for intermittent failures i.e. network resources and external\nAPIs. 
Somewhat more generally, it may also be of use for dynamically\npolling resources for externally generated content.\n\nDecorators support both regular functions for synchronous code and\n`asyncio `_\'s coroutines\nfor asynchronous code.\n\nExamples\n========\n\nSince Kenneth Reitz\'s `requests `_ module\nhas become a defacto standard for synchronous HTTP clients in Python,\nnetworking examples below are written using it, but it is in no way required\nby the backoff module.\n\n@backoff.on_exception\n---------------------\n\nThe ``on_exception`` decorator is used to retry when a specified exception\nis raised. Here\'s an example using exponential backoff when any\n``requests`` exception is raised:\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException)\n def get_url(url):\n return requests.get(url)\n\nThe decorator will also accept a tuple of exceptions for cases where\nthe same backoff behavior is desired for more than one exception type:\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n (requests.exceptions.Timeout,\n requests.exceptions.ConnectionError))\n def get_url(url):\n return requests.get(url)\n\n**Give Up Conditions**\n\nOptional keyword arguments can specify conditions under which to give\nup.\n\nThe keyword argument ``max_time`` specifies the maximum amount\nof total time in seconds that can elapse before giving up.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_time=60)\n def get_url(url):\n return requests.get(url)\n\n\nKeyword argument ``max_tries`` specifies the maximum number of calls\nto make to the target function before giving up.\n\n.. 
code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_tries=8,\n jitter=None)\n def get_url(url):\n return requests.get(url)\n\n\nIn some cases the raised exception instance itself may need to be\ninspected in order to determine if it is a retryable condition. The\n``giveup`` keyword arg can be used to specify a function which accepts\nthe exception and returns a truthy value if the exception should not\nbe retried:\n\n.. code-block:: python\n\n def fatal_code(e):\n return 400 <= e.response.status_code < 500\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n max_time=300,\n giveup=fatal_code)\n def get_url(url):\n return requests.get(url)\n\nWhen a give up event occurs, the exception in question is reraised\nand so code calling an `on_exception`-decorated function may still\nneed to do exception handling.\n\n@backoff.on_predicate\n---------------------\n\nThe ``on_predicate`` decorator is used to retry when a particular\ncondition is true of the return value of the target function. This may\nbe useful when polling a resource for externally generated content.\n\nHere\'s an example which uses a fibonacci sequence backoff when the\nreturn value of the target function is the empty list:\n\n.. code-block:: python\n\n @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)\n def poll_for_messages(queue):\n return queue.get()\n\nExtra keyword arguments are passed when initializing the\nwait generator, so the ``max_value`` param above is passed as a keyword\narg when initializing the fibo generator.\n\nWhen not specified, the predicate param defaults to the falsey test,\nso the above can more concisely be written:\n\n.. code-block:: python\n\n @backoff.on_predicate(backoff.fibo, max_value=13)\n def poll_for_message(queue)\n return queue.get()\n\nMore simply, a function which continues polling every second until it\ngets a non-falsey result could be defined like like this:\n\n.. 
code-block:: python\n\n @backoff.on_predicate(backoff.constant, interval=1)\n def poll_for_message(queue)\n return queue.get()\n\nJitter\n------\n\nA jitter algorithm can be supplied with the ``jitter`` keyword arg to\neither of the backoff decorators. This argument should be a function\naccepting the original unadulterated backoff value and returning it\'s\njittered counterpart.\n\nAs of version 1.2, the default jitter function ``backoff.full_jitter``\nimplements the \'Full Jitter\' algorithm as defined in the AWS\nArchitecture Blog\'s `Exponential Backoff And Jitter\n`_ post.\nNote that with this algorithm, the time yielded by the wait generator\nis actually the *maximum* amount of time to wait.\n\nPrevious versions of backoff defaulted to adding some random number of\nmilliseconds (up to 1s) to the raw sleep value. If desired, this\nbehavior is now available as ``backoff.random_jitter``.\n\nUsing multiple decorators\n-------------------------\n\nThe backoff decorators may also be combined to specify different\nbackoff behavior for different cases:\n\n.. code-block:: python\n\n @backoff.on_predicate(backoff.fibo, max_value=13)\n @backoff.on_exception(backoff.expo,\n requests.exceptions.HTTPError,\n max_time=60)\n @backoff.on_exception(backoff.expo,\n requests.exceptions.Timeout,\n max_time=300)\n def poll_for_message(queue):\n return queue.get()\n\nRuntime Configuration\n---------------------\n\nThe decorator functions ``on_exception`` and ``on_predicate`` are\ngenerally evaluated at import time. This is fine when the keyword args\nare passed as constant values, but suppose we want to consult a\ndictionary with configuration options that only become available at\nruntime. The relevant values are not available at import time. Instead,\ndecorator functions can be passed callables which are evaluated at\nruntime to obtain the value:\n\n.. 
code-block:: python\n\n def lookup_max_time():\n # pretend we have a global reference to \'app\' here\n # and that it has a dictionary-like \'config\' property\n return app.config["BACKOFF_MAX_TIME"]\n\n @backoff.on_exception(backoff.expo,\n ValueError,\n max_time=lookup_max_time)\n\nEvent handlers\n--------------\n\nBoth backoff decorators optionally accept event handler functions\nusing the keyword arguments ``on_success``, ``on_backoff``, and ``on_giveup``.\nThis may be useful in reporting statistics or performing other custom\nlogging.\n\nHandlers must be callables with a unary signature accepting a dict\nargument. This dict contains the details of the invocation. Valid keys\ninclude:\n\n* *target*: reference to the function or method being invoked\n* *args*: positional arguments to func\n* *kwargs*: keyword arguments to func\n* *tries*: number of invocation tries so far\n* *elapsed*: elapsed time in seconds so far\n* *wait*: seconds to wait (``on_backoff`` handler only)\n* *value*: value triggering backoff (``on_predicate`` decorator only)\n\nA handler which prints the details of the backoff event could be\nimplemented like so:\n\n.. code-block:: python\n\n def backoff_hdlr(details):\n print ("Backing off {wait:0.1f} seconds after {tries} tries "\n "calling function {target} with args {args} and kwargs "\n "{kwargs}".format(**details))\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n on_backoff=backoff_hdlr)\n def get_url(url):\n return requests.get(url)\n\n**Multiple handlers per event type**\n\nIn all cases, iterables of handler functions are also accepted, which\nare called in turn. For example, you might provide a simple list of\nhandler functions as the value of the ``on_backoff`` keyword arg:\n\n.. 
code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n on_backoff=[backoff_hdlr1, backoff_hdlr2])\n def get_url(url):\n return requests.get(url)\n\n**Getting exception info**\n\nIn the case of the ``on_exception`` decorator, all ``on_backoff`` and\n``on_giveup`` handlers are called from within the except block for the\nexception being handled. Therefore exception info is available to the\nhandler functions via the python standard library, specifically\n``sys.exc_info()`` or the ``traceback`` module.\n\nAsynchronous code\n-----------------\n\nBackoff supports asynchronous execution in Python 3.5 and above.\n\nTo use backoff in asynchronous code based on\n`asyncio `_\nyou simply need to apply ``backoff.on_exception`` or ``backoff.on_predicate``\nto coroutines.\nYou can also use coroutines for the ``on_success``, ``on_backoff``, and\n``on_giveup`` event handlers, with the interface otherwise being identical.\n\nThe following examples use `aiohttp `_\nasynchronous HTTP client/server library.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)\n async def get_url(url):\n async with aiohttp.ClientSession(raise_for_status=True) as session:\n async with session.get(url) as response:\n return await response.text()\n\nLogging configuration\n---------------------\n\nBy default, backoff and retry attempts are logged to the \'backoff\'\nlogger. By default, this logger is configured with a NullHandler, so\nthere will be nothing output unless you configure a handler.\nProgrammatically, this might be accomplished with something as simple\nas:\n\n.. code-block:: python\n\n logging.getLogger(\'backoff\').addHandler(logging.StreamHandler())\n\nThe default logging level is INFO, which corresponds to logging\nanytime a retry event occurs. If you would instead like to log\nonly when a giveup event occurs, set the logger level to ERROR.\n\n.. 
code-block:: python\n\n logging.getLogger(\'backoff\').setLevel(logging.ERROR)\n\nIt is also possible to specify an alternate logger with the ``logger``\nkeyword argument. If a string value is specified the logger will be\nlooked up by name.\n\n.. code-block:: python\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n\t\t\t logger=\'my_logger\')\n # ...\n\nIt is also supported to specify a Logger (or LoggerAdapter) object\ndirectly.\n\n.. code-block:: python\n\n my_logger = logging.getLogger(\'my_logger\')\n my_handler = logging.StreamHandler()\n my_logger.addHandler(my_handler)\n my_logger.setLevel(logging.ERROR)\n\n @backoff.on_exception(backoff.expo,\n requests.exceptions.RequestException,\n\t\t\t logger=my_logger)\n # ...\n\nDefault logging can be disabled all together by specifying\n``logger=None``. In this case, if desired alternative logging behavior\ncould be defined by using custom event handlers.\n', 'author': 'Bob Green', 'author_email': 'rgreen@aquent.com', 'maintainer': None, 'maintainer_email': None, 'url': 'https://github.com/litl/backoff', 'packages': packages, 'package_data': package_data, 'python_requires': '>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*', } setup(**setup_kwargs) ././@PaxHeader0000000000000000000000000000003300000000000011451 xustar000000000000000027 mtime=1626270911.099427 backoff-1.11.1/PKG-INFO0000644000000000000000000003121100000000000012410 0ustar0000000000000000Metadata-Version: 2.1 Name: backoff Version: 1.11.1 Summary: Function decoration for backoff and retry Home-page: https://github.com/litl/backoff License: MIT Keywords: retry,backoff,decorators Author: Bob Green Author-email: rgreen@aquent.com Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.* Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: License :: OSI Approved :: MIT License Classifier: Natural Language :: English Classifier: Operating System :: OS 
Independent Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: 3.9 Classifier: Topic :: Internet :: WWW/HTTP Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: Utilities Project-URL: Repository, https://github.com/litl/backoff Description-Content-Type: text/x-rst backoff ======= .. image:: https://travis-ci.org/litl/backoff.svg?branch=master :target: https://travis-ci.org/litl/backoff?branch=master .. image:: https://coveralls.io/repos/litl/backoff/badge.svg?branch=master :target: https://coveralls.io/r/litl/backoff?branch=master .. image:: https://img.shields.io/pypi/v/backoff.svg :target: https://pypi.python.org/pypi/backoff **Function decoration for backoff and retry** This module provides function decorators which can be used to wrap a function such that it will be retried until some condition is met. It is meant to be of use when accessing unreliable resources with the potential for intermittent failures i.e. network resources and external APIs. Somewhat more generally, it may also be of use for dynamically polling resources for externally generated content. Decorators support both regular functions for synchronous code and `asyncio `_'s coroutines for asynchronous code. Examples ======== Since Kenneth Reitz's `requests `_ module has become a defacto standard for synchronous HTTP clients in Python, networking examples below are written using it, but it is in no way required by the backoff module. @backoff.on_exception --------------------- The ``on_exception`` decorator is used to retry when a specified exception is raised. 
Here's an example using exponential backoff when any
``requests`` exception is raised:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException)
    def get_url(url):
        return requests.get(url)

The decorator will also accept a tuple of exceptions for cases where
the same backoff behavior is desired for more than one exception type:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          (requests.exceptions.Timeout,
                           requests.exceptions.ConnectionError))
    def get_url(url):
        return requests.get(url)

**Give Up Conditions**

Optional keyword arguments can specify conditions under which to give
up.

The keyword argument ``max_time`` specifies the maximum amount
of total time in seconds that can elapse before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=60)
    def get_url(url):
        return requests.get(url)

Keyword argument ``max_tries`` specifies the maximum number of calls
to make to the target function before giving up.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_tries=8,
                          jitter=None)
    def get_url(url):
        return requests.get(url)

In some cases the raised exception instance itself may need to be
inspected in order to determine if it is a retryable condition. The
``giveup`` keyword arg can be used to specify a function which accepts
the exception and returns a truthy value if the exception should not
be retried:

.. code-block:: python

    def fatal_code(e):
        return 400 <= e.response.status_code < 500

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          max_time=300,
                          giveup=fatal_code)
    def get_url(url):
        return requests.get(url)

When a give-up event occurs, the exception in question is reraised,
so code calling an ``on_exception``-decorated function may still
need to do its own exception handling.
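Taken together, the give-up conditions amount to one decision made inside the retry loop. The following is a minimal, stdlib-only sketch of that logic; the function name ``retry_with_giveup`` and the plain doubling interval are illustrative stand-ins, not part of the backoff API (the real loop drives the wait from a generator plus jitter):

```python
import time

def retry_with_giveup(func, max_tries=8, giveup=lambda e: False,
                      interval=0.001):
    """Sketch of an on_exception-style retry loop with give-up conditions."""
    tries = 0
    while True:
        tries += 1
        try:
            return func()
        except Exception as e:
            # give up: reraise so the caller can handle the exception itself
            if giveup(e) or tries == max_tries:
                raise
            time.sleep(interval)  # backoff itself pulls the wait from a
            interval *= 2         # wait generator and applies jitter
```

A function that fails twice and then succeeds is retried transparently, while a ``giveup`` predicate that returns a truthy value makes the very first exception propagate.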
@backoff.on_predicate
---------------------

The ``on_predicate`` decorator is used to retry when a particular
condition is true of the return value of the target function. This may
be useful when polling a resource for externally generated content.

Here's an example which uses a fibonacci sequence backoff when the
return value of the target function is the empty list:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, lambda x: x == [], max_value=13)
    def poll_for_messages(queue):
        return queue.get()

Extra keyword arguments are passed when initializing the
wait generator, so the ``max_value`` param above is passed as a keyword
arg when initializing the fibo generator.

When not specified, the predicate param defaults to the falsey test,
so the above can be written more concisely as:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    def poll_for_message(queue):
        return queue.get()

More simply, a function which continues polling every second until it
gets a non-falsey result could be defined like this:

.. code-block:: python

    @backoff.on_predicate(backoff.constant, interval=1)
    def poll_for_message(queue):
        return queue.get()

Jitter
------

A jitter algorithm can be supplied with the ``jitter`` keyword arg to
either of the backoff decorators. This argument should be a function
accepting the original unadulterated backoff value and returning its
jittered counterpart.

As of version 1.2, the default jitter function ``backoff.full_jitter``
implements the 'Full Jitter' algorithm as defined in the AWS
Architecture Blog's *Exponential Backoff And Jitter* post.
Note that with this algorithm, the time yielded by the wait generator
is actually the *maximum* amount of time to wait.

Previous versions of backoff defaulted to adding some random number of
milliseconds (up to 1s) to the raw sleep value. If desired, this
behavior is now available as ``backoff.random_jitter``.
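The two jitter behaviors described above are each a one-line transformation of the raw wait value. A sketch consistent with that description (backoff ships these as ``backoff.full_jitter`` and ``backoff.random_jitter``; the bodies below are a plausible reconstruction, not quoted source):

```python
import random

def full_jitter(value):
    # choose the actual wait uniformly from [0, value]; the value
    # yielded by the wait generator is thus the *maximum* wait
    return random.uniform(0, value)

def random_jitter(value):
    # pre-1.2 default: add a random amount of up to one second
    return value + random.random()
```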
Using multiple decorators
-------------------------

The backoff decorators may also be combined to specify different
backoff behavior for different cases:

.. code-block:: python

    @backoff.on_predicate(backoff.fibo, max_value=13)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.HTTPError,
                          max_time=60)
    @backoff.on_exception(backoff.expo,
                          requests.exceptions.Timeout,
                          max_time=300)
    def poll_for_message(queue):
        return queue.get()

Runtime Configuration
---------------------

The decorator functions ``on_exception`` and ``on_predicate`` are
generally evaluated at import time. This is fine when the keyword args
are passed as constant values, but suppose we want to consult a
dictionary with configuration options that only become available at
runtime. The relevant values are not available at import time. Instead,
decorator functions can be passed callables which are evaluated at
runtime to obtain the value:

.. code-block:: python

    def lookup_max_time():
        # pretend we have a global reference to 'app' here
        # and that it has a dictionary-like 'config' property
        return app.config["BACKOFF_MAX_TIME"]

    @backoff.on_exception(backoff.expo,
                          ValueError,
                          max_time=lookup_max_time)

Event handlers
--------------

Both backoff decorators optionally accept event handler functions
using the keyword arguments ``on_success``, ``on_backoff``, and ``on_giveup``.
This may be useful in reporting statistics or performing other custom
logging.

Handlers must be callables with a unary signature accepting a dict
argument. This dict contains the details of the invocation.
Valid keys include:

* *target*: reference to the function or method being invoked
* *args*: positional arguments to func
* *kwargs*: keyword arguments to func
* *tries*: number of invocation tries so far
* *elapsed*: elapsed time in seconds so far
* *wait*: seconds to wait (``on_backoff`` handler only)
* *value*: value triggering backoff (``on_predicate`` decorator only)

A handler which prints the details of the backoff event could be
implemented like so:

.. code-block:: python

    def backoff_hdlr(details):
        print("Backing off {wait:0.1f} seconds after {tries} tries "
              "calling function {target} with args {args} and kwargs "
              "{kwargs}".format(**details))

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=backoff_hdlr)
    def get_url(url):
        return requests.get(url)

**Multiple handlers per event type**

In all cases, iterables of handler functions are also accepted and are
called in turn. For example, you might provide a simple list of
handler functions as the value of the ``on_backoff`` keyword arg:

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          on_backoff=[backoff_hdlr1, backoff_hdlr2])
    def get_url(url):
        return requests.get(url)

**Getting exception info**

In the case of the ``on_exception`` decorator, all ``on_backoff`` and
``on_giveup`` handlers are called from within the except block for the
exception being handled. Therefore exception info is available to the
handler functions via the Python standard library, specifically
``sys.exc_info()`` or the ``traceback`` module.

Asynchronous code
-----------------

Backoff supports asynchronous execution in Python 3.5 and above.

To use backoff in asynchronous code based on ``asyncio``,
you simply need to apply ``backoff.on_exception`` or ``backoff.on_predicate``
to coroutines.
You can also use coroutines for the ``on_success``, ``on_backoff``, and
``on_giveup`` event handlers, with the interface otherwise being identical.
The following example uses the ``aiohttp`` asynchronous HTTP
client/server library.

.. code-block:: python

    @backoff.on_exception(backoff.expo, aiohttp.ClientError, max_time=60)
    async def get_url(url):
        async with aiohttp.ClientSession(raise_for_status=True) as session:
            async with session.get(url) as response:
                return await response.text()

Logging configuration
---------------------

By default, backoff and retry attempts are logged to the 'backoff'
logger, which is configured with a NullHandler, so nothing will be
output unless you configure a handler.
Programmatically, this might be accomplished with something as simple
as:

.. code-block:: python

    logging.getLogger('backoff').addHandler(logging.StreamHandler())

The default logging level is INFO, which corresponds to logging
any time a retry event occurs. If you would instead like to log
only when a giveup event occurs, set the logger level to ERROR.

.. code-block:: python

    logging.getLogger('backoff').setLevel(logging.ERROR)

It is also possible to specify an alternate logger with the ``logger``
keyword argument. If a string value is specified, the logger will be
looked up by name.

.. code-block:: python

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger='my_logger')
    # ...

It is also supported to specify a Logger (or LoggerAdapter) object
directly.

.. code-block:: python

    my_logger = logging.getLogger('my_logger')
    my_handler = logging.StreamHandler()
    my_logger.addHandler(my_handler)
    my_logger.setLevel(logging.ERROR)

    @backoff.on_exception(backoff.expo,
                          requests.exceptions.RequestException,
                          logger=my_logger)
    # ...

Default logging can be disabled altogether by specifying
``logger=None``. In this case, alternative logging behavior can, if
desired, be defined using custom event handlers.
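As a sketch of that last point: with ``logger=None``, an ``on_backoff`` event handler can route retry reporting to a logger of your choosing. The logger name below is illustrative, and the commented-out decorator assumes ``backoff`` and ``requests`` are importable:

```python
import logging

my_logger = logging.getLogger('my_app.retries')

def log_backoff(details):
    # custom replacement for the output of the default 'backoff' logger
    my_logger.warning(
        "Backing off %.1fs after %d tries calling %r",
        details['wait'], details['tries'], details['target'])

# Hypothetical usage with default logging disabled:
# @backoff.on_exception(backoff.expo,
#                       requests.exceptions.RequestException,
#                       logger=None,
#                       on_backoff=log_backoff)
# def get_url(url):
#     return requests.get(url)
```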