pax_global_header: comment=e6cd4fdf3b159d7f118f154e28e884069da89d7e

==== redis-py-4.3.4/.coveragerc ====

[run]
source = redis

==== redis-py-4.3.4/.dockerignore ====

**/__pycache__
**/*.pyc
.tox
.coverage
.coverage.*

==== redis-py-4.3.4/.github/ISSUE_TEMPLATE.md ====

Thanks for wanting to report an issue you've found in redis-py. Please delete
this text and fill in the template below.
It is of course not always possible to reduce your code to a small test case,
but it's highly appreciated to have as much data as possible. Thank you!

**Version**: What redis-py and what redis version is the issue happening on?

**Platform**: What platform / version? (For example Python 3.5.1 on Windows 7 / Ubuntu 15.10 / Azure)

**Description**: Description of your issue, stack traces from errors and code that reproduces the issue

==== redis-py-4.3.4/.github/PULL_REQUEST_TEMPLATE.md ====

### Pull Request check-list

_Please make sure to review and check all of these items:_

- [ ] Does `$ tox` pass with this change (including linting)?
- [ ] Do the CI tests pass with this change (enable it first in your forked repo and wait for the github action build to finish)?
- [ ] Is the new or changed code fully tested?
- [ ] Is a documentation update included (if this change modifies existing APIs, or introduces new ones)?
- [ ] Is there an example added to the examples folder (if applicable)?
- [ ] Was the change added to CHANGES file?
_NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open._

### Description of change

_Please provide a description of the change here._

==== redis-py-4.3.4/.github/release-drafter-config.yml ====

name-template: 'Version $NEXT_PATCH_VERSION'
tag-template: 'v$NEXT_PATCH_VERSION'
autolabeler:
  - label: 'maintenance'
    files:
      - '*.md'
      - '.github/*'
  - label: 'bug'
    branch:
      - '/bug-.+'
  - label: 'maintenance'
    branch:
      - '/maintenance-.+'
  - label: 'feature'
    branch:
      - '/feature-.+'
categories:
  - title: '🔥 Breaking Changes'
    labels:
      - 'breakingchange'
  - title: '🚀 New Features'
    labels:
      - 'feature'
      - 'enhancement'
  - title: '🐛 Bug Fixes'
    labels:
      - 'fix'
      - 'bugfix'
      - 'bug'
  - title: '🧰 Maintenance'
    label: 'maintenance'
change-template: '- $TITLE (#$NUMBER)'
exclude-labels:
  - 'skip-changelog'
template: |
  ## Changes

  $CHANGES

  ## Contributors
  We'd like to thank all the contributors who worked on this release!

  $CONTRIBUTORS

==== redis-py-4.3.4/.github/workflows/codeql-analysis.yml ====

# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"

on:
  push:
    branches: [ master ]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: [ master ]

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python', 'ruby' ]
        # Learn more about CodeQL language support at https://git.io/codeql-language-support

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v1
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main

      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v1

      # ℹ️ Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl

      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      #    and modify them (or add more) to build your code if your project
      #    uses a compiled language

      #- run: |
      #    make bootstrap
      #    make release

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v1

==== redis-py-4.3.4/.github/workflows/install_and_test.sh ====

#!/bin/bash

set -e

SUFFIX=$1
if [ -z ${SUFFIX} ]; then
    echo "Supply valid python package extension such as whl or tar.gz. Exiting."
    exit 3
fi

script=`pwd`/${BASH_SOURCE[0]}
HERE=`dirname ${script}`
ROOT=`realpath ${HERE}/../..`

cd ${ROOT}
DESTENV=${ROOT}/.venvforinstall
if [ -d ${DESTENV} ]; then
    rm -rf ${DESTENV}
fi
python -m venv ${DESTENV}
source ${DESTENV}/bin/activate
pip install --upgrade --quiet pip
pip install --quiet -r dev_requirements.txt
invoke devenv
invoke package

# find packages
PKG=`ls ${ROOT}/dist/*.${SUFFIX}`
ls -l ${PKG}

TESTDIR=${ROOT}/STAGETESTS
if [ -d ${TESTDIR} ]; then
    rm -rf ${TESTDIR}
fi
mkdir ${TESTDIR}
cp -R ${ROOT}/tests ${TESTDIR}/tests
cd ${TESTDIR}

# install, run tests
pip install ${PKG}
# Redis tests
pytest -m 'not onlycluster'
# RedisCluster tests
CLUSTER_URL="redis://localhost:16379/0"
pytest -m 'not onlynoncluster and not redismod and not ssl' --redis-url=${CLUSTER_URL}

==== redis-py-4.3.4/.github/workflows/integration.yaml ====

name: CI

on:
  push:
    paths-ignore:
      - 'docs/**'
      - '**/*.rst'
      - '**/*.md'
    branches:
      - master
      - '[0-9].[0-9]'
  pull_request:
    branches:
      - master
      - '[0-9].[0-9]'

jobs:
  lint:
    name: Code linters
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: install python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
          cache: 'pip'
      - name: run code linters
        run: |
          pip install -r dev_requirements.txt
          invoke linters

  run-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 30
    strategy:
      max-parallel: 15
      matrix:
        python-version: ['3.6', '3.7', '3.8', '3.9', '3.10', 'pypy-3.7']
        test-type: ['standalone', 'cluster']
        connection-type: ['hiredis', 'plain']
    env:
      ACTIONS_ALLOW_UNSECURE_COMMANDS: true
    name: Python ${{ matrix.python-version }} ${{matrix.test-type}}-${{matrix.connection-type}} tests
    steps:
      - uses: actions/checkout@v2
      - name: install python
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
      - name: run tests
        run: |
          pip install -U setuptools wheel
          pip install -r dev_requirements.txt
          tox -e ${{matrix.test-type}}-${{matrix.connection-type}}
      - name: Upload codecov coverage
        uses: codecov/codecov-action@v2
        with:
          fail_ci_if_error: false
          token: ${{ secrets.CODECOV_TOKEN }}

  build_and_test_package:
    name: Validate building and installing the package
    runs-on: ubuntu-latest
    strategy:
      matrix:
        extension: ['tar.gz', 'whl']
    steps:
      - uses: actions/checkout@v2
      - name: install python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
      - name: Run installed unit tests
        run: |
          bash .github/workflows/install_and_test.sh ${{ matrix.extension }}

  install_package_from_commit:
    name: Install package from commit hash
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ['3.6', '3.7', '3.8', '3.9', '3.10', 'pypy-3.7']
    steps:
      - uses: actions/checkout@v2
      - name: install python ${{ matrix.python-version }}
        uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
      - name: install from pip
        run: |
          pip install --quiet git+${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git@${GITHUB_SHA}

==== redis-py-4.3.4/.github/workflows/pypi-publish.yaml ====

name: Publish tag to Pypi

on:
  release:
    types: [published]

jobs:
  build_and_package:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: install python
        uses: actions/setup-python@v3
        with:
          python-version: 3.9
      - name: Install dev tools
        run: |
          pip install -r dev_requirements.txt
          pip install twine wheel
      - name: Build package
        run: |
          python setup.py build
          python setup.py sdist bdist_wheel
      - name: Publish to Pypi
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          user: __token__
          password: ${{ secrets.PYPI_API_TOKEN }}

==== redis-py-4.3.4/.github/workflows/release-drafter.yml ====

name: Release Drafter

on:
  push:
    # branches to consider in the event; optional, defaults to all
    branches:
      - master

jobs:
  update_release_draft:
    runs-on: ubuntu-latest
    steps:
      # Drafts your next Release notes as Pull Requests are merged into "master"
      - uses: release-drafter/release-drafter@v5
        with:
          # (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml
          config-name: release-drafter-config.yml
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

==== redis-py-4.3.4/.github/workflows/stale-issues.yml ====

name: "Close stale issues"
on:
  schedule:
    - cron: "0 0 * * *"

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          stale-issue-message: 'This issue is marked stale. It will be closed in 30 days if it is not updated.'
          stale-pr-message: 'This pull request is marked stale. It will be closed in 30 days if it is not updated.'
          days-before-stale: 365
          days-before-close: 30
          stale-issue-label: "Stale"
          stale-pr-label: "Stale"
          operations-per-run: 10
          remove-stale-when-updated: true

==== redis-py-4.3.4/.gitignore ====

*.pyc
redis.egg-info
build/
dist/
dump.rdb
/.tox
_build
vagrant/.vagrant
.python-version
.cache
.eggs
.idea
.coverage
env
venv
coverage.xml
.venv*
*.xml
.coverage*
docker/stunnel/keys

==== redis-py-4.3.4/.mypy.ini ====

[mypy]
#, docs/examples, tests
files = redis
check_untyped_defs = True
follow_imports_for_stubs = True
#disallow_any_decorated = True
disallow_subclassing_any = True
#disallow_untyped_calls = True
disallow_untyped_decorators = True
#disallow_untyped_defs = True
implicit_reexport = False
no_implicit_optional = True
show_error_codes = True
strict_equality = True
warn_incomplete_stub = True
warn_redundant_casts = True
warn_unreachable = True
warn_unused_ignores = True
disallow_any_unimported = True
#warn_return_any = True

[mypy-redis.asyncio.lock]
# TODO: Remove once locks have been rewritten
ignore_errors = True
==== redis-py-4.3.4/.readthedocs.yml ====

version: 2

python:
  install:
    - requirements: ./docs/requirements.txt

build:
  os: ubuntu-20.04
  tools:
    python: "3.9"

sphinx:
  configuration: docs/conf.py

==== redis-py-4.3.4/CHANGES ====

    * Compare commands case-insensitively in the asyncio command parser
    * Allow negative `retries` for `Retry` class to retry forever
    * Add `items` parameter to `hset` signature
    * Create codeql-analysis.yml (#1988). Thanks @chayim
    * Add limited support for Lua scripting with RedisCluster
    * Implement `.lock()` method on RedisCluster
    * Fix cursor returned by SCAN for RedisCluster & change default target to PRIMARIES
    * Fix scan_iter for RedisCluster
    * Remove verbose logging when initializing ClusterPubSub, ClusterPipeline or RedisCluster
    * Fix broken connection writer lock-up for asyncio (#2065)
    * Fix auth bug when provided with no username (#2086)
    * Fix missing ClusterPipeline._lock (#2189)
    * Added dynamic_startup_nodes configuration to RedisCluster
    * Fix reusing the old nodes' connections when cluster topology refresh is being done
    * Fix RedisCluster to immediately raise AuthenticationError without a retry

* 4.1.3 (Feb 8, 2022)
    * Fix flushdb and flushall (#1926)
    * Add redis5 and redis4 dockers (#1871)
    * Change json.clear test multi to be up to date with redisjson (#1922)
    * Fixing volume for unstable_cluster docker (#1914)
    * Update changes file with changes since 4.0.0-beta2 (#1915)

* 4.1.2 (Jan 27, 2022)
    * Invalid OCSP certificates should raise ConnectionError on failed validation (#1907)
    * Added retry mechanism on socket timeouts when connecting to the server (#1895)
    * LMOVE, BLMOVE return incorrect responses (#1906)
    * Fixing AttributeError in UnixDomainSocketConnection (#1903)
    * Fixing TypeError in GraphCommands.explain (#1901)
    * For tests, increasing wait time for the cluster (#1908)
    * Increased pubsub's
      wait_for_messages timeout to prevent flaky tests (#1893)
    * README code snippets formatted to highlight properly (#1888)
    * Fix link in the main page (#1897)
    * Documentation fixes: JSON Example, SSL Connection Examples, RTD version (#1887)
    * Direct link to readthedocs (#1885)

* 4.1.1 (Jan 17, 2022)
    * Add retries to connections in Sentinel Pools (#1879)
    * OCSP Stapling Support (#1873)
    * Define incr/decr as aliases of incrby/decrby (#1874)
    * FT.CREATE - support MAXTEXTFIELDS, TEMPORARY, NOHL, NOFREQS, SKIPINITIALSCAN (#1847)
    * Timeseries docs fix (#1877)
    * get_connection: catch OSError too (#1832)
    * Set keys var otherwise variable not created (#1853)
    * Clusters should optionally require full slot coverage (#1845)
    * Triple quote docstrings in client.py PEP 257 (#1876)
    * syncing requirements (#1870)
    * Typo and typing in GraphCommands documentation (#1855)
    * Allowing poetry and redis-py to install together (#1854)
    * setup.py: Add project_urls for PyPI (#1867)
    * Support test with redis unstable docker (#1850)
    * Connection examples (#1835)
    * Documentation cleanup (#1841)

* 4.1.0 (Dec 26, 2021)
    * OCSP stapling support (#1820)
    * Support for SELECT (#1825)
    * Support for specifying error types with retry (#1817)
    * Support for RESET command since Redis 6.2.0 (#1824)
    * Support CLIENT TRACKING (#1612)
    * Support WRITE in CLIENT PAUSE (#1549)
    * JSON set_file and set_path support (#1818)
    * Allow ssl_ca_path with rediss:// urls (#1814)
    * Support for password-encrypted SSL private keys (#1782)
    * Support SYNC and PSYNC (#1741)
    * Retry on error exception and timeout fixes (#1821)
    * Fixing read race condition during pubsub (#1737)
    * Fixing exception in listen (#1823)
    * Fixed MovedError, and stopped iterating through startup nodes when slots are fully covered (#1819)
    * Socket not closing after server disconnect (#1797)
    * Single sourcing the package version (#1791)
    * Ensure redis_connect_func is set on uds connection (#1794)
    * STRALGO - Skip for redis versions greater than 7.0.0 (#1831)
    * Documentation updates (#1822)
    * Add CI action to install package from repository commit hash (#1781) (#1790)
    * Fix link in lmove docstring (#1793)
    * Disabling JSON.DEBUG tests (#1787)
    * Migrated targeted nodes to kwargs in Cluster Mode (#1762)
    * Added support for MONITOR in clusters (#1756)
    * Adding ROLE Command (#1610)
    * Integrate RedisBloom support (#1683)
    * Adding RedisGraph support (#1556)
    * Allow overriding connection class via keyword arguments (#1752)
    * Aggregation LOAD * support for RediSearch (#1735)
    * Adding cluster, bloom, and graph docs (#1779)
    * Add packaging to setup_requires, and use >= to play nice to setup.py (fixes #1625) (#1780)
    * Fixing the license link in the readme (#1778)
    * Removing distutils from tests (#1773)
    * Fix cluster ACL tests (#1774)
    * Improved RedisCluster's reinitialize_steps and documentation (#1765)
    * Added black and isort (#1734)
    * Link Documents for all module commands (#1711)
    * Pyupgrade + flynt + f-strings (#1759)
    * Remove unused aggregation subclasses in RediSearch (#1754)
    * Adding RedisCluster client to support Redis Cluster Mode (#1660)
    * Support RediSearch FT.PROFILE command (#1727)
    * Adding support for non-decodable commands (#1731)
    * COMMAND GETKEYS support (#1738)
    * RedisJSON 2.0.4 behaviour support (#1747)
    * Removing deprecating distutils (PEP 632) (#1730)
    * Updating PR template (#1745)
    * Removing duplication of Script class (#1751)
    * Splitting documentation for read the docs (#1743)
    * Improve code coverage for aggregation tests (#1713)
    * Fixing COMMAND GETKEYS tests (#1750)
    * GitHub release improvements (#1684)

* 4.0.2 (Nov 22, 2021)
    * Restoring Sentinel commands to redis client (#1723)
    * Better removal of hiredis warning (#1726)
    * Adding links to redis documents in function calls (#1719)

* 4.0.1 (Nov 17, 2021)
    * Removing command on initial connections (#1722)
    * Removing hiredis warning when not installed (#1721)

* 4.0.0 (Nov 15, 2021)
    * FT.EXPLAINCLI intentionally raising NotImplementedError
    * Restoring ZRANGE desc for Redis < 6.2.0 (#1697)
    * Response parsing occasionally fails to parse floats (#1692)
    * Re-enabling read-the-docs (#1707)
    * Call HSET after FT.CREATE to avoid keyspace scan (#1706)
    * Unit tests fixes for compatibility (#1703)
    * Improve documentation about Locks (#1701)
    * Fixes to allow --redis-url to pass through all tests (#1700)
    * Fix unit tests running against Redis 4.0.0 (#1699)
    * Search alias test fix (#1695)
    * Adding RediSearch/RedisJSON tests (#1691)
    * Updating codecov rules (#1689)
    * Tests to validate custom JSON decoders (#1681)
    * Added breaking icon to release drafter (#1702)
    * Removing dependency on six (#1676)
    * Re-enable pipeline support for JSON and TimeSeries (#1674)
    * Export Sentinel, and SSL like other classes (#1671)
    * Restore zrange functionality for older versions of Redis (#1670)
    * Fixed garbage collection deadlock (#1578)
    * Tests to validate built python packages (#1678)
    * Sleep for flaky search test (#1680)
    * Test function renames, to match standards (#1679)
    * Docstring improvements for Redis class (#1675)
    * Fix georadius tests (#1672)
    * Improvements to JSON coverage (#1666)
    * Add python_requires setuptools check for python > 3.6 (#1656)
    * SMISMEMBER support (#1667)
    * Exposing the module version in loaded_modules (#1648)
    * RedisTimeSeries support (#1652)
    * Support for json multipath ($) (#1663)
    * Added boolean parsing to PEXPIRE and PEXPIREAT (#1665)
    * Adding vulture for static analysis (#1655)
    * Starting to clean the docs (#1657)
    * Update README.md (#1654)
    * Adding description format for package (#1651)
    * Publish to pypi as releases are generated with the release drafter (#1647)
    * Restore actions to prs (#1653)
    * Fixing the package to include commands (#1649)
    * Re-enabling codecov as part of CI process (#1646)
    * Adding support for redisearch (#1640) Thanks @chayim
    * redisjson support (#1636) Thanks @chayim
    * Sentinel: Add SentinelManagedSSLConnection (#1419) Thanks @AbdealiJK
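Several of the 4.0.0 entries above concern how raw replies are post-processed, for example the boolean parsing added to PEXPIRE and PEXPIREAT (#1665): Redis answers these commands with the integer 1 or 0, and a per-command callback turns that into a Python boolean. The sketch below illustrates only the idea; the table and function names are hypothetical, not redis-py internals.

```python
# Sketch of per-command boolean reply parsing (illustrative names only).

def parse_boolean(response):
    """Map a Redis integer reply (1/0) to True/False."""
    return bool(response)

# Hypothetical callback table keyed by command name, mirroring how a
# client might post-process raw replies command by command.
RESPONSE_CALLBACKS = {
    "PEXPIRE": parse_boolean,
    "PEXPIREAT": parse_boolean,
}

def handle_reply(command, raw_reply):
    """Apply the command's callback if one is registered."""
    callback = RESPONSE_CALLBACKS.get(command)
    return callback(raw_reply) if callback else raw_reply

print(handle_reply("PEXPIRE", 1))        # True
print(handle_reply("PEXPIREAT", 0))      # False
print(handle_reply("GET", b"value"))     # b'value' (no callback, passed through)
```

The same table-of-callbacks shape also explains the "response callbacks are now case insensitive" entry further down: normalizing the lookup key is all it takes.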
    * Enable floating parameters in SET (ex and px) (#1635) Thanks @AvitalFineRedis
    * Add warning when hiredis not installed. Recommend installation. (#1621) Thanks @adiamzn
    * Raising NotImplementedError for SCRIPT DEBUG and DEBUG SEGFAULT (#1624) Thanks @chayim
    * CLIENT REDIR command support (#1623) Thanks @chayim
    * REPLICAOF command implementation (#1622) Thanks @chayim
    * Add support to NX XX and CH to GEOADD (#1605) Thanks @AvitalFineRedis
    * Add support to ZRANGE and ZRANGESTORE parameters (#1603) Thanks @AvitalFineRedis
    * Pre 6.2 redis should default to None for script flush (#1641) Thanks @chayim
    * Add FULL option to XINFO SUMMARY (#1638) Thanks @agusdmb
    * Geosearch test should use any=True (#1594) Thanks @Andrew-Chen-Wang
    * Removing packaging dependency (#1626) Thanks @chayim
    * Fix client_kill_filter docs for skimpy (#1596) Thanks @Andrew-Chen-Wang
    * Normalize minid and maxlen docs (#1593) Thanks @Andrew-Chen-Wang
    * Update docs for multiple usernames for ACL DELUSER (#1595) Thanks @Andrew-Chen-Wang
    * Fix grammar of get param in set command (#1588) Thanks @Andrew-Chen-Wang
    * Fix docs for client_kill_filter (#1584) Thanks @Andrew-Chen-Wang
    * Convert README & CONTRIBUTING from rst to md (#1633) Thanks @davidylee
    * Test BYLEX param in zrangestore (#1634) Thanks @AvitalFineRedis
    * Tox integrations with invoke and docker (#1632) Thanks @chayim
    * Adding the release drafter to help simplify release notes (#1618). Thanks @chayim
    * BACKWARDS INCOMPATIBLE: Removed support for end of life Python 2.7. #1318
    * BACKWARDS INCOMPATIBLE: All values within Redis URLs are unquoted via urllib.parse.unquote. Prior versions of redis-py supported this by specifying the ``decode_components`` flag to the ``from_url`` functions. This is now done by default and cannot be disabled. #589
    * POTENTIALLY INCOMPATIBLE: Redis commands were moved into a mixin (see commands.py). Anyone importing ``redis.client`` to access commands directly should import ``redis.commands``. #1534, #1550
    * Removed technical debt on REDIS_6_VERSION placeholder. Thanks @chayim #1582.
    * Various docs fixes. Thanks @Andrew-Chen-Wang #1585, #1586.
    * Support for LOLWUT command, available since Redis 5.0.0. Thanks @brainix #1568.
    * Added support for CLIENT REPLY, available in Redis 3.2.0. Thanks @chayim #1581.
    * Support for Auto-reconnect PubSub on get_message. Thanks @luhn #1574.
    * Fix RST syntax error in README. Thanks @JanCBrammer #1451.
    * IDLETIME and FREQ support for RESTORE. Thanks @chayim #1580.
    * Supporting args with MODULE LOAD. Thanks @chayim #1579.
    * Updating RedisLabs with Redis. Thanks @gkorland #1575.
    * Added support for ASYNC to SCRIPT FLUSH available in Redis 6.2.0. Thanks @chayim. #1567
    * Added CLIENT LIST fix to support multiple client ids available in Redis 2.8.12. Thanks @chayim #1563.
    * Added DISCARD support for pipelines available in Redis 2.0.0. Thanks @chayim #1565.
    * Added ACL DELUSER support for deleting lists of users available in Redis 6.2.0. Thanks @chayim. #1562
    * Added CLIENT TRACKINFO support available in Redis 6.2.0. Thanks @chayim. #1560
    * Added GEOSEARCH and GEOSEARCHSTORE support available in Redis 6.2.0. Thanks @AvitalFineRedis. #1526
    * Added LPUSHX support for lists available in Redis 4.0.0. Thanks @chayim. #1559
    * Added support for QUIT available in Redis 1.0.0. Thanks @chayim. #1558
    * Added support for COMMAND COUNT available in Redis 2.8.13. Thanks @chayim. #1554.
    * Added CREATECONSUMER support for XGROUP available in Redis 6.2.0. Thanks @AvitalFineRedis. #1553
    * Including slowlog complexity in INFO if available. Thanks @ian28223 #1489.
    * Added support for STRALGO available in Redis 6.0.0. Thanks @AvitalFineRedis. #1528
    * Added support for ZMSCORE available in Redis 6.2.0. Thanks @2014BDuck and @jiekun.zhu. #1437
    * Support MINID and LIMIT on XADD available in Redis 6.2.0. Thanks @AvitalFineRedis. #1548
    * Added sentinel commands FLUSHCONFIG, CKQUORUM, FAILOVER, and RESET available in Redis 2.8.12. Thanks @otherpirate. #834
    * Migrated Version instead of StrictVersion for Python 3.10. Thanks @tirkarthi. #1552
    * Added retry mechanism with backoff. Thanks @nbraun-amazon. #1494
    * Migrated commands to a mixin. Thanks @chayim. #1534
    * Added support for ZUNION, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1522
    * Added support for CLIENT LIST with ID, available in Redis 6.2.0. Thanks @chayim. #1505
    * Added support for MINID and LIMIT with xtrim, available in Redis 6.2.0. Thanks @chayim. #1508
    * Implemented LMOVE and BLMOVE commands, available in Redis 6.2.0. Thanks @chayim. #1504
    * Added GET argument to SET command, available in Redis 6.2.0. Thanks @2014BDuck. #1412
    * Documentation fixes. Thanks @enjoy-binbin @jonher937. #1496 #1532
    * Added support for XAUTOCLAIM, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1529
    * Added IDLE support for XPENDING, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1523
    * Add a count parameter to lpop/rpop, available in Redis 6.2.0. Thanks @wavenator. #1487
    * Added a (pypy) trove classifier for Python 3.9. Thanks @D3X. #1535
    * Added ZINTER support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1520
    * Added ZDIFF and ZDIFFSTORE support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1518
    * Added ZRANGESTORE support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1521
    * Added LT and GT support for ZADD, available in Redis 6.2.0. Thanks @chayim. #1509
    * Added ZRANDMEMBER support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1519
    * Added GETDEL support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1514
    * Added CLIENT KILL laddr filter, available in Redis 6.2.0. Thanks @chayim. #1506
    * Added CLIENT UNPAUSE, available in Redis 6.2.0. Thanks @chayim. #1512
    * Added NOMKSTREAM support for XADD, available in Redis 6.2.0. Thanks @chayim. #1507
    * Added HRANDFIELD support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1513
    * Added CLIENT INFO support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1517
    * Added GETEX support, available in Redis 6.2.0. Thanks @AvitalFineRedis. #1515
    * Added support for COPY command, available in Redis 6.2.0. Thanks @malinaa96. #1492
    * Provide a development and testing environment via docker. Thanks @abrookins. #1365
    * Added support for the LPOS command available in Redis 6.0.6. Thanks @aparcar #1353/#1354
    * Added support for the ACL LOG command available in Redis 6. Thanks @2014BDuck. #1307
    * Added support for ABSTTL option of the RESTORE command available in Redis 5.0. Thanks @charettes. #1423

* 3.5.3 (June 1, 2020)
    * Restore try/except clauses to __del__ methods. These will be removed in 4.0 when more explicit resource management is enforced. #1339
    * Update the master_address when Sentinels promote a new master. #847
    * Update SentinelConnectionPool to not forcefully disconnect other in-use connections which can negatively affect threaded applications. #1345

* 3.5.2 (May 14, 2020)
    * Tune the locking in ConnectionPool.get_connection so that the lock is not held while waiting for the socket to establish and validate the TCP connection.

* 3.5.1 (May 9, 2020)
    * Fix for HSET argument validation to allow any non-None key. Thanks @AleksMat, #1337, #1341

* 3.5.0 (April 29, 2020)
    * Removed exception trapping from __del__ methods. redis-py objects that hold various resources implement __del__ cleanup methods to release those resources when the object goes out of scope. This provides a fallback for when these objects aren't explicitly closed by user code. Prior to this change any errors encountered in closing these resources would be hidden from the user. Thanks @jdufresne. #1281
    * Expanded support for connection strings specifying a username connecting to pre-v6 servers. #1274
    * Optimized Lock's blocking_timeout and sleep. If the lock cannot be acquired and the sleep value would cause the loop to sleep beyond blocking_timeout, fail immediately. Thanks @clslgrnc. #1263
    * Added support for passing Python memoryviews to Redis command args that expect strings or bytes. The memoryview instance is sent directly to the socket such that there are zero copies made of the underlying data during command packing. Thanks @Cody-G. #1265, #1285
    * HSET command now can accept multiple pairs. HMSET has been marked as deprecated now. Thanks to @laixintao #1271
    * Don't manually DISCARD when encountering an ExecAbortError. Thanks @nickgaya, #1300/#1301
    * Reset the watched state of pipelines after calling exec. This saves a roundtrip to the server by not having to call UNWATCH within Pipeline.reset(). Thanks @nickgaya, #1299/#1302
    * Added the KEEPTTL option for the SET command. Thanks @laixintao #1304/#1280
    * Added the MEMORY STATS command. #1268
    * Lock.extend() now has a new option, `replace_ttl`. When False (the default), Lock.extend() adds the `additional_time` to the lock's existing TTL. When replace_ttl=True, the lock's existing TTL is replaced with the value of `additional_time`.
    * Add testing and support for PyPy.

* 3.4.1
    * Move the username argument in the Redis and Connection classes to the end of the argument list. This helps those poor souls that specify all their connection options as non-keyword arguments. #1276
    * Prior to ACL support, redis-py ignored the username component of Connection URLs. With ACL support, usernames are no longer ignored and are used to authenticate against an ACL rule. Some cloud vendors with managed Redis instances (like Heroku) provide connection URLs with a username component pre-ACL that is not intended to be used. Sending that username to Redis servers < 6.0.0 results in an error. Attempt to detect this condition and retry the AUTH command with only the password such that authentication continues to work for these users. #1274
    * Removed the __eq__ hooks to Redis and ConnectionPool that were added in 3.4.0. This ended up being a bad idea as two separate connection pools could be considered equal yet manage a completely separate set of connections.

* 3.4.0
    * Allow empty pipelines to be executed if there are WATCHed keys. This is a convenient way to test if any of the watched keys changed without actually running any other commands. Thanks @brianmaissy. #1233, #1234
    * Removed support for end of life Python 3.4.
    * Added support for all ACL commands in Redis 6. Thanks @IAmATeaPot418 for helping.
    * Pipeline instances now always evaluate to True. Prior to this change, pipeline instances relied on __len__ for boolean evaluation which meant that pipelines with no commands on the stack would be considered False. #994
    * Client instances and Connection pools now support a 'client_name' argument. If supplied, all connections created will call CLIENT SETNAME as soon as the connection is opened. Thanks to @Habbie for supplying the basis of this change. #802
    * Added the 'ssl_check_hostname' argument to specify whether SSL connections should require the server hostname to match the hostname specified in the SSL cert. By default 'ssl_check_hostname' is False for backwards compatibility. #1196
    * Slightly optimized command packing. Thanks @Deneby67. #1255
    * Added support for the TYPE argument to SCAN. Thanks @netocp. #1220
    * Better thread and fork safety in ConnectionPool and BlockingConnectionPool. Added better locking to synchronize critical sections rather than relying on CPython-specific implementation details relating to atomic operations. Adjusted how the pools identify and deal with a fork. Added a ChildDeadlockedError exception that is raised by child processes in the very unlikely chance that a deadlock is encountered. Thanks @gmbnomis, @mdellweg, @yht804421715. #1270, #1138, #1178, #906, #1262
    * Added __eq__ hooks to the Redis and ConnectionPool classes. Thanks @brainix. #1240

* 3.3.11
    * Further fix for the SSLError -> TimeoutError mapping to work on obscure releases of Python 2.7.
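The SSLError -> TimeoutError mapping mentioned in the 3.3.x entries addresses a Python 2.7 quirk: a read timeout on a TLS socket surfaced as `ssl.SSLError` (with "timed out" in its message) rather than `socket.timeout`, so the client had to recognize it and re-raise its own TimeoutError. A minimal sketch of that idea, with an illustrative stand-in exception class rather than redis-py's actual implementation:

```python
# Sketch: translate SSL-flavoured timeout errors into a client-level
# TimeoutError so callers see one consistent exception type. The
# TimeoutError_ class is a stand-in for redis.exceptions.TimeoutError.
import ssl


class TimeoutError_(Exception):
    """Stand-in for the client library's TimeoutError."""


def translate_ssl_error(exc):
    """Return a TimeoutError_ for SSL timeouts; pass other errors through."""
    if isinstance(exc, ssl.SSLError) and "timed out" in str(exc):
        return TimeoutError_(str(exc))
    return exc


err = translate_ssl_error(ssl.SSLError("The read operation timed out"))
print(type(err).__name__)  # TimeoutError_
```

Checking the message text is fragile, which is presumably why several follow-up releases (3.3.8 through 3.3.11 below) refined the mapping for different Python 2.7 builds.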
* 3.3.10 * Fixed a potential error handling bug for the SSLError -> TimeoutError mapping introduced in 3.3.9. Thanks @zbristow. #1224 * 3.3.9 * Mapped Python 2.7 SSLError to TimeoutError where appropriate. Timeouts should now consistently raise TimeoutErrors on Python 2.7 for both unsecured and secured connections. Thanks @zbristow. #1222 * 3.3.8 * Fixed MONITOR parsing to properly parse IPv6 client addresses, unix socket connections and commands issued from Lua. Thanks @kukey. #1201 * 3.3.7 * Fixed a regression introduced in 3.3.0 where socket.error exceptions (or subclasses) could potentially be raised instead of redis.exceptions.ConnectionError. #1202 * 3.3.6 * Fixed a regression in 3.3.5 that caused PubSub.get_message() to raise a socket.timeout exception when passing a timeout value. #1200 * 3.3.5 * Fix an issue where socket.timeout errors could be handled by the wrong exception handler in Python 2.7. * 3.3.4 * More specifically identify nonblocking read errors for both SSL and non-SSL connections. 3.3.1, 3.3.2 and 3.3.3 on Python 2.7 could potentially mask a ConnectionError. #1197 * 3.3.3 * The SSL module in Python < 2.7.9 handles non-blocking sockets differently than 2.7.9+. This patch accommodates older versions. #1197 * 3.3.2 * Further fixed a regression introduced in 3.3.0 involving SSL and non-blocking sockets. #1197 * 3.3.1 * Fixed a regression introduced in 3.3.0 involving SSL and non-blocking sockets. #1197 * 3.3.0 * Resolve a race condition with the PubSubWorkerThread. #1150 * Cleanup socket read error messages. Thanks Vic Yu. #1159 * Cleanup the Connection's selector correctly. Thanks Bruce Merry. #1153 * Added a Monitor object to make working with MONITOR output easy. Thanks Roey Prat #1033 * Internal cleanup: Removed the legacy Token class which was necessary with older version of Python that are no longer supported. #1066 * Response callbacks are now case insensitive. 
      This allows users that call Redis.execute_command() directly to pass
      lower-case command names and still get reasonable responses. #1168
    * Added support for hiredis-py 1.0.0 encoding error support. This should
      make the PythonParser and the HiredisParser behave identically when
      encountering encoding errors. Thanks Brian Candler. #1161/#1162
    * All authentication errors now properly raise AuthenticationError.
      AuthenticationError is now a subclass of ConnectionError, which will
      cause the connection to be disconnected and cleaned up
      appropriately. #923
    * Add READONLY and READWRITE commands. Thanks @theodesp. #1114
    * Remove selectors in favor of nonblocking sockets. Selectors had issues
      in some environments including eventlet and gevent. This should resolve
      those issues with no other side effects.
    * Fixed an issue with XCLAIM and previously claimed but not removed
      messages. Thanks @thomdask. #1192/#1191
    * Allow for single connection client instances. These instances are not
      thread safe but offer other benefits, including a subtle performance
      increase.
    * Added extensive health checks that keep the connections lively.
      Passing the "health_check_interval=N" option to the Redis client class
      or to a ConnectionPool ensures that a round trip PING/PONG is
      successful before any command if the underlying connection has been
      idle for more than N seconds. ConnectionErrors and TimeoutErrors are
      automatically retried once for health checks.
    * Changed the PubSubWorkerThread to use a threading.Event object rather
      than a boolean to control the thread's life cycle.
      Thanks Timothy Rule. #1194/#1195
    * Fixed a bug in Pipeline error handling that would incorrectly retry
      ConnectionErrors.
* 3.2.1
    * Fix SentinelConnectionPool to work in multiprocess/forked environments.
* 3.2.0
    * Added support for `select.poll` to test whether data can be read on a
      socket. This should allow for significantly more connections to be
      used with pubsub.
      Fixes #486/#1115
    * Attempt to guarantee that the ConnectionPool hands out healthy
      connections. Healthy connections are those that have an established
      socket connection to the Redis server, are ready to accept a command
      and have no data available to read. Fixes #1127/#886
    * Use the socket.IPPROTO_TCP constant instead of socket.SOL_TCP.
      IPPROTO_TCP is available on more interpreters (Jython, for instance).
      Thanks @Junnplus. #1130
    * Fixed a regression introduced in 3.0 that mishandled exceptions not
      derived from the base Exception class, notably KeyboardInterrupt and
      gevent.Timeout. Thanks Christian Fersch. #1128/#1129
    * Significant improvements to handling connections with forked processes.
      Parent and child processes no longer trample on each other's
      connections. Thanks to Jay Rolette for the patch and for highlighting
      this issue. #504/#732/#784/#863
    * PythonParser no longer closes the associated connection's socket. The
      connection itself will close the socket. #1108/#1085
* 3.1.0
    * Connection URLs must have one of the following schemes:
      redis://, rediss://, unix://. Thanks @jdupl123. #961/#969
    * Fixed an issue with retry_on_timeout logic that caused some
      TimeoutErrors to be retried. Thanks Aaron Yang. #1022/#1023
    * Added support for SNI for SSL. Thanks @oridistor and Roey Prat. #1087
    * Fixed ConnectionPool repr for pools with no connections.
      Thanks Cody Scott. #1043/#995
    * Fixed GEOHASH to return a None value when specifying a place that
      doesn't exist on the server. Thanks @guybe7. #1126
    * Fixed XREADGROUP to return an empty dictionary for messages that have
      been deleted but still exist in the unacknowledged queue.
      Thanks @xeizmendi. #1116
    * Added an owned method to Lock objects. owned returns a boolean
      indicating whether the current lock instance still owns the lock.
      Thanks Dave Johansen. #1112
    * Allow lock.acquire() to accept an optional token argument. If
      provided, the token argument is used as the unique value used to claim
      the lock. Thanks Dave Johansen.
      #1112
    * Added a reacquire method to Lock objects. reacquire attempts to renew
      the lock such that the timeout is extended to the same value that the
      lock was initially acquired with. Thanks Ihor Kalnytskyi. #1014
    * Stream names found within XREAD and XREADGROUP responses now properly
      respect the decode_responses flag.
    * XPENDING_RANGE now requires the user to specify the min, max and
      count arguments. Newer versions of Redis prevent count from being
      infinite, so it's left to the user to specify these values explicitly.
    * ZADD now returns None when xx=True and incr=True and an element is
      specified that doesn't exist in the sorted set. This matches what the
      server returns in this case. #1084
    * Added client_kill_filter that accepts various filters to identify and
      kill clients. Thanks Theofanis Despoudis. #1098
    * Fixed a race condition that occurred when unsubscribing and
      resubscribing to the same channel or pattern in rapid succession.
      Thanks Marcin Raczyński. #764
    * Added a LockNotOwnedError that is raised when trying to extend or
      release a lock that is no longer owned. This is a subclass of
      LockError, so previous code should continue to work as expected.
      Thanks Joshua Harlow. #1095
    * Fixed a bug in GEORADIUS that forced decoding of places without
      respecting the decode_responses option. Thanks Bo Bayles. #1082
* 3.0.1
    * Fixed regression with UnixDomainSocketConnection caused by 3.0.0.
      Thanks Jyrki Muukkonen.
    * Fixed an issue with the new asynchronous flag on flushdb and flushall.
      Thanks rogeryen.
    * Updated the Lock.locked() method to indicate whether *any* process has
      acquired the lock, not just the current one. This is in line with the
      behavior of threading.Lock. Thanks Alan Justino da Silva.
* 3.0.0
    BACKWARDS INCOMPATIBLE CHANGES
    * When using a Lock as a context manager and the lock fails to be
      acquired, a LockError is now raised. This prevents the code block
      inside the context manager from being executed if the lock could not
      be acquired.
    * Renamed LuaLock to Lock.
    * Removed the pipeline-based Lock implementation in favor of the LuaLock
      implementation.
    * Only bytes, strings and numbers (ints, longs and floats) are
      acceptable for keys and values. Previously redis-py attempted to cast
      other types to str() and store the result. This caused much confusion
      and frustration when passing boolean values (cast to 'True' and
      'False') or None values (cast to 'None'). It is now the user's
      responsibility to cast all key names and values to bytes, strings or
      numbers before passing the value to redis-py.
    * The StrictRedis class has been renamed to Redis. StrictRedis will
      continue to exist as an alias of Redis for the foreseeable future.
    * The legacy Redis client class has been removed. It caused much
      confusion to users.
    * ZINCRBY arguments 'value' and 'amount' have swapped order to match
      the Redis server. The new argument order is: keyname, amount, value.
    * MGET no longer raises an error if zero keys are passed in. Instead an
      empty list is returned.
    * MSET and MSETNX now require all keys/values to be specified in a
      single dictionary argument named mapping. This was changed to allow
      options to be added to these commands in the future.
    * ZADD now requires all element names/scores to be specified in a single
      dictionary argument named mapping. This was required to allow the NX,
      XX, CH and INCR options to be specified.
    * ssl_cert_reqs now has a default value of 'required'. This should make
      connecting to a remote Redis server over SSL more secure.
      Thanks u2mejc.
    * Removed support for EOL Python 2.6 and 3.3. Thanks jdufresne.
    OTHER CHANGES
    * Added missing DECRBY command. Thanks derek-dchu.
    * CLUSTER INFO and CLUSTER NODES responses are now properly decoded to
      strings.
    * Added a 'locked()' method to Lock objects. This method returns True
      if the lock has been acquired and owned by the current process,
      otherwise False.
    * EXISTS now supports multiple keys. Its return value is now the number
      of keys in the list that exist.
    * Ensure all commands can accept key names as bytes. This fixes issues
      with BLPOP, BRPOP and SORT.
    * All errors resulting from bad user input are raised as DataError
      exceptions. DataError is a subclass of RedisError, so this should be
      transparent to anyone previously catching these.
    * Added support for NX, XX, CH and INCR options to ZADD.
    * Added support for the MIGRATE command.
    * Added support for the MEMORY USAGE and MEMORY PURGE commands.
      Thanks Itamar Haber.
    * Added support for the 'asynchronous' argument to FLUSHDB and FLUSHALL
      commands. Thanks Itamar Haber.
    * Added support for the BITFIELD command. Thanks Charles Leifer and
      Itamar Haber.
    * Improved performance on pipeline requests with large chunks of data.
      Thanks tzickel.
    * Fixed test suite to not fail if another client is connected to the
      server the tests are running against.
    * Added support for SWAPDB. Thanks Itamar Haber.
    * Added support for all STREAM commands. Thanks Roey Prat and
      Itamar Haber.
    * SHUTDOWN now accepts the 'save' and 'nosave' arguments.
      Thanks dwilliams-kenzan.
    * Added support for ZPOPMAX, ZPOPMIN, BZPOPMAX, BZPOPMIN.
      Thanks Itamar Haber.
    * Added support for the 'type' argument in CLIENT LIST. Thanks Roey Prat.
    * Added support for CLIENT PAUSE. Thanks Roey Prat.
    * Added support for CLIENT ID and CLIENT UNBLOCK. Thanks Itamar Haber.
    * GEODIST now returns a None value when referencing a place that does
      not exist. Thanks qingping209.
    * Added a ping() method to pubsub objects. Thanks krishan-carbon.
    * Fixed a bug with keys in the INFO dict that contained ':' symbols.
      Thanks mzalimeni.
    * Fixed the select system call retry compatibility with Python 2.x.
      Thanks lddubeau.
    * max_connections is now a valid querystring argument for creating
      connection pools from URLs. Thanks mmaslowskicc.
    * Added the UNLINK command. Thanks yozel.
    * Added socket_type option to Connection for configurability.
      Thanks garlicnation.
    * Lock.do_acquire now atomically acquires the lock and sets the expire
      value via set(nx=True, px=timeout). Thanks 23doors.
    * Added 'count' argument to SPOP. Thanks AlirezaSadeghi.
    * Fixed an issue parsing client_list responses that contained an '='.
      Thanks swilly22.
* 2.10.6
    * Various performance improvements. Thanks cjsimpson.
    * Fixed a bug with SRANDMEMBER where the behavior for `number=0` did
      not match the spec. Thanks Alex Wang.
    * Added HSTRLEN command. Thanks Alexander Putilin.
    * Added the TOUCH command. Thanks Anis Jonischkeit.
    * Remove unnecessary calls to the server when registering Lua scripts.
      Thanks Ben Greenberg.
    * SET's EX and PX arguments now allow values of zero. Thanks huangqiyin.
    * Added PUBSUB {CHANNELS, NUMPAT, NUMSUB} commands.
      Thanks Angus Pearson.
    * PubSub connections that encounter `InterruptedError`s now retry
      automatically. Thanks Carlton Gibson and Seth M. Larson.
    * LPUSH and RPUSH commands run on PyPy now correctly return the number
      of items in the list. Thanks Jeong YunWon.
    * Added support to automatically retry socket EINTR errors.
      Thanks Thomas Steinacher.
    * PubSubWorker threads started with `run_in_thread` are now daemonized
      so the thread shuts down when the running process goes away.
      Thanks Keith Ainsworth.
    * Added support for GEO commands. Thanks Pau Freixes, Alex DeBrie and
      Abraham Toriz.
    * Made client construction from URLs smarter. Thanks Tim Savage.
    * Added support for CLUSTER * commands. Thanks Andy Huang.
    * The RESTORE command now accepts an optional `replace` boolean.
      Thanks Yoshinari Takaoka.
    * Attempt to connect to a new Sentinel if a TimeoutError occurs.
      Thanks Bo Lopker.
    * Fixed a bug in the client's `__getitem__` where a KeyError would be
      raised if the value returned by the server is an empty string.
      Thanks Javier Candeira.
    * Socket timeouts when connecting to a server are now properly raised
      as TimeoutErrors.
* 2.10.5
    * Allow URL encoded parameters in Redis URLs.
      Characters like a "/" can now be URL encoded and redis-py will
      correctly decode them. Thanks Paul Keene.
    * Added support for the WAIT command. Thanks https://github.com/eshizhan
    * Better shutdown support for the PubSub Worker Thread. It now properly
      cleans up the connection, unsubscribes from any channels and patterns
      previously subscribed to and consumes any waiting messages on the
      socket.
    * Added the ability to sleep for a brief period in the event of a
      WatchError occurring. Thanks Joshua Harlow.
    * Fixed a bug with pipeline error reporting when dealing with characters
      in error messages that could not be encoded to the connection's
      character set. Thanks Hendrik Muhs.
    * Fixed a bug in Sentinel connections that would inadvertently connect
      to the master when the connection pool resets.
      Thanks https://github.com/df3n5
    * Better timeout support in Pubsub get_message. Thanks Andy Isaacson.
    * Fixed a bug with the HiredisParser that would cause the parser to get
      stuck in an endless loop if a specific number of bytes were delivered
      from the socket. This fix also increases performance of parsing large
      responses from the Redis server.
    * Added support for ZREVRANGEBYLEX.
    * ConnectionErrors are now raised if Redis refuses a connection due to
      the maxclients limit being exceeded. Thanks Roman Karpovich.
    * max_connections can now be set when instantiating client instances.
      Thanks Ohad Perry.
* 2.10.4 (skipped due to a PyPI snafu)
* 2.10.3
    * Fixed a bug with the bytearray support introduced in 2.10.2.
      Thanks Josh Owen.
* 2.10.2
    * Added support for Hiredis's new bytearray support.
      Thanks https://github.com/tzickel
    * POSSIBLE BACKWARDS INCOMPATIBLE CHANGE: Fixed a possible race
      condition when multiple threads share the same Lock instance with a
      timeout. Lock tokens are now stored in thread local storage by
      default. If you have code that acquires a lock in one thread and
      passes that lock instance to another thread to release it, you need
      to disable thread local storage.
      Refer to the doc strings on the Lock class about the thread_local
      argument for more information.
    * Fixed a regression in from_url where "charset" and "errors" weren't
      valid options. "encoding" and "encoding_errors" are still accepted
      and preferred.
    * The "charset" and "errors" options have been deprecated. Passing
      either to StrictRedis.__init__ or from_url will still work but will
      also emit a DeprecationWarning. Instead use the "encoding" and
      "encoding_errors" options.
    * Fixed a compatibility bug with Python 3 when the server closes a
      connection.
    * Added BITPOS command. Thanks https://github.com/jettify
    * Fixed a bug when attempting to send large values to Redis in a
      Pipeline.
* 2.10.1
    * Fixed a bug where Sentinel connections to a server that's no longer a
      master and receives a READONLY error will disconnect and reconnect to
      the master.
* 2.10.0
    * Discontinued support for Python 2.5. Upgrade. You'll be happier.
    * The HiRedis parser will now properly raise ConnectionErrors.
    * Completely refactored PubSub support. Fixes all known PubSub bugs and
      adds a bunch of new features. Docs can be found in the README under
      the new "Publish / Subscribe" section.
    * Added the new HyperLogLog commands (PFADD, PFCOUNT, PFMERGE).
      Thanks Pepijn de Vos and Vincent Ohprecio.
    * Updated TTL and PTTL commands with Redis 2.8+ semantics.
      Thanks Markus Kaiserswerth.
    * *SCAN commands now return a long (int on Python 3) cursor value
      rather than the string representation. This might be slightly
      backwards incompatible in code using *SCAN command loops such as
      "while cursor != '0':".
    * Added extra *SCAN commands that return iterators instead of the
      normal [cursor, data] type. Use scan_iter, hscan_iter, sscan_iter,
      and zscan_iter for iterators. Thanks Mathieu Longtin.
    * Added support for SLOWLOG commands. Thanks Rick van Hattem.
    * Added lexicographical commands ZRANGEBYLEX, ZREMRANGEBYLEX, and
      ZLEXCOUNT for sorted sets.
    * Connection objects now support an optional argument,
      socket_read_size, indicating how much data to read during each
      socket.recv() call. After benchmarking, increased the default size to
      64k, which dramatically improves performance when fetching large
      values, such as many results in a pipeline or a large (>1MB) string
      value.
    * Improved the pack_command and send_packed_command functions to
      increase performance when sending large (>1MB) values.
    * Sentinel Connections to master servers now detect when a READONLY
      error is encountered and disconnect themselves and all other active
      connections to the same master so that the new master can be
      discovered.
    * Fixed Sentinel state parsing on Python 3.
    * Added support for SENTINEL MONITOR, SENTINEL REMOVE, and SENTINEL SET
      commands. Thanks Greg Murphy.
    * INFO output that doesn't follow the "key:value" format will now be
      appended to a key named "__raw__" in the INFO dictionary.
      Thanks Pedro Larroy.
    * The "vagrant" directory contains a complete vagrant environment for
      redis-py developers. The environment runs a Redis master, a Redis
      slave, and 3 Sentinels. Future iterations of the test suite will
      incorporate more integration-style tests, ensuring things like
      failover happen correctly.
    * It's now possible to create connection pool instances from a URL.
      StrictRedis.from_url() now uses this feature to create a connection
      pool instance and use that when creating a new client instance.
      Thanks https://github.com/chillipino
    * When creating client instances or connection pool instances from a
      URL, it's now possible to pass additional options to the connection
      pool with querystring arguments.
    * Fixed a bug where some encodings (like utf-16) were unusable on
      Python 3 as command names and literals would get encoded.
    * Added an SSLConnection class that allows for secure connections
      through stunnel or other means.
      Construct an SSL connection with the ssl=True option on client
      classes, by using the rediss:// scheme in a URL, or by passing the
      SSLConnection class to a connection pool's connection_class argument.
      Thanks https://github.com/oranagra
    * Added a socket_connect_timeout option to control how long to wait
      while establishing a TCP connection before timing out. This lets the
      client fail fast when attempting to connect to a downed server while
      keeping a more lenient timeout for all other socket operations.
    * Added TCP keep-alive support via the socket_keepalive=True option.
      Finer-grained control can be achieved using the
      socket_keepalive_options option, which expects a dictionary with any
      of the keys (socket.TCP_KEEPIDLE, socket.TCP_KEEPCNT,
      socket.TCP_KEEPINTVL) and integers for values. Thanks Yossi Gottlieb.
    * Added a `retry_on_timeout` option that controls how socket.timeout
      errors are handled. By default it is set to False and will cause the
      client to raise a TimeoutError any time a socket.timeout is
      encountered. If `retry_on_timeout` is set to True, the client will
      retry a command that timed out once, like other `socket.error`s.
    * Completely refactored the Lock system. There is now a LuaLock class
      that's used when the Redis server is capable of running Lua scripts,
      along with a fallback class for Redis servers < 2.6. The new locks
      fix several subtle race conditions that the old lock could face. In
      addition, a new method, "extend", is available on lock instances that
      allows a lock owner to extend the amount of time they have the lock
      for. Thanks to Eli Finkelshteyn and https://github.com/chillipino for
      contributions.
* 2.9.1
    * IPv6 support. Thanks https://github.com/amashinchi
* 2.9.0
    * Performance improvement for packing commands when using the
      PythonParser. Thanks Guillaume Viot.
    * Executing an empty pipeline transaction no longer sends MULTI/EXEC to
      the server. Thanks EliFinkelshteyn.
    * Errors when authenticating (incorrect password) and selecting a
      database now close the socket.
    * Full Sentinel support thanks to Vitja Makarov. Thanks!
    * Better repr support for client and connection pool instances.
      Thanks Mark Roberts.
    * Error messages that the server sends to the client are now included
      in the client error message. Thanks Sangjin Lim.
    * Added the SCAN, SSCAN, HSCAN, and ZSCAN commands.
      Thanks Jingchao Hu.
    * ResponseErrors generated by pipeline execution provide additional
      context, including the position of the command in the pipeline and
      the actual command text that generated the error.
    * ConnectionPools now play nicer in threaded environments that fork.
      Thanks Christian Joergensen.
* 2.8.0
    * redis-py should play better with gevent when a gevent Timeout is
      raised. Thanks leifkb.
    * Added SENTINEL command. Thanks Anna Janackova.
    * Fixed a bug where pipelines could potentially corrupt a connection if
      the MULTI command generated a ResponseError.
      Thanks EliFinkelshteyn for the report.
    * Connections now call socket.shutdown() prior to socket.close() to
      ensure communication ends immediately per the note at
      https://docs.python.org/2/library/socket.html#socket.socket.close
      Thanks to David Martin for pointing this out.
    * Lock checks are now based on floats rather than ints.
      Thanks Vitja Makarov.
* 2.7.6
    * Added CONFIG RESETSTAT command. Thanks Yossi Gottlieb.
    * Fixed a bug introduced in 2.7.3 that caused issues with script
      objects and pipelines. Thanks Carpentier Pierre-Francois.
    * Converted redis-py's test suite to use the awesome py.test library.
    * Fixed a bug introduced in 2.7.5 that prevented a ConnectionError from
      being raised when the Redis server is LOADING data.
    * Added a BusyLoadingError exception that's raised when the Redis
      server is starting up and not accepting commands yet.
      BusyLoadingError subclasses ConnectionError, which this state
      previously returned. Thanks Yossi Gottlieb.
* 2.7.5
    * DEL, HDEL and ZREM commands now return the number of keys deleted
      instead of just True/False.
    * from_url now supports URIs with a port number.
      Thanks Aaron Westendorf.
* 2.7.4
    * Added missing INCRBY method. Thanks Krzysztof Dorosz.
    * SET now accepts the EX, PX, NX and XX options from Redis 2.6.12.
      These options will generate errors if used when connected to a Redis
      server < 2.6.12. Thanks George Yoshida.
* 2.7.3
    * Fixed a bug with BRPOPLPUSH and lists with empty strings.
    * All empty except: clauses have been replaced to only catch Exception
      subclasses. This prevents a KeyboardInterrupt from triggering
      exception handlers. Thanks Lucian Branescu Mihaila.
    * All exceptions that are the result of redis server errors now share a
      common Exception subclass, ServerError. Thanks Matt Robenolt.
    * Prevent DISCARD from being called if MULTI wasn't also called.
      Thanks Pete Aykroyd.
    * SREM now returns an integer indicating the number of items removed
      from the set. Thanks https://github.com/ronniekk
    * Fixed a bug with BGSAVE and BGREWRITEAOF response callbacks with
      Python 3. Thanks Nathan Wan.
    * Added CLIENT GETNAME and CLIENT SETNAME commands.
      Thanks https://github.com/bitterb
    * It's now possible to use len() on a pipeline instance to determine
      the number of commands that will be executed. Thanks Jon Parise.
    * Fixed a bug in INFO's parse routine with floating point numbers.
      Thanks Ali Onur Uyar.
    * Fixed a bug with BITCOUNT to allow `start` and `end` to both be zero.
      Thanks Tim Bart.
    * The transaction() method now accepts a boolean keyword argument,
      value_from_callable. By default, or if False is passed, the
      transaction() method will return the value of the pipeline's
      execution. Otherwise, it will return whatever func() returns.
    * Python 3 compatibility fix ensuring values that are already bytes()
      are not re-encoded. Thanks Salimane Adjao Moustapha.
    * Added PSETEX. Thanks YAMAMOTO Takashi.
    * Added a BlockingConnectionPool to limit the number of connections
      that can be created.
      Thanks James Arthur.
    * SORT now accepts a `groups` option that, if specified, will return
      tuples of n-length, where n is the number of keys specified in the
      GET argument. This allows for convenient row-based iteration.
      Thanks Ionuț Arțăriși.
* 2.7.2
    * Parse errors are now *always* raised on multi/exec pipelines,
      regardless of the `raise_on_error` flag. See
      https://groups.google.com/forum/?hl=en&fromgroups=#!topic/redis-db/VUiEFT8U8U0
      for more info.
* 2.7.1
    * Packaged tests with source code.
* 2.7.0
    * Added BITOP and BITCOUNT commands. Thanks Mark Tozzi.
    * Added the TIME command. Thanks Jason Knight.
    * Added support for Lua scripting. Thanks to Angus Peart, Drew
      Smathers, Issac Kelly, Louis-Philippe Perron, Sean Bleier, Jeffrey
      Kaditz, and Dvir Volk for various patches and contributions to this
      feature.
    * Changed the default error handling in pipelines. By default, the
      first error in a pipeline will now be raised. A new parameter to the
      pipeline's execute, `raise_on_error`, can be set to False to keep the
      old behavior of embedding the exception instances in the result.
    * Fixed a bug with pipelines where parse errors won't corrupt the
      socket.
    * Added the optional `number` argument to SRANDMEMBER for use with
      Redis 2.6+ servers.
    * Added PEXPIRE/PEXPIREAT/PTTL commands. Thanks Luper Rouch.
    * Added INCRBYFLOAT/HINCRBYFLOAT commands. Thanks Nikita Uvarov.
    * High-precision floating point values won't lose their precision when
      being sent to the Redis server. Thanks Jason Oster and Oleg Pudeyev.
    * Added CLIENT LIST/CLIENT KILL commands.
* 2.6.2
    * `from_url` is now available as a classmethod on client classes.
      Thanks Jon Parise for the patch.
    * Fixed several encoding errors resulting from the Python 3.x support.
* 2.6.1
    * Python 3.x support! Big thanks to Alex Grönholm.
    * Fixed a bug in the PythonParser's read_response that could hide an
      error from the client (#251).
* 2.6.0
    * Changed (p)subscribe and (p)unsubscribe to no longer return messages
      indicating the channel was subscribed/unsubscribed to. These messages
      are available in the listen() loop instead. This is to prevent the
      following scenario:
        * Client A is subscribed to "foo"
        * Client B publishes a message to "foo"
        * Client A subscribes to channel "bar" at the same time.
      Prior to this change, the subscribe() call would return the published
      messages on "foo" rather than the subscription confirmation to "bar".
    * Added support for GETRANGE. Thanks Jean-Philippe Caruana.
    * A new setting, "decode_responses", specifies whether return values
      from Redis commands get decoded automatically using the client's
      charset value. Thanks to Frankie Dintino for the patch.
* 2.4.13
    * redis.from_url() can take a URL representing a Redis connection
      string and return a client object. Thanks Kenneth Reitz for the
      patch.
* 2.4.12
    * ConnectionPool is now fork-safe. Thanks Josiah Carson for the patch.
* 2.4.11
    * AuthenticationError will now be correctly raised if an invalid
      password is supplied.
    * If Hiredis is unavailable, the HiredisParser will raise a RedisError
      if selected manually.
    * Made the INFO command more tolerant of changes to Redis's formatting.
      Fix for #217.
* 2.4.10
    * Buffer reads from the socket in the PythonParser. Fix for a
      Windows-specific bug (#205).
    * Added the OBJECT and DEBUG OBJECT commands.
    * Added __del__ methods for classes that hold on to resources that need
      to be cleaned up. This should prevent resource leakage when these
      objects leave scope due to misuse or unhandled exceptions.
      Thanks David Wolever for the suggestion.
    * Added the ECHO command for completeness.
    * Fixed a bug where attempting to subscribe to a PubSub channel of a
      Redis server that's down would blow out the stack. Fixes #179 and
      #195. Thanks Ovidiu Predescu for the test case.
    * StrictRedis's TTL command now returns a -1 when querying a key with
      no expiration. The Redis class continues to return None.
    * ZADD and SADD now return integer values indicating the number of
      items added. Thanks Homer Strong.
    * Renamed the base client class to StrictRedis, fixing ZADD and LREM to
      use their official argument order. The Redis class is now a subclass
      of StrictRedis, implementing the legacy redis-py implementations of
      ZADD and LREM. Docs have been updated to suggest the use of
      StrictRedis.
    * SETEX in StrictRedis is now compliant with the official Redis SETEX
      command. The name, value, time implementation moved to "Redis" for
      backwards compatibility.
* 2.4.9
    * Removed socket retry logic in Connection. It is the responsibility of
      the caller to determine if the command is safe and can be retried.
      Thanks David Wolever.
    * Added some extra guards around various types of exceptions being
      raised when sending or parsing data. Thanks David Wolever and
      Denis Bilenko.
* 2.4.8
    * Imported with_statement from __future__ for Python 2.5 compatibility.
* 2.4.7
    * Fixed a bug where some connections were not getting released back to
      the connection pool after pipeline execution.
    * Pipelines can now be used as context managers. This is the preferred
      way of use to ensure that connections get cleaned up properly.
      Thanks David Wolever.
    * Added a convenience method called transaction() on the base Redis
      class. This method eliminates much of the boilerplate used when using
      pipelines to watch Redis keys. See the documentation for details on
      usage.
* 2.4.6
    * Variadic arguments for SADD, SREM, ZREM, HDEL, LPUSH, and RPUSH.
      Thanks Raphaël Vinot.
    * (CRITICAL) Fixed an error in the Hiredis parser that occasionally
      caused the socket connection to become corrupted and unusable. This
      became noticeable once connection pools started to be used.
    * ZRANGE, ZREVRANGE, ZRANGEBYSCORE, and ZREVRANGEBYSCORE now take an
      additional optional argument, score_cast_func, which is a callable
      used to cast the score value in the return type. The default is
      float.
    * Removed the PUBLISH method from the PubSub class.
      Connections that are [P]SUBSCRIBEd cannot issue PUBLISH commands, so
      it doesn't make sense to have it here.
    * Pipelines now contain WATCH and UNWATCH. Calling WATCH or UNWATCH
      from the base client class will result in a deprecation warning.
      After WATCHing one or more keys, the pipeline will be placed in
      immediate execution mode until UNWATCH or MULTI are called. Refer to
      the new pipeline docs in the README for more information. Thanks to
      David Wolever and Randall Leeds for greatly helping with this.
* 2.4.5
    * The PythonParser now works better when reading zero-length strings.
* 2.4.4
    * Fixed a typo introduced in 2.4.3.
* 2.4.3
    * Fixed a bug in the UnixDomainSocketConnection caused when trying to
      form an error message after a socket error.
* 2.4.2
    * Fixed a bug in pipeline that caused an exception while trying to
      reconnect after a connection timeout.
* 2.4.1
    * Fixed a bug in the PythonParser if disconnect is called before
      connect.
* 2.4.0
    * WARNING: 2.4 contains several backwards incompatible changes.
    * Completely refactored Connection objects. Moved much of the Redis
      protocol packing for requests here, and eliminated the nasty
      dependencies it had on the client to do AUTH and SELECT commands on
      connect.
    * Connection objects now have a parser attribute. Parsers are
      responsible for reading data that Redis sends. Two parsers ship with
      redis-py: a PythonParser and the HiRedis parser. redis-py will
      automatically use the HiRedis parser if you have the Python hiredis
      module installed; otherwise it will fall back to the PythonParser.
      You can force one or the other, or even an external one, by passing
      the `parser_class` argument to ConnectionPool.
    * Added a UnixDomainSocketConnection for users wanting to talk to the
      Redis instance running on a local machine only. You can use this
      connection by passing it to the `connection_class` argument of the
      ConnectionPool.
    * Connections no longer derive from threading.local. See the
      threading.local note below.
    * ConnectionPool has been completely refactored. The ConnectionPool now
      maintains a list of connections. The redis-py client only hangs on to
      a ConnectionPool instance, calling get_connection() anytime it needs
      to send a command. When get_connection() is called, the command name
      and any keys involved in the command are passed as arguments.
      Subclasses of ConnectionPool could use this information to identify
      the shard the keys belong to and return a connection to it.
      ConnectionPool also implements disconnect() to force all connections
      in the pool to disconnect from the Redis server.
    * redis-py no longer supports the SELECT command. You can still connect
      to a specific database by specifying it when instantiating a client
      instance or by creating a connection pool. If you need to talk to
      multiple databases within your application, you should use a separate
      client instance for each database you want to talk to.
    * Completely refactored Publish/Subscribe support. The subscribe and
      listen commands are no longer available on the redis-py Client class.
      Instead, the `pubsub` method returns an instance of the PubSub class,
      which contains all publish/subscribe support. Note, you can still
      PUBLISH from the redis-py client class if you desire.
    * Removed support for all previously deprecated commands or options.
    * redis-py no longer uses threading.local in any way. Since the Client
      class no longer holds on to a connection, it's no longer needed. You
      can now pass client instances between threads, and commands run on
      those threads will retrieve an available connection from the pool,
      use it and release it. It should now be trivial to use redis-py with
      eventlet or greenlet.
    * ZADD now accepts pairs of value=score keyword arguments. This should
      help resolve the long-standing issue #72. The older value and score
      arguments have been deprecated in favor of the keyword argument
      style.
    * Client instances now get their own copy of RESPONSE_CALLBACKS.
      The new set_response_callback method adds a user defined callback to
      the instance.
    * Support Jython, fixing #97. Thanks to Adam Vandenberg for the patch.
    * Using __getitem__ now properly raises a KeyError when the key is not
      found. Thanks Ionuț Arțăriși for the patch.
    * Newer Redis versions return a LOADING message for some commands while
      the database is loading from disk during server start. This could
      cause problems with SELECT. We now force a socket disconnection prior
      to raising a ResponseError so subsequent connections have to reconnect
      and re-select the appropriate database. Thanks to Benjamin Anderson
      for finding this and fixing.
* 2.2.4
    * WARNING: Potential backwards incompatible change - Changed order of
      parameters of ZREVRANGEBYSCORE to match those of the actual Redis
      command. This is only backwards-incompatible if you were passing
      max and min via keyword args. If passing by normal args, nothing in
      user code should have to change. Thanks Stéphane Angel for the fix.
    * Fixed INFO to properly parse the Redis data correctly for both 2.2.x
      and 2.3+. Thanks Stéphane Angel for the fix.
    * Lock objects now store their timeout value as a float. This allows
      floats to be used as timeout values. No changes to existing code
      required.
    * WATCH now supports multiple keys. Thanks Rich Schumacher.
    * Broke out some code that was Python 2.4 incompatible. redis-py should
      now be usable on 2.4, but this hasn't actually been tested. Thanks
      Dan Colish for the patch.
    * Optimized some code using izip and islice. Should have a pretty good
      speed up on larger data sets. Thanks Dan Colish.
    * Better error handling when submitting an empty mapping to HMSET.
      Thanks Dan Colish.
    * Subscription status is now reset after every (re)connection.
* 2.2.3
    * Added support for Hiredis. To use, simply "pip install hiredis" or
      "easy_install hiredis". Thanks to Pieter Noordhuis for the hiredis-py
      bindings and the patch to redis-py.
    * The connection class is chosen based on whether hiredis is installed
      or not. To force the use of the PythonConnection, simply create your
      own ConnectionPool instance with the connection_class argument
      assigned to the PythonConnection class.
    * Added missing command ZREVRANGEBYSCORE. Thanks Jay Baird for the
      patch.
    * The INFO command should be parsed correctly on 2.2.x server versions
      and is backwards compatible with older versions. Thanks Brett
      Hoerner.
* 2.2.2
    * Fixed a bug in ZREVRANK where retrieving the rank of a value not in
      the zset would raise an error.
    * Fixed a bug in Connection.send where the errno import was getting
      overwritten by a local variable.
    * Fixed a bug in SLAVEOF when promoting an existing slave to a master.
    * Reverted change of download URL back to redis-VERSION.tar.gz. 2.2.1's
      change of this actually broke PyPI for pip installs. Sorry!
* 2.2.1
    * Changed archive name to redis-py-VERSION.tar.gz to not conflict
      with the Redis server archive.
* 2.2.0
    * Implemented SLAVEOF
    * Implemented CONFIG as config_get and config_set
    * Implemented GETBIT/SETBIT
    * Implemented BRPOPLPUSH
    * Implemented STRLEN
    * Implemented PERSIST
    * Implemented SETRANGE

# Contributing

## Introduction

First off, thank you for considering contributing to redis-py. We value
community contributions!

## Contributions We Need

You may already know what you want to contribute -- a fix for a bug you
encountered, or a new feature your team wants to use.

If you don't know what to contribute, keep an open mind! Improving
documentation, bug triaging, and writing tutorials are all examples of
helpful contributions that mean less work for you.

## Your First Contribution

Unsure where to begin contributing? You can start by looking through
[help-wanted
issues](https://github.com/andymccurdy/redis-py/issues?q=is%3Aopen+is%3Aissue+label%3ahelp-wanted).
Never contributed to open source before? Here are a couple of friendly
tutorials:

-
-

## Getting Started

Here's how to get started with your code contribution:

1. Create your own fork of redis-py
2. Do the changes in your fork
3. *Create a virtualenv and install the development dependencies from the
   dev_requirements.txt file:*

   a. python -m venv .venv

   b. source .venv/bin/activate

   c. pip install -r dev_requirements.txt

4. If you need a development environment, run `invoke devenv`
5. While developing, make sure the tests pass by running `invoke tests`
6. If you like the change and think the project could use it, send a pull
   request

To see what else is part of the automation, run `invoke -l`

## The Development Environment

Running `invoke devenv` installs the development dependencies specified
in the dev_requirements.txt. It starts all of the dockers used by this
project, and leaves them running. These can be easily cleaned up with
`invoke clean`. NOTE: it is assumed that the user running these tests
can execute docker and its various commands.

- A master Redis node
- A Redis replica node
- Three sentinel Redis nodes
- A redis cluster
- A stunnel docker, fronting the master Redis node
- A Redis node, running unstable - the latest redis

The replica node is a replica of the master node, using the
[leader-follower replication](https://redis.io/topics/replication)
feature.

The sentinels monitor the master node in a [sentinel high-availability
configuration](https://redis.io/topics/sentinel).

## Testing

Call `invoke tests` to run all tests, or `invoke all-tests` to run
linters as well as tests. With the 'tests' and 'all-tests' targets, all
Redis and RedisCluster tests will be run.

It is possible to run only Redis client tests (with cluster mode
disabled) by using `invoke standalone-tests`; similarly, RedisCluster
tests can be run by using `invoke cluster-tests`.

Each run of tox starts and stops the various dockers required.
Sometimes things get stuck; an `invoke clean` can help.

Continuous Integration uses these same wrappers to run all of these
tests against multiple versions of python. Feel free to test your
changes against all the python versions supported, as declared by the
tox.ini file (eg: tox -e py39). If you have the various python versions
on your desktop, you can run *tox* by itself, to test all supported
versions.

### Docker Tips

Following are a few tips that can help you work with the Docker-based
development environment.

To get a bash shell inside of a container:

`$ docker run -it <service> /bin/bash`

**Note**: The term "service" refers to the "services" defined in the
`tox.ini` file at the top of the repo: "master", "replicaof",
"sentinel_1", "sentinel_2", "sentinel_3".

Containers run a minimal Debian image that probably lacks tools you
want to use. To install packages, first get a bash session (see
previous tip) and then run:

`$ apt update && apt install <package>`

You can see the logging output of a container like this:

`$ docker logs -f <service>`

The command make test runs all tests in all tested Python
environments. To run the tests in a single environment, like Python
3.9, use a command like this:

`$ docker-compose run test tox -e py39 -- --redis-url=redis://master:6379/9`

Here, the flag `-e py39` runs tests against the Python 3.9 tox
environment. And note from the example that whenever you run tests like
this, instead of using make test, you need to pass
`-- --redis-url=redis://master:6379/9`. This points the tests at the
"master" container.

Our test suite uses `pytest`. You can run a specific test suite against
a specific Python version like this:

`$ docker-compose run test tox -e py36 -- --redis-url=redis://master:6379/9 tests/test_commands.py`

### Troubleshooting

If you get any errors when running `make dev` or `make test`, make sure
that you are using supported versions of Docker. Please try at least
these versions of Docker.
- Docker 19.03.12

## How to Report a Bug

### Security Vulnerabilities

**NOTE**: If you find a security vulnerability, do NOT open an issue.
Email [Redis Open Source (oss@redis.com)](mailto:oss@redis.com) instead.

In order to determine whether you are dealing with a security issue,
ask yourself these two questions:

- Can I access something that's not mine, or something I shouldn't have
  access to?
- Can I disable something for other people?

If the answer to either of those two questions is *yes*, then you're
probably dealing with a security issue. Note that even if you answer
*no* to both questions, you may still be dealing with a security issue,
so if you're unsure, just email [us](mailto:oss@redis.com).

### Everything Else

When filing an issue, make sure to answer these five questions:

1. What version of redis-py are you using?
2. What version of redis are you using?
3. What did you do?
4. What did you expect to see?
5. What did you see instead?

## How to Suggest a Feature or Enhancement

If you'd like to contribute a new feature, make sure you check our
issue list to see if someone has already proposed it. Work may already
be under way on the feature you want -- or we may have rejected a
feature like it already.

If you don't see anything, open a new issue that describes the feature
you would like and how it should work.

## Code Review Process

The core team looks at Pull Requests on a regular basis. We will give
feedback as soon as possible. After feedback, we expect a response
within two weeks. After that time, we may close your PR if it isn't
showing any activity.
INSTALL:

Please use `python setup.py install` and report errors to Andy McCurdy
(sedrik@gmail.com)

LICENSE:

Copyright (c) 2012 Andy McCurdy

Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

MANIFEST.in:

include INSTALL
include LICENSE
include README.md
exclude __pycache__
recursive-include tests *
recursive-exclude tests *.pyc

# redis-py

The Python interface to the Redis key-value store.
[![CI](https://github.com/redis/redis-py/workflows/CI/badge.svg?branch=master)](https://github.com/redis/redis-py/actions?query=workflow%3ACI+branch%3Amaster)
[![docs](https://readthedocs.org/projects/redis/badge/?version=stable&style=flat)](https://redis-py.readthedocs.io/en/stable/)
[![MIT licensed](https://img.shields.io/badge/license-MIT-blue.svg)](./LICENSE)
[![pypi](https://badge.fury.io/py/redis.svg)](https://pypi.org/project/redis/)
[![codecov](https://codecov.io/gh/redis/redis-py/branch/master/graph/badge.svg?token=yenl5fzxxr)](https://codecov.io/gh/redis/redis-py)
[![Total alerts](https://img.shields.io/lgtm/alerts/g/redis/redis-py.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/redis/redis-py/alerts/)

[Installation](#installation) | [Contributing](#contributing) |
[Getting Started](#getting-started) |
[Connecting To Redis](#connecting-to-redis)

---------------------------------------------

## Python Notice

redis-py 4.2.x will be the last generation of redis-py to support
python 3.6 as it has been [End of
Life'd](https://www.python.org/dev/peps/pep-0494/#schedule-last-security-only-release).
Async support was introduced in redis-py 4.2.x thanks to
[aioredis](https://github.com/aio-libs/aioredis-py), which necessitates
this change. We will continue to maintain 3.6 support as long as
possible - but the plan is for redis-py version 5+ to officially remove
3.6.

---------------------------

## Installation

redis-py requires a running Redis server. See [Redis's
quickstart](https://redis.io/topics/quickstart) for installation
instructions.

redis-py can be installed using pip similar to other Python packages.
Do not use sudo with pip. It is usually good to work in a
[virtualenv](https://virtualenv.pypa.io/en/latest/) or
[venv](https://docs.python.org/3/library/venv.html) to avoid conflicts
with other package managers and Python projects. For a quick
introduction see [Python Virtual Environments in Five
Minutes](https://bit.ly/py-env).
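The virtualenv advice above can be sketched as a short shell session. This is an assumption-laden example, not part of the official docs: the `.venv` directory name is just a common convention, and `python3` is assumed to be on your PATH.

``` bash
# create an isolated environment so redis-py does not touch system packages
python3 -m venv .venv
# activate it; on Windows use .venv\Scripts\activate instead
. .venv/bin/activate
# pip now installs into .venv, so no sudo is needed
pip install redis
```

Once activated, everything `pip` installs lands inside `.venv` and can be discarded by deleting that directory.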
To install redis-py, simply:

``` bash
$ pip install redis
```

or from source:

``` bash
$ python setup.py install
```

View the current documentation
[here](https://readthedocs.org/projects/redis/).

## Contributing

Want to contribute a feature, bug fix, or report an issue? Check out
our [guide to
contributing](https://github.com/redis/redis-py/blob/master/CONTRIBUTING.md).

## Getting Started

redis-py supports Python 3.7+.

``` pycon
>>> import redis
>>> r = redis.Redis(host='localhost', port=6379, db=0)
>>> r.set('foo', 'bar')
True
>>> r.get('foo')
b'bar'
```

By default, all responses are returned as bytes in Python 3.

If **all** string responses from a client should be decoded, the user
can specify *decode_responses=True* in `Redis.__init__`. In this case,
any Redis command that returns a string type will be decoded with the
encoding specified.

The default encoding is utf-8, but this can be customized by specifying
the encoding argument for the redis.Redis class. The encoding will be
used to automatically encode any strings passed to commands, such as
key names and values.

--------------------

### MSET, MSETNX and ZADD

These commands all accept a mapping of key/value pairs. In redis-py 2.X
this mapping could be specified as `*args` or as `**kwargs`. Both of
these styles caused issues when Redis introduced optional flags to
ZADD. Relying on `*args` caused issues with the optional argument
order, especially in Python 2.7. Relying on `**kwargs` caused potential
collision issues of user keys with the argument names in the method
signature.

To resolve this, redis-py 3.0 has changed these three commands to all
accept a single positional argument named mapping that is expected to
be a dict. For MSET and MSETNX, the dict is a mapping of
key-names -> values. For ZADD, the dict is a mapping of
element-names -> score.
MSET, MSETNX and ZADD now look like:

``` python
def mset(self, mapping):
def msetnx(self, mapping):
def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False):
```

All 2.X users that use these commands must modify their code to supply
keys and values as a dict to these commands.

### ZINCRBY

redis-py 2.X accidentally modified the argument order of ZINCRBY,
swapping the order of value and amount. ZINCRBY now looks like:

``` python
def zincrby(self, name, amount, value):
```

All 2.X users that rely on ZINCRBY must swap the order of amount and
value for the command to continue to work as intended.

### Encoding of User Input

redis-py 3.0 only accepts user data as bytes, strings or numbers (ints,
longs and floats). Attempting to specify a key or a value as any other
type will raise a DataError exception.

redis-py 2.X attempted to coerce any type of input into a string. While
occasionally convenient, this caused all sorts of hidden errors when
users passed boolean values (which were coerced to 'True' or 'False'),
a None value (which was coerced to 'None') or other values, such as
user defined types.

All 2.X users should make sure that the keys and values they pass into
redis-py are either bytes, strings or numbers.

### Locks

redis-py 3.0 drops support for the pipeline-based Lock and now only
supports the Lua-based lock. In doing so, LuaLock has been renamed to
Lock. This also means that redis-py Lock objects require Redis server
2.6 or greater.

2.X users that were explicitly referring to *LuaLock* will now have to
refer to *Lock* instead.

### Locks as Context Managers

redis-py 3.0 now raises a LockError when using a lock as a context
manager and the lock cannot be acquired within the specified timeout.
This is more of a bug fix than a backwards incompatible change.
However, given an error is now raised where none was before, this might
alarm some users.
2.X users should make sure they're wrapping their lock code in a
try/catch like this:

``` python
try:
    with r.lock('my-lock-key', blocking_timeout=5) as lock:
        # code you want executed only after the lock has been acquired
        pass
except LockError:
    # the lock wasn't acquired
    pass
```

## API Reference

The [official Redis command documentation](https://redis.io/commands)
does a great job of explaining each command in detail. redis-py
attempts to adhere to the official command syntax. There are a few
exceptions:

- **SELECT**: Not implemented. See the explanation in the Thread Safety
  section below.
- **DEL**: *del* is a reserved keyword in the Python syntax. Therefore
  redis-py uses *delete* instead.
- **MULTI/EXEC**: These are implemented as part of the Pipeline class.
  The pipeline is wrapped with the MULTI and EXEC statements by default
  when it is executed, which can be disabled by specifying
  transaction=False. See more about Pipelines below.
- **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as
  a separate class as it places the underlying connection in a state
  where it can't execute non-pubsub commands. Calling the pubsub method
  from the Redis client will return a PubSub instance where you can
  subscribe to channels and listen for messages. You can only call
  PUBLISH from the Redis client (see [this comment on issue
  #151](https://github.com/redis/redis-py/issues/151#issuecomment-1545015)
  for details).
- **SCAN/SSCAN/HSCAN/ZSCAN**: The *SCAN commands are implemented as
  they exist in the Redis documentation. In addition, each command has
  an equivalent iterator method. These are purely for convenience so
  the user doesn't have to keep track of the cursor while iterating.
  Use the scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this
  behavior.

## Connecting to Redis

### Client Classes: Redis and StrictRedis

redis-py 3.0 drops support for the legacy *Redis* client class.
*StrictRedis* has been renamed to *Redis* and an alias named
*StrictRedis* is provided so that users previously using *StrictRedis*
can continue to run unchanged.

The 2.X *Redis* class provided alternative implementations of a few
commands. This confused users (rightfully so) and caused a number of
support issues. To make things easier going forward, it was decided to
drop support for these alternate implementations and instead focus on a
single client class.

2.X users that are already using StrictRedis don't have to change the
class name. StrictRedis will continue to work for the foreseeable
future.

2.X users that are using the Redis class will have to make changes if
they use any of the following commands:

- SETEX: The argument order has changed. The new order is (name, time,
  value).
- LREM: The argument order has changed. The new order is (name, num,
  value).
- TTL and PTTL: The return value is now always an int and matches the
  official Redis command (>0 indicates the timeout, -1 indicates that
  the key exists but that it has no expire time set, -2 indicates that
  the key does not exist)

### Connection Pools

Behind the scenes, redis-py uses a connection pool to manage
connections to a Redis server. By default, each Redis instance you
create will in turn create its own connection pool. You can override
this behavior and use an existing connection pool by passing an already
created connection pool instance to the connection_pool argument of the
Redis class. You may choose to do this in order to implement client
side sharding or have fine-grain control of how connections are
managed.

``` pycon
>>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
>>> r = redis.Redis(connection_pool=pool)
```

### Connections

ConnectionPools manage a set of Connection instances. redis-py ships
with two types of Connections. The default, Connection, is a normal TCP
socket based connection.
The UnixDomainSocketConnection allows for clients running on the same
device as the server to connect via a unix domain socket. To use a
UnixDomainSocketConnection connection, simply pass the unix_socket_path
argument, which is a string pointing to the unix domain socket file.
Additionally, make sure the unixsocket parameter is defined in your
redis.conf file. It's commented out by default.

``` pycon
>>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')
```

You can create your own Connection subclasses as well. This may be
useful if you want to control the socket behavior within an async
framework. To instantiate a client class using your own connection, you
need to create a connection pool, passing your class to the
connection_class argument. Other keyword parameters you pass to the
pool will be passed to the class specified during initialization.

``` pycon
>>> pool = redis.ConnectionPool(connection_class=YourConnectionClass, your_arg='...', ...)
```

Connections maintain an open socket to the Redis server. Sometimes
these sockets are interrupted or disconnected for a variety of reasons.
For example, network appliances, load balancers and other services that
sit between clients and servers are often configured to kill
connections that remain idle for a given threshold.

When a connection becomes disconnected, the next command issued on that
connection will fail and redis-py will raise a ConnectionError to the
caller. This allows each application that uses redis-py to handle
errors in a way that's fitting for that specific application. However,
constant error handling can be verbose and cumbersome, especially when
socket disconnections happen frequently in many production
environments.

To combat this, redis-py can issue regular health checks to assess the
liveliness of a connection just before issuing a command. Users can
pass `health_check_interval=N` to the Redis or ConnectionPool classes
or as a query argument within a Redis URL.
The value of `health_check_interval` must be an integer. A value of
`0`, the default, disables health checks. Any positive integer will
enable health checks. Health checks are performed just before a command
is executed if the underlying connection has been idle for more than
`health_check_interval` seconds. For example,
`health_check_interval=30` will ensure that a health check is run on
any connection that has been idle for 30 or more seconds just before a
command is executed on that connection.

If your application is running in an environment that disconnects idle
connections after 30 seconds you should set the
`health_check_interval` option to a value less than 30.

This option also works on any PubSub connection that is created from a
client with `health_check_interval` enabled. PubSub users need to
ensure that `get_message()` or `listen()` are called more frequently
than `health_check_interval` seconds. It is assumed that most workloads
already do this.

If your PubSub use case doesn't call `get_message()` or `listen()`
frequently, you should call `pubsub.check_health()` explicitly on a
regular basis.

### SSL Connections

redis-py 3.0 changes the default value of the ssl_cert_reqs option from
None to 'required'. See [Issue
1016](https://github.com/redis/redis-py/issues/1016). This change
enforces hostname validation when accepting a cert from a remote SSL
terminator. If the terminator doesn't properly set the hostname on the
cert this will cause redis-py 3.0 to raise a ConnectionError.

This check can be disabled by setting ssl_cert_reqs to None. Note that
doing so removes the security check. Do so at your own risk.
Example with hostname verification using a local certificate bundle
(linux):

``` pycon
>>> import redis
>>> r = redis.Redis(host='xxxxxx.cache.amazonaws.com', port=6379, db=0, ssl=True, ssl_ca_certs='/etc/ssl/certs/ca-certificates.crt')
>>> r.set('foo', 'bar')
True
>>> r.get('foo')
b'bar'
```

Example with hostname verification using
[certifi](https://pypi.org/project/certifi/):

``` pycon
>>> import redis, certifi
>>> r = redis.Redis(host='xxxxxx.cache.amazonaws.com', port=6379, db=0, ssl=True, ssl_ca_certs=certifi.where())
>>> r.set('foo', 'bar')
True
>>> r.get('foo')
b'bar'
```

Example turning off hostname verification (not recommended):

``` pycon
>>> import redis
>>> r = redis.Redis(host='xxxxxx.cache.amazonaws.com', port=6379, db=0, ssl=True, ssl_cert_reqs=None)
>>> r.set('foo', 'bar')
True
>>> r.get('foo')
b'bar'
```

### Sentinel support

redis-py can be used together with [Redis
Sentinel](https://redis.io/topics/sentinel) to discover Redis nodes.
You need to have at least one Sentinel daemon running in order to use
redis-py's Sentinel support.

Connecting redis-py to the Sentinel instance(s) is easy. You can use a
Sentinel connection to discover the master and slaves network
addresses:

``` pycon
>>> from redis import Sentinel
>>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
>>> sentinel.discover_master('mymaster')
('127.0.0.1', 6379)
>>> sentinel.discover_slaves('mymaster')
[('127.0.0.1', 6380)]
```

To connect to a sentinel which uses SSL ([see SSL
connections](#ssl-connections) for more examples of SSL
configurations):

``` pycon
>>> from redis import Sentinel
>>> sentinel = Sentinel([('localhost', 26379)], ssl=True, ssl_ca_certs='/etc/ssl/certs/ca-certificates.crt')
>>> sentinel.discover_master('mymaster')
('127.0.0.1', 6379)
```

You can also create Redis client connections from a Sentinel instance.
You can connect to either the master (for write operations) or a slave
(for read-only operations).
``` pycon
>>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
>>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
>>> master.set('foo', 'bar')
>>> slave.get('foo')
b'bar'
```

The master and slave objects are normal Redis instances with their
connection pool bound to the Sentinel instance. When a Sentinel backed
client attempts to establish a connection, it first queries the
Sentinel servers to determine an appropriate host to connect to. If no
server is found, a MasterNotFoundError or SlaveNotFoundError is raised.
Both exceptions are subclasses of ConnectionError.

When trying to connect to a slave client, the Sentinel connection pool
will iterate over the list of slaves until it finds one that can be
connected to. If no slaves can be connected to, a connection will be
established with the master.

See [Guidelines for Redis clients with support for Redis
Sentinel](https://redis.io/topics/sentinel-clients) to learn more about
Redis Sentinel.

--------------------------

### Parsers

Parser classes provide a way to control how responses from the Redis
server are parsed. redis-py ships with two parser classes, the
PythonParser and the HiredisParser. By default, redis-py will attempt
to use the HiredisParser if you have the hiredis module installed and
will fall back to the PythonParser otherwise.

Hiredis is a C library maintained by the core Redis team. Pieter
Noordhuis was kind enough to create Python bindings. Using Hiredis can
provide up to a 10x speed improvement in parsing responses from the
Redis server. The performance increase is most noticeable when
retrieving many pieces of data, such as from LRANGE or SMEMBERS
operations.

Hiredis is available on PyPI, and can be installed via pip just like
redis-py.

``` bash
$ pip install hiredis
```

### Response Callbacks

The client class uses a set of callbacks to cast Redis responses to the
appropriate Python type.
There are a number of these callbacks defined on the Redis client class
in a dictionary called RESPONSE_CALLBACKS.

Custom callbacks can be added on a per-instance basis using the
set_response_callback method. This method accepts two arguments: a
command name and the callback. Callbacks added in this manner are only
valid on the instance the callback is added to. If you want to define
or override a callback globally, you should make a subclass of the
Redis client and add your callback to its RESPONSE_CALLBACKS class
dictionary.

Response callbacks take at least one parameter: the response from the
Redis server. Keyword arguments may also be accepted in order to
further control how to interpret the response. These keyword arguments
are specified during the command's call to execute_command. The ZRANGE
implementation demonstrates the use of response callback keyword
arguments with its "withscores" argument.

### Thread Safety

Redis client instances can safely be shared between threads.
Internally, connection instances are only retrieved from the connection
pool during command execution, and returned to the pool directly after.
Command execution never modifies state on the client instance.

However, there is one caveat: the Redis SELECT command. The SELECT
command allows you to switch the database currently in use by the
connection. That database remains selected until another is selected or
until the connection is closed. This creates an issue in that
connections could be returned to the pool that are connected to a
different database.

As a result, redis-py does not implement the SELECT command on client
instances. If you use multiple Redis databases within the same
application, you should create a separate client instance (and possibly
a separate connection pool) for each database.

It is not safe to pass PubSub or Pipeline objects between threads.
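To illustrate why the check-out/check-in pattern described above is thread-safe, here is a minimal pure-Python sketch. These are toy classes, not redis-py's actual implementation: the client holds only a reference to the pool, and every command borrows a connection and returns it before the call completes, so the client itself carries no per-command state.

``` python
import queue
import threading

class ToyConnection:
    """Stands in for a real TCP connection to the server."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

class ToyPool:
    """Hands out connections; blocks when all are in use."""
    def __init__(self, size=4):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put(ToyConnection(i))

    def get_connection(self):
        return self._free.get()

    def release(self, conn):
        self._free.put(conn)

class ToyClient:
    """Shareable between threads: no per-command state lives on the client."""
    def __init__(self, pool):
        self._pool = pool

    def execute_command(self, *args):
        conn = self._pool.get_connection()   # borrow a connection...
        try:
            return (conn.conn_id, args)      # ...a real client would send/recv here
        finally:
            self._pool.release(conn)         # ...and always return it, even on error

client = ToyClient(ToyPool(size=2))
results = []

def worker(n):
    results.append(client.execute_command("SET", f"key{n}", n))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Eight threads share one client backed by only two connections; because each command releases its connection in a `finally` block, no thread can leak or corrupt another thread's connection state.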
### Pipelines

Pipelines are a subclass of the base Redis class that provide support
for buffering multiple commands to the server in a single request. They
can be used to dramatically increase the performance of groups of
commands by reducing the number of back-and-forth TCP packets between
the client and server.

Pipelines are quite simple to use:

``` pycon
>>> r = redis.Redis(...)
>>> r.set('bing', 'baz')
>>> # Use the pipeline() method to create a pipeline instance
>>> pipe = r.pipeline()
>>> # The following SET commands are buffered
>>> pipe.set('foo', 'bar')
>>> pipe.get('bing')
>>> # the EXECUTE call sends all buffered commands to the server, returning
>>> # a list of responses, one for each command.
>>> pipe.execute()
[True, b'baz']
```

For ease of use, all commands being buffered into the pipeline return
the pipeline object itself. Therefore calls can be chained like:

``` pycon
>>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute()
[True, True, 6]
```

In addition, pipelines can also ensure the buffered commands are
executed atomically as a group. This happens by default. If you want to
disable the atomic nature of a pipeline but still want to buffer
commands, you can turn off transactions.

``` pycon
>>> pipe = r.pipeline(transaction=False)
```

A common issue occurs when requiring atomic transactions but needing to
retrieve values from Redis first for use within the transaction. For
instance, let's assume that the INCR command didn't exist and we need
to build an atomic version of INCR in Python.

The completely naive implementation could GET the value, increment it
in Python, and SET the new value back. However, this is not atomic
because multiple clients could be doing this at the same time, each
getting the same value from GET.

Enter the WATCH command. WATCH provides the ability to monitor one or
more keys prior to starting a transaction.
If any of those keys change prior to the execution of that transaction,
the entire transaction will be canceled and a WatchError will be
raised. To implement our own client-side INCR command, we could do
something like this:

``` pycon
>>> with r.pipeline() as pipe:
...     while True:
...         try:
...             # put a WATCH on the key that holds our sequence value
...             pipe.watch('OUR-SEQUENCE-KEY')
...             # after WATCHing, the pipeline is put into immediate execution
...             # mode until we tell it to start buffering commands again.
...             # this allows us to get the current value of our sequence
...             current_value = pipe.get('OUR-SEQUENCE-KEY')
...             next_value = int(current_value) + 1
...             # now we can put the pipeline back into buffered mode with MULTI
...             pipe.multi()
...             pipe.set('OUR-SEQUENCE-KEY', next_value)
...             # and finally, execute the pipeline (the set command)
...             pipe.execute()
...             # if a WatchError wasn't raised during execution, everything
...             # we just did happened atomically.
...             break
...         except WatchError:
...             # another client must have changed 'OUR-SEQUENCE-KEY' between
...             # the time we started WATCHing it and the pipeline's execution.
...             # our best bet is to just retry.
...             continue
```

Note that, because the Pipeline must bind to a single connection for
the duration of a WATCH, care must be taken to ensure that the
connection is returned to the connection pool by calling the reset()
method. If the Pipeline is used as a context manager (as in the example
above) reset() will be called automatically. Of course you can do this
the manual way by explicitly calling reset():

``` pycon
>>> pipe = r.pipeline()
>>> while True:
...     try:
...         pipe.watch('OUR-SEQUENCE-KEY')
...         ...
...         pipe.execute()
...         break
...     except WatchError:
...         continue
...     finally:
...         pipe.reset()
```

A convenience method named "transaction" exists for handling all the
boilerplate of handling and retrying watch errors.
It takes a callable that should expect a single parameter, a pipeline object, and any number of keys to be WATCHed. Our client-side INCR command above can be written like this, which is much easier to read: ``` pycon >>> def client_side_incr(pipe): ... current_value = pipe.get('OUR-SEQUENCE-KEY') ... next_value = int(current_value) + 1 ... pipe.multi() ... pipe.set('OUR-SEQUENCE-KEY', next_value) >>> >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY') [True] ``` Be sure to call pipe.multi() in the callable passed to Redis.transaction prior to any write commands. ### Publish / Subscribe redis-py includes a PubSub object that subscribes to channels and listens for new messages. Creating a PubSub object is easy. ``` pycon >>> r = redis.Redis(...) >>> p = r.pubsub() ``` Once a PubSub instance is created, channels and patterns can be subscribed to. ``` pycon >>> p.subscribe('my-first-channel', 'my-second-channel', ...) >>> p.psubscribe('my-*', ...) ``` The PubSub instance is now subscribed to those channels/patterns. The subscription confirmations can be seen by reading messages from the PubSub instance. ``` pycon >>> p.get_message() {'pattern': None, 'type': 'subscribe', 'channel': b'my-second-channel', 'data': 1} >>> p.get_message() {'pattern': None, 'type': 'subscribe', 'channel': b'my-first-channel', 'data': 2} >>> p.get_message() {'pattern': None, 'type': 'psubscribe', 'channel': b'my-*', 'data': 3} ``` Every message read from a PubSub instance will be a dictionary with the following keys. - **type**: One of the following: \'subscribe\', \'unsubscribe\', \'psubscribe\', \'punsubscribe\', \'message\', \'pmessage\' - **channel**: The channel \[un\]subscribed to or the channel a message was published to - **pattern**: The pattern that matched a published message\'s channel. Will be None in all cases except for \'pmessage\' types. - **data**: The message data. 
With [un]subscribe messages, this value will be the number of channels and patterns the connection is currently subscribed to. With [p]message messages, this value will be the actual published message.

Let's send a message now.

``` pycon
# the publish method returns the number of matching channel and pattern
# subscriptions. 'my-first-channel' matches both the 'my-first-channel'
# subscription and the 'my-*' pattern subscription, so this message will
# be delivered to 2 channels/patterns
>>> r.publish('my-first-channel', 'some data')
2
>>> p.get_message()
{'channel': b'my-first-channel', 'data': b'some data', 'pattern': None, 'type': 'message'}
>>> p.get_message()
{'channel': b'my-first-channel', 'data': b'some data', 'pattern': b'my-*', 'type': 'pmessage'}
```

Unsubscribing works just like subscribing. If no arguments are passed to [p]unsubscribe, all channels or patterns will be unsubscribed from.

``` pycon
>>> p.unsubscribe()
>>> p.punsubscribe('my-*')
>>> p.get_message()
{'channel': b'my-second-channel', 'data': 2, 'pattern': None, 'type': 'unsubscribe'}
>>> p.get_message()
{'channel': b'my-first-channel', 'data': 1, 'pattern': None, 'type': 'unsubscribe'}
>>> p.get_message()
{'channel': b'my-*', 'data': 0, 'pattern': None, 'type': 'punsubscribe'}
```

redis-py also allows you to register callback functions to handle published messages. Message handlers take a single argument, the message, which is a dictionary just like the examples above. To subscribe to a channel or pattern with a message handler, pass the channel or pattern name as a keyword argument with its value being the callback function.

When a message is read on a channel or pattern with a message handler, the message dictionary is created and passed to the message handler. In this case, a None value is returned from get_message() since the message was already handled.

``` pycon
>>> def my_handler(message):
...     print('MY HANDLER: ', message['data'])
>>> p.subscribe(**{'my-channel': my_handler})
# read the subscribe confirmation message
>>> p.get_message()
{'pattern': None, 'type': 'subscribe', 'channel': b'my-channel', 'data': 1}
>>> r.publish('my-channel', 'awesome data')
1
# for the message handler to work, we need to tell the instance to read data.
# this can be done in several ways (read more below). we'll just use
# the familiar get_message() function for now
>>> message = p.get_message()
MY HANDLER:  awesome data
# note here that the my_handler callback printed the string above.
# `message` is None because the message was handled by our handler.
>>> print(message)
None
```

If your application is not interested in the (sometimes noisy) subscribe/unsubscribe confirmation messages, you can ignore them by passing ignore_subscribe_messages=True to r.pubsub(). This will cause all subscribe/unsubscribe messages to be read, but they won't bubble up to your application.

``` pycon
>>> p = r.pubsub(ignore_subscribe_messages=True)
>>> p.subscribe('my-channel')
>>> p.get_message()  # hides the subscribe message and returns None
>>> r.publish('my-channel', 'my data')
1
>>> p.get_message()
{'channel': b'my-channel', 'data': b'my data', 'pattern': None, 'type': 'message'}
```

There are three different strategies for reading messages.

The examples above have been using pubsub.get_message(). Behind the scenes, get_message() uses the system's 'select' module to quickly poll the connection's socket. If there's data available to be read, get_message() will read it, format the message and return it or pass it to a message handler. If there's no data to be read, get_message() will immediately return None. This makes it trivial to integrate into an existing event loop inside your application.
``` pycon
>>> while True:
>>>     message = p.get_message()
>>>     if message:
>>>         # do something with the message
>>>     time.sleep(0.001)  # be nice to the system :)
```

Older versions of redis-py only read messages with pubsub.listen(). listen() is a generator that blocks until a message is available. If your application doesn't need to do anything else but receive and act on messages received from redis, listen() is an easy way to get up and running.

``` pycon
>>> for message in p.listen():
...     # do something with the message
```

The third option runs an event loop in a separate thread. pubsub.run_in_thread() creates a new thread and starts the event loop. The thread object is returned to the caller of run_in_thread(). The caller can use the thread.stop() method to shut down the event loop and thread. Behind the scenes, this is simply a wrapper around get_message() that runs in a separate thread, essentially creating a tiny non-blocking event loop for you. run_in_thread() takes an optional sleep_time argument. If specified, the event loop will call time.sleep() with the value in each iteration of the loop.

Note: Since we're running in a separate thread, there's no way to handle messages that aren't automatically handled with registered message handlers. Therefore, redis-py prevents you from calling run_in_thread() if you're subscribed to patterns or channels that don't have message handlers attached.

``` pycon
>>> p.subscribe(**{'my-channel': my_handler})
>>> thread = p.run_in_thread(sleep_time=0.001)
# the event loop is now running in the background processing messages
# when it's time to shut it down...
>>> thread.stop()
```

run_in_thread also supports an optional exception handler, which lets you catch exceptions that occur within the worker thread and handle them appropriately. The exception handler will take as arguments the exception itself, the pubsub object, and the worker thread returned by run_in_thread.
``` pycon
>>> p.subscribe(**{'my-channel': my_handler})
>>> def exception_handler(ex, pubsub, thread):
>>>     print(ex)
>>>     thread.stop()
>>>     thread.join(timeout=1.0)
>>>     pubsub.close()
>>> thread = p.run_in_thread(exception_handler=exception_handler)
```

A PubSub object adheres to the same encoding semantics as the client instance it was created from. Any channel or pattern that's unicode will be encoded using the charset specified on the client before being sent to Redis. If the client's decode_responses flag is set to False (the default), the 'channel', 'pattern' and 'data' values in message dictionaries will be byte strings (bytes). If the client's decode_responses is True, then the 'channel', 'pattern' and 'data' values will be automatically decoded to unicode strings using the client's charset.

PubSub objects remember what channels and patterns they are subscribed to. In the event of a disconnection such as a network error or timeout, the PubSub object will re-subscribe to all prior channels and patterns when reconnecting. Messages that were published while the client was disconnected cannot be delivered. When you're finished with a PubSub object, call its .close() method to shut down the connection.

``` pycon
>>> p = r.pubsub()
>>> ...
>>> p.close()
```

The PUBSUB set of subcommands CHANNELS, NUMSUB and NUMPAT are also supported:

``` pycon
>>> r.pubsub_channels()
[b'foo', b'bar']
>>> r.pubsub_numsub('foo', 'bar')
[(b'foo', 9001), (b'bar', 42)]
>>> r.pubsub_numsub('baz')
[(b'baz', 0)]
>>> r.pubsub_numpat()
1204
```

### Monitor

redis-py includes a Monitor object that streams every command processed by the Redis server. Use listen() on the Monitor object to block until a command is received.

``` pycon
>>> r = redis.Redis(...)
>>> with r.monitor() as m:
>>>     for command in m.listen():
>>>         print(command)
```

### Lua Scripting

redis-py supports the EVAL, EVALSHA, and SCRIPT commands.
However, there are a number of edge cases that make these commands tedious to use in real world scenarios. Therefore, redis-py exposes a Script object that makes scripting much easier to use. (RedisClusters have limited support for scripting.) To create a Script instance, use the register_script function on a client instance passing the Lua code as the first argument. register_script returns a Script instance that you can use throughout your code. The following trivial Lua script accepts two parameters: the name of a key and a multiplier value. The script fetches the value stored in the key, multiplies it with the multiplier value and returns the result. ``` pycon >>> r = redis.Redis() >>> lua = """ ... local value = redis.call('GET', KEYS[1]) ... value = tonumber(value) ... return value * ARGV[1]""" >>> multiply = r.register_script(lua) ``` multiply is now a Script instance that is invoked by calling it like a function. Script instances accept the following optional arguments: - **keys**: A list of key names that the script will access. This becomes the KEYS list in Lua. - **args**: A list of argument values. This becomes the ARGV list in Lua. - **client**: A redis-py Client or Pipeline instance that will invoke the script. If client isn\'t specified, the client that initially created the Script instance (the one that register_script was invoked from) will be used. Continuing the example from above: ``` pycon >>> r.set('foo', 2) >>> multiply(keys=['foo'], args=[5]) 10 ``` The value of key \'foo\' is set to 2. When multiply is invoked, the \'foo\' key is passed to the script along with the multiplier value of 5. Lua executes the script and returns the result, 10. Script instances can be executed using a different client instance, even one that points to a completely different Redis server. 
``` pycon
>>> r2 = redis.Redis('redis2.example.com')
>>> r2.set('foo', 3)
>>> multiply(keys=['foo'], args=[5], client=r2)
15
```

The Script object ensures that the Lua script is loaded into Redis's script cache. In the event of a NOSCRIPT error, it will load the script and retry executing it.

Script objects can also be used in pipelines. The pipeline instance should be passed as the client argument when calling the script. Care is taken to ensure that the script is registered in Redis's script cache just prior to pipeline execution.

``` pycon
>>> pipe = r.pipeline()
>>> pipe.set('foo', 5)
>>> multiply(keys=['foo'], args=[5], client=pipe)
>>> pipe.execute()
[True, 25]
```

### Scan Iterators

The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While these commands are fully supported, redis-py also exposes the following methods that return Python iterators for convenience: scan_iter, hscan_iter, sscan_iter and zscan_iter.

``` pycon
>>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')):
...     r.set(key, value)
>>> for key in r.scan_iter():
...     print(key, r.get(key))
A 1
B 2
C 3
```

### Cluster Mode

redis-py now supports cluster mode and provides a client for Redis Cluster. The cluster client is based on Grokzen's [redis-py-cluster](https://github.com/Grokzen/redis-py-cluster), has added bug fixes, and now supersedes that library. Support for these changes is thanks to his contributions.

To learn more about Redis Cluster, see [Redis Cluster specifications](https://redis.io/topics/cluster-spec).

**Create RedisCluster:**

Connecting redis-py to a Redis Cluster instance(s) requires at a minimum a single node for cluster discovery.
There are multiple ways in which a cluster instance can be created:

- Using 'host' and 'port' arguments:

``` pycon
>>> from redis.cluster import RedisCluster as Redis
>>> rc = Redis(host='localhost', port=6379)
>>> print(rc.get_nodes())
[[host=127.0.0.1,port=6379,name=127.0.0.1:6379,server_type=primary,redis_connection=Redis<ConnectionPool<Connection<host=127.0.0.1,port=6379,db=0>>>], [host=127.0.0.1,port=6378,name=127.0.0.1:6378,server_type=primary,redis_connection=Redis<ConnectionPool<Connection<host=127.0.0.1,port=6378,db=0>>>], [host=127.0.0.1,port=6377,name=127.0.0.1:6377,server_type=replica,redis_connection=Redis<ConnectionPool<Connection<host=127.0.0.1,port=6377,db=0>>>]]
```

- Using the Redis URL specification:

``` pycon
>>> from redis.cluster import RedisCluster as Redis
>>> rc = Redis.from_url("redis://localhost:6379/0")
```

- Directly, via the ClusterNode class:

``` pycon
>>> from redis.cluster import RedisCluster as Redis
>>> from redis.cluster import ClusterNode
>>> nodes = [ClusterNode('localhost', 6379), ClusterNode('localhost', 6378)]
>>> rc = Redis(startup_nodes=nodes)
```

When a RedisCluster instance is being created it first attempts to establish a connection to one of the provided startup nodes. If none of the startup nodes are reachable, a 'RedisClusterException' will be thrown. After a connection to one of the cluster's nodes is established, the RedisCluster instance will be initialized with 3 caches: a slots cache which maps each of the 16384 slots to the node/s handling them, a nodes cache that contains ClusterNode objects (name, host, port, redis connection) for all of the cluster's nodes, and a commands cache that contains all the commands supported by the server, retrieved using the Redis 'COMMAND' output. See *RedisCluster specific options* below for more.

A RedisCluster instance can be directly used to execute Redis commands. When a command is being executed through the cluster instance, the target node(s) will be internally determined. When using a key-based command, the target node will be the node that holds the key's slot.
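The slots cache described above can be pictured as a mapping from contiguous slot ranges to their owning nodes. Here is a simplified, self-contained sketch (the layout and lookup helper are hypothetical illustrations, not redis-py's internal structures; real clusters report this layout via the CLUSTER SLOTS command):

```python
from bisect import bisect_right

# Hypothetical slot layout for a 3-primary cluster; each tuple is
# (first_slot, last_slot, node_address).
SLOT_RANGES = [
    (0, 5460, '127.0.0.1:6379'),
    (5461, 10922, '127.0.0.1:6378'),
    (10923, 16383, '127.0.0.1:6377'),
]

def node_for_slot(slot: int) -> str:
    # Binary-search for the range whose first slot is <= slot.
    starts = [first for first, _, _ in SLOT_RANGES]
    first, last, node = SLOT_RANGES[bisect_right(starts, slot) - 1]
    assert first <= slot <= last, 'slot outside known ranges'
    return node

print(node_for_slot(0))      # '127.0.0.1:6379'
print(node_for_slot(16383))  # '127.0.0.1:6377'
```

A lookup like this is what lets the client route a key-based command without an extra round trip to the server.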
Cluster management commands and other commands that are not key-based have a parameter called 'target_nodes' where you can specify which nodes to execute the command on. In the absence of target_nodes, the command will be executed on the default cluster node. As part of cluster instance initialization, the cluster's default node is randomly selected from the cluster's primaries, and will be updated upon reinitialization. Using rc.get_default_node(), you can get the cluster's default node, or you can change it using the 'set_default_node' method.

The 'target_nodes' parameter is explained in the following section, 'Specifying Target Nodes'.

``` pycon
>>> # target-nodes: the node that holds 'foo1's key slot
>>> rc.set('foo1', 'bar1')
>>> # target-nodes: the node that holds 'foo2's key slot
>>> rc.set('foo2', 'bar2')
>>> # target-nodes: the node that holds 'foo1's key slot
>>> print(rc.get('foo1'))
b'bar1'
>>> # target-node: default-node
>>> print(rc.keys())
[b'foo1']
>>> # target-node: default-node
>>> rc.ping()
```

**Specifying Target Nodes:**

As mentioned above, all non key-based RedisCluster commands accept the kwarg parameter 'target_nodes' that specifies the node/nodes that the command should be executed on. The best practice is to specify target nodes using RedisCluster class's node flags: PRIMARIES, REPLICAS, ALL_NODES, RANDOM. When a nodes flag is passed along with a command, it will be internally resolved to the relevant node/s. If the nodes topology of the cluster changes during the execution of a command, the client will be able to resolve the nodes flag again with the new topology and attempt to retry executing the command.
``` pycon >>> from redis.cluster import RedisCluster as Redis >>> # run cluster-meet command on all of the cluster's nodes >>> rc.cluster_meet('127.0.0.1', 6379, target_nodes=Redis.ALL_NODES) >>> # ping all replicas >>> rc.ping(target_nodes=Redis.REPLICAS) >>> # ping a random node >>> rc.ping(target_nodes=Redis.RANDOM) >>> # get the keys from all cluster nodes >>> rc.keys(target_nodes=Redis.ALL_NODES) [b'foo1', b'foo2'] >>> # execute bgsave in all primaries >>> rc.bgsave(Redis.PRIMARIES) ``` You could also pass ClusterNodes directly if you want to execute a command on a specific node / node group that isn't addressed by the nodes flag. However, if the command execution fails due to cluster topology changes, a retry attempt will not be made, since the passed target node/s may no longer be valid, and the relevant cluster or connection error will be returned. ``` pycon >>> node = rc.get_node('localhost', 6379) >>> # Get the keys only for that specific node >>> rc.keys(target_nodes=node) >>> # get Redis info from a subset of primaries >>> subset_primaries = [node for node in rc.get_primaries() if node.port > 6378] >>> rc.info(target_nodes=subset_primaries) ``` In addition, the RedisCluster instance can query the Redis instance of a specific node and execute commands on that node directly. The Redis client, however, does not handle cluster failures and retries. 
``` pycon
>>> cluster_node = rc.get_node(host='localhost', port=6379)
>>> print(cluster_node)
[host=127.0.0.1,port=6379,name=127.0.0.1:6379,server_type=primary,redis_connection=Redis<ConnectionPool<Connection<host=127.0.0.1,port=6379,db=0>>>]
>>> r = cluster_node.redis_connection
>>> r.client_list()
[{'id': '276', 'addr': '127.0.0.1:64108', 'fd': '16', 'name': '', 'age': '0', 'idle': '0', 'flags': 'N', 'db': '0', 'sub': '0', 'psub': '0', 'multi': '-1', 'qbuf': '26', 'qbuf-free': '32742', 'argv-mem': '10', 'obl': '0', 'oll': '0', 'omem': '0', 'tot-mem': '54298', 'events': 'r', 'cmd': 'client', 'user': 'default'}]
>>> # Get the keys only for that specific node
>>> r.keys()
[b'foo1']
```

**Multi-key commands:**

Redis supports multi-key commands in Cluster Mode, such as Set type unions or intersections, mset and mget, as long as the keys all hash to the same slot. By using the RedisCluster client, you can use the known functions (e.g. mget, mset) to perform an atomic multi-key operation. However, you must ensure all keys are mapped to the same slot, otherwise a RedisClusterException will be thrown. Redis Cluster implements a concept called hash tags that can be used in order to force certain keys to be stored in the same hash slot, see [Keys hash tag](https://redis.io/topics/cluster-spec#keys-hash-tags).

You can also use nonatomic for some of the multikey operations, and pass keys that aren't mapped to the same slot. The client will then map the keys to the relevant slots, sending the commands to the slots' node owners. Non-atomic operations batch the keys according to their hash value, and then each batch is sent separately to the slot's owner.
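To see why keys like '{foo}1' and '{foo}2' land in the same slot, the slot calculation from the cluster spec (CRC16-CCITT/XMODEM modulo 16384, with the hash-tag rule) can be sketched in a few lines of pure Python. This is an illustration of the algorithm, not redis-py's internal code:

```python
def crc16(data: bytes) -> int:
    # CRC16-CCITT (XMODEM): poly 0x1021, init 0, no reflection.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: str) -> int:
    # Hash-tag rule: if the key contains a non-empty '{...}' section,
    # only the substring between the first '{' and the next '}' is hashed.
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# '{foo}1' and '{foo}2' both hash just 'foo', so an atomic mget/mset
# spanning them is legal in cluster mode.
assert keyslot('{foo}1') == keyslot('{foo}2') == keyslot('foo')
```

This is why the mset example below can use '{foo}1' and '{foo}2' atomically, while plain 'foo', 'bar' and 'zzz' must go through the nonatomic variants.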
``` pycon
# Atomic operations can be used when all keys are mapped to the same slot
>>> rc.mset({'{foo}1': 'bar1', '{foo}2': 'bar2'})
>>> rc.mget('{foo}1', '{foo}2')
[b'bar1', b'bar2']
# Non-atomic multi-key operations split the keys into different slots
>>> rc.mset_nonatomic({'foo': 'value1', 'bar': 'value2', 'zzz': 'value3'})
>>> rc.mget_nonatomic('foo', 'bar', 'zzz')
[b'value1', b'value2', b'value3']
```

**Cluster PubSub:**

When a ClusterPubSub instance is created without specifying a node, a single node will be transparently chosen for the pubsub connection on the first command execution. The node will be determined by:

1. Hashing the channel name in the request to find its keyslot
2. Selecting a node that handles the keyslot: If read_from_replicas is set to true, a replica can be selected.

*Known limitations with pubsub:*

Pattern subscribe and publish do not currently work properly due to key slots. If we hash a pattern like fo* we will receive a keyslot for that string but there are endless possibilities for channel names based on this pattern - unknowable in advance. This feature is not disabled but the commands are not currently recommended for use. See [redis-py-cluster documentation](https://redis-py-cluster.readthedocs.io/en/stable/pubsub.html) for more.

``` pycon
>>> p1 = rc.pubsub()
# p1 connection will be set to the node that holds 'foo' keyslot
>>> p1.subscribe('foo')
# p2 connection will be set to node 'localhost:6379'
>>> p2 = rc.pubsub(rc.get_node('localhost', 6379))
```

**Read Only Mode**

By default, Redis Cluster always returns a MOVED redirection response on accessing a replica node. You can overcome this limitation and scale read commands by triggering READONLY mode.

To enable READONLY mode pass read_from_replicas=True to the RedisCluster constructor. When set to true, read commands will be assigned between the primary and its replicas in a Round-Robin manner.
READONLY mode can be set at runtime by calling the readonly() method with target_nodes='replicas', and read-write access can be restored by calling the readwrite() method.

``` pycon
>>> from redis.cluster import RedisCluster as Redis
# Use 'debug' log level to print the node that the command is executed on
>>> rc_readonly = Redis(startup_nodes=startup_nodes,
...                     read_from_replicas=True)
>>> rc_readonly.set('{foo}1', 'bar1')
>>> for i in range(0, 4):
...     # Assigns read command to the slot's hosts in a Round-Robin manner
...     rc_readonly.get('{foo}1')
# set command would be directed only to the slot's primary node
>>> rc_readonly.set('{foo}2', 'bar2')
# reset READONLY flag
>>> rc_readonly.readwrite(target_nodes='replicas')
# now the get command would be directed only to the slot's primary node
>>> rc_readonly.get('{foo}1')
```

**Cluster Pipeline**

ClusterPipeline is a subclass of RedisCluster that provides support for Redis pipelines in cluster mode. When calling the execute() command, all the commands are grouped by the node on which they will be executed, and are then executed by the respective nodes in parallel. The pipeline instance will wait for all the nodes to respond before returning the result to the caller. Command responses are returned as a list sorted in the same order in which they were sent. Pipelines can be used to dramatically increase the throughput of Redis Cluster by significantly reducing the number of network round trips between the client and the server.

``` pycon
>>> with rc.pipeline() as pipe:
...     pipe.set('foo', 'value1')
...     pipe.set('bar', 'value2')
...     pipe.get('foo')
...     pipe.get('bar')
...     print(pipe.execute())
[True, True, b'value1', b'value2']
...     pipe.set('foo1', 'bar1').get('foo1').execute()
[True, b'bar1']
```

Please note:

- RedisCluster pipelines currently only support key-based commands.
- The pipeline gets its 'read_from_replicas' value from the cluster's parameter.
  Thus, if read from replicas is enabled in the cluster instance, the pipeline will also direct read commands to replicas.
- The 'transaction' option is NOT supported in cluster-mode. In non-cluster mode, the 'transaction' option is available when executing pipelines. This wraps the pipeline commands with MULTI/EXEC commands, and effectively turns the pipeline commands into a single transaction block. This means that all commands are executed sequentially without any interruptions from other clients. However, in cluster-mode this is not possible, because commands are partitioned according to their respective destination nodes. This means that we can not turn the pipeline commands into one transaction block, because in most cases they are split up into several smaller pipelines.

**Lua Scripting in Cluster Mode**

Cluster mode has limited support for lua scripting.

The following commands are supported, with caveats:

- `EVAL` and `EVALSHA`: The command is sent to the relevant node, depending on the keys (i.e., in `EVAL "