fakeredis-py-2.26.1/.github/CODEOWNERS

* @cunla

fakeredis-py-2.26.1/.github/CONTRIBUTING.md

# Contributing to fakeredis

First off, thanks for taking the time to contribute! ❀️

All types of contributions are encouraged and valued. See the [Table of Contents](#table-of-contents) for different
ways to help and details about how this project handles them. Please make sure to read the relevant section before
making your contribution. It will make it a lot easier for us maintainers and smooth out the experience for all
involved. The community looks forward to your contributions. πŸŽ‰

> And if you like the project, but just don't have time to contribute, that's fine. There are other easy ways to support
> the project and show your appreciation, which we would also be very happy about:
> - Star the project
> - Tweet about it
> - Refer this project in your project's readme
> - Mention the project at local meetups and tell your friends/colleagues

## Table of Contents

- [Code of Conduct](#code-of-conduct)
- [I Have a Question](#i-have-a-question)
- [I Want To Contribute](#i-want-to-contribute)
  - [Reporting Bugs](#reporting-bugs)
  - [Suggesting Enhancements](#suggesting-enhancements)
  - [Your First Code Contribution](#your-first-code-contribution)
  - [Improving The Documentation](#improving-the-documentation)
- [Styleguides](#styleguides)
  - [Commit Messages](#commit-messages)
- [Join The Project Team](#join-the-project-team)

## Code of Conduct

This project and everyone participating in it is governed by the
[fakeredis Code of Conduct](https://github.com/cunla/fakeredis-py/blob/master/CODE_OF_CONDUCT.md).
By participating, you are expected to uphold this code. Please report unacceptable behavior to .

## I Have a Question

> If you want to ask a question, we assume that you have read the
> available [Documentation](https://github.com/cunla/fakeredis-py).

Before you ask a question, it is best to search for existing [Issues](https://github.com/cunla/fakeredis-py/issues)
that might help you. In case you have found a suitable issue and still need clarification, you can write your question
in this issue. It is also advisable to search the internet for answers first.

If you then still feel the need to ask a question and need clarification, we recommend the following:

- Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new).
- Provide as much context as you can about what you're running into.
- Provide project and platform versions (nodejs, npm, etc), depending on what seems relevant.

We will then take care of the issue as soon as possible.

## I Want To Contribute

> ### Legal Notice
> When contributing to this project, you must agree that you have authored 100% of the content, that you have the
> necessary rights to the content and that the content you contribute may be provided under the project license.

### Reporting Bugs

#### Before Submitting a Bug Report

A good bug report shouldn't leave others needing to chase you up for more information.
Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report. Please complete the following steps in advance to help us fix any potential bug as fast as possible. - Make sure that you are using the latest version. - Determine if your bug is really a bug and not an error on your side e.g. using incompatible environment components/versions (Make sure that you have read the [documentation](https://github.com/cunla/fakeredis-py). If you are looking for support, you might want to check [this section](#i-have-a-question)). - To see if other users have experienced (and potentially already solved) the same issue you are having, check if there is not already a bug report existing for your bug or error in the [bug tracker](https://github.com/cunla/fakeredis-py/issues?q=label%3Abug). - Also make sure to search the internet (including Stack Overflow) to see if users outside the GitHub community have discussed the issue. - Collect information about the bug: - Stack trace (Traceback) - OS, Platform and Version (Windows, Linux, macOS, x86, ARM) - Version of the interpreter, compiler, SDK, runtime environment, package manager, depending on what seems relevant. - Possibly your input and the output - Can you reliably reproduce the issue? And can you also reproduce it with older versions? #### How Do I Submit a Good Bug Report? > You must never report security related issues, vulnerabilities or bugs including sensitive information > to the issue tracker, or elsewhere in public. > Instead sensitive bugs must be sent by email to . We use GitHub issues to track bugs and errors. If you run into an issue with the project: - Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.) - Follow the issue template and provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports you should isolate the problem and create a reduced test case. - Provide the information you collected in the previous section. Once it's filed: - The project team will label the issue accordingly. - A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs with the `needs-repro` tag will not be addressed until they are reproduced. - If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as `critical`), and the issue will be left to be [implemented by someone](#your-first-code-contribution). ### Suggesting Enhancements This section guides you through submitting an enhancement suggestion for fakeredis, **including completely new features and minor improvements to existing functionality**. Following these guidelines will help maintainers and the community to understand your suggestion and find related suggestions. #### Before Submitting an Enhancement - Make sure that you are using the latest version. - Read the [documentation](https://github.com/cunla/fakeredis-py) carefully and find out if the functionality is already covered, maybe by an individual configuration. - Perform a [search](https://github.com/cunla/fakeredis-py/issues) to see if the enhancement has already been suggested. 
If it has, add a comment to the existing issue instead of opening a new one. - Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to convince the project's developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on/plugin library. #### How Do I Submit a Good Enhancement Suggestion? Enhancement suggestions are tracked as [GitHub issues](https://github.com/cunla/fakeredis-py/issues). - Use a **clear and descriptive title** for the issue to identify the suggestion. - Provide a **step-by-step description of the suggested enhancement** in as many details as possible. - **Describe the current behavior** and **explain which behavior you expected to see instead** and why. At this point you can also tell which alternatives do not work for you. - You may want to **include screenshots and animated GIFs** which help you demonstrate the steps or point out the part which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux. - **Explain why this enhancement would be useful** to most fakeredis users. You may also want to point out the other projects that solved it better and which could serve as inspiration. ### Your First Code Contribution Unsure where to begin contributing? You can start by looking through [help-wanted issues](https://github.com/cunla/fakeredis-py/labels/help%20wanted). Never contributed to open source before? Here are a couple of friendly tutorials: - - ### Getting started - Create your own fork of the repository - Do the changes in your fork - Setup poetry `pip install poetry` - Let poetry install everything required for a local environment `poetry install` - To run all tests, use: `poetry run pytest -v` - Note: In order to run the tests, a real redis server should be running. The tests are comparing the results of each command between fakeredis and a real redis. - You can use `docker-compose up redis6` or `docker-compose up redis7` to run redis. - Run test with coverage using `poetry run pytest -v --cov=fakeredis --cov-branch` and then you can run `coverage report`. ### Improving The Documentation - Create your own fork of the repository - Do the changes in your fork, probably in `README.md` - Create a pull request with the changes. ## Styleguides ### Commit Messages Taken from [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/) ``` [optional scope]: [optional body] [optional footer(s)] ``` The commit contains the following structural elements, to communicate intent to the consumers of your library: * fix: a commit of the type fix patches a bug in your codebase (this correlates with `PATCH` in Semantic Versioning). * feat: a commit of the type feat introduces a new feature to the codebase (this correlates with `MINOR` in Semantic Versioning). * BREAKING CHANGE: a commit that has a footer BREAKING CHANGE:, or appends a ! after the type/scope, introduces a breaking API change (correlating with MAJOR in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type. 
* types other than `fix:` and `feat:` are allowed, for example @commitlint/config-conventional (based on the Angular convention) recommends `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`, and others. * footers other than `BREAKING CHANGE: ` may be provided and follow a convention similar to [git trailer format](https://git-scm.com/docs/git-interpret-trailers). Additional types are not mandated by the Conventional Commits specification, and have no implicit effect in Semantic Versioning (unless they include a BREAKING CHANGE). A scope may be provided to a commit’s type, to provide additional contextual information and is contained within parenthesis, e.g., feat(parser): add ability to parse arrays. ## Join The Project Team If you wish to be added to the project team as a collaborator, please send a message to daniel@moransoftware.ca with explanation. ## Attribution This guide is based on the **contributing-gen**. [Make your own](https://github.com/bttger/contributing-gen)! fakeredis-py-2.26.1/.github/FUNDING.yml000066400000000000000000000000721470672414000173460ustar00rootroot00000000000000--- tidelift: "pypi/fakeredis" github: cunla polar: cunla fakeredis-py-2.26.1/.github/ISSUE_TEMPLATE/000077500000000000000000000000001470672414000177155ustar00rootroot00000000000000fakeredis-py-2.26.1/.github/ISSUE_TEMPLATE/bug_report.md000066400000000000000000000014121470672414000224050ustar00rootroot00000000000000--- name: Bug report about: Create a report to help us improve title: '' labels: bug assignees: '' --- **Describe the bug** A clear and concise description of what the bug is. Add the code you are trying to run, the stacktrace you are getting and anything else that might help. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Desktop (please complete the following information):** - OS: [e.g. iOS] - python version - redis-py version - full requirements.txt? **Additional context** Add any other context about the problem here. fakeredis-py-2.26.1/.github/ISSUE_TEMPLATE/feature_request.md000066400000000000000000000011341470672414000234410ustar00rootroot00000000000000--- name: Feature request about: Suggest an idea for this project title: '' labels: enhancement assignees: '' --- **Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context or screenshots about the feature request here. 
fakeredis-py-2.26.1/.github/actions/000077500000000000000000000000001470672414000171725ustar00rootroot00000000000000fakeredis-py-2.26.1/.github/actions/test-coverage/000077500000000000000000000000001470672414000217425ustar00rootroot00000000000000fakeredis-py-2.26.1/.github/actions/test-coverage/action.yml000066400000000000000000000026551470672414000237520ustar00rootroot00000000000000name: 'Run test with coverage' description: 'Greet someone' inputs: github-secret: description: 'GITHUB_TOKEN' required: true gist-secret: description: 'gist secret' required: true runs: using: "composite" steps: - name: Test with coverage shell: bash run: | poetry run flake8 fakeredis/ poetry run pytest -v --cov=fakeredis --cov-branch poetry run coverage json echo "COVERAGE=$(jq '.totals.percent_covered_display|tonumber' coverage.json)" >> $GITHUB_ENV - name: Create coverage badge if: ${{ github.event_name == 'push' }} uses: schneegans/dynamic-badges-action@v1.7.0 with: auth: ${{ inputs.gist-secret }} gistID: b756396efb895f0e34558c980f1ca0c7 filename: fakeredis-py.json label: coverage message: ${{ env.COVERAGE }}% color: green - name: Coverage report if: ${{ github.event_name == 'pull_request' }} id: coverage_report shell: bash run: | echo 'REPORT<> $GITHUB_ENV poetry run coverage report >> $GITHUB_ENV echo 'EOF' >> $GITHUB_ENV - uses: mshick/add-pr-comment@v2 if: ${{ github.event_name == 'pull_request' }} with: message: | Coverage report: ``` ${{ env.REPORT }} ``` repo-token: ${{ inputs.github-secret }} allow-repeats: false message-id: coveragefakeredis-py-2.26.1/.github/dependabot.yml000066400000000000000000000012431470672414000203620ustar00rootroot00000000000000# To get started with Dependabot version updates, you'll need to specify which # package ecosystems to update and where the package manifests are located. # Please see the documentation for all configuration options: # https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates version: 2 updates: - package-ecosystem: "github-actions" # See documentation for possible values directory: "/" # Location of package manifests schedule: interval: "weekly" - package-ecosystem: "pip" # See documentation for possible values directory: "/" # Location of package manifests schedule: interval: "weekly" fakeredis-py-2.26.1/.github/release-drafter.yml000066400000000000000000000020651470672414000213250ustar00rootroot00000000000000--- name-template: 'v$RESOLVED_VERSION 🌈' tag-template: 'v$RESOLVED_VERSION' categories: - title: 'πŸš€ Features' labels: - 'feature' - 'enhancement' - title: 'πŸ› Bug Fixes' labels: - 'fix' - 'bugfix' - 'bug' - title: '🧰 Maintenance' label: 'chore' - title: '⬆️ Dependency Updates' label: 'dependencies' change-template: '- $TITLE (#$NUMBER)' change-title-escapes: '\<*_&' autolabeler: - label: 'chore' files: - '*.md' - '.github/*' - label: 'bug' title: - '/fix/i' - label: 'dependencies' files: - 'poetry.lock' version-resolver: major: labels: - 'breaking' minor: labels: - 'feature' - 'enhancement' patch: labels: - 'chore' - 'dependencies' - 'bug' default: patch template: | # Changes $CHANGES ## Contributors We'd like to thank all the contributors who worked on this release! 
$CONTRIBUTORS **Full Changelog**: https://github.com/$OWNER/$REPOSITORY/compare/$PREVIOUS_TAG...v$RESOLVED_VERSION fakeredis-py-2.26.1/.github/workflows/000077500000000000000000000000001470672414000175675ustar00rootroot00000000000000fakeredis-py-2.26.1/.github/workflows/publish-documentation.yml000066400000000000000000000016251470672414000246330ustar00rootroot00000000000000--- name: Generate and publish documentation on: release: types: [published] workflow_dispatch: jobs: publish_documentation: runs-on: ubuntu-latest permissions: contents: write environment: name: pypi url: https://pypi.org/p/fakeredis steps: - uses: actions/checkout@v4 - name: Set up Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Configure Git Credentials run: | git config user.name github-actions[bot] git config user.email 41898282+github-actions[bot]@users.noreply.github.com - name: Publish documentation env: GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} GOOGLE_ANALYTICS_KEY: ${{ secrets.GOOGLE_ANALYTICS_KEY }} run: | pip install -r docs/requirements.txt mkdocs gh-deploy --force mkdocs --version fakeredis-py-2.26.1/.github/workflows/publish-pypi.yml000066400000000000000000000015211470672414000227360ustar00rootroot00000000000000--- name: Upload Python Package to PyPI on: release: types: [published] jobs: publish: runs-on: ubuntu-latest permissions: id-token: write # IMPORTANT: this permission is mandatory for trusted publishing environment: name: pypi url: https://pypi.org/p/fakeredis steps: - uses: actions/checkout@v4 - name: Set up Python uses: actions/setup-python@v5 with: python-version: "3.12" - name: Install dependencies env: PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring run: | python -m pip install --upgrade pip pip install build - name: Build package run: python -m build - name: Publish package to pypi uses: pypa/gh-action-pypi-publish@v1.9.0 with: print-hash: true fakeredis-py-2.26.1/.github/workflows/test-dragonfly.yml000066400000000000000000000037211470672414000232570ustar00rootroot00000000000000--- name: Test Dragonfly on: workflow_dispatch: concurrency: group: dragon-fly-${{ github.workflow }}-${{ github.ref }} cancel-in-progress: true jobs: test: runs-on: ubuntu-latest strategy: fail-fast: false matrix: tests: - "test_json" - "test_mixins" - "test_stack" - "test_connection.py" - "test_asyncredis.py" - "test_general.py" - "test_scan.py" - "test_zadd.py" - "test_translations.py" - "test_sortedset_commands.py" permissions: pull-requests: write services: redis: image: docker.dragonflydb.io/dragonflydb/dragonfly:latest ports: - 6390:6379 options: >- --health-cmd "redis-cli ping" --health-interval 10s --health-timeout 5s --health-retries 5 steps: - uses: actions/checkout@v4 - uses: actions/setup-python@v5 with: cache-dependency-path: poetry.lock python-version: 3.12 - name: Install dependencies env: PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring run: | python -m pip --quiet install poetry echo "$HOME/.poetry/bin" >> $GITHUB_PATH poetry install poetry run pip install "fakeredis[json,bf,cf,lua]" - name: Test without coverage run: | poetry run pytest test/${{ matrix.tests }} \ --html=report-${{ matrix.tests }}.html \ --self-contained-html \ -v - name: Upload Tests Result if: always() uses: actions/upload-artifact@v4 with: name: tests-result-${{ matrix.tests }} path: report-${{ matrix.tests }}.html upload-results: needs: test if: always() runs-on: ubuntu-latest steps: - name: Collect Tests Result uses: actions/upload-artifact/merge@v4 with: delete-merged: 
truefakeredis-py-2.26.1/.github/workflows/test.yml000066400000000000000000000110321470672414000212660ustar00rootroot00000000000000--- name: Unit tests on: push: branches: - master pull_request: branches: - master concurrency: group: ${{ github.workflow }}-${{ github.ref }} cancel-in-progress: true jobs: lint: name: "Code linting (flake8/black)" runs-on: ubuntu-latest steps: - uses: actions/checkout@v4 - uses: actions/setup-python@v5 with: cache-dependency-path: poetry.lock python-version: "3.12" - name: Install dependencies env: PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring run: | python -m pip --quiet install poetry echo "$HOME/.poetry/bin" >> $GITHUB_PATH poetry install - name: Run flake8 shell: bash run: | poetry run flake8 fakeredis/ - name: Run black shell: bash run: | poetry run black --check --verbose fakeredis test - name: Test import run: | poetry build pip install dist/fakeredis-*.tar.gz python -c "import fakeredis" test: name: > redis tests py:${{ matrix.python-version }},${{ matrix.redis-image }}, redis-py:${{ matrix.redis-py }},cov:${{ matrix.coverage }}, extra:${{matrix.extra}} needs: - "lint" runs-on: ubuntu-latest strategy: max-parallel: 8 fail-fast: false matrix: redis-image: [ "redis:6.2.14", "redis:7.0.15", "redis:7.4.0" ] python-version: [ "3.8", "3.9", "3.11", "3.12" ] redis-py: [ "4.3.6", "4.6.0", "5.0.8", "5.2.0" ] include: - python-version: "3.12" redis-image: "valkey/valkey:8.0" redis-py: "5.2.0" extra: "lua" hypothesis: true - python-version: "3.12" redis-image: "redis/redis-stack-server:6.2.6-v15" redis-py: "5.2.0" extra: "json, bf, lua, cf" hypothesis: true - python-version: "3.12" redis-image: "redis/redis-stack-server:7.4.0-v0" redis-py: "5.2.0" extra: "json, bf, lua, cf" coverage: true hypothesis: true permissions: pull-requests: write services: redis: image: ${{ matrix.redis-image }} ports: - 6390:6379 options: >- --health-cmd "redis-cli ping" --health-interval 10s --health-timeout 5s --health-retries 5 outputs: version: ${{ steps.getVersion.outputs.VERSION }} steps: - uses: actions/checkout@v4 - uses: actions/setup-python@v5 with: cache-dependency-path: poetry.lock python-version: ${{ matrix.python-version }} - name: Install dependencies env: PYTHON_KEYRING_BACKEND: keyring.backends.null.Keyring run: | python -m pip --quiet install poetry echo "$HOME/.poetry/bin" >> $GITHUB_PATH poetry install # if python version is below 3.10 and redis-py is 5.0.9 - change it to 5.0.8 if [[ ${{ matrix.python-version }} != "3.10" && ${{ matrix.redis-py }} == "5.0.9" ]]; then poetry run pip install redis==5.0.8 fi poetry run pip install redis==${{ matrix.redis-py }} - name: Install json if: ${{ matrix.extra }} run: | poetry run pip install "fakeredis[${{ matrix.extra }}]" - name: Get version id: getVersion shell: bash run: | VERSION=$(poetry version -s --no-ansi -n) echo "VERSION=$VERSION" >> $GITHUB_OUTPUT - name: Test without coverage if: ${{ !matrix.coverage }} run: | poetry run pytest -v -m "not slow" - name: Test with coverage if: ${{ matrix.coverage }} uses: ./.github/actions/test-coverage with: github-secret: ${{ secrets.GITHUB_TOKEN }} gist-secret: ${{ secrets.GIST_SECRET }} # Prepare a draft release for GitHub Releases page for the manual verification # If accepted and published, release workflow would be triggered update_release_draft: name: "Create or Update release draft" permissions: # write permission is required to create a GitHub release contents: write # write permission is required for auto-labeler # otherwise, read permission is required at least 
pull-requests: write needs: - "test" runs-on: ubuntu-latest steps: - uses: release-drafter/release-drafter@v6 env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} fakeredis-py-2.26.1/.gitignore000066400000000000000000000003561470672414000161660ustar00rootroot00000000000000scripts/*.json fakeredis.egg-info dump.rdb extras/* .tox *.pyc .idea .hypothesis .coverage cover/ venv/ dist/ build/* docker-compose.yml .DS_Store *.iml .venv/ .fleet .mypy_cache .pytest_cache **/__pycache__ scratch* .python-version .env fakeredis-py-2.26.1/.readthedocs.yaml000066400000000000000000000002771470672414000174270ustar00rootroot00000000000000version: 2 build: os: "ubuntu-20.04" tools: python: "3.11" mkdocs: configuration: mkdocs.yml fail_on_warning: false python: install: - requirements: docs/requirements.txt fakeredis-py-2.26.1/CODE_OF_CONDUCT.md000066400000000000000000000115241470672414000167740ustar00rootroot00000000000000# Code of Conduct β€” fakeredis ## Our Pledge In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * The use of sexualized language or imagery, and sexual attention or advances * Trolling, insulting or derogatory comments, and personal or political attacks * Public or private harassment * Publishing others' private information, such as a physical or email address, without their explicit permission * Other conduct which could reasonably be considered inappropriate in a professional setting ## Our Responsibilities Project maintainers are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. ## Scope This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. ## Enforcement Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at . All complaints will be reviewed and investigated promptly and fairly. 
All community leaders are obligated to respect the privacy and security of the reporter of any incident. ## Enforcement Guidelines Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: ### 1. Correction **Community Impact**: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. **Consequence**: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. ### 2. Warning **Community Impact**: A violation through a single incident or series of actions. **Consequence**: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. ### 3. Temporary Ban **Community Impact**: A serious violation of community standards, including sustained inappropriate behavior. **Consequence**: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. ### 4. Permanent Ban **Community Impact**: Demonstrating a pattern a community standards' violation, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. **Consequence**: A permanent ban from any sort of public interaction within the community. ## Attribution This Code of Conduct is adapted from the [Contributor Covenant](https://contributor-covenant.org/), version [1.4](https://www.contributor-covenant.org/version/1/4/code-of-conduct/code_of_conduct.md) and [2.0](https://www.contributor-covenant.org/version/2/0/code_of_conduct/code_of_conduct.md), and was generated by [contributing-gen](https://github.com/bttger/contributing-gen).fakeredis-py-2.26.1/LICENSE000066400000000000000000000030431470672414000151770ustar00rootroot00000000000000BSD 3-Clause License Copyright (c) 2022-, Daniel Moran, 2017-2018, Bruce Merry, 2011 James Saryerwinnie, All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

fakeredis-py-2.26.1/README.md

fakeredis: A fake version of a redis-py
=======================================

[![badge](https://img.shields.io/pypi/v/fakeredis)](https://pypi.org/project/fakeredis/)
[![CI](https://github.com/cunla/fakeredis-py/actions/workflows/test.yml/badge.svg)](https://github.com/cunla/fakeredis-py/actions/workflows/test.yml)
[![badge](https://img.shields.io/endpoint?url=https://gist.githubusercontent.com/cunla/b756396efb895f0e34558c980f1ca0c7/raw/fakeredis-py.json)](https://github.com/cunla/fakeredis-py/actions/workflows/test.yml)
[![badge](https://img.shields.io/pypi/dm/fakeredis)](https://pypi.org/project/fakeredis/)
[![badge](https://img.shields.io/pypi/l/fakeredis)](./LICENSE)
[![Open Source Helpers](https://www.codetriage.com/cunla/fakeredis-py/badges/users.svg)](https://www.codetriage.com/cunla/fakeredis-py)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

--------------------

Documentation is hosted at https://fakeredis.readthedocs.io/

# Intro

FakeRedis is a pure-Python implementation of the Redis key-value store. It enables running tests that require a Redis
server without an actual server (a minimal usage sketch is shown below, after the security policy).

It provides enhanced versions of the redis-py Python bindings for Redis, which provide the following added
functionality:

- A built-in Redis server that is automatically installed, configured and managed when the Redis bindings are used.
- A single server shared by multiple programs, or multiple independent servers.

All the servers provided by FakeRedis support all Redis functionality, including advanced features such as RedisJSON
and geo commands. See the [official documentation](https://fakeredis.readthedocs.io/) for the list of supported
commands.

# Sponsor

fakeredis-py is developed for free. You can support this project by becoming a sponsor using
[this link](https://github.com/sponsors/cunla).

fakeredis-py-2.26.1/SECURITY.md

# Security Policy

## Supported Versions

The following versions of fakeredis are currently supported with security updates:

| Version | Supported          |
|---------|--------------------|
| 2.18.x  | :white_check_mark: |
| 1.10.x  | :white_check_mark: |

## Reporting a Vulnerability

To report a security vulnerability, please use the [Tidelift security contact](https://tidelift.com/security).
Tidelift will coordinate the fix and disclosure.
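As a usage sketch for the README above: fakeredis exposes the same client API as redis-py, so tests can construct a
`FakeRedis` client wherever production code would use `redis.Redis`. The snippet below is a minimal illustration
rather than a file from this repository, and the key and value names are made up.

```python
import fakeredis

# FakeRedis mimics the redis.Redis client API, backed by an in-process fake server.
client = fakeredis.FakeRedis()
client.set("greeting", "hello")
assert client.get("greeting") == b"hello"  # responses are bytes, as with redis-py

# Passing an explicit FakeServer lets several clients share one data store,
# which is useful when the code under test opens more than one connection.
server = fakeredis.FakeServer()
first = fakeredis.FakeRedis(server=server)
second = fakeredis.FakeRedis(server=server)
first.lpush("queue", "job-1")
assert second.llen("queue") == 1
```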
fakeredis-py-2.26.1/docs/000077500000000000000000000000001470672414000151225ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/about/000077500000000000000000000000001470672414000162345ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/about/changelog.md000066400000000000000000000420141470672414000205060ustar00rootroot00000000000000--- title: Change log description: Changelog of all fakeredis releases tags: - changelog - release-notes toc_depth: 2 --- ## v2.26.1 ### πŸ› Bug Fixes - Minor fix: using typing_extensions instead of typing for python 3.7 support #341/#343 ## v2.26.0 ### πŸš€ Features - Support for server-type specific commands #340 - Support for Dragonfly `SADDEX` command #340 ### πŸ› Bug Fixes - Fix bug in bitpos function for the clear bit mode @Diskein #337 ## v2.25.1 ### πŸ› Bug Fixes - Fix missing default values for version/server_type in `FakeBaseConnectionMixin` #334 ## v2.25.0 ### πŸš€ Features - Implement support for hash expiration related commands @j00bar #328 - `HEXPIRE`, `HEXPIREAT`, `HEXPIRETIME`, `HPERSIST`, `HPEXPIRE`, `HPEXPIREAT`, `HPEXPIRETIME`, `HPTTL`, `HTTL`, - Implement support for `SORT_RO` #325, `EXPIRETIME` #323, and `PEXPIRETIME` #324 - Support for creating a tcp server listening to multiple clients - Testing against valkey 8.0 #333 - Improve documentation #332 ### πŸ› Bug Fixes - Replace `typing_extensions` dependency with `typing-extensions` #330 ## v2.24.1 ### πŸ› Bug Fixes - Fix license file added to site-packages #320 ## v2.24.0 ### πŸš€ Features - Support for TIME SERIES commands (no support for align arguments on some commands) #310 ### πŸ› Bug Fixes - fix:xrevrange to work with exclusive ranges @hurlenko #319 ### 🧰 Maintenance - Update all dependencies, particularly pytest to v8 - Add tests against Dragonfly server #318 - Implement decocator `unsupported_server_types` to enable excluding tests from running against certain server types #318 ## v2.23.5 ### πŸ› Bug Fixes - fix:issue with async connection and blocking operations writing responses twice to socket #316 ## v2.23.4 ### πŸ› Bug Fixes - fix:move random seed to HeavyKeeper to avoid issues #315 ### 🧰 Maintenance - Documented how to use fakeredis with FastAPI. @ sjjessop #292 - Using black for linting python code. ## v2.23.3 ### 🧰 Maintenance - docs: Full code for FastAPI integration (#312) ### πŸ› Bug Fixes - Fix ttl for empty stream #313 ## v2.23.2 ### πŸ› Bug Fixes - Fix reading multiple streams with blocking #309 ## v2.23.1 ### πŸ› Bug Fixes - Fix `XREAD` behavior when `COUNT` is not provided but `BLOCKING` is provided #308 ## v2.23.0 ### πŸš€ Features - Support for TDigest commands: `TDIGEST.ADD`,`TDIGEST.BYRANK`,`TDIGEST.BYREVRANK`,`TDIGEST.CDF`, `TDIGEST.CREATE`, `TDIGEST.INFO`, `TDIGEST.MAX`, `TDIGEST.MERGE`, `TDIGEST.MIN`, `TDIGEST.QUANTILE`, `TDIGEST.RANK`, `TDIGEST.RESET`, `TDIGEST.REVRANK`, `TDIGEST.TRIMMED_MEAN`. ### πŸ› Bug Fixes - Import `Self` from typing vs. typing_extension ### 🧰 Maintenance - Update dependencies - Add redis-py 5.0.4 to tests - Update lupa version constraint #306 @noamkush ## v2.22.0 ### πŸš€ Features - Support for setting LUA version from environment variable `FAKEREDIS_LUA_VERSION` #287 - Support for loading LUA binary modules in fakeredis #304 ### πŸ› Bug Fixes - Fix the type hint for the version parameter in the async client #302 - Using LUA 5.1 like real redis #287 - fix: FakeRedisMixin.from_url() return type is really Self. 
@ben-xo #305 ## v2.21.3 ### πŸ› Bug Fixes - Revert behavior of defaulting to share the same server data structure between connections @howamith #303 - Fix type hint for version #302 ## v2.21.2 > Note: Since connection params are defaulted to be the same between async and sync connections, different FakeRedis > connections with the same connection params (or without connection parameters) will share the same server data > structure. ### πŸ› Bug Fixes - Connection params are defaulted to be the same between async and sync connections #297 - `xinfo_stream` raises exception when stream does not exist #296 ## v2.21.1 ### πŸ› Bug Fixes - Support for float timeout values #289 ### 🧰 Maintenance - Fix django cache documentation #286 ## v2.21.0 ### πŸš€ Features - Implement all TOP-K commands (`TOPK.INFO`, `TOPK.LIST`, `TOPK.RESERVE`, `TOPK.ADD`, `TOPK.COUNT`, `TOPK.QUERY`, `TOPK.INCRBY`) #278 - Implement all cuckoo filter commands #276 - Implement all Count-Min Sketch commands #277 ### πŸ› Bug Fixes - Fix XREAD blocking bug #274 #275 - EXAT option does not work #279 ### 🧰 Maintenance - Support for redis-py 5.1.0b3 - Improve `@testtools.run_test_if_redispy_ver` - Refactor bloom filter commands implementation to use [pyprobables](https://github.com/barrust/pyprobables) instead of pybloom_live ## v2.20.1 ### πŸ› Bug Fixes - Fix `XREAD` bug #256 ### 🧰 Maintenance - Testing for python 3.12 - Dependencies update ## v2.20.0 ### πŸš€ Features - Implement `BITFIELD` command #247 - Implement `COMMAND`, `COMMAND INFO`, `COMMAND COUNT` #248 ## v2.19.0 ### πŸš€ Features - Implement Bloom filters commands #239 ### πŸ› Bug Fixes - Fix error on blocking XREADGROUP #237 ## v2.18.1 ### πŸ› Bug Fixes - Fix stream type issue #233 ### 🧰 Maintenance - Add mypy hints to everything - Officially for redis-py 5.0.0, redis 7.2 ## v2.18.0 ### πŸš€ Features - Implement `PUBSUB NUMPAT` #195, `SSUBSCRIBE` #199, `SPUBLISH` #198, `SUNSUBSCRIBE` #200, `PUBSUB SHARDCHANNELS` #196, `PUBSUB SHARDNUMSUB` #197 ### πŸ› Bug Fixes - Fix All aio.FakeRedis instances share the same server #218 ## v2.17.0 ### πŸš€ Features - Implement `LPOS` #207, `LMPOP` #184, and `BLMPOP` #183 - Implement `ZMPOP` #191, `BZMPOP` #186 ### πŸ› Bug Fixes - Fix incorrect error msg for the group not found #210 - fix: use the same server_key within the pipeline when issued watch #213 - issue with ZRANGE and ZRANGESTORE with BYLEX #214 ### Contributors We'd like to thank all the contributors who worked on this release! @OlegZv, @sjjessop ## v2.16.0 ### πŸš€ Features - Implemented support for `JSON.MSET` #174, `JSON.MERGE` #181 ### πŸ› Bug Fixes - Add support for `version` for async FakeRedis #205 ### 🧰 Maintenance - Updated how to test django_rq #204 ## v2.15.0 ### πŸš€ Features - Implemented support for various stream groups commands: - `XGROUP CREATE` #161, `XGROUP DESTROY` #164, `XGROUP SETID` #165, `XGROUP DELCONSUMER` #162, `XGROUP CREATECONSUMER` #163, `XINFO GROUPS` #168, `XINFO CONSUMERS` #168, `XINFO STREAM` #169, `XREADGROUP` #171, `XACK` #157, `XPENDING` #170, `XCLAIM` #159, `XAUTOCLAIM` #158 - Implemented sorted set commands: - `ZRANDMEMBER` #192, `ZDIFF` #187, `ZINTER` #189, `ZUNION` #194, `ZDIFFSTORE` #188, `ZINTERCARD` #190, `ZRANGESTORE` #193 - Implemented list commands: - `BLMOVE` #182, ### 🧰 Maintenance - Improved documentation. 
## v2.14.2 ### πŸ› Bug Fixes - Fix documentation link ## v2.14.1 ### πŸ› Bug Fixes - Fix requirement for packaging.Version #177 ## v2.14.0 ### πŸš€ Features - Implement `HRANDFIELD` #156 - Implement `JSON.MSET` ### 🧰 Maintenance - Improve streams code ## v2.13.0 ### πŸ› Bug Fixes - Fixed xadd timestamp (fixes #151) (#152) - Implement XDEL #153 ### 🧰 Maintenance - Improve test code - Fix reported security issue ## v2.12.1 ### πŸ› Bug Fixes - Add support for `Connection.read_response` arguments used in redis-py 4.5.5 and 5.0.0 - Adding state for scan commands (#99) ### 🧰 Maintenance - Improved documentation (added async sample, etc.) - Add redis-py 5.0.0b3 to GitHub workflow ## v2.12.0 ### πŸš€ Features - Implement `XREAD` #147 ## v2.11.2 ### πŸ› Bug Fixes - Unique FakeServer when no connection params are provided (#142) ## v2.11.1 ### 🧰 Maintenance - Minor fixes supporting multiple connections - Update documentation ## v2.11.0 ### πŸš€ Features - connection parameters awareness: Creating multiple clients with the same connection parameters will result in the same server data structure. ### πŸ› Bug Fixes - Fix creating fakeredis.aioredis using url with user/password (#139) ## v2.10.3 ### 🧰 Maintenance - Support for redis-py 5.0.0b1 - Include tests in sdist (#133) ### πŸ› Bug Fixes - Fix import used in GenericCommandsMixin.randomkey (#135) ## v2.10.2 ### πŸ› Bug Fixes - Fix async_timeout usage on py3.11 (#132) ## v2.10.1 ### πŸ› Bug Fixes - Enable testing django-cache using `FakeConnection`. ## v2.10.0 ### πŸš€ Features - All geo commands implemented ## v2.9.2 ### πŸ› Bug Fixes - Fix bug for `xrange` ## v2.9.1 ### πŸ› Bug Fixes - Fix bug for `xrevrange` ## v2.9.0 ### πŸš€ Features - Implement `XTRIM` - Add support for `MAXLEN`, `MAXID`, `LIMIT` arguments for `XADD` command - Add support for `ZRANGE` arguments for `ZRANGE` command [#127](https://github.com/cunla/fakeredis-py/issues/127) ### 🧰 Maintenance - Relax python version requirement #128 ## v2.8.0 ### πŸš€ Features - Support for redis-py 4.5.0 [#125](https://github.com/cunla/fakeredis-py/issues/125) ### πŸ› Bug Fixes - Fix import error for redis-py v3+ [#121](https://github.com/cunla/fakeredis-py/issues/121) ## v2.7.1 ### πŸ› Bug Fixes - Fix import error for NoneType #527 ## v2.7.0 ### πŸš€ Features - Implement `JSON.ARRINDEX`, `JSON.OBJLEN`, `JSON.OBJKEYS` , `JSON.ARRPOP`, `JSON.ARRTRIM`, `JSON.NUMINCRBY`, `JSON.NUMMULTBY`, `XADD`, `XLEN`, `XRANGE`, `XREVRANGE` ### 🧰 Maintenance - Improve json commands implementation. - Improve commands documentation. ## v2.6.0 ### πŸš€ Features - Implement `JSON.TYPE`, `JSON.ARRLEN` and `JSON.ARRAPPEND` ### πŸ› Bug Fixes - Fix encoding of None (#118) ### 🧰 Maintenance - Start skeleton for streams commands in `streams_mixin.py` and `test_streams_commands.py` - Start migrating documentation to https://fakeredis.readthedocs.io/ **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.5.0...v2.6.0 ## v2.5.0 #### πŸš€ Features - Implement support for `BITPOS` (bitmap command) (#112) #### πŸ› Bug Fixes - Fix json mget when dict is returned (#114) - fix: proper export (#116) #### 🧰 Maintenance - Extract param handling (#113) #### Contributors We'd like to thank all the contributors who worked on this release! 
@Meemaw, @Pichlerdom **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.4.0...v2.5.0 ## v2.4.0 #### πŸš€ Features - Implement `LCS` (#111), `BITOP` (#110) #### πŸ› Bug Fixes - Fix a bug the checking type in scan\_iter (#109) **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.3.0...v2.4.0 ## v2.3.0 #### πŸš€ Features - Implement `GETEX` (#102) - Implement support for `JSON.STRAPPEND` (json command) (#98) - Implement `JSON.STRLEN`, `JSON.TOGGLE` and fix bugs with `JSON.DEL` (#96) - Implement `PUBSUB CHANNELS`, `PUBSUB NUMSUB` #### πŸ› Bug Fixes - ZADD with XX \& GT allows updates with lower scores (#105) **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.2.0...v2.3.0 ## v2.2.0 #### πŸš€ Features - Implement `JSON.CLEAR` (#87) - Support for [redis-py v4.4.0](https://github.com/redis/redis-py/releases/tag/v4.4.0) #### 🧰 Maintenance - Implement script to create issues for missing commands - Remove checking for deprecated redis-py versions in tests **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.1.0...v2.2.0 ## v2.1.0 #### πŸš€ Features - Implement json.mget (#85) - Initial json module support - `JSON.GET`, `JSON.SET` and `JSON.DEL` (#80) #### πŸ› Bug Fixes - fix: add nowait for asyncio disconnect (#76) #### 🧰 Maintenance - Refactor how commands are registered (#79) - Refactor tests from redispy4\_plus (#77) #### Contributors We'd like to thank all the contributors who worked on this release! @hyeongguen-song, @the-wondersmith **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v2.0.0...v2.1.0 ## v2.0.0 #### πŸš€ Breaking changes - Remove support for aioredis separate from redis-py (redis-py versions 4.1.2 and below). (#65) #### πŸš€ Features - Add support for redis-py v4.4rc4 (#73) - Add mypy support (#74) #### 🧰 Maintenance - Separate commands to mixins (#71) - Use release-drafter - Update GitHub workflows **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.10.1...v2.0.0 ## v1.10.1 #### What's Changed * Implement support for `zmscore` by @the-wondersmith in #67 #### New Contributors * @the-wondersmith made their first contribution in #67 **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.10.0...v1.10.1 ## v1.10.0 #### What's Changed * implement `GETDEL` and `SINTERCARD` support in #57 * Test get float-type behavior in #59 * Implement `BZPOPMIN`/`BZPOPMAX` support in #60 **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.4...v1.10.0 ## v1.9.4 ### What's Changed * Separate LUA support to a different file in [#55](https://github.com/cunla/fakeredis-py/pull/55) **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.3...v1.9.4 ## v1.9.3 ### Changed * Removed python-six dependency **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.2...v1.9.3 ## v1.9.2 #### What's Changed * zadd support for GT/LT in [#49](https://github.com/cunla/fakeredis-py/pull/49) * Remove `six` dependency in [#51](https://github.com/cunla/fakeredis-py/pull/51) * Add host to `conn_pool_args` in [#51](https://github.com/cunla/fakeredis-py/pull/51) **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.1...v1.9.2 ## v1.9.1 #### What's Changed * Zrange byscore in [#44](https://github.com/cunla/fakeredis-py/pull/44) * Expire options in [#46](https://github.com/cunla/fakeredis-py/pull/46) **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.9.0...v1.9.1 ## v1.9.0 #### What's Changed * Enable redis7 support in 
[#42](https://github.com/cunla/fakeredis-py/pull/42) **Full Changelog**: https://github.com/cunla/fakeredis-py/compare/v1.8.2...v1.9.0 ## v1.8.2 #### What's Changed * Update the `publish` GitHub action to create an issue on failure by @terencehonles in [#33](https://github.com/dsoftwareinc/fakeredis-py/pull/33) * Add `release draft` job in [#37](https://github.com/dsoftwareinc/fakeredis-py/pull/37) * Fix input and output type of cursors for SCAN commands by @tohin in [#40](https://github.com/dsoftwareinc/fakeredis-py/pull/40) * Fix passing params in argsβ€”Fix #36 in [#41](https://github.com/dsoftwareinc/fakeredis-py/pull/41) #### New Contributors * @tohin made their first contribution in [#40](https://github.com/dsoftwareinc/fakeredis-py/pull/40) **Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.8.1...v1.8.2 ## v1.8.1 #### What's Changed * fix: allow redis 4.3.* by @terencehonles in [#30](https://github.com/dsoftwareinc/fakeredis-py/pull/30) #### New Contributors * @terencehonles made their first contribution in [#30](https://github.com/dsoftwareinc/fakeredis-py/pull/30) **Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.8...v1.8.1 ## v1.8 #### What's Changed * Fix handling url with username and password in [#27](https://github.com/dsoftwareinc/fakeredis-py/pull/27) * Refactor tests in [#28](https://github.com/dsoftwareinc/fakeredis-py/pull/28) * 23 - Re-add dependencies lost during switch to poetry by @xkortex in [#26](https://github.com/dsoftwareinc/fakeredis-py/pull/26) **Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.6.1...v1.8 ## v1.7.6 #### Added * add `IMOVE` operation by @BGroever in [#11](https://github.com/dsoftwareinc/fakeredis-py/pull/11) * Add `SMISMEMBER` command by @OlegZv in [#20](https://github.com/dsoftwareinc/fakeredis-py/pull/20) #### Removed * Remove Python 3.7 by @nzw0301 in [#8](https://github.com/dsoftwareinc/fakeredis-py/pull/8) #### What's Changed * fix: work with `redis.asyncio` by @zhongkechen in [#10](https://github.com/dsoftwareinc/fakeredis-py/pull/10) * Migrate to poetry in [#12](https://github.com/dsoftwareinc/fakeredis-py/pull/12) * Create annotation for redis4+ tests in [#14](https://github.com/dsoftwareinc/fakeredis-py/pull/14) * Make aioredis and lupa optional dependencies in [#16](https://github.com/dsoftwareinc/fakeredis-py/pull/16) * Remove aioredis requirement if redis-py 4.2+ by @ikornaselur in [#19](https://github.com/dsoftwareinc/fakeredis-py/pull/19) #### New Contributors * @nzw0301 made their first contribution in [#8](https://github.com/dsoftwareinc/fakeredis-py/pull/8) * @zhongkechen made their first contribution in [#10](https://github.com/dsoftwareinc/fakeredis-py/pull/10) * @BGroever made their first contribution in [#11](https://github.com/dsoftwareinc/fakeredis-py/pull/11) * @ikornaselur made their first contribution in [#19](https://github.com/dsoftwareinc/fakeredis-py/pull/19) * @OlegZv made their first contribution in [#20](https://github.com/dsoftwareinc/fakeredis-py/pull/20) **Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.5...v1.7.6 #### Thanks to our sponsors this month - @beatgeek ## v1.7.5 #### What's Changed * Fix python3.8 redis4.2+ issue in [#6](https://github.com/dsoftwareinc/fakeredis-py/pull/6) **Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/v1.7.4...v1.7.5 ## v1.7.4 #### What's Changed * Support for python3.8 in [#1](https://github.com/dsoftwareinc/fakeredis-py/pull/1) * Feature/publish 
action in [#2](https://github.com/dsoftwareinc/fakeredis-py/pull/2) **Full Changelog**: https://github.com/dsoftwareinc/fakeredis-py/compare/1.7.1...v1.7.4 fakeredis-py-2.26.1/docs/about/contributing.md000066400000000000000000000263261470672414000212760ustar00rootroot00000000000000Contributing to fakeredis ========================= First off, thanks for taking the time to contribute! ❀️ All types of contributions are encouraged and valued. See the table of contents for different ways to help and details about how this project handles them. Please make sure to read the relevant section before making your contribution. It will make it a lot easier for the maintainers and smooth out the experience for all involved. The community looks forward to your contributions. πŸŽ‰ > And if you like the project, but just don't have time to contribute, that's fine. > > There are other easy ways to support the project and show your appreciation, which we would also be very happy about: > > - Star the project > - Tweet about it > - Refer to this project in your project's readme > - Mention the project at local meetups and tell your friends/colleagues ## Code of Conduct This project and everyone participating in it is governed by the [fakeredis Code of Conduct](https://github.com/cunla/fakeredis-py/blob/master/CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code. Please report unacceptable behavior to . ## I Have a Question > If you want to ask a question, we assume that you have read the > available [Documentation](https://fakeredis.readthedocs.io/). Before you ask a question, it is best to search for existing [Issues](https://github.com/cunla/fakeredis-py/issues) that might help you. In case you have found a suitable issue and still need clarification, you can write your question in this issue. It is also advisable to search the internet for answers first. If you then still feel the need to ask a question and need clarification, we recommend the following: - Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new). - Provide as much context as you can about what you're running into. - Provide project and platform versions (nodejs, npm, etc), depending on what seems relevant. We will then take care of the issue as soon as possible. ## I Want To Contribute > ### Legal Notice > When contributing to this project, you must agree that you have authored 100% of the content, that you have the > necessary rights to the content and that the content you contribute may be provided under the project license. ## Reporting Bugs ### Before Submitting a Bug Report A good bug report shouldn't leave others needing to chase you up for more information. Therefore, we ask you to investigate carefully, collect information and describe the issue in detail in your report. Please complete the following steps in advance to help us fix any potential bug as fast as possible. - Make sure that you are using the latest version. - Determine if your bug is really a bug and not an error on your side, e.g., using incompatible environment components/versions (Make sure that you have read the [documentation](https://github.com/cunla/fakeredis-py). If you are looking for support, you might want to check [this section](#i-have-a-question)). - To see if other users have experienced (and potentially already solved) the same issue you are having, check if there is not already a bug report existing for your bug or error in the [bug tracker](https://github.com/cunla/fakeredis-py/issues?q=label%3Abug). 
- Also make sure to search the internet (including Stack Overflow) to see if users outside the GitHub community have discussed the issue. - Collect information about the bug: - Stack trace (Traceback) - OS, Platform and Version (Windows, Linux, macOS, x86, ARM) - Version of the interpreter, compiler, SDK, runtime environment, package manager, depending on what seems relevant. - Possibly your input and the output - Can you reliably reproduce the issue? And can you also reproduce it with older versions? ### How Do I Submit a Good Bug Report? > You must never report security related issues, vulnerabilities or bugs, including sensitive information > to the issue tracker, or elsewhere in public. > Instead, sensitive bugs must be sent by email to . We use GitHub issues to track bugs and errors. If you run into an issue with the project: - Open an [Issue](https://github.com/cunla/fakeredis-py/issues/new). (Since we can't be sure at this point whether it is a bug or not, we ask you not to talk about a bug yet and not to label the issue.) - Follow the issue template and provide as much context as possible and describe the *reproduction steps* that someone else can follow to recreate the issue on their own. This usually includes your code. For good bug reports, you should isolate the problem and create a reduced test case. - Provide the information you collected in the previous section. Once it's filed: - The project team will label the issue accordingly. - A team member will try to reproduce the issue with your provided steps. If there are no reproduction steps or no obvious way to reproduce the issue, the team will ask you for those steps and mark the issue as `needs-repro`. Bugs with the `needs-repro` tag will not be addressed until they are reproduced. - If the team is able to reproduce the issue, it will be marked `needs-fix`, as well as possibly other tags (such as `critical`), and the issue will be left to be [implemented by someone](#your-first-code-contribution). ## Suggesting Enhancements This section guides you through submitting an enhancement suggestion for fakeredis, **including completely new features and minor improvements to existing functionality**. Following these guidelines will help maintainers and the community to understand your suggestion and find related suggestions. ### Before Submitting an Enhancement - Make sure that you are using the latest version. - Read the [documentation](https://github.com/cunla/fakeredis-py) carefully and find out if the functionality is already covered, maybe by an individual configuration. - Perform a [search](https://github.com/cunla/fakeredis-py/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one. - Find out whether your idea fits with the scope and aims of the project. It's up to you to make a strong case to convince the project's developers of the merits of this feature. Keep in mind that we want features that will be useful to the majority of our users and not just a small subset. If you're just targeting a minority of users, consider writing an add-on/plugin library. ### How Do I Submit a Good Enhancement Suggestion? Enhancement suggestions are tracked as [GitHub issues](https://github.com/cunla/fakeredis-py/issues). - Use a **clear and descriptive title** for the issue to identify the suggestion. - Provide a **step-by-step description of the suggested enhancement** in as many details as possible. 
- **Describe the current behavior** and **explain which behavior you expected to see instead** and why. At this point you can also tell which alternatives do not work for you. - You may want to **include screenshots and animated GIFs** which help you demonstrate the steps or point out the part which the suggestion is related to. You can use [this tool](https://www.cockos.com/licecap/) to record GIFs on macOS and Windows, and [this tool](https://github.com/colinkeenan/silentcast) or [this tool](https://github.com/GNOME/byzanz) on Linux. - **Explain why this enhancement would be useful** to most fakeredis users. You may also want to point out the other projects that solved it better and which could serve as inspiration. ## Your First Code Contribution Unsure where to begin contributing? You can start by looking through [help-wanted issues](https://github.com/cunla/fakeredis-py/labels/help%20wanted). Never contributed to open source before? Here are a couple of friendly tutorials: - - ### Getting started - Create your own fork of the repository - Do the changes in your fork - Setup poetry `pip install poetry` - Let poetry install everything required for a local environment `poetry install` - To run all tests, use: `poetry run pytest -v` - Note: In order to run the tests, a real redis server should be running. The tests are comparing the results of each command between fakeredis and a real redis. - You can use `docker-compose up redis6` or `docker-compose up redis7` to run redis. - Run test with coverage using `poetry run pytest -v --cov=fakeredis --cov-branch` and then you can run `coverage report`. ## Improving The Documentation - Create your own fork of the repository. - Do the changes in your fork, probably under the `docs/` folder. - Create a pull request with the changes. ## Style guides ### Commit Messages Taken from [conventional commits](https://www.conventionalcommits.org/en/v1.0.0/) ``` [optional scope]: [optional body] [optional footer(s)] ``` The commit contains the following structural elements to communicate intent to the consumers of your library: * `fix:` a commit of the type fix patches a bug in your codebase (this correlates with `PATCH` in Semantic Versioning). * `feat:` a commit of the type feat introduces a new feature to the codebase (this correlates with `MINOR` in Semantic Versioning). * `BREAKING CHANGE:` a commit that has a footer BREAKING CHANGE:, or appends a ! after the type/scope, introduces a breaking API change (correlating with MAJOR in Semantic Versioning). A BREAKING CHANGE can be part of commits of any type. * types other than `fix:` and `feat:` are allowed, for example, `@commitlint/config-conventional` (based on the Angular convention) recommends `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`, and others. * footers other than `BREAKING CHANGE: ` may be provided and follow a convention similar to [git trailer format](https://git-scm.com/docs/git-interpret-trailers). Additional types are not mandated by the Conventional Commits specification, and have no implicit effect in Semantic Versioning (unless they include a BREAKING CHANGE). A scope may be provided to a commit’s type, to provide additional contextual information and is contained within parenthesis, e.g., feat(parser): add ability to parse arrays. ## Join The Project Team If you wish to be added to the project team as a collaborator, please send a message to daniel@moransoftware.ca with explanation. 
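As a concrete illustration of the commit-message convention described in the Commit Messages section above, a hypothetical commit (scope and wording invented purely for illustration) might look like this:

```
feat(parser): add ability to parse arrays

Explain in the body what the change does and why it is needed.

BREAKING CHANGE: describe any breaking API change here, if there is one.
```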
fakeredis-py-2.26.1/docs/about/license.md000066400000000000000000000031051470672414000201770ustar00rootroot00000000000000# License The legal stuff. --- BSD 3-Clause License Copyright (c) 2022-, Daniel Moran, 2017-2018, Bruce Merry, 2011 James Saryerwinnie, All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. fakeredis-py-2.26.1/docs/ads.txt000066400000000000000000000000731470672414000164320ustar00rootroot00000000000000google.com, pub-2802331499006697, DIRECT, f08c47fec0942fa0 fakeredis-py-2.26.1/docs/guides/000077500000000000000000000000001470672414000164025ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/guides/implement-command.md000066400000000000000000000110421470672414000223300ustar00rootroot00000000000000# Implementing support for a command Creating a new command support should be done in the `FakeSocket` class (in `_fakesocket.py`) by creating the method and using `@command` decorator (which should be the command syntax, you can use existing samples on the file). For example: ```python class FakeSocket(BaseFakeSocket, FakeLuaSocket): # ... @command(name='zscore', fixed=(Key(ZSet), bytes), repeat=(), flags=[]) def zscore(self, key, member): try: return self._encodefloat(key.value[member], False) except KeyError: return None ``` ## Parsing command arguments The `extract_args` method should help to extract arguments from `*args`. It extracts from actual arguments which arguments exist and their value if relevant. Parameters `extract_args` expect: - `actual_args` The actual arguments to parse - `expected` Arguments to look for, see below explanation. - `error_on_unexpected` (default: True) Should an error be raised when actual_args contain an unexpected argument? - `left_from_first_unexpected` (default: True) Once reaching an unexpected argument in actual_args, Should parsing stop? It returns two lists: - List of values for expected arguments. - List of remaining args. ### Expected argument structure: - If expected argument has only a name, it will be parsed as boolean (Whether it exists in actual `*args` or not). 
- In order to parse a numerical value following the expected argument, a `+` prefix is needed, e.g., `+px` will parse `args=('px', '1')` as `px=1`
- In order to parse a string value following the expected argument, a `*` prefix is needed, e.g., `*type` will parse `args=('type', 'number')` as `type='number'`
- You can have more than one `+`/`*`, e.g., `++limit` will parse `args=('limit','1','10')` as `limit=(1,10)`

## How to use `@command` decorator

The `@command` decorator registers the method as a redis command and defines the accepted format for it.
It creates a `Signature` instance for the command. Whenever the command is triggered, the `Signature.apply(..)` method is called to check the validity of the syntax and analyze the command arguments.

By default, it takes the name of the method as the command name. If the method implements a subcommand (e.g., `SCRIPT LOAD`), a Redis module command (e.g., `JSON.GET`), or a python reserved word that cannot be used as a method name (e.g., `EXEC`), then you can supply the name explicitly via the name parameter.

If the implemented command requires certain arguments, they can be supplied in the first parameter as a tuple. When the command is received through the socket, the bytes are converted to the argument types supplied, or remain as `bytes`.

Argument types (all in `_commands.py`):

- `Key(KeyType)` - Fetches the key from the DB and validates that its value is of `KeyType` (if `KeyType` is supplied). It generates a `CommandItem` from it, which provides access to the database value.
- `Int` - Decodes the `bytes` to `int` and vice versa.
- `DbIndex`/`BitOffset`/`BitValue`/`Timeout` - Same behavior as `Int`, but with different error messages when encoding/decoding fails.
- `Hash` - A dictionary, usually describing the type of value stored at a key, as in `Key(Hash)`.
- `Float` - Encodes/decodes `bytes` <-> `float`.
- `SortFloat` - Similar to `Float`, with different error messages.
- `ScoreTest` - Argument converter for sorted set score endpoints.
- `StringTest` - Argument converter for sorted set endpoints (lex).
- `ZSet` - Sorted Set.

## Implement a test for it

There are multiple test scenarios, with different versions of redis server, redis-py, etc. The tests not only assert the validity of the output but also run the same test against a real redis-server and compare the output to the real server's output.

- Create tests in the relevant test file.
- If support for the command was introduced in a certain version of redis-py (see [redis-py release notes](https://github.com/redis/redis-py/releases/tag/v4.3.4)), you can use the decorator `@testtools.run_test_if_redispy_ver` on your tests. Example:

```python
@testtools.run_test_if_redispy_ver('gte', '4.2.0')  # This will run for redis-py 4.2.0 or above.
def test_expire_should_not_expire__when_no_expire_is_set(r):
    r.set('foo', 'bar')
    assert r.get('foo') == b'bar'
    assert r.expire('foo', 1, xx=True) == 0
```

## Updating documentation

Lastly, run the script below from the root of the project to regenerate the documentation for supported and unsupported commands:

```bash
python scripts/generate_supported_commands_doc.py
```

Include the changes in the `docs/` directory in your pull request.
fakeredis-py-2.26.1/docs/guides/test-case.md000066400000000000000000000023721470672414000206200ustar00rootroot00000000000000
# Write a new test case

There are multiple test scenarios, with different versions of python, redis-py and redis server, etc.
The tests assert the validity of the expected output not only against FakeRedis but also against a real redis server. That way, parity between real Redis and FakeRedis is ensured.

To write a new test case for a command:

- Determine which mixin the command belongs to and the test file for the mixin (e.g., `string_mixin.py` => `test_string_commands.py`).
- Tests should support python 3.7 and above.
- Determine when support for the command was introduced:
    - To limit the redis-server versions it will run on, use `@pytest.mark.max_server(version)` and `@pytest.mark.min_server(version)`.
    - To limit the redis-py version, use `@run_test_if_redispy_ver('gte', version)` (you can use `ge`/`gte`/`lte`/`lt`/`eq`/`ne`).
- pytest will inject a redis connection into the argument `r` of the test.

A sample test that runs only for redis-py v4.2.0 and above, against redis-server 7.0 and above:

```python
@pytest.mark.min_server('7')
@testtools.run_test_if_redispy_ver('gte', '4.2.0')
def test_expire_should_not_expire__when_no_expire_is_set(r):
    r.set('foo', 'bar')
    assert r.get('foo') == b'bar'
    assert r.expire('foo', 1, xx=True) == 0
```
fakeredis-py-2.26.1/docs/index.md000066400000000000000000000275571470672414000165670ustar00rootroot00000000000000---
toc:
  toc_depth: 3
---

fakeredis: A python implementation of redis server
=================================================

FakeRedis is a pure-Python implementation of the Redis key-value store.

It enables running tests requiring redis server without an actual server.

It provides enhanced versions of the redis-py Python bindings for Redis, adding the following functionality:

- A built-in Redis server that is automatically installed, configured and managed when the Redis bindings are used.
- A single server shared by multiple programs or multiple independent servers.

All the servers provided by FakeRedis support all Redis functionality including advanced features such as RedisJson and GeoCommands.

For a list of supported/unsupported redis commands, see [Supported commands][6].

## Installation

To install fakeredis-py, simply:

```bash
pip install fakeredis                       ## No additional modules support
pip install fakeredis[lua]                  ## Support for LUA scripts
pip install fakeredis[json]                 ## Support for RedisJSON commands
pip install fakeredis[probabilistic,json]   ## Support for RedisJSON and BloomFilter/CuckooFilter/CountMinSketch commands
```

## How to Use

### Start a server on a thread

It is possible to start a server on a thread and connect to it as you would connect to a real redis server.

```python
from threading import Thread
from fakeredis import TcpFakeServer

server_address = ("127.0.0.1", 6379)
server = TcpFakeServer(server_address, server_type="redis")
t = Thread(target=server.serve_forever, daemon=True)
t.start()

import redis

r = redis.Redis(host=server_address[0], port=server_address[1])
r.set("foo", "bar")
assert r.get("foo") == b"bar"
```

### Use as a pytest fixture

```python
import pytest


@pytest.fixture
def redis_client(request):
    import fakeredis
    redis_client = fakeredis.FakeRedis()
    return redis_client
```

### General usage

FakeRedis can imitate Redis server version 6.x or 7.x, [Valkey server](./valkey-support), and [dragonfly server][dragonfly]. Redis version 7 is used by default.

The intent is for fakeredis to act as though you're talking to a real redis server. It does this by storing the state internally.
For example: ```pycon >>> import fakeredis >>> r = fakeredis.FakeStrictRedis(server_type="redis") >>> r.set('foo', 'bar') True >>> r.get('foo') 'bar' >>> r.lpush('bar', 1) 1 >>> r.lpush('bar', 2) 2 >>> r.lrange('bar', 0, -1) [2, 1] ``` The state is stored in an instance of `FakeServer`. If one is not provided at construction, a new instance is automatically created for you, but you can explicitly create one to share state: ```pycon >>> import fakeredis >>> server = fakeredis.FakeServer() >>> r1 = fakeredis.FakeStrictRedis(server=server) >>> r1.set('foo', 'bar') True >>> r2 = fakeredis.FakeStrictRedis(server=server) >>> r2.get('foo') 'bar' >>> r2.set('bar', 'baz') True >>> r1.get('bar') 'baz' >>> r2.get('bar') 'baz' ``` It is also possible to mock connection errors, so you can effectively test your error handling. Set the connected attribute of the server to `False` after initialization. ```pycon >>> import fakeredis >>> server = fakeredis.FakeServer() >>> server.connected = False >>> r = fakeredis.FakeStrictRedis(server=server) >>> r.set('foo', 'bar') ConnectionError: FakeRedis is emulating a connection error. >>> server.connected = True >>> r.set('foo', 'bar') True ``` Fakeredis implements the same interface as `redis-py`, the popular redis client for python, and models the responses of redis 6.x or 7.x. ### async Redis Async redis client is supported. Instead of using `fakeredis.FakeRedis`, use `fakeredis.aioredis.FakeRedis`. ```pycon >>> from fakeredis import FakeAsyncRedis >>> r1 = FakeAsyncRedis() >>> await r1.set('foo', 'bar') True >>> await r1.get('foo') 'bar' ``` ### Use to test django cache Update your cache settings: ```python from fakeredis import FakeConnection CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.redis.RedisCache', 'LOCATION': '...', 'OPTIONS': { 'connection_class': FakeConnection } } } ``` For [django-redis][8] library, use the following `OPTIONS`: ``` 'OPTIONS': { 'CONNECTION_POOL_KWARGS': {'connection_class': FakeConnection}, } ``` You can use django [`@override_settings` decorator][9] ### Use to test django-rq There is a need to override `django_rq.queues.get_redis_connection` with a method returning the same connection. ```python import django_rq # RQ # Configuration to pretend there is a Redis service available. # Set up the connection before RQ Django reads the settings. # The connection must be the same because in fakeredis connections # do not share the state. Therefore, we define a singleton object to reuse it. def get_fake_connection(config: Dict[str, Any], strict: bool): from fakeredis import FakeRedis, FakeStrictRedis redis_cls = FakeStrictRedis if strict else FakeRedis if "URL" in config: return redis_cls.from_url( config["URL"], db=config.get("DB"), ) return redis_cls( host=config["HOST"], port=config["PORT"], db=config.get("DB", 0), username=config.get("USERNAME", None), password=config.get("PASSWORD"), ) django_rq.queues.get_redis_connection = get_fake_connection ``` ### Use to test FastAPI See info on [this issue][7] If you're using FastAPI dependency injection to provide a Redis connection, then you can override that dependency for testing. 
Your FastAPI application main.py: ```python from typing import Annotated, Any, AsyncIterator from redis import asyncio as redis from fastapi import Depends, FastAPI app = FastAPI() async def get_redis() -> AsyncIterator[redis.Redis]: # Code to handle creating a redis connection goes here, for example async with redis.from_url("redis://localhost:6379") as client: # type: ignore[no-untyped-call] yield client @app.get("/") async def root(redis_client: Annotated[redis.Redis, Depends(get_redis)]) -> Any: # Code that does something with redis goes here, for example: await redis_client.set("foo", "bar") return {"redis_keys": await redis_client.keys()} ``` Assuming you use pytest-asyncio, your test file (or you can put the fixtures in conftest.py as usual): ```python from typing import AsyncIterator from unittest import mock import fakeredis import httpx import pytest import pytest_asyncio from redis import asyncio as redis from main import app, get_redis @pytest_asyncio.fixture async def redis_client() -> AsyncIterator[redis.Redis]: async with fakeredis.FakeAsyncRedis() as client: yield client @pytest_asyncio.fixture async def app_client(redis_client: redis.Redis) -> AsyncIterator[httpx.AsyncClient]: async def get_redis_override() -> redis.Redis: return redis_client transport = httpx.ASGITransport(app=app) # type: ignore[arg-type] # https://github.com/encode/httpx/issues/3111 async with httpx.AsyncClient(transport=transport, base_url="http://test") as app_client: with mock.patch.dict(app.dependency_overrides, {get_redis: get_redis_override}): yield app_client @pytest.mark.asyncio async def test_app(app_client: httpx.AsyncClient) -> None: response = await app_client.get("/") assert response.json()["redis_keys"] == ["foo"] ``` ## Known Limitations Apart from unimplemented commands, there are a number of cases where fakeredis won't give identical results to real redis. The following are differences that are unlikely to ever be fixed; there are also differences that are fixable (such as commands that do not support all features) which should be filed as bugs in GitHub. - Hyperloglogs are implemented using sets underneath. This means that the `type` command will return the wrong answer, you can't use `get` to retrieve the encoded value, and counts will be slightly different (they will in fact be exact). - When a command has multiple error conditions, such as operating on a key of the wrong type and an integer argument is not well-formed, the choice of error to return may not match redis. - The `incrbyfloat` and `hincrbyfloat` commands in redis use the C `long double` type, which typically has more precision than Python's `float` type. - Redis makes guarantees about the order in which clients blocked on blocking commands are woken up. Fakeredis does not honor these guarantees. - Where redis contains bugs, fakeredis generally does not try to provide exact bug compatibility. It's not practical for fakeredis to try to match the set of bugs in your specific version of redis. - There are a number of cases where the behavior of redis is undefined, such as the order of elements returned by set and hash commands. Fakeredis will generally not produce the same results, and in Python versions before 3.6 may produce different results each time the process is re-run. - SCAN/ZSCAN/HSCAN/SSCAN will not necessarily iterate all items if items are deleted or renamed during iteration. They also won't necessarily iterate in the same chunk sizes or the same order as redis. 
This is aligned with redis behavior as can be seen in tests `test_scan_delete_key_while_scanning_should_not_returns_it_in_scan`. - DUMP/RESTORE will not return or expect data in the RDB format. Instead, the `pickle` module is used to mimic an opaque and non-standard format. **WARNING**: Do not use RESTORE with untrusted data, as a malicious pickle can execute arbitrary code. ## Local development environment To ensure parity with the real redis, there are a set of integration tests that mirror the unittests. For every unittest that is written, the same test is run against a real redis instance using a real redis-py client instance. To run these tests, you must have a redis server running on localhost, port 6379 (the default settings). **WARNING**: the tests will completely wipe your database! First install poetry if you don't have it, and then install all the dependencies: ```bash pip install poetry poetry install ``` To run all the tests: ```bash poetry run pytest -v ``` If you only want to run tests against fake redis, without a real redis:: ```bash poetry run pytest -m fake ``` Because this module is attempting to provide the same interface as `redis-py`, the python bindings to redis, a reasonable way to test this to take each unittest and run it against a real redis server. Fakeredis and the real redis server should give the same result. To run tests against a real redis instance instead: ```bash poetry run pytest -m real ``` If redis is not running, and you try to run tests against a real redis server, these tests will have a result of 's' for skipped. There are some tests that test redis blocking operations that are somewhat slow. If you want to skip these tests during day-to-day development, they have all been tagged as 'slow' so you can skip them by running: ```bash poetry run pytest -m "not slow" ``` ## Contributing Contributions are welcome. You can contribute in many ways: - Report bugs you found. - Check out issues with [`Help wanted`][5] label. - Implement commands which are not yet implemented. Follow the [guide how to implement a new command][1]. - Write additional test cases. Follow the [guide how to write a test-case][4]. Please follow coding standards listed in the [contributing guide][3]. ## Sponsor fakeredis-py is developed for free. You can support this project by becoming a sponsor using [this link][2]. [1]:./guides/implement-command/ [2]:https://github.com/sponsors/cunla [3]:./about/contributing.md [4]:./guides/test-case/ [5]:https://github.com/cunla/fakeredis-py/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22 [6]:./redis-commands/ [7]:https://github.com/cunla/fakeredis-py/issues/292 [8]:https://github.com/jazzband/django-redis [9]:https://docs.djangoproject.com/en/4.1/topics/testing/tools/#django.test.override_settings [dragonfly]:https://www.dragonflydb.io/ fakeredis-py-2.26.1/docs/overrides/000077500000000000000000000000001470672414000171245ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/overrides/main.html000066400000000000000000000004411470672414000207350ustar00rootroot00000000000000{% extends "base.html" %} {% block extrahead %} {% endblock %}fakeredis-py-2.26.1/docs/overrides/partials/000077500000000000000000000000001470672414000207435ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/overrides/partials/toc-item.html000066400000000000000000000011771470672414000233600ustar00rootroot00000000000000
  • {{ toc_item.title }} {% if toc_item.children %} {% endif %}
  • fakeredis-py-2.26.1/docs/redis-commands/000077500000000000000000000000001470672414000200275ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/redis-commands/Redis/000077500000000000000000000000001470672414000210755ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/redis-commands/Redis/BITMAP.md000066400000000000000000000013171470672414000223750ustar00rootroot00000000000000# Redis `bitmap` commands (6/6 implemented) ## [BITCOUNT](https://redis.io/commands/bitcount/) Counts the number of set bits (population counting) in a string. ## [BITFIELD](https://redis.io/commands/bitfield/) Performs arbitrary bitfield integer operations on strings. ## [BITOP](https://redis.io/commands/bitop/) Performs bitwise operations on multiple strings, and stores the result. ## [BITPOS](https://redis.io/commands/bitpos/) Finds the first set (1) or clear (0) bit in a string. ## [GETBIT](https://redis.io/commands/getbit/) Returns a bit value by offset. ## [SETBIT](https://redis.io/commands/setbit/) Sets or clears the bit at offset of the string value. Creates the key if it doesn't exist. fakeredis-py-2.26.1/docs/redis-commands/Redis/CLUSTER.md000066400000000000000000000113561470672414000225460ustar00rootroot00000000000000 ## Unsupported cluster commands > To implement support for a command, see [here](/guides/implement-command/) #### [ASKING](https://redis.io/commands/asking/) (not implemented) Signals that a cluster client is following an -ASK redirect. #### [CLUSTER](https://redis.io/commands/cluster/) (not implemented) A container for Redis Cluster commands. #### [CLUSTER ADDSLOTS](https://redis.io/commands/cluster-addslots/) (not implemented) Assigns new hash slots to a node. #### [CLUSTER ADDSLOTSRANGE](https://redis.io/commands/cluster-addslotsrange/) (not implemented) Assigns new hash slot ranges to a node. #### [CLUSTER BUMPEPOCH](https://redis.io/commands/cluster-bumpepoch/) (not implemented) Advances the cluster config epoch. #### [CLUSTER COUNT-FAILURE-REPORTS](https://redis.io/commands/cluster-count-failure-reports/) (not implemented) Returns the number of active failure reports active for a node. #### [CLUSTER COUNTKEYSINSLOT](https://redis.io/commands/cluster-countkeysinslot/) (not implemented) Returns the number of keys in a hash slot. #### [CLUSTER DELSLOTS](https://redis.io/commands/cluster-delslots/) (not implemented) Sets hash slots as unbound for a node. #### [CLUSTER DELSLOTSRANGE](https://redis.io/commands/cluster-delslotsrange/) (not implemented) Sets hash slot ranges as unbound for a node. #### [CLUSTER FAILOVER](https://redis.io/commands/cluster-failover/) (not implemented) Forces a replica to perform a manual failover of its master. #### [CLUSTER FLUSHSLOTS](https://redis.io/commands/cluster-flushslots/) (not implemented) Deletes all slots information from a node. #### [CLUSTER FORGET](https://redis.io/commands/cluster-forget/) (not implemented) Removes a node from the nodes table. #### [CLUSTER GETKEYSINSLOT](https://redis.io/commands/cluster-getkeysinslot/) (not implemented) Returns the key names in a hash slot. #### [CLUSTER HELP](https://redis.io/commands/cluster-help/) (not implemented) Returns helpful text about the different subcommands. #### [CLUSTER INFO](https://redis.io/commands/cluster-info/) (not implemented) Returns information about the state of a node. #### [CLUSTER KEYSLOT](https://redis.io/commands/cluster-keyslot/) (not implemented) Returns the hash slot for a key. 
#### [CLUSTER LINKS](https://redis.io/commands/cluster-links/) (not implemented) Returns a list of all TCP links to and from peer nodes. #### [CLUSTER MEET](https://redis.io/commands/cluster-meet/) (not implemented) Forces a node to handshake with another node. #### [CLUSTER MYID](https://redis.io/commands/cluster-myid/) (not implemented) Returns the ID of a node. #### [CLUSTER MYSHARDID](https://redis.io/commands/cluster-myshardid/) (not implemented) Returns the shard ID of a node. #### [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) (not implemented) Returns the cluster configuration for a node. #### [CLUSTER REPLICAS](https://redis.io/commands/cluster-replicas/) (not implemented) Lists the replica nodes of a master node. #### [CLUSTER REPLICATE](https://redis.io/commands/cluster-replicate/) (not implemented) Configure a node as replica of a master node. #### [CLUSTER RESET](https://redis.io/commands/cluster-reset/) (not implemented) Resets a node. #### [CLUSTER SAVECONFIG](https://redis.io/commands/cluster-saveconfig/) (not implemented) Forces a node to save the cluster configuration to disk. #### [CLUSTER SET-CONFIG-EPOCH](https://redis.io/commands/cluster-set-config-epoch/) (not implemented) Sets the configuration epoch for a new node. #### [CLUSTER SETSLOT](https://redis.io/commands/cluster-setslot/) (not implemented) Binds a hash slot to a node. #### [CLUSTER SHARDS](https://redis.io/commands/cluster-shards/) (not implemented) Returns the mapping of cluster slots to shards. #### [CLUSTER SLAVES](https://redis.io/commands/cluster-slaves/) (not implemented) Lists the replica nodes of a master node. #### [CLUSTER SLOTS](https://redis.io/commands/cluster-slots/) (not implemented) Returns the mapping of cluster slots to nodes. #### [READONLY](https://redis.io/commands/readonly/) (not implemented) Enables read-only queries for a connection to a Redis Cluster replica node. #### [READWRITE](https://redis.io/commands/readwrite/) (not implemented) Enables read-write queries for a connection to a Reids Cluster replica node. fakeredis-py-2.26.1/docs/redis-commands/Redis/CONNECTION.md000066400000000000000000000066721470672414000230710ustar00rootroot00000000000000# Redis `connection` commands (4/25 implemented) ## [CLIENT SETINFO](https://redis.io/commands/client-setinfo/) Sets information specific to the client or connection. ## [ECHO](https://redis.io/commands/echo/) Returns the given string. ## [PING](https://redis.io/commands/ping/) Returns the server's liveliness response. ## [SELECT](https://redis.io/commands/select/) Changes the selected database. ## Unsupported connection commands > To implement support for a command, see [here](/guides/implement-command/) #### [AUTH](https://redis.io/commands/auth/) (not implemented) Authenticates the connection. #### [CLIENT](https://redis.io/commands/client/) (not implemented) A container for client connection commands. #### [CLIENT CACHING](https://redis.io/commands/client-caching/) (not implemented) Instructs the server whether to track the keys in the next request. #### [CLIENT GETNAME](https://redis.io/commands/client-getname/) (not implemented) Returns the name of the connection. #### [CLIENT GETREDIR](https://redis.io/commands/client-getredir/) (not implemented) Returns the client ID to which the connection's tracking notifications are redirected. #### [CLIENT ID](https://redis.io/commands/client-id/) (not implemented) Returns the unique client ID of the connection. 
#### [CLIENT INFO](https://redis.io/commands/client-info/) (not implemented) Returns information about the connection. #### [CLIENT KILL](https://redis.io/commands/client-kill/) (not implemented) Terminates open connections. #### [CLIENT LIST](https://redis.io/commands/client-list/) (not implemented) Lists open connections. #### [CLIENT NO-EVICT](https://redis.io/commands/client-no-evict/) (not implemented) Sets the client eviction mode of the connection. #### [CLIENT NO-TOUCH](https://redis.io/commands/client-no-touch/) (not implemented) Controls whether commands sent by the client affect the LRU/LFU of accessed keys. #### [CLIENT PAUSE](https://redis.io/commands/client-pause/) (not implemented) Suspends commands processing. #### [CLIENT REPLY](https://redis.io/commands/client-reply/) (not implemented) Instructs the server whether to reply to commands. #### [CLIENT SETNAME](https://redis.io/commands/client-setname/) (not implemented) Sets the connection name. #### [CLIENT TRACKING](https://redis.io/commands/client-tracking/) (not implemented) Controls server-assisted client-side caching for the connection. #### [CLIENT TRACKINGINFO](https://redis.io/commands/client-trackinginfo/) (not implemented) Returns information about server-assisted client-side caching for the connection. #### [CLIENT UNBLOCK](https://redis.io/commands/client-unblock/) (not implemented) Unblocks a client blocked by a blocking command from a different connection. #### [CLIENT UNPAUSE](https://redis.io/commands/client-unpause/) (not implemented) Resumes processing commands from paused clients. #### [HELLO](https://redis.io/commands/hello/) (not implemented) Handshakes with the Redis server. #### [QUIT](https://redis.io/commands/quit/) (not implemented) Closes the connection. #### [RESET](https://redis.io/commands/reset/) (not implemented) Resets the connection. fakeredis-py-2.26.1/docs/redis-commands/Redis/GENERIC.md000066400000000000000000000057201470672414000224770ustar00rootroot00000000000000# Redis `generic` commands (23/26 implemented) ## [DEL](https://redis.io/commands/del/) Deletes one or more keys. ## [DUMP](https://redis.io/commands/dump/) Returns a serialized representation of the value stored at a key. ## [EXISTS](https://redis.io/commands/exists/) Determines whether one or more keys exist. ## [EXPIRE](https://redis.io/commands/expire/) Sets the expiration time of a key in seconds. ## [EXPIREAT](https://redis.io/commands/expireat/) Sets the expiration time of a key to a Unix timestamp. ## [EXPIRETIME](https://redis.io/commands/expiretime/) Returns the expiration time of a key as a Unix timestamp. ## [KEYS](https://redis.io/commands/keys/) Returns all key names that match a pattern. ## [MOVE](https://redis.io/commands/move/) Moves a key to another database. ## [PERSIST](https://redis.io/commands/persist/) Removes the expiration time of a key. ## [PEXPIRE](https://redis.io/commands/pexpire/) Sets the expiration time of a key in milliseconds. ## [PEXPIREAT](https://redis.io/commands/pexpireat/) Sets the expiration time of a key to a Unix milliseconds timestamp. ## [PEXPIRETIME](https://redis.io/commands/pexpiretime/) Returns the expiration time of a key as a Unix milliseconds timestamp. ## [PTTL](https://redis.io/commands/pttl/) Returns the expiration time in milliseconds of a key. ## [RANDOMKEY](https://redis.io/commands/randomkey/) Returns a random key name from the database. ## [RENAME](https://redis.io/commands/rename/) Renames a key and overwrites the destination. 
## [RENAMENX](https://redis.io/commands/renamenx/) Renames a key only when the target key name doesn't exist. ## [RESTORE](https://redis.io/commands/restore/) Creates a key from the serialized representation of a value. ## [SCAN](https://redis.io/commands/scan/) Iterates over the key names in the database. ## [SORT](https://redis.io/commands/sort/) Sorts the elements in a list, a set, or a sorted set, optionally storing the result. ## [SORT_RO](https://redis.io/commands/sort_ro/) Returns the sorted elements of a list, a set, or a sorted set. ## [TTL](https://redis.io/commands/ttl/) Returns the expiration time in seconds of a key. ## [TYPE](https://redis.io/commands/type/) Determines the type of value stored at a key. ## [UNLINK](https://redis.io/commands/unlink/) Asynchronously deletes one or more keys. ## Unsupported generic commands > To implement support for a command, see [here](/guides/implement-command/) #### [COPY](https://redis.io/commands/copy/) (not implemented) Copies the value of a key to a new key. #### [WAIT](https://redis.io/commands/wait/) (not implemented) Blocks until the asynchronous replication of all preceding write commands sent by the connection is completed. #### [WAITAOF](https://redis.io/commands/waitaof/) (not implemented) Blocks until all of the preceding write commands sent by the connection are written to the append-only file of the master and/or replicas. fakeredis-py-2.26.1/docs/redis-commands/Redis/GEO.md000066400000000000000000000026771470672414000220450ustar00rootroot00000000000000# Redis `geo` commands (10/10 implemented) ## [GEOADD](https://redis.io/commands/geoadd/) Adds one or more members to a geospatial index. The key is created if it doesn't exist. ## [GEODIST](https://redis.io/commands/geodist/) Returns the distance between two members of a geospatial index. ## [GEOHASH](https://redis.io/commands/geohash/) Returns members from a geospatial index as geohash strings. ## [GEOPOS](https://redis.io/commands/geopos/) Returns the longitude and latitude of members from a geospatial index. ## [GEORADIUS](https://redis.io/commands/georadius/) Queries a geospatial index for members within a distance from a coordinate, optionally stores the result. ## [GEORADIUSBYMEMBER](https://redis.io/commands/georadiusbymember/) Queries a geospatial index for members within a distance from a member, optionally stores the result. ## [GEORADIUSBYMEMBER_RO](https://redis.io/commands/georadiusbymember_ro/) Returns members from a geospatial index that are within a distance from a member. ## [GEORADIUS_RO](https://redis.io/commands/georadius_ro/) Returns members from a geospatial index that are within a distance from a coordinate. ## [GEOSEARCH](https://redis.io/commands/geosearch/) Queries a geospatial index for members inside an area of a box or a circle. ## [GEOSEARCHSTORE](https://redis.io/commands/geosearchstore/) Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result. fakeredis-py-2.26.1/docs/redis-commands/Redis/HASH.md000066400000000000000000000063471470672414000221540ustar00rootroot00000000000000# Redis `hash` commands (25/27 implemented) ## [HDEL](https://redis.io/commands/hdel/) Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain. ## [HEXISTS](https://redis.io/commands/hexists/) Determines whether a field exists in a hash. 
## [HEXPIRE](https://redis.io/commands/hexpire/) Set expiry for hash field using relative time to expire (seconds) ## [HEXPIREAT](https://redis.io/commands/hexpireat/) Set expiry for hash field using an absolute Unix timestamp (seconds) ## [HEXPIRETIME](https://redis.io/commands/hexpiretime/) Returns the expiration time of a hash field as a Unix timestamp, in seconds. ## [HGET](https://redis.io/commands/hget/) Returns the value of a field in a hash. ## [HGETALL](https://redis.io/commands/hgetall/) Returns all fields and values in a hash. ## [HINCRBY](https://redis.io/commands/hincrby/) Increments the integer value of a field in a hash by a number. Uses 0 as initial value if the field doesn't exist. ## [HINCRBYFLOAT](https://redis.io/commands/hincrbyfloat/) Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist. ## [HKEYS](https://redis.io/commands/hkeys/) Returns all fields in a hash. ## [HLEN](https://redis.io/commands/hlen/) Returns the number of fields in a hash. ## [HMGET](https://redis.io/commands/hmget/) Returns the values of all fields in a hash. ## [HMSET](https://redis.io/commands/hmset/) Sets the values of multiple fields. ## [HPERSIST](https://redis.io/commands/hpersist/) Removes the expiration time for each specified field ## [HPEXPIRE](https://redis.io/commands/hpexpire/) Set expiry for hash field using relative time to expire (milliseconds) ## [HPEXPIREAT](https://redis.io/commands/hpexpireat/) Set expiry for hash field using an absolute Unix timestamp (milliseconds) ## [HPEXPIRETIME](https://redis.io/commands/hpexpiretime/) Returns the expiration time of a hash field as a Unix timestamp, in msec. ## [HPTTL](https://redis.io/commands/hpttl/) Returns the TTL in milliseconds of a hash field. ## [HRANDFIELD](https://redis.io/commands/hrandfield/) Returns one or more random fields from a hash. ## [HSCAN](https://redis.io/commands/hscan/) Iterates over fields and values of a hash. ## [HSET](https://redis.io/commands/hset/) Creates or modifies the value of a field in a hash. ## [HSETNX](https://redis.io/commands/hsetnx/) Sets the value of a field in a hash only when the field doesn't exist. ## [HSTRLEN](https://redis.io/commands/hstrlen/) Returns the length of the value of a field. ## [HTTL](https://redis.io/commands/httl/) Returns the TTL in seconds of a hash field. ## [HVALS](https://redis.io/commands/hvals/) Returns all values in a hash. ## Unsupported hash commands > To implement support for a command, see [here](/guides/implement-command/) #### [HGETF](https://redis.io/commands/hgetf/) (not implemented) For each specified field, returns its value and optionally set the field's remaining expiration time in seconds / milliseconds #### [HSETF](https://redis.io/commands/hsetf/) (not implemented) For each specified field, returns its value and optionally set the field's remaining expiration time in seconds / milliseconds fakeredis-py-2.26.1/docs/redis-commands/Redis/HYPERLOGLOG.md000066400000000000000000000006421470672414000232140ustar00rootroot00000000000000# Redis `hyperloglog` commands (3/3 implemented) ## [PFADD](https://redis.io/commands/pfadd/) Adds elements to a HyperLogLog key. Creates the key if it doesn't exist. ## [PFCOUNT](https://redis.io/commands/pfcount/) Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s). ## [PFMERGE](https://redis.io/commands/pfmerge/) Merges one or more HyperLogLog values into a single key. 
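As a quick orientation (this snippet is not part of the auto-generated reference above), the three commands can be exercised through `fakeredis` as sketched below; the key names are illustrative only. Note that, as described in the Known Limitations section of the documentation, fakeredis backs HyperLogLogs with plain sets, so the counts returned here are exact rather than approximate.

```python
# Minimal sketch of PFADD / PFCOUNT / PFMERGE against fakeredis.
# Key names ("hll:a", "hll:b", "hll:all") are illustrative only.
import fakeredis

r = fakeredis.FakeStrictRedis()
r.pfadd("hll:a", "x", "y", "z")         # PFADD: observe elements in a HyperLogLog
r.pfadd("hll:b", "z", "w")
assert r.pfcount("hll:a") == 3          # PFCOUNT: cardinality (exact in fakeredis)
r.pfmerge("hll:all", "hll:a", "hll:b")  # PFMERGE: merge several HyperLogLogs into one key
assert r.pfcount("hll:all") == 4
```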
fakeredis-py-2.26.1/docs/redis-commands/Redis/LIST.md000066400000000000000000000061411470672414000221740ustar00rootroot00000000000000# Redis `list` commands (22/22 implemented) ## [BLMOVE](https://redis.io/commands/blmove/) Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved. ## [BLMPOP](https://redis.io/commands/blmpop/) Pops the first element from one of multiple lists. Blocks until an element is available otherwise. Deletes the list if the last element was popped. ## [BLPOP](https://redis.io/commands/blpop/) Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped. ## [BRPOP](https://redis.io/commands/brpop/) Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped. ## [BRPOPLPUSH](https://redis.io/commands/brpoplpush/) Pops an element from a list, pushes it to another list and returns it. Block until an element is available otherwise. Deletes the list if the last element was popped. ## [LINDEX](https://redis.io/commands/lindex/) Returns an element from a list by its index. ## [LINSERT](https://redis.io/commands/linsert/) Inserts an element before or after another element in a list. ## [LLEN](https://redis.io/commands/llen/) Returns the length of a list. ## [LMOVE](https://redis.io/commands/lmove/) Returns an element after popping it from one list and pushing it to another. Deletes the list if the last element was moved. ## [LMPOP](https://redis.io/commands/lmpop/) Returns multiple elements from a list after removing them. Deletes the list if the last element was popped. ## [LPOP](https://redis.io/commands/lpop/) Returns the first elements in a list after removing it. Deletes the list if the last element was popped. ## [LPOS](https://redis.io/commands/lpos/) Returns the index of matching elements in a list. ## [LPUSH](https://redis.io/commands/lpush/) Prepends one or more elements to a list. Creates the key if it doesn't exist. ## [LPUSHX](https://redis.io/commands/lpushx/) Prepends one or more elements to a list only when the list exists. ## [LRANGE](https://redis.io/commands/lrange/) Returns a range of elements from a list. ## [LREM](https://redis.io/commands/lrem/) Removes elements from a list. Deletes the list if the last element was removed. ## [LSET](https://redis.io/commands/lset/) Sets the value of an element in a list by its index. ## [LTRIM](https://redis.io/commands/ltrim/) Removes elements from both ends a list. Deletes the list if all elements were trimmed. ## [RPOP](https://redis.io/commands/rpop/) Returns and removes the last elements of a list. Deletes the list if the last element was popped. ## [RPOPLPUSH](https://redis.io/commands/rpoplpush/) Returns the last element of a list after removing and pushing it to another list. Deletes the list if the last element was popped. ## [RPUSH](https://redis.io/commands/rpush/) Appends one or more elements to a list. Creates the key if it doesn't exist. ## [RPUSHX](https://redis.io/commands/rpushx/) Appends an element to a list only when the list exists. 
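For orientation (this snippet is not part of the auto-generated reference above), a few of the list commands can be exercised through `fakeredis` as sketched below; the key name is illustrative only.

```python
# Minimal sketch of a few list commands (RPUSH, LPUSH, LRANGE, LPOP, LLEN) against fakeredis.
# The key name "queue" is illustrative only.
import fakeredis

r = fakeredis.FakeStrictRedis()
r.rpush("queue", "a", "b", "c")     # RPUSH: append elements to the tail
r.lpush("queue", "z")               # LPUSH: prepend an element to the head
assert r.lrange("queue", 0, -1) == [b"z", b"a", b"b", b"c"]  # LRANGE: full range
assert r.lpop("queue") == b"z"      # LPOP: pop from the head
assert r.llen("queue") == 3         # LLEN: remaining length
```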
fakeredis-py-2.26.1/docs/redis-commands/Redis/PUBSUB.md000066400000000000000000000032061470672414000224200ustar00rootroot00000000000000# Redis `pubsub` commands (15/15 implemented) ## [PSUBSCRIBE](https://redis.io/commands/psubscribe/) Listens for messages published to channels that match one or more patterns. ## [PUBLISH](https://redis.io/commands/publish/) Posts a message to a channel. ## [PUBSUB](https://redis.io/commands/pubsub/) A container for Pub/Sub commands. ## [PUBSUB CHANNELS](https://redis.io/commands/pubsub-channels/) Returns the active channels. ## [PUBSUB HELP](https://redis.io/commands/pubsub-help/) Returns helpful text about the different subcommands. ## [PUBSUB NUMPAT](https://redis.io/commands/pubsub-numpat/) Returns a count of unique pattern subscriptions. ## [PUBSUB NUMSUB](https://redis.io/commands/pubsub-numsub/) Returns a count of subscribers to channels. ## [PUBSUB SHARDCHANNELS](https://redis.io/commands/pubsub-shardchannels/) Returns the active shard channels. ## [PUBSUB SHARDNUMSUB](https://redis.io/commands/pubsub-shardnumsub/) Returns the count of subscribers of shard channels. ## [PUNSUBSCRIBE](https://redis.io/commands/punsubscribe/) Stops listening to messages published to channels that match one or more patterns. ## [SPUBLISH](https://redis.io/commands/spublish/) Post a message to a shard channel ## [SSUBSCRIBE](https://redis.io/commands/ssubscribe/) Listens for messages published to shard channels. ## [SUBSCRIBE](https://redis.io/commands/subscribe/) Listens for messages published to channels. ## [SUNSUBSCRIBE](https://redis.io/commands/sunsubscribe/) Stops listening to messages posted to shard channels. ## [UNSUBSCRIBE](https://redis.io/commands/unsubscribe/) Stops listening to messages posted to channels. fakeredis-py-2.26.1/docs/redis-commands/Redis/SCRIPTING.md000066400000000000000000000056171470672414000227720ustar00rootroot00000000000000# Redis `scripting` commands (7/22 implemented) ## [EVAL](https://redis.io/commands/eval/) Executes a server-side Lua script. ## [EVALSHA](https://redis.io/commands/evalsha/) Executes a server-side Lua script by SHA1 digest. ## [SCRIPT](https://redis.io/commands/script/) A container for Lua scripts management commands. ## [SCRIPT EXISTS](https://redis.io/commands/script-exists/) Determines whether server-side Lua scripts exist in the script cache. ## [SCRIPT FLUSH](https://redis.io/commands/script-flush/) Removes all server-side Lua scripts from the script cache. ## [SCRIPT HELP](https://redis.io/commands/script-help/) Returns helpful text about the different subcommands. ## [SCRIPT LOAD](https://redis.io/commands/script-load/) Loads a server-side Lua script to the script cache. ## Unsupported scripting commands > To implement support for a command, see [here](/guides/implement-command/) #### [EVALSHA_RO](https://redis.io/commands/evalsha_ro/) (not implemented) Executes a read-only server-side Lua script by SHA1 digest. #### [EVAL_RO](https://redis.io/commands/eval_ro/) (not implemented) Executes a read-only server-side Lua script. #### [FCALL](https://redis.io/commands/fcall/) (not implemented) Invokes a function. #### [FCALL_RO](https://redis.io/commands/fcall_ro/) (not implemented) Invokes a read-only function. #### [FUNCTION](https://redis.io/commands/function/) (not implemented) A container for function commands. #### [FUNCTION DELETE](https://redis.io/commands/function-delete/) (not implemented) Deletes a library and its functions. 
#### [FUNCTION DUMP](https://redis.io/commands/function-dump/) (not implemented) Dumps all libraries into a serialized binary payload. #### [FUNCTION FLUSH](https://redis.io/commands/function-flush/) (not implemented) Deletes all libraries and functions. #### [FUNCTION KILL](https://redis.io/commands/function-kill/) (not implemented) Terminates a function during execution. #### [FUNCTION LIST](https://redis.io/commands/function-list/) (not implemented) Returns information about all libraries. #### [FUNCTION LOAD](https://redis.io/commands/function-load/) (not implemented) Creates a library. #### [FUNCTION RESTORE](https://redis.io/commands/function-restore/) (not implemented) Restores all libraries from a payload. #### [FUNCTION STATS](https://redis.io/commands/function-stats/) (not implemented) Returns information about a function during execution. #### [SCRIPT DEBUG](https://redis.io/commands/script-debug/) (not implemented) Sets the debug mode of server-side Lua scripts. #### [SCRIPT KILL](https://redis.io/commands/script-kill/) (not implemented) Terminates a server-side Lua script during execution. fakeredis-py-2.26.1/docs/redis-commands/Redis/SERVER.md000066400000000000000000000224541470672414000224340ustar00rootroot00000000000000# Redis `server` commands (11/70 implemented) ## [BGSAVE](https://redis.io/commands/bgsave/) Asynchronously saves the database(s) to disk. ## [COMMAND](https://redis.io/commands/command/) Returns detailed information about all commands. ## [COMMAND COUNT](https://redis.io/commands/command-count/) Returns a count of commands. ## [COMMAND INFO](https://redis.io/commands/command-info/) Returns information about one, multiple or all commands. ## [DBSIZE](https://redis.io/commands/dbsize/) Returns the number of keys in the database. ## [FLUSHALL](https://redis.io/commands/flushall/) Removes all keys from all databases. ## [FLUSHDB](https://redis.io/commands/flushdb/) Remove all keys from the current database. ## [LASTSAVE](https://redis.io/commands/lastsave/) Returns the Unix timestamp of the last successful save to disk. ## [SAVE](https://redis.io/commands/save/) Synchronously saves the database(s) to disk. ## [SWAPDB](https://redis.io/commands/swapdb/) Swaps two Redis databases. ## [TIME](https://redis.io/commands/time/) Returns the server time. ## Unsupported server commands > To implement support for a command, see [here](/guides/implement-command/) #### [ACL](https://redis.io/commands/acl/) (not implemented) A container for Access List Control commands. #### [ACL CAT](https://redis.io/commands/acl-cat/) (not implemented) Lists the ACL categories, or the commands inside a category. #### [ACL DELUSER](https://redis.io/commands/acl-deluser/) (not implemented) Deletes ACL users, and terminates their connections. #### [ACL DRYRUN](https://redis.io/commands/acl-dryrun/) (not implemented) Simulates the execution of a command by a user, without executing the command. #### [ACL GENPASS](https://redis.io/commands/acl-genpass/) (not implemented) Generates a pseudorandom, secure password that can be used to identify ACL users. #### [ACL GETUSER](https://redis.io/commands/acl-getuser/) (not implemented) Lists the ACL rules of a user. #### [ACL LIST](https://redis.io/commands/acl-list/) (not implemented) Dumps the effective rules in ACL file format. #### [ACL LOAD](https://redis.io/commands/acl-load/) (not implemented) Reloads the rules from the configured ACL file. 
#### [ACL LOG](https://redis.io/commands/acl-log/) (not implemented) Lists recent security events generated due to ACL rules. #### [ACL SAVE](https://redis.io/commands/acl-save/) (not implemented) Saves the effective ACL rules in the configured ACL file. #### [ACL SETUSER](https://redis.io/commands/acl-setuser/) (not implemented) Creates and modifies an ACL user and its rules. #### [ACL USERS](https://redis.io/commands/acl-users/) (not implemented) Lists all ACL users. #### [ACL WHOAMI](https://redis.io/commands/acl-whoami/) (not implemented) Returns the authenticated username of the current connection. #### [BGREWRITEAOF](https://redis.io/commands/bgrewriteaof/) (not implemented) Asynchronously rewrites the append-only file to disk. #### [COMMAND DOCS](https://redis.io/commands/command-docs/) (not implemented) Returns documentary information about one, multiple or all commands. #### [COMMAND GETKEYS](https://redis.io/commands/command-getkeys/) (not implemented) Extracts the key names from an arbitrary command. #### [COMMAND GETKEYSANDFLAGS](https://redis.io/commands/command-getkeysandflags/) (not implemented) Extracts the key names and access flags for an arbitrary command. #### [COMMAND LIST](https://redis.io/commands/command-list/) (not implemented) Returns a list of command names. #### [CONFIG](https://redis.io/commands/config/) (not implemented) A container for server configuration commands. #### [CONFIG GET](https://redis.io/commands/config-get/) (not implemented) Returns the effective values of configuration parameters. #### [CONFIG RESETSTAT](https://redis.io/commands/config-resetstat/) (not implemented) Resets the server's statistics. #### [CONFIG REWRITE](https://redis.io/commands/config-rewrite/) (not implemented) Persists the effective configuration to file. #### [CONFIG SET](https://redis.io/commands/config-set/) (not implemented) Sets configuration parameters in-flight. #### [FAILOVER](https://redis.io/commands/failover/) (not implemented) Starts a coordinated failover from a server to one of its replicas. #### [INFO](https://redis.io/commands/info/) (not implemented) Returns information and statistics about the server. #### [LATENCY](https://redis.io/commands/latency/) (not implemented) A container for latency diagnostics commands. #### [LATENCY DOCTOR](https://redis.io/commands/latency-doctor/) (not implemented) Returns a human-readable latency analysis report. #### [LATENCY GRAPH](https://redis.io/commands/latency-graph/) (not implemented) Returns a latency graph for an event. #### [LATENCY HELP](https://redis.io/commands/latency-help/) (not implemented) Returns helpful text about the different subcommands. #### [LATENCY HISTOGRAM](https://redis.io/commands/latency-histogram/) (not implemented) Returns the cumulative distribution of latencies of a subset or all commands. #### [LATENCY HISTORY](https://redis.io/commands/latency-history/) (not implemented) Returns timestamp-latency samples for an event. #### [LATENCY LATEST](https://redis.io/commands/latency-latest/) (not implemented) Returns the latest latency samples for all events. #### [LATENCY RESET](https://redis.io/commands/latency-reset/) (not implemented) Resets the latency data for one or more events. #### [LOLWUT](https://redis.io/commands/lolwut/) (not implemented) Displays computer art and the Redis version #### [MEMORY](https://redis.io/commands/memory/) (not implemented) A container for memory diagnostics commands. 
#### [MEMORY DOCTOR](https://redis.io/commands/memory-doctor/) (not implemented) Outputs a memory problems report. #### [MEMORY MALLOC-STATS](https://redis.io/commands/memory-malloc-stats/) (not implemented) Returns the allocator statistics. #### [MEMORY PURGE](https://redis.io/commands/memory-purge/) (not implemented) Asks the allocator to release memory. #### [MEMORY STATS](https://redis.io/commands/memory-stats/) (not implemented) Returns details about memory usage. #### [MEMORY USAGE](https://redis.io/commands/memory-usage/) (not implemented) Estimates the memory usage of a key. #### [MODULE](https://redis.io/commands/module/) (not implemented) A container for module commands. #### [MODULE LIST](https://redis.io/commands/module-list/) (not implemented) Returns all loaded modules. #### [MODULE LOAD](https://redis.io/commands/module-load/) (not implemented) Loads a module. #### [MODULE LOADEX](https://redis.io/commands/module-loadex/) (not implemented) Loads a module using extended parameters. #### [MODULE UNLOAD](https://redis.io/commands/module-unload/) (not implemented) Unloads a module. #### [MONITOR](https://redis.io/commands/monitor/) (not implemented) Listens for all requests received by the server in real-time. #### [PSYNC](https://redis.io/commands/psync/) (not implemented) An internal command used in replication. #### [REPLCONF](https://redis.io/commands/replconf/) (not implemented) An internal command for configuring the replication stream. #### [REPLICAOF](https://redis.io/commands/replicaof/) (not implemented) Configures a server as replica of another, or promotes it to a master. #### [RESTORE-ASKING](https://redis.io/commands/restore-asking/) (not implemented) An internal command for migrating keys in a cluster. #### [ROLE](https://redis.io/commands/role/) (not implemented) Returns the replication role. #### [SHUTDOWN](https://redis.io/commands/shutdown/) (not implemented) Synchronously saves the database(s) to disk and shuts down the Redis server. #### [SLAVEOF](https://redis.io/commands/slaveof/) (not implemented) Sets a Redis server as a replica of another, or promotes it to being a master. #### [SLOWLOG](https://redis.io/commands/slowlog/) (not implemented) A container for slow log commands. #### [SLOWLOG GET](https://redis.io/commands/slowlog-get/) (not implemented) Returns the slow log's entries. #### [SLOWLOG HELP](https://redis.io/commands/slowlog-help/) (not implemented) Show helpful text about the different subcommands #### [SLOWLOG LEN](https://redis.io/commands/slowlog-len/) (not implemented) Returns the number of entries in the slow log. #### [SLOWLOG RESET](https://redis.io/commands/slowlog-reset/) (not implemented) Clears all entries from the slow log. #### [SYNC](https://redis.io/commands/sync/) (not implemented) An internal command used in replication. fakeredis-py-2.26.1/docs/redis-commands/Redis/SET.md000066400000000000000000000034171470672414000220570ustar00rootroot00000000000000# Redis `set` commands (17/17 implemented) ## [SADD](https://redis.io/commands/sadd/) Adds one or more members to a set. Creates the key if it doesn't exist. ## [SCARD](https://redis.io/commands/scard/) Returns the number of members in a set. ## [SDIFF](https://redis.io/commands/sdiff/) Returns the difference of multiple sets. ## [SDIFFSTORE](https://redis.io/commands/sdiffstore/) Stores the difference of multiple sets in a key. ## [SINTER](https://redis.io/commands/sinter/) Returns the intersect of multiple sets. 
## [SINTERCARD](https://redis.io/commands/sintercard/) Returns the number of members of the intersect of multiple sets. ## [SINTERSTORE](https://redis.io/commands/sinterstore/) Stores the intersect of multiple sets in a key. ## [SISMEMBER](https://redis.io/commands/sismember/) Determines whether a member belongs to a set. ## [SMEMBERS](https://redis.io/commands/smembers/) Returns all members of a set. ## [SMISMEMBER](https://redis.io/commands/smismember/) Determines whether multiple members belong to a set. ## [SMOVE](https://redis.io/commands/smove/) Moves a member from one set to another. ## [SPOP](https://redis.io/commands/spop/) Returns one or more random members from a set after removing them. Deletes the set if the last member was popped. ## [SRANDMEMBER](https://redis.io/commands/srandmember/) Get one or multiple random members from a set ## [SREM](https://redis.io/commands/srem/) Removes one or more members from a set. Deletes the set if the last member was removed. ## [SSCAN](https://redis.io/commands/sscan/) Iterates over members of a set. ## [SUNION](https://redis.io/commands/sunion/) Returns the union of multiple sets. ## [SUNIONSTORE](https://redis.io/commands/sunionstore/) Stores the union of multiple sets in a key. fakeredis-py-2.26.1/docs/redis-commands/Redis/SORTED-SET.md000066400000000000000000000114331470672414000230520ustar00rootroot00000000000000# Redis `sorted-set` commands (35/35 implemented) ## [BZMPOP](https://redis.io/commands/bzmpop/) Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped. ## [BZPOPMAX](https://redis.io/commands/bzpopmax/) Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member available otherwise. Deletes the sorted set if the last element was popped. ## [BZPOPMIN](https://redis.io/commands/bzpopmin/) Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped. ## [ZADD](https://redis.io/commands/zadd/) Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist. ## [ZCARD](https://redis.io/commands/zcard/) Returns the number of members in a sorted set. ## [ZCOUNT](https://redis.io/commands/zcount/) Returns the count of members in a sorted set that have scores within a range. ## [ZDIFF](https://redis.io/commands/zdiff/) Returns the difference between multiple sorted sets. ## [ZDIFFSTORE](https://redis.io/commands/zdiffstore/) Stores the difference of multiple sorted sets in a key. ## [ZINCRBY](https://redis.io/commands/zincrby/) Increments the score of a member in a sorted set. ## [ZINTER](https://redis.io/commands/zinter/) Returns the intersect of multiple sorted sets. ## [ZINTERCARD](https://redis.io/commands/zintercard/) Returns the number of members of the intersect of multiple sorted sets. ## [ZINTERSTORE](https://redis.io/commands/zinterstore/) Stores the intersect of multiple sorted sets in a key. ## [ZLEXCOUNT](https://redis.io/commands/zlexcount/) Returns the number of members in a sorted set within a lexicographical range. ## [ZMPOP](https://redis.io/commands/zmpop/) Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. Deletes the sorted set if the last member was popped. 
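For illustration, a short `fakeredis` session exercising a few of the sorted-set commands listed here; the key name `scores` and the members are arbitrary examples:

```pycon
>>> import fakeredis
>>> r = fakeredis.FakeStrictRedis()
>>> r.zadd("scores", {"alice": 10, "bob": 5})
2
>>> r.zrange("scores", 0, -1, withscores=True)
[(b'bob', 5.0), (b'alice', 10.0)]
>>> r.zscore("scores", "alice")
10.0
```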
## [ZMSCORE](https://redis.io/commands/zmscore/) Returns the score of one or more members in a sorted set. ## [ZPOPMAX](https://redis.io/commands/zpopmax/) Returns the highest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped. ## [ZPOPMIN](https://redis.io/commands/zpopmin/) Returns the lowest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped. ## [ZRANDMEMBER](https://redis.io/commands/zrandmember/) Returns one or more random members from a sorted set. ## [ZRANGE](https://redis.io/commands/zrange/) Returns members in a sorted set within a range of indexes. ## [ZRANGEBYLEX](https://redis.io/commands/zrangebylex/) Returns members in a sorted set within a lexicographical range. ## [ZRANGEBYSCORE](https://redis.io/commands/zrangebyscore/) Returns members in a sorted set within a range of scores. ## [ZRANGESTORE](https://redis.io/commands/zrangestore/) Stores a range of members from a sorted set in a key. ## [ZRANK](https://redis.io/commands/zrank/) Returns the index of a member in a sorted set ordered by ascending scores. ## [ZREM](https://redis.io/commands/zrem/) Removes one or more members from a sorted set. Deletes the sorted set if all members were removed. ## [ZREMRANGEBYLEX](https://redis.io/commands/zremrangebylex/) Removes members in a sorted set within a lexicographical range. Deletes the sorted set if all members were removed. ## [ZREMRANGEBYRANK](https://redis.io/commands/zremrangebyrank/) Removes members in a sorted set within a range of indexes. Deletes the sorted set if all members were removed. ## [ZREMRANGEBYSCORE](https://redis.io/commands/zremrangebyscore/) Removes members in a sorted set within a range of scores. Deletes the sorted set if all members were removed. ## [ZREVRANGE](https://redis.io/commands/zrevrange/) Returns members in a sorted set within a range of indexes in reverse order. ## [ZREVRANGEBYLEX](https://redis.io/commands/zrevrangebylex/) Returns members in a sorted set within a lexicographical range in reverse order. ## [ZREVRANGEBYSCORE](https://redis.io/commands/zrevrangebyscore/) Returns members in a sorted set within a range of scores in reverse order. ## [ZREVRANK](https://redis.io/commands/zrevrank/) Returns the index of a member in a sorted set ordered by descending scores. ## [ZSCAN](https://redis.io/commands/zscan/) Iterates over members and scores of a sorted set. ## [ZSCORE](https://redis.io/commands/zscore/) Returns the score of a member in a sorted set. ## [ZUNION](https://redis.io/commands/zunion/) Returns the union of multiple sorted sets. ## [ZUNIONSTORE](https://redis.io/commands/zunionstore/) Stores the union of multiple sorted sets in a key. fakeredis-py-2.26.1/docs/redis-commands/Redis/STREAM.md000066400000000000000000000047771470672414000224310ustar00rootroot00000000000000# Redis `stream` commands (20/20 implemented) ## [XACK](https://redis.io/commands/xack/) Returns the number of messages that were successfully acknowledged by the consumer group member of a stream. ## [XADD](https://redis.io/commands/xadd/) Appends a new message to a stream. Creates the key if it doesn't exist. ## [XAUTOCLAIM](https://redis.io/commands/xautoclaim/) Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to a consumer group member. ## [XCLAIM](https://redis.io/commands/xclaim/) Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered to a consumer group member. 
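To illustrate the stream commands listed here, a minimal `fakeredis` session could look like this; the stream name `mystream` and the field/value pair are arbitrary examples:

```pycon
>>> import fakeredis
>>> r = fakeredis.FakeStrictRedis()
>>> entry_id = r.xadd("mystream", {"event": "login"})
>>> r.xlen("mystream")
1
>>> r.xrange("mystream") == [(entry_id, {b"event": b"login"})]
True
```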
## [XDEL](https://redis.io/commands/xdel/) Returns the number of messages after removing them from a stream. ## [XGROUP CREATE](https://redis.io/commands/xgroup-create/) Creates a consumer group. ## [XGROUP CREATECONSUMER](https://redis.io/commands/xgroup-createconsumer/) Creates a consumer in a consumer group. ## [XGROUP DELCONSUMER](https://redis.io/commands/xgroup-delconsumer/) Deletes a consumer from a consumer group. ## [XGROUP DESTROY](https://redis.io/commands/xgroup-destroy/) Destroys a consumer group. ## [XGROUP SETID](https://redis.io/commands/xgroup-setid/) Sets the last-delivered ID of a consumer group. ## [XINFO CONSUMERS](https://redis.io/commands/xinfo-consumers/) Returns a list of the consumers in a consumer group. ## [XINFO GROUPS](https://redis.io/commands/xinfo-groups/) Returns a list of the consumer groups of a stream. ## [XINFO STREAM](https://redis.io/commands/xinfo-stream/) Returns information about a stream. ## [XLEN](https://redis.io/commands/xlen/) Return the number of messages in a stream. ## [XPENDING](https://redis.io/commands/xpending/) Returns the information and entries from a stream consumer group's pending entries list. ## [XRANGE](https://redis.io/commands/xrange/) Returns the messages from a stream within a range of IDs. ## [XREAD](https://redis.io/commands/xread/) Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise. ## [XREADGROUP](https://redis.io/commands/xreadgroup/) Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise. ## [XREVRANGE](https://redis.io/commands/xrevrange/) Returns the messages from a stream within a range of IDs in reverse order. ## [XTRIM](https://redis.io/commands/xtrim/) Deletes messages from the beginning of a stream. fakeredis-py-2.26.1/docs/redis-commands/Redis/STRING.md000066400000000000000000000053071470672414000224320ustar00rootroot00000000000000# Redis `string` commands (22/22 implemented) ## [APPEND](https://redis.io/commands/append/) Appends a string to the value of a key. Creates the key if it doesn't exist. ## [DECR](https://redis.io/commands/decr/) Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist. ## [DECRBY](https://redis.io/commands/decrby/) Decrements a number from the integer value of a key. Uses 0 as initial value if the key doesn't exist. ## [GET](https://redis.io/commands/get/) Returns the string value of a key. ## [GETDEL](https://redis.io/commands/getdel/) Returns the string value of a key after deleting the key. ## [GETEX](https://redis.io/commands/getex/) Returns the string value of a key after setting its expiration time. ## [GETRANGE](https://redis.io/commands/getrange/) Returns a substring of the string stored at a key. ## [GETSET](https://redis.io/commands/getset/) Returns the previous string value of a key after setting it to a new value. ## [INCR](https://redis.io/commands/incr/) Increments the integer value of a key by one. Uses 0 as initial value if the key doesn't exist. ## [INCRBY](https://redis.io/commands/incrby/) Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist. ## [INCRBYFLOAT](https://redis.io/commands/incrbyfloat/) Increment the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist. ## [LCS](https://redis.io/commands/lcs/) Finds the longest common substring. 
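For illustration, a short `fakeredis` session using a few of the string commands listed here; the key names are arbitrary examples:

```pycon
>>> import fakeredis
>>> r = fakeredis.FakeStrictRedis()
>>> r.set("greeting", "hello")
True
>>> r.append("greeting", " world")
11
>>> r.get("greeting")
b'hello world'
>>> r.incr("visits")
1
```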
## [MGET](https://redis.io/commands/mget/) Atomically returns the string values of one or more keys. ## [MSET](https://redis.io/commands/mset/) Atomically creates or modifies the string values of one or more keys. ## [MSETNX](https://redis.io/commands/msetnx/) Atomically modifies the string values of one or more keys only when all keys don't exist. ## [PSETEX](https://redis.io/commands/psetex/) Sets both string value and expiration time in milliseconds of a key. The key is created if it doesn't exist. ## [SET](https://redis.io/commands/set/) Sets the string value of a key, ignoring its type. The key is created if it doesn't exist. ## [SETEX](https://redis.io/commands/setex/) Sets the string value and expiration time of a key. Creates the key if it doesn't exist. ## [SETNX](https://redis.io/commands/setnx/) Set the string value of a key only when the key doesn't exist. ## [SETRANGE](https://redis.io/commands/setrange/) Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist. ## [STRLEN](https://redis.io/commands/strlen/) Returns the length of a string value. ## [SUBSTR](https://redis.io/commands/substr/) Returns a substring from a string value. fakeredis-py-2.26.1/docs/redis-commands/Redis/TRANSACTIONS.md000066400000000000000000000007571470672414000233400ustar00rootroot00000000000000# Redis `transactions` commands (5/5 implemented) ## [DISCARD](https://redis.io/commands/discard/) Discards a transaction. ## [EXEC](https://redis.io/commands/exec/) Executes all commands in a transaction. ## [MULTI](https://redis.io/commands/multi/) Starts a transaction. ## [UNWATCH](https://redis.io/commands/unwatch/) Forgets about watched keys of a transaction. ## [WATCH](https://redis.io/commands/watch/) Monitors changes to keys to determine the execution of a transaction. fakeredis-py-2.26.1/docs/redis-commands/RedisBloom/000077500000000000000000000000001470672414000220665ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/redis-commands/RedisBloom/BF.md000066400000000000000000000021331470672414000226760ustar00rootroot00000000000000# RedisBloom `bf` commands (10/10 implemented) ## [BF.RESERVE](https://redis.io/commands/bf.reserve/) Creates a new Bloom Filter ## [BF.ADD](https://redis.io/commands/bf.add/) Adds an item to a Bloom Filter ## [BF.MADD](https://redis.io/commands/bf.madd/) Adds one or more items to a Bloom Filter. A filter will be created if it does not exist ## [BF.INSERT](https://redis.io/commands/bf.insert/) Adds one or more items to a Bloom Filter. 
A filter will be created if it does not exist ## [BF.EXISTS](https://redis.io/commands/bf.exists/) Checks whether an item exists in a Bloom Filter ## [BF.MEXISTS](https://redis.io/commands/bf.mexists/) Checks whether one or more items exist in a Bloom Filter ## [BF.SCANDUMP](https://redis.io/commands/bf.scandump/) Begins an incremental save of the bloom filter ## [BF.LOADCHUNK](https://redis.io/commands/bf.loadchunk/) Restores a filter previously saved using SCANDUMP ## [BF.INFO](https://redis.io/commands/bf.info/) Returns information about a Bloom Filter ## [BF.CARD](https://redis.io/commands/bf.card/) Returns the cardinality of a Bloom filter fakeredis-py-2.26.1/docs/redis-commands/RedisBloom/CF.md000066400000000000000000000026011470672414000226770ustar00rootroot00000000000000# RedisBloom `cf` commands (12/12 implemented) ## [CF.RESERVE](https://redis.io/commands/cf.reserve/) Creates a new Cuckoo Filter ## [CF.ADD](https://redis.io/commands/cf.add/) Adds an item to a Cuckoo Filter ## [CF.ADDNX](https://redis.io/commands/cf.addnx/) Adds an item to a Cuckoo Filter if the item did not exist previously. ## [CF.INSERT](https://redis.io/commands/cf.insert/) Adds one or more items to a Cuckoo Filter. A filter will be created if it does not exist ## [CF.INSERTNX](https://redis.io/commands/cf.insertnx/) Adds one or more items to a Cuckoo Filter if the items did not exist previously. A filter will be created if it does not exist ## [CF.EXISTS](https://redis.io/commands/cf.exists/) Checks whether one or more items exist in a Cuckoo Filter ## [CF.MEXISTS](https://redis.io/commands/cf.mexists/) Checks whether one or more items exist in a Cuckoo Filter ## [CF.DEL](https://redis.io/commands/cf.del/) Deletes an item from a Cuckoo Filter ## [CF.COUNT](https://redis.io/commands/cf.count/) Return the number of times an item might be in a Cuckoo Filter ## [CF.SCANDUMP](https://redis.io/commands/cf.scandump/) Begins an incremental save of the bloom filter ## [CF.LOADCHUNK](https://redis.io/commands/cf.loadchunk/) Restores a filter previously saved using SCANDUMP ## [CF.INFO](https://redis.io/commands/cf.info/) Returns information about a Cuckoo Filter fakeredis-py-2.26.1/docs/redis-commands/RedisBloom/CMS.md000066400000000000000000000013051470672414000230310ustar00rootroot00000000000000# RedisBloom `cms` commands (6/6 implemented) ## [CMS.INITBYDIM](https://redis.io/commands/cms.initbydim/) Initializes a Count-Min Sketch to dimensions specified by user ## [CMS.INITBYPROB](https://redis.io/commands/cms.initbyprob/) Initializes a Count-Min Sketch to accommodate requested tolerances. ## [CMS.INCRBY](https://redis.io/commands/cms.incrby/) Increases the count of one or more items by increment ## [CMS.QUERY](https://redis.io/commands/cms.query/) Returns the count for one or more items in a sketch ## [CMS.MERGE](https://redis.io/commands/cms.merge/) Merges several sketches into one sketch ## [CMS.INFO](https://redis.io/commands/cms.info/) Returns information about a sketch fakeredis-py-2.26.1/docs/redis-commands/RedisBloom/TDIGEST.md000066400000000000000000000046001470672414000235130ustar00rootroot00000000000000# RedisBloom `tdigest` commands (14/14 implemented) ## [TDIGEST.CREATE](https://redis.io/commands/tdigest.create/) Allocates memory and initializes a new t-digest sketch ## [TDIGEST.RESET](https://redis.io/commands/tdigest.reset/) Resets a t-digest sketch: empty the sketch and re-initializes it. 
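As a quick illustration, a t-digest sketch might be exercised as below via the client's `tdigest()` command group; the key name `t` and the sample values are arbitrary, and the exact formatting of the replies may vary with the client version:

```pycon
>>> import fakeredis
>>> r = fakeredis.FakeStrictRedis()
>>> r.tdigest().create("t")
OK
>>> r.tdigest().add("t", [1.0, 2.0, 3.0, 4.0, 5.0])
OK
>>> r.tdigest().quantile("t", 0.5) == [3.0]
True
```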
## [TDIGEST.ADD](https://redis.io/commands/tdigest.add/) Adds one or more observations to a t-digest sketch ## [TDIGEST.MERGE](https://redis.io/commands/tdigest.merge/) Merges multiple t-digest sketches into a single sketch ## [TDIGEST.MIN](https://redis.io/commands/tdigest.min/) Returns the minimum observation value from a t-digest sketch ## [TDIGEST.MAX](https://redis.io/commands/tdigest.max/) Returns the maximum observation value from a t-digest sketch ## [TDIGEST.QUANTILE](https://redis.io/commands/tdigest.quantile/) Returns, for each input fraction, an estimation of the value (floating point) that is smaller than the given fraction of observations ## [TDIGEST.CDF](https://redis.io/commands/tdigest.cdf/) Returns, for each input value, an estimation of the fraction (floating-point) of (observations smaller than the given value + half the observations equal to the given value) ## [TDIGEST.TRIMMED_MEAN](https://redis.io/commands/tdigest.trimmed_mean/) Returns an estimation of the mean value from the sketch, excluding observation values outside the low and high cutoff quantiles ## [TDIGEST.RANK](https://redis.io/commands/tdigest.rank/) Returns, for each input value (floating-point), the estimated rank of the value (the number of observations in the sketch that are smaller than the value + half the number of observations that are equal to the value) ## [TDIGEST.REVRANK](https://redis.io/commands/tdigest.revrank/) Returns, for each input value (floating-point), the estimated reverse rank of the value (the number of observations in the sketch that are larger than the value + half the number of observations that are equal to the value) ## [TDIGEST.BYRANK](https://redis.io/commands/tdigest.byrank/) Returns, for each input rank, an estimation of the value (floating-point) with that rank ## [TDIGEST.BYREVRANK](https://redis.io/commands/tdigest.byrevrank/) Returns, for each input reverse rank, an estimation of the value (floating-point) with that reverse rank ## [TDIGEST.INFO](https://redis.io/commands/tdigest.info/) Returns information and statistics about a t-digest sketch fakeredis-py-2.26.1/docs/redis-commands/RedisBloom/TOPK.md000066400000000000000000000014131470672414000231640ustar00rootroot00000000000000# RedisBloom `topk` commands (7/7 implemented) ## [TOPK.RESERVE](https://redis.io/commands/topk.reserve/) Initializes a TopK with specified parameters ## [TOPK.ADD](https://redis.io/commands/topk.add/) Increases the count of one or more items by increment ## [TOPK.INCRBY](https://redis.io/commands/topk.incrby/) Increases the count of one or more items by increment ## [TOPK.QUERY](https://redis.io/commands/topk.query/) Checks whether one or more items are in a sketch ## [TOPK.COUNT](https://redis.io/commands/topk.count/) Return the count for one or more items are in a sketch ## [TOPK.LIST](https://redis.io/commands/topk.list/) Return full list of items in Top K list ## [TOPK.INFO](https://redis.io/commands/topk.info/) Returns information about a sketch fakeredis-py-2.26.1/docs/redis-commands/RedisJson/000077500000000000000000000000001470672414000217275ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/redis-commands/RedisJson/JSON.md000066400000000000000000000051031470672414000230210ustar00rootroot00000000000000# RedisJson `json` commands (22/22 implemented) ## [JSON.DEL](https://redis.io/commands/json.del/) Deletes a value ## [JSON.FORGET](https://redis.io/commands/json.forget/) Deletes a value ## [JSON.GET](https://redis.io/commands/json.get/) Gets the value at one or more paths in JSON 
serialized form ## [JSON.TOGGLE](https://redis.io/commands/json.toggle/) Toggles a boolean value ## [JSON.CLEAR](https://redis.io/commands/json.clear/) Clears all values from an array or an object and sets numeric values to `0` ## [JSON.SET](https://redis.io/commands/json.set/) Sets or updates the JSON value at a path ## [JSON.MSET](https://redis.io/commands/json.mset/) Sets or updates the JSON value of one or more keys ## [JSON.MERGE](https://redis.io/commands/json.merge/) Merges a given JSON value into matching paths. Consequently, JSON values at matching paths are updated, deleted, or expanded with new children ## [JSON.MGET](https://redis.io/commands/json.mget/) Returns the values at a path from one or more keys ## [JSON.NUMINCRBY](https://redis.io/commands/json.numincrby/) Increments the numeric value at path by a value ## [JSON.NUMMULTBY](https://redis.io/commands/json.nummultby/) Multiplies the numeric value at path by a value ## [JSON.STRAPPEND](https://redis.io/commands/json.strappend/) Appends a string to a JSON string value at path ## [JSON.STRLEN](https://redis.io/commands/json.strlen/) Returns the length of the JSON String at path in key ## [JSON.ARRAPPEND](https://redis.io/commands/json.arrappend/) Append one or more json values into the array at path after the last element in it. ## [JSON.ARRINDEX](https://redis.io/commands/json.arrindex/) Returns the index of the first occurrence of a JSON scalar value in the array at path ## [JSON.ARRINSERT](https://redis.io/commands/json.arrinsert/) Inserts the JSON scalar(s) value at the specified index in the array at path ## [JSON.ARRLEN](https://redis.io/commands/json.arrlen/) Returns the length of the array at path ## [JSON.ARRPOP](https://redis.io/commands/json.arrpop/) Removes and returns the element at the specified index in the array at path ## [JSON.ARRTRIM](https://redis.io/commands/json.arrtrim/) Trims the array at path to contain only the specified inclusive range of indices from start to stop ## [JSON.OBJKEYS](https://redis.io/commands/json.objkeys/) Returns the JSON keys of the object at path ## [JSON.OBJLEN](https://redis.io/commands/json.objlen/) Returns the number of keys of the object at path ## [JSON.TYPE](https://redis.io/commands/json.type/) Returns the type of the JSON value at path fakeredis-py-2.26.1/docs/redis-commands/RedisSearch/000077500000000000000000000000001470672414000222235ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/redis-commands/RedisSearch/SEARCH.md000066400000000000000000000067611470672414000235240ustar00rootroot00000000000000 ## Unsupported search commands > To implement support for a command, see [here](/guides/implement-command/) #### [FT.CREATE](https://redis.io/commands/ft.create/) (not implemented) Creates an index with the given spec #### [FT.INFO](https://redis.io/commands/ft.info/) (not implemented) Returns information and statistics on the index #### [FT.EXPLAIN](https://redis.io/commands/ft.explain/) (not implemented) Returns the execution plan for a complex query #### [FT.EXPLAINCLI](https://redis.io/commands/ft.explaincli/) (not implemented) Returns the execution plan for a complex query #### [FT.ALTER](https://redis.io/commands/ft.alter/) (not implemented) Adds a new field to the index #### [FT.DROPINDEX](https://redis.io/commands/ft.dropindex/) (not implemented) Deletes the index #### [FT.ALIASADD](https://redis.io/commands/ft.aliasadd/) (not implemented) Adds an alias to the index #### [FT.ALIASUPDATE](https://redis.io/commands/ft.aliasupdate/) (not implemented) Adds or 
updates an alias to the index #### [FT.ALIASDEL](https://redis.io/commands/ft.aliasdel/) (not implemented) Deletes an alias from the index #### [FT.TAGVALS](https://redis.io/commands/ft.tagvals/) (not implemented) Returns the distinct tags indexed in a Tag field #### [FT.SYNUPDATE](https://redis.io/commands/ft.synupdate/) (not implemented) Creates or updates a synonym group with additional terms #### [FT.SYNDUMP](https://redis.io/commands/ft.syndump/) (not implemented) Dumps the contents of a synonym group #### [FT.SPELLCHECK](https://redis.io/commands/ft.spellcheck/) (not implemented) Performs spelling correction on a query, returning suggestions for misspelled terms #### [FT.DICTADD](https://redis.io/commands/ft.dictadd/) (not implemented) Adds terms to a dictionary #### [FT.DICTDEL](https://redis.io/commands/ft.dictdel/) (not implemented) Deletes terms from a dictionary #### [FT.DICTDUMP](https://redis.io/commands/ft.dictdump/) (not implemented) Dumps all terms in the given dictionary #### [FT._LIST](https://redis.io/commands/ft._list/) (not implemented) Returns a list of all existing indexes #### [FT.CONFIG SET](https://redis.io/commands/ft.config-set/) (not implemented) Sets runtime configuration options #### [FT.CONFIG GET](https://redis.io/commands/ft.config-get/) (not implemented) Retrieves runtime configuration options #### [FT.CONFIG HELP](https://redis.io/commands/ft.config-help/) (not implemented) Help description of runtime configuration options #### [FT.SEARCH](https://redis.io/commands/ft.search/) (not implemented) Searches the index with a textual query, returning either documents or just ids #### [FT.AGGREGATE](https://redis.io/commands/ft.aggregate/) (not implemented) Run a search query on an index and perform aggregate transformations on the results #### [FT.PROFILE](https://redis.io/commands/ft.profile/) (not implemented) Performs a `FT.SEARCH` or `FT.AGGREGATE` command and collects performance information #### [FT.CURSOR READ](https://redis.io/commands/ft.cursor-read/) (not implemented) Reads from a cursor #### [FT.CURSOR DEL](https://redis.io/commands/ft.cursor-del/) (not implemented) Deletes a cursor fakeredis-py-2.26.1/docs/redis-commands/RedisSearch/SUGGESTION.md000066400000000000000000000012521470672414000242340ustar00rootroot00000000000000 ## Unsupported suggestion commands > To implement support for a command, see [here](/guides/implement-command/) #### [FT.SUGADD](https://redis.io/commands/ft.sugadd/) (not implemented) Adds a suggestion string to an auto-complete suggestion dictionary #### [FT.SUGGET](https://redis.io/commands/ft.sugget/) (not implemented) Gets completion suggestions for a prefix #### [FT.SUGDEL](https://redis.io/commands/ft.sugdel/) (not implemented) Deletes a string from a suggestion index #### [FT.SUGLEN](https://redis.io/commands/ft.suglen/) (not implemented) Gets the size of an auto-complete suggestion dictionary fakeredis-py-2.26.1/docs/redis-commands/RedisTimeSeries/000077500000000000000000000000001470672414000230675ustar00rootroot00000000000000fakeredis-py-2.26.1/docs/redis-commands/RedisTimeSeries/TIMESERIES.md000066400000000000000000000042051470672414000250630ustar00rootroot00000000000000# RedisTimeSeries `timeseries` commands (17/17 implemented) ## [TS.CREATE](https://redis.io/commands/ts.create/) Create a new time series ## [TS.DEL](https://redis.io/commands/ts.del/) Delete all samples between two timestamps for a given time series ## [TS.ALTER](https://redis.io/commands/ts.alter/) Update the retention, chunk size, duplicate policy, 
and labels of an existing time series ## [TS.ADD](https://redis.io/commands/ts.add/) Append a sample to a time series ## [TS.MADD](https://redis.io/commands/ts.madd/) Append new samples to one or more time series ## [TS.INCRBY](https://redis.io/commands/ts.incrby/) Increase the value of the sample with the maximum existing timestamp, or create a new sample with a value equal to the value of the sample with the maximum existing timestamp with a given increment ## [TS.DECRBY](https://redis.io/commands/ts.decrby/) Decrease the value of the sample with the maximum existing timestamp, or create a new sample with a value equal to the value of the sample with the maximum existing timestamp with a given decrement ## [TS.CREATERULE](https://redis.io/commands/ts.createrule/) Create a compaction rule ## [TS.DELETERULE](https://redis.io/commands/ts.deleterule/) Delete a compaction rule ## [TS.RANGE](https://redis.io/commands/ts.range/) Query a range in forward direction ## [TS.REVRANGE](https://redis.io/commands/ts.revrange/) Query a range in reverse direction ## [TS.MRANGE](https://redis.io/commands/ts.mrange/) Query a range across multiple time series by filters in forward direction ## [TS.MREVRANGE](https://redis.io/commands/ts.mrevrange/) Query a range across multiple time-series by filters in reverse direction ## [TS.GET](https://redis.io/commands/ts.get/) Get the sample with the highest timestamp from a given time series ## [TS.MGET](https://redis.io/commands/ts.mget/) Get the sample with the highest timestamp from each time series matching a specific filter ## [TS.INFO](https://redis.io/commands/ts.info/) Returns information and statistics for a time series ## [TS.QUERYINDEX](https://redis.io/commands/ts.queryindex/) Get all time series keys matching a filter list fakeredis-py-2.26.1/docs/redis-commands/index.md000066400000000000000000000004271470672414000214630ustar00rootroot00000000000000# Supported commands Commands from [Redis][1], [RedisJSON][2], [RedisTimeSeries][3], and [RedisBloom][4] are supported. [1]: /redis-commands/Redis/BITMAP/ [2]: /redis-commands/RedisJSON/JSON/ [3]: /redis-commands/RedisTimeSeries/TIMESERIES/ [4]: /redis-commands/RedisBloom/BF/ fakeredis-py-2.26.1/docs/redis-stack.md000066400000000000000000000106051470672414000176570ustar00rootroot00000000000000# Support for redis-stack To install all supported modules, you can install fakeredis with `pip install fakeredis[lua,json,bf]`. ## RedisJson support The JSON capability of Redis Stack provides JavaScript Object Notation (JSON) support for Redis. It lets you store, update, and retrieve JSON values in a Redis database, similar to any other Redis data type. Redis JSON also works seamlessly with Search and Query to let you index and query JSON documents. JSONPath's syntax: The following JSONPath syntax table was adapted from Goessner's [path syntax comparison][4]. Currently, Redis Json module is fully implemented (see [supported commands][1]). Support for JSON commands (e.g., [`JSON.GET`][2]) is implemented using [jsonpath-ng,][3] you can install it using `pip install 'fakeredis[json]'`. ```pycon >>> import fakeredis >>> from redis.commands.json.path import Path >>> r = fakeredis.FakeStrictRedis() >>> assert r.json().set("foo", Path.root_path(), {"x": "bar"}, ) == 1 >>> r.json().get("foo") {'x': 'bar'} >>> r.json().get("foo", Path("x")) 'bar' ``` ## Bloom filter support Bloom filters are a probabilistic data structure that checks for the presence of an element in a set. 
Instead of storing all the elements in the set, Bloom Filters store only the elements' hashed representation, thus sacrificing some precision. The trade-off is that Bloom Filters are very space-efficient and fast. You can get a false positive result, but never a false negative, i.e., if the bloom filter says that an element is not in the set, then it is definitely not in the set. If the bloom filter says that an element is in the set, then it is most likely in the set, but it is not guaranteed. Currently, RedisBloom module bloom filter commands are fully implemented using [pybloom-live][5]( see [supported commands][6]). You can install it using `pip install 'fakeredis[probabilistic]'`. ```pycon >>> import fakeredis >>> r = fakeredis.FakeStrictRedis() >>> r.bf().madd('key', 'v1', 'v2', 'v3') == [1, 1, 1] >>> r.bf().exists('key', 'v1') 1 >>> r.bf().exists('key', 'v5') 0 ``` ## [Count-Min Sketch][8] support Count-min sketch is a probabilistic data structure that estimates the frequency of an element in a data stream. You can install it using `pip install 'fakeredis[probabilistic]'`. ```pycon >>> import fakeredis >>> r = fakeredis.FakeStrictRedis() >>> r.cms().initbydim("cmsDim", 100, 5) OK >>> r.cms().incrby("cmsDim", ["foo"], [3]) [3] ``` ## [Cuckoo filter][9] support Cuckoo filters are a probabilistic data structure that checks for the presence of an element in a set You can install it using `pip install 'fakeredis[probabilistic]'`. ## [Redis programmability][7] Redis provides a programming interface that lets you execute custom scripts on the server itself. In Redis 7 and beyond, you can use Redis Functions to manage and run your scripts. In Redis 6.2 and below, you use Lua scripting with the EVAL command to program the server. If you wish to have Lua scripting support (this includes features like ``redis.lock.Lock``, which are implemented in Lua), you will need [lupa][10], you can install it using `pip install 'fakeredis[lua]'` By default, FakeRedis works with LUA version 5.1, to use a different version supported by lupa, set the `FAKEREDIS_LUA_VERSION` environment variable to the desired version (e.g., `5.4`). ### LUA binary modules fakeredis supports using LUA binary modules as well. In order to have your FakeRedis instance load a LUA binary module, you can use the `lua_modules` parameter. ```pycon >>> import fakeredis >>> r = fakeredis.FakeStrictRedis(lua_modules={"my_module.so"}) ``` The module `.so`/`.dll` file should be in the working directory. To install LUA modules, you can use [luarocks][11] to install the module and then copy the `.so`/`.dll` file to the working directory. 
For example, to install `lua-cjson`: ```sh luarocks install lua-cjson cp /opt/homebrew/lib/lua/5.4/cjson.so `pwd` ``` [1]:./redis-commands/RedisJson/ [2]:https://redis.io/commands/json.get/ [3]:https://github.com/h2non/jsonpath-ng [4]:https://goessner.net/articles/JsonPath/index.html#e2 [5]:https://github.com/joseph-fox/python-bloomfilter [6]:./redis-commands/BloomFilter/ [7]:https://redis.io/docs/interact/programmability/ [8]:https://redis.io/docs/data-types/probabilistic/count-min-sketch/ [9]:https://redis.io/docs/data-types/probabilistic/cuckoo-filter/ [10]:https://pypi.org/project/lupa/ [11]:https://luarocks.org/ fakeredis-py-2.26.1/docs/requirements.txt000066400000000000000000000000461470672414000204060ustar00rootroot00000000000000mkdocs==1.6.0 mkdocs-material==9.5.33 fakeredis-py-2.26.1/docs/valkey-support.md000066400000000000000000000022041470672414000204470ustar00rootroot00000000000000# Support for valkey [Valkey][1] is an open source (BSD) high-performance key/value datastore that supports a variety of workloads such as caching, message queues, and can act as a primary database. The project was forked from the open source Redis project right before the transition to their new source available licenses. FakeRedis can be used as a valkey replacement for testing and development purposes as well. To make the process more straightforward, the `FakeValkey` class sets all relevant arguments in `FakeRedis` to the valkey values. ```python from fakeredis import FakeValkey valkey = FakeValkey() valkey.set("key", "value") print(valkey.get("key")) ``` Alternatively, you can start a thread with a Fake Valkey server. ```python from threading import Thread from fakeredis import TcpFakeServer server_address = ("127.0.0.1", 6379) server = TcpFakeServer(server_address, server_type="valkey") t = Thread(target=server.serve_forever, daemon=True) t.start() import valkey r = valkey.Valkey(host=server_address[0], port=server_address[1]) r.set("foo", "bar") assert r.get("foo") == b"bar" ``` [1]: https://github.com/valkey-io/valkey fakeredis-py-2.26.1/fakeredis/000077500000000000000000000000001470672414000161275ustar00rootroot00000000000000fakeredis-py-2.26.1/fakeredis/__init__.py000066400000000000000000000022551470672414000202440ustar00rootroot00000000000000import sys from ._connection import ( FakeRedis, FakeStrictRedis, FakeConnection, ) from ._server import FakeServer from ._valkey import FakeValkey, FakeAsyncValkey, FakeStrictValkey from .aioredis import ( FakeRedis as FakeAsyncRedis, FakeConnection as FakeAsyncConnection, ) if sys.version_info >= (3, 11): from ._tcp_server import TcpFakeServer else: class TcpFakeServer: def __init__(self, *args, **kwargs): raise NotImplementedError("TcpFakeServer is only available in Python 3.11+") try: from importlib import metadata except ImportError: # for Python < 3.8 import importlib_metadata as metadata # type: ignore __version__ = metadata.version("fakeredis") __author__ = "Daniel Moran" __maintainer__ = "Daniel Moran" __email__ = "daniel@moransoftware.ca" __license__ = "BSD-3-Clause" __url__ = "https://github.com/cunla/fakeredis-py" __bugtrack_url__ = "https://github.com/cunla/fakeredis-py/issues" __all__ = [ "FakeServer", "FakeRedis", "FakeStrictRedis", "FakeConnection", "FakeAsyncRedis", "FakeAsyncConnection", "TcpFakeServer", "FakeValkey", "FakeAsyncValkey", "FakeStrictValkey", ] fakeredis-py-2.26.1/fakeredis/_basefakesocket.py000066400000000000000000000374171470672414000216260ustar00rootroot00000000000000import itertools import queue import time import 
weakref from typing import List, Any, Tuple, Optional, Callable, Union, Match, AnyStr, Generator from xmlrpc.client import ResponseError import redis from redis.connection import DefaultParser from fakeredis.model import XStream from fakeredis.model import ZSet from . import _msgs as msgs from ._command_args_parsing import extract_args from ._commands import Int, Float, SUPPORTED_COMMANDS, COMMANDS_WITH_SUB, Signature, CommandItem, Hash from ._helpers import ( SimpleError, valid_response_type, SimpleString, NoResponse, casematch, compile_pattern, QUEUED, decode_command_bytes, ) def _extract_command(fields: List[bytes]) -> Tuple[Any, List[Any]]: """Extracts the command and command arguments from a list of `bytes` fields. :param fields: A list of `bytes` fields containing the command and command arguments. :return: A tuple of the command and command arguments. Example: ``` fields = [b'GET', b'key1'] result = _extract_command(fields) print(result) # ('GET', ['key1']) ``` """ cmd = decode_command_bytes(fields[0]) if cmd in COMMANDS_WITH_SUB and len(fields) >= 2: cmd += " " + decode_command_bytes(fields[1]) cmd_arguments = fields[2:] else: cmd_arguments = fields[1:] return cmd, cmd_arguments def bin_reverse(x: int, bits_count: int) -> int: result = 0 for i in range(bits_count): if (x >> i) & 1: result |= 1 << (bits_count - 1 - i) return result class BaseFakeSocket: _clear_watches: Callable[[], None] ACCEPTED_COMMANDS_WHILE_PUBSUB = { "ping", "subscribe", "unsubscribe", "psubscribe", "punsubscribe", "quit", "ssubscribe", "sunsubscribe", } _connection_error_class = redis.ConnectionError def __init__(self, server: "FakeServer", db: int, *args: Any, **kwargs: Any) -> None: # type: ignore # noqa: F821 super(BaseFakeSocket, self).__init__(*args, **kwargs) from fakeredis import FakeServer self._server: FakeServer = server self._db_num = db self._db = server.dbs[self._db_num] self.responses: Optional[queue.Queue[bytes]] = queue.Queue() # Prevents parser from processing commands. Not used in this module, # but set by aioredis module to prevent new commands being processed # while handling a blocking command. self._paused = False self._parser = self._parse_commands() self._parser.send(None) # Assigned elsewhere self._transaction: Optional[List[Any]] self._in_transaction: bool self._pubsub: int self._transaction_failed: bool @property def version(self) -> Tuple[int, ...]: return self._server.version @property def server_type(self) -> str: return self._server.server_type def put_response(self, msg: Any) -> None: """Put a response message into the queue of responses. :param msg: The response message. """ # redis.Connection.__del__ might call self.close at any time, which # will set self.responses to None. We assume this will happen # atomically, and the code below then protects us against this. responses = self.responses if responses: responses.put(msg) def pause(self) -> None: self._paused = True def resume(self) -> None: self._paused = False self._parser.send(b"") def shutdown(self, _: Any) -> None: self._parser.close() @staticmethod def fileno() -> int: # Our fake socket must return an integer from `FakeSocket.fileno()` since a real selector # will be created. The value does not matter since we replace the selector with our own # `FakeSelector` before it is ever used. return 0 def _cleanup(self, server: Any) -> None: # noqa: F821 """Remove all the references to `self` from `server`. This is called with the server lock held, but it may be some time after self.close. 
""" for subs in server.subscribers.values(): subs.discard(self) for subs in server.psubscribers.values(): subs.discard(self) self._clear_watches() def close(self) -> None: # Mark ourselves for cleanup. This might be called from # redis.Connection.__del__, which the garbage collection could call # at any time, and hence we can't safely take the server lock. # We rely on list.append being atomic. self._server.closed_sockets.append(weakref.ref(self)) self._server = None # type: ignore self._db = None self.responses = None @staticmethod def _extract_line(buf: bytes) -> Tuple[bytes, bytes]: pos = buf.find(b"\n") + 1 assert pos > 0 line = buf[:pos] buf = buf[pos:] assert line.endswith(b"\r\n") return line, buf def _parse_commands(self) -> Generator[None, Any, None]: """Generator that parses commands. It is fed pieces of redis protocol data (via `send`) and calls `_process_command` whenever it has a complete one. """ buf = b"" while True: while self._paused or b"\n" not in buf: buf += yield line, buf = self._extract_line(buf) assert line[:1] == b"*" # array n_fields = int(line[1:-2]) fields = [] for i in range(n_fields): while b"\n" not in buf: buf += yield line, buf = self._extract_line(buf) assert line[:1] == b"$" # string length = int(line[1:-2]) while len(buf) < length + 2: buf += yield fields.append(buf[:length]) buf = buf[length + 2 :] # +2 to skip the CRLF self._process_command(fields) def _run_command( self, func: Optional[Callable[[Any], Any]], sig: Signature, args: List[Any], from_script: bool ) -> Any: command_items: List[CommandItem] = [] try: ret = sig.apply(args, self._db, self.version) if from_script and msgs.FLAG_NO_SCRIPT in sig.flags: raise SimpleError(msgs.COMMAND_IN_SCRIPT_MSG) if self._pubsub and sig.name not in BaseFakeSocket.ACCEPTED_COMMANDS_WHILE_PUBSUB: raise SimpleError(msgs.BAD_COMMAND_IN_PUBSUB_MSG) if len(ret) == 1: result = ret[0] else: args, command_items = ret result = func(*args) # type: ignore assert valid_response_type(result) except SimpleError as exc: result = exc for command_item in command_items: command_item.writeback(remove_empty_val=msgs.FLAG_LEAVE_EMPTY_VAL not in sig.flags) return result def _decode_error(self, error: SimpleError) -> ResponseError: return DefaultParser(socket_read_size=65536).parse_error(error.value) # type: ignore def _decode_result(self, result: Any) -> Any: """Convert SimpleString and SimpleError, recursively""" if isinstance(result, list): return [self._decode_result(r) for r in result] elif isinstance(result, SimpleString): return result.value elif isinstance(result, SimpleError): return self._decode_error(result) else: return result def _blocking(self, timeout: Optional[Union[float, int]], func: Callable[[bool], Any]) -> Any: """Run a function until it succeeds or timeout is reached. The timeout is in seconds, and 0 means infinite. The function is called with a boolean to indicate whether this is the first call. If it returns None, it is considered to have "failed" and is retried each time the condition variable is notified, until the timeout is reached. Returns the function return value, or None if the timeout has passed. 
""" ret = func(True) # Call with first_pass=True if ret is not None or self._in_transaction: return ret deadline = time.time() + timeout if timeout else None while True: timeout = (deadline - time.time()) if deadline is not None else None if timeout is not None and timeout <= 0: return None if self._db.condition.wait(timeout=timeout) is False: return None # Timeout expired ret = func(False) # Second pass => first_pass=False if ret is not None: return ret def _name_to_func(self, cmd_name: str) -> Tuple[Optional[Callable[[Any], Any]], Signature]: """Get the signature and the method from the command name.""" if cmd_name not in SUPPORTED_COMMANDS: # redis remaps \r or \n in an error to ' ' to make it legal protocol clean_name = cmd_name.replace("\r", " ").replace("\n", " ") raise SimpleError(msgs.UNKNOWN_COMMAND_MSG.format(clean_name)) sig = SUPPORTED_COMMANDS[cmd_name] if self._server.server_type not in sig.server_types: # redis remaps \r or \n in an error to ' ' to make it legal protocol clean_name = cmd_name.replace("\r", " ").replace("\n", " ") raise SimpleError(msgs.UNKNOWN_COMMAND_MSG.format(clean_name)) func = getattr(self, sig.func_name, None) return func, sig def sendall(self, data: AnyStr) -> None: if not self._server.connected: raise self._connection_error_class(msgs.CONNECTION_ERROR_MSG) if isinstance(data, str): data = data.encode("ascii") # type: ignore self._parser.send(data) def _process_command(self, fields: List[bytes]) -> None: if not fields: return result: Any cmd, cmd_arguments = _extract_command(fields) try: func, sig = self._name_to_func(cmd) with self._server.lock: # Clean out old connections while True: try: weak_sock = self._server.closed_sockets.pop() except IndexError: break else: sock = weak_sock() if sock: sock._cleanup(self._server) now = time.time() for db in self._server.dbs.values(): db.time = now sig.check_arity(cmd_arguments, self.version) if self._transaction is not None and msgs.FLAG_TRANSACTION not in sig.flags: self._transaction.append((func, sig, cmd_arguments)) result = QUEUED else: result = self._run_command(func, sig, cmd_arguments, False) except SimpleError as exc: if self._transaction is not None: # TODO: should not apply if the exception is from _run_command # e.g. watch inside multi self._transaction_failed = True if cmd == "exec" and exc.value.startswith("ERR "): exc.value = "EXECABORT Transaction discarded because of: " + exc.value[4:] self._transaction = None self._transaction_failed = False self._clear_watches() result = exc result = self._decode_result(result) if not isinstance(result, NoResponse): self.put_response(result) def _scan(self, keys, cursor, *args): """This is the basis of most of the ``scan`` methods. This implementation is KNOWN to be un-performant, as it requires grabbing the full set of keys over which we are investigating subsets. The SCAN command, and the other commands in the SCAN family, are able to provide to the user a set of guarantees associated with full iterations. - A full iteration always retrieves all the elements that were present in the collection from the start to the end of a full iteration. This means that if a given element is inside the collection when an iteration is started and is still there when an iteration terminates, then at some point the SCAN command returned it to the user. - A full iteration never returns any element that was NOT present in the collection from the start to the end of a full iteration. 
So if an element was removed before the start of an iteration and is never added back to the collection for all the time an iteration lasts, the SCAN command ensures that this element will never be returned. However, because the SCAN command has very little state associated (just the cursor), it has the following drawbacks: - A given element may be returned multiple times. It is up to the application to handle the case of duplicated elements, for example, only using the returned elements to perform operations that are safe when re-applied multiple times. - Elements that were not constantly present in the collection during a full iteration may be returned or not: it is undefined. """ cursor = int(cursor) (pattern, _type, count), _ = extract_args(args, ("*match", "*type", "+count")) count = 10 if count is None else count data = sorted(keys) bits_len = (len(keys) - 1).bit_length() cursor = bin_reverse(cursor, bits_len) if cursor >= len(keys): return [0, []] result_cursor = cursor + count result_data = [] regex = compile_pattern(pattern) if pattern is not None else None def match_key(key: bytes) -> Union[bool, Match[bytes], None]: return regex.match(key) if regex is not None else True def match_type(key) -> bool: return _type is None or casematch(BaseFakeSocket._key_value_type(self._db[key]).value, _type) if pattern is not None or _type is not None: for val in itertools.islice(data, cursor, cursor + count): compare_val = val[0] if isinstance(val, tuple) else val if match_key(compare_val) and match_type(compare_val): result_data.append(val) else: result_data = data[cursor : cursor + count] if result_cursor >= len(data): result_cursor = 0 return [str(bin_reverse(result_cursor, bits_len)).encode(), result_data] def _ttl(self, key: CommandItem, scale: float) -> int: if not key: return -2 elif key.expireat is None: return -1 else: return int(round((key.expireat - self._db.time) * scale)) def _encodefloat(self, value: float, humanfriendly: bool) -> bytes: if self.version >= (7,): value = 0 + value return Float.encode(value, humanfriendly) def _encodeint(self, value: int) -> bytes: if self.version >= (7,): value = 0 + value return Int.encode(value) @staticmethod def _key_value_type(key: CommandItem) -> SimpleString: if key.value is None: return SimpleString(b"none") elif isinstance(key.value, bytes): return SimpleString(b"string") elif isinstance(key.value, list): return SimpleString(b"list") elif isinstance(key.value, set): return SimpleString(b"set") elif isinstance(key.value, ZSet): return SimpleString(b"zset") elif isinstance(key.value, Hash): return SimpleString(b"hash") elif isinstance(key.value, XStream): return SimpleString(b"stream") else: assert False # pragma: nocover fakeredis-py-2.26.1/fakeredis/_command_args_parsing.py000066400000000000000000000116161470672414000230220ustar00rootroot00000000000000from typing import Tuple, List, Dict, Any, Sequence, Optional from . 
import _msgs as msgs from ._commands import Int, Float from ._helpers import SimpleError, null_terminate def _count_params(s: str) -> int: res = 0 while res < len(s) and s[res] in ".+*~": res += 1 return res def _encode_arg(s: str) -> bytes: return s[_count_params(s) :].encode() def _default_value(s: str) -> Any: if s[0] == "~": return None ind = _count_params(s) if ind == 0: return False elif ind == 1: return None else: return [None] * ind def extract_args( actual_args: Tuple[bytes, ...], expected: Tuple[str, ...], error_on_unexpected: bool = True, left_from_first_unexpected: bool = True, exception: Optional[str] = None, ) -> Tuple[List[Any], Sequence[Any]]: """Parse argument values. Extract from actual arguments which arguments exist and their value if relevant. :param actual_args: The actual arguments to parse :param expected: Arguments to look for, see below explanation. :param error_on_unexpected: Should an error be raised when actual_args contain an unexpected argument? :param left_from_first_unexpected: Once reaching an unexpected argument in actual_args, Should parsing stop? :param exception: What exception msg to raise :returns: - List of values for expected arguments. - List of remaining args. An expected argument can have parameters: - A numerical (Int) parameter is identified with '+' - A float (Float) parameter is identified with '.' - A non-numerical parameter is identified with a '*' - An argument with potentially ~ or = between the argument name and the value is identified with a '~' - A numberical argument with potentially ~ or = between the argument name and the value marked with a '~+' E.g. '++limit' will translate as an argument with 2 int parameters. >>> extract_args((b'nx', b'ex', b'324', b'xx',), ('nx', 'xx', '+ex', 'keepttl')) [True, True, 324, False], None >>> extract_args( (b'maxlen', b'10',b'nx', b'ex', b'324', b'xx',), ('~+maxlen', 'nx', 'xx', '+ex', 'keepttl')) 10, [True, True, 324, False], None """ args_info: Dict[bytes, Tuple[int, int]] = {_encode_arg(k): (i, _count_params(k)) for (i, k) in enumerate(expected)} def _parse_params(key: bytes, ind: int, _actual_args: Tuple[bytes, ...]) -> Tuple[Any, int]: """Parse an argument from actual args. :param key: Argument name to parse :param ind: index of argument in actual_args :param _actual_args: actual args """ pos, expected_following = args_info[key] argument_name = expected[pos] # Deal with parameters with optional ~/= before numerical value. 
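# For example, with the expected spec '~+maxlen', the actual arguments
# (b'maxlen', b'10'), (b'maxlen', b'~', b'10') and (b'maxlen', b'=', b'10')
# all parse to the integer 10 (see the extract_args docstring above).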
arg: Any if argument_name[0] == "~": if ind + 1 >= len(_actual_args): raise SimpleError(msgs.SYNTAX_ERROR_MSG) if _actual_args[ind + 1] != b"~" and _actual_args[ind + 1] != b"=": arg, _parsed = _actual_args[ind + 1], 1 elif ind + 2 >= len(_actual_args): raise SimpleError(msgs.SYNTAX_ERROR_MSG) else: arg, _parsed = _actual_args[ind + 2], 2 if argument_name[1] == "+": arg = Int.decode(arg) return arg, _parsed # Boolean parameters if expected_following == 0: return True, 0 if ind + expected_following >= len(_actual_args): raise SimpleError(msgs.SYNTAX_ERROR_MSG) temp_res = [] for i in range(expected_following): curr_arg: Any = _actual_args[ind + i + 1] if argument_name[i] == "+": curr_arg = Int.decode(curr_arg) elif argument_name[i] == ".": curr_arg = Float.decode(curr_arg) temp_res.append(curr_arg) if len(temp_res) == 1: return temp_res[0], expected_following else: return temp_res, expected_following results: List[Any] = [_default_value(key) for key in expected] left_args = [] i = 0 while i < len(actual_args): found = False for key in args_info: if null_terminate(actual_args[i]) == key: arg_position, _ = args_info[key] results[arg_position], parsed = _parse_params(key, i, actual_args) i += parsed found = True break if not found: if error_on_unexpected: raise ( SimpleError(msgs.SYNTAX_ERROR_MSG) if exception is None else SimpleError(exception.format(actual_args[i])) ) if left_from_first_unexpected: return results, actual_args[i:] left_args.append(actual_args[i]) i += 1 return results, left_args fakeredis-py-2.26.1/fakeredis/_commands.py000066400000000000000000000423371470672414000204520ustar00rootroot00000000000000""" Helper classes and methods used in mixins implementing various commands. Unlike _helpers.py, here the methods should be used only in mixins. """ import functools import math import re import sys import time from typing import Iterable, Tuple, Union, Optional, Any, Type, List, Callable, Sequence, Dict, Set from . import _msgs as msgs from ._helpers import null_terminate, SimpleError, Database, current_time MAX_STRING_SIZE = 512 * 1024 * 1024 SUPPORTED_COMMANDS: Dict[str, "Signature"] = dict() # Dictionary of supported commands name => Signature COMMANDS_WITH_SUB: Set[str] = set() # Commands with sub-commands class Key: """Marker to indicate that argument in signature is a key""" UNSPECIFIED = object() def __init__(self, type_: Optional[Type[Any]] = None, missing_return: Any = UNSPECIFIED) -> None: self.type_ = type_ self.missing_return = missing_return class Item: """An item stored in the database""" __slots__ = ["value", "expireat"] def __init__(self, value: Any) -> None: self.value = value self.expireat = None class CommandItem: """An item referenced by a command. It wraps an Item but has extra fields to manage updates and notifications. 
""" def __init__(self, key: bytes, db: Database, item: Optional["CommandItem"] = None, default: Any = None) -> None: self._expireat: Optional[float] if item is None: self._value = default self._expireat = None else: self._value = item.value self._expireat = item.expireat self.key = key self.db = db self._modified = False self._expireat_modified = False @property def value(self) -> Any: return self._value @value.setter def value(self, new_value: Any) -> None: self._value = new_value self._modified = True self.expireat = None @property def expireat(self) -> Optional[float]: return self._expireat @expireat.setter def expireat(self, value: Optional[float]) -> None: self._expireat = value self._expireat_modified = True self._modified = True # Since redis 6.0.7 def get(self, default: Any) -> Any: return self._value if self else default def update(self, new_value: Any) -> None: self._value = new_value self._modified = True def updated(self) -> None: self._modified = True def writeback(self, remove_empty_val: bool = True) -> None: if self._modified: self.db.notify_watch(self.key) if not isinstance(self.value, bytes) and (self.value is None or (not self.value and remove_empty_val)): self.db.pop(self.key, None) return item = self.db.setdefault(self.key, Item(None)) item.value = self.value item.expireat = self.expireat return if self._expireat_modified and self.key in self.db: self.db[self.key].expireat = self.expireat def __bool__(self) -> bool: return bool(self._value) or isinstance(self._value, bytes) __nonzero__ = __bool__ # For Python 2 class Hash: # type:ignore DECODE_ERROR = msgs.INVALID_HASH_MSG redis_type = b"hash" def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) self._expirations: Dict[bytes, int] = dict() self._values: Dict[bytes, Any] = dict() def _expire_keys(self): removed = [] now = current_time() for k in self._expirations: if self._expirations[k] < now: self._values.pop(k, None) removed.append(k) for k in removed: self._expirations.pop(k, None) def set_key_expireat(self, key: bytes, when_ms: int) -> int: now = current_time() if when_ms <= now: self._values.pop(key, None) self._expirations.pop(key, None) return 2 self._expirations[key] = when_ms return 1 def clear_key_expireat(self, key: bytes) -> bool: return self._expirations.pop(key, None) is not None def get_key_expireat(self, key: bytes) -> Optional[int]: self._expire_keys() return self._expirations.get(key, None) def __getitem__(self, key: bytes) -> Any: self._expire_keys() return self._values.get(key) def __contains__(self, key: bytes) -> bool: self._expire_keys() return self._values.__contains__(key) def __setitem__(self, key: bytes, value: Any) -> None: self._expirations.pop(key, None) self._values[key] = value def __delitem__(self, key): self._values.pop(key, None) self._expirations.pop(key, None) def __len__(self): return len(self._values) def __iter__(self): return iter(self._values) def get(self, key: bytes, default: Any = None) -> Any: return self._values.get(key, default) def keys(self) -> Iterable[bytes]: self._expire_keys() return self._values.keys() def values(self) -> Iterable[Any]: return [v for k, v in self.items()] def items(self) -> Iterable[Tuple[bytes, Any]]: self._expire_keys() return self._values.items() def update(self, values: Dict[bytes, Any]) -> None: self._expire_keys() self._values.update(values) class RedisType: @classmethod def decode(cls, *args, **kwargs): # type:ignore raise NotImplementedError class Int(RedisType): """Argument converter for 64-bit signed 
integers""" DECODE_ERROR = msgs.INVALID_INT_MSG ENCODE_ERROR = msgs.OVERFLOW_MSG MIN_VALUE = -(2**63) MAX_VALUE = 2**63 - 1 @classmethod def valid(cls, value: int) -> bool: return cls.MIN_VALUE <= value <= cls.MAX_VALUE @classmethod def decode(cls, value: bytes, decode_error: Optional[str] = None) -> int: try: out = int(value) if not cls.valid(out) or str(out).encode() != value: raise ValueError return out except ValueError: raise SimpleError(decode_error or cls.DECODE_ERROR) @classmethod def encode(cls, value: int) -> bytes: if cls.valid(value): return str(value).encode() else: raise SimpleError(cls.ENCODE_ERROR) class DbIndex(Int): """Argument converter for database indices""" DECODE_ERROR = msgs.INVALID_DB_MSG MIN_VALUE = 0 MAX_VALUE = 15 class BitOffset(Int): """Argument converter for unsigned bit positions""" DECODE_ERROR = msgs.INVALID_BIT_OFFSET_MSG MIN_VALUE = 0 MAX_VALUE = 8 * MAX_STRING_SIZE - 1 # Redis imposes 512MB limit on keys class BitValue(Int): DECODE_ERROR = msgs.INVALID_BIT_VALUE_MSG MIN_VALUE = 0 MAX_VALUE = 1 class Float(RedisType): """Argument converter for floating-point values. Redis uses long double for some cases (INCRBYFLOAT, HINCRBYFLOAT) and double for others (zset scores), but Python doesn't support `long double`. """ DECODE_ERROR = msgs.INVALID_FLOAT_MSG @classmethod def decode( cls, value: bytes, allow_leading_whitespace: bool = False, allow_erange: bool = False, allow_empty: bool = False, crop_null: bool = False, decode_error: Optional[str] = None, ) -> float: # Redis has some quirks in float parsing, with several variants. # See https://github.com/antirez/redis/issues/5706 try: if crop_null: value = null_terminate(value) if allow_empty and value == b"": value = b"0.0" if not allow_leading_whitespace and value[:1].isspace(): raise ValueError if value[-1:].isspace(): raise ValueError out = float(value) if math.isnan(out): raise ValueError if not allow_erange: # Values that over- or under-flow are explicitly rejected by # redis. This is a crude hack to determine whether the input # may have been such a value. 
if out in (math.inf, -math.inf, 0.0) and re.match(b"^[^a-zA-Z]*[1-9]", value): raise ValueError return out except ValueError: raise SimpleError(decode_error or cls.DECODE_ERROR) @classmethod def encode(cls, value: float, humanfriendly: bool) -> bytes: if math.isinf(value): return str(value).encode() elif humanfriendly: # Algorithm from `ld2string` in redis out = "{:.17f}".format(value) out = re.sub(r"\.?0+$", "", out) return out.encode() else: return "{:.17g}".format(value).encode() class Timeout(Float): """Argument converter for timeouts""" DECODE_ERROR = msgs.TIMEOUT_NEGATIVE_MSG MIN_VALUE = 0.0 class SortFloat(Float): DECODE_ERROR = msgs.INVALID_SORT_FLOAT_MSG @classmethod def decode( cls, value: bytes, allow_leading_whitespace: bool = True, allow_erange: bool = False, allow_empty: bool = True, crop_null: bool = True, decode_error: Optional[str] = None, ) -> float: return super().decode(value, allow_leading_whitespace=True, allow_empty=True, crop_null=True) @functools.total_ordering class BeforeAny: def __gt__(self, other: Any) -> bool: return False def __eq__(self, other: Any) -> bool: return isinstance(other, BeforeAny) def __hash__(self) -> int: return 1 @functools.total_ordering class AfterAny: def __lt__(self, other: Any) -> bool: return False def __eq__(self, other: Any) -> bool: return isinstance(other, AfterAny) def __hash__(self) -> int: return 1 class ScoreTest(RedisType): """Argument converter for sorted set score endpoints.""" def __init__(self, value: float, exclusive: bool = False, bytes_val: Optional[bytes] = None): self.value = value self.exclusive = exclusive self.bytes_val = bytes_val @classmethod def decode(cls, value: bytes) -> "ScoreTest": try: original_value = value exclusive = False if value[:1] == b"(": exclusive = True value = value[1:] fvalue = Float.decode( value, allow_leading_whitespace=True, allow_erange=True, allow_empty=True, crop_null=True, ) return cls(fvalue, exclusive, original_value) except SimpleError: raise SimpleError(msgs.INVALID_MIN_MAX_FLOAT_MSG) def __str__(self) -> str: if self.exclusive: return "({!r}".format(self.value) else: return repr(self.value) @property def lower_bound(self) -> Tuple[float, Union[AfterAny, BeforeAny]]: return self.value, AfterAny() if self.exclusive else BeforeAny() @property def upper_bound(self) -> Tuple[float, Union[AfterAny, BeforeAny]]: return self.value, BeforeAny() if self.exclusive else AfterAny() class StringTest(RedisType): """Argument converter for sorted set LEX endpoints.""" def __init__(self, value: Union[bytes, BeforeAny, AfterAny], exclusive: bool): self.value = value self.exclusive = exclusive @classmethod def decode(cls, value: bytes) -> "StringTest": if value == b"-": return cls(BeforeAny(), True) elif value == b"+": return cls(AfterAny(), True) elif value[:1] == b"(": return cls(value[1:], True) elif value[:1] == b"[": return cls(value[1:], False) else: raise SimpleError(msgs.INVALID_MIN_MAX_STR_MSG) # def to_scoretest(self, zset: ZSet) -> ScoreTest: # if isinstance(self.value, BeforeAny): # return ScoreTest(float("-inf"), False) # if isinstance(self.value, AfterAny): # return ScoreTest(float("inf"), False) # val: float = zset.get(self.value, None) # return ScoreTest(val, self.exclusive) class Signature: def __init__( self, name: str, func_name: str, fixed: Tuple[Type[Union[RedisType, bytes]]], repeat: Tuple[Type[Union[RedisType, bytes]]] = (), # type:ignore args: Tuple[str] = (), # type:ignore flags: str = "", server_types: Tuple[str] = ("redis", "valkey", "dragonfly"), # supported server types: 
redis, dragonfly, valkey ): self.name = name self.func_name = func_name self.fixed = fixed self.repeat = repeat self.flags = set(flags) self.command_args = args self.server_types: Set[str] = set(server_types) def check_arity(self, args: Sequence[Any], version: Tuple[int]) -> None: if len(args) == len(self.fixed): return delta = len(args) - len(self.fixed) if delta < 0 or not self.repeat: msg = msgs.WRONG_ARGS_MSG6.format(self.name) raise SimpleError(msg) if delta % len(self.repeat) != 0: msg = msgs.WRONG_ARGS_MSG7 if version >= (7,) else msgs.WRONG_ARGS_MSG6.format(self.name) raise SimpleError(msg) def apply( self, args: Sequence[Any], db: Database, version: Tuple[int] ) -> Union[Tuple[Any], Tuple[List[Any], List[CommandItem]]]: """Returns a tuple, which is either: - transformed args and a dict of CommandItems; or - a single containing a short-circuit return value """ self.check_arity(args, version) types = list(self.fixed) for i in range(len(args) - len(types)): types.append(self.repeat[i % len(self.repeat)]) args_list = list(args) # First pass: convert/validate non-keys, and short-circuit on missing keys for i, (arg, type_) in enumerate(zip(args_list, types)): if isinstance(type_, Key): if type_.missing_return is not Key.UNSPECIFIED and arg not in db: return (type_.missing_return,) elif type_ != bytes: args_list[i] = type_.decode( args_list[i], ) # Second pass: read keys and check their types command_items: List[CommandItem] = [] for i, (arg, type_) in enumerate(zip(args_list, types)): if isinstance(type_, Key): item = db.get(arg) default = None if type_.type_ is not None and item is not None and type(item.value) is not type_.type_: raise SimpleError(msgs.WRONGTYPE_MSG) if ( msgs.FLAG_DO_NOT_CREATE not in self.flags and type_.type_ is not None and item is None and type_.type_ is not bytes ): default = type_.type_() args_list[i] = CommandItem(arg, db, item, default=default) command_items.append(args_list[i]) return args_list, command_items def command(*args, **kwargs) -> Callable: # type:ignore def create_signature(func: Callable[..., Any], cmd_name: str) -> None: if " " in cmd_name: COMMANDS_WITH_SUB.add(cmd_name.split(" ")[0]) SUPPORTED_COMMANDS[cmd_name] = Signature(cmd_name, func.__name__, *args, **kwargs) def decorator(func: Callable[..., Any]) -> Callable[..., Any]: cmd_names = kwargs.pop("name", func.__name__) if isinstance(cmd_names, list): # Support for alias commands for cmd_name in cmd_names: create_signature(func, cmd_name.lower()) elif isinstance(cmd_names, str): create_signature(func, cmd_names.lower()) else: raise ValueError("command name should be a string or list of strings") return func return decorator def delete_keys(*keys: CommandItem) -> int: ans = 0 done = set() for key in keys: if key and key.key not in done: key.value = None done.add(key.key) ans += 1 return ans def fix_range(start: int, end: int, length: int) -> Tuple[int, int]: # Redis handles negative slightly differently for zrange if start < 0: start = max(0, start + length) if end < 0: end += length if start > end or start >= length: return -1, -1 end = min(end, length - 1) return start, end + 1 def fix_range_string(start: int, end: int, length: int) -> Tuple[int, int]: # Negative number handling is based on the redis source code if 0 > start > end and end < 0: return -1, -1 if start < 0: start = max(0, start + length) if end < 0: end = max(0, end + length) end = min(end, length - 1) return start, end + 1 class Timestamp(Int): """Argument converter for timestamps""" @classmethod def decode(cls, value: bytes, 
decode_error: Optional[str] = None) -> int: if value == b"*": return int(time.time()) if value == b"-": return -1 if value == b"+": return sys.maxsize return super().decode(value, decode_error=msgs.INVALID_EXPIRE_MSG) fakeredis-py-2.26.1/fakeredis/_connection.py000066400000000000000000000140761470672414000210070ustar00rootroot00000000000000import inspect import queue import sys import uuid import warnings from typing import Tuple, Any, List, Optional, Set from ._server import FakeBaseConnectionMixin, FakeServer, VersionType if sys.version_info >= (3, 11): from typing import Self else: from typing_extensions import Self import redis from fakeredis._fakesocket import FakeSocket from fakeredis._helpers import FakeSelector from . import _msgs as msgs class FakeConnection(FakeBaseConnectionMixin, redis.Connection): def connect(self) -> None: super().connect() # The selector is set in redis.Connection.connect() after _connect() is called self._selector: Optional[FakeSelector] = FakeSelector(self._sock) def _connect(self) -> FakeSocket: if not self._server.connected: raise redis.ConnectionError(msgs.CONNECTION_ERROR_MSG) return FakeSocket(self._server, db=self.db, lua_modules=self._lua_modules) def can_read(self, timeout: Optional[float] = 0) -> bool: if not self._server.connected: return True if not self._sock: self.connect() # We use check_can_read rather than can_read, because on redis-py<3.2, # FakeSelector inherits from a stub BaseSelector which doesn't # implement can_read. Normally can_read provides retries on EINTR, # but that's not necessary for the implementation of # FakeSelector.check_can_read. return self._selector is not None and self._selector.check_can_read(timeout) def _decode(self, response: Any) -> Any: if isinstance(response, list): return [self._decode(item) for item in response] elif isinstance(response, bytes): return self.encoder.decode(response) else: return response def read_response(self, **kwargs: Any) -> Any: # type: ignore if not self._sock: raise redis.ConnectionError(msgs.CONNECTION_ERROR_MSG) if not self._server.connected: try: response = self._sock.responses.get_nowait() except queue.Empty: if kwargs.get("disconnect_on_error", True): self.disconnect() raise redis.ConnectionError(msgs.CONNECTION_ERROR_MSG) else: response = self._sock.responses.get() if isinstance(response, redis.ResponseError): raise response if kwargs.get("disable_decoding", False): return response else: return self._decode(response) def repr_pieces(self) -> List[Tuple[str, Any]]: pieces = [("server", self._server), ("db", self.db)] if self.client_name: pieces.append(("client_name", self.client_name)) return pieces def __str__(self) -> str: return self.server_key class FakeRedisMixin: def __init__( self, *args: Any, server: Optional[FakeServer] = None, version: VersionType = (7,), server_type: str = "redis", lua_modules: Optional[Set[str]] = None, **kwargs: Any, ) -> None: # Interpret the positional and keyword arguments according to the # version of redis in use. 
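        # (the signature of redis.Redis.__init__ is introspected below, so the
        # positional-to-keyword mapping matches whatever redis-py release is installed)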
parameters = list(inspect.signature(redis.Redis.__init__).parameters.values())[1:] # Convert args => kwargs kwargs.update({parameters[i].name: args[i] for i in range(len(args))}) kwargs.setdefault("host", uuid.uuid4().hex) kwds = { p.name: kwargs.get(p.name, p.default) for ind, p in enumerate(parameters) if p.default != inspect.Parameter.empty } if not kwds.get("connection_pool", None): charset = kwds.get("charset", None) errors = kwds.get("errors", None) # Adapted from redis-py if charset is not None: warnings.warn(DeprecationWarning('"charset" is deprecated. Use "encoding" instead')) kwds["encoding"] = charset if errors is not None: warnings.warn(DeprecationWarning('"errors" is deprecated. Use "encoding_errors" instead')) kwds["encoding_errors"] = errors conn_pool_args = { "host", "port", "db", # Ignoring because AUTH is not implemented # 'username', # 'password', "socket_timeout", "encoding", "encoding_errors", "decode_responses", "retry_on_timeout", "max_connections", "health_check_interval", "client_name", "connected", } connection_kwargs = { "connection_class": FakeConnection, "server": server, "version": version, "server_type": server_type, "lua_modules": lua_modules, } connection_kwargs.update({arg: kwds[arg] for arg in conn_pool_args if arg in kwds}) kwds["connection_pool"] = redis.connection.ConnectionPool(**connection_kwargs) # type: ignore kwds.pop("server", None) kwds.pop("connected", None) kwds.pop("version", None) kwds.pop("server_type", None) kwds.pop("lua_modules", None) super().__init__(**kwds) @classmethod def from_url(cls, *args: Any, **kwargs: Any) -> Self: kwargs.setdefault("version", "7.4") kwargs.setdefault("server_type", "redis") pool = redis.ConnectionPool.from_url(*args, **kwargs) # Now override how it creates connections pool.connection_class = FakeConnection # Using username and password fails since AUTH is not implemented. 
# https://github.com/cunla/fakeredis-py/issues/9 pool.connection_kwargs.pop("username", None) pool.connection_kwargs.pop("password", None) return cls(connection_pool=pool) class FakeStrictRedis(FakeRedisMixin, redis.StrictRedis): # type: ignore pass class FakeRedis(FakeRedisMixin, redis.Redis): # type: ignore pass fakeredis-py-2.26.1/fakeredis/_fakesocket.py000066400000000000000000000031441470672414000207610ustar00rootroot00000000000000from typing import Optional, Set from fakeredis.commands_mixins import ( BitmapCommandsMixin, ConnectionCommandsMixin, GenericCommandsMixin, GeoCommandsMixin, HashCommandsMixin, ListCommandsMixin, PubSubCommandsMixin, ScriptingCommandsMixin, ServerCommandsMixin, StringCommandsMixin, TransactionsCommandsMixin, SetCommandsMixin, StreamsCommandsMixin, ) from fakeredis.stack import ( JSONCommandsMixin, BFCommandsMixin, CFCommandsMixin, CMSCommandsMixin, TopkCommandsMixin, TDigestCommandsMixin, TimeSeriesCommandsMixin, ) from ._basefakesocket import BaseFakeSocket from ._server import FakeServer from .commands_mixins.sortedset_mixin import SortedSetCommandsMixin from .server_specific_commands import DragonflyCommandsMixin class FakeSocket( BaseFakeSocket, GenericCommandsMixin, ScriptingCommandsMixin, HashCommandsMixin, ConnectionCommandsMixin, ListCommandsMixin, ServerCommandsMixin, StringCommandsMixin, TransactionsCommandsMixin, PubSubCommandsMixin, SetCommandsMixin, BitmapCommandsMixin, SortedSetCommandsMixin, StreamsCommandsMixin, JSONCommandsMixin, GeoCommandsMixin, BFCommandsMixin, CFCommandsMixin, CMSCommandsMixin, TopkCommandsMixin, TDigestCommandsMixin, TimeSeriesCommandsMixin, DragonflyCommandsMixin, ): def __init__( self, server: "FakeServer", db: int, lua_modules: Optional[Set[str]] = None, # noqa: F821 ) -> None: super(FakeSocket, self).__init__(server, db, lua_modules=lua_modules) fakeredis-py-2.26.1/fakeredis/_helpers.py000066400000000000000000000167151470672414000203140ustar00rootroot00000000000000import re import threading import time import weakref from collections import defaultdict from collections.abc import MutableMapping from typing import Any, Set, Callable, Dict, Optional, Iterator class SimpleString: def __init__(self, value: bytes) -> None: assert isinstance(value, bytes) self.value = value @classmethod def decode(cls, value: bytes) -> bytes: return value class SimpleError(Exception): """Exception that will be turned into a frontend-specific exception.""" def __init__(self, value: str) -> None: assert isinstance(value, str) self.value = value class NoResponse: """Returned by pub/sub commands to indicate that no response should be returned""" pass OK = SimpleString(b"OK") QUEUED = SimpleString(b"QUEUED") BGSAVE_STARTED = SimpleString(b"Background saving started") def current_time() -> int: return int(time.time() * 1000) def null_terminate(s: bytes) -> bytes: # Redis uses C functions on some strings, which means they stop at the # first NULL. ind = s.find(b"\0") if ind > -1: return s[:ind].lower() return s.lower() def casematch(a: bytes, b: bytes) -> bool: return null_terminate(a) == null_terminate(b) def decode_command_bytes(s: bytes) -> str: return s.decode(encoding="utf-8", errors="replace").lower() def compile_pattern(pattern_bytes: bytes) -> re.Pattern: # type: ignore """Compile a glob pattern (e.g., for keys) to a `bytes` regex. `fnmatch.fnmatchcase` doesn't work for this because it uses different escaping rules to redis, uses ! instead of ^ to negate a character set, and handles invalid cases (such as a [ without a ]) differently. 
This implementation was written by studying the redis implementation. """ # It's easier to work with text than bytes, because indexing bytes # doesn't behave the same in Python 3. Latin-1 will round-trip safely. pattern: str = pattern_bytes.decode( "latin-1", ) parts = ["^"] i = 0 pattern_len = len(pattern) while i < pattern_len: c = pattern[i] i += 1 if c == "?": parts.append(".") elif c == "*": parts.append(".*") elif c == "\\": if i == pattern_len: i -= 1 parts.append(re.escape(pattern[i])) i += 1 elif c == "[": parts.append("[") if i < pattern_len and pattern[i] == "^": i += 1 parts.append("^") parts_len = len(parts) # To detect if anything was added while i < pattern_len: if pattern[i] == "\\" and i + 1 < pattern_len: i += 1 parts.append(re.escape(pattern[i])) elif pattern[i] == "]": i += 1 break elif i + 2 < pattern_len and pattern[i + 1] == "-": start = pattern[i] end = pattern[i + 2] if start > end: start, end = end, start parts.append(re.escape(start) + "-" + re.escape(end)) i += 2 else: parts.append(re.escape(pattern[i])) i += 1 if len(parts) == parts_len: if parts[-1] == "[": # Empty group - will never match parts[-1] = "(?:$.)" else: # Negated empty group - matches any character assert parts[-1] == "^" parts.pop() parts[-1] = "." else: parts.append("]") else: parts.append(re.escape(c)) parts.append("\\Z") regex: bytes = "".join(parts).encode("latin-1") return re.compile(regex, flags=re.S) class Database(MutableMapping): # type: ignore def __init__(self, lock: Optional[threading.Lock], *args: Any, **kwargs: Any) -> None: self._dict: Dict[bytes, Any] = dict(*args, **kwargs) self.time = 0.0 # key to the set of connections self._watches: Dict[bytes, weakref.WeakSet[Any]] = defaultdict(weakref.WeakSet) self.condition = threading.Condition(lock) self._change_callbacks: Set[Callable[[], None]] = set() def swap(self, other: "Database") -> None: self._dict, other._dict = other._dict, self._dict self.time, other.time = other.time, self.time def notify_watch(self, key: bytes) -> None: for sock in self._watches.get(key, set()): sock.notify_watch() self.condition.notify_all() for callback in self._change_callbacks: callback() def add_watch(self, key: bytes, sock: Any) -> None: self._watches[key].add(sock) def remove_watch(self, key: bytes, sock: Any) -> None: watches = self._watches[key] watches.discard(sock) if not watches: del self._watches[key] def add_change_callback(self, callback: Callable[[], None]) -> None: self._change_callbacks.add(callback) def remove_change_callback(self, callback: Callable[[], None]) -> None: self._change_callbacks.remove(callback) def clear(self) -> None: for key in self: self.notify_watch(key) self._dict.clear() def expired(self, item: Any) -> bool: return item.expireat is not None and item.expireat < self.time def _remove_expired(self) -> None: for key in list(self._dict): item = self._dict[key] if self.expired(item): del self._dict[key] def __getitem__(self, key: bytes) -> Any: item = self._dict[key] if self.expired(item): del self._dict[key] raise KeyError(key) return item def __setitem__(self, key: bytes, value: Any) -> None: self._dict[key] = value def __delitem__(self, key: bytes) -> None: del self._dict[key] def __iter__(self) -> Iterator[bytes]: self._remove_expired() return iter(self._dict) def __len__(self) -> int: self._remove_expired() return len(self._dict) def __hash__(self) -> int: return hash(super(object, self)) def __eq__(self, other: object) -> bool: return super(object, self) == other def valid_response_type(value: Any, nested: bool = 
False) -> bool: if isinstance(value, NoResponse) and not nested: return True if value is not None and not isinstance(value, (bytes, SimpleString, SimpleError, float, int, list)): return False if isinstance(value, list): if any(not valid_response_type(item, True) for item in value): return False return True class FakeSelector(object): def __init__(self, sock: Any): self.sock = sock def check_can_read(self, timeout: Optional[float]) -> bool: if self.sock.responses.qsize(): return True if timeout is not None and timeout <= 0: return False # A sleep/poll loop is easier to mock out than messing with condition # variables. start = time.time() while True: if self.sock.responses.qsize(): return True time.sleep(0.01) now = time.time() if timeout is not None and now > start + timeout: return False @staticmethod def check_is_ready_for_command(_: Any) -> bool: return True fakeredis-py-2.26.1/fakeredis/_msgs.py000066400000000000000000000165511470672414000176210ustar00rootroot00000000000000INVALID_EXPIRE_MSG = "ERR invalid expire time in {}" WRONGTYPE_MSG = "WRONGTYPE Operation against a key holding the wrong kind of value" SYNTAX_ERROR_MSG = "ERR syntax error" SYNTAX_ERROR_LIMIT_ONLY_WITH_MSG = ( "ERR syntax error, LIMIT is only supported in combination with either BYSCORE or BYLEX" ) INVALID_HASH_MSG = "ERR hash value is not an integer" INVALID_INT_MSG = "ERR value is not an integer or out of range" INVALID_FLOAT_MSG = "ERR value is not a valid float" INVALID_WEIGHT_MSG = "ERR weight value is not a float" INVALID_OFFSET_MSG = "ERR offset is out of range" INVALID_BIT_OFFSET_MSG = "ERR bit offset is not an integer or out of range" INVALID_BIT_VALUE_MSG = "ERR bit is not an integer or out of range" BITOP_NOT_ONE_KEY_ONLY = "ERR BITOP NOT must be called with a single source key" INVALID_DB_MSG = "ERR DB index is out of range" INVALID_MIN_MAX_FLOAT_MSG = "ERR min or max is not a float" INVALID_MIN_MAX_STR_MSG = "ERR min or max not a valid string range item" STRING_OVERFLOW_MSG = "ERR string exceeds maximum allowed size (proto-max-bulk-len)" OVERFLOW_MSG = "ERR increment or decrement would overflow" NONFINITE_MSG = "ERR increment would produce NaN or Infinity" SCORE_NAN_MSG = "ERR resulting score is not a number (NaN)" INVALID_SORT_FLOAT_MSG = "ERR One or more scores can't be converted into double" SRC_DST_SAME_MSG = "ERR source and destination objects are the same" NO_KEY_MSG = "ERR no such key" INDEX_ERROR_MSG = "ERR index out of range" INDEX_NEGATIVE_ERROR_MSG = "ERR value is out of range, must be positive" # ZADD_NX_XX_ERROR_MSG6 = "ERR ZADD allows either 'nx' or 'xx', not both" ZADD_NX_XX_ERROR_MSG = "ERR XX and NX options at the same time are not compatible" ZADD_INCR_LEN_ERROR_MSG = "ERR INCR option supports a single increment-element pair" ZADD_NX_GT_LT_ERROR_MSG = "ERR GT, LT, and/or NX options at the same time are not compatible" NX_XX_GT_LT_ERROR_MSG = "ERR NX and XX, GT or LT options at the same time are not compatible" EXPIRE_UNSUPPORTED_OPTION = "ERR Unsupported option {}" ZUNIONSTORE_KEYS_MSG = "ERR at least 1 input key is needed for {}" WRONG_ARGS_MSG7 = "ERR Wrong number of args calling Redis command from script" WRONG_ARGS_MSG6 = "ERR wrong number of arguments for '{}' command" UNKNOWN_COMMAND_MSG = "ERR unknown command `{}`, with args beginning with: " EXECABORT_MSG = "EXECABORT Transaction discarded because of previous errors." 
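# Transaction (MULTI/EXEC/WATCH) related errors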
MULTI_NESTED_MSG = "ERR MULTI calls can not be nested" WITHOUT_MULTI_MSG = "ERR {0} without MULTI" WATCH_INSIDE_MULTI_MSG = "ERR WATCH inside MULTI is not allowed" NEGATIVE_KEYS_MSG = "ERR Number of keys can't be negative" TOO_MANY_KEYS_MSG = "ERR Number of keys can't be greater than number of args" TIMEOUT_NEGATIVE_MSG = "ERR timeout is negative" NO_MATCHING_SCRIPT_MSG = "NOSCRIPT No matching script. Please use EVAL." GLOBAL_VARIABLE_MSG = "ERR Script attempted to set global variables: {}" COMMAND_IN_SCRIPT_MSG = "ERR This Redis command is not allowed from scripts" BAD_SUBCOMMAND_MSG = "ERR Unknown {} subcommand or wrong # of args." BAD_COMMAND_IN_PUBSUB_MSG = "ERR only (P)SUBSCRIBE / (P)UNSUBSCRIBE / PING / QUIT allowed in this context" CONNECTION_ERROR_MSG = "FakeRedis is emulating a connection error." REQUIRES_MORE_ARGS_MSG = "ERR {} requires {} arguments or more." LOG_INVALID_DEBUG_LEVEL_MSG = "ERR Invalid debug level." LUA_COMMAND_ARG_MSG6 = "ERR Lua redis() command arguments must be strings or integers" LUA_COMMAND_ARG_MSG = "ERR Lua redis lib command arguments must be strings or integers" VALKEY_LUA_COMMAND_ARG_MSG = "Command arguments must be strings or integers script: {}" LUA_WRONG_NUMBER_ARGS_MSG = "ERR wrong number or type of arguments" SCRIPT_ERROR_MSG = "ERR Error running script (call to f_{}): @user_script:?: {}" RESTORE_KEY_EXISTS = "BUSYKEY Target key name already exists." RESTORE_INVALID_CHECKSUM_MSG = "ERR DUMP payload version or checksum are wrong" RESTORE_INVALID_TTL_MSG = "ERR Invalid TTL value, must be >= 0" JSON_WRONG_REDIS_TYPE = "ERR Existing key has wrong Redis type" JSON_KEY_NOT_FOUND = "ERR could not perform this operation on a key that doesn't exist" JSON_PATH_NOT_FOUND_OR_NOT_STRING = "ERR Path '{}' does not exist or not a string" JSON_PATH_DOES_NOT_EXIST = "ERR Path '{}' does not exist" LCS_CANT_HAVE_BOTH_LEN_AND_IDX = "ERR If you want both the length and indexes, please just use IDX." BIT_ARG_MUST_BE_ZERO_OR_ONE = "ERR The bit argument must be 1 or 0." XADD_ID_LOWER_THAN_LAST = "ERR The ID specified in XADD is equal or smaller than the target stream top item" XADD_INVALID_ID = "ERR Invalid stream ID specified as stream command argument" XGROUP_BUSYGROUP = "ERR BUSYGROUP Consumer Group name already exists" XREADGROUP_KEY_OR_GROUP_NOT_FOUND_MSG = ( "NOGROUP No such key '{0}' or consumer group '{1}' in XREADGROUP with GROUP option" ) XGROUP_GROUP_NOT_FOUND_MSG = "NOGROUP No such consumer group '{0}' for key name '{1}'" XGROUP_KEY_NOT_FOUND_MSG = ( "ERR The XGROUP subcommand requires the key to exist." " Note that for CREATE you may want to use the MKSTREAM option to create an empty stream automatically." ) GEO_UNSUPPORTED_UNIT = "unsupported unit provided. please use M, KM, FT, MI" LPOS_RANK_CAN_NOT_BE_ZERO = ( "RANK can't be zero: use 1 to start from the first match, 2 from the second ... " "or use negative to start from the end of the list" ) NUMKEYS_GREATER_THAN_ZERO_MSG = "numkeys should be greater than 0" FILTER_FULL_MSG = "" NONSCALING_FILTERS_CANNOT_EXPAND_MSG = "Nonscaling filters cannot expand" ITEM_EXISTS_MSG = "item exists" NOT_FOUND_MSG = "not found" INVALID_BITFIELD_TYPE = ( "ERR Invalid bitfield type. Use something like i16 u8. " "Note that u64 is not supported but i64 is." 
) INVALID_OVERFLOW_TYPE = "ERR Invalid OVERFLOW type specified" # TDigest error messages TDIGEST_KEY_EXISTS = "T-Digest: key already exists" TDIGEST_KEY_NOT_EXISTS = "T-Digest: key does not exist" TDIGEST_ERROR_PARSING_VALUE = "T-Digest: error parsing val parameter" TDIGEST_BAD_QUANTILE = "T-Digest: quantile should be in [0,1]" TDIGEST_BAD_RANK = "T-Digest: rank needs to be non negative" # TimeSeries error messages TIMESERIES_KEY_EXISTS = "TSDB: key already exists" TIMESERIES_INVALID_DUPLICATE_POLICY = "TSDB: Unknown DUPLICATE_POLICY" TIMESERIES_KEY_DOES_NOT_EXIST = "TSDB: the key does not exist" TIMESERIES_RULE_DOES_NOT_EXIST = "TSDB: compaction rule does not exist" TIMESERIES_RULE_EXISTS = "TSDB: the destination key already has a src rule" TIMESERIES_BAD_AGGREGATION_TYPE = "TSDB: Unknown aggregation type" TIMESERIES_INVALID_TIMESTAMP = "TSDB: invalid timestamp" TIMESERIES_BAD_TIMESTAMP = "TSDB: Couldn't parse alignTimestamp" TIMESERIES_TIMESTAMP_OLDER_THAN_RETENTION = "TSDB: Timestamp is older than retention" TIMESERIES_TIMESTAMP_LOWER_THAN_MAX_V7 = ( "TSDB: timestamp must be equal to or higher than the maximum existing timestamp" ) TIMESERIES_TIMESTAMP_LOWER_THAN_MAX_V6 = "TSDB: for incrby/decrby, timestamp should be newer than the lastest one" TIMESERIES_BAD_CHUNK_SIZE = "TSDB: CHUNK_SIZE value must be a multiple of 8 in the range [48 .. 1048576]" TIMESERIES_DUPLICATE_POLICY_BLOCK = ( "TSDB: Error at upsert, update is not supported when DUPLICATE_POLICY is set to BLOCK mode" ) TIMESERIES_BAD_FILTER_EXPRESSION = "TSDB: failed parsing labels" HEXPIRE_NUMFIELDS_DIFFERENT = "The `numfields` parameter must match the number of arguments" # Command flags FLAG_NO_SCRIPT = "s" # Command not allowed in scripts FLAG_LEAVE_EMPTY_VAL = "v" FLAG_TRANSACTION = "t" FLAG_DO_NOT_CREATE = "i" fakeredis-py-2.26.1/fakeredis/_server.py000066400000000000000000000056521470672414000201560ustar00rootroot00000000000000import logging import threading import time import weakref from collections import defaultdict from typing import Dict, Tuple, Any, List, Optional, Union try: from typing import Literal except ImportError: from typing_extensions import Literal from fakeredis._helpers import Database, FakeSelector LOGGER = logging.getLogger("fakeredis") VersionType = Union[Tuple[int, ...], int, str] ServerType = Literal["redis", "dragonfly", "valkey"] def _create_version(v: VersionType) -> Tuple[int, ...]: if isinstance(v, tuple): return v if isinstance(v, int): return (v,) if isinstance(v, str): v_split = v.split(".") return tuple(int(x) for x in v_split) return v class FakeServer: _servers_map: Dict[str, "FakeServer"] = dict() def __init__(self, version: VersionType = (7,), server_type: ServerType = "redis") -> None: self.lock = threading.Lock() self.dbs: Dict[int, Database] = defaultdict(lambda: Database(self.lock)) # Maps channel/pattern to a weak set of sockets self.subscribers: Dict[bytes, weakref.WeakSet[Any]] = defaultdict(weakref.WeakSet) self.psubscribers: Dict[bytes, weakref.WeakSet[Any]] = defaultdict(weakref.WeakSet) self.ssubscribers: Dict[bytes, weakref.WeakSet[Any]] = defaultdict(weakref.WeakSet) self.lastsave: int = int(time.time()) self.connected = True # List of weakrefs to sockets that are being closed lazily self.closed_sockets: List[Any] = [] self.version: Tuple[int, ...] 
= _create_version(version) if server_type not in ("redis", "dragonfly", "valkey"): raise ValueError(f"Unsupported server type: {server_type}") self.server_type: str = server_type @staticmethod def get_server(key: str, version: VersionType, server_type: str) -> "FakeServer": return FakeServer._servers_map.setdefault(key, FakeServer(version=version, server_type=server_type)) class FakeBaseConnectionMixin(object): def __init__(self, *args: Any, version: VersionType = (7, 0), server_type: str = "redis", **kwargs: Any) -> None: self.client_name: Optional[str] = None self.server_key: str self._sock = None self._selector: Optional[FakeSelector] = None self._server = kwargs.pop("server", None) self._lua_modules = kwargs.pop("lua_modules", set()) path = kwargs.pop("path", None) connected = kwargs.pop("connected", True) if self._server is None: if path: self.server_key = path else: host, port = kwargs.get("host"), kwargs.get("port") self.server_key = f"{host}:{port}" self.server_key += f":{server_type}:v{version}" self._server = FakeServer.get_server(self.server_key, server_type=server_type, version=version) self._server.connected = connected super().__init__(*args, **kwargs) fakeredis-py-2.26.1/fakeredis/_tcp_server.py000066400000000000000000000100601470672414000210110ustar00rootroot00000000000000import logging from dataclasses import dataclass from itertools import count from socketserver import ThreadingTCPServer, StreamRequestHandler from typing import BinaryIO, Dict, Tuple from fakeredis import FakeRedis from fakeredis import FakeServer from fakeredis._server import ServerType LOGGER = logging.getLogger("fakeredis") def to_bytes(value) -> bytes: if isinstance(value, bytes): return value return str(value).encode() @dataclass class Client: connection: FakeRedis client_address: int @dataclass class Reader: reader: BinaryIO def load_array(self, length: int): array = [None] * length for i in range(length): array[i] = self.load() return array def load(self): line = self.reader.readline().strip() match line[0:1], line[1:]: case b"*", length: return self.load_array(int(length)) case b"$", length: bulk_string = self.reader.read(int(length) + 2).strip() if len(bulk_string) != int(length): raise ValueError() return bulk_string case b":", value: return int(value) case b"+", value: return value case b"-", value: return Exception(value) case _: return None @dataclass class Writer: writer: BinaryIO def dump(self, value, dump_bulk=False): if isinstance(value, int): self.writer.write(f":{value}\r\n".encode()) elif isinstance(value, (str, bytes)): value = to_bytes(value) if dump_bulk or b"\r" in value or b"\n" in value: self.writer.write(b"$" + str(len(value)).encode() + b"\r\n" + value + b"\r\n") else: self.writer.write(b"+" + value + b"\r\n") elif isinstance(value, (list, set)): self.writer.write(f"*{len(value)}\r\n".encode()) for item in value: self.dump(item, dump_bulk=True) elif value is None: self.writer.write("$-1\r\n".encode()) elif isinstance(value, Exception): self.writer.write(f"-{value.args[0]}\r\n".encode()) class TCPFakeRequestHandler(StreamRequestHandler): def setup(self) -> None: super().setup() if self.client_address in self.server.clients: self.current_client = self.server.clients[self.client_address] else: self.current_client = Client( connection=FakeRedis(server=self.server.fake_server), client_address=self.client_address, ) self.reader = Reader(self.rfile) self.writer = Writer(self.wfile) self.server.clients[self.client_address] = self.current_client def handle(self): while True: try: 
self.data = self.reader.load() LOGGER.debug(f">>> {self.client_address[0]}: {self.data}") res = self.current_client.connection.execute_command(*self.data) LOGGER.debug(f"<<< {self.client_address[0]}: {res}") self.writer.dump(res) except Exception as e: LOGGER.debug(f"!!! {self.client_address[0]}: {e}") self.writer.dump(e) break def finish(self) -> None: del self.server.clients[self.current_client.client_address] super().finish() class TcpFakeServer(ThreadingTCPServer): def __init__( self, server_address: Tuple[str | bytes | bytearray, int], bind_and_activate: bool = True, server_type: ServerType = "redis", server_version: Tuple[int, ...] = (7, 4), ): super().__init__(server_address, TCPFakeRequestHandler, bind_and_activate) self.fake_server = FakeServer(server_type=server_type, version=server_version) self.client_ids = count(0) self.clients: Dict[int, FakeRedis] = dict() if __name__ == "__main__": server = TcpFakeServer(("localhost", 19000)) server.serve_forever() fakeredis-py-2.26.1/fakeredis/_valkey.py000066400000000000000000000027701470672414000201410ustar00rootroot00000000000000import sys from typing import Any, Dict from . import FakeStrictRedis from ._connection import FakeRedis from .aioredis import FakeRedis as FakeAsyncRedis if sys.version_info >= (3, 11): from typing import Self else: from typing_extensions import Self def _validate_server_type(args_dict: Dict[str, Any]) -> None: if "server_type" in args_dict and args_dict["server_type"] != "valkey": raise ValueError("server_type must be valkey") args_dict.setdefault("server_type", "valkey") class FakeValkey(FakeRedis): def __init__(self, *args: Any, **kwargs: Any) -> None: _validate_server_type(kwargs) super().__init__(*args, **kwargs) @classmethod def from_url(cls, *args: Any, **kwargs: Any) -> Self: _validate_server_type(kwargs) return super().from_url(*args, **kwargs) class FakeStrictValkey(FakeStrictRedis): def __init__(self, *args: Any, **kwargs: Any) -> None: _validate_server_type(kwargs) super().__init__(*args, **kwargs) @classmethod def from_url(cls, *args: Any, **kwargs: Any) -> Self: _validate_server_type(kwargs) return super().from_url(*args, **kwargs) class FakeAsyncValkey(FakeAsyncRedis): def __init__(self, *args: Any, **kwargs: Any) -> None: _validate_server_type(kwargs) super().__init__(*args, **kwargs) @classmethod def from_url(cls, *args: Any, **kwargs: Any) -> Self: _validate_server_type(kwargs) return super().from_url(*args, **kwargs) fakeredis-py-2.26.1/fakeredis/aioredis.py000066400000000000000000000224771470672414000203140ustar00rootroot00000000000000from __future__ import annotations import asyncio import sys import uuid from typing import Union, Optional, Any, Callable, Iterable, Tuple, List, Set from redis import ResponseError from ._helpers import SimpleError from ._server import FakeBaseConnectionMixin, VersionType if sys.version_info >= (3, 11): from asyncio import timeout as async_timeout else: from async_timeout import timeout as async_timeout import redis.asyncio as redis_async # aioredis was integrated into redis in version 4.2.0 as redis.asyncio from redis.asyncio.connection import DefaultParser from . import _fakesocket from . import _helpers from . import _msgs as msgs from . 
import _server class AsyncFakeSocket(_fakesocket.FakeSocket): _connection_error_class = redis_async.ConnectionError def __init__(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) self.responses: asyncio.Queue = asyncio.Queue() # type:ignore def _decode_error(self, error: SimpleError) -> ResponseError: parser = DefaultParser(1) return parser.parse_error(error.value) def put_response(self, msg: Any) -> None: if not self.responses: return self.responses.put_nowait(msg) async def _async_blocking( self, timeout: Optional[Union[float, int]], func: Callable[[bool], Any], event: asyncio.Event, callback: Callable[[], None], ) -> None: result = None try: async with async_timeout(timeout if timeout else None): while True: await event.wait() event.clear() # This is a coroutine outside the normal control flow that # locks the server, so we have to take our own lock. with self._server.lock: ret = func(False) if ret is not None: result = self._decode_result(ret) break except asyncio.TimeoutError: pass finally: with self._server.lock: self._db.remove_change_callback(callback) self.put_response(result) self.resume() def _blocking( self, timeout: Optional[Union[float, int]], func: Callable[[bool], None], ) -> Any: loop = asyncio.get_event_loop() ret = func(True) if ret is not None or self._in_transaction: return ret event = asyncio.Event() def callback() -> None: loop.call_soon_threadsafe(event.set) self._db.add_change_callback(callback) self.pause() loop.create_task(self._async_blocking(timeout, func, event, callback)) return _helpers.NoResponse() class FakeReader: def __init__(self, socket: AsyncFakeSocket) -> None: self._socket = socket async def read(self, _: int) -> bytes: return await self._socket.responses.get() # type:ignore def at_eof(self) -> bool: return self._socket.responses.empty() and not self._socket._server.connected class FakeWriter: def __init__(self, socket: AsyncFakeSocket) -> None: self._socket: Optional[AsyncFakeSocket] = socket def close(self) -> None: self._socket = None async def wait_closed(self) -> None: pass async def drain(self) -> None: pass def writelines(self, data: Iterable[Any]) -> None: if self._socket is None: return for chunk in data: self._socket.sendall(chunk) # type:ignore class FakeConnection(FakeBaseConnectionMixin, redis_async.Connection): async def _connect(self) -> None: if not self._server.connected: raise redis_async.ConnectionError(msgs.CONNECTION_ERROR_MSG) self._sock: Optional[AsyncFakeSocket] = AsyncFakeSocket(self._server, self.db, lua_modules=self._lua_modules) self._reader: Optional[FakeReader] = FakeReader(self._sock) self._writer: Optional[FakeWriter] = FakeWriter(self._sock) async def disconnect(self, nowait: bool = False, **kwargs: Any) -> None: await super().disconnect(**kwargs) self._sock = None async def can_read(self, timeout: Optional[float] = 0) -> bool: if not self.is_connected: await self.connect() if timeout == 0: return self._sock is not None and not self._sock.responses.empty() # asyncio.Queue doesn't have a way to wait for the queue to be # non-empty without consuming an item, so kludge it with a sleep/poll # loop. 
loop = asyncio.get_event_loop() start = loop.time() while True: if self._sock and not self._sock.responses.empty(): return True await asyncio.sleep(0.01) now = loop.time() if timeout is not None and now > start + timeout: return False def _decode(self, response: Any) -> Any: if isinstance(response, list): return [self._decode(item) for item in response] elif isinstance(response, bytes): return self.encoder.decode(response) else: return response async def read_response(self, **kwargs: Any) -> Any: # type: ignore if not self._sock: raise redis_async.ConnectionError(msgs.CONNECTION_ERROR_MSG) if not self._server.connected: try: response = self._sock.responses.get_nowait() except asyncio.QueueEmpty: if kwargs.get("disconnect_on_error", True): await self.disconnect() raise redis_async.ConnectionError(msgs.CONNECTION_ERROR_MSG) else: timeout: Optional[float] = kwargs.pop("timeout", None) can_read = await self.can_read(timeout) response = await self._reader.read(0) if can_read and self._reader else None if isinstance(response, redis_async.ResponseError): raise response return self._decode(response) def repr_pieces(self) -> List[Tuple[str, Any]]: pieces = [("server", self._server), ("db", self.db)] if self.client_name: pieces.append(("client_name", self.client_name)) return pieces def __str__(self) -> str: return self.server_key class FakeRedis(redis_async.Redis): def __init__( self, *, host: Optional[str] = None, port: int = 6379, db: Union[str, int] = 0, password: Optional[str] = None, socket_timeout: Optional[float] = None, connection_pool: Optional[redis_async.ConnectionPool] = None, encoding: str = "utf-8", encoding_errors: str = "strict", decode_responses: bool = False, retry_on_timeout: bool = False, max_connections: Optional[int] = None, health_check_interval: int = 0, client_name: Optional[str] = None, username: Optional[str] = None, server: Optional[_server.FakeServer] = None, connected: bool = True, version: VersionType = (7,), server_type: str = "redis", lua_modules: Optional[Set[str]] = None, **kwargs: Any, ) -> None: if not connection_pool: # Adapted from aioredis connection_kwargs = dict( host=host or uuid.uuid4().hex, port=port, db=db, # Ignoring because AUTH is not implemented # 'username', # 'password', socket_timeout=socket_timeout, encoding=encoding, encoding_errors=encoding_errors, decode_responses=decode_responses, retry_on_timeout=retry_on_timeout, health_check_interval=health_check_interval, client_name=client_name, server=server, connected=connected, connection_class=FakeConnection, max_connections=max_connections, version=version, server_type=server_type, lua_modules=lua_modules, ) connection_pool = redis_async.ConnectionPool(**connection_kwargs) # type:ignore kwargs.update( dict( db=db, password=password, socket_timeout=socket_timeout, connection_pool=connection_pool, encoding=encoding, encoding_errors=encoding_errors, decode_responses=decode_responses, retry_on_timeout=retry_on_timeout, max_connections=max_connections, health_check_interval=health_check_interval, client_name=client_name, username=username, ) ) super().__init__(**kwargs) @classmethod def from_url(cls, url: str, **kwargs: Any) -> redis_async.Redis: self = super().from_url(url, **kwargs) pool = self.connection_pool # Now override how it creates connections pool.connection_class = FakeConnection pool.connection_kwargs.setdefault("version", "7.4") pool.connection_kwargs.setdefault("server_type", "redis") pool.connection_kwargs.pop("username", None) pool.connection_kwargs.pop("password", None) return self 
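# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the library API): a minimal asyncio
# round-trip through FakeRedis / AsyncFakeSocket. The key and value names
# below are hypothetical and exist only for demonstration.
# Run with:  python -m fakeredis.aioredis
# ---------------------------------------------------------------------------
if __name__ == "__main__":  # pragma: no cover - example only

    async def _example() -> None:
        # Without an explicit `server`, each FakeRedis() gets its own FakeServer,
        # so the data written here is isolated to this client.
        client = FakeRedis(decode_responses=True)
        await client.set("example-key", "example-value")
        assert await client.get("example-key") == "example-value"

    asyncio.run(_example())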
fakeredis-py-2.26.1/fakeredis/commands.json000066400000000000000000001050751470672414000206330ustar00rootroot00000000000000{"append": ["append", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "bgsave": ["bgsave", -1, ["admin", "noscript", "no_async_loading"], 0, 0, 0, ["@admin", "@slow", "@dangerous"], [], [], []], "bitcount": ["bitcount", -2, ["readonly"], 1, 1, 1, ["@read", "@bitmap", "@slow"], [], [], []], "bitfield": ["bitfield", -2, ["write", "denyoom"], 1, 1, 1, ["@write", "@bitmap", "@slow"], [], [], [["bitfield_ro", -2, ["readonly", "fast"], 1, 1, 1, ["@read", "@bitmap", "@fast"], [], [], []]]], "bitop": ["bitop", -4, ["write", "denyoom"], 2, 3, 1, ["@write", "@bitmap", "@slow"], [], [], []], "bitpos": ["bitpos", -3, ["readonly"], 1, 1, 1, ["@read", "@bitmap", "@slow"], [], [], []], "blmove": ["blmove", 6, ["write", "denyoom", "blocking"], 1, 2, 1, ["@write", "@list", "@slow", "@blocking"], [], [], []], "blmpop": ["blmpop", -5, ["write", "blocking", "movablekeys"], 2, 2, 1, ["@write", "@list", "@slow", "@blocking"], [], [], []], "blpop": ["blpop", -3, ["write", "blocking"], 1, 1, 1, ["@write", "@list", "@slow", "@blocking"], [], [], []], "brpop": ["brpop", -3, ["write", "blocking"], 1, 1, 1, ["@write", "@list", "@slow", "@blocking"], [], [], [["brpoplpush", 4, ["write", "denyoom", "blocking"], 1, 2, 1, ["@write", "@list", "@slow", "@blocking"], [], [], []]]], "brpoplpush": ["brpoplpush", 4, ["write", "denyoom", "blocking"], 1, 2, 1, ["@write", "@list", "@slow", "@blocking"], [], [], []], "bzmpop": ["bzmpop", -5, ["write", "blocking", "movablekeys"], 2, 2, 1, ["@write", "@sortedset", "@slow", "@blocking"], [], [], []], "bzpopmax": ["bzpopmax", -3, ["write", "blocking", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast", "@blocking"], [], [], []], "bzpopmin": ["bzpopmin", -3, ["write", "blocking", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast", "@blocking"], [], [], []], "command": ["command", -1, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], [["command|count", 2, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []], ["command|docs", -2, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []], ["command|getkeys", -3, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], [["command|getkeysandflags", -3, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []]]], ["command|getkeysandflags", -3, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []], ["command|help", 2, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []], ["command|info", -2, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []], ["command|list", -2, ["loading", "stale"], 0, 0, 0, ["@slow", "@connection"], [], [], []]]], "dbsize": ["dbsize", 1, ["readonly", "fast"], 0, 0, 0, ["@keyspace", "@read", "@fast"], [], [], []], "decr": ["decr", 2, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], [["decrby", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []]]], "decrby": ["decrby", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "del": ["del", -2, ["write"], 1, 1, 1, ["@keyspace", "@write", "@slow"], [], [], []], "discard": ["discard", 1, ["noscript", "loading", "stale", "fast", "allow_busy"], 0, 0, 0, ["@fast", "@transaction"], [], [], []], "dump": ["dump", 2, ["readonly"], 1, 1, 1, ["@keyspace", "@read", "@slow"], [], [], []], "echo": ["echo", 2, ["loading", "stale", 
"fast"], 0, 0, 0, ["@fast", "@connection"], [], [], []], "eval": ["eval", -3, ["noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], [["evalsha", -3, ["noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], [["evalsha_ro", -3, ["readonly", "noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], []]]], ["evalsha_ro", -3, ["readonly", "noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], []], ["eval_ro", -3, ["readonly", "noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], []]]], "evalsha": ["evalsha", -3, ["noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], [["evalsha_ro", -3, ["readonly", "noscript", "stale", "skip_monitor", "no_mandatory_keys", "movablekeys"], 2, 2, 1, ["@slow", "@scripting"], [], [], []]]], "exec": ["exec", 1, ["noscript", "loading", "stale", "skip_slowlog"], 0, 0, 0, ["@slow", "@transaction"], [], [], []], "exists": ["exists", -2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []], "expire": ["expire", -3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], [["expireat", -3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], ["expiretime", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []]]], "expireat": ["expireat", -3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], "expiretime": ["expiretime", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []], "flushall": ["flushall", -1, ["write"], 0, 0, 0, ["@keyspace", "@write", "@slow", "@dangerous"], [], [], []], "flushdb": ["flushdb", -1, ["write"], 0, 0, 0, ["@keyspace", "@write", "@slow", "@dangerous"], [], [], []], "geoadd": ["geoadd", -5, ["write", "denyoom"], 1, 1, 1, ["@write", "@geo", "@slow"], [], [], []], "geodist": ["geodist", -4, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []], "geohash": ["geohash", -2, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []], "geopos": ["geopos", -2, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []], "georadius": ["georadius", -6, ["write", "denyoom", "movablekeys"], 1, 0, 1, ["@write", "@geo", "@slow"], [], [], [["georadiusbymember", -5, ["write", "denyoom", "movablekeys"], 1, 0, 1, ["@write", "@geo", "@slow"], [], [], [["georadiusbymember_ro", -5, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []]]], ["georadiusbymember_ro", -5, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []], ["georadius_ro", -6, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []]]], "georadiusbymember": ["georadiusbymember", -5, ["write", "denyoom", "movablekeys"], 1, 0, 1, ["@write", "@geo", "@slow"], [], [], [["georadiusbymember_ro", -5, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []]]], "georadiusbymember_ro": ["georadiusbymember_ro", -5, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []], "georadius_ro": ["georadius_ro", -6, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], []], "geosearch": ["geosearch", -7, ["readonly"], 1, 1, 1, ["@read", "@geo", "@slow"], [], [], [["geosearchstore", -8, ["write", "denyoom"], 1, 2, 1, ["@write", "@geo", "@slow"], [], [], []]]], "geosearchstore": 
["geosearchstore", -8, ["write", "denyoom"], 1, 2, 1, ["@write", "@geo", "@slow"], [], [], []], "get": ["get", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@string", "@fast"], [], [], [["getbit", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@bitmap", "@fast"], [], [], []], ["getdel", 2, ["write", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], ["getex", -2, ["write", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], ["getrange", 4, ["readonly"], 1, 1, 1, ["@read", "@string", "@slow"], [], [], []], ["getset", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []]]], "getbit": ["getbit", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@bitmap", "@fast"], [], [], []], "getdel": ["getdel", 2, ["write", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "getex": ["getex", -2, ["write", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "getrange": ["getrange", 4, ["readonly"], 1, 1, 1, ["@read", "@string", "@slow"], [], [], []], "getset": ["getset", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "hdel": ["hdel", -3, ["write", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], "hexists": ["hexists", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hexpire": ["hexpire", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], [["hexpireat", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], ["hexpiretime", -4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []]]], "hexpireat": ["hexpireat", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], "hexpiretime": ["hexpiretime", -4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hget": ["hget", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], [["hgetall", 2, ["readonly"], 1, 1, 1, ["@read", "@hash", "@slow"], [], [], []], ["hgetf", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []]]], "hgetall": ["hgetall", 2, ["readonly"], 1, 1, 1, ["@read", "@hash", "@slow"], [], [], []], "hincrby": ["hincrby", 4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], [["hincrbyfloat", 4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []]]], "hincrbyfloat": ["hincrbyfloat", 4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], "hkeys": ["hkeys", 2, ["readonly"], 1, 1, 1, ["@read", "@hash", "@slow"], [], [], []], "hlen": ["hlen", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hmget": ["hmget", -3, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hmset": ["hmset", -4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], "hpersist": ["hpersist", -4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hpexpire": ["hpexpire", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], [["hpexpireat", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], ["hpexpiretime", -4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []]]], "hpexpireat": ["hpexpireat", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], "hpexpiretime": ["hpexpiretime", -4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hpttl": ["hpttl", 
-4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hrandfield": ["hrandfield", -2, ["readonly"], 1, 1, 1, ["@read", "@hash", "@slow"], [], [], []], "hscan": ["hscan", -3, ["readonly"], 1, 1, 1, ["@read", "@hash", "@slow"], [], [], []], "hset": ["hset", -4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], [["hsetf", -6, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], ["hsetnx", 4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []]]], "hsetnx": ["hsetnx", 4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hash", "@fast"], [], [], []], "hstrlen": ["hstrlen", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "httl": ["httl", -4, ["readonly", "fast"], 1, 1, 1, ["@read", "@hash", "@fast"], [], [], []], "hvals": ["hvals", 2, ["readonly"], 1, 1, 1, ["@read", "@hash", "@slow"], [], [], []], "incr": ["incr", 2, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], [["incrby", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], [["incrbyfloat", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []]]], ["incrbyfloat", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []]]], "incrby": ["incrby", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], [["incrbyfloat", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []]]], "incrbyfloat": ["incrbyfloat", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "keys": ["keys", 2, ["readonly"], 0, 0, 0, ["@keyspace", "@read", "@slow", "@dangerous"], [], [], []], "lastsave": ["lastsave", 1, ["loading", "stale", "fast"], 0, 0, 0, ["@admin", "@fast", "@dangerous"], [], [], []], "lcs": ["lcs", -3, ["readonly"], 1, 1, 1, ["@read", "@string", "@slow"], [], [], []], "lindex": ["lindex", 3, ["readonly"], 1, 1, 1, ["@read", "@list", "@slow"], [], [], []], "linsert": ["linsert", 5, ["write", "denyoom"], 1, 1, 1, ["@write", "@list", "@slow"], [], [], []], "llen": ["llen", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@list", "@fast"], [], [], []], "lmove": ["lmove", 5, ["write", "denyoom"], 1, 2, 1, ["@write", "@list", "@slow"], [], [], []], "lmpop": ["lmpop", -4, ["write", "movablekeys"], 1, 1, 1, ["@write", "@list", "@slow"], [], [], []], "lpop": ["lpop", -2, ["write", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], []], "lpos": ["lpos", -3, ["readonly"], 1, 1, 1, ["@read", "@list", "@slow"], [], [], []], "lpush": ["lpush", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], [["lpushx", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], []]]], "lpushx": ["lpushx", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], []], "lrange": ["lrange", 4, ["readonly"], 1, 1, 1, ["@read", "@list", "@slow"], [], [], []], "lrem": ["lrem", 4, ["write"], 1, 1, 1, ["@write", "@list", "@slow"], [], [], []], "lset": ["lset", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@list", "@slow"], [], [], []], "ltrim": ["ltrim", 4, ["write"], 1, 1, 1, ["@write", "@list", "@slow"], [], [], []], "mget": ["mget", -2, ["readonly", "fast"], 1, 1, 1, ["@read", "@string", "@fast"], [], [], []], "move": ["move", 3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], "mset": ["mset", -3, ["write", "denyoom"], 1, 1, 2, ["@write", 
"@string", "@slow"], [], [], [["msetnx", -3, ["write", "denyoom"], 1, 1, 2, ["@write", "@string", "@slow"], [], [], []]]], "msetnx": ["msetnx", -3, ["write", "denyoom"], 1, 1, 2, ["@write", "@string", "@slow"], [], [], []], "multi": ["multi", 1, ["noscript", "loading", "stale", "fast", "allow_busy"], 0, 0, 0, ["@fast", "@transaction"], [], [], []], "persist": ["persist", 2, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], "pexpire": ["pexpire", -3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], [["pexpireat", -3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], ["pexpiretime", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []]]], "pexpireat": ["pexpireat", -3, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], "pexpiretime": ["pexpiretime", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []], "pfadd": ["pfadd", -2, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@hyperloglog", "@fast"], [], [], []], "pfcount": ["pfcount", -2, ["readonly"], 1, 1, 1, ["@read", "@hyperloglog", "@slow"], [], [], []], "pfmerge": ["pfmerge", -2, ["write", "denyoom"], 1, 2, 1, ["@write", "@hyperloglog", "@slow"], [], [], []], "ping": ["ping", -1, ["fast"], 0, 0, 0, ["@fast", "@connection"], [], [], []], "psetex": ["psetex", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@string", "@slow"], [], [], []], "psubscribe": ["psubscribe", -2, ["pubsub", "noscript", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], "pttl": ["pttl", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []], "publish": ["publish", 3, ["pubsub", "loading", "stale", "fast"], 0, 0, 0, ["@pubsub", "@fast"], [], [], []], "pubsub": ["pubsub", -2, [], 0, 0, 0, ["@slow"], [], [], [["pubsub|channels", -2, ["pubsub", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], ["pubsub|help", 2, ["loading", "stale"], 0, 0, 0, ["@slow"], [], [], []], ["pubsub|numpat", 2, ["pubsub", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], ["pubsub|numsub", -2, ["pubsub", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], ["pubsub|shardchannels", -2, ["pubsub", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], ["pubsub|shardnumsub", -2, ["pubsub", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []]]], "punsubscribe": ["punsubscribe", -1, ["pubsub", "noscript", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], "randomkey": ["randomkey", 1, ["readonly"], 0, 0, 0, ["@keyspace", "@read", "@slow"], [], [], []], "rename": ["rename", 3, ["write"], 1, 2, 1, ["@keyspace", "@write", "@slow"], [], [], [["renamenx", 3, ["write", "fast"], 1, 2, 1, ["@keyspace", "@write", "@fast"], [], [], []]]], "renamenx": ["renamenx", 3, ["write", "fast"], 1, 2, 1, ["@keyspace", "@write", "@fast"], [], [], []], "restore": ["restore", -4, ["write", "denyoom"], 1, 1, 1, ["@keyspace", "@write", "@slow", "@dangerous"], [], [], [["restore-asking", -4, ["write", "denyoom", "asking"], 1, 1, 1, ["@keyspace", "@write", "@slow", "@dangerous"], [], [], []]]], "rpop": ["rpop", -2, ["write", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], [["rpoplpush", 3, ["write", "denyoom"], 1, 2, 1, ["@write", "@list", "@slow"], [], [], []]]], "rpoplpush": ["rpoplpush", 3, ["write", "denyoom"], 1, 2, 1, ["@write", "@list", "@slow"], [], [], []], "rpush": ["rpush", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], 
[["rpushx", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], []]]], "rpushx": ["rpushx", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@list", "@fast"], [], [], []], "sadd": ["sadd", -3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@set", "@fast"], [], [], []], "save": ["save", 1, ["admin", "noscript", "no_async_loading", "no_multi"], 0, 0, 0, ["@admin", "@slow", "@dangerous"], [], [], []], "scan": ["scan", -2, ["readonly"], 0, 0, 0, ["@keyspace", "@read", "@slow"], [], [], []], "scard": ["scard", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@set", "@fast"], [], [], []], "script": ["script", -2, [], 0, 0, 0, ["@slow"], [], [], [["script|debug", 3, ["noscript"], 0, 0, 0, ["@slow", "@scripting"], [], [], []], ["script|exists", -3, ["noscript"], 0, 0, 0, ["@slow", "@scripting"], [], [], []], ["script|flush", -2, ["noscript"], 0, 0, 0, ["@slow", "@scripting"], [], [], []], ["script|help", 2, ["loading", "stale"], 0, 0, 0, ["@slow", "@scripting"], [], [], []], ["script|kill", 2, ["noscript", "allow_busy"], 0, 0, 0, ["@slow", "@scripting"], [], [], []], ["script|load", 3, ["noscript", "stale"], 0, 0, 0, ["@slow", "@scripting"], [], [], []]]], "sdiff": ["sdiff", -2, ["readonly"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], [["sdiffstore", -3, ["write", "denyoom"], 1, 2, 1, ["@write", "@set", "@slow"], [], [], []]]], "sdiffstore": ["sdiffstore", -3, ["write", "denyoom"], 1, 2, 1, ["@write", "@set", "@slow"], [], [], []], "select": ["select", 2, ["loading", "stale", "fast"], 0, 0, 0, ["@fast", "@connection"], [], [], []], "set": ["set", -3, ["write", "denyoom"], 1, 1, 1, ["@write", "@string", "@slow"], [], [], [["setbit", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@bitmap", "@slow"], [], [], []], ["setex", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@string", "@slow"], [], [], []], ["setnx", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], ["setrange", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@string", "@slow"], [], [], []]]], "setbit": ["setbit", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@bitmap", "@slow"], [], [], []], "setex": ["setex", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@string", "@slow"], [], [], []], "setnx": ["setnx", 3, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@string", "@fast"], [], [], []], "setrange": ["setrange", 4, ["write", "denyoom"], 1, 1, 1, ["@write", "@string", "@slow"], [], [], []], "sinter": ["sinter", -2, ["readonly"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], [["sintercard", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], []], ["sinterstore", -3, ["write", "denyoom"], 1, 2, 1, ["@write", "@set", "@slow"], [], [], []]]], "sintercard": ["sintercard", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], []], "sinterstore": ["sinterstore", -3, ["write", "denyoom"], 1, 2, 1, ["@write", "@set", "@slow"], [], [], []], "sismember": ["sismember", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@set", "@fast"], [], [], []], "smembers": ["smembers", 2, ["readonly"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], []], "smismember": ["smismember", -3, ["readonly", "fast"], 1, 1, 1, ["@read", "@set", "@fast"], [], [], []], "smove": ["smove", 4, ["write", "fast"], 1, 2, 1, ["@write", "@set", "@fast"], [], [], []], "sort": ["sort", -2, ["write", "denyoom", "movablekeys"], 1, 0, 1, ["@write", "@set", "@sortedset", "@list", "@slow", "@dangerous"], [], [], [["sort_ro", -2, ["readonly", "movablekeys"], 1, 0, 1, ["@read", 
"@set", "@sortedset", "@list", "@slow", "@dangerous"], [], [], []]]], "sort_ro": ["sort_ro", -2, ["readonly", "movablekeys"], 1, 0, 1, ["@read", "@set", "@sortedset", "@list", "@slow", "@dangerous"], [], [], []], "spop": ["spop", -2, ["write", "fast"], 1, 1, 1, ["@write", "@set", "@fast"], [], [], []], "spublish": ["spublish", 3, ["pubsub", "loading", "stale", "fast"], 1, 1, 1, ["@pubsub", "@fast"], [], [], []], "srandmember": ["srandmember", -2, ["readonly"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], []], "srem": ["srem", -3, ["write", "fast"], 1, 1, 1, ["@write", "@set", "@fast"], [], [], []], "sscan": ["sscan", -3, ["readonly"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], []], "ssubscribe": ["ssubscribe", -2, ["pubsub", "noscript", "loading", "stale"], 1, 1, 1, ["@pubsub", "@slow"], [], [], []], "strlen": ["strlen", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@string", "@fast"], [], [], []], "subscribe": ["subscribe", -2, ["pubsub", "noscript", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], "substr": ["substr", 4, ["readonly"], 1, 1, 1, ["@read", "@string", "@slow"], [], [], []], "sunion": ["sunion", -2, ["readonly"], 1, 1, 1, ["@read", "@set", "@slow"], [], [], [["sunionstore", -3, ["write", "denyoom"], 1, 2, 1, ["@write", "@set", "@slow"], [], [], []]]], "sunionstore": ["sunionstore", -3, ["write", "denyoom"], 1, 2, 1, ["@write", "@set", "@slow"], [], [], []], "sunsubscribe": ["sunsubscribe", -1, ["pubsub", "noscript", "loading", "stale"], 1, 1, 1, ["@pubsub", "@slow"], [], [], []], "swapdb": ["swapdb", 3, ["write", "fast"], 0, 0, 0, ["@keyspace", "@write", "@fast", "@dangerous"], [], [], []], "time": ["time", 1, ["loading", "stale", "fast"], 0, 0, 0, ["@fast"], [], [], []], "ttl": ["ttl", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []], "type": ["type", 2, ["readonly", "fast"], 1, 1, 1, ["@keyspace", "@read", "@fast"], [], [], []], "unlink": ["unlink", -2, ["write", "fast"], 1, 1, 1, ["@keyspace", "@write", "@fast"], [], [], []], "unsubscribe": ["unsubscribe", -1, ["pubsub", "noscript", "loading", "stale"], 0, 0, 0, ["@pubsub", "@slow"], [], [], []], "unwatch": ["unwatch", 1, ["noscript", "loading", "stale", "fast", "allow_busy"], 0, 0, 0, ["@fast", "@transaction"], [], [], []], "watch": ["watch", -2, ["noscript", "loading", "stale", "fast", "allow_busy"], 1, 1, 1, ["@fast", "@transaction"], [], [], []], "xack": ["xack", -4, ["write", "fast"], 1, 1, 1, ["@write", "@stream", "@fast"], [], [], []], "xadd": ["xadd", -5, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@stream", "@fast"], [], [], []], "xautoclaim": ["xautoclaim", -6, ["write", "fast"], 1, 1, 1, ["@write", "@stream", "@fast"], [], [], []], "xclaim": ["xclaim", -6, ["write", "fast"], 1, 1, 1, ["@write", "@stream", "@fast"], [], [], []], "xdel": ["xdel", -3, ["write", "fast"], 1, 1, 1, ["@write", "@stream", "@fast"], [], [], []], "xlen": ["xlen", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@stream", "@fast"], [], [], []], "xpending": ["xpending", -3, ["readonly"], 1, 1, 1, ["@read", "@stream", "@slow"], [], [], []], "xrange": ["xrange", -4, ["readonly"], 1, 1, 1, ["@read", "@stream", "@slow"], [], [], []], "xread": ["xread", -4, ["readonly", "blocking", "movablekeys"], 0, 0, 1, ["@read", "@stream", "@slow", "@blocking"], [], [], [["xreadgroup", -7, ["write", "blocking", "movablekeys"], 0, 0, 1, ["@write", "@stream", "@slow", "@blocking"], [], [], []]]], "xreadgroup": ["xreadgroup", -7, ["write", "blocking", "movablekeys"], 0, 0, 1, ["@write", "@stream", "@slow", 
"@blocking"], [], [], []], "xrevrange": ["xrevrange", -4, ["readonly"], 1, 1, 1, ["@read", "@stream", "@slow"], [], [], []], "xtrim": ["xtrim", -4, ["write"], 1, 1, 1, ["@write", "@stream", "@slow"], [], [], []], "zadd": ["zadd", -4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast"], [], [], []], "zcard": ["zcard", 2, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zcount": ["zcount", 4, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zdiff": ["zdiff", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], [["zdiffstore", -4, ["write", "denyoom", "movablekeys"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []]]], "zdiffstore": ["zdiffstore", -4, ["write", "denyoom", "movablekeys"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []], "zincrby": ["zincrby", 4, ["write", "denyoom", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast"], [], [], []], "zinter": ["zinter", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], [["zintercard", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], ["zinterstore", -4, ["write", "denyoom", "movablekeys"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []]]], "zintercard": ["zintercard", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zinterstore": ["zinterstore", -4, ["write", "denyoom", "movablekeys"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []], "zlexcount": ["zlexcount", 4, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zmpop": ["zmpop", -4, ["write", "movablekeys"], 1, 1, 1, ["@write", "@sortedset", "@slow"], [], [], []], "zmscore": ["zmscore", -3, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zpopmax": ["zpopmax", -2, ["write", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast"], [], [], []], "zpopmin": ["zpopmin", -2, ["write", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast"], [], [], []], "zrandmember": ["zrandmember", -2, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zrange": ["zrange", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], [["zrangebylex", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], ["zrangebyscore", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], ["zrangestore", -5, ["write", "denyoom"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []]]], "zrangebylex": ["zrangebylex", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zrangebyscore": ["zrangebyscore", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zrangestore": ["zrangestore", -5, ["write", "denyoom"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []], "zrank": ["zrank", -3, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zrem": ["zrem", -3, ["write", "fast"], 1, 1, 1, ["@write", "@sortedset", "@fast"], [], [], [["zremrangebylex", 4, ["write"], 1, 1, 1, ["@write", "@sortedset", "@slow"], [], [], []], ["zremrangebyrank", 4, ["write"], 1, 1, 1, ["@write", "@sortedset", "@slow"], [], [], []], ["zremrangebyscore", 4, ["write"], 1, 1, 1, ["@write", "@sortedset", "@slow"], [], [], []]]], "zremrangebylex": ["zremrangebylex", 4, ["write"], 1, 1, 1, ["@write", "@sortedset", "@slow"], [], [], []], "zremrangebyrank": ["zremrangebyrank", 4, ["write"], 1, 1, 1, ["@write", "@sortedset", 
"@slow"], [], [], []], "zremrangebyscore": ["zremrangebyscore", 4, ["write"], 1, 1, 1, ["@write", "@sortedset", "@slow"], [], [], []], "zrevrange": ["zrevrange", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], [["zrevrangebylex", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], ["zrevrangebyscore", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []]]], "zrevrangebylex": ["zrevrangebylex", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zrevrangebyscore": ["zrevrangebyscore", -4, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zrevrank": ["zrevrank", -3, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zscan": ["zscan", -3, ["readonly"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], []], "zscore": ["zscore", 3, ["readonly", "fast"], 1, 1, 1, ["@read", "@sortedset", "@fast"], [], [], []], "zunion": ["zunion", -3, ["readonly", "movablekeys"], 1, 1, 1, ["@read", "@sortedset", "@slow"], [], [], [["zunionstore", -4, ["write", "denyoom", "movablekeys"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []]]], "zunionstore": ["zunionstore", -4, ["write", "denyoom", "movablekeys"], 1, 2, 1, ["@write", "@sortedset", "@slow"], [], [], []], "json.del": ["json.del", -1, [], 0, 0, 0, [], [], [], []], "json.forget": ["json.forget", -1, [], 0, 0, 0, [], [], [], []], "json.get": ["json.get", -1, [], 0, 0, 0, [], [], [], []], "json.toggle": ["json.toggle", -1, [], 0, 0, 0, [], [], [], []], "json.clear": ["json.clear", -1, [], 0, 0, 0, [], [], [], []], "json.set": ["json.set", -1, [], 0, 0, 0, [], [], [], []], "json.mset": ["json.mset", -1, [], 0, 0, 0, [], [], [], []], "json.merge": ["json.merge", -1, [], 0, 0, 0, [], [], [], []], "json.mget": ["json.mget", -1, [], 0, 0, 0, [], [], [], []], "json.numincrby": ["json.numincrby", -1, [], 0, 0, 0, [], [], [], []], "json.nummultby": ["json.nummultby", -1, [], 0, 0, 0, [], [], [], []], "json.strappend": ["json.strappend", -1, [], 0, 0, 0, [], [], [], []], "json.strlen": ["json.strlen", -1, [], 0, 0, 0, [], [], [], []], "json.arrappend": ["json.arrappend", -1, [], 0, 0, 0, [], [], [], []], "json.arrindex": ["json.arrindex", -1, [], 0, 0, 0, [], [], [], []], "json.arrinsert": ["json.arrinsert", -1, [], 0, 0, 0, [], [], [], []], "json.arrlen": ["json.arrlen", -1, [], 0, 0, 0, [], [], [], []], "json.arrpop": ["json.arrpop", -1, [], 0, 0, 0, [], [], [], []], "json.arrtrim": ["json.arrtrim", -1, [], 0, 0, 0, [], [], [], []], "json.objkeys": ["json.objkeys", -1, [], 0, 0, 0, [], [], [], []], "json.objlen": ["json.objlen", -1, [], 0, 0, 0, [], [], [], []], "json.type": ["json.type", -1, [], 0, 0, 0, [], [], [], []], "ts.create": ["ts.create", -1, [], 0, 0, 0, [], [], [], [["ts.createrule", -1, [], 0, 0, 0, [], [], [], []]]], "ts.del": ["ts.del", -1, [], 0, 0, 0, [], [], [], [["ts.deleterule", -1, [], 0, 0, 0, [], [], [], []]]], "ts.alter": ["ts.alter", -1, [], 0, 0, 0, [], [], [], []], "ts.add": ["ts.add", -1, [], 0, 0, 0, [], [], [], []], "ts.madd": ["ts.madd", -1, [], 0, 0, 0, [], [], [], []], "ts.incrby": ["ts.incrby", -1, [], 0, 0, 0, [], [], [], []], "ts.decrby": ["ts.decrby", -1, [], 0, 0, 0, [], [], [], []], "ts.createrule": ["ts.createrule", -1, [], 0, 0, 0, [], [], [], []], "ts.deleterule": ["ts.deleterule", -1, [], 0, 0, 0, [], [], [], []], "ts.range": ["ts.range", -1, [], 0, 0, 0, [], [], [], []], "ts.revrange": ["ts.revrange", -1, [], 0, 0, 0, [], [], [], []], "ts.mrange": ["ts.mrange", -1, [], 0, 0, 0, [], 
[], [], []], "ts.mrevrange": ["ts.mrevrange", -1, [], 0, 0, 0, [], [], [], []], "ts.get": ["ts.get", -1, [], 0, 0, 0, [], [], [], []], "ts.mget": ["ts.mget", -1, [], 0, 0, 0, [], [], [], []], "ts.info": ["ts.info", -1, [], 0, 0, 0, [], [], [], []], "ts.queryindex": ["ts.queryindex", -1, [], 0, 0, 0, [], [], [], []], "bf.reserve": ["bf.reserve", -1, [], 0, 0, 0, [], [], [], []], "bf.add": ["bf.add", -1, [], 0, 0, 0, [], [], [], []], "bf.madd": ["bf.madd", -1, [], 0, 0, 0, [], [], [], []], "bf.insert": ["bf.insert", -1, [], 0, 0, 0, [], [], [], []], "bf.exists": ["bf.exists", -1, [], 0, 0, 0, [], [], [], []], "bf.mexists": ["bf.mexists", -1, [], 0, 0, 0, [], [], [], []], "bf.scandump": ["bf.scandump", -1, [], 0, 0, 0, [], [], [], []], "bf.loadchunk": ["bf.loadchunk", -1, [], 0, 0, 0, [], [], [], []], "bf.info": ["bf.info", -1, [], 0, 0, 0, [], [], [], []], "bf.card": ["bf.card", -1, [], 0, 0, 0, [], [], [], []], "cf.reserve": ["cf.reserve", -1, [], 0, 0, 0, [], [], [], []], "cf.add": ["cf.add", -1, [], 0, 0, 0, [], [], [], [["cf.addnx", -1, [], 0, 0, 0, [], [], [], []]]], "cf.addnx": ["cf.addnx", -1, [], 0, 0, 0, [], [], [], []], "cf.insert": ["cf.insert", -1, [], 0, 0, 0, [], [], [], [["cf.insertnx", -1, [], 0, 0, 0, [], [], [], []]]], "cf.insertnx": ["cf.insertnx", -1, [], 0, 0, 0, [], [], [], []], "cf.exists": ["cf.exists", -1, [], 0, 0, 0, [], [], [], []], "cf.mexists": ["cf.mexists", -1, [], 0, 0, 0, [], [], [], []], "cf.del": ["cf.del", -1, [], 0, 0, 0, [], [], [], []], "cf.count": ["cf.count", -1, [], 0, 0, 0, [], [], [], []], "cf.scandump": ["cf.scandump", -1, [], 0, 0, 0, [], [], [], []], "cf.loadchunk": ["cf.loadchunk", -1, [], 0, 0, 0, [], [], [], []], "cf.info": ["cf.info", -1, [], 0, 0, 0, [], [], [], []], "cms.initbydim": ["cms.initbydim", -1, [], 0, 0, 0, [], [], [], []], "cms.initbyprob": ["cms.initbyprob", -1, [], 0, 0, 0, [], [], [], []], "cms.incrby": ["cms.incrby", -1, [], 0, 0, 0, [], [], [], []], "cms.query": ["cms.query", -1, [], 0, 0, 0, [], [], [], []], "cms.merge": ["cms.merge", -1, [], 0, 0, 0, [], [], [], []], "cms.info": ["cms.info", -1, [], 0, 0, 0, [], [], [], []], "topk.reserve": ["topk.reserve", -1, [], 0, 0, 0, [], [], [], []], "topk.add": ["topk.add", -1, [], 0, 0, 0, [], [], [], []], "topk.incrby": ["topk.incrby", -1, [], 0, 0, 0, [], [], [], []], "topk.query": ["topk.query", -1, [], 0, 0, 0, [], [], [], []], "topk.count": ["topk.count", -1, [], 0, 0, 0, [], [], [], []], "topk.list": ["topk.list", -1, [], 0, 0, 0, [], [], [], []], "topk.info": ["topk.info", -1, [], 0, 0, 0, [], [], [], []], "tdigest.create": ["tdigest.create", -1, [], 0, 0, 0, [], [], [], []], "tdigest.reset": ["tdigest.reset", -1, [], 0, 0, 0, [], [], [], []], "tdigest.add": ["tdigest.add", -1, [], 0, 0, 0, [], [], [], []], "tdigest.merge": ["tdigest.merge", -1, [], 0, 0, 0, [], [], [], []], "tdigest.min": ["tdigest.min", -1, [], 0, 0, 0, [], [], [], []], "tdigest.max": ["tdigest.max", -1, [], 0, 0, 0, [], [], [], []], "tdigest.quantile": ["tdigest.quantile", -1, [], 0, 0, 0, [], [], [], []], "tdigest.cdf": ["tdigest.cdf", -1, [], 0, 0, 0, [], [], [], []], "tdigest.trimmed_mean": ["tdigest.trimmed_mean", -1, [], 0, 0, 0, [], [], [], []], "tdigest.rank": ["tdigest.rank", -1, [], 0, 0, 0, [], [], [], []], "tdigest.revrank": ["tdigest.revrank", -1, [], 0, 0, 0, [], [], [], []], "tdigest.byrank": ["tdigest.byrank", -1, [], 0, 0, 0, [], [], [], []], "tdigest.byrevrank": ["tdigest.byrevrank", -1, [], 0, 0, 0, [], [], [], []], "tdigest.info": ["tdigest.info", -1, [], 0, 0, 0, [], [], [], 
[]]}fakeredis-py-2.26.1/fakeredis/commands_mixins/000077500000000000000000000000001470672414000213175ustar00rootroot00000000000000fakeredis-py-2.26.1/fakeredis/commands_mixins/__init__.py000066400000000000000000000024211470672414000234270ustar00rootroot00000000000000from typing import Any from .bitmap_mixin import BitmapCommandsMixin from .connection_mixin import ConnectionCommandsMixin from .generic_mixin import GenericCommandsMixin from .geo_mixin import GeoCommandsMixin from .hash_mixin import HashCommandsMixin from .list_mixin import ListCommandsMixin from .pubsub_mixin import PubSubCommandsMixin from .server_mixin import ServerCommandsMixin from .set_mixin import SetCommandsMixin from .streams_mixin import StreamsCommandsMixin from .string_mixin import StringCommandsMixin try: from .scripting_mixin import ScriptingCommandsMixin except ImportError: class ScriptingCommandsMixin: # type: ignore # noqa: E303 def __init__(self, *args: Any, **kwargs: Any) -> None: kwargs.pop("lua_modules", None) super(ScriptingCommandsMixin, self).__init__(*args, **kwargs) # type: ignore from .transactions_mixin import TransactionsCommandsMixin __all__ = [ "BitmapCommandsMixin", "ConnectionCommandsMixin", "GenericCommandsMixin", "GeoCommandsMixin", "HashCommandsMixin", "ListCommandsMixin", "PubSubCommandsMixin", "ScriptingCommandsMixin", "TransactionsCommandsMixin", "ServerCommandsMixin", "SetCommandsMixin", "StreamsCommandsMixin", "StringCommandsMixin", ] fakeredis-py-2.26.1/fakeredis/commands_mixins/bitmap_mixin.py000066400000000000000000000245511470672414000243600ustar00rootroot00000000000000import re from typing import Tuple, Any, Callable, List, Optional from fakeredis import _msgs as msgs from fakeredis._commands import ( command, Key, Int, BitOffset, BitValue, fix_range_string, fix_range, CommandItem, ) from fakeredis._helpers import SimpleError, casematch class BitfieldEncoding: signed: bool size: int def __init__(self, encoding: bytes) -> None: match = re.match(rb"^([ui])(\d+)$", encoding) if match is None: raise SimpleError(msgs.INVALID_BITFIELD_TYPE) self.signed = match[1] == b"i" self.size = int(match[2]) if self.size < 1 or self.size > (64 if self.signed else 63): raise SimpleError(msgs.INVALID_BITFIELD_TYPE) class BitmapCommandsMixin: def __init(self, *args: Any, **kwargs: Any) -> None: super().__init__(*args, **kwargs) self.version: Tuple[int] @staticmethod def _bytes_as_bin_string(value: bytes) -> str: return "".join([bin(i).lstrip("0b").rjust(8, "0") for i in value]) @command((Key(bytes), Int), (bytes,), flags=msgs.FLAG_DO_NOT_CREATE) def bitpos(self, key: CommandItem, bit: int, *args: bytes) -> int: if bit != 0 and bit != 1: raise SimpleError(msgs.BIT_ARG_MUST_BE_ZERO_OR_ONE) if len(args) > 3: raise SimpleError(msgs.SYNTAX_ERROR_MSG) if len(args) == 3 and self.version < (7,): raise SimpleError(msgs.SYNTAX_ERROR_MSG) bit_mode = False if len(args) == 3 and self.version >= (7,): bit_mode = casematch(args[2], b"bit") if not bit_mode and not casematch(args[2], b"byte"): raise SimpleError(msgs.SYNTAX_ERROR_MSG) if key.value is None: # The first clear bit is at 0, the first set bit is not found (-1). 
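# Illustrative example (not from the original source), matching the branch below
# and standard Redis semantics for a missing key:
#   BITPOS missing 0  -> 0    an absent key reads as all-zero bytes
#   BITPOS missing 1  -> -1   there is no set bit to find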
return -1 if bit == 1 else 0 start = 0 if len(args) == 0 else Int.decode(args[0]) source_value = self._bytes_as_bin_string(key.value) if bit_mode else key.value end = len(source_value) if len(args) <= 1 else Int.decode(args[1]) length = len(source_value) start, end = fix_range(start, end, length) if start == end == -1: return -1 source_value = source_value[start:end] if bit_mode else self._bytes_as_bin_string(source_value[start:end]) result = source_value.find(str(bit)) if result != -1: result += start if bit_mode else (start * 8) elif bit == 0 and len(args) <= 1: # Redis treats the value as padded with zero bytes to an infinity # if the user is looking for the first clear bit and no end is set. result = len(key.value) * 8 return result @command(name="BITCOUNT", fixed=(Key(bytes),), repeat=(bytes,), flags=msgs.FLAG_DO_NOT_CREATE) def bitcount(self, key: CommandItem, *args: bytes) -> int: # Redis checks the argument count before decoding integers. That's why # we can't declare them as Int. if len(args) == 0: if key.value is None: return 0 return bin(int.from_bytes(key.value, "little")).count("1") if not 2 <= len(args) <= 3: raise SimpleError(msgs.SYNTAX_ERROR_MSG) try: start = Int.decode(args[0]) end = Int.decode(args[1]) except SimpleError as e: if self.version >= (7, 4): raise e return 0 bit_mode = False if len(args) == 3 and self.version < (7,): raise SimpleError(msgs.SYNTAX_ERROR_MSG) if len(args) == 3 and self.version >= (7,): bit_mode = casematch(args[2], b"bit") if not bit_mode and not casematch(args[2], b"byte"): raise SimpleError(msgs.SYNTAX_ERROR_MSG) if key.value is None: return 0 if bit_mode: value = self._bytes_as_bin_string(key.value if key.value else b"") start, end = fix_range_string(start, end, len(value)) res: int = value.count("1", start, end) return res start, end = fix_range_string(start, end, len(key.value)) value = key.value[start:end] return bin(int.from_bytes(value, "little")).count("1") @command(fixed=(Key(bytes), BitOffset)) def getbit(self, key: CommandItem, offset: int) -> int: value = key.get(b"") byte = offset // 8 remaining = offset % 8 actual_bitoffset = 7 - remaining try: actual_val = value[byte] except IndexError: return 0 return 1 if (1 << actual_bitoffset) & actual_val else 0 @command((Key(bytes), BitOffset, BitValue)) def setbit(self, key: CommandItem, offset: int, value: int) -> int: val = key.value if key.value is not None else b"\x00" byte = offset // 8 remaining = offset % 8 actual_bitoffset = 7 - remaining if len(val) - 1 < byte: # We need to expand val so that we can set the appropriate # bit. 
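# Illustrative example (not from the original source): with a one-byte value and
# SETBIT key 17 1, byte == 17 // 8 == 2, so two zero bytes are appended
# (needed == 2 - (1 - 1) == 2) before the bit at offset 17 is set in byte index 2.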
needed = byte - (len(val) - 1) val += b"\x00" * needed old_byte = val[byte] if value == 1: new_byte = old_byte | (1 << actual_bitoffset) else: new_byte = old_byte & ~(1 << actual_bitoffset) old_value = value if old_byte == new_byte else 1 - value reconstructed = bytearray(val) reconstructed[byte] = new_byte if bytes(reconstructed) != key.value or (self.version == (6,) and old_byte != new_byte): key.update(bytes(reconstructed)) return old_value @staticmethod def _bitop(op: Callable[[Any, Any], Any], *keys: CommandItem) -> Any: value = keys[0].value ans = keys[0].value i = 1 while i < len(keys): value = keys[i].value if keys[i].value is not None else b"" ans = bytes(op(a, b) for a, b in zip(ans, value)) i += 1 return ans @command((bytes, Key()), (Key(bytes),)) def bitop(self, op_name: bytes, dst: CommandItem, *keys: CommandItem) -> int: if len(keys) == 0: raise SimpleError(msgs.WRONG_ARGS_MSG6.format("bitop")) if casematch(op_name, b"and"): res = self._bitop(lambda a, b: a & b, *keys) elif casematch(op_name, b"or"): res = self._bitop(lambda a, b: a | b, *keys) elif casematch(op_name, b"xor"): res = self._bitop(lambda a, b: a ^ b, *keys) elif casematch(op_name, b"not"): if len(keys) != 1: raise SimpleError(msgs.BITOP_NOT_ONE_KEY_ONLY) val = keys[0].value res = bytes([((1 << 8) - 1 - val[i]) for i in range(len(val))]) else: raise SimpleError(msgs.WRONG_ARGS_MSG6.format("bitop")) dst.value = res return len(dst.value) def _bitfield_get(self, key: CommandItem, encoding: BitfieldEncoding, offset: int) -> int: ans = 0 for i in range(0, encoding.size): ans <<= 1 if self.getbit(key, offset + i): ans += -1 if encoding.signed and i == 0 else 1 return ans def _bitfield_set( self, key: CommandItem, encoding: BitfieldEncoding, offset: int, overflow: bytes, value: Optional[int] = None, incr: int = 0, ) -> Optional[int]: if encoding.signed: min_value = -(1 << (encoding.size - 1)) max_value = (1 << (encoding.size - 1)) - 1 else: min_value = 0 max_value = (1 << encoding.size) - 1 ans = self._bitfield_get(key, encoding, offset) new_value = ans if value is None else value if not encoding.signed: new_value &= (1 << 64) - 1 # force cast to uint64_t if overflow == b"FAIL" and not (min_value <= new_value + incr <= max_value): return None # yes, failing in this context is not writing the value elif overflow == b"SAT": if new_value + incr > max_value: new_value, incr = max_value, 0 # REDIS only checks for unsigned underflow on negative incr: if (encoding.signed or incr < 0) and new_value + incr < min_value: new_value, incr = min_value, 0 new_value += incr new_value &= (1 << encoding.size) - 1 # normalize signed number by changing the sign associated to higher bit: if encoding.signed and new_value > max_value: new_value -= 1 << encoding.size for i in range(0, encoding.size): bit = (new_value >> (encoding.size - i - 1)) & 1 self.setbit(key, offset + i, bit) return new_value if value is None else ans @command(fixed=(Key(bytes),), repeat=(bytes,)) def bitfield(self, key: CommandItem, *args: bytes) -> List[Optional[int]]: overflow = b"WRAP" results: List[Optional[int]] = [] i = 0 while i < len(args): if casematch(args[i], b"overflow") and i + 1 < len(args): overflow = args[i + 1].upper() if overflow not in (b"WRAP", b"SAT", b"FAIL"): raise SimpleError(msgs.INVALID_OVERFLOW_TYPE) i += 2 elif casematch(args[i], b"get") and i + 2 < len(args): encoding = BitfieldEncoding(args[i + 1]) offset = BitOffset.decode(args[i + 2]) results.append(self._bitfield_get(key, encoding, offset)) i += 3 elif casematch(args[i], b"set") and i + 
3 < len(args): old_value = self._bitfield_set( key=key, encoding=BitfieldEncoding(args[i + 1]), offset=BitOffset.decode(args[i + 2]), value=Int.decode(args[i + 3]), overflow=overflow, ) results.append(old_value) i += 4 elif casematch(args[i], b"incrby") and i + 3 < len(args): old_value = self._bitfield_set( key=key, encoding=BitfieldEncoding(args[i + 1]), offset=BitOffset.decode(args[i + 2]), incr=Int.decode(args[i + 3]), overflow=overflow, ) results.append(old_value) i += 4 else: raise SimpleError(msgs.SYNTAX_ERROR_MSG) return results fakeredis-py-2.26.1/fakeredis/commands_mixins/connection_mixin.py000066400000000000000000000027551470672414000252450ustar00rootroot00000000000000from typing import Any, List, Union from fakeredis import _msgs as msgs from fakeredis._commands import command, DbIndex from fakeredis._helpers import SimpleError, OK, SimpleString, Database, casematch PONG = SimpleString(b"PONG") class ConnectionCommandsMixin: # Connection commands # TODO: auth, quit def __init__(self, *args: Any, **kwargs: Any) -> None: super(ConnectionCommandsMixin, self).__init__(*args, **kwargs) self._db: Database self._db_num: int self._pubsub: int self._server: Any @command((bytes,)) def echo(self, message: bytes) -> bytes: return message @command((), (bytes,)) def ping(self, *args: bytes) -> Union[List[bytes], bytes, SimpleString]: if len(args) > 1: msg = msgs.WRONG_ARGS_MSG6.format("ping") raise SimpleError(msg) if self._pubsub: return [b"pong", args[0] if args else b""] else: return args[0] if args else PONG @command(name="SELECT", fixed=(DbIndex,)) def select(self, index: DbIndex) -> SimpleString: self._db = self._server.dbs[index] self._db_num = index # type: ignore return OK @command(name="CLIENT SETINFO", fixed=(bytes, bytes), repeat=()) def client_setinfo(self, lib_data: bytes, value: bytes) -> SimpleString: if not casematch(lib_data, b"LIB-NAME") and not casematch(lib_data, b"LIB-VER"): raise SimpleError(msgs.SYNTAX_ERROR_MSG) return OK fakeredis-py-2.26.1/fakeredis/commands_mixins/generic_mixin.py000066400000000000000000000322151470672414000245140ustar00rootroot00000000000000import hashlib import pickle import random from typing import Tuple, Any, Callable, List, Optional from fakeredis import _msgs as msgs from fakeredis._command_args_parsing import extract_args from fakeredis._commands import ( command, Key, Int, DbIndex, BeforeAny, CommandItem, SortFloat, delete_keys, Hash, ) from fakeredis._helpers import compile_pattern, SimpleError, OK, casematch, Database, SimpleString from fakeredis.model import ZSet class GenericCommandsMixin: _ttl: Callable[[CommandItem, float], int] _scan: Callable[[CommandItem, int, bytes, bytes], Tuple[int, List[bytes]]] _key_value_type: Callable[[CommandItem], SimpleString] def __init__(self, *args: Any, **kwargs: Any) -> None: super(GenericCommandsMixin, self).__init__(*args, **kwargs) self.version: Tuple[int] self._server: Any self._db: Database self._db_num: int def _lookup_key(self, key: bytes, pattern: bytes) -> Optional[bytes]: """Python implementation of lookupKeyByPattern from redis""" if pattern == b"#": return key p = pattern.find(b"*") if p == -1: return None prefix = pattern[:p] suffix = pattern[p + 1 :] arrow = suffix.find(b"->", 0, -1) if arrow != -1: field = suffix[arrow + 2 :] suffix = suffix[:arrow] else: field = None new_key = prefix + key + suffix item = CommandItem(new_key, self._db, item=self._db.get(new_key)) if item.value is None: return None if field is not None: if not isinstance(item.value, Hash): return None return 
item.value.get(field) # type: ignore else: if not isinstance(item.value, bytes): return None return item.value def _expireat(self, key: CommandItem, timestamp: float, *args: bytes) -> int: ( nx, xx, gt, lt, ), _ = extract_args( args, ( "nx", "xx", "gt", "lt", ), exception=msgs.EXPIRE_UNSUPPORTED_OPTION, ) if self.version < (7,) and any((nx, xx, gt, lt)): raise SimpleError(msgs.WRONG_ARGS_MSG6.format("expire")) counter = (nx, gt, lt).count(True) if (counter > 1) or (nx and xx): raise SimpleError(msgs.NX_XX_GT_LT_ERROR_MSG) if ( not key or (xx and key.expireat is None) or (nx and key.expireat is not None) or (gt and key.expireat is not None and timestamp < key.expireat) or (lt and key.expireat is not None and timestamp > key.expireat) ): return 0 key.expireat = timestamp return 1 @command(name="DEL", fixed=(Key(),), repeat=(Key(),)) def del_(self, *keys: CommandItem): return delete_keys(*keys) @command(name="DUMP", fixed=(Key(missing_return=None),)) def dump(self, key: CommandItem) -> Optional[bytes]: value = pickle.dumps(key.value) checksum = hashlib.sha1(value).digest() return checksum + value @command(name="EXISTS", fixed=(Key(),), repeat=(Key(),)) def exists(self, *keys): ret = 0 for key in keys: if key: ret += 1 return ret @command(name="EXPIRE", fixed=(Key(), Int), repeat=(bytes,)) def expire(self, key: CommandItem, seconds: int, *args: bytes) -> int: res = self._expireat(key, self._db.time + seconds, *args) return res @command(name="EXPIREAT", fixed=(Key(), Int)) def expireat(self, key: CommandItem, timestamp: int) -> int: return self._expireat(key, float(timestamp)) @command(name="KEYS", fixed=(bytes,)) def keys(self, pattern: bytes) -> List[bytes]: if pattern == b"*": return list(self._db) else: regex = compile_pattern(pattern) return [key for key in self._db if regex.match(key)] @command(name="MOVE", fixed=(Key(), DbIndex)) def move(self, key: CommandItem, db: int) -> int: if db == self._db_num: raise SimpleError(msgs.SRC_DST_SAME_MSG) if not key or key.key in self._server.dbs[db]: return 0 # TODO: what is the interaction with expiry? 
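# Illustrative sketch (not from the original source) of the semantics coded here:
# MOVE returns 0 when the key is missing or already exists in the target db, and
# 1 otherwise; the item is handed to the target db's dict and the source copy is
# dropped by clearing key.value below.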
self._server.dbs[db][key.key] = self._server.dbs[self._db_num][key.key] key.value = None # Causes deletion return 1 @command(name="PERSIST", fixed=(Key(),)) def persist(self, key: CommandItem) -> int: if key.expireat is None: return 0 key.expireat = None return 1 @command(name="PEXPIRE", fixed=(Key(), Int)) def pexpire(self, key: CommandItem, ms: int) -> int: return self._expireat(key, self._db.time + ms / 1000.0) @command(name="PEXPIREAT", fixed=(Key(), Int)) def pexpireat(self, key: CommandItem, ms_timestamp: int) -> int: return self._expireat(key, ms_timestamp / 1000.0) @command(name="PTTL", fixed=(Key(),)) def pttl(self, key: CommandItem) -> int: return self._ttl(key, 1000.0) @command(name="EXPIRETIME", fixed=(Key(),)) def expiretime(self, key: CommandItem) -> int: if key.value is None: return -2 if key.expireat is None: return -1 return key.expireat @command(name="PEXPIRETIME", fixed=(Key(),)) def pexpiretime(self, key: CommandItem) -> float: if key.value is None: return -2 if key.expireat is None: return -1 return key.expireat * 1000 @command(name="RANDOMKEY", fixed=()) def randomkey(self): keys = list(self._db.keys()) if not keys: return None return random.choice(keys) @command(name="RENAME", fixed=(Key(), Key())) def rename(self, key: CommandItem, newkey: CommandItem) -> SimpleString: if not key: raise SimpleError(msgs.NO_KEY_MSG) # TODO: check interaction with WATCH if newkey.key != key.key: newkey.value = key.value newkey.expireat = key.expireat key.value = None return OK @command(name="RENAMENX", fixed=(Key(), Key())) def renamenx(self, key: CommandItem, newkey: CommandItem) -> int: if not key: raise SimpleError(msgs.NO_KEY_MSG) if newkey: return 0 self.rename(key, newkey) return 1 @command(name="RESTORE", fixed=(Key(), Int, bytes), repeat=(bytes,)) def restore(self, key: CommandItem, ttl: int, value: bytes, *args: bytes) -> str: (replace,), _ = extract_args(args, ("replace",)) if key and not replace: raise SimpleError(msgs.RESTORE_KEY_EXISTS) checksum, value = value[:20], value[20:] if hashlib.sha1(value).digest() != checksum: raise SimpleError(msgs.RESTORE_INVALID_CHECKSUM_MSG) if ttl < 0: raise SimpleError(msgs.RESTORE_INVALID_TTL_MSG) if ttl == 0: expireat = None else: expireat = self._db.time + ttl / 1000.0 key.value = pickle.loads(value) key.expireat = expireat return OK @command(name="SCAN", fixed=(Int,), repeat=(bytes, bytes)) def scan(self, cursor, *args): return self._scan(list(self._db), cursor, *args) @command(name="SORT", fixed=(Key(),), repeat=(bytes,)) def sort(self, key, *args): if key.value is not None and not isinstance(key.value, (set, list, ZSet)): raise SimpleError(msgs.WRONGTYPE_MSG) ( asc, desc, alpha, store, sortby, (limit_start, limit_count), ), left_args = extract_args( args, ("asc", "desc", "alpha", "*store", "*by", "++limit"), error_on_unexpected=False, left_from_first_unexpected=False, ) limit_start = limit_start or 0 limit_count = -1 if limit_count is None else limit_count dontsort = sortby is not None and b"*" not in sortby i = 0 get = [] while i < len(left_args): if casematch(left_args[i], b"get") and i + 1 < len(left_args): get.append(left_args[i + 1]) i += 2 else: raise SimpleError(msgs.SYNTAX_ERROR_MSG) # TODO: force sorting if the object is a set and either in Lua or # storing to a key, to match redis behaviour. items = list(key.value) if key.value is not None else [] # These transformations are based on the redis implementation, but # changed to produce a half-open range. 
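# Illustrative example (assumption: the list holds enough items): SORT key LIMIT 2 3
# gives start == 2 and end == 5, i.e. items[2:5]; an absent or negative count
# leaves end == len(items).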
start = max(limit_start, 0) end = len(items) if limit_count < 0 else start + limit_count if start >= len(items): start = end = len(items) - 1 end = min(end, len(items)) if not get: get.append(b"#") if sortby is None: sortby = b"#" if not dontsort: def sort_key(val: bytes) -> bytes: byval = self._lookup_key(val, sortby) # TODO: use locale.strxfrm when not storing? But then need to decode too. if byval is None: byval = BeforeAny() return byval def sort_key_score(val: bytes) -> Tuple[float, bytes]: byval = self._lookup_key(val, sortby) score = SortFloat.decode(byval) if byval is not None else 0.0 return score, val sort_func = sort_key if alpha else sort_key_score items.sort(key=sort_func, reverse=desc) elif isinstance(key.value, (list, ZSet)): items.reverse() out = [] for row in items[start:end]: for g in get: v = self._lookup_key(row, g) if store is not None and v is None: v = b"" out.append(v) if store is not None: item = CommandItem(store, self._db, item=self._db.get(store)) item.value = out item.writeback() return len(out) else: return out @command(name="SORT_RO", fixed=(Key(),), repeat=(bytes,)) def sort_ro(self, key: CommandItem, *args: bytes) -> List[bytes]: if key.value is not None and not isinstance(key.value, (set, list, ZSet)): raise SimpleError(msgs.WRONGTYPE_MSG) ( asc, desc, alpha, sortby, (limit_start, limit_count), ), left_args = extract_args( args, ("asc", "desc", "alpha", "*by", "++limit"), error_on_unexpected=False, left_from_first_unexpected=False, ) limit_start = limit_start or 0 limit_count = -1 if limit_count is None else limit_count dontsort = sortby is not None and b"*" not in sortby i = 0 get = [] while i < len(left_args): if casematch(left_args[i], b"get") and i + 1 < len(left_args): get.append(left_args[i + 1]) i += 2 else: raise SimpleError(msgs.SYNTAX_ERROR_MSG) # TODO: force sorting if the object is a set and either in Lua or # storing to a key, to match redis behaviour. items = list(key.value) if key.value is not None else [] # These transformations are based on the redis implementation, but # changed to produce a half-open range. start = max(limit_start, 0) end = len(items) if limit_count < 0 else start + limit_count if start >= len(items): start = end = len(items) - 1 end = min(end, len(items)) if not get: get.append(b"#") if sortby is None: sortby = b"#" if not dontsort: def sort_key(val: bytes) -> bytes: byval = self._lookup_key(val, sortby) # TODO: use locale.strxfrm when not storing? But then need to decode too. 
if byval is None: byval = BeforeAny() return byval def sort_key_score(val: bytes) -> Tuple[float, bytes]: byval = self._lookup_key(val, sortby) score = SortFloat.decode(byval) if byval is not None else 0.0 return score, val sort_func = sort_key if alpha else sort_key_score items.sort(key=sort_func, reverse=desc) elif isinstance(key.value, (list, ZSet)): items.reverse() out: List[bytes] = [] for row in items[start:end]: for g in get: v = self._lookup_key(row, g) out.append(v) return out @command(name="TTL", fixed=(Key(),)) def ttl(self, key: CommandItem) -> int: return self._ttl(key, 1.0) @command(name="TYPE", fixed=(Key(),)) def type_cmd(self, key: CommandItem) -> SimpleString: return self._key_value_type(key) @command(name="UNLINK", fixed=(Key(),), repeat=(Key(),)) def unlink(self, *keys: CommandItem) -> int: return delete_keys(*keys) fakeredis-py-2.26.1/fakeredis/commands_mixins/geo_mixin.py000066400000000000000000000254011470672414000236510ustar00rootroot00000000000000import sys from collections import namedtuple from typing import List, Any, Callable, Optional, Union from fakeredis import _msgs as msgs from fakeredis._command_args_parsing import extract_args from fakeredis._commands import command, Key, Float, CommandItem from fakeredis._helpers import SimpleError, Database from fakeredis.model import ZSet from fakeredis.geo import distance, geo_encode, geo_decode UNIT_TO_M = {"km": 0.001, "mi": 0.000621371, "ft": 3.28084, "m": 1} def translate_meters_to_unit(unit_arg: bytes) -> float: """number of meters in a unit. :param unit_arg: unit name (km, mi, ft, m) :returns: number of meters in unit """ unit = UNIT_TO_M.get(unit_arg.decode().lower()) if unit is None: raise SimpleError(msgs.GEO_UNSUPPORTED_UNIT) return unit GeoResult = namedtuple("GeoResult", "name long lat hash distance") def _parse_results(items: List[GeoResult], withcoord: bool, withdist: bool) -> List[Any]: """Parse list of GeoResults to redis response :param withcoord: include coordinates in response :param withdist: include distance in response :returns: Parsed list """ res = list() for item in items: new_item = [ item.name, ] if withdist: new_item.append(Float.encode(item.distance, False)) if withcoord: new_item.append([Float.encode(item.long, False), Float.encode(item.lat, False)]) if len(new_item) == 1: new_item = new_item[0] res.append(new_item) return res def _find_near( zset: ZSet, lat: float, long: float, radius: float, conv: float, count: int, count_any: bool, desc: bool, ) -> List[GeoResult]: """Find items within area (lat,long)+radius :param zset: list of items to check :param lat: latitude :param long: longitude :param radius: radius in whatever units :param conv: conversion of radius to meters :param count: number of results to give :param count_any: should we return any results that match? (vs. sorted) :param desc: should results be sorted descending order? 
:returns: List of GeoResults """ results = list() for name, _hash in zset.items(): p_lat, p_long, _, _ = geo_decode(_hash) dist = distance((p_lat, p_long), (lat, long)) * conv if dist < radius: results.append(GeoResult(name, p_long, p_lat, _hash, dist)) if count_any and len(results) >= count: break results = sorted(results, key=lambda x: x.distance, reverse=desc) if count: results = results[:count] return results class GeoCommandsMixin: _encodefloat: Callable[[float, bool], bytes] def __init__(self, *args: Any, **kwargs: Any) -> None: super(GeoCommandsMixin, self).__init__(*args, **kwargs) self._db: Database def _store_geo_results(self, item_name: bytes, geo_results: List[GeoResult], scoredist: bool) -> int: db_item = CommandItem(item_name, self._db, item=self._db.get(item_name), default=ZSet()) db_item.value = ZSet() for item in geo_results: val = item.distance if scoredist else item.hash db_item.value.add(item.name, val) db_item.writeback() return len(geo_results) @command(name="GEOADD", fixed=(Key(ZSet),), repeat=(bytes,)) def geoadd(self, key: CommandItem, *args: bytes) -> int: (xx, nx, ch), data = extract_args( args, ("nx", "xx", "ch"), error_on_unexpected=False, left_from_first_unexpected=True, ) if xx and nx: raise SimpleError(msgs.NX_XX_GT_LT_ERROR_MSG) if len(data) == 0 or len(data) % 3 != 0: raise SimpleError(msgs.SYNTAX_ERROR_MSG) zset = key.value old_len, changed_items = len(zset), 0 for i in range(0, len(data), 3): long, lat, name = ( Float.decode(data[i + 0]), Float.decode(data[i + 1]), data[i + 2], ) if (name in zset and not xx) or (name not in zset and not nx): if zset.add(name, geo_encode(lat, long, 10)): changed_items += 1 if changed_items: key.updated() if ch: return changed_items return len(zset) - old_len @command(name="GEOHASH", fixed=(Key(ZSet), bytes), repeat=(bytes,)) def geohash(self, key: CommandItem, *members: bytes) -> List[bytes]: hashes = map(key.value.get, members) geohash_list = [((x + "0").encode() if x is not None else x) for x in hashes] return geohash_list @command(name="GEOPOS", fixed=(Key(ZSet), bytes), repeat=(bytes,)) def geopos(self, key: CommandItem, *members: bytes) -> List[Optional[List[bytes]]]: gospositions = map( lambda x: geo_decode(x) if x is not None else x, map(key.value.get, members), ) res = [ ( [ self._encodefloat(x[1], False), self._encodefloat(x[0], False), ] if x is not None else None ) for x in gospositions ] return res @command(name="GEODIST", fixed=(Key(ZSet), bytes, bytes), repeat=(bytes,)) def geodist(self, key: CommandItem, m1: bytes, m2: bytes, *args: bytes) -> Optional[float]: geohashes = [key.value.get(m1), key.value.get(m2)] if any(elem is None for elem in geohashes): return None geo_locs = [geo_decode(x) for x in geohashes] res = distance((geo_locs[0][0], geo_locs[0][1]), (geo_locs[1][0], geo_locs[1][1])) unit = translate_meters_to_unit(args[0]) if len(args) == 1 else 1 return res * unit def _search( self, key: CommandItem, long: float, lat: float, radius: float, conv: float, withcoord: bool, withdist: bool, _: bool, count: int, count_any: bool, desc: bool, store: Optional[bytes], storedist: Optional[bytes], ) -> Union[List[Any], int]: zset = key.value geo_results = _find_near(zset, lat, long, radius, conv, count, count_any, desc) if store: self._store_geo_results(store, geo_results, scoredist=False) return len(geo_results) if storedist: self._store_geo_results(storedist, geo_results, scoredist=True) return len(geo_results) ret = _parse_results(geo_results, withcoord, withdist) return ret @command(name="GEORADIUS_RO", 
fixed=(Key(ZSet), Float, Float, Float), repeat=(bytes,)) def georadius_ro(self, key: CommandItem, long: float, lat: float, radius: float, *args: bytes) -> List[Any]: (withcoord, withdist, withhash, count, count_any, desc), left_args = extract_args( args, ("withcoord", "withdist", "withhash", "+count", "any", "desc"), error_on_unexpected=False, left_from_first_unexpected=False, ) count = count or sys.maxsize conv: float = translate_meters_to_unit(args[0]) if len(args) >= 1 else 1.0 res: List[Any] = self._search( # type: ignore key, long, lat, radius, conv, withcoord, withdist, withhash, count, count_any, desc, None, None ) return res @command(name="GEORADIUS", fixed=(Key(ZSet), Float, Float, Float), repeat=(bytes,)) def georadius(self, key: CommandItem, long: float, lat: float, radius: float, *args: bytes) -> List[Any]: (withcoord, withdist, withhash, count, count_any, desc, store, storedist), left_args = extract_args( args, ("withcoord", "withdist", "withhash", "+count", "any", "desc", "*store", "*storedist"), error_on_unexpected=False, left_from_first_unexpected=False, ) count = count or sys.maxsize conv = translate_meters_to_unit(args[0]) if len(args) >= 1 else 1 res: List[Any] = self._search( # type: ignore key, long, lat, radius, conv, withcoord, withdist, withhash, count, count_any, desc, store, storedist, ) return res @command(name="GEORADIUSBYMEMBER", fixed=(Key(ZSet), bytes, Float), repeat=(bytes,)) def georadiusbymember(self, key: CommandItem, member_name: bytes, radius: float, *args: bytes): member_score = key.value.get(member_name) lat, long, _, _ = geo_decode(member_score) return self.georadius(key, long, lat, radius, *args) @command(name="GEORADIUSBYMEMBER_RO", fixed=(Key(ZSet), bytes, Float), repeat=(bytes,)) def georadiusbymember_ro(self, key: CommandItem, member_name: bytes, radius: float, *args: float) -> List[Any]: member_score = key.value.get(member_name) lat, long, _, _ = geo_decode(member_score) return self.georadius_ro(key, long, lat, radius, *args) @command(name="GEOSEARCH", fixed=(Key(ZSet),), repeat=(bytes,)) def geosearch(self, key: CommandItem, *args: bytes) -> List[Any]: (frommember, (long, lat), radius), left_args = extract_args( args, ("*frommember", "..fromlonlat", ".byradius"), error_on_unexpected=False, left_from_first_unexpected=False, ) if frommember is None and long is None: raise SimpleError(msgs.SYNTAX_ERROR_MSG) if frommember is not None and long is not None: raise SimpleError(msgs.SYNTAX_ERROR_MSG) if frommember: return self.georadiusbymember_ro(key, frommember, radius, *left_args) else: return self.georadius_ro(key, long, lat, radius, *left_args) # type: ignore @command( name="GEOSEARCHSTORE", fixed=( bytes, Key(ZSet), ), repeat=(bytes,), ) def geosearchstore(self, dst: bytes, src: CommandItem, *args: bytes) -> List[Any]: (frommember, (long, lat), radius, storedist), left_args = extract_args( args, ("*frommember", "..fromlonlat", ".byradius", "storedist"), error_on_unexpected=False, left_from_first_unexpected=False, ) if frommember is None and long is None: raise SimpleError(msgs.SYNTAX_ERROR_MSG) if frommember is not None and long is not None: raise SimpleError(msgs.SYNTAX_ERROR_MSG) additional = [b"storedist", dst] if storedist else [b"store", dst] if frommember: return self.georadiusbymember(src, frommember, radius, *left_args, *additional) else: return self.georadius(src, long, lat, radius, *left_args, *additional) # type: ignore 
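# Minimal sketch (illustrative, not part of the original module) of how the
# commands above compose: a call such as
#   GEOSEARCHSTORE dst src FROMLONLAT 13.36 38.11 BYRADIUS 200 km
# is answered by re-dispatching to GEORADIUS on src with an appended
# `store dst` (or `storedist dst` when STOREDIST is given) argument, so the
# matching members are written to dst through _store_geo_results().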
fakeredis-py-2.26.1/fakeredis/commands_mixins/hash_mixin.py000066400000000000000000000236351470672414000240310ustar00rootroot00000000000000import itertools import math import random from typing import Callable, List, Tuple, Any, Optional from fakeredis import _msgs as msgs from fakeredis._command_args_parsing import extract_args from fakeredis._commands import command, Key, Hash, Int, Float, CommandItem from fakeredis._helpers import SimpleError, OK, casematch, SimpleString from fakeredis._helpers import current_time class HashCommandsMixin: _encodeint: Callable[ [ int, ], bytes, ] _encodefloat: Callable[[float, bool], bytes] _scan: Callable[[CommandItem, int, bytes, bytes], Tuple[int, List[bytes]]] def _hset(self, key: CommandItem, *args: bytes) -> int: h = key.value keys_count = len(h.keys()) h.update(dict(zip(*[iter(args)] * 2))) # type: ignore # https://stackoverflow.com/a/12739974/1056460 created = len(h.keys()) - keys_count key.updated() return created @command((Key(Hash), bytes), (bytes,)) def hdel(self, key: CommandItem, *fields: bytes) -> int: h = key.value rem = 0 for field in fields: if field in h: del h[field] key.updated() rem += 1 return rem @command((Key(Hash), bytes)) def hexists(self, key: CommandItem, field: bytes) -> int: return int(field in key.value) @command((Key(Hash), bytes)) def hget(self, key: CommandItem, field: bytes) -> Any: return key.value.get(field) @command((Key(Hash),)) def hgetall(self, key: CommandItem) -> List[bytes]: return list(itertools.chain(*key.value.items())) @command(fixed=(Key(Hash), bytes, bytes)) def hincrby(self, key: CommandItem, field: bytes, amount_bytes: bytes) -> int: amount = Int.decode(amount_bytes) field_value = Int.decode(key.value.get(field, b"0"), decode_error=msgs.INVALID_HASH_MSG) c = field_value + amount key.value[field] = self._encodeint(c) key.updated() return c @command((Key(Hash), bytes, bytes)) def hincrbyfloat(self, key: CommandItem, field: bytes, amount: bytes) -> bytes: c = Float.decode(key.value.get(field, b"0")) + Float.decode(amount) if not math.isfinite(c): raise SimpleError(msgs.NONFINITE_MSG) encoded = self._encodefloat(c, True) key.value[field] = encoded key.updated() return encoded @command((Key(Hash),)) def hkeys(self, key: CommandItem) -> List[bytes]: return list(key.value.keys()) @command((Key(Hash),)) def hlen(self, key: CommandItem) -> int: return len(key.value) @command((Key(Hash), bytes), (bytes,)) def hmget(self, key: CommandItem, *fields: bytes) -> List[bytes]: return [key.value.get(field) for field in fields] @command((Key(Hash), bytes, bytes), (bytes, bytes)) def hmset(self, key: CommandItem, *args: bytes) -> SimpleString: self.hset(key, *args) return OK @command((Key(Hash), Int), (bytes, bytes)) def hscan(self, key: CommandItem, cursor: int, *args: bytes) -> List[Any]: cursor, keys = self._scan(key.value, cursor, *args) items = [] for k in keys: items.append(k) items.append(key.value[k]) return [cursor, items] @command((Key(Hash), bytes, bytes), (bytes, bytes)) def hset(self, key: CommandItem, *args: bytes) -> int: return self._hset(key, *args) @command((Key(Hash), bytes, bytes)) def hsetnx(self, key: CommandItem, field: bytes, value: bytes) -> int: if field in key.value: return 0 return self._hset(key, field, value) @command((Key(Hash), bytes)) def hstrlen(self, key: CommandItem, field: bytes) -> int: return len(key.value.get(field, b"")) @command((Key(Hash),)) def hvals(self, key: CommandItem) -> List[bytes]: return list(key.value.values()) @command(name="HRANDFIELD", fixed=(Key(Hash),), 
repeat=(bytes,)) def hrandfield(self, key: CommandItem, *args: bytes) -> Optional[List[bytes]]: if len(args) > 2: raise SimpleError(msgs.SYNTAX_ERROR_MSG) if key.value is None or len(key.value) == 0: return None count = min(Int.decode(args[0]) if len(args) >= 1 else 1, len(key.value)) withvalues = casematch(args[1], b"withvalues") if len(args) >= 2 else False if count == 0: return list() if count < 0: # Allow repetitions res = random.choices(sorted(key.value.items()), k=-count) else: # Unique values from hash res = random.sample(sorted(key.value.items()), count) if withvalues: res = [item for t in res for item in t] else: res = [t[0] for t in res] return res def _hexpire(self, key: CommandItem, when_ms: int, *args: bytes) -> List[int]: # Deal with input arguments (nx, xx, gt, lt), left_args = extract_args( args, ("nx", "xx", "gt", "lt"), left_from_first_unexpected=True, error_on_unexpected=False ) if (nx, xx, gt, lt).count(True) > 1: raise SimpleError(msgs.NX_XX_GT_LT_ERROR_MSG) if len(left_args) < 3 or not casematch(left_args[0], b"fields"): raise SimpleError(msgs.WRONG_ARGS_MSG6.format("HEXPIRE")) num_fields = Int.decode(left_args[1]) if num_fields != len(left_args) - 2: raise SimpleError(msgs.HEXPIRE_NUMFIELDS_DIFFERENT) hash_val: Hash = key.value if hash_val is None: return [-2] * num_fields fields = left_args[2:] # process command res = [] for field in fields: if field not in hash_val: res.append(-2) continue current_expiration = hash_val.get_key_expireat(field) if ( (nx and current_expiration is not None) or (xx and current_expiration is None) or (gt and (current_expiration is None or when_ms <= current_expiration)) or (lt and current_expiration is not None and when_ms >= current_expiration) ): res.append(0) continue res.append(hash_val.set_key_expireat(field, when_ms)) return res def _get_expireat(self, command: bytes, key: CommandItem, *args: bytes) -> List[int]: if len(args) < 3 or not casematch(args[0], b"fields"): raise SimpleError(msgs.WRONG_ARGS_MSG6.format(command)) num_fields = Int.decode(args[1]) if num_fields != len(args) - 2: raise SimpleError(msgs.HEXPIRE_NUMFIELDS_DIFFERENT) hash_val: Hash = key.value if hash_val is None: return [-2] * num_fields fields = args[2:] res = list() for field in fields: if field not in hash_val: res.append(-2) continue when_ms = hash_val.get_key_expireat(field) if when_ms is None: res.append(-1) else: res.append(when_ms) return res @command(name="HEXPIRE", fixed=(Key(Hash), Int), repeat=(bytes,)) def hexpire(self, key: CommandItem, seconds: int, *args: bytes) -> List[int]: when_ms = current_time() + seconds * 1000 return self._hexpire(key, when_ms, *args) @command(name="HPEXPIRE", fixed=(Key(Hash), Int), repeat=(bytes,)) def hpexpire(self, key: CommandItem, milliseconds: int, *args: bytes) -> List[int]: when_ms = current_time() + milliseconds return self._hexpire(key, when_ms, *args) @command(name="HEXPIREAT", fixed=(Key(Hash), Int), repeat=(bytes,)) def hexpireat(self, key: CommandItem, unix_time_seconds: int, *args: bytes) -> List[int]: when_ms = unix_time_seconds * 1000 return self._hexpire(key, when_ms, *args) @command(name="HPEXPIREAT", fixed=(Key(Hash), Int), repeat=(bytes,)) def hpexpireat(self, key: CommandItem, unix_time_ms: int, *args: bytes) -> List[int]: return self._hexpire(key, unix_time_ms, *args) @command(name="HPERSIST", fixed=(Key(Hash),), repeat=(bytes,)) def hpersist(self, key: CommandItem, *args: bytes) -> List[int]: if len(args) < 3 or not casematch(args[0], b"fields"): raise 
SimpleError(msgs.WRONG_ARGS_MSG6.format("HEXPIRE")) num_fields = Int.decode(args[1]) if num_fields != len(args) - 2: raise SimpleError(msgs.HEXPIRE_NUMFIELDS_DIFFERENT) fields = args[2:] hash_val: Hash = key.value res = list() for field in fields: if field not in hash_val: res.append(-2) continue if hash_val.clear_key_expireat(field): res.append(1) else: res.append(-1) return res @command(name="HEXPIRETIME", fixed=(Key(Hash),), repeat=(bytes,), flags=msgs.FLAG_DO_NOT_CREATE) def hexpiretime(self, key: CommandItem, *args: bytes) -> List[int]: res = self._get_expireat(b"HEXPIRETIME", key, *args) return [(i // 1000 if i > 0 else i) for i in res] @command(name="HPEXPIRETIME", fixed=(Key(Hash),), repeat=(bytes,)) def hpexpiretime(self, key: CommandItem, *args: bytes) -> List[float]: res = self._get_expireat(b"HEXPIRETIME", key, *args) return res @command(name="HTTL", fixed=(Key(Hash),), repeat=(bytes,)) def httl(self, key: CommandItem, *args: bytes) -> List[int]: curr_expireat_ms = self._get_expireat(b"HEXPIRETIME", key, *args) curr_time_ms = current_time() return [((i - curr_time_ms) // 1000) if i > 0 else i for i in curr_expireat_ms] @command(name="HPTTL", fixed=(Key(Hash),), repeat=(bytes,)) def hpttl(self, key: CommandItem, *args: bytes) -> List[int]: curr_expireat_ms = self._get_expireat(b"HEXPIRETIME", key, *args) curr_time_ms = current_time() return [(i - curr_time_ms) if i > 0 else i for i in curr_expireat_ms] fakeredis-py-2.26.1/fakeredis/commands_mixins/list_mixin.py000066400000000000000000000256001470672414000240530ustar00rootroot00000000000000import functools from typing import Callable, List, Optional, Union, Any from fakeredis import _msgs as msgs from fakeredis._command_args_parsing import extract_args from fakeredis._commands import Key, command, Int, CommandItem, Timeout, fix_range from fakeredis._helpers import OK, SimpleError, SimpleString, casematch, Database def _list_pop_count(get_slice, key, count): if not key: return None elif type(key.value) is not list: raise SimpleError(msgs.WRONGTYPE_MSG) slc = get_slice(count) ret = key.value[slc] del key.value[slc] key.updated() return ret def _list_pop(get_slice, key, *args): """Implements lpop and rpop. `get_slice` must take a count and return a slice expression for the range to pop. """ # This implementation is somewhat contorted to match the odd # behaviours described in https://github.com/redis/redis/issues/9680. 
fakeredis-py-2.26.1/fakeredis/commands_mixins/list_mixin.py

import functools
from typing import Callable, List, Optional, Union, Any

from fakeredis import _msgs as msgs
from fakeredis._command_args_parsing import extract_args
from fakeredis._commands import Key, command, Int, CommandItem, Timeout, fix_range
from fakeredis._helpers import OK, SimpleError, SimpleString, casematch, Database


def _list_pop_count(get_slice, key, count):
    if not key:
        return None
    elif type(key.value) is not list:
        raise SimpleError(msgs.WRONGTYPE_MSG)
    slc = get_slice(count)
    ret = key.value[slc]
    del key.value[slc]
    key.updated()
    return ret


def _list_pop(get_slice, key, *args):
    """Implements lpop and rpop.

    `get_slice` must take a count and return a slice expression for the range to pop.
    """
    # This implementation is somewhat contorted to match the odd
    # behaviours described in https://github.com/redis/redis/issues/9680.
    count = 1
    if len(args) > 1:
        raise SimpleError(msgs.SYNTAX_ERROR_MSG)
    elif len(args) == 1:
        count = Int.decode(args[0], msgs.INDEX_NEGATIVE_ERROR_MSG)
        if count < 0:
            raise SimpleError(msgs.INDEX_NEGATIVE_ERROR_MSG)
    ret = _list_pop_count(get_slice, key, count)
    if ret and not args:
        ret = ret[0]
    return ret


class ListCommandsMixin:
    _blocking: Callable[[Optional[Union[float, int]], Callable[[bool], Any]], Any]

    def __init__(self, *args, **kwargs):
        super(ListCommandsMixin, self).__init__(*args, **kwargs)
        self._db: Database

    def _bpop_pass(self, keys, op, first_pass):
        for key in keys:
            item = CommandItem(key, self._db, item=self._db.get(key), default=[])
            if not isinstance(item.value, list):
                if first_pass:
                    raise SimpleError(msgs.WRONGTYPE_MSG)
                else:
                    continue
            if item.value:
                ret = op(item.value)
                item.updated()
                item.writeback()
                return [key, ret]
        return None

    def _bpop(self, args, op):
        keys = args[:-1]
        timeout = Timeout.decode(args[-1])
        return self._blocking(timeout, functools.partial(self._bpop_pass, keys, op))

    @command((bytes, bytes), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def blpop(self, *args):
        return self._bpop(args, lambda lst: lst.pop(0))

    @command((bytes, bytes), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def brpop(self, *args):
        return self._bpop(args, lambda lst: lst.pop())

    def _brpoplpush_pass(self, source, destination, first_pass):
        src = CommandItem(source, self._db, item=self._db.get(source), default=[])
        if not isinstance(src.value, list):
            if first_pass:
                raise SimpleError(msgs.WRONGTYPE_MSG)
            else:
                return None
        if not src.value:
            return None  # Empty list
        dst = CommandItem(destination, self._db, item=self._db.get(destination), default=[])
        if not isinstance(dst.value, list):
            raise SimpleError(msgs.WRONGTYPE_MSG)
        el = src.value.pop()
        dst.value.insert(0, el)
        src.updated()
        src.writeback()
        if destination != source:  # Ensure writeback only happens once
            dst.updated()
            dst.writeback()
        return el

    @command(name="BRPOPLPUSH", fixed=(bytes, bytes, Timeout), flags=msgs.FLAG_NO_SCRIPT)
    def brpoplpush(self, source, destination, timeout):
        return self._blocking(timeout, functools.partial(self._brpoplpush_pass, source, destination))

    @command((Key(list, None), Int))
    def lindex(self, key, index):
        try:
            return key.value[index]
        except IndexError:
            return None

    @command((Key(list), bytes, bytes, bytes))
    def linsert(self, key, where, pivot, value):
        if not casematch(where, b"before") and not casematch(where, b"after"):
            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
        if not key:
            return 0
        else:
            try:
                index = key.value.index(pivot)
            except ValueError:
                return -1
            if casematch(where, b"after"):
                index += 1
            key.value.insert(index, value)
            key.updated()
            return len(key.value)

    @command((Key(list),))
    def llen(self, key):
        return len(key.value)

    def _lmove(self, first_list, second_list, src, dst, first_pass):
        if (not casematch(src, b"left") and not casematch(src, b"right")) or (
            not casematch(dst, b"left") and not casematch(dst, b"right")
        ):
            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
        el = self.rpop(first_list) if casematch(src, b"RIGHT") else self.lpop(first_list)
        self.lpush(second_list, el) if casematch(dst, b"LEFT") else self.rpush(second_list, el)
        return el

    @command((Key(list, None), Key(list), SimpleString, SimpleString))
    def lmove(self, first_list, second_list, src, dst):
        return self._lmove(first_list, second_list, src, dst, False)

    @command((Key(list, None), Key(list), SimpleString, SimpleString, Timeout))
    def blmove(self, first_list, second_list, src, dst, timeout):
        return self._blocking(timeout, functools.partial(self._lmove, first_list, second_list, src, dst))
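
    # Illustrative note (not part of the original module): the blocking variants above
    # (BLPOP, BRPOP, BRPOPLPUSH, BLMOVE) delegate to `self._blocking`, which retries the
    # supplied callback until data is available or the timeout expires.  A hypothetical
    # session where the data already exists returns immediately:
    #
    #   >>> r = fakeredis.FakeStrictRedis()
    #   >>> r.rpush("jobs", "a")
    #   1
    #   >>> r.blpop("jobs", timeout=1)
    #   (b'jobs', b'a')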
    @command(fixed=(Key(),), repeat=(bytes,))
    def lpop(self, key, *args):
        return _list_pop(lambda count: slice(None, count), key, *args)

    def _lmpop(self, keys, count, direction_left, first_pass):
        if direction_left:
            op = lambda count: slice(None, count)  # noqa:E731
        else:
            op = lambda count: slice(None, -count - 1, -1)  # noqa:E731
        for key in keys:
            item = CommandItem(key, self._db, item=self._db.get(key), default=[])
            res = _list_pop_count(op, item, count)
            if res:
                return [key, res]
        return None

    @command(fixed=(Int,), repeat=(bytes,))
    def lmpop(self, numkeys, *args):
        if numkeys <= 0:
            raise SimpleError(msgs.NUMKEYS_GREATER_THAN_ZERO_MSG)
        if casematch(args[-2], b"count"):
            count = Int.decode(args[-1])
            args = args[:-2]
        else:
            count = 1
        if len(args) != numkeys + 1 or (not casematch(args[-1], b"left") and not casematch(args[-1], b"right")):
            raise SimpleError(msgs.SYNTAX_ERROR_MSG)

        return self._lmpop(args[:-1], count, casematch(args[-1], b"left"), False)

    @command(
        fixed=(
            Timeout,
            Int,
        ),
        repeat=(bytes,),
    )
    def blmpop(self, timeout, numkeys, *args):
        if numkeys <= 0:
            raise SimpleError(msgs.NUMKEYS_GREATER_THAN_ZERO_MSG)
        if casematch(args[-2], b"count"):
            count = Int.decode(args[-1])
            args = args[:-2]
        else:
            count = 1
        if len(args) != numkeys + 1 or (not casematch(args[-1], b"left") and not casematch(args[-1], b"right")):
            raise SimpleError(msgs.SYNTAX_ERROR_MSG)
        return self._blocking(
            timeout,
            functools.partial(self._lmpop, args[:-1], count, casematch(args[-1], b"left")),
        )

    @command((Key(list), bytes), (bytes,))
    def lpush(self, key, *values):
        for value in values:
            key.value.insert(0, value)
        key.updated()
        return len(key.value)

    @command((Key(list), bytes), (bytes,))
    def lpushx(self, key, *values):
        if not key:
            return 0
        return self.lpush(key, *values)

    @command((Key(list), Int, Int))
    def lrange(self, key, start, stop):
        start, stop = fix_range(start, stop, len(key.value))
        return key.value[start:stop]
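
    # Illustrative note (not part of the original module): LMPOP, as parsed above, takes
    # `numkeys`, the key names, a LEFT/RIGHT direction and an optional COUNT.  Issued via
    # execute_command() (a hypothetical session), it returns the first non-empty key and
    # the popped elements:
    #
    #   >>> r.rpush("q2", "x", "y", "z")
    #   3
    #   >>> r.execute_command("LMPOP", 2, "q1", "q2", "LEFT", "COUNT", 2)
    #   [b'q2', [b'x', b'y']]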
    @command((Key(list), Int, bytes))
    def lrem(self, key, count, value):
        a_list = key.value
        found = []
        for i, el in enumerate(a_list):
            if el == value:
                found.append(i)
        if count > 0:
            indices_to_remove = found[:count]
        elif count < 0:
            indices_to_remove = found[count:]
        else:
            indices_to_remove = found
        # Iterating in reverse order to ensure the indices
        # remain valid during deletion.
        for index in reversed(indices_to_remove):
            del a_list[index]
        if indices_to_remove:
            key.updated()
        return len(indices_to_remove)

    @command((Key(list), bytes, bytes))
    def lset(self, key, index, value):
        if not key:
            raise SimpleError(msgs.NO_KEY_MSG)
        index = Int.decode(index)
        try:
            key.value[index] = value
            key.updated()
        except IndexError:
            raise SimpleError(msgs.INDEX_ERROR_MSG)
        return OK

    @command((Key(list), Int, Int))
    def ltrim(self, key, start, stop):
        if key:
            if stop == -1:
                stop = None
            else:
                stop += 1
            new_value = key.value[start:stop]
            # TODO: check if this should actually be conditional
            if len(new_value) != len(key.value):
                key.update(new_value)
        return OK

    @command(fixed=(Key(),), repeat=(bytes,))
    def rpop(self, key, *args):
        return _list_pop(lambda count: slice(None, -count - 1, -1), key, *args)

    @command((Key(list, None), Key(list)))
    def rpoplpush(self, src, dst):
        el = self.rpop(src)
        self.lpush(dst, el)
        return el

    @command((Key(list), bytes), (bytes,))
    def rpush(self, key, *values):
        for value in values:
            key.value.append(value)
        key.updated()
        return len(key.value)

    @command((Key(list), bytes), (bytes,))
    def rpushx(self, key, *values):
        if not key:
            return 0
        return self.rpush(key, *values)

    @command(
        fixed=(
            Key(list),
            bytes,
        ),
        repeat=(bytes,),
    )
    def lpos(self, key, elem, *args):
        (rank, count, maxlen), _ = extract_args(
            args,
            (
                "+rank",
                "+count",
                "+maxlen",
            ),
        )
        if rank == 0:
            raise SimpleError(msgs.LPOS_RANK_CAN_NOT_BE_ZERO)
        rank = rank or 1
        ind, direction = (0, 1) if rank > 0 else (len(key.value) - 1, -1)
        rank = abs(rank)
        parse_count = len(key.value) if count == 0 else (count or 1)
        maxlen = maxlen or len(key.value)
        res: List[int] = []
        comparisons = 0
        while 0 <= ind <= len(key.value) - 1 and len(res) < parse_count and comparisons < maxlen:
            comparisons += 1
            if key.value[ind] == elem:
                if rank > 1:
                    rank -= 1
                else:
                    res.append(ind)
            ind += direction
        if len(res) == 0 and count is None:
            return None
        if len(res) == 1 and count is None:
            return res[0]
        return res
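
# --- Illustrative usage sketch (not part of the archived source) --------------------
# A short end-to-end example of the list commands implemented above.  It assumes the
# `fakeredis` package is installed and a redis-py version that exposes lpos()/lmove().
import fakeredis

r = fakeredis.FakeStrictRedis()
r.rpush("queue", "a", "b", "c", "b")

print(r.lrange("queue", 0, -1))                    # [b'a', b'b', b'c', b'b']
print(r.lpos("queue", "b"))                        # 1   (first match, as in lpos above)
print(r.lpos("queue", "b", count=0))               # [1, 3]  (count=0 means "all matches")
print(r.lmove("queue", "done", "LEFT", "RIGHT"))   # b'a'
print(r.lrange("done", 0, -1))                     # [b'a']
# -------------------------------------------------------------------------------------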
fakeredis-py-2.26.1/fakeredis/commands_mixins/pubsub_mixin.py

from typing import Tuple, Any, Dict, Callable, List, Iterable

from fakeredis import _msgs as msgs
from fakeredis._commands import command
from fakeredis._helpers import NoResponse, compile_pattern, SimpleError


class PubSubCommandsMixin:
    put_response: Callable[[Any], None]

    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super(PubSubCommandsMixin, self).__init__(*args, **kwargs)
        self._pubsub = 0  # Count of subscriptions
        self._server: Any
        self.version: Tuple[int]

    def _subscribe(self, channels: Iterable[bytes], subscribers: Dict[bytes, Any], mtype: bytes) -> NoResponse:
        for channel in channels:
            subs = subscribers[channel]
            if self not in subs:
                subs.add(self)
                self._pubsub += 1
            msg = [mtype, channel, self._pubsub]
            self.put_response(msg)
        return NoResponse()

    def _unsubscribe(self, channels: Iterable[bytes], subscribers: Dict[bytes, Any], mtype: bytes) -> NoResponse:
        if not channels:
            channels = []
            for channel, subs in subscribers.items():
                if self in subs:
                    channels.append(channel)
        for channel in channels:
            subs = subscribers.get(channel, set())
            if self in subs:
                subs.remove(self)
                if not subs:
                    del subscribers[channel]
                self._pubsub -= 1
            msg = [mtype, channel, self._pubsub]
            self.put_response(msg)
        return NoResponse()

    def _numsub(self, subscribers: Dict[bytes, Any], *channels: bytes) -> List[Any]:
        tuples_list = [(ch, len(subscribers.get(ch, []))) for ch in channels]
        return [item for sublist in tuples_list for item in sublist]

    @command((bytes,), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def psubscribe(self, *patterns: bytes) -> NoResponse:
        return self._subscribe(patterns, self._server.psubscribers, b"psubscribe")

    @command((bytes,), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def subscribe(self, *channels: bytes) -> NoResponse:
        return self._subscribe(channels, self._server.subscribers, b"subscribe")

    @command((bytes,), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def ssubscribe(self, *channels: bytes) -> NoResponse:
        return self._subscribe(channels, self._server.ssubscribers, b"ssubscribe")

    @command((), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def punsubscribe(self, *patterns: bytes) -> NoResponse:
        return self._unsubscribe(patterns, self._server.psubscribers, b"punsubscribe")

    @command((), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def unsubscribe(self, *channels: bytes) -> NoResponse:
        return self._unsubscribe(channels, self._server.subscribers, b"unsubscribe")

    @command(fixed=(), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def sunsubscribe(self, *channels: bytes) -> NoResponse:
        return self._unsubscribe(channels, self._server.ssubscribers, b"sunsubscribe")

    @command((bytes, bytes))
    def publish(self, channel: bytes, message: bytes) -> int:
        receivers = 0
        msg = [b"message", channel, message]
        subs = self._server.subscribers.get(channel, set())
        for sock in subs:
            sock.put_response(msg)
            receivers += 1
        for pattern, socks in self._server.psubscribers.items():
            regex = compile_pattern(pattern)
            if regex.match(channel):
                msg = [b"pmessage", pattern, channel, message]
                for sock in socks:
                    sock.put_response(msg)
                    receivers += 1
        return receivers

    @command((bytes, bytes))
    def spublish(self, channel: bytes, message: bytes) -> int:
        receivers = 0
        msg = [b"smessage", channel, message]
        subs = self._server.ssubscribers.get(channel, set())
        for sock in subs:
            sock.put_response(msg)
            receivers += 1
        for pattern, socks in self._server.psubscribers.items():
            regex = compile_pattern(pattern)
            if regex.match(channel):
                msg = [b"pmessage", pattern, channel, message]
                for sock in socks:
                    sock.put_response(msg)
                    receivers += 1
        return receivers

    @command(name="PUBSUB NUMPAT", fixed=(), repeat=())
    def pubsub_numpat(self, *_: Any) -> int:
        return len(self._server.psubscribers)

    def _channels(self, subscribers_dict: Dict[bytes, Any], *patterns: bytes) -> List[bytes]:
        channels = list(subscribers_dict.keys())
        if len(patterns) > 0:
            regex = compile_pattern(patterns[0])
            channels = [ch for ch in channels if regex.match(ch)]
        return channels

    @command(name="PUBSUB CHANNELS", fixed=(), repeat=(bytes,))
    def pubsub_channels(self, *args: bytes) -> List[bytes]:
        return self._channels(self._server.subscribers, *args)

    @command(name="PUBSUB SHARDCHANNELS", fixed=(), repeat=(bytes,))
    def pubsub_shardchannels(self, *args: bytes) -> List[bytes]:
        return self._channels(self._server.ssubscribers, *args)

    @command(name="PUBSUB NUMSUB", fixed=(), repeat=(bytes,))
    def pubsub_numsub(self, *args: bytes) -> List[Any]:
        return self._numsub(self._server.subscribers, *args)

    @command(name="PUBSUB SHARDNUMSUB", fixed=(), repeat=(bytes,))
    def pubsub_shardnumsub(self, *args: bytes) -> List[Any]:
        return self._numsub(self._server.ssubscribers, *args)

    @command(name="PUBSUB", fixed=())
    def pubsub(self, *args: Any) -> None:
        raise SimpleError(msgs.WRONG_ARGS_MSG6.format("pubsub"))
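
    # Illustrative note (not part of the original module): PUBSUB NUMSUB replies with a
    # flat [channel, count, channel, count, ...] array (see `_numsub` above); redis-py
    # folds it back into pairs.  A hypothetical session with one subscriber on "news"
    # and none on "sports":
    #
    #   >>> r.pubsub_numsub("news", "sports")
    #   [(b'news', 1), (b'sports', 0)]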
Subcommands are:", "CHANNELS []", " Return the currently active channels matching a (default: '*')" ".", "NUMPAT", " Return number of subscriptions to patterns.", "NUMSUB [ ...]", " Return the number of subscribers for the specified channels, excluding", " pattern subscriptions(default: no channels).", "SHARDCHANNELS []", " Return the currently active shard level channels matching a (d" "efault: '*').", "SHARDNUMSUB [ ...]", " Return the number of subscribers for the specified shard level channel(s" ")", "HELP", (" Prints this help." if self.version < (7, 1) else " Print this help."), ] else: help_strings = [ "PUBSUB [ [value] [opt] ...]. Subcommands are:", "CHANNELS []", " Return the currently active channels matching a (default: '*')" ".", "NUMPAT", " Return number of subscriptions to patterns.", "NUMSUB [ ...]", " Return the number of subscribers for the specified channels, excluding", " pattern subscriptions(default: no channels).", "HELP", " Prints this help.", ] return [s.encode() for s in help_strings] fakeredis-py-2.26.1/fakeredis/commands_mixins/scripting_mixin.py000066400000000000000000000300131470672414000250740ustar00rootroot00000000000000import functools import hashlib import importlib import itertools import logging import os from typing import Callable, AnyStr, Set, Any, Tuple, List, Dict, Optional import lupa from fakeredis import _msgs as msgs from fakeredis._commands import command, Int, Signature from fakeredis._helpers import ( SimpleError, SimpleString, null_terminate, OK, decode_command_bytes, ) __LUA_RUNTIMES_MAP = { "5.1": "lupa.lua51", "5.2": "lupa.lua52", "5.3": "lupa.lua53", "5.4": "lupa.lua54", } LUA_VERSION = os.getenv("FAKEREDIS_LUA_VERSION", "5.1") with lupa.allow_lua_module_loading(): LUA_MODULE = importlib.import_module(__LUA_RUNTIMES_MAP[LUA_VERSION]) LOGGER = logging.getLogger("fakeredis") REDIS_LOG_LEVELS = { b"LOG_DEBUG": 0, b"LOG_VERBOSE": 1, b"LOG_NOTICE": 2, b"LOG_WARNING": 3, } REDIS_LOG_LEVELS_TO_LOGGING = { 0: logging.DEBUG, 1: logging.INFO, 2: logging.INFO, 3: logging.WARNING, } def _ensure_str(s: AnyStr, encoding: str, replaceerr: str) -> str: if isinstance(s, bytes): res = s.decode(encoding=encoding, errors=replaceerr) else: res = str(s) return res def _check_for_lua_globals(lua_runtime: LUA_MODULE.LuaRuntime, expected_globals: Set[Any]) -> None: unexpected_globals = set(lua_runtime.globals().keys()) - expected_globals if len(unexpected_globals) > 0: unexpected = [_ensure_str(var, "utf-8", "replace") for var in unexpected_globals] raise SimpleError(msgs.GLOBAL_VARIABLE_MSG.format(", ".join(unexpected))) def _lua_redis_log(lua_runtime: LUA_MODULE.LuaRuntime, expected_globals: Set[Any], lvl: int, *args: Any) -> None: _check_for_lua_globals(lua_runtime, expected_globals) if len(args) < 1: raise SimpleError(msgs.REQUIRES_MORE_ARGS_MSG.format("redis.log()", "two")) if lvl not in REDIS_LOG_LEVELS_TO_LOGGING.keys(): raise SimpleError(msgs.LOG_INVALID_DEBUG_LEVEL_MSG) msg = " ".join([x.decode("utf-8") if isinstance(x, bytes) else str(x) for x in args if not isinstance(x, bool)]) LOGGER.log(REDIS_LOG_LEVELS_TO_LOGGING[lvl], msg) class ScriptingCommandsMixin: _name_to_func: Callable[ [ str, ], Tuple[Optional[Callable[..., Any]], Signature], ] _run_command: Callable[[Callable[..., Any], Signature, List[Any], bool], Any] def __init__(self, *args: Any, **kwargs: Any): self.script_cache: Dict[bytes, bytes] = dict() # Maps SHA1 to the script source self.server_type: str self.version: Tuple[int] self.load_lua_modules = set() lua_modules_set: Set[str] = 
kwargs.pop("lua_modules", None) or set() if len(lua_modules_set) > 0: lua_runtime: LUA_MODULE.LuaRuntime = LUA_MODULE.LuaRuntime(encoding=None, unpack_returned_tuples=True) for module in lua_modules_set: try: lua_runtime.require(module.encode()) self.load_lua_modules.add(module) except LUA_MODULE.LuaError as ex: LOGGER.error(f'Failed to load LUA module "{module}", make sure it is installed: {ex}') super(ScriptingCommandsMixin, self).__init__(*args, **kwargs) def _convert_redis_arg(self, lua_runtime: LUA_MODULE.LuaRuntime, value: Any) -> bytes: # Type checks are exact to avoid issues like bool being a subclass of int. if type(value) is bytes: return value elif type(value) in {int, float}: return "{:.17g}".format(value).encode() else: raise SimpleError(msgs.LUA_COMMAND_ARG_MSG) def _convert_redis_result(self, lua_runtime: LUA_MODULE.LuaRuntime, result: Any) -> Any: if isinstance(result, (bytes, int)): return result elif isinstance(result, SimpleString): return lua_runtime.table_from({b"ok": result.value}) elif result is None: return False elif isinstance(result, list): converted = [self._convert_redis_result(lua_runtime, item) for item in result] return lua_runtime.table_from(converted) elif isinstance(result, SimpleError): if result.value.startswith("ERR wrong number of arguments"): raise SimpleError(msgs.WRONG_ARGS_MSG7) raise result else: raise RuntimeError("Unexpected return type from redis: {}".format(type(result))) def _convert_lua_result(self, result: Any, nested: bool = True) -> Any: if LUA_MODULE.lua_type(result) == "table": for key in (b"ok", b"err"): if key in result: msg = self._convert_lua_result(result[key]) if not isinstance(msg, bytes): raise SimpleError(msgs.LUA_WRONG_NUMBER_ARGS_MSG) if key == b"ok": return SimpleString(msg) elif nested: return SimpleError(msg.decode("utf-8", "replace")) else: raise SimpleError(msg.decode("utf-8", "replace")) # Convert Lua tables into lists, starting from index 1, mimicking the behavior of StrictRedis. result_list = [] for index in itertools.count(1): if index not in result: break item = result[index] result_list.append(self._convert_lua_result(item)) return result_list elif isinstance(result, str): return result.encode() elif isinstance(result, float): return int(result) elif isinstance(result, bool): return 1 if result else None return result def _lua_redis_call( self, lua_runtime: LUA_MODULE.LuaRuntime, expected_globals: Set[Any], op: bytes, *args: Any ) -> Any: # Check if we've set any global variables before making any change. 
    def _convert_lua_result(self, result: Any, nested: bool = True) -> Any:
        if LUA_MODULE.lua_type(result) == "table":
            for key in (b"ok", b"err"):
                if key in result:
                    msg = self._convert_lua_result(result[key])
                    if not isinstance(msg, bytes):
                        raise SimpleError(msgs.LUA_WRONG_NUMBER_ARGS_MSG)
                    if key == b"ok":
                        return SimpleString(msg)
                    elif nested:
                        return SimpleError(msg.decode("utf-8", "replace"))
                    else:
                        raise SimpleError(msg.decode("utf-8", "replace"))
            # Convert Lua tables into lists, starting from index 1, mimicking the behavior of StrictRedis.
            result_list = []
            for index in itertools.count(1):
                if index not in result:
                    break
                item = result[index]
                result_list.append(self._convert_lua_result(item))
            return result_list
        elif isinstance(result, str):
            return result.encode()
        elif isinstance(result, float):
            return int(result)
        elif isinstance(result, bool):
            return 1 if result else None
        return result

    def _lua_redis_call(
        self, lua_runtime: LUA_MODULE.LuaRuntime, expected_globals: Set[Any], op: bytes, *args: Any
    ) -> Any:
        # Check if we've set any global variables before making any change.
        _check_for_lua_globals(lua_runtime, expected_globals)
        func, sig = self._name_to_func(decode_command_bytes(op))
        new_args = [self._convert_redis_arg(lua_runtime, arg) for arg in args]
        result = self._run_command(func, sig, new_args, True)
        return self._convert_redis_result(lua_runtime, result)

    def _lua_redis_pcall(
        self, lua_runtime: LUA_MODULE.LuaRuntime, expected_globals: Set[Any], op: bytes, *args: Any
    ) -> Any:
        try:
            return self._lua_redis_call(lua_runtime, expected_globals, op, *args)
        except Exception as ex:
            return lua_runtime.table_from({b"err": str(ex)})

    @command((bytes, Int), (bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def eval(self, script: bytes, numkeys: int, *keys_and_args: bytes) -> Any:
        if numkeys > len(keys_and_args):
            raise SimpleError(msgs.TOO_MANY_KEYS_MSG)
        if numkeys < 0:
            raise SimpleError(msgs.NEGATIVE_KEYS_MSG)
        sha1 = hashlib.sha1(script).hexdigest().encode()
        self.script_cache[sha1] = script
        lua_runtime: LUA_MODULE.LuaRuntime = LUA_MODULE.LuaRuntime(encoding=None, unpack_returned_tuples=True)
        modules_import_str = "\n".join([f"{module} = require('{module}')" for module in self.load_lua_modules])
        set_globals = lua_runtime.eval(
            f"""
            function(keys, argv, redis_call, redis_pcall, redis_log, redis_log_levels)
                redis = {{}}
                redis.call = redis_call
                redis.pcall = redis_pcall
                redis.log = redis_log
                for level, pylevel in python.iterex(redis_log_levels.items()) do
                    redis[level] = pylevel
                end
                redis.error_reply = function(msg) return {{err=msg}} end
                redis.status_reply = function(msg) return {{ok=msg}} end
                KEYS = keys
                ARGV = argv
                {modules_import_str}
            end
            """
        )
        expected_globals: Set[Any] = set()
        set_globals(
            lua_runtime.table_from(keys_and_args[:numkeys]),
            lua_runtime.table_from(keys_and_args[numkeys:]),
            functools.partial(self._lua_redis_call, lua_runtime, expected_globals),
            functools.partial(self._lua_redis_pcall, lua_runtime, expected_globals),
            functools.partial(_lua_redis_log, lua_runtime, expected_globals),
            LUA_MODULE.as_attrgetter(REDIS_LOG_LEVELS),
        )
        expected_globals.update(lua_runtime.globals().keys())
        try:
            result = lua_runtime.execute(script)
        except SimpleError as ex:
            if ex.value == msgs.LUA_COMMAND_ARG_MSG:
                if self.version < (7,):
                    raise SimpleError(msgs.LUA_COMMAND_ARG_MSG6)
                elif self.server_type == "valkey":
                    raise SimpleError(msgs.VALKEY_LUA_COMMAND_ARG_MSG.format(sha1.decode()))
                else:
                    raise SimpleError(msgs.LUA_COMMAND_ARG_MSG)
            if self.version < (7,):
                raise SimpleError(msgs.SCRIPT_ERROR_MSG.format(sha1.decode(), ex))
            raise SimpleError(ex.value)
        except LUA_MODULE.LuaError as ex:
            raise SimpleError(msgs.SCRIPT_ERROR_MSG.format(sha1.decode(), ex))
        _check_for_lua_globals(lua_runtime, expected_globals)
        return self._convert_lua_result(result, nested=False)

    @command(name="EVALSHA", fixed=(bytes, Int), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def evalsha(self, sha1: bytes, numkeys: Int, *keys_and_args: bytes) -> Any:
        try:
            script = self.script_cache[sha1]
        except KeyError:
            raise SimpleError(msgs.NO_MATCHING_SCRIPT_MSG)
        return self.eval(script, numkeys, *keys_and_args)
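
    # Illustrative usage sketch (not part of the original module): EVAL/EVALSHA above can
    # be driven through redis-py against fakeredis, provided the `lupa` dependency this
    # module imports is installed (e.g. `pip install "fakeredis[lua]"`).  A hypothetical
    # session, relying on the conversion rules in `_convert_lua_result`:
    #
    #   >>> r = fakeredis.FakeStrictRedis()
    #   >>> r.eval("return {1, 'two', 3.7, true}", 0)   # floats truncate, true -> 1
    #   [1, b'two', 3, 1]
    #   >>> sha = r.script_load("redis.call('SET', KEYS[1], ARGV[1]); return redis.call('GET', KEYS[1])")
    #   >>> r.evalsha(sha, 1, "greeting", "hello")
    #   b'hello'
    #   >>> r.get("greeting")
    #   b'hello'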
    @command(name="SCRIPT LOAD", fixed=(bytes,), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def script_load(self, *args: bytes) -> bytes:
        if len(args) != 1:
            raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format("SCRIPT"))
        script = args[0]
        sha1 = hashlib.sha1(script).hexdigest().encode()
        self.script_cache[sha1] = script
        return sha1

    @command(name="SCRIPT EXISTS", fixed=(), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def script_exists(self, *args: bytes) -> List[int]:
        if self.version >= (7,) and len(args) == 0:
            raise SimpleError(msgs.WRONG_ARGS_MSG7)
        return [int(sha1 in self.script_cache) for sha1 in args]

    @command(name="SCRIPT FLUSH", fixed=(), repeat=(bytes,), flags=msgs.FLAG_NO_SCRIPT)
    def script_flush(self, *args: bytes) -> SimpleString:
        if len(args) > 1 or (len(args) == 1 and null_terminate(args[0]) not in {b"sync", b"async"}):
            raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format("SCRIPT"))
        self.script_cache = {}
        return OK

    @command((), flags=msgs.FLAG_NO_SCRIPT)
    def script(self, *args: bytes) -> None:
        raise SimpleError(msgs.BAD_SUBCOMMAND_MSG.format("SCRIPT"))

    @command(name="SCRIPT HELP", fixed=())
    def script_help(self, *args: bytes) -> List[bytes]:
        help_strings = [
            "SCRIPT <subcommand> [<arg> [value] [opt] ...]. Subcommands are:",
            "DEBUG (YES|SYNC|NO)",
            " Set the debug mode for subsequent scripts executed.",
            "EXISTS <sha1> [<sha1> ...]",
            " Return information about the existence of the scripts in the script cache.",
            "FLUSH [ASYNC|SYNC]",
            " Flush the Lua scripts cache. Very dangerous on replicas.",
            " When called without the optional mode argument, the behavior is determined by the",
            " lazyfree-lazy-user-flush configuration directive. Valid modes are:",
            " * ASYNC: Asynchronously flush the scripts cache.",
            " * SYNC: Synchronously flush the scripts cache.",
            "KILL",
            " Kill the currently executing Lua script.",
            "LOAD