pax_global_header00006660000000000000000000000064145502456210014516gustar00rootroot0000000000000052 comment=594831714df69657af75cac4fafecb0d6812d6dc channels_redis-4.2.0/000077500000000000000000000000001455024562100145025ustar00rootroot00000000000000channels_redis-4.2.0/.github/000077500000000000000000000000001455024562100160425ustar00rootroot00000000000000channels_redis-4.2.0/.github/ISSUE_TEMPLATE.md000066400000000000000000000014251455024562100205510ustar00rootroot00000000000000Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support we have to direct you elsewhere. If you just have questions or support requests please use: - Stack Overflow - The Django Users mailing list django-users@googlegroups.com (https://groups.google.com/forum/#!forum/django-users) We have to limit this because of limited volunteer time to respond to issues! Please also try and include, if you can: - Your OS and runtime environment, and browser if applicable - A `pip freeze` output showing your package versions - What you expected to happen vs. what actually happened - How you're running Channels (runserver? daphne/runworker? Nginx/Apache in front?) - Console logs and full tracebacks of any errors channels_redis-4.2.0/.github/workflows/000077500000000000000000000000001455024562100200775ustar00rootroot00000000000000channels_redis-4.2.0/.github/workflows/tests.yml000066400000000000000000000036131455024562100217670ustar00rootroot00000000000000name: Tests on: push: branches: - main pull_request: jobs: tests: name: Python ${{ matrix.python-version }} runs-on: ubuntu-latest timeout-minutes: 10 strategy: fail-fast: false matrix: python-version: - "3.8" - "3.9" - "3.10" - "3.11" - "3.12" services: redis: image: redis ports: - 6379:6379 options: >- --health-cmd "redis-cli ping" --health-interval 10s --health-timeout 5s --health-retries 5 sentinel: image: bitnami/redis-sentinel ports: - 26379:26379 options: >- --health-cmd "redis-cli -p 26379 ping" --health-interval 10s --health-timeout 5s --health-retries 5 env: REDIS_MASTER_HOST: redis REDIS_MASTER_SET: sentinel REDIS_SENTINEL_QUORUM: "1" REDIS_SENTINEL_PASSWORD: channels_redis steps: - uses: actions/checkout@v3 - name: Set up Python ${{ matrix.python-version }} uses: actions/setup-python@v4 with: python-version: ${{ matrix.python-version }} - name: Install dependencies run: | python -m pip install --upgrade pip wheel setuptools tox - name: Run tox targets for ${{ matrix.python-version }} run: | ENV_PREFIX=$(tr -C -d "0-9" <<< "${{ matrix.python-version }}") TOXENV=$(tox --listenvs | grep "^py$ENV_PREFIX" | tr '\n' ',') python -m tox lint: name: Lint runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Set up Python uses: actions/setup-python@v4 with: python-version: "3.11" - name: Install dependencies run: | python -m pip install --upgrade pip tox - name: Run lint run: tox -e qa channels_redis-4.2.0/.gitignore000066400000000000000000000001211455024562100164640ustar00rootroot00000000000000*.egg-info dist/ build/ .cache *.pyc /.tox .DS_Store .pytest_cache .vscode .idea channels_redis-4.2.0/CHANGELOG.txt000066400000000000000000000253771455024562100165500ustar00rootroot000000000000004.2.0 (2024-01-12) ------------------ * Dropped support for end-of-life Python 3.7. * Added support for Python 3.11 and 3.12. * Upped the minimum version of redis-py to 4.6. * Added CI testing against redis-py versions 4.6, 5, and the development branch. 
* Added CI testing against Channels versions 3, 4, and the development branch.

4.1.0 (2023-03-28)
------------------

* Adjusted the way Redis connections are handled:

  * Connection handling is now shared between the core and pub-sub layers.

  * Both layers now ensure that connections are closed when an event loop
    shuts down. In particular, redis-py 4.x requires that connections are
    manually closed. In 4.0 that wasn't done by the core layer, which led to
    warnings for people updating from 3.x who used ``async_to_sync()``
    without closing connections.

* Updated the minimum redis-py version to 4.5.3 because of a security release
  there. Note that this is not a security issue in channels-redis: installing
  an earlier version will still use the latest redis-py, but by bumping the
  dependency we make sure you'll get the fixed redis-py too when you install
  this update.

4.0.0 (2022-10-07)
------------------

Version 4.0.0 migrates the underlying Redis library from ``aioredis`` to
``redis-py``. (``aioredis`` was retired and moved into ``redis-py``, which
will host the ongoing development.)

Version 4.0.0 should be compatible with existing Channels 3 projects, as well
as Channels 4 projects.

* Migrated from ``aioredis`` to ``redis-py``. Specifying hosts as tuples is
  no longer supported. If hosts are specified as dicts, only the ``address``
  key will be taken into account, i.e. a ``password`` must be specified
  inline in the address.

* Added support for passing kwargs to sentinel connections.

* Updated dependencies and obsolete code.

3.4.1 (2022-07-12)
------------------

* Fixed RuntimeError when checking for stale connections.

3.4.0 (2022-03-10)
------------------

* Dropped support for Python 3.6, which is now end-of-life, and added CI
  testing for Python 3.10 (#301).

* Added serialize and deserialize hooks to RedisPubSubChannelLayer (#281).

* Fixed iscoroutine check for pubsub proxied methods (#297).

* Fixed worker support when using the Redis PubSub layer (#298).

3.3.1 (2021-09-30)
------------------

Two bugfixes for the PubSub channel layer:

* Scoped the channel layer per-event loop, in case multiple loops are in
  play (#262).

* Ensured consistent hashing in the PubSub layer is maintained across
  processes and process restarts (#274).

3.3.0 (2021-07-01)
------------------

Two important new features:

* You can now connect using Redis Sentinel. Thanks to @qeternity.

* There's a new ``RedisPubSubChannelLayer`` that uses Redis Pub/Sub to
  propagate messages, rather than managing channels and groups directly
  within the layer. For many use-cases this should be simpler, more robust,
  and more performant.

  Note though, the new ``RedisPubSubChannelLayer`` does not provide all the
  options of the existing layer, including ``expiry``, ``capacity``, and
  others. Please assess whether it's appropriate for your needs, particularly
  if you have an existing deployment.

  The ``RedisPubSubChannelLayer`` is currently marked as *Beta*. Please
  report any issues, and be prepared that there may be breaking changes
  whilst it matures.

  The ``RedisPubSubChannelLayer`` accepts ``on_disconnect`` and
  ``on_reconnect`` config options, providing callbacks to handle the
  relevant connection events to the Redis instance. Thanks to Ryan Henning
  @acu192.

For both features see the README for more details.

3.2.0 (2020-10-29)
------------------

* Adjusted dependency specifiers to allow updating to the latest versions of
  ``asgiref`` and Channels.
3.1.0 (2020-09-06) ------------------ * Ensured per-channel queues are bounded in size to avoid a slow memory leak if consumers stop reading. Queues are bound to the channel layer's configured ``capacity``. You may adjust this to a suitably high value if you were relying on the previously unbounded behaviour. 3.0.1 (2020-07-15) ------------------ * Fixed error in Lua script introduced in 3.0.0. 3.0.0 (2020-07-03) ------------------ * Redis >= 5.0 is now required. * Updated msgpack requirement to `~=1.0`. * Ensured channel names are unique using UUIDs. * Ensured messages are expired even when channel is in constant activity. * Optimized Redis script caching. * Reduced group_send failure logging level to reduce log noise. * Removed trailing `:` from default channel layer `prefix` to avoid double `::` in group keys. (You can restore the old default specifying `prefix="asgi:"` if necessary.) 2.4.2 (2020-02-19) ------------------ * Fixed a bug where ``ConnectionPool.pop()`` might return an invalid connection. * Added logging for a group_send over capacity failure. 2.4.1 (2019-10-23) ------------------ * Fixed compatibility with Python 3.8. 2.4.0 (2019-04-14) ------------------ * Updated ASGI and Channels dependencies for ASGI v3. 2.3.3 (2019-01-10) ------------------ * Bumped msgpack to 0.6 * Enforced Python 3.6 and up because 3.5 is too unreliable. 2.3.2 (2018-11-27) ------------------ * Fix memory leaks with receive_buffer * Prevent double-locking problems with cancelled tasks 2.3.1 (2018-10-17) ------------------ * Fix issue with leaking of connections and instability introduced in 2.3.0 2.3.0 (2018-08-16) ------------------ * Messages to the same process (with the same prefix) are now bundled together in a single message for efficiency. * Connections to Redis are now kept in a connection pool with significantly improved performance as a result. This change required lists to be changed from oldest-first to newest-first, so immediately after any upgrade, existing messages in Redis will be drained in reverse order until your expiry time (normally 60 seconds) has passed. After this, behaviour will be normal again. 2.2.1 (2018-05-17) ------------------ * Fixed a bug in group_send where it would not work if channel_capacity was set 2.2.0 (2018-05-13) ------------------ * The group_send method now uses Lua to massively increase the speed of sending to large groups. 2.1.1 (2018-03-21) ------------------ * Fixed bug where receiving messages would hang after a while or at high concurrency levels. * Fixed bug where the default host values were invalid. 2.1.0 (2018-02-21) ------------------ * Internals have been reworked to remove connection pooling and sharing. All operations will now open a fresh Redis connection, but the backend will no longer leak connections or Futures if used in multiple successive event loops (e.g. via multiple calls to sync_to_async) 2.0.3 (2018-02-14) ------------------ * Don't allow connection pools from other event loops to be re-used (fixes various RuntimeErrors seen previously) * channel_capacity is compiled in the constructor and now works again 2.0.2 (2018-02-04) ------------------ * Capacity enforcement was off by one; it's now correct * group_send no longer errors with the wrong ChannelFull exception 2.0.1 (2018-02-02) ------------------ * Dependency fix in packaging so asgiref is set to ~=2.1, not ~=2.0.0 2.0.0 (2018-02-01) ------------------ * Rewrite and rename to channels_redis to be based on asyncio and the Channels 2 channel layer specification. 
1.4.2 (2017-06-20) ------------------ * receive() no longer blocks indefinitely, just for a while. * Built-in lua scripts have their SHA pre-set to avoid a guaranteed cache miss on their first usage. 1.4.1 (2017-06-15) ------------------ * A keyspace leak has been fixed where message body keys were not deleted after receive, and instead left to expire. 1.4.0 (2017-05-18) ------------------ * Sharded mode support is now more robust with send/receive deterministically moving around the shard ring rather than picking random connections. This means there is no longer a slight chance of messages being missed when there are not significantly more readers on a channel than shards. Tests have also been updated so they run fully on sharded mode thanks to this. * Sentinel support has been considerably improved, with connection caching (via sentinal_refresh_interval), and automatic service discovery. * The Twisted backend now picks up the Redis password if one is configured. 1.3.0 (2017-04-07) ------------------ * Change format of connection arguments to be a single dict called ``connection_kwargs`` rather than individual options, as they change by connection type. You will need to change your settings if you have any of socket_connect_timeout, socket_timeout, socket_keepalive or socket_keepalive_options set to move them into a ``connection_kwargs`` dict. 1.2.1 (2017-04-02) ------------------ * Error with sending to multi-process channels with the same message fixed 1.2.0 (2017-04-01) ------------------ * Process-specific channel behaviour changed to match new spec * Redis Sentinel channel layer added 1.1.0 (2017-03-18) ------------------ * Support for the ASGI statistics extension * Distribution of items over multiple servers using consistent hashing is improved * Handles timeout exceptions in newer redis-py library versions correctly * Support for configuring the socket_connect_timeout, socket_timeout, socket_keepalive and socket_keepalive_options options that are passed to redis-py. 1.0.0 (2016-11-05) ------------------ * Renamed "receive_many" to "receive" * Improved (more explicit) error handling for Redis errors/old versions * Bad hosts (string not lost) configuration now errors explicitly 0.14.1 (2016-08-24) ------------------- * Removed unused reverse channels-to-groups mapping keys as they were not cleaned up proactively and quickly filled up databases. 0.14.0 (2016-07-16) ------------------- * Implemented group_channels method. 0.13.0 (2016-06-09) ------------------- * Added local-and-remote backend option (uses asgi_ipc) 0.12.0 (2016-05-25) ------------------- * Added symmetric encryption for messages and at-rest data with key rotation. 0.11.0 (2016-05-07) ------------------- * Implement backpressure with per-channel and default capacities. 0.10.0 (2016-03-27) ------------------- * Group expiry code re-added and fixed. 0.9.1 (2016-03-23) ------------------ * Remove old group expiry code that was killing groups after 60 seconds. 0.9.0 (2016-03-21) ------------------ * Connections now pooled per backend shard * Random portion of channel names now 12 characters * Implements new ASGI single-response-channel pattern spec 0.8.3 (2016-02-28) ------------------ * Nonblocking receive_many now uses Lua script rather than for loop. 
0.8.2 (2016-02-22) ------------------ * Nonblocking receive_many now works, but is inefficient * Python 3 fixes 0.8.1 (2016-02-22) ------------------ * Fixed packaging issues channels_redis-4.2.0/LICENSE000066400000000000000000000030201455024562100155020ustar00rootroot00000000000000Copyright (c) Django Software Foundation and individual contributors. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of Django nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. channels_redis-4.2.0/MANIFEST.in000066400000000000000000000000201455024562100162300ustar00rootroot00000000000000include LICENSE channels_redis-4.2.0/README.rst000066400000000000000000000233601455024562100161750ustar00rootroot00000000000000channels_redis ============== .. image:: https://github.com/django/channels_redis/workflows/Tests/badge.svg :target: https://github.com/django/channels_redis/actions?query=workflow%3ATests .. image:: https://img.shields.io/pypi/v/channels_redis.svg :target: https://pypi.python.org/pypi/channels_redis Provides Django Channels channel layers that use Redis as a backing store. There are two available implementations: * ``RedisChannelLayer`` is the original layer, and implements channel and group handling itself. * ``RedisPubSubChannelLayer`` is newer and leverages Redis Pub/Sub for message dispatch. This layer is currently at *Beta* status, meaning it may be subject to breaking changes whilst it matures. Both layers support a single-server and sharded configurations. `channels_redis` is tested against Python 3.8 to 3.12, `redis-py` versions 4.6, 5.0, and the development branch, and Channels versions 3, 4 and the development branch there. Installation ------------ .. code-block:: pip install channels-redis **Note:** Prior versions of this package were called ``asgi_redis`` and are still available under PyPI as that name if you need them for Channels 1.x projects. This package is for Channels 2 projects only. Usage ----- Set up the channel layer in your Django settings file like so: .. code-block:: python CHANNEL_LAYERS = { "default": { "BACKEND": "channels_redis.core.RedisChannelLayer", "CONFIG": { "hosts": [("localhost", 6379)], }, }, } Or, you can use the alternate implementation which uses Redis Pub/Sub: .. 
code-block:: python

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer",
            "CONFIG": {
                "hosts": [("localhost", 6379)],
            },
        },
    }

Possible options for ``CONFIG`` are listed below.

``hosts``
~~~~~~~~~

The server(s) to connect to, as either URIs, ``(host, port)`` tuples, or
dicts of redis-py ``Connection`` parameters. Defaults to
``redis://localhost:6379``. Pass multiple hosts to enable sharding, but note
that changing the host list will lose some sharded data.

For SSL connections with a self-signed certificate (e.g. on Heroku):

.. code-block:: python

    "default": {
        "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer",
        "CONFIG": {
            "hosts": [{
                "address": "rediss://user@host:port",  # "REDIS_TLS_URL"
                "ssl_cert_reqs": None,
            }]
        }
    }

Sentinel connections require dicts conforming to:

.. code-block::

    {
        "sentinels": [
            ("localhost", 26379),
        ],
        "master_name": SENTINEL_MASTER_SET,
        **kwargs
    }

Note the additional ``master_name`` key, which specifies the Sentinel master
set; any additional connection kwargs can also be passed. Plain Redis and
Sentinel connections can be mixed and matched if sharding.

If your server is listening on a UNIX domain socket, you can also use that to
connect: ``["unix:///path/to/redis.sock"]``. This should be slightly faster
than a loopback TCP connection.

``prefix``
~~~~~~~~~~

Prefix to add to all Redis keys. Defaults to ``asgi``. If you're running two
or more entirely separate channel layers through the same Redis instance,
make sure they have different prefixes. All servers talking to the same layer
should have the same prefix, though.

``expiry``
~~~~~~~~~~

Message expiry in seconds. Defaults to ``60``. You generally shouldn't need
to change this, but you may want to turn it down if you have peaky traffic
you wish to drop, or up if you have peaky traffic you want to backlog until
you get to it.

``group_expiry``
~~~~~~~~~~~~~~~~

Group expiry in seconds. Defaults to ``86400``. Channels will be removed from
the group after this amount of time; it's recommended you reduce it for a
healthier system that encourages disconnections. This value should not be
lower than the relevant timeouts in the interface server (e.g. the
``--websocket_timeout`` to daphne).

``capacity``
~~~~~~~~~~~~

Default channel capacity. Defaults to ``100``. Once a channel is at capacity,
it will refuse more messages. How this affects different parts of the system
varies; an HTTP server will refuse connections, for example, while Django
sending a response will just wait until there's space.

``channel_capacity``
~~~~~~~~~~~~~~~~~~~~

Per-channel capacity configuration. This lets you tweak the channel capacity
based on the channel name, and supports both globbing and regular
expressions.

It should be a dict mapping channel name pattern to desired capacity; if the
dict key is a string, it's interpreted as a glob, while if it's a compiled
``re`` object, it's treated as a regular expression.

This example sets ``http.request`` to 200, all ``http.response!`` channels to
10, and all ``websocket.send!`` channels to 20:

.. code-block:: python

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("localhost", 6379)],
                "channel_capacity": {
                    "http.request": 200,
                    "http.response!*": 10,
                    re.compile(r"^websocket.send\!.+"): 20,
                },
            },
        },
    }

If you want to enforce a matching order, use an ``OrderedDict`` as the
argument; channels will then be matched in the order the dict provides them,
as in the sketch below.
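A minimal sketch of that, assuming the same illustrative patterns and
capacities as the example above:

.. code-block:: python

    import re
    from collections import OrderedDict

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("localhost", 6379)],
                "channel_capacity": OrderedDict([
                    ("http.request", 200),
                    ("http.response!*", 10),
                    (re.compile(r"^websocket.send\!.+"), 20),
                ]),
            },
        },
    }

Patterns are then checked in insertion order, so the exact ``http.request``
entry is considered before the broader glob and regular-expression entries.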
``symmetric_encryption_keys`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Pass this to enable the optional symmetric encryption mode of the backend. To use it, make sure you have the ``cryptography`` package installed, or specify the ``cryptography`` extra when you install ``channels-redis``:: pip install channels-redis[cryptography] ``symmetric_encryption_keys`` should be a list of strings, with each string being an encryption key. The first key is always used for encryption; all are considered for decryption, so you can rotate keys without downtime - just add a new key at the start and move the old one down, then remove the old one after the message expiry time has passed. Data is encrypted both on the wire and at rest in Redis, though we advise you also route your Redis connections over TLS for higher security; the Redis protocol is still unencrypted, and the channel and group key names could potentially contain metadata patterns of use to attackers. Keys **should have at least 32 bytes of entropy** - they are passed through the SHA256 hash function before being used as an encryption key. Any string will work, but the shorter the string, the easier the encryption is to break. If you're using Django, you may also wish to set this to your site's ``SECRET_KEY`` setting via the ``CHANNEL_LAYERS`` setting: .. code-block:: python CHANNEL_LAYERS = { "default": { "BACKEND": "channels_redis.core.RedisChannelLayer", "CONFIG": { "hosts": ["redis://:password@127.0.0.1:6379/0"], "symmetric_encryption_keys": [SECRET_KEY], }, }, } ``on_disconnect`` / ``on_reconnect`` ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The PubSub layer, which maintains long-running connections to Redis, can drop messages in the event of a network partition. To handle such situations the PubSub layer accepts optional arguments which will notify consumers of Redis disconnect/reconnect events. A common use-case is for consumers to ensure that they perform a full state re-sync to ensure that no messages have been missed. .. code-block:: python CHANNEL_LAYERS = { "default": { "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer", "CONFIG": { "hosts": [...], "on_disconnect": "redis.disconnect", }, }, } And then in your channels consumer, you can implement the handler: .. code-block:: python async def redis_disconnect(self, *args): # Handle disconnect Dependencies ------------ Redis server >= 5.0 is required for `channels-redis`. Python 3.8 or higher is required. Used commands ~~~~~~~~~~~~~ Your Redis server must support the following commands: * ``RedisChannelLayer`` uses ``BZPOPMIN``, ``DEL``, ``EVAL``, ``EXPIRE``, ``KEYS``, ``PIPELINE``, ``ZADD``, ``ZCOUNT``, ``ZPOPMIN``, ``ZRANGE``, ``ZREM``, ``ZREMRANGEBYSCORE`` * ``RedisPubSubChannelLayer`` uses ``PUBLISH``, ``SUBSCRIBE``, ``UNSUBSCRIBE`` Local Development ----------------- You can run the necessary Redis instances in Docker with the following commands: .. code-block:: shell $ docker network create redis-network $ docker run --rm \ --network=redis-network \ --name=redis-server \ -p 6379:6379 \ redis $ docker run --rm \ --network redis-network \ --name redis-sentinel \ -e REDIS_MASTER_HOST=redis-server \ -e REDIS_MASTER_SET=sentinel \ -e REDIS_SENTINEL_QUORUM=1 \ -p 26379:26379 \ bitnami/redis-sentinel Contributing ------------ Please refer to the `main Channels contributing docs `_. That also contains advice on how to set up the development environment and run the tests. Maintenance and Security ------------------------ To report security issues, please contact security@djangoproject.com. 
For GPG signatures and more security process information, see https://docs.djangoproject.com/en/dev/internals/security/. To report bugs or request new features, please open a new GitHub issue. This repository is part of the Channels project. For the shepherd and maintenance team, please see the `main Channels readme `_. channels_redis-4.2.0/channels_redis/000077500000000000000000000000001455024562100174635ustar00rootroot00000000000000channels_redis-4.2.0/channels_redis/__init__.py000066400000000000000000000000261455024562100215720ustar00rootroot00000000000000__version__ = "4.2.0" channels_redis-4.2.0/channels_redis/core.py000066400000000000000000000663661455024562100210060ustar00rootroot00000000000000import asyncio import base64 import collections import functools import hashlib import itertools import logging import random import time import uuid import msgpack from redis import asyncio as aioredis from channels.exceptions import ChannelFull from channels.layers import BaseChannelLayer from .utils import ( _close_redis, _consistent_hash, _wrap_close, create_pool, decode_hosts, ) logger = logging.getLogger(__name__) class ChannelLock: """ Helper class for per-channel locking. Once a lock is released and has no waiters, it will also be deleted, to mitigate multi-event loop problems. """ def __init__(self): self.locks = collections.defaultdict(asyncio.Lock) self.wait_counts = collections.defaultdict(int) async def acquire(self, channel): """ Acquire the lock for the given channel. """ self.wait_counts[channel] += 1 return await self.locks[channel].acquire() def locked(self, channel): """ Return ``True`` if the lock for the given channel is acquired. """ return self.locks[channel].locked() def release(self, channel): """ Release the lock for the given channel. """ self.locks[channel].release() self.wait_counts[channel] -= 1 if self.wait_counts[channel] < 1: del self.locks[channel] del self.wait_counts[channel] class BoundedQueue(asyncio.Queue): def put_nowait(self, item): if self.full(): # see: https://github.com/django/channels_redis/issues/212 # if we actually get into this code block, it likely means that # this specific consumer has stopped reading # if we get into this code block, it's better to drop messages # that exceed the channel layer capacity than to continue to # malloc() forever self.get_nowait() return super(BoundedQueue, self).put_nowait(item) class RedisLoopLayer: def __init__(self, channel_layer): self._lock = asyncio.Lock() self.channel_layer = channel_layer self._connections = {} def get_connection(self, index): if index not in self._connections: pool = self.channel_layer.create_pool(index) self._connections[index] = aioredis.Redis(connection_pool=pool) return self._connections[index] async def flush(self): async with self._lock: for index in list(self._connections): connection = self._connections.pop(index) await _close_redis(connection) class RedisChannelLayer(BaseChannelLayer): """ Redis channel layer. It routes all messages into remote Redis server. Support for sharding among different Redis installations and message encryption are provided. 
""" brpop_timeout = 5 def __init__( self, hosts=None, prefix="asgi", expiry=60, group_expiry=86400, capacity=100, channel_capacity=None, symmetric_encryption_keys=None, ): # Store basic information self.expiry = expiry self.group_expiry = group_expiry self.capacity = capacity self.channel_capacity = self.compile_capacities(channel_capacity or {}) self.prefix = prefix assert isinstance(self.prefix, str), "Prefix must be unicode" # Configure the host objects self.hosts = decode_hosts(hosts) self.ring_size = len(self.hosts) # Cached redis connection pools and the event loop they are from self._layers = {} # Normal channels choose a host index by cycling through the available hosts self._receive_index_generator = itertools.cycle(range(len(self.hosts))) self._send_index_generator = itertools.cycle(range(len(self.hosts))) # Decide on a unique client prefix to use in ! sections self.client_prefix = uuid.uuid4().hex # Set up any encryption objects self._setup_encryption(symmetric_encryption_keys) # Number of coroutines trying to receive right now self.receive_count = 0 # The receive lock self.receive_lock = None # Event loop they are trying to receive on self.receive_event_loop = None # Buffered messages by process-local channel name self.receive_buffer = collections.defaultdict( functools.partial(BoundedQueue, self.capacity) ) # Detached channel cleanup tasks self.receive_cleaners = [] # Per-channel cleanup locks to prevent a receive starting and moving # a message back into the main queue before its cleanup has completed self.receive_clean_locks = ChannelLock() def create_pool(self, index): return create_pool(self.hosts[index]) def _setup_encryption(self, symmetric_encryption_keys): # See if we can do encryption if they asked if symmetric_encryption_keys: if isinstance(symmetric_encryption_keys, (str, bytes)): raise ValueError( "symmetric_encryption_keys must be a list of possible keys" ) try: from cryptography.fernet import MultiFernet except ImportError: raise ValueError( "Cannot run with encryption without 'cryptography' installed." ) sub_fernets = [self.make_fernet(key) for key in symmetric_encryption_keys] self.crypter = MultiFernet(sub_fernets) else: self.crypter = None ### Channel layer API ### extensions = ["groups", "flush"] async def send(self, channel, message): """ Send a message onto a (general or specific) channel. """ # Typecheck assert isinstance(message, dict), "message is not a dict" assert self.valid_channel_name(channel), "Channel name not valid" # Make sure the message does not contain reserved keys assert "__asgi_channel__" not in message # If it's a process-local channel, strip off local part and stick full name in message channel_non_local_name = channel if "!" in channel: message = dict(message.items()) message["__asgi_channel__"] = channel channel_non_local_name = self.non_local_name(channel) # Write out message into expiring key (avoids big items in list) channel_key = self.prefix + channel_non_local_name # Pick a connection to the right server - consistent for specific # channels, random for general channels if "!" in channel: index = self.consistent_hash(channel) else: index = next(self._send_index_generator) connection = self.connection(index) # Discard old messages based on expiry await connection.zremrangebyscore( channel_key, min=0, max=int(time.time()) - int(self.expiry) ) # Check the length of the list before send # This can allow the list to leak slightly over capacity, but that's fine. 
if await connection.zcount(channel_key, "-inf", "+inf") >= self.get_capacity( channel ): raise ChannelFull() # Push onto the list then set it to expire in case it's not consumed await connection.zadd(channel_key, {self.serialize(message): time.time()}) await connection.expire(channel_key, int(self.expiry)) def _backup_channel_name(self, channel): """ Construct the key used as a backup queue for the given channel. """ return channel + "$inflight" async def _brpop_with_clean(self, index, channel, timeout): """ Perform a Redis BRPOP and manage the backup processing queue. In case of cancellation, make sure the message is not lost. """ # The script will pop messages from the processing queue and push them in front # of the main message queue in the proper order; BRPOP must *not* be called # because that would deadlock the server cleanup_script = """ local backed_up = redis.call('ZRANGE', ARGV[2], 0, -1, 'WITHSCORES') for i = #backed_up, 1, -2 do redis.call('ZADD', ARGV[1], backed_up[i], backed_up[i - 1]) end redis.call('DEL', ARGV[2]) """ backup_queue = self._backup_channel_name(channel) connection = self.connection(index) # Cancellation here doesn't matter, we're not doing anything destructive # and the script executes atomically... await connection.eval(cleanup_script, 0, channel, backup_queue) # ...and it doesn't matter here either, the message will be safe in the backup. result = await connection.bzpopmin(channel, timeout=timeout) if result is not None: _, member, timestamp = result await connection.zadd(backup_queue, {member: float(timestamp)}) else: member = None return member async def _clean_receive_backup(self, index, channel): """ Pop the oldest message off the channel backup queue. The result isn't interesting as it was already processed. """ connection = self.connection(index) await connection.zpopmin(self._backup_channel_name(channel)) async def receive(self, channel): """ Receive the first message that arrives on the channel. If more than one coroutine waits on the same channel, the first waiter will be given the message when it arrives. """ # Make sure the channel name is valid then get the non-local part # and thus its index assert self.valid_channel_name(channel) if "!" in channel: real_channel = self.non_local_name(channel) assert real_channel.endswith( self.client_prefix + "!" ), "Wrong client prefix" # Enter receiving section loop = asyncio.get_running_loop() self.receive_count += 1 try: if self.receive_count == 1: # If we're the first coroutine in, create the receive lock! self.receive_lock = asyncio.Lock() self.receive_event_loop = loop else: # Otherwise, check our event loop matches if self.receive_event_loop != loop: raise RuntimeError( "Two event loops are trying to receive() on one channel layer at once!" ) # Wait for our message to appear message = None while self.receive_buffer[channel].empty(): tasks = [ self.receive_lock.acquire(), self.receive_buffer[channel].get(), ] tasks = [asyncio.ensure_future(task) for task in tasks] try: done, pending = await asyncio.wait( tasks, return_when=asyncio.FIRST_COMPLETED ) for task in pending: # Cancel all pending tasks. task.cancel() except asyncio.CancelledError: # Ensure all tasks are cancelled if we are cancelled. 
# Also see: https://bugs.python.org/issue23859 del self.receive_buffer[channel] for task in tasks: if not task.cancel(): assert task.done() if task.result() is True: self.receive_lock.release() raise message = token = exception = None for task in done: try: result = task.result() except BaseException as error: # NOQA # We should not propagate exceptions immediately as otherwise this may cause # the lock to be held and never be released. exception = error continue if result is True: token = result else: assert isinstance(result, dict) message = result if message or exception: if token: # We will not be receving as we already have the message. self.receive_lock.release() if exception: raise exception else: break else: assert token # We hold the receive lock, receive and then release it. try: # There is no interruption point from when the message is # unpacked in receive_single to when we get back here, so # the following lines are essentially atomic. message_channel, message = await self.receive_single( real_channel ) if isinstance(message_channel, list): for chan in message_channel: self.receive_buffer[chan].put_nowait(message) else: self.receive_buffer[message_channel].put_nowait(message) message = None except Exception: del self.receive_buffer[channel] raise finally: self.receive_lock.release() # We know there's a message available, because there # couldn't have been any interruption between empty() and here if message is None: message = self.receive_buffer[channel].get_nowait() if self.receive_buffer[channel].empty(): del self.receive_buffer[channel] return message finally: self.receive_count -= 1 # If we were the last out, drop the receive lock if self.receive_count == 0: assert not self.receive_lock.locked() self.receive_lock = None self.receive_event_loop = None else: # Do a plain direct receive return (await self.receive_single(channel))[1] async def receive_single(self, channel): """ Receives a single message off of the channel and returns it. """ # Check channel name assert self.valid_channel_name(channel, receive=True), "Channel name invalid" # Work out the connection to use if "!" in channel: assert channel.endswith("!") index = self.consistent_hash(channel) else: index = next(self._receive_index_generator) channel_key = self.prefix + channel content = None await self.receive_clean_locks.acquire(channel_key) try: while content is None: # Nothing is lost here by cancellations, messages will still # be in the backup queue. content = await self._brpop_with_clean( index, channel_key, timeout=self.brpop_timeout ) # Fire off a task to clean the message from its backup queue. # Per-channel locking isn't needed, because the backup is a queue # and additionally, we don't care about the order; all processed # messages need to be removed, no matter if the current one is # removed after the next one. # NOTE: Duplicate messages will be received eventually if any # of these cleaners are cancelled. cleaner = asyncio.ensure_future( self._clean_receive_backup(index, channel_key) ) self.receive_cleaners.append(cleaner) def _cleanup_done(cleaner): self.receive_cleaners.remove(cleaner) self.receive_clean_locks.release(channel_key) cleaner.add_done_callback(_cleanup_done) except BaseException: self.receive_clean_locks.release(channel_key) raise # Message decode message = self.deserialize(content) # TODO: message expiry? # If there is a full channel name stored in the message, unpack it. 
if "__asgi_channel__" in message: channel = message["__asgi_channel__"] del message["__asgi_channel__"] return channel, message async def new_channel(self, prefix="specific"): """ Returns a new channel name that can be used by something in our process as a specific channel. """ return f"{prefix}.{self.client_prefix}!{uuid.uuid4().hex}" ### Flush extension ### async def flush(self): """ Deletes all messages and groups on all shards. """ # Make sure all channel cleaners have finished before removing # keys from under their feet. await self.wait_received() # Lua deletion script delete_prefix = """ local keys = redis.call('keys', ARGV[1]) for i=1,#keys,5000 do redis.call('del', unpack(keys, i, math.min(i+4999, #keys))) end """ # Go through each connection and remove all with prefix for i in range(self.ring_size): connection = self.connection(i) await connection.eval(delete_prefix, 0, self.prefix + "*") # Now clear the pools as well await self.close_pools() async def close_pools(self): """ Close all connections in the event loop pools. """ # Flush all cleaners, in case somebody just wanted to close the # pools without flushing first. await self.wait_received() for layer in self._layers.values(): await layer.flush() async def wait_received(self): """ Wait for all channel cleanup functions to finish. """ if self.receive_cleaners: await asyncio.wait(self.receive_cleaners[:]) ### Groups extension ### async def group_add(self, group, channel): """ Adds the channel name to a group. """ # Check the inputs assert self.valid_group_name(group), "Group name not valid" assert self.valid_channel_name(channel), "Channel name not valid" # Get a connection to the right shard group_key = self._group_key(group) connection = self.connection(self.consistent_hash(group)) # Add to group sorted set with creation time as timestamp await connection.zadd(group_key, {channel: time.time()}) # Set expiration to be group_expiry, since everything in # it at this point is guaranteed to expire before that await connection.expire(group_key, self.group_expiry) async def group_discard(self, group, channel): """ Removes the channel from the named group if it is in the group; does nothing otherwise (does not error) """ assert self.valid_group_name(group), "Group name not valid" assert self.valid_channel_name(channel), "Channel name not valid" key = self._group_key(group) connection = self.connection(self.consistent_hash(group)) await connection.zrem(key, channel) async def group_send(self, group, message): """ Sends a message to the entire group. """ assert self.valid_group_name(group), "Group name not valid" # Retrieve list of all channel names key = self._group_key(group) connection = self.connection(self.consistent_hash(group)) # Discard old channels based on group_expiry await connection.zremrangebyscore( key, min=0, max=int(time.time()) - self.group_expiry ) channel_names = [x.decode("utf8") for x in await connection.zrange(key, 0, -1)] ( connection_to_channel_keys, channel_keys_to_message, channel_keys_to_capacity, ) = self._map_channel_keys_to_connection(channel_names, message) for connection_index, channel_redis_keys in connection_to_channel_keys.items(): # Discard old messages based on expiry pipe = connection.pipeline() for key in channel_redis_keys: pipe.zremrangebyscore( key, min=0, max=int(time.time()) - int(self.expiry) ) await pipe.execute() # Create a LUA script specific for this connection. 
            # Make sure to use the message specific to this channel; it is
            # stored in the channel_key_to_message dict and contains the
            # __asgi_channel__ key.

            group_send_lua = """
                local over_capacity = 0
                local current_time = ARGV[#ARGV - 1]
                local expiry = ARGV[#ARGV]
                for i=1,#KEYS do
                    if redis.call('ZCOUNT', KEYS[i], '-inf', '+inf') < tonumber(ARGV[i + #KEYS]) then
                        redis.call('ZADD', KEYS[i], current_time, ARGV[i])
                        redis.call('EXPIRE', KEYS[i], expiry)
                    else
                        over_capacity = over_capacity + 1
                    end
                end
                return over_capacity
            """

            # We need to filter the messages to keep those related to this connection
            args = [
                channel_keys_to_message[channel_key]
                for channel_key in channel_redis_keys
            ]

            # We need to send the capacity for each channel
            args += [
                channel_keys_to_capacity[channel_key]
                for channel_key in channel_redis_keys
            ]

            args += [time.time(), self.expiry]

            # channel_keys does not contain a single redis key more than once
            connection = self.connection(connection_index)
            channels_over_capacity = await connection.eval(
                group_send_lua, len(channel_redis_keys), *channel_redis_keys, *args
            )
            if channels_over_capacity > 0:
                logger.info(
                    "%s of %s channels over capacity in group %s",
                    channels_over_capacity,
                    len(channel_names),
                    group,
                )

    def _map_channel_keys_to_connection(self, channel_names, message):
        """
        For a list of channel names:

        1. bucket each channel's redis key into a dict keyed by the
           connection index
        2. for each unique channel redis key, create a serialized message
           specific to that redis key, by adding the list of channels mapped
           to that redis key under the __asgi_channel__ key of the message
        3. return a mapping of redis channel keys to their capacity
        """
        # Connection dict keyed by index to list of redis keys mapped on that index
        connection_to_channel_keys = collections.defaultdict(list)
        # Message dict maps each redis key to the message that needs to be sent on that key
        channel_key_to_message = dict()
        # Channel key mapped to its capacity
        channel_key_to_capacity = dict()

        # For each channel
        for channel in channel_names:
            channel_non_local_name = channel
            if "!" in channel:
                channel_non_local_name = self.non_local_name(channel)
            # Get its redis key
            channel_key = self.prefix + channel_non_local_name
            # Have we come across the same redis key?
            if channel_key not in channel_key_to_message:
                # If not, fill the corresponding dicts
                message = dict(message.items())
                message["__asgi_channel__"] = [channel]
                channel_key_to_message[channel_key] = message
                channel_key_to_capacity[channel_key] = self.get_capacity(channel)
                idx = self.consistent_hash(channel_non_local_name)
                connection_to_channel_keys[idx].append(channel_key)
            else:
                # Yes: append the channel to that key's message dict
                channel_key_to_message[channel_key]["__asgi_channel__"].append(channel)

        # Now that we know what message needs to be sent on each redis key, we serialize it
        for key, value in channel_key_to_message.items():
            # Serialize the message stored for each redis key
            channel_key_to_message[key] = self.serialize(value)

        return (
            connection_to_channel_keys,
            channel_key_to_message,
            channel_key_to_capacity,
        )

    def _group_key(self, group):
        """
        Common function to make the storage key for the group.
        """
        return f"{self.prefix}:group:{group}".encode("utf8")

    ### Serialization ###

    def serialize(self, message):
        """
        Serializes message to a byte string.
        """
        value = msgpack.packb(message, use_bin_type=True)
        if self.crypter:
            value = self.crypter.encrypt(value)

        # As we use a sorted set to expire messages we need to guarantee uniqueness, with 12 bytes.
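        # getrandbits(8 * 12) yields 96 random bits, rendered as a fixed
        # 12-byte prefix; deserialize() strips the same 12 bytes back off.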
random_prefix = random.getrandbits(8 * 12).to_bytes(12, "big") return random_prefix + value def deserialize(self, message): """ Deserializes from a byte string. """ # Removes the random prefix message = message[12:] if self.crypter: message = self.crypter.decrypt(message, self.expiry + 10) return msgpack.unpackb(message, raw=False) ### Internal functions ### def consistent_hash(self, value): return _consistent_hash(value, self.ring_size) def make_fernet(self, key): """ Given a single encryption key, returns a Fernet instance using it. """ from cryptography.fernet import Fernet if isinstance(key, str): key = key.encode("utf8") formatted_key = base64.urlsafe_b64encode(hashlib.sha256(key).digest()) return Fernet(formatted_key) def __str__(self): return f"{self.__class__.__name__}(hosts={self.hosts})" ### Connection handling ### def connection(self, index): """ Returns the correct connection for the index given. Lazily instantiates pools. """ # Catch bad indexes if not 0 <= index < self.ring_size: raise ValueError( f"There are only {self.ring_size} hosts - you asked for {index}!" ) loop = asyncio.get_running_loop() try: layer = self._layers[loop] except KeyError: _wrap_close(self, loop) layer = self._layers[loop] = RedisLoopLayer(self) return layer.get_connection(index) channels_redis-4.2.0/channels_redis/pubsub.py000066400000000000000000000277431455024562100213520ustar00rootroot00000000000000import asyncio import functools import logging import uuid import msgpack from redis import asyncio as aioredis from .utils import ( _close_redis, _consistent_hash, _wrap_close, create_pool, decode_hosts, ) logger = logging.getLogger(__name__) async def _async_proxy(obj, name, *args, **kwargs): # Must be defined as a function and not a method due to # https://bugs.python.org/issue38364 layer = obj._get_layer() return await getattr(layer, name)(*args, **kwargs) class RedisPubSubChannelLayer: def __init__(self, *args, **kwargs) -> None: self._args = args self._kwargs = kwargs self._layers = {} def __getattr__(self, name): if name in ( "new_channel", "send", "receive", "group_add", "group_discard", "group_send", "flush", ): return functools.partial(_async_proxy, self, name) else: return getattr(self._get_layer(), name) def serialize(self, message): """ Serializes message to a byte string. """ return msgpack.packb(message) def deserialize(self, message): """ Deserializes from a byte string. """ return msgpack.unpackb(message) def _get_layer(self): loop = asyncio.get_running_loop() try: layer = self._layers[loop] except KeyError: layer = RedisPubSubLoopLayer( *self._args, **self._kwargs, channel_layer=self, ) self._layers[loop] = layer _wrap_close(self, loop) return layer class RedisPubSubLoopLayer: """ Channel Layer that uses Redis's pub/sub functionality. """ def __init__( self, hosts=None, prefix="asgi", on_disconnect=None, on_reconnect=None, channel_layer=None, **kwargs, ): self.prefix = prefix self.on_disconnect = on_disconnect self.on_reconnect = on_reconnect self.channel_layer = channel_layer # Each consumer gets its own *specific* channel, created with the `new_channel()` method. # This dict maps `channel_name` to a queue of messages for that channel. self.channels = {} # A channel can subscribe to zero or more groups. # This dict maps `group_name` to set of channel names who are subscribed to that group. self.groups = {} # For each host, we create a `RedisSingleShardConnection` to manage the connection to that host. 
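        # Each channel or group is routed to exactly one shard, via the same
        # consistent hash the core layer uses (see _get_shard() below).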
        self._shards = [
            RedisSingleShardConnection(host, self) for host in decode_hosts(hosts)
        ]

    def _get_shard(self, channel_or_group_name):
        """
        Return the shard that is used exclusively for this channel or group.
        """
        return self._shards[_consistent_hash(channel_or_group_name, len(self._shards))]

    def _get_group_channel_name(self, group):
        """
        Return the channel name used by a group.
        Includes '__group__' in the returned string so that these names are
        distinguished from those returned by `new_channel()`.
        Technically collisions are possible, but it takes what I believe is
        intentional abuse in order to have colliding names.
        """
        return f"{self.prefix}__group__{group}"

    async def _subscribe_to_channel(self, channel):
        self.channels[channel] = asyncio.Queue()
        shard = self._get_shard(channel)
        await shard.subscribe(channel)

    extensions = ["groups", "flush"]

    ################################################################################
    # Channel layer API
    ################################################################################

    async def send(self, channel, message):
        """
        Send a message onto a (general or specific) channel.
        """
        shard = self._get_shard(channel)
        await shard.publish(channel, self.channel_layer.serialize(message))

    async def new_channel(self, prefix="specific."):
        """
        Returns a new channel name that can be used by a consumer in our
        process as a specific channel.
        """
        channel = f"{self.prefix}{prefix}{uuid.uuid4().hex}"
        await self._subscribe_to_channel(channel)
        return channel

    async def receive(self, channel):
        """
        Receive the first message that arrives on the channel.
        If more than one coroutine waits on the same channel, a random one
        of the waiting coroutines will get the result.
        """
        if channel not in self.channels:
            await self._subscribe_to_channel(channel)

        q = self.channels[channel]

        try:
            message = await q.get()
        except (asyncio.CancelledError, asyncio.TimeoutError, GeneratorExit):
            # We assume here that the reason we are cancelled is because the consumer
            # is exiting, therefore we need to clean up by unsubscribing below. Indeed,
            # currently the way that Django Channels works, this is a safe assumption.
            # In the future, Django Channels could change to call a *new* method that
            # would serve as the antithesis of `new_channel()`; this new method might
            # be named `delete_channel()`. If that were the case, we would do the
            # following cleanup from that new `delete_channel()` method, but, since
            # that's not how Django Channels works (yet), we do the cleanup below:
            if channel in self.channels:
                del self.channels[channel]
                try:
                    shard = self._get_shard(channel)
                    await shard.unsubscribe(channel)
                except BaseException:
                    logger.exception("Unexpected exception while cleaning-up channel:")
                    # We don't re-raise here because we want the CancelledError to be
                    # the one re-raised.
            raise

        return self.channel_layer.deserialize(message)

    ################################################################################
    # Groups extension
    ################################################################################

    async def group_add(self, group, channel):
        """
        Adds the channel name to a group.
""" if channel not in self.channels: raise RuntimeError( "You can only call group_add() on channels that exist in-process.\n" "Consumers are encouraged to use the common pattern:\n" f" self.channel_layer.group_add({repr(group)}, self.channel_name)" ) group_channel = self._get_group_channel_name(group) if group_channel not in self.groups: self.groups[group_channel] = set() group_channels = self.groups[group_channel] if channel not in group_channels: group_channels.add(channel) shard = self._get_shard(group_channel) await shard.subscribe(group_channel) async def group_discard(self, group, channel): """ Removes the channel from a group if it is in the group; does nothing otherwise (does not error) """ group_channel = self._get_group_channel_name(group) group_channels = self.groups.get(group_channel, set()) if channel not in group_channels: return group_channels.remove(channel) if len(group_channels) == 0: del self.groups[group_channel] shard = self._get_shard(group_channel) await shard.unsubscribe(group_channel) async def group_send(self, group, message): """ Send the message to all subscribers of the group. """ group_channel = self._get_group_channel_name(group) shard = self._get_shard(group_channel) await shard.publish(group_channel, self.channel_layer.serialize(message)) ################################################################################ # Flush extension ################################################################################ async def flush(self): """ Flush the layer, making it like new. It can continue to be used as if it was just created. This also closes connections, serving as a clean-up method; connections will be re-opened if you continue using this layer. """ self.channels = {} self.groups = {} for shard in self._shards: await shard.flush() class RedisSingleShardConnection: def __init__(self, host, channel_layer): self.host = host self.channel_layer = channel_layer self._subscribed_to = set() self._lock = asyncio.Lock() self._redis = None self._pubsub = None self._receive_task = None async def publish(self, channel, message): async with self._lock: self._ensure_redis() await self._redis.publish(channel, message) async def subscribe(self, channel): async with self._lock: if channel not in self._subscribed_to: self._ensure_redis() self._ensure_receiver() await self._pubsub.subscribe(channel) self._subscribed_to.add(channel) async def unsubscribe(self, channel): async with self._lock: if channel in self._subscribed_to: self._ensure_redis() self._ensure_receiver() await self._pubsub.unsubscribe(channel) self._subscribed_to.remove(channel) async def flush(self): async with self._lock: if self._receive_task is not None: self._receive_task.cancel() try: await self._receive_task except asyncio.CancelledError: pass self._receive_task = None if self._redis is not None: # The pool was created just for this client, so make sure it is closed, # otherwise it will schedule the connection to be closed inside the # __del__ method, which doesn't have a loop running anymore. 
await _close_redis(self._redis) self._redis = None self._pubsub = None self._subscribed_to = set() async def _do_receiving(self): while True: try: if self._pubsub and self._pubsub.subscribed: message = await self._pubsub.get_message( ignore_subscribe_messages=True, timeout=0.1 ) self._receive_message(message) else: await asyncio.sleep(0.1) except ( asyncio.CancelledError, asyncio.TimeoutError, GeneratorExit, ): raise except BaseException: logger.exception("Unexpected exception in receive task") await asyncio.sleep(1) def _receive_message(self, message): if message is not None: name = message["channel"] data = message["data"] if isinstance(name, bytes): name = name.decode() if name in self.channel_layer.channels: self.channel_layer.channels[name].put_nowait(data) elif name in self.channel_layer.groups: for channel_name in self.channel_layer.groups[name]: if channel_name in self.channel_layer.channels: self.channel_layer.channels[channel_name].put_nowait(data) def _ensure_redis(self): if self._redis is None: pool = create_pool(self.host) self._redis = aioredis.Redis(connection_pool=pool) self._pubsub = self._redis.pubsub() def _ensure_receiver(self): if self._receive_task is None: self._receive_task = asyncio.ensure_future(self._do_receiving()) channels_redis-4.2.0/channels_redis/utils.py000066400000000000000000000054241455024562100212020ustar00rootroot00000000000000import binascii import types from redis import asyncio as aioredis def _consistent_hash(value, ring_size): """ Maps the value to a node value between 0 and 4095 using CRC, then down to one of the ring nodes. """ if ring_size == 1: # Avoid the overhead of hashing and modulo when it is unnecessary. return 0 if isinstance(value, str): value = value.encode("utf8") bigval = binascii.crc32(value) & 0xFFF ring_divisor = 4096 / float(ring_size) return int(bigval / ring_divisor) def _wrap_close(proxy, loop): original_impl = loop.close def _wrapper(self, *args, **kwargs): if loop in proxy._layers: layer = proxy._layers[loop] del proxy._layers[loop] loop.run_until_complete(layer.flush()) self.close = original_impl return self.close(*args, **kwargs) loop.close = types.MethodType(_wrapper, loop) async def _close_redis(connection): """ Handle compatibility with redis-py 4.x and 5.x close methods """ try: await connection.aclose(close_connection_pool=True) except AttributeError: await connection.close(close_connection_pool=True) def decode_hosts(hosts): """ Takes the value of the "hosts" argument and returns a list of kwargs to use for the Redis connection constructor. """ # If no hosts were provided, return a default value if not hosts: return [{"address": "redis://localhost:6379"}] # If they provided just a string, scold them. if isinstance(hosts, (str, bytes)): raise ValueError( "You must pass a list of Redis hosts, even if there is only one." ) # Decode each hosts entry into a kwargs dict result = [] for entry in hosts: if isinstance(entry, dict): result.append(entry) elif isinstance(entry, (tuple, list)): result.append({"host": entry[0], "port": entry[1]}) else: result.append({"address": entry}) return result def create_pool(host): """ Takes the value of the "host" argument and returns a suited connection pool to the corresponding redis instance. 
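
    Address-style hosts go through ``ConnectionPool.from_url``; hosts with a
    ``master_name`` build a Sentinel-backed pool; anything else is passed
    straight to ``ConnectionPool``.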
""" # avoid side-effects from modifying host host = host.copy() if "address" in host: address = host.pop("address") return aioredis.ConnectionPool.from_url(address, **host) master_name = host.pop("master_name", None) if master_name is not None: sentinels = host.pop("sentinels") sentinel_kwargs = host.pop("sentinel_kwargs", None) return aioredis.sentinel.SentinelConnectionPool( master_name, aioredis.sentinel.Sentinel(sentinels, sentinel_kwargs=sentinel_kwargs), **host ) return aioredis.ConnectionPool(**host) channels_redis-4.2.0/setup.cfg000066400000000000000000000004421455024562100163230ustar00rootroot00000000000000[tool:pytest] addopts = -p no:django testpaths = tests asyncio_mode = auto timeout = 10 [flake8] exclude = venv/*,tox/*,specs/*,build/* ignore = E123,E128,E266,E402,W503,E731,W601 max-line-length = 119 [isort] profile = black known_first_party = channels, asgiref, channels_redis, daphne channels_redis-4.2.0/setup.py000066400000000000000000000020131455024562100162100ustar00rootroot00000000000000from os.path import dirname, join from setuptools import find_packages, setup from channels_redis import __version__ # We use the README as the long_description readme = open(join(dirname(__file__), "README.rst")).read() crypto_requires = ["cryptography>=1.3.0"] test_requires = crypto_requires + [ "pytest", "pytest-asyncio", "async-timeout", "pytest-timeout", ] setup( name="channels_redis", version=__version__, url="http://github.com/django/channels_redis/", author="Django Software Foundation", author_email="foundation@djangoproject.com", description="Redis-backed ASGI channel layer implementation", long_description=readme, license="BSD", zip_safe=False, packages=find_packages(exclude=["tests"]), include_package_data=True, python_requires=">=3.8", install_requires=[ "redis>=4.6", "msgpack~=1.0", "asgiref>=3.2.10,<4", "channels", ], extras_require={"cryptography": crypto_requires, "tests": test_requires}, ) channels_redis-4.2.0/tests/000077500000000000000000000000001455024562100156445ustar00rootroot00000000000000channels_redis-4.2.0/tests/__init__.py000066400000000000000000000000001455024562100177430ustar00rootroot00000000000000channels_redis-4.2.0/tests/test_core.py000066400000000000000000000566641455024562100202260ustar00rootroot00000000000000import asyncio import random import async_timeout import pytest from asgiref.sync import async_to_sync from channels_redis.core import ChannelFull, RedisChannelLayer TEST_HOSTS = ["redis://localhost:6379"] MULTIPLE_TEST_HOSTS = [ "redis://localhost:6379/0", "redis://localhost:6379/1", "redis://localhost:6379/2", "redis://localhost:6379/3", "redis://localhost:6379/4", "redis://localhost:6379/5", "redis://localhost:6379/6", "redis://localhost:6379/7", "redis://localhost:6379/8", "redis://localhost:6379/9", ] async def send_three_messages_with_delay(channel_name, channel_layer, delay): await channel_layer.send(channel_name, {"type": "test.message", "text": "First!"}) await asyncio.sleep(delay) await channel_layer.send(channel_name, {"type": "test.message", "text": "Second!"}) await asyncio.sleep(delay) await channel_layer.send(channel_name, {"type": "test.message", "text": "Third!"}) async def group_send_three_messages_with_delay(group_name, channel_layer, delay): await channel_layer.group_send( group_name, {"type": "test.message", "text": "First!"} ) await asyncio.sleep(delay) await channel_layer.group_send( group_name, {"type": "test.message", "text": "Second!"} ) await asyncio.sleep(delay) await channel_layer.group_send( group_name, {"type": 
"test.message", "text": "Third!"} ) @pytest.fixture() async def channel_layer(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisChannelLayer( hosts=TEST_HOSTS, capacity=3, channel_capacity={"tiny": 1} ) yield channel_layer await channel_layer.flush() @pytest.fixture() async def channel_layer_multiple_hosts(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=3) yield channel_layer await channel_layer.flush() @pytest.mark.asyncio async def test_send_receive(channel_layer): """ Makes sure we can send a message to a normal channel then receive it. """ await channel_layer.send( "test-channel-1", {"type": "test.message", "text": "Ahoy-hoy!"} ) message = await channel_layer.receive("test-channel-1") assert message["type"] == "test.message" assert message["text"] == "Ahoy-hoy!" @pytest.mark.parametrize("channel_layer", [None]) # Fixture can't handle sync def test_double_receive(channel_layer): """ Makes sure we can receive from two different event loops using process-local channel names. """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS, capacity=3) # Aioredis connections can't be used from different event loops, so # send and close need to be done in the same async_to_sync call. async def send_and_close(*args, **kwargs): await channel_layer.send(*args, **kwargs) await channel_layer.close_pools() channel_name_1 = async_to_sync(channel_layer.new_channel)() channel_name_2 = async_to_sync(channel_layer.new_channel)() async_to_sync(send_and_close)(channel_name_1, {"type": "test.message.1"}) async_to_sync(send_and_close)(channel_name_2, {"type": "test.message.2"}) # Make things to listen on the loops async def listen1(): message = await channel_layer.receive(channel_name_1) assert message["type"] == "test.message.1" await channel_layer.close_pools() async def listen2(): message = await channel_layer.receive(channel_name_2) assert message["type"] == "test.message.2" await channel_layer.close_pools() # Run them inside threads async_to_sync(listen2)() async_to_sync(listen1)() # Clean up async_to_sync(channel_layer.flush)() @pytest.mark.asyncio async def test_send_capacity(channel_layer): """ Makes sure we get ChannelFull when we hit the send capacity """ await channel_layer.send("test-channel-1", {"type": "test.message"}) await channel_layer.send("test-channel-1", {"type": "test.message"}) await channel_layer.send("test-channel-1", {"type": "test.message"}) with pytest.raises(ChannelFull): await channel_layer.send("test-channel-1", {"type": "test.message"}) @pytest.mark.asyncio async def test_send_specific_capacity(channel_layer): """ Makes sure we get ChannelFull when we hit the send capacity on a specific channel """ custom_channel_layer = RedisChannelLayer( hosts=TEST_HOSTS, capacity=3, channel_capacity={"one": 1} ) await custom_channel_layer.send("one", {"type": "test.message"}) with pytest.raises(ChannelFull): await custom_channel_layer.send("one", {"type": "test.message"}) await custom_channel_layer.flush() @pytest.mark.asyncio async def test_process_local_send_receive(channel_layer): """ Makes sure we can send a message to a process-local channel then receive it. 
""" channel_name = await channel_layer.new_channel() await channel_layer.send( channel_name, {"type": "test.message", "text": "Local only please"} ) message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Local only please" @pytest.mark.asyncio async def test_multi_send_receive(channel_layer): """ Tests overlapping sends and receives, and ordering. """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) await channel_layer.send("test-channel-3", {"type": "message.1"}) await channel_layer.send("test-channel-3", {"type": "message.2"}) await channel_layer.send("test-channel-3", {"type": "message.3"}) assert (await channel_layer.receive("test-channel-3"))["type"] == "message.1" assert (await channel_layer.receive("test-channel-3"))["type"] == "message.2" assert (await channel_layer.receive("test-channel-3"))["type"] == "message.3" await channel_layer.flush() @pytest.mark.asyncio async def test_reject_bad_channel(channel_layer): """ Makes sure sending/receiving on an invalic channel name fails. """ with pytest.raises(TypeError): await channel_layer.send("=+135!", {"type": "foom"}) with pytest.raises(TypeError): await channel_layer.receive("=+135!") @pytest.mark.asyncio async def test_reject_bad_client_prefix(channel_layer): """ Makes sure receiving on a non-prefixed local channel is not allowed. """ with pytest.raises(AssertionError): await channel_layer.receive("not-client-prefix!local_part") @pytest.mark.asyncio async def test_groups_basic(channel_layer): """ Tests basic group operation. """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" # Make sure the removed channel did not get the message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) await channel_layer.flush() @pytest.mark.asyncio async def test_groups_channel_full(channel_layer): """ Tests that group_send ignores ChannelFull """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) await channel_layer.group_add("test-group", "test-gr-chan-1") await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.flush() @pytest.mark.asyncio async def test_groups_multiple_hosts(channel_layer_multiple_hosts): """ Tests advanced group operation with multiple hosts. 
""" channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100) channel_name1 = await channel_layer.new_channel(prefix="channel1") channel_name2 = await channel_layer.new_channel(prefix="channel2") channel_name3 = await channel_layer.new_channel(prefix="channel3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) await channel_layer.flush() @pytest.mark.asyncio async def test_groups_same_prefix(channel_layer): """ Tests group_send with multiple channels with same channel prefix """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name2))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" await channel_layer.flush() @pytest.mark.parametrize( "num_channels,timeout", [ (1, 1), # Edge cases - make sure we can send to a single channel (10, 1), (100, 10), ], ) @pytest.mark.asyncio async def test_groups_multiple_hosts_performance( channel_layer_multiple_hosts, num_channels, timeout ): """ Tests advanced group operation: can send efficiently to multiple channels with multiple hosts within a certain timeout """ channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100) channels = [] for i in range(0, num_channels): channel = await channel_layer.new_channel(prefix="channel%s" % i) await channel_layer.group_add("test-group", channel) channels.append(channel) async with async_timeout.timeout(timeout): await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message all the channels async with async_timeout.timeout(timeout): for channel in channels: assert (await channel_layer.receive(channel))["type"] == "message.1" await channel_layer.flush() @pytest.mark.asyncio async def test_group_send_capacity(channel_layer, caplog): """ Makes sure we dont group_send messages to channels that are over capacity. Make sure number of channels with full capacity are logged as an exception to help debug errors. 
""" channel = await channel_layer.new_channel() await channel_layer.group_add("test-group", channel) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.2"}) await channel_layer.group_send("test-group", {"type": "message.3"}) await channel_layer.group_send("test-group", {"type": "message.4"}) # We should receive the first 3 messages assert (await channel_layer.receive(channel))["type"] == "message.1" assert (await channel_layer.receive(channel))["type"] == "message.2" assert (await channel_layer.receive(channel))["type"] == "message.3" # Make sure we do NOT receive message 4 with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel) # Make sure number of channels over capacity are logged for record in caplog.records: assert record.levelname == "INFO" assert ( record.getMessage() == "1 of 1 channels over capacity in group test-group" ) @pytest.mark.asyncio async def test_group_send_capacity_multiple_channels(channel_layer, caplog): """ Makes sure we dont group_send messages to channels that are over capacity Make sure number of channels with full capacity are logged as an exception to help debug errors. """ channel_1 = await channel_layer.new_channel() channel_2 = await channel_layer.new_channel(prefix="channel_2") await channel_layer.group_add("test-group", channel_1) await channel_layer.group_add("test-group", channel_2) # Let's put channel_2 over capacity await channel_layer.send(channel_2, {"type": "message.0"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.2"}) await channel_layer.group_send("test-group", {"type": "message.3"}) # Channel_1 should receive all 3 group messages assert (await channel_layer.receive(channel_1))["type"] == "message.1" assert (await channel_layer.receive(channel_1))["type"] == "message.2" assert (await channel_layer.receive(channel_1))["type"] == "message.3" # Channel_2 should receive the first message + 2 group messages assert (await channel_layer.receive(channel_2))["type"] == "message.0" assert (await channel_layer.receive(channel_2))["type"] == "message.1" assert (await channel_layer.receive(channel_2))["type"] == "message.2" # Make sure channel_2 does not receive the 3rd group message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_2) # Make sure number of channels over capacity are logged for record in caplog.records: assert record.levelname == "INFO" assert ( record.getMessage() == "1 of 2 channels over capacity in group test-group" ) def test_repeated_group_send_with_async_to_sync(channel_layer): """ Makes sure repeated group_send calls wrapped in async_to_sync process-local channel names. """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS, capacity=3) try: async_to_sync(channel_layer.group_send)( "channel_name_1", {"type": "test.message.1"} ) async_to_sync(channel_layer.group_send)( "channel_name_2", {"type": "test.message.2"} ) except RuntimeError as exc: pytest.fail(f"repeated async_to_sync wrapped group_send calls raised {exc}") @pytest.mark.xfail( reason=""" Fails with error in redis-py: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'. 
Refs: #348 """ ) @pytest.mark.asyncio async def test_receive_cancel(channel_layer): """ Makes sure we can cancel a receive without blocking """ channel_layer = RedisChannelLayer(capacity=30) channel = await channel_layer.new_channel() delay = 0 while delay < 0.01: await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"}) task = asyncio.ensure_future(channel_layer.receive(channel)) await asyncio.sleep(delay) task.cancel() delay += 0.0001 try: await asyncio.wait_for(task, None) except asyncio.CancelledError: pass @pytest.mark.asyncio async def test_random_reset__channel_name(channel_layer): """ Makes sure resetting random seed does not make us reuse channel names. """ channel_layer = RedisChannelLayer() random.seed(1) channel_name_1 = await channel_layer.new_channel() random.seed(1) channel_name_2 = await channel_layer.new_channel() assert channel_name_1 != channel_name_2 @pytest.mark.asyncio async def test_random_reset__client_prefix(channel_layer): """ Makes sure resetting random seed does not make us reuse client_prefixes. """ random.seed(1) channel_layer_1 = RedisChannelLayer() random.seed(1) channel_layer_2 = RedisChannelLayer() assert channel_layer_1.client_prefix != channel_layer_2.client_prefix @pytest.mark.asyncio async def test_message_expiry__earliest_message_expires(channel_layer): expiry = 3 delay = 2 channel_layer = RedisChannelLayer(expiry=expiry) channel_name = await channel_layer.new_channel() task = asyncio.ensure_future( send_three_messages_with_delay(channel_name, channel_layer, delay) ) await asyncio.wait_for(task, None) # the first message should have expired, we should only see the second message and the third message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Third!" # Make sure there's no third message even out of order with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name) @pytest.mark.asyncio async def test_message_expiry__all_messages_under_expiration_time(channel_layer): expiry = 3 delay = 1 channel_layer = RedisChannelLayer(expiry=expiry) channel_name = await channel_layer.new_channel() task = asyncio.ensure_future( send_three_messages_with_delay(channel_name, channel_layer, delay) ) await asyncio.wait_for(task, None) # expiry = 3, total delay under 3, all messages there message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "First!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Third!" @pytest.mark.asyncio async def test_message_expiry__group_send(channel_layer): expiry = 3 delay = 2 channel_layer = RedisChannelLayer(expiry=expiry) channel_name = await channel_layer.new_channel() await channel_layer.group_add("test-group", channel_name) task = asyncio.ensure_future( group_send_three_messages_with_delay("test-group", channel_layer, delay) ) await asyncio.wait_for(task, None) # the first message should have expired, we should only see the second message and the third message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Second!" 
message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Third!" # Make sure there's no third message even out of order with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name) @pytest.mark.xfail(reason="Fails with timeout. Refs: #348") @pytest.mark.asyncio async def test_message_expiry__group_send__one_channel_expires_message(channel_layer): expiry = 3 delay = 1 channel_layer = RedisChannelLayer(expiry=expiry) channel_1 = await channel_layer.new_channel() channel_2 = await channel_layer.new_channel(prefix="channel_2") await channel_layer.group_add("test-group", channel_1) await channel_layer.group_add("test-group", channel_2) # Let's give channel_1 one additional message and then sleep await channel_layer.send(channel_1, {"type": "test.message", "text": "Zero!"}) await asyncio.sleep(2) task = asyncio.ensure_future( group_send_three_messages_with_delay("test-group", channel_layer, delay) ) await asyncio.wait_for(task, None) # message Zero! was sent about 2 + 1 + 1 seconds ago and it should have expired message = await channel_layer.receive(channel_1) assert message["type"] == "test.message" assert message["text"] == "First!" message = await channel_layer.receive(channel_1) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_1) assert message["type"] == "test.message" assert message["text"] == "Third!" # Make sure there's no fourth message even out of order with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_1) # channel_2 should receive all three messages from group_send message = await channel_layer.receive(channel_2) assert message["type"] == "test.message" assert message["text"] == "First!" # the first message should have expired, we should only see the second message and the third message = await channel_layer.receive(channel_2) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_2) assert message["type"] == "test.message" assert message["text"] == "Third!" 
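# Illustrative sketch (the names here are examples, not part of the suite): the two tests below pin the group key layout "<prefix>:group:<group_name>", e.g. RedisChannelLayer(prefix="myapp")._group_key("room") == b"myapp:group:room".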
def test_default_group_key_format(): channel_layer = RedisChannelLayer() group_name = channel_layer._group_key("test_group") assert group_name == b"asgi:group:test_group" def test_custom_group_key_format(): channel_layer = RedisChannelLayer(prefix="test_prefix") group_name = channel_layer._group_key("test_group") assert group_name == b"test_prefix:group:test_group" def test_receive_buffer_respects_capacity(): channel_layer = RedisChannelLayer() buff = channel_layer.receive_buffer["test-group"] for i in range(10000): buff.put_nowait(i) capacity = 100 assert channel_layer.capacity == capacity assert buff.full() is True assert buff.qsize() == capacity messages = [buff.get_nowait() for _ in range(capacity)] assert list(range(9900, 10000)) == messages def test_serialize(): """ Test default serialization method """ message = {"a": True, "b": None, "c": {"d": []}} channel_layer = RedisChannelLayer() serialized = channel_layer.serialize(message) assert isinstance(serialized, bytes) assert serialized[12:] == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" def test_deserialize(): """ Test default deserialization method """ message = b"Q\x0c\xbb?Q\xbc\xe3|D\xfd9\x00\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" channel_layer = RedisChannelLayer() deserialized = channel_layer.deserialize(message) assert isinstance(deserialized, dict) assert deserialized == {"a": True, "b": None, "c": {"d": []}} channels_redis-4.2.0/tests/test_pubsub.py000066400000000000000000000231451455024562100205620ustar00rootroot00000000000000import asyncio import inspect import random import sys import async_timeout import pytest from asgiref.sync import async_to_sync from channels_redis.pubsub import RedisPubSubChannelLayer from channels_redis.utils import _close_redis TEST_HOSTS = ["redis://localhost:6379"] @pytest.fixture() async def channel_layer(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) yield channel_layer async with async_timeout.timeout(1): await channel_layer.flush() @pytest.fixture() async def other_channel_layer(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) yield channel_layer await channel_layer.flush() def test_layer_close(): """ If the channel layer does not close properly there will be a "Task was destroyed but it is pending!" warning at process exit. """ async def do_something_with_layer(): channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) await channel_layer.send( "TestChannel", {"type": "test.message", "text": "Ahoy-hoy!"} ) async_to_sync(do_something_with_layer)() @pytest.mark.asyncio async def test_send_receive(channel_layer): """ Makes sure we can send a message to a normal channel then receive it. """ channel = await channel_layer.new_channel() await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"}) message = await channel_layer.receive(channel) assert message["type"] == "test.message" assert message["text"] == "Ahoy-hoy!" def test_send_receive_sync(channel_layer, event_loop): _await = event_loop.run_until_complete channel = _await(channel_layer.new_channel()) async_to_sync(channel_layer.send, force_new_loop=True)( channel, {"type": "test.message", "text": "Ahoy-hoy!"} ) message = _await(channel_layer.receive(channel)) assert message["type"] == "test.message" assert message["text"] == "Ahoy-hoy!" @pytest.mark.asyncio async def test_multi_send_receive(channel_layer): """ Tests overlapping sends and receives, and ordering. 
""" channel = await channel_layer.new_channel() await channel_layer.send(channel, {"type": "message.1"}) await channel_layer.send(channel, {"type": "message.2"}) await channel_layer.send(channel, {"type": "message.3"}) assert (await channel_layer.receive(channel))["type"] == "message.1" assert (await channel_layer.receive(channel))["type"] == "message.2" assert (await channel_layer.receive(channel))["type"] == "message.3" def test_multi_send_receive_sync(channel_layer, event_loop): _await = event_loop.run_until_complete channel = _await(channel_layer.new_channel()) send = async_to_sync(channel_layer.send) send(channel, {"type": "message.1"}) send(channel, {"type": "message.2"}) send(channel, {"type": "message.3"}) assert _await(channel_layer.receive(channel))["type"] == "message.1" assert _await(channel_layer.receive(channel))["type"] == "message.2" assert _await(channel_layer.receive(channel))["type"] == "message.3" @pytest.mark.asyncio async def test_groups_basic(channel_layer): """ Tests basic group operation. """ channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" # Make sure the removed channel did not get the message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) @pytest.mark.asyncio async def test_groups_same_prefix(channel_layer): """ Tests group_send with multiple channels with same channel prefix """ channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name2))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" @pytest.mark.asyncio async def test_receive_on_non_owned_general_channel(channel_layer, other_channel_layer): """ Tests receive with general channel that is not owned by the layer """ receive_started = asyncio.Event() async def receive(): receive_started.set() return await other_channel_layer.receive("test-channel") receive_task = asyncio.create_task(receive()) await receive_started.wait() await asyncio.sleep(0.1) # Need to give time for "receive" to subscribe await channel_layer.send("test-channel", "message.1") try: # Make sure we get the message on the channels that were in async with async_timeout.timeout(1): assert await 
receive_task == "message.1" finally: receive_task.cancel() @pytest.mark.asyncio async def test_random_reset__channel_name(channel_layer): """ Makes sure resetting random seed does not make us reuse channel names. """ random.seed(1) channel_name_1 = await channel_layer.new_channel() random.seed(1) channel_name_2 = await channel_layer.new_channel() assert channel_name_1 != channel_name_2 @pytest.mark.asyncio async def test_loop_instance_channel_layer_reference(channel_layer): redis_pub_sub_loop_layer = channel_layer._get_layer() assert redis_pub_sub_loop_layer.channel_layer == channel_layer def test_serialize(channel_layer): """ Test default serialization method """ message = {"a": True, "b": None, "c": {"d": []}} serialized = channel_layer.serialize(message) assert isinstance(serialized, bytes) assert serialized == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" def test_deserialize(channel_layer): """ Test default deserialization method """ message = b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" deserialized = channel_layer.deserialize(message) assert isinstance(deserialized, dict) assert deserialized == {"a": True, "b": None, "c": {"d": []}} def test_multi_event_loop_garbage_collection(channel_layer): """ Test loop closure layer flushing and garbage collection """ assert len(channel_layer._layers.values()) == 0 async_to_sync(test_send_receive)(channel_layer) assert len(channel_layer._layers.values()) == 0 @pytest.mark.asyncio async def test_proxied_methods_coroutine_check(channel_layer): # inspect.iscoroutinefunction does not work for partial functions # below Python 3.8. if sys.version_info >= (3, 8): assert inspect.iscoroutinefunction(channel_layer.send) @pytest.mark.asyncio async def test_receive_hang(channel_layer): channel_name = await channel_layer.new_channel(prefix="test-channel") with pytest.raises(asyncio.TimeoutError): await asyncio.wait_for(channel_layer.receive(channel_name), timeout=1) @pytest.mark.asyncio async def test_auto_reconnect(channel_layer): """ Tests redis-py reconnect and resubscribe """ channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await _close_redis(channel_layer._shards[0]._redis) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await _close_redis(channel_layer._shards[0]._redis) await asyncio.sleep(1) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(5): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" # Make sure the removed channel did not get the message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) @pytest.mark.asyncio async def test_discard_before_add(channel_layer): channel_name = await channel_layer.new_channel(prefix="test-channel") # Make sure that we can remove a group before it was ever added without crashing. 
await channel_layer.group_discard("test-group", channel_name) channels_redis-4.2.0/tests/test_pubsub_sentinel.py000066400000000000000000000174031455024562100224630ustar00rootroot00000000000000import asyncio import random import async_timeout import pytest from asgiref.sync import async_to_sync from channels_redis.pubsub import RedisPubSubChannelLayer from channels_redis.utils import _close_redis SENTINEL_MASTER = "sentinel" SENTINEL_KWARGS = {"password": "channels_redis"} TEST_HOSTS = [ { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, } ] @pytest.fixture() async def channel_layer(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) yield channel_layer async with async_timeout.timeout(1): await channel_layer.flush() @pytest.mark.asyncio async def test_send_receive(channel_layer): """ Makes sure we can send a message to a normal channel then receive it. """ channel = await channel_layer.new_channel() await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"}) message = await channel_layer.receive(channel) assert message["type"] == "test.message" assert message["text"] == "Ahoy-hoy!" def test_send_receive_sync(channel_layer, event_loop): _await = event_loop.run_until_complete channel = _await(channel_layer.new_channel()) async_to_sync(channel_layer.send, force_new_loop=True)( channel, {"type": "test.message", "text": "Ahoy-hoy!"} ) message = _await(channel_layer.receive(channel)) assert message["type"] == "test.message" assert message["text"] == "Ahoy-hoy!" @pytest.mark.asyncio async def test_multi_send_receive(channel_layer): """ Tests overlapping sends and receives, and ordering. """ channel = await channel_layer.new_channel() await channel_layer.send(channel, {"type": "message.1"}) await channel_layer.send(channel, {"type": "message.2"}) await channel_layer.send(channel, {"type": "message.3"}) assert (await channel_layer.receive(channel))["type"] == "message.1" assert (await channel_layer.receive(channel))["type"] == "message.2" assert (await channel_layer.receive(channel))["type"] == "message.3" def test_multi_send_receive_sync(channel_layer, event_loop): _await = event_loop.run_until_complete channel = _await(channel_layer.new_channel()) send = async_to_sync(channel_layer.send) send(channel, {"type": "message.1"}) send(channel, {"type": "message.2"}) send(channel, {"type": "message.3"}) assert _await(channel_layer.receive(channel))["type"] == "message.1" assert _await(channel_layer.receive(channel))["type"] == "message.2" assert _await(channel_layer.receive(channel))["type"] == "message.3" @pytest.mark.asyncio async def test_groups_basic(channel_layer): """ Tests basic group operation. 
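Same flow as the plain-Redis pub/sub test, but here every connection is resolved through the Sentinel master configured in TEST_HOSTS.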
""" channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" # Make sure the removed channel did not get the message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) @pytest.mark.asyncio async def test_groups_same_prefix(channel_layer): """ Tests group_send with multiple channels with same channel prefix """ channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name2))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" @pytest.mark.asyncio async def test_random_reset__channel_name(channel_layer): """ Makes sure resetting random seed does not make us reuse channel names. 
""" random.seed(1) channel_name_1 = await channel_layer.new_channel() random.seed(1) channel_name_2 = await channel_layer.new_channel() assert channel_name_1 != channel_name_2 @pytest.mark.asyncio async def test_loop_instance_channel_layer_reference(channel_layer): redis_pub_sub_loop_layer = channel_layer._get_layer() assert redis_pub_sub_loop_layer.channel_layer == channel_layer def test_serialize(channel_layer): """ Test default serialization method """ message = {"a": True, "b": None, "c": {"d": []}} serialized = channel_layer.serialize(message) assert isinstance(serialized, bytes) assert serialized == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" def test_deserialize(channel_layer): """ Test default deserialization method """ message = b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" deserialized = channel_layer.deserialize(message) assert isinstance(deserialized, dict) assert deserialized == {"a": True, "b": None, "c": {"d": []}} def test_multi_event_loop_garbage_collection(channel_layer): """ Test loop closure layer flushing and garbage collection """ assert len(channel_layer._layers.values()) == 0 async_to_sync(test_send_receive)(channel_layer) assert len(channel_layer._layers.values()) == 0 @pytest.mark.asyncio async def test_receive_hang(channel_layer): channel_name = await channel_layer.new_channel(prefix="test-channel") with pytest.raises(asyncio.TimeoutError): await asyncio.wait_for(channel_layer.receive(channel_name), timeout=1) @pytest.mark.asyncio async def test_auto_reconnect(channel_layer): """ Tests redis-py reconnect and resubscribe """ channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await _close_redis(channel_layer._shards[0]._redis) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await _close_redis(channel_layer._shards[0]._redis) await asyncio.sleep(1) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(5): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" # Make sure the removed channel did not get the message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) channels_redis-4.2.0/tests/test_sentinel.py000066400000000000000000000604371455024562100211100ustar00rootroot00000000000000import asyncio import random import async_timeout import pytest from asgiref.sync import async_to_sync from channels_redis.core import ChannelFull, RedisChannelLayer SENTINEL_MASTER = "sentinel" SENTINEL_KWARGS = {"password": "channels_redis"} TEST_HOSTS = [ { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, } ] MULTIPLE_TEST_HOSTS = [ { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 0, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 1, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 2, }, { "sentinels": 
[("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 3, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 4, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 5, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 6, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 7, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 8, }, { "sentinels": [("localhost", 26379)], "master_name": SENTINEL_MASTER, "sentinel_kwargs": SENTINEL_KWARGS, "db": 9, }, ] async def send_three_messages_with_delay(channel_name, channel_layer, delay): await channel_layer.send(channel_name, {"type": "test.message", "text": "First!"}) await asyncio.sleep(delay) await channel_layer.send(channel_name, {"type": "test.message", "text": "Second!"}) await asyncio.sleep(delay) await channel_layer.send(channel_name, {"type": "test.message", "text": "Third!"}) async def group_send_three_messages_with_delay(group_name, channel_layer, delay): await channel_layer.group_send( group_name, {"type": "test.message", "text": "First!"} ) await asyncio.sleep(delay) await channel_layer.group_send( group_name, {"type": "test.message", "text": "Second!"} ) await asyncio.sleep(delay) await channel_layer.group_send( group_name, {"type": "test.message", "text": "Third!"} ) @pytest.fixture() async def channel_layer(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisChannelLayer( hosts=TEST_HOSTS, capacity=3, channel_capacity={"tiny": 1} ) yield channel_layer await channel_layer.flush() @pytest.fixture() async def channel_layer_multiple_hosts(): """ Channel layer fixture that flushes automatically. """ channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=3) yield channel_layer await channel_layer.flush() @pytest.mark.asyncio async def test_send_receive(channel_layer): """ Makes sure we can send a message to a normal channel then receive it. """ await channel_layer.send( "test-channel-1", {"type": "test.message", "text": "Ahoy-hoy!"} ) message = await channel_layer.receive("test-channel-1") assert message["type"] == "test.message" assert message["text"] == "Ahoy-hoy!" @pytest.mark.parametrize("channel_layer", [None]) # Fixture can't handle sync def test_double_receive(channel_layer): """ Makes sure we can receive from two different event loops using process-local channel names. """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS, capacity=3) # Aioredis connections can't be used from different event loops, so # send and close need to be done in the same async_to_sync call. 
async def send_and_close(*args, **kwargs): await channel_layer.send(*args, **kwargs) await channel_layer.close_pools() channel_name_1 = async_to_sync(channel_layer.new_channel)() channel_name_2 = async_to_sync(channel_layer.new_channel)() async_to_sync(send_and_close)(channel_name_1, {"type": "test.message.1"}) async_to_sync(send_and_close)(channel_name_2, {"type": "test.message.2"}) # Make things to listen on the loops async def listen1(): message = await channel_layer.receive(channel_name_1) assert message["type"] == "test.message.1" await channel_layer.close_pools() async def listen2(): message = await channel_layer.receive(channel_name_2) assert message["type"] == "test.message.2" await channel_layer.close_pools() # Run them inside threads async_to_sync(listen2)() async_to_sync(listen1)() # Clean up async_to_sync(channel_layer.flush)() @pytest.mark.asyncio async def test_send_capacity(channel_layer): """ Makes sure we get ChannelFull when we hit the send capacity """ await channel_layer.send("test-channel-1", {"type": "test.message"}) await channel_layer.send("test-channel-1", {"type": "test.message"}) await channel_layer.send("test-channel-1", {"type": "test.message"}) with pytest.raises(ChannelFull): await channel_layer.send("test-channel-1", {"type": "test.message"}) @pytest.mark.asyncio async def test_send_specific_capacity(channel_layer): """ Makes sure we get ChannelFull when we hit the send capacity on a specific channel """ custom_channel_layer = RedisChannelLayer( hosts=TEST_HOSTS, capacity=3, channel_capacity={"one": 1}, ) await custom_channel_layer.send("one", {"type": "test.message"}) with pytest.raises(ChannelFull): await custom_channel_layer.send("one", {"type": "test.message"}) await custom_channel_layer.flush() @pytest.mark.asyncio async def test_process_local_send_receive(channel_layer): """ Makes sure we can send a message to a process-local channel then receive it. """ channel_name = await channel_layer.new_channel() await channel_layer.send( channel_name, {"type": "test.message", "text": "Local only please"} ) message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Local only please" @pytest.mark.asyncio async def test_multi_send_receive(channel_layer): """ Tests overlapping sends and receives, and ordering. """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) await channel_layer.send("test-channel-3", {"type": "message.1"}) await channel_layer.send("test-channel-3", {"type": "message.2"}) await channel_layer.send("test-channel-3", {"type": "message.3"}) assert (await channel_layer.receive("test-channel-3"))["type"] == "message.1" assert (await channel_layer.receive("test-channel-3"))["type"] == "message.2" assert (await channel_layer.receive("test-channel-3"))["type"] == "message.3" await channel_layer.flush() @pytest.mark.asyncio async def test_reject_bad_channel(channel_layer): """ Makes sure sending/receiving on an invalid channel name fails. """ with pytest.raises(TypeError): await channel_layer.send("=+135!", {"type": "foom"}) with pytest.raises(TypeError): await channel_layer.receive("=+135!") @pytest.mark.asyncio async def test_reject_bad_client_prefix(channel_layer): """ Makes sure receiving on a non-prefixed local channel is not allowed. """ with pytest.raises(AssertionError): await channel_layer.receive("not-client-prefix!local_part") @pytest.mark.asyncio async def test_groups_basic(channel_layer): """ Tests basic group operation.
""" channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" # Make sure the removed channel did not get the message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) await channel_layer.flush() @pytest.mark.asyncio async def test_groups_channel_full(channel_layer): """ Tests that group_send ignores ChannelFull """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) await channel_layer.group_add("test-group", "test-gr-chan-1") await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.flush() @pytest.mark.asyncio async def test_groups_multiple_hosts(channel_layer_multiple_hosts): """ Tests advanced group operation with multiple hosts. 
""" channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100) channel_name1 = await channel_layer.new_channel(prefix="channel1") channel_name2 = await channel_layer.new_channel(prefix="channel2") channel_name3 = await channel_layer.new_channel(prefix="channel3") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_discard("test-group", channel_name2) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the two channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name2) await channel_layer.flush() @pytest.mark.asyncio async def test_groups_same_prefix(channel_layer): """ Tests group_send with multiple channels with same channel prefix """ channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan") channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan") await channel_layer.group_add("test-group", channel_name1) await channel_layer.group_add("test-group", channel_name2) await channel_layer.group_add("test-group", channel_name3) await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message on the channels that were in async with async_timeout.timeout(1): assert (await channel_layer.receive(channel_name1))["type"] == "message.1" assert (await channel_layer.receive(channel_name2))["type"] == "message.1" assert (await channel_layer.receive(channel_name3))["type"] == "message.1" await channel_layer.flush() @pytest.mark.parametrize( "num_channels,timeout", [ (1, 1), # Edge cases - make sure we can send to a single channel (10, 1), (100, 10), ], ) @pytest.mark.asyncio async def test_groups_multiple_hosts_performance( channel_layer_multiple_hosts, num_channels, timeout ): """ Tests advanced group operation: can send efficiently to multiple channels with multiple hosts within a certain timeout """ channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100) channels = [] for i in range(0, num_channels): channel = await channel_layer.new_channel(prefix="channel%s" % i) await channel_layer.group_add("test-group", channel) channels.append(channel) async with async_timeout.timeout(timeout): await channel_layer.group_send("test-group", {"type": "message.1"}) # Make sure we get the message all the channels async with async_timeout.timeout(timeout): for channel in channels: assert (await channel_layer.receive(channel))["type"] == "message.1" await channel_layer.flush() @pytest.mark.asyncio async def test_group_send_capacity(channel_layer, caplog): """ Makes sure we dont group_send messages to channels that are over capacity. Make sure number of channels with full capacity are logged as an exception to help debug errors. 
""" channel = await channel_layer.new_channel() await channel_layer.group_add("test-group", channel) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.2"}) await channel_layer.group_send("test-group", {"type": "message.3"}) await channel_layer.group_send("test-group", {"type": "message.4"}) # We should receive the first 3 messages assert (await channel_layer.receive(channel))["type"] == "message.1" assert (await channel_layer.receive(channel))["type"] == "message.2" assert (await channel_layer.receive(channel))["type"] == "message.3" # Make sure we do NOT receive message 4 with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel) # Make sure number of channels over capacity are logged for record in caplog.records: assert record.levelname == "INFO" assert ( record.getMessage() == "1 of 1 channels over capacity in group test-group" ) @pytest.mark.asyncio async def test_group_send_capacity_multiple_channels(channel_layer, caplog): """ Makes sure we dont group_send messages to channels that are over capacity Make sure number of channels with full capacity are logged as an exception to help debug errors. """ channel_1 = await channel_layer.new_channel() channel_2 = await channel_layer.new_channel(prefix="channel_2") await channel_layer.group_add("test-group", channel_1) await channel_layer.group_add("test-group", channel_2) # Let's put channel_2 over capacity await channel_layer.send(channel_2, {"type": "message.0"}) await channel_layer.group_send("test-group", {"type": "message.1"}) await channel_layer.group_send("test-group", {"type": "message.2"}) await channel_layer.group_send("test-group", {"type": "message.3"}) # Channel_1 should receive all 3 group messages assert (await channel_layer.receive(channel_1))["type"] == "message.1" assert (await channel_layer.receive(channel_1))["type"] == "message.2" assert (await channel_layer.receive(channel_1))["type"] == "message.3" # Channel_2 should receive the first message + 2 group messages assert (await channel_layer.receive(channel_2))["type"] == "message.0" assert (await channel_layer.receive(channel_2))["type"] == "message.1" assert (await channel_layer.receive(channel_2))["type"] == "message.2" # Make sure channel_2 does not receive the 3rd group message with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_2) # Make sure number of channels over capacity are logged for record in caplog.records: assert record.levelname == "INFO" assert ( record.getMessage() == "1 of 2 channels over capacity in group test-group" ) @pytest.mark.xfail( reason=""" Fails with error in redis-py: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'. Refs: #348 """ ) @pytest.mark.asyncio async def test_receive_cancel(channel_layer): """ Makes sure we can cancel a receive without blocking """ channel_layer = RedisChannelLayer(capacity=30) channel = await channel_layer.new_channel() delay = 0 while delay < 0.01: await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"}) task = asyncio.ensure_future(channel_layer.receive(channel)) await asyncio.sleep(delay) task.cancel() delay += 0.0001 try: await asyncio.wait_for(task, None) except asyncio.CancelledError: pass @pytest.mark.asyncio async def test_random_reset__channel_name(channel_layer): """ Makes sure resetting random seed does not make us reuse channel names. 
""" channel_layer = RedisChannelLayer() random.seed(1) channel_name_1 = await channel_layer.new_channel() random.seed(1) channel_name_2 = await channel_layer.new_channel() assert channel_name_1 != channel_name_2 @pytest.mark.asyncio async def test_random_reset__client_prefix(channel_layer): """ Makes sure resetting random seed does not make us reuse client_prefixes. """ random.seed(1) channel_layer_1 = RedisChannelLayer() random.seed(1) channel_layer_2 = RedisChannelLayer() assert channel_layer_1.client_prefix != channel_layer_2.client_prefix @pytest.mark.asyncio async def test_message_expiry__earliest_message_expires(channel_layer): expiry = 3 delay = 2 channel_layer = RedisChannelLayer(expiry=expiry) channel_name = await channel_layer.new_channel() task = asyncio.ensure_future( send_three_messages_with_delay(channel_name, channel_layer, delay) ) await asyncio.wait_for(task, None) # the first message should have expired, we should only see the second message and the third message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Third!" # Make sure there's no third message even out of order with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name) @pytest.mark.asyncio async def test_message_expiry__all_messages_under_expiration_time(channel_layer): expiry = 3 delay = 1 channel_layer = RedisChannelLayer(expiry=expiry) channel_name = await channel_layer.new_channel() task = asyncio.ensure_future( send_three_messages_with_delay(channel_name, channel_layer, delay) ) await asyncio.wait_for(task, None) # expiry = 3, total delay under 3, all messages there message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "First!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Third!" @pytest.mark.asyncio async def test_message_expiry__group_send(channel_layer): expiry = 3 delay = 2 channel_layer = RedisChannelLayer(expiry=expiry) channel_name = await channel_layer.new_channel() await channel_layer.group_add("test-group", channel_name) task = asyncio.ensure_future( group_send_three_messages_with_delay("test-group", channel_layer, delay) ) await asyncio.wait_for(task, None) # the first message should have expired, we should only see the second message and the third message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_name) assert message["type"] == "test.message" assert message["text"] == "Third!" # Make sure there's no third message even out of order with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_name) @pytest.mark.xfail(reason="Fails with timeout. 
Refs: #348") @pytest.mark.asyncio async def test_message_expiry__group_send__one_channel_expires_message(channel_layer): expiry = 3 delay = 1 channel_layer = RedisChannelLayer(expiry=expiry) channel_1 = await channel_layer.new_channel() channel_2 = await channel_layer.new_channel(prefix="channel_2") await channel_layer.group_add("test-group", channel_1) await channel_layer.group_add("test-group", channel_2) # Let's give channel_1 one additional message and then sleep await channel_layer.send(channel_1, {"type": "test.message", "text": "Zero!"}) await asyncio.sleep(2) task = asyncio.ensure_future( group_send_three_messages_with_delay("test-group", channel_layer, delay) ) await asyncio.wait_for(task, None) # message Zero! was sent about 2 + 1 + 1 seconds ago and it should have expired message = await channel_layer.receive(channel_1) assert message["type"] == "test.message" assert message["text"] == "First!" message = await channel_layer.receive(channel_1) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_1) assert message["type"] == "test.message" assert message["text"] == "Third!" # Make sure there's no fourth message even out of order with pytest.raises(asyncio.TimeoutError): async with async_timeout.timeout(1): await channel_layer.receive(channel_1) # channel_2 should receive all three messages from group_send message = await channel_layer.receive(channel_2) assert message["type"] == "test.message" assert message["text"] == "First!" # the first message should have expired, we should only see the second message and the third message = await channel_layer.receive(channel_2) assert message["type"] == "test.message" assert message["text"] == "Second!" message = await channel_layer.receive(channel_2) assert message["type"] == "test.message" assert message["text"] == "Third!" 


def test_default_group_key_format():
    channel_layer = RedisChannelLayer()
    group_name = channel_layer._group_key("test_group")
    assert group_name == b"asgi:group:test_group"


def test_custom_group_key_format():
    channel_layer = RedisChannelLayer(prefix="test_prefix")
    group_name = channel_layer._group_key("test_group")
    assert group_name == b"test_prefix:group:test_group"


def test_receive_buffer_respects_capacity():
    channel_layer = RedisChannelLayer()
    buff = channel_layer.receive_buffer["test-group"]
    for i in range(10000):
        buff.put_nowait(i)

    capacity = 100
    assert channel_layer.capacity == capacity
    assert buff.full() is True
    assert buff.qsize() == capacity

    # The buffer keeps the newest `capacity` items, dropping the oldest
    messages = [buff.get_nowait() for _ in range(capacity)]
    assert list(range(9900, 10000)) == messages


def test_serialize():
    """
    Test default serialization method
    """
    message = {"a": True, "b": None, "c": {"d": []}}
    channel_layer = RedisChannelLayer()
    serialized = channel_layer.serialize(message)
    assert isinstance(serialized, bytes)
    # The first 12 bytes are a random prefix; the rest is the msgpack payload
    assert serialized[12:] == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"


def test_deserialize():
    """
    Test default deserialization method
    """
    message = b"Q\x0c\xbb?Q\xbc\xe3|D\xfd9\x00\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"
    channel_layer = RedisChannelLayer()
    deserialized = channel_layer.deserialize(message)
    assert isinstance(deserialized, dict)
    assert deserialized == {"a": True, "b": None, "c": {"d": []}}
channels_redis-4.2.0/tests/test_utils.py000066400000000000000000000007411455024562100204170ustar00rootroot00000000000000import pytest

from channels_redis.utils import _consistent_hash


@pytest.mark.parametrize(
    "value,ring_size,expected",
    [
        ("key_one", 1, 0),
        ("key_two", 1, 0),
        ("key_one", 2, 1),
        ("key_two", 2, 0),
        ("key_one", 10, 6),
        ("key_two", 10, 4),
        (b"key_one", 10, 6),
        (b"key_two", 10, 4),
    ],
)
def test_consistent_hash_result(value, ring_size, expected):
    assert _consistent_hash(value, ring_size) == expected
channels_redis-4.2.0/tox.ini000066400000000000000000000012151455024562100160140ustar00rootroot00000000000000[tox]
envlist =
    py{38,39,310,311,312}-ch{30,40,main}-redis50
    py311-chmain-redis{46,50,main}
    qa

[testenv]
usedevelop = true
extras = tests
commands =
    pytest -v {posargs}
deps =
    ch30: channels>=3.0,<3.1
    ch40: channels>=4.0,<4.1
    chmain: https://github.com/django/channels/archive/main.tar.gz
    redis46: redis>=4.6,<4.7
    redis50: redis>=5.0,<5.1
    redismain: https://github.com/redis/redis-py/archive/master.tar.gz

[testenv:qa]
skip_install = true
deps =
    black
    flake8
    isort
commands =
    flake8 channels_redis tests
    black --check channels_redis tests
    isort --check-only --diff channels_redis tests
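
# Example: to run a single environment locally, forwarding extra arguments to
# pytest via {posargs}. The env id and test name below are illustrative; any
# env id generated from the envlist above works the same way:
#
#   tox -e py311-ch40-redis50 -- -k test_group_send_capacity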