channels-2.4.0/.coveragerc
[run]
branch = True
source = channels
omit = tests/*
[report]
show_missing = True
skip_covered = True
omit = tests/*
[html]
directory = coverage_html
channels-2.4.0/.github/CODEOWNERS
# This is a comment.
# Each line is a file pattern followed by one or more owners.
# These owners will be the default owners for everything in
# the repo. Unless a later match takes precedence,
# @global-owner1 and @global-owner2 will be requested for
# review when someone opens a pull request.
* @carltongibson
channels-2.4.0/.github/ISSUE_TEMPLATE.md
Issues are for **concrete, actionable bugs and feature requests** only - if you're asking for debugging help or technical support, we have to direct you elsewhere. If you have questions or support requests, please use:
- Stack Overflow
- The Django Users mailing list django-users@googlegroups.com (https://groups.google.com/forum/#!forum/django-users)
We have to limit the issue tracker this way because volunteer time to respond to issues is limited!
Please also try and include, if you can:
- Your OS and runtime environment, and browser if applicable
- A `pip freeze` output showing your package versions
- What you expected to happen vs. what actually happened
- How you're running Channels (runserver? daphne/runworker? Nginx/Apache in front?)
- Console logs and full tracebacks of any errors
channels-2.4.0/.gitignore
*.egg-info
dist/
build/
docs/_build
__pycache__/
.cache
*.sqlite3
.tox/
*.swp
*.pyc
.coverage*
.pytest_cache
TODO
node_modules
# Pipenv
Pipfile
Pipfile.lock
# IDE and Tooling files
.idea/*
*~
# macOS
.DS_Store
channels-2.4.0/.travis.yml
dist: xenial
language: python

python:
  - "3.8"
  - "3.7"
  - "3.6"

env:
  - DJANGO="Django~=2.2.8"
  - DJANGO="Django==3.0.*"

install:
  - pip install -U pip wheel setuptools
  - pip install $DJANGO -e .[tests]
  - pip freeze

script:
  - pytest

stages:
  - test
  - lint
  - name: release
    if: tag IS present

jobs:
  include:
    - python: "3.5"
      env: DJANGO="Django==2.2.*"
    - stage: lint
      install: pip install -U -e .[tests] black pyflakes isort
      script:
        - pyflakes channels tests
        - black --check --diff channels tests
        - isort --check-only --diff --recursive channels tests
    - stage: release
      script: skip
      deploy:
        provider: pypi
        user: andrewgodwin_bot
        on:
          tags: true
        distributions: sdist bdist_wheel
        password:
          secure: JQrDaSHhyrQZE4FFzGF2WDWIVlZOXP9xoGtrXSaSp+neWpr2a83UmAjEQYGFLX5aXvVBpaILL5fOfMnbnIBzArEC01RTTWm1iDuQkpvYU6DgbTsLnCl5g9V+Z3G+8tGSy3lGhFiViLBIi5Xi6PrwlBeK/LW1b9Ja8jsZdLWYHloFtyy5HkYodFOxYdpiXLeva9YXWKPnU9lKLLrubcRtbnvyAn9WY2Wn09Cy/6xcGqX0lPebf3gaXYMFQ9thDoZPgSqrxYYK57LA0EdMjSyxrs5GVGSVoNhnsAJn6JFzHwvGeuzjs/92bNUAcTM+oe6cZWHPiECgINKNFLTxH9x0sNanw2walNnw+9X9caqYnzrReiwKJoTiXd3fZSgFV5WplVMARw9kSvSnNclzTiToTNXrRmL5ATMgv5qixRctoDRaYgz/6i1mGt7ey5qyq0qEViBb9hB2NBwF3kVbJG8evdXqw8bcBQ1OCfXKLV0cWvRpdbjZ6hdCcJPgptztyGnlGwX8/eeAxrCYkQy8FmOmEV8c00mMUaoRfDqXxtqyii2UdwRL10Du6V47jjYxCGXN5bfMGXVi/xNdmw1gbZx5fhf5rMhGvux9mxGZVtU4Q7I5jEhQ/H+IwaoEjUA7e5qDfWB+hqMhnsineQb12ZZ//iKVo2x2eTTn6yJ0hVn8QP4=
channels-2.4.0/CHANGELOG.txt
Full release notes, with more details and upgrade information, are available at:
https://channels.readthedocs.io/en/latest/releases
2.4.0 (2019-12-18)
------------------
* Wraps session save calls in ``database_sync_to_async()``, for compatibility
with Django 3.0's ``async_unsafe()`` checks.
* Drops compatibility with all Django versions lower than 2.2.
2.3.1 (2019-10-23)
------------------
* Adds compatibility with Python 3.8.
2.3.0 (2019-09-18)
------------------
* Adjusted ``AsgiHandler`` HTTP body handling to use a spooled temporary file,
rather than reading the whole request body into memory.
As a result, ``AsgiRequest.__init__()`` is adjusted to expect a file-like
``stream``, rather than the whole ``body`` as bytes. Test cases instantiating
requests directly will likely need to be updated to wrap the provided body
in, e.g., ``io.BytesIO``.
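The test-case adjustment described above can be sketched with the standard library alone; ``AsgiRequest`` itself is part of Channels and is omitted here, so only the stream-wrapping step is shown:

```python
import io

# A test that previously passed the raw body as bytes now wraps it in a
# file-like object before handing it to AsgiRequest (shown conceptually).
body = b'{"hello": "world"}'
stream = io.BytesIO(body)

# A file-like stream can be read incrementally (or spooled to disk for
# large bodies) instead of holding everything in memory at once.
assert stream.read(9) == b'{"hello":'
assert stream.read() == b' "world"}'
```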
2.2.0 (2019-04-14)
------------------
* Updated requirements for ASGI v3 and Daphne 2.3.
2.1.7 (2019-01-31)
------------------
* HTTP request body size limit is now enforced
* database_sync_to_async now closes old connections before it runs code
* Auth middleware closes old connections before it runs
2.1.6 (2018-12-08)
------------------
* HttpCommunicator now extracts query strings correctly
* AsyncHttpConsumer provides channel layer attributes
* Prevent late-Daphne import errors
2.1.5 (2018-10-22)
------------------
* Django middleware caching now works on Django 1.11 and Django 2.0.
The previous release only ran on 2.1.
2.1.4 (2018-10-19)
------------------
* Django middleware is now cached rather than instantiated per request
resulting in a significant speed improvement
* ChannelServerLiveTestCase now serves static files again
* Improved error message resulting from bad Origin headers
* runserver logging now goes through the Django logging framework
* Generic consumers can now have non-default channel layers
* Improved error when accessing scope['user'] before it's ready
2.1.3 (2018-08-16)
------------------
* An ALLOWED_ORIGINS value of "*" will now also allow requests without a Host
header at all (especially important for tests)
* The request.path value is now correct in cases when a server has SCRIPT_NAME
set
* Errors that happen inside channel listeners inside a runworker or Worker
class are now raised rather than suppressed
2.1.2 (2018-06-13)
------------------
* AsyncHttpConsumer now has a disconnect() method you can override
* Session and authentication middleware is now non-blocking.
* URL routing context now includes default arguments from the URLconf.
* The FORCE_SCRIPT_NAME setting is now respected in ASGI mode.
* ALLOWED_HOSTS is now set correctly during LiveServerTests.
2.1.1 (2018-04-18)
------------------
* The scope["user"] object is no longer a lazy object, as this conflicts with
any async-based consumers.
2.1.0 (2018-04-11)
------------------
* Async HTTP Consumers and WebSocket Consumers both gained new functionality
(groups, subprotocols, and an async HTTP variant)
* URLRouters now allow nesting
* Async login and logout functions for sessions
* Expiry and groups in the in-memory channel layer
* Improved Live Server test case
* More powerful OriginValidator
* Other small changes and fixes in the full release notes.
2.0.2 (2018-02-08)
------------------
* SyncConsumer now terminates old database connections, and there is a new
database_sync_to_async wrapper to allow async connections to do the same.
2.0.1 (2018-02-05)
------------------
* AsyncWebsocketConsumer and AsyncJsonWebsocketConsumer classes added
* OriginValidator and AllowedHostsOriginValidator ASGI middleware is now available
* URLRouter now correctly resolves long lists of URLs
2.0.0 (2018-02-01)
------------------
* Major backwards-incompatible rewrite to move to an asyncio base and remove
the requirement to transport data over the network, as well as overhauled
generic consumers, test helpers, routing and more.
1.1.6 (2017-06-28)
------------------
* The ``runserver`` ``server_cls`` override no longer fails with more modern
Django versions that pass an ``ipv6`` parameter.
1.1.5 (2017-06-16)
------------------
* The Daphne dependency requirement was bumped to 1.3.0.
1.1.4 (2017-06-15)
------------------
* Pending messages correctly handle retries in backlog situations
* Workers in threading mode now respond to ctrl-C and gracefully exit.
* ``request.meta['QUERY_STRING']`` is now correctly encoded at all times.
* Test client improvements
* ``ChannelServerLiveTestCase`` added, allows an equivalent of the Django
``LiveTestCase``.
* Decorator added to check ``Origin`` headers (``allowed_hosts_only``)
* New ``TEST_CONFIG`` setting in ``CHANNEL_LAYERS`` that allows varying of
the channel layer for tests (e.g. using a different Redis install)
1.1.3 (2017-04-05)
------------------
* ``enforce_ordering`` now works correctly with the new-style process-specific
channels
* ASGI channel layer versions are now explicitly checked for version compatibility
1.1.2 (2017-04-01)
------------------
* Session name hash changed to SHA-1 to satisfy FIPS-140-2. Due to this,
please force all WebSockets to reconnect after the upgrade.
* `scheme` key in ASGI-HTTP messages now translates into `request.is_secure()`
correctly.
* WebsocketBridge now exposes the underlying WebSocket as `.socket`
1.1.1 (2017-03-19)
------------------
* Fixed JS packaging issue
1.1.0 (2017-03-18)
------------------
* Channels now includes a JavaScript wrapper that wraps reconnection and
multiplexing for you on the client side.
* Test classes have been moved from ``channels.tests`` to ``channels.test``.
* Bindings now support non-integer fields for primary keys on models.
* The ``enforce_ordering`` decorator no longer suffers a race condition where
it would drop messages under high load.
* ``runserver`` no longer errors if the ``staticfiles`` app is not enabled in Django.
1.0.3 (2017-02-01)
------------------
* Database connections are no longer force-closed after each test is run.
* Channel sessions are not re-saved if they're empty even if they're marked as
modified, allowing logout to work correctly.
* WebsocketDemultiplexer now correctly does sessions for the second/third/etc.
connect and disconnect handlers.
* Request reading timeouts now correctly return 408 rather than erroring out.
* The ``rundelay`` delay server now only polls the database once per second,
and this interval is configurable with the ``--sleep`` option.
1.0.2 (2017-01-12)
------------------
* Websockets can now be closed from anywhere using the new ``WebsocketCloseException``.
There is also a generic ``ChannelSocketException`` so you can do custom behaviours.
* Calling ``Channel.send`` or ``Group.send`` from outside a consumer context
(i.e. in tests or management commands) will once again send the message immediately.
* The base implementation of databinding now correctly only calls ``group_names(instance)``,
as documented.
1.0.1 (2017-01-09)
------------------
* WebSocket generic views now accept connections by default in their connect
handler for better backwards compatibility.
1.0.0 (2017-01-08)
------------------
* BREAKING CHANGE: WebSockets must now be explicitly accepted or denied.
See https://channels.readthedocs.io/en/latest/releases/1.0.0.html for more.
* BREAKING CHANGE: Demultiplexers have been overhauled to directly dispatch
messages rather than using channels to new consumers. Consult the docs on
generic consumers for more: https://channels.readthedocs.io/en/latest/generics.html
* BREAKING CHANGE: Databinding now operates from implicit group membership,
where your code just has to say what groups would be used and Channels will
work out if it's a creation, modification or removal from a client's
perspective, including with permissions.
* Delay protocol server ships with Channels providing a specification on how
to delay jobs until later and a reference implementation.
* Serializers can now specify fields as `__all__` to auto-include all fields.
* Various other small fixes.
0.17.3 (2016-10-12)
-------------------
* channel_session now also rehydrates the http session with an option
* request.META['PATH_INFO'] is now present
* runserver shows Daphne log messages
* runserver --nothreading only starts a single worker thread
* Databinding changed to call group_names dynamically and imply changed/created from that;
other small changes to databinding, and more changes likely.
0.17.2 (2016-08-04)
-------------------
* New CHANNELS_WS_PROTOCOLS setting if you want Daphne to accept certain
subprotocols
* WebsocketBindingWithMembers allows serialization of non-fields on instances
* Class-based consumers have an .as_route() method that lets you skip using
route_class
* Bindings now work if loaded after app ready state
0.17.1 (2016-07-22)
-------------------
* Bindings now require that `fields` is defined on the class body so all fields
are not sent by default. To restore old behaviour, set it to ['__all__']
* Bindings can now be declared after app.ready() has been called and still work.
* Binding payloads now include the model name as `appname.modelname`.
* A worker_ready signal now gets triggered when `runworker` starts consuming
messages. It does not fire from within `runserver`.
0.17.0 (2016-07-19)
-------------------
* Data Binding framework is added, which allows easy tying of model changes
to WebSockets (and other protocols) and vice-versa.
* Standardised WebSocket/JSON multiplexing introduced
* WebSocket generic consumers now have a 'close' argument on send/group_send
0.16.1 (2016-07-12)
-------------------
* WebsocketConsumer now has a http_user option for auto user sessions.
* consumer_started and consumer_finished signals are now available under
channels.signals.
* Database connections are closed whenever a consumer finishes.
0.16.0 (2016-07-06)
-------------------
* websocket.connect and websocket.receive are now consumed by a no-op consumer
by default if you don't specify anything to consume it, to bring Channels in
line with the ASGI rules on WebSocket backpressure.
* You no longer need to call super's setUp in ChannelTestCase.
0.15.1 (2016-06-29)
-------------------
* Class based consumers now have a self.kwargs
* Fixed bug where empty streaming responses did not send headers or status code
0.15.0 (2016-06-22)
-------------------
* Query strings are now decoded entirely by Django. Must be used with Daphne
0.13 or higher.
0.14.3 (2016-06-21)
-------------------
* + signs in query strings are no longer double-decoded
* Message now has .values(), .keys() and .items() to match dict
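The class of bug fixed by the first item above can be illustrated with a stdlib-only sketch (this is not the Channels code path, just the general ``+`` double-decoding hazard):

```python
from urllib.parse import unquote_plus

raw = "q=%2Bfoo"                   # the client sent a literal "+"
decoded = unquote_plus(raw)        # one decode is correct: "q=+foo"
corrupted = unquote_plus(decoded)  # a second decode turns "+" into a space

assert decoded == "q=+foo"
assert corrupted == "q= foo"
```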
0.14.2 (2016-06-16)
-------------------
* Class based consumers now have built-in channel_session and
channel_session_user support
0.14.1 (2016-06-09)
-------------------
* Fix unicode issues with test client under Python 2.7
0.14.0 (2016-05-25)
-------------------
* Class-based consumer pattern and WebSocket consumer now come with Channels
(see docs for more details)
* Better testing utilities including a higher-level Client abstraction with
optional HTTP/WebSocket HttpClient variant.
0.13.1 (2016-05-13)
-------------------
* enforce_ordering now queues future messages in a channel rather than
spinlocking worker processes to achieve delays.
* ConsumeLater no longer duplicates messages when they're requeued below the
limit.
0.13.0 (2016-05-07)
-------------------
* Backpressure is now implemented, meaning responses will pause sending if
the client does not read them fast enough.
* DatabaseChannelLayer has been removed; it was not sensible.
0.12.0 (2016-04-26)
-------------------
* HTTP paths and query strings are now expected to be sent over ASGI as
unescaped unicode. Daphne 0.11.0 is updated to send things in this format.
* request.FILES reading bug fixed
0.11.0 (2016-04-05)
-------------------
* ChannelTestCase base testing class for easier testing of consumers
* Routing rewrite to improve speed with nested includes and remove need for ^ operator
* Timeouts reading very slow request bodies
0.10.3 (2016-03-29)
-------------------
* Better error messages for wrongly-constructed routing lists
* Error when trying to use signed cookie backend with channel_session
* ASGI group_expiry implemented on database channel backend
0.10.2 (2016-03-23)
-------------------
* Regular expressions for routing include() can now be Unicode under Python 3
* Last-resort error handling for HTTP request exceptions inside Django's core
code. If DEBUG is on, shows plain text tracebacks; if it is off, shows
"Internal Server Error".
0.10.1 (2016-03-22)
-------------------
* Regular expressions for HTTP paths can now be Unicode under Python 3
* route() and include() now importable directly from `channels`
* FileResponse send speed improved for all code (previously just for staticfiles)
0.10.0 (2016-03-21)
-------------------
* New routing system
* Updated to match new ASGI single-reader-channel name spec
* Updated to match new ASGI HTTP header encoding spec
0.9.5 (2016-03-10)
------------------
* `runworker` now has an --alias option to specify a different channel layer
* `runserver` correctly falls back to WSGI mode if no channel layers configured
0.9.4 (2016-03-08)
------------------
* Worker processes now exit gracefully (finish their current processing) when
sent SIGTERM or SIGINT.
* `runserver` now has a shorter than standard HTTP timeout configured
of 60 seconds.
0.9.3 (2016-02-28)
------------------
* Static file serving is significantly faster thanks to larger chunk size
* `runworker` now refuses to start if an in memory layer is configured
0.9.2 (2016-02-28)
------------------
* ASGI spec updated to include `order` field for WebSocket messages
* `enforce_ordering` decorator introduced
* DatabaseChannelLayer now uses transactions to stop duplicated messages
0.9.1 (2016-02-21)
------------------
* Fix packaging issues with previous release
0.9 (2016-02-21)
----------------
* Staticfiles support in runserver
* Runserver logs requests and WebSocket connections to console
* Runserver autoreloads correctly
* --noasgi option on runserver to use the old WSGI-based server
* --noworker option on runserver to make it not launch worker threads
* Streaming responses work correctly
* Authentication decorators work again with new ASGI spec
* channel_session_user_from_http decorator introduced
* HTTP Long Poll support (raise ResponseLater)
* Handle non-latin1 request body encoding
* ASGI conformance tests for built-in database backend
* Moved some imports around for more sensible layout
channels-2.4.0/CONTRIBUTING.rst
Contributing to Channels
========================
As an open source project, Channels welcomes contributions of many forms. By participating in this project, you
agree to abide by the Django `code of conduct `_.
Examples of contributions include:
* Code patches
* Documentation improvements
* Bug reports and patch reviews
For more information, please see our `contribution guide `_.
Quick Setup
-----------
Fork, then clone the repo::

    git clone git@github.com:your-username/channels.git

Make sure the tests pass::

    pip install -e .[tests]
    pytest

Make your change. Add tests for your change. Make the tests pass::

    pytest

Make sure your code conforms to the coding style::

    black ./channels ./tests
    isort --check-only --diff --recursive ./channels ./tests

Push to your fork and `submit a pull request `_.
channels-2.4.0/LICENSE
Copyright (c) Django Software Foundation and individual contributors.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of Django nor the names of its contributors may be used
to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
channels-2.4.0/MANIFEST.in
recursive-exclude tests *
include LICENSE
channels-2.4.0/README.rst
Django Channels
===============
.. image:: https://api.travis-ci.org/django/channels.svg
    :target: https://travis-ci.org/django/channels

.. image:: https://readthedocs.org/projects/channels/badge/?version=latest
    :target: https://channels.readthedocs.io/en/latest/?badge=latest

.. image:: https://img.shields.io/pypi/v/channels.svg
    :target: https://pypi.python.org/pypi/channels

.. image:: https://img.shields.io/pypi/l/channels.svg
    :target: https://pypi.python.org/pypi/channels
Channels augments Django to bring WebSocket, long-poll HTTP,
task offloading and other async support to your code, using familiar Django
design patterns and a flexible underlying framework that lets you not only
customize behaviours but also write support for your own protocols and needs.
Documentation, installation and getting started instructions are at
https://channels.readthedocs.io
Channels is an official Django Project and as such has a deprecation policy.
Details about what's deprecated or pending deprecation for each release are in
the `release notes `_.
Support can be obtained through several locations - see our
`support docs `_ for more.
You can install channels from PyPI as the ``channels`` package.
See our `installation `_
and `tutorial `_ docs for more.
Dependencies
------------
All Channels projects currently support Python 3.5 and up. ``channels`` is
compatible with Django 2.2 and 3.0.
Contributing
------------
To learn more about contributing, please `read our contributing docs `_.
Maintenance and Security
------------------------
To report security issues, please contact security@djangoproject.com. For GPG
signatures and more security process information, see
https://docs.djangoproject.com/en/dev/internals/security/.
To report bugs or request new features, please open a new GitHub issue. For
larger discussions, please post to the
`django-developers mailing list `_.
Maintenance is overseen by Carlton Gibson with help from others. It is run on a
best-effort basis - we unfortunately can only dedicate guaranteed time to fixing
security holes.
If you are interested in joining the maintenance team, please
`read more about contributing `_
and get in touch!
Other Projects
--------------
The Channels project is made up of several packages; the others are:
* `Daphne `_, the HTTP and Websocket termination server
* `channels_redis `_, the Redis channel backend
* `asgiref `_, the base ASGI library/memory backend
channels-2.4.0/channels/__init__.py
__version__ = "2.4.0"
default_app_config = "channels.apps.ChannelsConfig"
DEFAULT_CHANNEL_LAYER = "default"
channels-2.4.0/channels/apps.py
from django.apps import AppConfig

# We import this here to ensure the reactor is installed very early on
# in case other packages accidentally import twisted.internet.reactor
# (e.g. raven does this).
import daphne.server

assert daphne.server  # pyflakes doesn't support ignores


class ChannelsConfig(AppConfig):

    name = "channels"
    verbose_name = "Channels"

    def ready(self):
        # Do django monkeypatches
        from .hacks import monkeypatch_django

        monkeypatch_django()
channels-2.4.0/channels/auth.py
from django.conf import settings
from django.contrib.auth import (
    BACKEND_SESSION_KEY,
    HASH_SESSION_KEY,
    SESSION_KEY,
    _get_backends,
    get_user_model,
    load_backend,
    user_logged_in,
    user_logged_out,
)
from django.contrib.auth.models import AnonymousUser
from django.utils.crypto import constant_time_compare
from django.utils.functional import LazyObject
from django.utils.translation import LANGUAGE_SESSION_KEY

from channels.db import database_sync_to_async
from channels.middleware import BaseMiddleware
from channels.sessions import CookieMiddleware, SessionMiddleware


@database_sync_to_async
def get_user(scope):
    """
    Return the user model instance associated with the given scope.
    If no user is retrieved, return an instance of `AnonymousUser`.
    """
    if "session" not in scope:
        raise ValueError(
            "Cannot find session in scope. You should wrap your consumer in SessionMiddleware."
        )
    session = scope["session"]
    user = None
    try:
        user_id = _get_user_session_key(session)
        backend_path = session[BACKEND_SESSION_KEY]
    except KeyError:
        pass
    else:
        if backend_path in settings.AUTHENTICATION_BACKENDS:
            backend = load_backend(backend_path)
            user = backend.get_user(user_id)
            # Verify the session
            if hasattr(user, "get_session_auth_hash"):
                session_hash = session.get(HASH_SESSION_KEY)
                session_hash_verified = session_hash and constant_time_compare(
                    session_hash, user.get_session_auth_hash()
                )
                if not session_hash_verified:
                    session.flush()
                    user = None
    return user or AnonymousUser()


@database_sync_to_async
def login(scope, user, backend=None):
    """
    Persist a user id and a backend in the request.
    This way a user doesn't have to re-authenticate on every request.
    Note that data set during the anonymous session is retained when the user logs in.
    """
    if "session" not in scope:
        raise ValueError(
            "Cannot find session in scope. You should wrap your consumer in SessionMiddleware."
        )
    session = scope["session"]
    session_auth_hash = ""
    if user is None:
        user = scope.get("user", None)
    if user is None:
        raise ValueError(
            "User must be passed as an argument or must be present in the scope."
        )
    if hasattr(user, "get_session_auth_hash"):
        session_auth_hash = user.get_session_auth_hash()
    if SESSION_KEY in session:
        if _get_user_session_key(session) != user.pk or (
            session_auth_hash
            and not constant_time_compare(
                session.get(HASH_SESSION_KEY, ""), session_auth_hash
            )
        ):
            # To avoid reusing another user's session, create a new, empty
            # session if the existing session corresponds to a different
            # authenticated user.
            session.flush()
    else:
        session.cycle_key()
    try:
        backend = backend or user.backend
    except AttributeError:
        backends = _get_backends(return_tuples=True)
        if len(backends) == 1:
            _, backend = backends[0]
        else:
            raise ValueError(
                "You have multiple authentication backends configured and therefore must provide the `backend` "
                "argument or set the `backend` attribute on the user."
            )
    session[SESSION_KEY] = user._meta.pk.value_to_string(user)
    session[BACKEND_SESSION_KEY] = backend
    session[HASH_SESSION_KEY] = session_auth_hash
    scope["user"] = user
    # note this does not reset the CSRF_COOKIE/Token
    user_logged_in.send(sender=user.__class__, request=None, user=user)


@database_sync_to_async
def logout(scope):
    """
    Remove the authenticated user's ID from the request and flush their session data.
    """
    if "session" not in scope:
        raise ValueError(
            "Login cannot find session in scope. You should wrap your consumer in SessionMiddleware."
        )
    session = scope["session"]
    # Dispatch the signal before the user is logged out so the receivers have a
    # chance to find out *who* logged out.
    user = scope.get("user", None)
    if hasattr(user, "is_authenticated") and not user.is_authenticated:
        user = None
    if user is not None:
        user_logged_out.send(sender=user.__class__, request=None, user=user)
    # remember language choice saved to session
    language = session.get(LANGUAGE_SESSION_KEY)
    session.flush()
    if language is not None:
        session[LANGUAGE_SESSION_KEY] = language
    if "user" in scope:
        scope["user"] = AnonymousUser()


def _get_user_session_key(session):
    # This value in the session is always serialized to a string, so we need
    # to convert it back to Python whenever we access it.
    return get_user_model()._meta.pk.to_python(session[SESSION_KEY])


class UserLazyObject(LazyObject):
    """
    Throw a more useful error message when scope['user'] is accessed before it's resolved
    """

    def _setup(self):
        raise ValueError("Accessing scope user before it is ready.")


class AuthMiddleware(BaseMiddleware):
    """
    Middleware which populates scope["user"] from a Django session.
    Requires SessionMiddleware to function.
    """

    def populate_scope(self, scope):
        # Make sure we have a session
        if "session" not in scope:
            raise ValueError(
                "AuthMiddleware cannot find session in scope. SessionMiddleware must be above it."
            )
        # Add it to the scope if it's not there already
        if "user" not in scope:
            scope["user"] = UserLazyObject()

    async def resolve_scope(self, scope):
        scope["user"]._wrapped = await get_user(scope)


# Handy shortcut for applying all three layers at once
AuthMiddlewareStack = lambda inner: CookieMiddleware(
    SessionMiddleware(AuthMiddleware(inner))
)
channels-2.4.0/channels/consumer.py 0000664 0000000 0000000 00000007137 13576505155 0017334 0 ustar 00root root 0000000 0000000 import functools
from asgiref.sync import async_to_sync
from . import DEFAULT_CHANNEL_LAYER
from .db import database_sync_to_async
from .exceptions import StopConsumer
from .layers import get_channel_layer
from .utils import await_many_dispatch
def get_handler_name(message):
"""
Looks at a message, checks it has a sensible type, and returns the
handler name for that type.
"""
# Check message looks OK
if "type" not in message:
raise ValueError("Incoming message has no 'type' attribute")
if message["type"].startswith("_"):
raise ValueError("Malformed type in message (leading underscore)")
# Extract type and replace . with _
return message["type"].replace(".", "_")
class AsyncConsumer:
"""
Base consumer class. Implements the ASGI application spec, and adds on
channel layer management and routing of events to named methods based
on their type.
"""
_sync = False
channel_layer_alias = DEFAULT_CHANNEL_LAYER
def __init__(self, scope):
self.scope = scope
async def __call__(self, receive, send):
"""
Dispatches incoming messages to type-based handlers asynchronously.
"""
# Initialize channel layer
self.channel_layer = get_channel_layer(self.channel_layer_alias)
if self.channel_layer is not None:
self.channel_name = await self.channel_layer.new_channel()
self.channel_receive = functools.partial(
self.channel_layer.receive, self.channel_name
)
# Store send function
if self._sync:
self.base_send = async_to_sync(send)
else:
self.base_send = send
# Pass messages in from channel layer or client to dispatch method
try:
if self.channel_layer is not None:
await await_many_dispatch(
[receive, self.channel_receive], self.dispatch
)
else:
await await_many_dispatch([receive], self.dispatch)
except StopConsumer:
# Exit cleanly
pass
async def dispatch(self, message):
"""
Works out what to do with a message.
"""
handler = getattr(self, get_handler_name(message), None)
if handler:
await handler(message)
else:
raise ValueError("No handler for message type %s" % message["type"])
async def send(self, message):
"""
Overrideable/callable-by-subclasses send method.
"""
await self.base_send(message)
class SyncConsumer(AsyncConsumer):
"""
Synchronous version of the consumer, which is what we write most of the
    generic consumers against (for now). Calls handlers in a threadpool and
    uses async_to_sync to get the send method out to the main event loop.
It would have been possible to have "mixed" consumers and auto-detect
if a handler was awaitable or not, but that would have made the API
for user-called methods very confusing as there'd be two types of each.
"""
_sync = True
@database_sync_to_async
def dispatch(self, message):
"""
        Dispatches incoming messages to type-based handlers, calling them
        synchronously in a worker thread.
"""
# Get and execute the handler
handler = getattr(self, get_handler_name(message), None)
if handler:
handler(message)
else:
raise ValueError("No handler for message type %s" % message["type"])
def send(self, message):
"""
Overrideable/callable-by-subclasses send method.
"""
self.base_send(message)
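Both consumer classes above route messages through ``get_handler_name``: a message of type ``"chat.message"`` is dispatched to a method named ``chat_message``. A minimal, self-contained sketch of that pattern (``MiniConsumer`` is illustrative, not part of Channels):

```python
def get_handler_name(message):
    """Map a message's "type" key to a handler method name."""
    if "type" not in message:
        raise ValueError("Incoming message has no 'type' attribute")
    if message["type"].startswith("_"):
        raise ValueError("Malformed type in message (leading underscore)")
    return message["type"].replace(".", "_")


class MiniConsumer:
    """Toy stand-in for AsyncConsumer's dispatch logic."""

    def dispatch(self, message):
        handler = getattr(self, get_handler_name(message), None)
        if handler is None:
            raise ValueError("No handler for message type %s" % message["type"])
        return handler(message)

    def chat_message(self, message):
        return "handled: %s" % message.get("text", "")


result = MiniConsumer().dispatch({"type": "chat.message", "text": "hi"})
```

This is also why leading underscores are rejected: it stops messages from reaching private methods.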
channels-2.4.0/channels/db.py
from django.db import close_old_connections
from asgiref.sync import SyncToAsync
class DatabaseSyncToAsync(SyncToAsync):
"""
SyncToAsync version that cleans up old database connections when it exits.
"""
def thread_handler(self, loop, *args, **kwargs):
close_old_connections()
try:
return super().thread_handler(loop, *args, **kwargs)
finally:
close_old_connections()
# The class is TitleCased, but we want to encourage use as a callable/decorator
database_sync_to_async = DatabaseSyncToAsync
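``DatabaseSyncToAsync`` wraps both sides of the threaded call with ``close_old_connections``. A rough standalone analogue of that try/finally pattern, using a plain executor and a stub cleanup function (since Django is not imported here):

```python
import asyncio

CLEANUP_CALLS = []  # records cleanup runs (stand-in for close_old_connections)


def close_old_connections():
    # Illustrative stub: the real Django helper closes stale DB connections.
    CLEANUP_CALLS.append("closed")


async def run_in_thread(func, *args):
    # Clean up before and after the blocking call, mirroring
    # DatabaseSyncToAsync.thread_handler's try/finally.
    def wrapped():
        close_old_connections()
        try:
            return func(*args)
        finally:
            close_old_connections()

    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, wrapped)


result = asyncio.run(run_in_thread(lambda: 2 + 2))
```

The ``finally`` clause matters: stale connections are closed even when the wrapped function raises.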
channels-2.4.0/channels/exceptions.py
class RequestAborted(Exception):
"""
Raised when the incoming request tells us it's aborted partway through
reading the body.
"""
pass
class RequestTimeout(RequestAborted):
"""
Aborted specifically due to timeout.
"""
pass
class InvalidChannelLayerError(ValueError):
"""
Raised when a channel layer is configured incorrectly.
"""
pass
class AcceptConnection(Exception):
"""
Raised during a websocket.connect (or other supported connection) handler
to accept the connection.
"""
pass
class DenyConnection(Exception):
"""
Raised during a websocket.connect (or other supported connection) handler
to deny the connection.
"""
pass
class ChannelFull(Exception):
"""
Raised when a channel cannot be sent to as it is over capacity.
"""
pass
class MessageTooLarge(Exception):
"""
Raised when a message cannot be sent as it's too big.
"""
pass
class StopConsumer(Exception):
"""
Raised when a consumer wants to stop and close down its application instance.
"""
pass
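``AcceptConnection`` and ``DenyConnection`` exist so a ``connect()`` handler can signal the handshake outcome by raising, instead of calling ``accept()``/``close()`` itself. A toy version of the try/except used in ``WebsocketConsumer.websocket_connect`` (the exception classes are redefined locally so the sketch runs standalone):

```python
class AcceptConnection(Exception):
    pass


class DenyConnection(Exception):
    pass


def websocket_connect(connect):
    # Mirrors WebsocketConsumer.websocket_connect: raising either
    # exception from connect() decides the handshake outcome.
    try:
        connect()
    except AcceptConnection:
        return "accepted"
    except DenyConnection:
        return "closed"
    return "handler decided"


def deny():
    raise DenyConnection()


outcome = websocket_connect(deny)
```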
channels-2.4.0/channels/generic/
channels-2.4.0/channels/generic/__init__.py
channels-2.4.0/channels/generic/http.py
from channels.consumer import AsyncConsumer
from ..exceptions import StopConsumer
class AsyncHttpConsumer(AsyncConsumer):
"""
Async HTTP consumer. Provides basic primitives for building asynchronous
HTTP endpoints.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.body = []
async def send_headers(self, *, status=200, headers=None):
"""
Sets the HTTP response status and headers. Headers may be provided as
a list of tuples or as a dictionary.
Note that the ASGI spec requires that the protocol server only starts
sending the response to the client after ``self.send_body`` has been
called the first time.
"""
if headers is None:
headers = []
elif isinstance(headers, dict):
headers = list(headers.items())
await self.send(
{"type": "http.response.start", "status": status, "headers": headers}
)
async def send_body(self, body, *, more_body=False):
"""
Sends a response body to the client. The method expects a bytestring.
Set ``more_body=True`` if you want to send more body content later.
The default behavior closes the response, and further messages on
the channel will be ignored.
"""
assert isinstance(body, bytes), "Body is not bytes"
await self.send(
{"type": "http.response.body", "body": body, "more_body": more_body}
)
async def send_response(self, status, body, **kwargs):
"""
Sends a response to the client. This is a thin wrapper over
``self.send_headers`` and ``self.send_body``, and everything said
above applies here as well. This method may only be called once.
"""
await self.send_headers(status=status, **kwargs)
await self.send_body(body)
async def handle(self, body):
"""
Receives the request body as a bytestring. Response may be composed
using the ``self.send*`` methods; the return value of this method is
thrown away.
"""
raise NotImplementedError(
"Subclasses of AsyncHttpConsumer must provide a handle() method."
)
async def disconnect(self):
"""
Overrideable place to run disconnect handling. Do not send anything
from here.
"""
pass
async def http_request(self, message):
"""
Async entrypoint - concatenates body fragments and hands off control
to ``self.handle`` when the body has been completely received.
"""
if "body" in message:
self.body.append(message["body"])
if not message.get("more_body"):
try:
await self.handle(b"".join(self.body))
finally:
await self.disconnect()
raise StopConsumer()
async def http_disconnect(self, message):
"""
Let the user do their cleanup and close the consumer.
"""
await self.disconnect()
raise StopConsumer()
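``http_request`` above accumulates body fragments until ``more_body`` is falsy, then hands the joined bytestring to ``handle()``. The chunk-joining logic in isolation:

```python
def collect_body(messages):
    # Concatenate http.request fragments the way
    # AsyncHttpConsumer.http_request does before calling handle().
    body = []
    for message in messages:
        if "body" in message:
            body.append(message["body"])
        if not message.get("more_body"):
            break
    return b"".join(body)


full_body = collect_body(
    [
        {"type": "http.request", "body": b"hello ", "more_body": True},
        {"type": "http.request", "body": b"world"},
    ]
)
```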
channels-2.4.0/channels/generic/websocket.py
import json
from asgiref.sync import async_to_sync
from ..consumer import AsyncConsumer, SyncConsumer
from ..exceptions import (
AcceptConnection,
DenyConnection,
InvalidChannelLayerError,
StopConsumer,
)
class WebsocketConsumer(SyncConsumer):
"""
Base WebSocket consumer. Provides a general encapsulation for the
WebSocket handling model that other applications can build on.
"""
groups = None
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.groups is None:
self.groups = []
def websocket_connect(self, message):
"""
Called when a WebSocket connection is opened.
"""
try:
for group in self.groups:
async_to_sync(self.channel_layer.group_add)(group, self.channel_name)
except AttributeError:
raise InvalidChannelLayerError(
"BACKEND is unconfigured or doesn't support groups"
)
try:
self.connect()
except AcceptConnection:
self.accept()
except DenyConnection:
self.close()
def connect(self):
self.accept()
def accept(self, subprotocol=None):
"""
Accepts an incoming socket
"""
super().send({"type": "websocket.accept", "subprotocol": subprotocol})
def websocket_receive(self, message):
"""
Called when a WebSocket frame is received. Decodes it and passes it
to receive().
"""
if "text" in message:
self.receive(text_data=message["text"])
else:
self.receive(bytes_data=message["bytes"])
def receive(self, text_data=None, bytes_data=None):
"""
Called with a decoded WebSocket frame.
"""
pass
def send(self, text_data=None, bytes_data=None, close=False):
"""
Sends a reply back down the WebSocket
"""
if text_data is not None:
super().send({"type": "websocket.send", "text": text_data})
elif bytes_data is not None:
super().send({"type": "websocket.send", "bytes": bytes_data})
else:
raise ValueError("You must pass one of bytes_data or text_data")
if close:
self.close(close)
def close(self, code=None):
"""
Closes the WebSocket from the server end
"""
if code is not None and code is not True:
super().send({"type": "websocket.close", "code": code})
else:
super().send({"type": "websocket.close"})
def websocket_disconnect(self, message):
"""
Called when a WebSocket connection is closed. Base level so you don't
need to call super() all the time.
"""
try:
for group in self.groups:
async_to_sync(self.channel_layer.group_discard)(
group, self.channel_name
)
except AttributeError:
raise InvalidChannelLayerError(
"BACKEND is unconfigured or doesn't support groups"
)
self.disconnect(message["code"])
raise StopConsumer()
def disconnect(self, code):
"""
Called when a WebSocket connection is closed.
"""
pass
class JsonWebsocketConsumer(WebsocketConsumer):
"""
Variant of WebsocketConsumer that automatically JSON-encodes and decodes
messages as they come in and go out. Expects everything to be text; will
error on binary data.
"""
def receive(self, text_data=None, bytes_data=None, **kwargs):
if text_data:
self.receive_json(self.decode_json(text_data), **kwargs)
else:
raise ValueError("No text section for incoming WebSocket frame!")
def receive_json(self, content, **kwargs):
"""
Called with decoded JSON content.
"""
pass
def send_json(self, content, close=False):
"""
Encode the given content as JSON and send it to the client.
"""
super().send(text_data=self.encode_json(content), close=close)
@classmethod
def decode_json(cls, text_data):
return json.loads(text_data)
@classmethod
def encode_json(cls, content):
return json.dumps(content)
class AsyncWebsocketConsumer(AsyncConsumer):
"""
Base WebSocket consumer, async version. Provides a general encapsulation
for the WebSocket handling model that other applications can build on.
"""
groups = None
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if self.groups is None:
self.groups = []
async def websocket_connect(self, message):
"""
Called when a WebSocket connection is opened.
"""
try:
for group in self.groups:
await self.channel_layer.group_add(group, self.channel_name)
except AttributeError:
raise InvalidChannelLayerError(
"BACKEND is unconfigured or doesn't support groups"
)
try:
await self.connect()
except AcceptConnection:
await self.accept()
except DenyConnection:
await self.close()
async def connect(self):
await self.accept()
async def accept(self, subprotocol=None):
"""
Accepts an incoming socket
"""
await super().send({"type": "websocket.accept", "subprotocol": subprotocol})
async def websocket_receive(self, message):
"""
Called when a WebSocket frame is received. Decodes it and passes it
to receive().
"""
if "text" in message:
await self.receive(text_data=message["text"])
else:
await self.receive(bytes_data=message["bytes"])
async def receive(self, text_data=None, bytes_data=None):
"""
Called with a decoded WebSocket frame.
"""
pass
async def send(self, text_data=None, bytes_data=None, close=False):
"""
Sends a reply back down the WebSocket
"""
if text_data is not None:
await super().send({"type": "websocket.send", "text": text_data})
elif bytes_data is not None:
await super().send({"type": "websocket.send", "bytes": bytes_data})
else:
raise ValueError("You must pass one of bytes_data or text_data")
if close:
await self.close(close)
async def close(self, code=None):
"""
Closes the WebSocket from the server end
"""
if code is not None and code is not True:
await super().send({"type": "websocket.close", "code": code})
else:
await super().send({"type": "websocket.close"})
async def websocket_disconnect(self, message):
"""
Called when a WebSocket connection is closed. Base level so you don't
need to call super() all the time.
"""
try:
for group in self.groups:
await self.channel_layer.group_discard(group, self.channel_name)
except AttributeError:
raise InvalidChannelLayerError(
"BACKEND is unconfigured or doesn't support groups"
)
await self.disconnect(message["code"])
raise StopConsumer()
async def disconnect(self, code):
"""
Called when a WebSocket connection is closed.
"""
pass
class AsyncJsonWebsocketConsumer(AsyncWebsocketConsumer):
"""
Variant of AsyncWebsocketConsumer that automatically JSON-encodes and decodes
messages as they come in and go out. Expects everything to be text; will
error on binary data.
"""
async def receive(self, text_data=None, bytes_data=None, **kwargs):
if text_data:
await self.receive_json(await self.decode_json(text_data), **kwargs)
else:
raise ValueError("No text section for incoming WebSocket frame!")
async def receive_json(self, content, **kwargs):
"""
Called with decoded JSON content.
"""
pass
async def send_json(self, content, close=False):
"""
Encode the given content as JSON and send it to the client.
"""
await super().send(text_data=await self.encode_json(content), close=close)
@classmethod
async def decode_json(cls, text_data):
return json.loads(text_data)
@classmethod
async def encode_json(cls, content):
return json.dumps(content)
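``decode_json`` and ``encode_json`` are classmethod hooks, so a subclass can swap in another serializer (say, a faster JSON library) without touching ``receive``/``send_json``. The default round trip, shown standalone on a plain class:

```python
import json


class Codec:
    # Matches the default hooks on JsonWebsocketConsumer.
    @classmethod
    def encode_json(cls, content):
        return json.dumps(content)

    @classmethod
    def decode_json(cls, text_data):
        return json.loads(text_data)


frame = Codec.encode_json({"type": "chat.message", "text": "hi"})
content = Codec.decode_json(frame)
```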
channels-2.4.0/channels/hacks.py
def monkeypatch_django():
"""
Monkeypatches support for us into parts of Django.
"""
# Ensure that the staticfiles version of runserver bows down to us
# This one is particularly horrible
from django.contrib.staticfiles.management.commands.runserver import (
Command as StaticRunserverCommand,
)
from .management.commands.runserver import Command as RunserverCommand
StaticRunserverCommand.__bases__ = (RunserverCommand,)
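The ``__bases__`` reassignment above swaps the staticfiles runserver's parent class at runtime. The same trick on toy classes (the names here are illustrative only):

```python
class Original:
    def handle(self):
        return "original"


class Replacement:
    def handle(self):
        return "replacement"


class Command(Original):
    pass


# Rewire the inheritance chain in place, as monkeypatch_django does.
Command.__bases__ = (Replacement,)
result = Command().handle()
```

This works because both parent classes have compatible instance layouts; CPython rejects the assignment otherwise.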
channels-2.4.0/channels/http.py
import cgi
import codecs
import logging
import sys
import tempfile
import traceback
from django import http
from django.conf import settings
from django.core import signals
from django.core.exceptions import RequestDataTooBig
from django.core.handlers import base
from django.http import FileResponse, HttpResponse, HttpResponseServerError
from django.urls import set_script_prefix
from django.utils.functional import cached_property
from asgiref.sync import async_to_sync, sync_to_async
from channels.exceptions import RequestAborted, RequestTimeout
logger = logging.getLogger("django.request")
class AsgiRequest(http.HttpRequest):
"""
Custom request subclass that decodes from an ASGI-standard request
dict, and wraps request body handling.
"""
# Number of seconds until a Request gives up on trying to read a request
# body and aborts.
body_receive_timeout = 60
def __init__(self, scope, stream):
self.scope = scope
self._content_length = 0
self._post_parse_error = False
self._read_started = False
self.resolver_match = None
self.script_name = self.scope.get("root_path", "")
if self.script_name and scope["path"].startswith(self.script_name):
# TODO: Better is-prefix checking, slash handling?
self.path_info = scope["path"][len(self.script_name) :]
else:
self.path_info = scope["path"]
        # The Django path differs from the ASGI scope path: it must include the script name
if self.script_name:
self.path = "%s/%s" % (
self.script_name.rstrip("/"),
self.path_info.replace("/", "", 1),
)
else:
self.path = scope["path"]
# HTTP basics
self.method = self.scope["method"].upper()
# fix https://github.com/django/channels/issues/622
query_string = self.scope.get("query_string", "")
if isinstance(query_string, bytes):
query_string = query_string.decode("utf-8")
self.META = {
"REQUEST_METHOD": self.method,
"QUERY_STRING": query_string,
"SCRIPT_NAME": self.script_name,
"PATH_INFO": self.path_info,
# Old code will need these for a while
"wsgi.multithread": True,
"wsgi.multiprocess": True,
}
if self.scope.get("client", None):
self.META["REMOTE_ADDR"] = self.scope["client"][0]
self.META["REMOTE_HOST"] = self.META["REMOTE_ADDR"]
self.META["REMOTE_PORT"] = self.scope["client"][1]
if self.scope.get("server", None):
self.META["SERVER_NAME"] = self.scope["server"][0]
self.META["SERVER_PORT"] = str(self.scope["server"][1])
else:
self.META["SERVER_NAME"] = "unknown"
self.META["SERVER_PORT"] = "0"
        # Handle old-style headers for a transition period
if "headers" in self.scope and isinstance(self.scope["headers"], dict):
self.scope["headers"] = [
(x.encode("latin1"), y) for x, y in self.scope["headers"].items()
]
# Headers go into META
for name, value in self.scope.get("headers", []):
name = name.decode("latin1")
if name == "content-length":
corrected_name = "CONTENT_LENGTH"
elif name == "content-type":
corrected_name = "CONTENT_TYPE"
else:
corrected_name = "HTTP_%s" % name.upper().replace("-", "_")
            # HTTPbis says only ASCII chars are allowed in headers, but we decode latin1 just in case
value = value.decode("latin1")
if corrected_name in self.META:
value = self.META[corrected_name] + "," + value
self.META[corrected_name] = value
# Pull out request encoding if we find it
if "CONTENT_TYPE" in self.META:
self.content_type, self.content_params = cgi.parse_header(
self.META["CONTENT_TYPE"]
)
if "charset" in self.content_params:
try:
codecs.lookup(self.content_params["charset"])
except LookupError:
pass
else:
self.encoding = self.content_params["charset"]
else:
self.content_type, self.content_params = "", {}
# Pull out content length info
if self.META.get("CONTENT_LENGTH", None):
try:
self._content_length = int(self.META["CONTENT_LENGTH"])
except (ValueError, TypeError):
pass
# Body handling
self._stream = stream
# Other bits
self.resolver_match = None
@cached_property
def GET(self):
return http.QueryDict(self.scope.get("query_string", ""))
def _get_scheme(self):
return self.scope.get("scheme", "http")
def _get_post(self):
if not hasattr(self, "_post"):
self._load_post_and_files()
return self._post
def _set_post(self, post):
self._post = post
def _get_files(self):
if not hasattr(self, "_files"):
self._load_post_and_files()
return self._files
POST = property(_get_post, _set_post)
FILES = property(_get_files)
@cached_property
def COOKIES(self):
return http.parse_cookie(self.META.get("HTTP_COOKIE", ""))
class AsgiHandler(base.BaseHandler):
"""
Handler for ASGI requests for the view system only (it will have got here
after traversing the dispatch-by-channel-name system, which decides it's
    an HTTP request).
You can also manually construct it with a get_response callback if you
want to run a single Django view yourself. If you do this, though, it will
not do any URL routing or middleware (Channels uses it for staticfiles'
serving code)
"""
request_class = AsgiRequest
# Size to chunk response bodies into for multiple response messages
chunk_size = 512 * 1024
def __init__(self, scope):
if scope["type"] != "http":
raise ValueError(
"The AsgiHandler can only handle HTTP connections, not %s"
% scope["type"]
)
        super().__init__()
self.scope = scope
self.load_middleware()
async def __call__(self, receive, send):
"""
Async entrypoint - uses the sync_to_async wrapper to run things in a
threadpool.
"""
self.send = async_to_sync(send)
# Receive the HTTP request body as a stream object.
try:
body_stream = await self.read_body(receive)
except RequestAborted:
return
# Launch into body handling (and a synchronous subthread).
await self.handle(body_stream)
async def read_body(self, receive):
"""Reads a HTTP body from an ASGI connection."""
# Use the tempfile that auto rolls-over to a disk file as it fills up.
body_file = tempfile.SpooledTemporaryFile(
max_size=settings.FILE_UPLOAD_MAX_MEMORY_SIZE, mode="w+b"
)
while True:
message = await receive()
if message["type"] == "http.disconnect":
# Early client disconnect.
raise RequestAborted()
# Add a body chunk from the message, if provided.
if "body" in message:
body_file.write(message["body"])
# Quit out if that's the end.
if not message.get("more_body", False):
break
body_file.seek(0)
return body_file
@sync_to_async
def handle(self, body):
"""
Synchronous message processing.
"""
# Set script prefix from message root_path, turning None into empty string
script_prefix = self.scope.get("root_path", "") or ""
if settings.FORCE_SCRIPT_NAME:
script_prefix = settings.FORCE_SCRIPT_NAME
set_script_prefix(script_prefix)
signals.request_started.send(sender=self.__class__, scope=self.scope)
# Run request through view system
try:
request = self.request_class(self.scope, body)
except UnicodeDecodeError:
logger.warning(
"Bad Request (UnicodeDecodeError)",
exc_info=sys.exc_info(),
extra={"status_code": 400},
)
response = http.HttpResponseBadRequest()
except RequestTimeout:
# Parsing the request failed, so the response is a Request Timeout error
response = HttpResponse("408 Request Timeout (upload too slow)", status=408)
except RequestAborted:
# Client closed connection on us mid request. Abort!
return
except RequestDataTooBig:
response = HttpResponse("413 Payload too large", status=413)
else:
response = self.get_response(request)
# Fix chunk size on file responses
if isinstance(response, FileResponse):
response.block_size = 1024 * 512
# Transform response into messages, which we yield back to caller
for response_message in self.encode_response(response):
self.send(response_message)
# Close the response now we're done with it
response.close()
def handle_uncaught_exception(self, request, resolver, exc_info):
"""
Last-chance handler for exceptions.
"""
# There's no WSGI server to catch the exception further up if this fails,
# so translate it into a plain text response.
try:
            return super().handle_uncaught_exception(request, resolver, exc_info)
except Exception:
return HttpResponseServerError(
traceback.format_exc() if settings.DEBUG else "Internal Server Error",
content_type="text/plain",
)
def load_middleware(self):
"""
Loads the Django middleware chain and caches it on the class.
"""
# Because we create an AsgiHandler on every HTTP request
# we need to preserve the Django middleware chain once we load it.
if (
hasattr(self.__class__, "_middleware_chain")
and self.__class__._middleware_chain
):
self._middleware_chain = self.__class__._middleware_chain
self._view_middleware = self.__class__._view_middleware
self._template_response_middleware = (
self.__class__._template_response_middleware
)
self._exception_middleware = self.__class__._exception_middleware
else:
            super().load_middleware()
self.__class__._middleware_chain = self._middleware_chain
self.__class__._view_middleware = self._view_middleware
self.__class__._template_response_middleware = (
self._template_response_middleware
)
self.__class__._exception_middleware = self._exception_middleware
@classmethod
def encode_response(cls, response):
"""
Encodes a Django HTTP response into ASGI http.response message(s).
"""
# Collect cookies into headers.
# Note that we have to preserve header case as there are some non-RFC
# compliant clients that want things like Content-Type correct. Ugh.
response_headers = []
for header, value in response.items():
if isinstance(header, str):
header = header.encode("ascii")
if isinstance(value, str):
value = value.encode("latin1")
response_headers.append((bytes(header), bytes(value)))
for c in response.cookies.values():
response_headers.append(
(b"Set-Cookie", c.output(header="").encode("ascii").strip())
)
# Make initial response message
yield {
"type": "http.response.start",
"status": response.status_code,
"headers": response_headers,
}
# Streaming responses need to be pinned to their iterator
if response.streaming:
# Access `__iter__` and not `streaming_content` directly in case
# it has been overridden in a subclass.
for part in response:
for chunk, _ in cls.chunk_bytes(part):
yield {
"type": "http.response.body",
"body": chunk,
# We ignore "more" as there may be more parts; instead,
# we use an empty final closing message with False.
"more_body": True,
}
# Final closing message
yield {"type": "http.response.body"}
# Other responses just need chunking
else:
# Yield chunks of response
for chunk, last in cls.chunk_bytes(response.content):
yield {
"type": "http.response.body",
"body": chunk,
"more_body": not last,
}
@classmethod
def chunk_bytes(cls, data):
"""
Chunks some data up so it can be sent in reasonable size messages.
Yields (chunk, last_chunk) tuples.
"""
position = 0
if not data:
yield data, True
return
while position < len(data):
yield (
data[position : position + cls.chunk_size],
(position + cls.chunk_size) >= len(data),
)
position += cls.chunk_size
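``chunk_bytes`` is a pure function, so its behavior is easy to see with a small chunk size (the real handler uses 512 KiB). The same algorithm, with the size made a parameter for demonstration:

```python
def chunk_bytes(data, chunk_size=8):
    # Same algorithm as AsgiHandler.chunk_bytes; yields
    # (chunk, last_chunk) tuples.
    position = 0
    if not data:
        yield data, True
        return
    while position < len(data):
        yield (
            data[position : position + chunk_size],
            (position + chunk_size) >= len(data),
        )
        position += chunk_size


chunks = list(chunk_bytes(b"abcdefghij"))
# → [(b"abcdefgh", False), (b"ij", True)]
```

Note the empty-data branch: even a zero-byte body yields one final chunk, so the ASGI response is always closed.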
channels-2.4.0/channels/layers.py
from __future__ import unicode_literals
import asyncio
import fnmatch
import random
import re
import string
import time
from copy import deepcopy
from django.conf import settings
from django.core.signals import setting_changed
from django.utils.module_loading import import_string
from channels import DEFAULT_CHANNEL_LAYER
from .exceptions import ChannelFull, InvalidChannelLayerError
class ChannelLayerManager:
"""
Takes a settings dictionary of backends and initialises them on request.
"""
def __init__(self):
self.backends = {}
setting_changed.connect(self._reset_backends)
def _reset_backends(self, setting, **kwargs):
"""
Removes cached channel layers when the CHANNEL_LAYERS setting changes.
"""
if setting == "CHANNEL_LAYERS":
self.backends = {}
@property
def configs(self):
# Lazy load settings so we can be imported
return getattr(settings, "CHANNEL_LAYERS", {})
def make_backend(self, name):
"""
Instantiate channel layer.
"""
config = self.configs[name].get("CONFIG", {})
return self._make_backend(name, config)
def make_test_backend(self, name):
"""
Instantiate channel layer using its test config.
"""
try:
config = self.configs[name]["TEST_CONFIG"]
except KeyError:
raise InvalidChannelLayerError("No TEST_CONFIG specified for %s" % name)
return self._make_backend(name, config)
def _make_backend(self, name, config):
# Check for old format config
if "ROUTING" in self.configs[name]:
raise InvalidChannelLayerError(
"ROUTING key found for %s - this is no longer needed in Channels 2."
% name
)
# Load the backend class
try:
backend_class = import_string(self.configs[name]["BACKEND"])
except KeyError:
raise InvalidChannelLayerError("No BACKEND specified for %s" % name)
except ImportError:
raise InvalidChannelLayerError(
"Cannot import BACKEND %r specified for %s"
% (self.configs[name]["BACKEND"], name)
)
# Initialise and pass config
return backend_class(**config)
def __getitem__(self, key):
if key not in self.backends:
self.backends[key] = self.make_backend(key)
return self.backends[key]
def __contains__(self, key):
return key in self.configs
def set(self, key, layer):
"""
Sets an alias to point to a new ChannelLayerWrapper instance, and
returns the old one that it replaced. Useful for swapping out the
backend during tests.
"""
old = self.backends.get(key, None)
self.backends[key] = layer
return old
class BaseChannelLayer:
"""
Base channel layer class that others can inherit from, with useful
common functionality.
"""
def __init__(self, expiry=60, capacity=100, channel_capacity=None):
self.expiry = expiry
self.capacity = capacity
self.channel_capacity = channel_capacity or {}
def compile_capacities(self, channel_capacity):
"""
Takes an input channel_capacity dict and returns the compiled list
of regexes that get_capacity will look for as self.channel_capacity
"""
result = []
for pattern, value in channel_capacity.items():
# If they passed in a precompiled regex, leave it, else interpret
# it as a glob.
if hasattr(pattern, "match"):
result.append((pattern, value))
else:
result.append((re.compile(fnmatch.translate(pattern)), value))
return result
def get_capacity(self, channel):
"""
Gets the correct capacity for the given channel; either the default,
or a matching result from channel_capacity. Returns the first matching
result; if you want to control the order of matches, use an ordered dict
as input.
"""
for pattern, capacity in self.channel_capacity:
if pattern.match(channel):
return capacity
return self.capacity
def match_type_and_length(self, name):
if isinstance(name, str) and (len(name) < 100):
return True
return False
### Name validation functions
channel_name_regex = re.compile(r"^[a-zA-Z\d\-_.]+(\![\d\w\-_.]*)?$")
group_name_regex = re.compile(r"^[a-zA-Z\d\-_.]+$")
invalid_name_error = (
"{} name must be a valid unicode string containing only ASCII "
+ "alphanumerics, hyphens, underscores, or periods."
)
def valid_channel_name(self, name, receive=False):
if self.match_type_and_length(name):
if bool(self.channel_name_regex.match(name)):
# Check cases for special channels
if "!" in name and not name.endswith("!") and receive:
raise TypeError(
"Specific channel names in receive() must end at the !"
)
return True
raise TypeError(
"Channel name must be a valid unicode string containing only ASCII "
+ "alphanumerics, hyphens, or periods, not '{}'.".format(name)
)
def valid_group_name(self, name):
if self.match_type_and_length(name):
if bool(self.group_name_regex.match(name)):
return True
raise TypeError(
"Group name must be a valid unicode string containing only ASCII "
+ "alphanumerics, hyphens, or periods."
)
def valid_channel_names(self, names, receive=False):
        assert names and isinstance(names, list), "names must be a non-empty list"
assert all(
self.valid_channel_name(channel, receive=receive) for channel in names
)
return True
def non_local_name(self, name):
"""
Given a channel name, returns the "non-local" part. If the channel name
is a process-specific channel (contains !) this means the part up to
and including the !; if it is anything else, this means the full name.
"""
if "!" in name:
return name[: name.find("!") + 1]
else:
return name
class InMemoryChannelLayer(BaseChannelLayer):
"""
In-memory channel layer implementation
"""
def __init__(
self,
expiry=60,
group_expiry=86400,
capacity=100,
channel_capacity=None,
**kwargs
):
super().__init__(
expiry=expiry,
capacity=capacity,
channel_capacity=channel_capacity,
**kwargs
)
self.channels = {}
self.groups = {}
self.group_expiry = group_expiry
### Channel layer API ###
extensions = ["groups", "flush"]
async def send(self, channel, message):
"""
Send a message onto a (general or specific) channel.
"""
# Typecheck
assert isinstance(message, dict), "message is not a dict"
assert self.valid_channel_name(channel), "Channel name not valid"
# If it's a process-local channel, strip off local part and stick full name in message
assert "__asgi_channel__" not in message
queue = self.channels.setdefault(channel, asyncio.Queue())
# Are we full
if queue.qsize() >= self.capacity:
raise ChannelFull(channel)
# Add message
await queue.put((time.time() + self.expiry, deepcopy(message)))
async def receive(self, channel):
"""
Receive the first message that arrives on the channel.
If more than one coroutine waits on the same channel, a random one
of the waiting coroutines will get the result.
"""
assert self.valid_channel_name(channel)
self._clean_expired()
queue = self.channels.setdefault(channel, asyncio.Queue())
# Do a plain direct receive
_, message = await queue.get()
# Delete if empty
if queue.empty():
del self.channels[channel]
return message
async def new_channel(self, prefix="specific."):
"""
Returns a new channel name that can be used by something in our
process as a specific channel.
"""
return "%s.inmemory!%s" % (
prefix,
"".join(random.choice(string.ascii_letters) for i in range(12)),
)
### Expire cleanup ###
def _clean_expired(self):
"""
Goes through all messages and groups and removes those that are expired.
Any channel with an expired message is removed from all groups.
"""
# Channel cleanup
for channel, queue in list(self.channels.items()):
remove = False
# See if it's expired
while not queue.empty() and queue._queue[0][0] < time.time():
queue.get_nowait()
remove = True
# Any removal prompts group discard
if remove:
self._remove_from_groups(channel)
# Is the channel now empty and needs deleting?
            # (asyncio.Queue is always truthy, so test emptiness explicitly)
            if queue.empty():
                del self.channels[channel]
# Group Expiration
timeout = int(time.time()) - self.group_expiry
for group in self.groups:
for channel in list(self.groups.get(group, set())):
# If join time is older than group_expiry end the group membership
if (
self.groups[group][channel]
and int(self.groups[group][channel]) < timeout
):
# Delete from group
del self.groups[group][channel]
### Flush extension ###
async def flush(self):
self.channels = {}
self.groups = {}
async def close(self):
        # Nothing to do
pass
def _remove_from_groups(self, channel):
"""
Removes a channel from all groups. Used when a message on it expires.
"""
for channels in self.groups.values():
if channel in channels:
del channels[channel]
### Groups extension ###
async def group_add(self, group, channel):
"""
Adds the channel name to a group.
"""
# Check the inputs
assert self.valid_group_name(group), "Group name not valid"
assert self.valid_channel_name(channel), "Channel name not valid"
# Add to group dict
self.groups.setdefault(group, {})
self.groups[group][channel] = time.time()
async def group_discard(self, group, channel):
# Both should be text and valid
assert self.valid_channel_name(channel), "Invalid channel name"
assert self.valid_group_name(group), "Invalid group name"
# Remove from group set
if group in self.groups:
if channel in self.groups[group]:
del self.groups[group][channel]
if not self.groups[group]:
del self.groups[group]
async def group_send(self, group, message):
# Check types
assert isinstance(message, dict), "Message is not a dict"
assert self.valid_group_name(group), "Invalid group name"
# Run clean
self._clean_expired()
# Send to each channel
for channel in self.groups.get(group, set()):
try:
await self.send(channel, message)
except ChannelFull:
pass
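Putting `send()`, `receive()` and `group_send()` together: a stdlib-only sketch of the same in-memory mechanics (expiry-stamped tuples on per-channel `asyncio.Queue`s). The helper names and the sample group are illustrative, not part of the Channels API:

```python
import asyncio
import time
from copy import deepcopy


async def demo(expiry=60):
    channels = {}
    # A group maps channel names to their join timestamps, as above.
    groups = {"chat": {"specific..inmemory!abc": time.time()}}

    async def send(channel, message):
        # Messages carry their expiry timestamp, as in send() above.
        queue = channels.setdefault(channel, asyncio.Queue())
        await queue.put((time.time() + expiry, deepcopy(message)))

    async def receive(channel):
        queue = channels.setdefault(channel, asyncio.Queue())
        _, message = await queue.get()
        return message

    # group_send() is just a best-effort loop over the group's channels.
    for channel in groups["chat"]:
        await send(channel, {"type": "chat.message", "text": "hi"})
    return await receive("specific..inmemory!abc")


message = asyncio.run(demo())
```

Note the `deepcopy` on send: each channel gets an independent copy, so a consumer mutating a received message cannot affect other group members.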
def get_channel_layer(alias=DEFAULT_CHANNEL_LAYER):
"""
Returns a channel layer by alias, or None if it is not configured.
"""
try:
return channel_layers[alias]
except KeyError:
return None
# Default global instance of the channel layer manager
channel_layers = ChannelLayerManager()
# ---- channels-2.4.0/channels/management/__init__.py (empty) ----
# ---- channels-2.4.0/channels/management/commands/__init__.py (empty) ----
# ---- channels-2.4.0/channels/management/commands/runserver.py ----
import datetime
import logging
import sys
from django.apps import apps
from django.conf import settings
from django.core.management import CommandError
from django.core.management.commands.runserver import Command as RunserverCommand
from channels import __version__
from channels.routing import get_default_application
from daphne.endpoints import build_endpoint_description_strings
from daphne.server import Server
from ...staticfiles import StaticFilesWrapper
logger = logging.getLogger("django.channels.server")
class Command(RunserverCommand):
protocol = "http"
server_cls = Server
def add_arguments(self, parser):
super().add_arguments(parser)
parser.add_argument(
"--noasgi",
action="store_false",
dest="use_asgi",
default=True,
help="Run the old WSGI-based runserver rather than the ASGI-based one",
)
parser.add_argument(
"--http_timeout",
action="store",
dest="http_timeout",
type=int,
default=None,
help="Specify the daphne http_timeout interval in seconds (default: no timeout)",
)
parser.add_argument(
"--websocket_handshake_timeout",
action="store",
dest="websocket_handshake_timeout",
type=int,
default=5,
help="Specify the daphne websocket_handshake_timeout interval in seconds (default: 5)",
)
def handle(self, *args, **options):
self.http_timeout = options.get("http_timeout", None)
self.websocket_handshake_timeout = options.get("websocket_handshake_timeout", 5)
# Check Channels is installed right
if options["use_asgi"] and not hasattr(settings, "ASGI_APPLICATION"):
raise CommandError(
"You have not set ASGI_APPLICATION, which is needed to run the server."
)
# Dispatch upward
super().handle(*args, **options)
def inner_run(self, *args, **options):
# Maybe they want the wsgi one?
if not options.get("use_asgi", True):
if hasattr(RunserverCommand, "server_cls"):
self.server_cls = RunserverCommand.server_cls
return RunserverCommand.inner_run(self, *args, **options)
# Run checks
self.stdout.write("Performing system checks...\n\n")
self.check(display_num_errors=True)
self.check_migrations()
# Print helpful text
quit_command = "CTRL-BREAK" if sys.platform == "win32" else "CONTROL-C"
now = datetime.datetime.now().strftime("%B %d, %Y - %X")
self.stdout.write(now)
self.stdout.write(
(
"Django version %(version)s, using settings %(settings)r\n"
"Starting ASGI/Channels version %(channels_version)s development server"
" at %(protocol)s://%(addr)s:%(port)s/\n"
"Quit the server with %(quit_command)s.\n"
)
% {
"version": self.get_version(),
"channels_version": __version__,
"settings": settings.SETTINGS_MODULE,
"protocol": self.protocol,
"addr": "[%s]" % self.addr if self._raw_ipv6 else self.addr,
"port": self.port,
"quit_command": quit_command,
}
)
# Launch server in 'main' thread. Signals are disabled as it's still
# actually a subthread under the autoreloader.
logger.debug("Daphne running, listening on %s:%s", self.addr, self.port)
# build the endpoint description string from host/port options
endpoints = build_endpoint_description_strings(host=self.addr, port=self.port)
try:
self.server_cls(
application=self.get_application(options),
endpoints=endpoints,
signal_handlers=not options["use_reloader"],
action_logger=self.log_action,
http_timeout=self.http_timeout,
root_path=getattr(settings, "FORCE_SCRIPT_NAME", "") or "",
websocket_handshake_timeout=self.websocket_handshake_timeout,
).run()
logger.debug("Daphne exited")
except KeyboardInterrupt:
shutdown_message = options.get("shutdown_message", "")
if shutdown_message:
self.stdout.write(shutdown_message)
return
def get_application(self, options):
"""
Returns the static files serving application wrapping the default application,
if static files should be served. Otherwise just returns the default
handler.
"""
staticfiles_installed = apps.is_installed("django.contrib.staticfiles")
use_static_handler = options.get("use_static_handler", staticfiles_installed)
insecure_serving = options.get("insecure_serving", False)
if use_static_handler and (settings.DEBUG or insecure_serving):
return StaticFilesWrapper(get_default_application())
else:
return get_default_application()
def log_action(self, protocol, action, details):
"""
Logs various different kinds of requests to the console.
"""
# HTTP requests
if protocol == "http" and action == "complete":
msg = "HTTP %(method)s %(path)s %(status)s [%(time_taken).2f, %(client)s]"
# Utilize terminal colors, if available
if 200 <= details["status"] < 300:
# Put 2XX first, since it should be the common case
logger.info(self.style.HTTP_SUCCESS(msg), details)
elif 100 <= details["status"] < 200:
logger.info(self.style.HTTP_INFO(msg), details)
elif details["status"] == 304:
logger.info(self.style.HTTP_NOT_MODIFIED(msg), details)
elif 300 <= details["status"] < 400:
logger.info(self.style.HTTP_REDIRECT(msg), details)
elif details["status"] == 404:
                logger.warning(self.style.HTTP_NOT_FOUND(msg), details)
elif 400 <= details["status"] < 500:
                logger.warning(self.style.HTTP_BAD_REQUEST(msg), details)
else:
# Any 5XX, or any other response
logger.error(self.style.HTTP_SERVER_ERROR(msg), details)
# Websocket requests
elif protocol == "websocket" and action == "connected":
logger.info("WebSocket CONNECT %(path)s [%(client)s]", details)
elif protocol == "websocket" and action == "disconnected":
logger.info("WebSocket DISCONNECT %(path)s [%(client)s]", details)
elif protocol == "websocket" and action == "connecting":
logger.info("WebSocket HANDSHAKING %(path)s [%(client)s]", details)
elif protocol == "websocket" and action == "rejected":
logger.info("WebSocket REJECT %(path)s [%(client)s]", details)
# ---- channels-2.4.0/channels/management/commands/runworker.py ----
import logging
from django.core.management import BaseCommand, CommandError
from channels import DEFAULT_CHANNEL_LAYER
from channels.layers import get_channel_layer
from channels.routing import get_default_application
from channels.worker import Worker
logger = logging.getLogger("django.channels.worker")
class Command(BaseCommand):
leave_locale_alone = True
worker_class = Worker
def add_arguments(self, parser):
        super().add_arguments(parser)
parser.add_argument(
"--layer",
action="store",
dest="layer",
default=DEFAULT_CHANNEL_LAYER,
help="Channel layer alias to use, if not the default.",
)
parser.add_argument("channels", nargs="+", help="Channels to listen on.")
def handle(self, *args, **options):
# Get the backend to use
self.verbosity = options.get("verbosity", 1)
# Get the channel layer they asked for (or see if one isn't configured)
if "layer" in options:
self.channel_layer = get_channel_layer(options["layer"])
else:
self.channel_layer = get_channel_layer()
if self.channel_layer is None:
raise CommandError("You do not have any CHANNEL_LAYERS configured.")
# Run the worker
logger.info("Running worker for channels %s", options["channels"])
worker = self.worker_class(
application=get_default_application(),
channels=options["channels"],
channel_layer=self.channel_layer,
)
worker.run()
# ---- channels-2.4.0/channels/middleware.py ----
from functools import partial
class BaseMiddleware:
"""
    Base class for implementing ASGI middleware. Inherit from this and
    override the populate_scope() and/or resolve_scope() hooks to act on
    the scope before the inner application runs.
    Note that subclasses of this are not safe to keep per-connection state
    on: a single middleware instance serves multiple application instances.
    Store per-connection data in the scope instead.
"""
def __init__(self, inner):
"""
Middleware constructor - just takes inner application.
"""
self.inner = inner
def __call__(self, scope):
"""
ASGI constructor; can insert things into the scope, but not
run asynchronous code.
"""
# Copy scope to stop changes going upstream
scope = dict(scope)
# Allow subclasses to change the scope
self.populate_scope(scope)
# Call the inner application's init
inner_instance = self.inner(scope)
# Partially bind it to our coroutine entrypoint along with the scope
return partial(self.coroutine_call, inner_instance, scope)
async def coroutine_call(self, inner_instance, scope, receive, send):
"""
ASGI coroutine; where we can resolve items in the scope
(but you can't modify it at the top level here!)
"""
await self.resolve_scope(scope)
await inner_instance(receive, send)
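The two-phase calling convention that BaseMiddleware implements can be seen end to end with a toy middleware and inner application (both hypothetical; only the shape matches the class above):

```python
import asyncio
from functools import partial


class TaggingMiddleware:
    # Follows BaseMiddleware: __call__ runs synchronously per connection and
    # may only copy/extend the scope; async work happens in the returned
    # coroutine entrypoint.
    def __init__(self, inner):
        self.inner = inner

    def __call__(self, scope):
        scope = dict(scope, tagged=True)  # the populate_scope() step
        inner_instance = self.inner(scope)
        return partial(self.coroutine_call, inner_instance, scope)

    async def coroutine_call(self, inner_instance, scope, receive, send):
        # A resolve_scope() step would await things here.
        await inner_instance(receive, send)


class Echo:
    # A toy inner application in the same double-callable style.
    def __init__(self, scope):
        self.scope = scope

    async def __call__(self, receive, send):
        await send({"tagged": self.scope.get("tagged", False)})


async def main():
    sent = []

    async def send(message):
        sent.append(message)

    instance = TaggingMiddleware(Echo)({"type": "http"})
    await instance(None, send)
    return sent


sent = asyncio.run(main())
```

The `partial` is what carries the already-populated scope from the synchronous constructor phase into the coroutine phase.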
# ---- channels-2.4.0/channels/routing.py ----
from __future__ import unicode_literals
import importlib
from django.conf import settings
from django.core.exceptions import ImproperlyConfigured
from django.urls.exceptions import Resolver404
from django.urls.resolvers import URLResolver
from channels.http import AsgiHandler
"""
All Routing instances inside this file are also valid ASGI applications - with
new Channels routing, whatever you end up with as the top level object is just
served up as the "ASGI application".
"""
def get_default_application():
"""
Gets the default application, set in the ASGI_APPLICATION setting.
"""
try:
path, name = settings.ASGI_APPLICATION.rsplit(".", 1)
except (ValueError, AttributeError):
raise ImproperlyConfigured("Cannot find ASGI_APPLICATION setting.")
try:
module = importlib.import_module(path)
except ImportError:
raise ImproperlyConfigured("Cannot import ASGI_APPLICATION module %r" % path)
try:
value = getattr(module, name)
except AttributeError:
raise ImproperlyConfigured(
"Cannot find %r in ASGI_APPLICATION module %s" % (name, path)
)
return value
class ProtocolTypeRouter:
"""
Takes a mapping of protocol type names to other Application instances,
and dispatches to the right one based on protocol name (or raises an error)
"""
def __init__(self, application_mapping):
self.application_mapping = application_mapping
if "http" not in self.application_mapping:
self.application_mapping["http"] = AsgiHandler
def __call__(self, scope):
if scope["type"] in self.application_mapping:
return self.application_mapping[scope["type"]](scope)
else:
raise ValueError(
"No application configured for scope type %r" % scope["type"]
)
def route_pattern_match(route, path):
"""
Backport of RegexPattern.match for Django versions before 2.0. Returns
the remaining path and positional and keyword arguments matched.
"""
if hasattr(route, "pattern"):
match = route.pattern.match(path)
if match:
path, args, kwargs = match
kwargs.update(route.default_args)
return path, args, kwargs
return match
# Django<2.0. No converters... :-(
match = route.regex.search(path)
if match:
# If there are any named groups, use those as kwargs, ignoring
# non-named groups. Otherwise, pass all non-named arguments as
# positional arguments.
kwargs = match.groupdict()
args = () if kwargs else match.groups()
if kwargs is not None:
kwargs.update(route.default_args)
return path[match.end() :], args, kwargs
return None
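The named-vs-positional group rule in the Django<2.0 branch is easy to misread; a small `re` sketch of what it does (the route regex is hypothetical):

```python
import re

# A hypothetical route regex with one named group.
regex = re.compile(r"^room/(?P<slug>[^/]+)/")
match = regex.search("room/django/chat/")

# As in route_pattern_match(): once any named group matched, positional
# groups are ignored entirely.
kwargs = match.groupdict()
args = () if kwargs else match.groups()

# The unconsumed remainder is what nested routers receive as the path.
remaining = "room/django/chat/"[match.end():]
```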
class URLRouter:
"""
Routes to different applications/consumers based on the URL path.
Works with anything that has a ``path`` key, but intended for WebSocket
and HTTP. Uses Django's django.conf.urls objects for resolution -
url() or path().
"""
#: This router wants to do routing based on scope[path] or
#: scope[path_remaining]. ``path()`` entries in URLRouter should not be
#: treated as endpoints (ended with ``$``), but similar to ``include()``.
_path_routing = True
def __init__(self, routes):
self.routes = routes
# Django 2 introduced path(); older routes have no "pattern" attribute
if self.routes and hasattr(self.routes[0], "pattern"):
for route in self.routes:
# The inner ASGI app wants to do additional routing, route
# must not be an endpoint
if getattr(route.callback, "_path_routing", False) is True:
route.pattern._is_endpoint = False
for route in self.routes:
if not route.callback and isinstance(route, URLResolver):
raise ImproperlyConfigured(
"%s: include() is not supported in URLRouter. Use nested"
" URLRouter instances instead." % (route,)
)
def __call__(self, scope):
# Get the path
path = scope.get("path_remaining", scope.get("path", None))
if path is None:
raise ValueError("No 'path' key in connection scope, cannot route URLs")
# Remove leading / to match Django's handling
path = path.lstrip("/")
# Run through the routes we have until one matches
for route in self.routes:
try:
match = route_pattern_match(route, path)
if match:
new_path, args, kwargs = match
# Add args or kwargs into the scope
outer = scope.get("url_route", {})
return route.callback(
dict(
scope,
path_remaining=new_path,
url_route={
"args": outer.get("args", ()) + args,
"kwargs": {**outer.get("kwargs", {}), **kwargs},
},
)
)
except Resolver404:
pass
else:
if "path_remaining" in scope:
raise Resolver404("No route found for path %r." % path)
# We are the outermost URLRouter
raise ValueError("No route found for path %r." % path)
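When URLRouter instances nest, each level extends `scope["url_route"]` rather than replacing it; the merge above reduces to this (the sample values are illustrative):

```python
# scope["url_route"] left in place by an outer router:
outer = {"args": (), "kwargs": {"room": "django"}}
# What the current route just matched:
args, kwargs = (), {"message_id": "42"}

# Positional args are concatenated; inner kwargs override outer ones.
url_route = {
    "args": outer.get("args", ()) + args,
    "kwargs": {**outer.get("kwargs", {}), **kwargs},
}
```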
class ChannelNameRouter:
"""
Maps to different applications based on a "channel" key in the scope
(intended for the Channels worker mode)
"""
def __init__(self, application_mapping):
self.application_mapping = application_mapping
def __call__(self, scope):
if "channel" not in scope:
raise ValueError(
"ChannelNameRouter got a scope without a 'channel' key. "
+ "Did you make sure it's only being used for 'channel' type messages?"
)
if scope["channel"] in self.application_mapping:
return self.application_mapping[scope["channel"]](scope)
else:
raise ValueError(
"No application configured for channel name %r" % scope["channel"]
)
# ---- channels-2.4.0/channels/security/__init__.py (empty) ----
# ---- channels-2.4.0/channels/security/websocket.py ----
from urllib.parse import urlparse
from django.conf import settings
from django.http.request import is_same_domain
from ..generic.websocket import AsyncWebsocketConsumer
class OriginValidator:
"""
Validates that the incoming connection has an Origin header that
is in an allowed list.
"""
def __init__(self, application, allowed_origins):
self.application = application
self.allowed_origins = allowed_origins
def __call__(self, scope):
# Make sure the scope is of type websocket
if scope["type"] != "websocket":
raise ValueError(
"You cannot use OriginValidator on a non-WebSocket connection"
)
# Extract the Origin header
parsed_origin = None
for header_name, header_value in scope.get("headers", []):
if header_name == b"origin":
try:
                    # Parse the Origin header value into a ParseResult
parsed_origin = urlparse(header_value.decode("ascii"))
except UnicodeDecodeError:
pass
# Check to see if the origin header is valid
if self.valid_origin(parsed_origin):
# Pass control to the application
return self.application(scope)
else:
# Deny the connection
return WebsocketDenier(scope)
def valid_origin(self, parsed_origin):
"""
        Checks whether the parsed origin is None before passing control to
        the validate_origin() function.
        Returns ``True`` if the validation function succeeds, ``False`` otherwise.
"""
# None is not allowed unless all hosts are allowed
if parsed_origin is None and "*" not in self.allowed_origins:
return False
return self.validate_origin(parsed_origin)
def validate_origin(self, parsed_origin):
"""
Validate the given origin for this site.
        Check that the origin looks valid and matches an origin pattern in the
        specified ``allowed_origins`` list. A pattern begins with a scheme,
        followed by a domain. A domain beginning with a period matches that
        domain and all of its subdomains (for example, ``http://.example.com``
        matches ``http://example.com`` and any of its subdomains). The domain
        may be followed by a port, but the port can be omitted. ``*`` matches
        anything; anything else must match exactly.
        Note: this function assumes that the given origin has a scheme and a
        domain; the port is optional.
Returns ``True`` for a valid host, ``False`` otherwise.
"""
return any(
pattern == "*" or self.match_allowed_origin(parsed_origin, pattern)
for pattern in self.allowed_origins
)
def match_allowed_origin(self, parsed_origin, pattern):
"""
        Returns ``True`` if the origin is either an exact match or a match
        to the wildcard pattern. Compares the scheme, domain and port of the
        origin and the pattern.
        A pattern may begin with a scheme followed by a domain, or may be
        just a domain without a scheme.
        A domain beginning with a period matches that domain and all of its
        subdomains (for example, ``.example.com`` matches ``example.com`` and
        any of its subdomains), with or without a scheme (for example,
        ``http://.example.com`` matches ``http://example.com``). The domain
        may be followed by a port, but the port can be omitted.
        Note: this function assumes that the given origin is either None, a
        scheme-domain-port string, or just a domain string.
"""
if parsed_origin is None:
return False
        # Parse the pattern into a ParseResult object
parsed_pattern = urlparse(pattern.lower(), scheme=None)
if parsed_origin.hostname is None:
return False
if parsed_pattern.scheme is None:
pattern_hostname = urlparse("//" + pattern).hostname or pattern
return is_same_domain(parsed_origin.hostname, pattern_hostname)
# Get origin.port or default ports for origin or None
origin_port = self.get_origin_port(parsed_origin)
# Get pattern.port or default ports for pattern or None
pattern_port = self.get_origin_port(parsed_pattern)
# Compares hostname, scheme, ports of pattern and origin
if (
parsed_pattern.scheme == parsed_origin.scheme
and origin_port == pattern_port
and is_same_domain(parsed_origin.hostname, parsed_pattern.hostname)
):
return True
return False
def get_origin_port(self, origin):
"""
        Returns origin.port, or the default port for the origin's scheme.
        Returns None otherwise.
"""
if origin.port is not None:
# Return origin.port
return origin.port
        # If origin.port doesn't exist, fall back to the scheme's default
if origin.scheme == "http" or origin.scheme == "ws":
# Default port return for http, ws
return 80
elif origin.scheme == "https" or origin.scheme == "wss":
# Default port return for https, wss
return 443
else:
return None
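The same default-port rule, sketched with `urllib.parse` alone (the `origin_port` helper is illustrative):

```python
from urllib.parse import urlparse

DEFAULT_PORTS = {"http": 80, "ws": 80, "https": 443, "wss": 443}


def origin_port(origin):
    # Explicit port wins; otherwise fall back to the scheme's default,
    # else None - matching get_origin_port() above.
    parsed = urlparse(origin)
    if parsed.port is not None:
        return parsed.port
    return DEFAULT_PORTS.get(parsed.scheme)
```

This is why `https://example.com` and `https://example.com:443` compare as the same origin in `match_allowed_origin()`.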
def AllowedHostsOriginValidator(application):
"""
Factory function which returns an OriginValidator configured to use
settings.ALLOWED_HOSTS.
"""
allowed_hosts = settings.ALLOWED_HOSTS
if settings.DEBUG and not allowed_hosts:
allowed_hosts = ["localhost", "127.0.0.1", "[::1]"]
return OriginValidator(application, allowed_hosts)
class WebsocketDenier(AsyncWebsocketConsumer):
"""
Simple application which denies all requests to it.
"""
async def connect(self):
await self.close()
# ---- channels-2.4.0/channels/sessions.py ----
import datetime
import time
from importlib import import_module
from django.conf import settings
from django.contrib.sessions.backends.base import UpdateError
from django.core.exceptions import SuspiciousOperation
from django.http import parse_cookie
from django.http.cookie import SimpleCookie
from django.utils import timezone
from django.utils.encoding import force_str
from django.utils.functional import LazyObject
from channels.db import database_sync_to_async
try:
from django.utils.http import http_date
except ImportError:
from django.utils.http import cookie_date as http_date
class CookieMiddleware:
"""
Extracts cookies from HTTP or WebSocket-style scopes and adds them as a
scope["cookies"] entry with the same format as Django's request.COOKIES.
"""
def __init__(self, inner):
self.inner = inner
def __call__(self, scope):
# Check this actually has headers. They're a required scope key for HTTP and WS.
if "headers" not in scope:
raise ValueError(
"CookieMiddleware was passed a scope that did not have a headers key "
+ "(make sure it is only passed HTTP or WebSocket connections)"
)
# Go through headers to find the cookie one
for name, value in scope.get("headers", []):
if name == b"cookie":
cookies = parse_cookie(value.decode("ascii"))
break
else:
# No cookie header found - add an empty default.
cookies = {}
# Return inner application
return self.inner(dict(scope, cookies=cookies))
@classmethod
def set_cookie(
cls,
message,
key,
value="",
max_age=None,
expires=None,
path="/",
domain=None,
secure=False,
httponly=False,
):
"""
Sets a cookie in the passed HTTP response message.
``expires`` can be:
- a string in the correct format,
- a naive ``datetime.datetime`` object in UTC,
- an aware ``datetime.datetime`` object in any time zone.
If it is a ``datetime.datetime`` object then ``max_age`` will be calculated.
"""
value = force_str(value)
cookies = SimpleCookie()
cookies[key] = value
if expires is not None:
if isinstance(expires, datetime.datetime):
if timezone.is_aware(expires):
expires = timezone.make_naive(expires, timezone.utc)
                delta = expires - datetime.datetime.utcnow()
# Add one second so the date matches exactly (a fraction of
# time gets lost between converting to a timedelta and
# then the date string).
delta = delta + datetime.timedelta(seconds=1)
# Just set max_age - the max_age logic will set expires.
expires = None
max_age = max(0, delta.days * 86400 + delta.seconds)
else:
cookies[key]["expires"] = expires
else:
cookies[key]["expires"] = ""
if max_age is not None:
cookies[key]["max-age"] = max_age
# IE requires expires, so set it if hasn't been already.
if not expires:
cookies[key]["expires"] = http_date(time.time() + max_age)
if path is not None:
cookies[key]["path"] = path
if domain is not None:
cookies[key]["domain"] = domain
if secure:
cookies[key]["secure"] = True
if httponly:
cookies[key]["httponly"] = True
# Write out the cookies to the response
for c in cookies.values():
message.setdefault("headers", []).append(
(b"Set-Cookie", bytes(c.output(header=""), encoding="utf-8"))
)
@classmethod
def delete_cookie(cls, message, key, path="/", domain=None):
"""
Deletes a cookie in a response.
"""
return cls.set_cookie(
message,
key,
max_age=0,
path=path,
domain=domain,
expires="Thu, 01-Jan-1970 00:00:00 GMT",
)
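A stdlib-only sketch of how set_cookie() turns a cookie into an ASGI header pair (`formatdate` stands in for Django's `http_date`, and the helper is a trimmed illustration, not the full signature above):

```python
import time
from email.utils import formatdate
from http.cookies import SimpleCookie


def set_cookie(message, key, value="", max_age=None):
    cookies = SimpleCookie()
    cookies[key] = value
    if max_age is not None:
        cookies[key]["max-age"] = max_age
        # IE requires "expires", so derive it from max_age.
        cookies[key]["expires"] = formatdate(time.time() + max_age, usegmt=True)
    # Append one (name, value) byte pair per cookie to the response message.
    for cookie in cookies.values():
        message.setdefault("headers", []).append(
            (b"Set-Cookie", cookie.output(header="").encode("utf-8"))
        )


message = {"type": "http.response.start", "status": 200}
set_cookie(message, "sessionid", "abc123", max_age=3600)
```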
class SessionMiddleware:
"""
Class that adds Django sessions (from HTTP cookies) to the
scope. Works with HTTP or WebSocket protocol types (or anything that
provides a "headers" entry in the scope).
Requires the CookieMiddleware to be higher up in the stack.
"""
# Message types that trigger a session save if it's modified
save_message_types = ["http.response.start"]
# Message types that can carry session cookies back
cookie_response_message_types = ["http.response.start"]
def __init__(self, inner):
self.inner = inner
self.cookie_name = settings.SESSION_COOKIE_NAME
self.session_store = import_module(settings.SESSION_ENGINE).SessionStore
def __call__(self, scope):
return SessionMiddlewareInstance(scope, self)
class SessionMiddlewareInstance:
"""
Inner class that is instantiated once per scope.
"""
def __init__(self, scope, middleware):
self.middleware = middleware
self.scope = dict(scope)
if "session" in self.scope:
# There's already session middleware of some kind above us, pass that through
self.activated = False
else:
# Make sure there are cookies in the scope
if "cookies" not in self.scope:
raise ValueError(
"No cookies in scope - SessionMiddleware needs to run inside of CookieMiddleware."
)
# Parse the headers in the scope into cookies
self.scope["session"] = LazyObject()
self.activated = True
# Instantiate our inner application
self.inner = self.middleware.inner(self.scope)
async def __call__(self, receive, send):
"""
We intercept the send() callable so we can do session saves and
add session cookie overrides to send back.
"""
# Resolve the session now we can do it in a blocking way
session_key = self.scope["cookies"].get(self.middleware.cookie_name)
self.scope["session"]._wrapped = await database_sync_to_async(
self.middleware.session_store
)(session_key)
# Override send
self.real_send = send
return await self.inner(receive, self.send)
async def send(self, message):
"""
Overridden send that also does session saves/cookies.
"""
# Only save session if we're the outermost session middleware
if self.activated:
modified = self.scope["session"].modified
empty = self.scope["session"].is_empty()
# If this is a message type that we want to save on, and there's
# changed data, save it. We also save if it's empty as we might
# not be able to send a cookie-delete along with this message.
if (
message["type"] in self.middleware.save_message_types
and message.get("status", 200) != 500
and (modified or settings.SESSION_SAVE_EVERY_REQUEST)
):
await database_sync_to_async(self.save_session)()
# If this is a message type that can transport cookies back to the
# client, then do so.
if message["type"] in self.middleware.cookie_response_message_types:
if empty:
# Delete cookie if it's set
if settings.SESSION_COOKIE_NAME in self.scope["cookies"]:
CookieMiddleware.delete_cookie(
message,
settings.SESSION_COOKIE_NAME,
path=settings.SESSION_COOKIE_PATH,
domain=settings.SESSION_COOKIE_DOMAIN,
)
else:
# Get the expiry data
if self.scope["session"].get_expire_at_browser_close():
max_age = None
expires = None
else:
max_age = self.scope["session"].get_expiry_age()
expires_time = time.time() + max_age
expires = http_date(expires_time)
# Set the cookie
CookieMiddleware.set_cookie(
message,
self.middleware.cookie_name,
self.scope["session"].session_key,
max_age=max_age,
expires=expires,
domain=settings.SESSION_COOKIE_DOMAIN,
path=settings.SESSION_COOKIE_PATH,
secure=settings.SESSION_COOKIE_SECURE or None,
httponly=settings.SESSION_COOKIE_HTTPONLY or None,
)
# Pass up the send
return await self.real_send(message)
def save_session(self):
"""
Saves the current session.
"""
try:
self.scope["session"].save()
except UpdateError:
raise SuspiciousOperation(
"The request's session was deleted before the "
"request completed. The user may have logged "
"out in a concurrent request, for example."
)
# Shortcut to include cookie middleware
SessionMiddlewareStack = lambda inner: CookieMiddleware(SessionMiddleware(inner))
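The shortcut composes middleware innermost-last; a generic version of that composition (the `stack` helper and the toy middlewares are hypothetical):

```python
from functools import reduce


def stack(*middlewares):
    # Returns a factory equivalent to m1(m2(...(inner))), listed
    # outermost-first, like CookieMiddleware(SessionMiddleware(inner)).
    def wrap(inner):
        return reduce(lambda app, mw: mw(app), reversed(middlewares), inner)
    return wrap


class A:
    def __init__(self, inner):
        self.inner = inner


class B:
    def __init__(self, inner):
        self.inner = inner


wrapped = stack(A, B)("inner-app")
```

Order matters: CookieMiddleware must be outermost so that the cookies it parses are in the scope by the time SessionMiddleware looks for them.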
# ---- channels-2.4.0/channels/signals.py ----
from django.db import close_old_connections
from django.dispatch import Signal
consumer_started = Signal(providing_args=["environ"])
consumer_finished = Signal()
# Connect connection closer to consumer finished as well
consumer_finished.connect(close_old_connections)
# ---- channels-2.4.0/channels/staticfiles.py ----
from urllib.parse import urlparse
from urllib.request import url2pathname
from django.conf import settings
from django.contrib.staticfiles import utils
from django.contrib.staticfiles.views import serve
from django.http import Http404
from .http import AsgiHandler
class StaticFilesWrapper:
"""
ASGI application which wraps another and intercepts requests for static
files, passing them off to Django's static file serving.
"""
def __init__(self, application):
self.application = application
self.base_url = urlparse(self.get_base_url())
def get_base_url(self):
utils.check_settings()
return settings.STATIC_URL
def _should_handle(self, path):
"""
Checks if the path should be handled. Ignores the path if:
* the host is provided as part of the base_url
* the request's path isn't under the media path (or equal)
"""
return path.startswith(self.base_url[2]) and not self.base_url[1]
def __call__(self, scope):
# Only even look at HTTP requests
if scope["type"] == "http" and self._should_handle(scope["path"]):
# Serve static content
return StaticFilesHandler(dict(scope, static_base_url=self.base_url))
# Hand off to the main app
return self.application(scope)
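The prefix check can be tried standalone; a sketch of _should_handle() against a plain STATIC_URL string (the function name is illustrative):

```python
from urllib.parse import urlparse


def should_handle(static_url, path):
    # Serve only when STATIC_URL is host-relative (no netloc) and the
    # request path falls under it - as in _should_handle() above.
    base = urlparse(static_url)
    return bool(path.startswith(base.path) and not base.netloc)
```

A STATIC_URL pointing at a CDN (one with a host component) therefore disables local static serving entirely.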
class StaticFilesHandler(AsgiHandler):
"""
Subclass of AsgiHandler that serves directly from its get_response.
"""
def file_path(self, url):
"""
Returns the relative path to the media file on disk for the given URL.
"""
relative_url = url[len(self.scope["static_base_url"][2]) :]
return url2pathname(relative_url)
def serve(self, request):
"""
Actually serves the request path.
"""
return serve(request, self.file_path(request.path), insecure=True)
def get_response(self, request):
"""
Always tries to serve a static file as you don't even get into this
handler subclass without the wrapper directing you here.
"""
try:
return self.serve(request)
except Http404 as e:
if settings.DEBUG:
from django.views import debug
return debug.technical_404_response(request, e)
# ---- channels-2.4.0/channels/testing/__init__.py ----
from asgiref.testing import ApplicationCommunicator  # noqa
from .http import HttpCommunicator # noqa
from .live import ChannelsLiveServerTestCase # noqa
from .websocket import WebsocketCommunicator # noqa
__all__ = [
"ApplicationCommunicator",
"HttpCommunicator",
"ChannelsLiveServerTestCase",
"WebsocketCommunicator",
]
# ---- channels-2.4.0/channels/testing/http.py ----
from urllib.parse import unquote, urlparse
from asgiref.testing import ApplicationCommunicator
class HttpCommunicator(ApplicationCommunicator):
"""
ApplicationCommunicator subclass that has HTTP shortcut methods.
It will construct the scope for you, so you need to pass the application
(uninstantiated) along with HTTP parameters.
This does not support full chunking - for that, just use ApplicationCommunicator
directly.
"""
def __init__(self, application, method, path, body=b"", headers=None):
parsed = urlparse(path)
self.scope = {
"type": "http",
"http_version": "1.1",
"method": method.upper(),
"path": unquote(parsed.path),
"query_string": parsed.query.encode("utf-8"),
"headers": headers or [],
}
assert isinstance(body, bytes)
self.body = body
self.sent_request = False
super().__init__(application, self.scope)
async def get_response(self, timeout=1):
"""
Get the application's response. Returns a dict with keys of
"body", "headers" and "status".
"""
# If we've not sent the request yet, do so
if not self.sent_request:
self.sent_request = True
await self.send_input({"type": "http.request", "body": self.body})
# Get the response start
response_start = await self.receive_output(timeout)
assert response_start["type"] == "http.response.start"
# Get all body parts
response_start["body"] = b""
while True:
chunk = await self.receive_output(timeout)
assert chunk["type"] == "http.response.body"
assert isinstance(chunk["body"], bytes)
response_start["body"] += chunk["body"]
if not chunk.get("more_body", False):
break
# Return structured info
del response_start["type"]
response_start.setdefault("headers", [])
return response_start
channels-2.4.0/channels/testing/live.py 0000664 0000000 0000000 00000004414 13576505155 0020110 0 ustar 00root root 0000000 0000000 from django.core.exceptions import ImproperlyConfigured
from django.db import connections
from django.test.testcases import TransactionTestCase
from django.test.utils import modify_settings
from channels.routing import get_default_application
from channels.staticfiles import StaticFilesWrapper
from daphne.testing import DaphneProcess
class ChannelsLiveServerTestCase(TransactionTestCase):
"""
Does basically the same as TransactionTestCase but also launches a
live Daphne server in a separate process, so
that the tests may use another test framework, such as Selenium,
instead of the built-in dummy client.
"""
host = "localhost"
ProtocolServerProcess = DaphneProcess
static_wrapper = StaticFilesWrapper
serve_static = True
@property
def live_server_url(self):
return "http://%s:%s" % (self.host, self._port)
@property
def live_server_ws_url(self):
return "ws://%s:%s" % (self.host, self._port)
def _pre_setup(self):
for connection in connections.all():
if self._is_in_memory_db(connection):
raise ImproperlyConfigured(
"ChannelLiveServerTestCase can not be used with in memory databases"
)
super(ChannelsLiveServerTestCase, self)._pre_setup()
self._live_server_modified_settings = modify_settings(
ALLOWED_HOSTS={"append": self.host}
)
self._live_server_modified_settings.enable()
if self.serve_static:
application = self.static_wrapper(get_default_application())
else:
application = get_default_application()
self._server_process = self.ProtocolServerProcess(self.host, application)
self._server_process.start()
self._server_process.ready.wait()
self._port = self._server_process.port.value
def _post_teardown(self):
self._server_process.terminate()
self._server_process.join()
self._live_server_modified_settings.disable()
super(ChannelsLiveServerTestCase, self)._post_teardown()
def _is_in_memory_db(self, connection):
"""
Check if DatabaseWrapper holds in memory database.
"""
        if connection.vendor == "sqlite":
            return connection.is_in_memory_db()
        return False
channels-2.4.0/channels/testing/websocket.py 0000664 0000000 0000000 00000007475 13576505155 0021151 0 ustar 00root root 0000000 0000000 import json
from urllib.parse import unquote, urlparse
from asgiref.testing import ApplicationCommunicator
class WebsocketCommunicator(ApplicationCommunicator):
"""
ApplicationCommunicator subclass that has WebSocket shortcut methods.
It will construct the scope for you, so you need to pass the application
(uninstantiated) along with the initial connection parameters.
"""
def __init__(self, application, path, headers=None, subprotocols=None):
if not isinstance(path, str):
raise TypeError("Expected str, got {}".format(type(path)))
parsed = urlparse(path)
self.scope = {
"type": "websocket",
"path": unquote(parsed.path),
"query_string": parsed.query.encode("utf-8"),
"headers": headers or [],
"subprotocols": subprotocols or [],
}
super().__init__(application, self.scope)
async def connect(self, timeout=1):
"""
Trigger the connection code.
        On an accepted connection, returns (True, <chosen-subprotocol>)
        On a rejected connection, returns (False, <close-code>)
"""
await self.send_input({"type": "websocket.connect"})
response = await self.receive_output(timeout)
if response["type"] == "websocket.close":
return (False, response.get("code", 1000))
else:
return (True, response.get("subprotocol", None))
async def send_to(self, text_data=None, bytes_data=None):
"""
Sends a WebSocket frame to the application.
"""
# Make sure we have exactly one of the arguments
assert bool(text_data) != bool(
bytes_data
), "You must supply exactly one of text_data or bytes_data"
# Send the right kind of event
if text_data:
assert isinstance(text_data, str), "The text_data argument must be a str"
await self.send_input({"type": "websocket.receive", "text": text_data})
else:
assert isinstance(
bytes_data, bytes
), "The bytes_data argument must be bytes"
await self.send_input({"type": "websocket.receive", "bytes": bytes_data})
async def send_json_to(self, data):
"""
Sends JSON data as a text frame
"""
await self.send_to(text_data=json.dumps(data))
async def receive_from(self, timeout=1):
"""
        Receives a data frame from the application. Will fail if the connection
closes instead. Returns either a bytestring or a unicode string
depending on what sort of frame you got.
"""
response = await self.receive_output(timeout)
# Make sure this is a send message
assert response["type"] == "websocket.send"
# Make sure there's exactly one key in the response
assert ("text" in response) != (
"bytes" in response
), "The response needs exactly one of 'text' or 'bytes'"
# Pull out the right key and typecheck it for our users
if "text" in response:
assert isinstance(response["text"], str), "Text frame payload is not str"
return response["text"]
else:
assert isinstance(
response["bytes"], bytes
), "Binary frame payload is not bytes"
return response["bytes"]
async def receive_json_from(self, timeout=1):
"""
Receives a JSON text frame payload and decodes it
"""
payload = await self.receive_from(timeout)
assert isinstance(payload, str), "JSON data is not a text frame"
return json.loads(payload)
async def disconnect(self, code=1000, timeout=1):
"""
Closes the socket
"""
await self.send_input({"type": "websocket.disconnect", "code": code})
await self.wait(timeout)
channels-2.4.0/channels/utils.py 0000664 0000000 0000000 00000004200 13576505155 0016625 0 ustar 00root root 0000000 0000000 import asyncio
import types
def name_that_thing(thing):
"""
Returns either the function/class path or just the object's repr
"""
# Instance method
if hasattr(thing, "im_class"):
# Mocks will recurse im_class forever
if hasattr(thing, "mock_calls"):
return ""
return name_that_thing(thing.im_class) + "." + thing.im_func.func_name
# Other named thing
if hasattr(thing, "__name__"):
if hasattr(thing, "__class__") and not isinstance(
thing, (types.FunctionType, types.MethodType)
):
if thing.__class__ is not type and not issubclass(thing.__class__, type):
return name_that_thing(thing.__class__)
if hasattr(thing, "__self__"):
return "%s.%s" % (thing.__self__.__module__, thing.__self__.__name__)
if hasattr(thing, "__module__"):
return "%s.%s" % (thing.__module__, thing.__name__)
# Generic instance of a class
if hasattr(thing, "__class__"):
return name_that_thing(thing.__class__)
return repr(thing)
async def await_many_dispatch(consumer_callables, dispatch):
"""
Given a set of consumer callables, awaits on them all and passes results
from them to the dispatch awaitable as they come in.
"""
# Start them all off as tasks
loop = asyncio.get_event_loop()
tasks = [
loop.create_task(consumer_callable())
for consumer_callable in consumer_callables
]
try:
while True:
# Wait for any of them to complete
await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
# Find the completed one(s), yield results, and replace them
for i, task in enumerate(tasks):
if task.done():
result = task.result()
await dispatch(result)
tasks[i] = asyncio.ensure_future(consumer_callables[i]())
finally:
# Make sure we clean up tasks on exit
for task in tasks:
task.cancel()
try:
await task
except asyncio.CancelledError:
pass
channels-2.4.0/channels/worker.py 0000664 0000000 0000000 00000003227 13576505155 0017006 0 ustar 00root root 0000000 0000000 import asyncio
from asgiref.server import StatelessServer
class Worker(StatelessServer):
"""
ASGI protocol server that surfaces events sent to specific channels
on the channel layer into a single application instance.
"""
def __init__(self, application, channels, channel_layer, max_applications=1000):
super().__init__(application, max_applications)
self.channels = channels
self.channel_layer = channel_layer
if self.channel_layer is None:
raise ValueError("Channel layer is not valid")
async def handle(self):
"""
Listens on all the provided channels and handles the messages.
"""
# For each channel, launch its own listening coroutine
listeners = []
for channel in self.channels:
listeners.append(asyncio.ensure_future(self.listener(channel)))
# Wait for them all to exit
await asyncio.wait(listeners)
# See if any of the listeners had an error (e.g. channel layer error)
[listener.result() for listener in listeners]
async def listener(self, channel):
"""
Single-channel listener
"""
while True:
message = await self.channel_layer.receive(channel)
if not message.get("type", None):
raise ValueError("Worker received message with no type.")
# Make a scope and get an application instance for it
scope = {"type": "channel", "channel": channel}
instance_queue = self.get_or_create_application_instance(channel, scope)
# Run the message into the app
await instance_queue.put(message)
channels-2.4.0/docs/ 0000775 0000000 0000000 00000000000 13576505155 0014254 5 ustar 00root root 0000000 0000000 channels-2.4.0/docs/Makefile 0000664 0000000 0000000 00000015162 13576505155 0015721 0 ustar 00root root 0000000 0000000 # Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make ' where is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Channels.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Channels.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/Channels"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/Channels"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
channels-2.4.0/docs/asgi.rst 0000664 0000000 0000000 00000004547 13576505155 0015743 0 ustar 00root root 0000000 0000000 ASGI
====
`ASGI <http://asgi.readthedocs.io>`_, or the
Asynchronous Server Gateway Interface, is the specification which
Channels and Daphne are built upon, designed to untie Channels apps from a
specific application server and provide a common way to write application
and middleware code.
It's a spiritual successor to WSGI, designed not only to run in an asynchronous
fashion via ``asyncio``, but also to support multiple protocols.
The full ASGI spec can be found at http://asgi.readthedocs.io
Summary
-------
An ASGI application is a callable that takes a scope and returns a coroutine
callable, which in turn takes receive and send methods. It's usually written as a class::
class Application:
def __init__(self, scope):
...
async def __call__(self, receive, send):
...
The ``scope`` dict defines the properties of a connection, like its remote IP (for
HTTP) or username (for a chat protocol), and the lifetime of a connection.
Applications are *instantiated* once per scope - so, for example, once per
HTTP request, or once per open WebSocket connection.
Scopes always have a ``type`` key, which tells you what kind of connection
it is and what other keys to expect in the scope (and what sort of messages
to expect).
The ``receive`` awaitable provides events as dicts as they occur, and the
``send`` awaitable sends events back to the client in a similar dict format.
A *protocol server* sits between the client and your application code,
decoding the raw protocol into the scope and event dicts and encoding anything
you send back down onto the protocol.
Composability
-------------
ASGI applications, like WSGI ones, are designed to be composable, and this
includes Channels' routing and middleware components like ``ProtocolTypeRouter``
and ``SessionMiddleware``. These are just ASGI applications that take other
ASGI applications as arguments, so you can pass around just one top-level
application for a whole Django project and dispatch down to the right consumer
based on what sort of connection you're handling.
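The dispatch idea can be sketched with a toy router (an illustration only, not Channels' actual ``ProtocolTypeRouter`` implementation, which lives in ``channels.routing``):

```python
class SimpleProtocolRouter:
    """Toy sketch of scope-type dispatch, in the spirit of
    ProtocolTypeRouter: pick an inner ASGI app by scope["type"]."""

    def __init__(self, application_mapping):
        # Maps a scope "type" ("http", "websocket", ...) to an inner
        # ASGI application (itself possibly another router or middleware).
        self.application_mapping = application_mapping

    def __call__(self, scope):
        if scope["type"] in self.application_mapping:
            # Instantiate the inner application with the same scope.
            return self.application_mapping[scope["type"]](scope)
        raise ValueError(
            "No application configured for scope type %r" % scope["type"]
        )
```

Because the router has the same callable-taking-a-scope shape as any other ASGI application, routers and middleware nest arbitrarily deep under one top-level application.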
Protocol Specifications
-----------------------
The basic ASGI spec only outlines the interface for an ASGI app - it does not
specify how network protocols are encoded to and from scopes and event dicts.
That's the job of protocol specifications:
* HTTP and WebSocket: https://github.com/django/asgiref/blob/master/specs/www.rst
channels-2.4.0/docs/channel_layer_spec.rst 0000664 0000000 0000000 00000031214 13576505155 0020625 0 ustar 00root root 0000000 0000000 ===========================
Channel Layer Specification
===========================
.. note::
Channel layers are now internal only to Channels, and not used as part of
ASGI. This spec defines what Channels and applications written using it
expect a channel layer to provide.
Abstract
========
This document outlines a set of standardized definitions for *channels* and
a *channel layer* which provides a mechanism to send and receive messages over
them. They allow inter-process communication, helping build applications
that pass messages and events between different clients.
Overview
========
Messages
--------
Messages must be a ``dict``. Because these messages are sometimes sent
over a network, they need to be serializable, and so they are only allowed
to contain the following types:
* Byte strings
* Unicode strings
* Integers (within the signed 64 bit range)
* Floating point numbers (within the IEEE 754 double precision range)
* Lists (tuples should be encoded as lists)
* Dicts (keys must be unicode strings)
* Booleans
* None
Channels
--------
Channels are identified by a unicode string name consisting only of ASCII
letters, ASCII numerical digits, periods (``.``), dashes (``-``) and
underscores (``_``), plus an optional type character (see below).
Channels are a first-in, first-out queue with at-most-once delivery
semantics. They can have multiple writers and multiple readers; only a single
reader should get each written message. Implementations must never deliver
a message more than once or to more than one reader, and must drop messages if
this is necessary to achieve this restriction.
In order to aid with scaling and network architecture, a distinction
is made between channels that have multiple readers and
*process-specific channels* that are read from a single known process.
*Normal channel* names contain no type characters, and can be routed however
the backend wishes; in particular, they do not have to appear globally
consistent, and backends may shard their contents out to different servers
so that a querying client only sees some portion of the messages. Calling
``receive`` on these channels does not guarantee that you will get the
messages in order or that you will get anything if the channel is non-empty.
*Process-specific channel* names contain an exclamation mark (``!``) that
separates a remote and local part. These channels are received differently;
only the name up to and including the ``!`` character is passed to the
``receive()`` call, and it will receive any message on any channel with that
prefix. This allows a process, such as a HTTP terminator, to listen on a single
process-specific channel, and then distribute incoming requests to the
appropriate client sockets using the local part (the part after the ``!``).
The local parts must be generated and managed by the process that consumes them.
These channels, like single-reader channels, are guaranteed to give any extant
messages in order if received from a single process.
Messages should expire after a set time sitting unread in a channel;
the recommendation is one minute, though the best value depends on the
channel layer and the way it is deployed, and it is recommended that users
are allowed to configure the expiry time.
The maximum message size is 1MB if the message were encoded as JSON;
if more data than this needs to be transmitted it must be chunked into
smaller messages. All channel layers must support messages up
to this size, but channel layer users are encouraged to keep well below it.
.. _asgi_extensions:
Extensions
----------
Extensions are functionality that is
not required for basic application code and nearly all protocol server
code, and so has been made optional in order to enable lightweight
channel layers for applications that don't need the full feature set defined
here.
The extensions defined here are:
* ``groups``: Allows grouping of channels to allow broadcast; see below for more.
* ``flush``: Allows easier testing and development with channel layers.
There is potential to add further extensions; these may be defined by
a separate specification, or a new version of this specification.
If application code requires an extension, it should check for it as soon
as possible, and hard error if it is not provided. Frameworks should
encourage optional use of extensions, while attempting to move any
extension-not-found errors to process startup rather than message handling.
Asynchronous Support
--------------------
All channel layers must provide asynchronous (coroutine) methods for their
primary endpoints. End-users will be able to achieve synchronous versions
using the ``asgiref.sync.async_to_sync`` wrapper.
Groups
------
While the basic channel model is sufficient to handle basic application
needs, many more advanced uses of asynchronous messaging require
notifying many users at once when an event occurs - imagine a live blog,
for example, where every viewer should get a long poll response or
WebSocket packet when a new entry is posted.
Thus, there is an *optional* groups extension which allows easier broadcast
messaging to groups of channels. End-users are free, of course, to use just
channel names and direct sending and build their own persistence/broadcast
system instead.
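As an illustration of the groups idea, broadcast is simply fan-out to every member channel. The layer below is a hypothetical in-memory example, not an implementation this spec mandates:

```python
import asyncio


class TinyGroupLayer:
    """Hypothetical in-memory layer implementing the ``groups`` extension."""

    extensions = ["groups"]

    def __init__(self):
        self._queues = {}
        self._groups = {}

    def _queue(self, channel):
        return self._queues.setdefault(channel, asyncio.Queue())

    async def send(self, channel, message):
        await self._queue(channel).put(message)

    async def receive(self, channel):
        return await self._queue(channel).get()

    async def group_add(self, group, channel):
        # Re-adding an existing member is a no-op, as required.
        self._groups.setdefault(group, set()).add(channel)

    async def group_discard(self, group, channel):
        self._groups.get(group, set()).discard(channel)

    async def group_send(self, group, message):
        # Broadcast: deliver a copy of the message to each member channel.
        for channel in self._groups.get(group, set()):
            await self.send(channel, dict(message))
```

In the live-blog example, each viewer's connection would ``group_add`` its channel to a group on connect, and a single ``group_send`` on a new post reaches them all.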
Capacity
--------
To provide backpressure, each channel in a channel layer may have a capacity,
defined however the layer wishes (it is recommended that it is configurable
by the user using keyword arguments to the channel layer constructor, and
furthermore configurable per channel name or name prefix).
When a channel is at or over capacity, trying to send() to that channel
may raise ChannelFull, which indicates to the sender the channel is over
capacity. How the sender wishes to deal with this will depend on context;
for example, a web application trying to send a response body will likely
wait until it empties out again, while an HTTP interface server trying to
send in a request would drop the request and return a 503 error.
Process-local channels must apply their capacity on the non-local part (that is,
up to and including the ``!`` character), and so capacity is shared among all
of the "virtual" channels inside it.
Sending to a group never raises ChannelFull; instead, it must silently drop
the message if it is over capacity, as per ASGI's at-most-once delivery
policy.
Specification Details
=====================
A *channel layer* must provide an object with these attributes
(all function arguments are positional):
* ``coroutine send(channel, message)``, that takes two arguments: the
channel to send on, as a unicode string, and the message
to send, as a serializable ``dict``.
* ``coroutine receive(channel)``, that takes a single channel name and returns
the next received message on that channel.
* ``coroutine new_channel()``, which returns a new process-specific channel
that can be used to give to a local coroutine or receiver.
* ``MessageTooLarge``, the exception raised when a send operation fails
because the encoded message is over the layer's size limit.
* ``ChannelFull``, the exception raised when a send operation fails
because the destination channel is over capacity.
* ``extensions``, a list of unicode string names indicating which
extensions this layer provides, or an empty list if it supports none.
The possible extensions can be seen in :ref:`asgi_extensions`.
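A hypothetical, in-memory-only sketch of the core coroutines above (a layer supporting no extensions; for simplicity it receives on the full channel name rather than implementing the ``!``-prefix semantics for process-specific channels):

```python
import asyncio
import random
import string


class TinyChannelLayer:
    """Hypothetical minimal layer: the core endpoints, no extensions."""

    extensions = []

    def __init__(self):
        self._queues = {}

    def _queue(self, channel):
        return self._queues.setdefault(channel, asyncio.Queue())

    async def send(self, channel, message):
        # Messages must be (serializable) dicts.
        assert isinstance(message, dict), "Messages must be dicts"
        await self._queue(channel).put(message)

    async def receive(self, channel):
        # Returns the next message on the channel, waiting if necessary.
        return await self._queue(channel).get()

    async def new_channel(self):
        # Process-specific names carry a "!" separating remote/local parts;
        # the local part is generated by this process.
        local = "".join(random.choice(string.ascii_letters) for _ in range(12))
        return "specific.inmemory!" + local
```

A real layer would add capacity checks (raising ``ChannelFull``), message expiry, and size limits (raising ``MessageTooLarge``) on top of this shape.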
A channel layer implementing the ``groups`` extension must also provide:
* ``coroutine group_add(group, channel)``, that takes a ``channel`` and adds
it to the group given by ``group``. Both are unicode strings. If the channel
is already in the group, the function should return normally.
* ``coroutine group_discard(group, channel)``, that removes the ``channel``
from the ``group`` if it is in it, and does nothing otherwise.
* ``coroutine group_send(group, message)``, that takes two positional
arguments; the group to send to, as a unicode string, and the message
to send, as a serializable ``dict``. It may raise MessageTooLarge but cannot
raise ChannelFull.
* ``group_expiry``, an integer number of seconds that specifies how long group
membership is valid for after the most recent ``group_add`` call (see
*Persistence* below)
A channel layer implementing the ``flush`` extension must also provide:
* ``coroutine flush()``, that resets the channel layer to a blank state,
containing no messages and no groups (if the groups extension is
implemented). This call must block until the system is cleared and will
consistently look empty to any client, if the channel layer is distributed.
Channel Semantics
-----------------
Channels **must**:
* Preserve ordering of messages perfectly with only a single reader
and writer if the channel is a *single-reader* or *process-specific* channel.
* Never deliver a message more than once.
* Never block on message send (though they may raise ChannelFull or
MessageTooLarge)
* Be able to handle messages of at least 1MB in size when encoded as
JSON (the implementation may use better encoding or compression, as long
as it meets the equivalent size)
* Have a maximum name length of at least 100 bytes.
They should attempt to preserve ordering in all cases as much as possible,
but perfect global ordering is obviously not possible in the distributed case.
They are not expected to deliver all messages, but a success rate of at least
99.99% is expected under normal circumstances. Implementations may want to
have a "resilience testing" mode where they deliberately drop more messages
than usual so developers can test their code's handling of these scenarios.
Persistence
-----------
Channel layers do not need to persist data long-term; group
memberships only need to live as long as a connection does, and messages
only as long as the message expiry time, which is usually a couple of minutes.
If a channel layer implements the ``groups`` extension, it must persist group
membership until at least the time when the member channel has a message
expire due to non-consumption, after which it may drop membership at any time.
If a channel subsequently has a successful delivery, the channel layer must
then not drop group membership until another message expires on that channel.
Channel layers must also drop group membership after a configurable long timeout
after the most recent ``group_add`` call for that membership, the default being
86,400 seconds (one day). The value of this timeout is exposed as the
``group_expiry`` property on the channel layer.
Approximate Global Ordering
---------------------------
While maintaining true global (across-channels) ordering of messages is
entirely unreasonable to expect of many implementations, they should strive
to prevent busy channels from overpowering quiet channels.
For example, imagine two channels, ``busy``, which spikes to 1000 messages a
second, and ``quiet``, which gets one message a second. There's a single
consumer running ``receive(['busy', 'quiet'])`` which can handle
around 200 messages a second.
In a simplistic for-loop implementation, the channel layer might always check
``busy`` first; it always has messages available, and so the consumer never
even gets to see a message from ``quiet``, even if it was sent with the
first batch of ``busy`` messages.
A simple way to solve this is to randomize the order of the channel list when
looking for messages inside the channel layer; other, better methods are also
available, but whatever is chosen, it should try to avoid a scenario where
a message doesn't get received purely because another channel is busy.
Strings and Unicode
-------------------
In this document, and all sub-specifications, *byte string* refers to
``str`` on Python 2 and ``bytes`` on Python 3. If this type still supports
Unicode codepoints due to the underlying implementation, then any values
should be kept within the 0 - 255 range.
*Unicode string* refers to ``unicode`` on Python 2 and ``str`` on Python 3.
This document will never specify just *string* - all strings are one of the
two exact types.
Some serializers, such as ``json``, cannot differentiate between byte
strings and unicode strings; these should include logic to box one type as
the other (for example, encoding byte strings as base64 unicode strings with
a preceding special character, e.g. U+FFFF).
Channel and group names are always unicode strings, with the additional
limitation that they only use the following characters:
* ASCII letters
* The digits ``0`` through ``9``
* Hyphen ``-``
* Underscore ``_``
* Period ``.``
* Question mark ``?`` (only to delineate single-reader channel names,
and only one per name)
* Exclamation mark ``!`` (only to delineate process-specific channel names,
and only one per name)
Copyright
=========
This document has been placed in the public domain.
channels-2.4.0/docs/community.rst 0000664 0000000 0000000 00000002052 13576505155 0017031 0 ustar 00root root 0000000 0000000 Community Projects
==================
These projects from the community are developed on top of Channels:
* Beatserver_, a periodic task scheduler for Django Channels.
* EventStream_, a library to push data using the Server-Sent Events (SSE) protocol.
* DjangoChannelsRestFramework_, a framework that provides DRF-like consumers for Channels.
* ChannelsMultiplexer_, a JsonConsumer multiplexer for Channels.
* DjangoChannelsIRC_, an interface server and matching generic consumers for IRC.
* Apollo_, a real-time polling application for corporate and academic environments.
If you'd like to add your project, please submit a PR with a link and brief description.
.. _Beatserver: https://github.com/rajasimon/beatserver
.. _EventStream: https://github.com/fanout/django-eventstream
.. _DjangoChannelsRestFramework: https://github.com/hishnash/djangochannelsrestframework
.. _ChannelsMultiplexer: https://github.com/hishnash/channelsmultiplexer
.. _DjangoChannelsIRC: https://github.com/AdvocatesInc/django-channels-irc
.. _Apollo: https://github.com/maliesa96/apollo
# -*- coding: utf-8 -*-
#
# Channels documentation build configuration file, created by
# sphinx-quickstart on Fri Jun 19 11:37:58 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
from channels import __version__ # noqa
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Channels'
copyright = u'2018, Django Software Foundation'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = __version__
# The full version, including alpha/beta/rc tags.
release = __version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Channelsdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
    ('index', 'Channels.tex', u'Channels Documentation',
     u'Andrew Godwin', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'channels', u'Channels Documentation',
     [u'Andrew Godwin'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
    ('index', 'Channels', u'Channels Documentation',
     u'Andrew Godwin', 'Channels', 'One line description of project.',
     'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
Contributing
============
If you're looking to contribute to Channels, then please read on - we encourage
contributions both large and small, from both novice and seasoned developers.
What can I work on?
-------------------
We're looking for help with the following areas:
* Documentation and tutorial writing
* Bugfixing and testing
* Feature polish and occasional new feature design
* Case studies and writeups
You can find what we're looking to work on in the GitHub issues list for each
of the Channels sub-projects:
* `Channels issues <https://github.com/django/channels/issues>`_, for the Django integration and overall project efforts
* `Daphne issues <https://github.com/django/daphne/issues>`_, for the HTTP and Websocket termination
* `asgiref issues <https://github.com/django/asgiref/issues>`_, for the base ASGI library/memory backend
* `channels_redis issues <https://github.com/django/channels_redis/issues>`_, for the Redis channel backend
Issues are categorized by difficulty level:
* ``exp/beginner``: Easy issues suitable for a first-time contributor.
* ``exp/intermediate``: Moderate issues that need skill and a day or two to solve.
* ``exp/advanced``: Difficult issues that require expertise and potentially weeks of work.
They are also classified by type:
* ``documentation``: Documentation issues. Pick these if you want to help us by writing docs.
* ``bug``: A bug in existing code. Usually easier for beginners as there's a defined thing to fix.
* ``enhancement``: A new feature for the code; may be a bit more open-ended.
You should filter the issues list by the experience level and type of work
you'd like to do, and then if you want to take something on leave a comment
and assign yourself to it. If you want advice about how to take on a bug,
leave a comment asking about it, or pop into the IRC channel at
``#django-channels`` on Freenode and we'll be happy to help.
The issues are also just a suggested list - any offer to help is welcome as long
as it fits the project goals, but you should make an issue for the thing you
wish to do and discuss it first if it's relatively large (but if you just found
a small bug and want to fix it, sending us a pull request straight away is fine).
I'm a novice contributor/developer - can I help?
------------------------------------------------
Of course! The issues labelled with ``exp/beginner`` are a perfect place to
get started, as they're usually small and well defined. If you want help with
one of them, pop into the IRC channel at ``#django-channels`` on Freenode or
get in touch with Andrew directly at andrew@aeracode.org.
How do I get started and run the tests?
---------------------------------------
First, clone the git repository to a local directory::
   git clone https://github.com/django/channels.git channels
Next, you may want to make a virtual environment to run the tests and develop
in; you can use either ``virtualenvwrapper``, ``pipenv`` or just plain
``virtualenv`` for this.
Then, ``cd`` into the ``channels`` directory and install it editable into
your environment::
   cd channels/
   pip install -e .[tests]
Note the ``[tests]`` section there; that tells ``pip`` that you want to install
the ``tests`` extra, which will bring in testing dependencies like
``pytest-django``.
Then, you can run the tests::
   pytest
Also, there is a ``tox.ini`` file at the root of the repository. Example
commands::

   $ tox -l
   py36-dj11
   py36-dj21
   py36-dj22
   py37-dj11
   py37-dj21
   py37-dj22

   # run the tests with Python 3.7, on Django 2.2 and then on Django master
   $ tox -e py37-dj22 && tox -e py37-djmaster
Note that tox can also forward arguments to pytest. When using pdb with pytest,
forward the ``-s`` option to pytest as follows::

   tox -e py37-dj22 -- -s
Can you pay me for my time?
---------------------------
Unfortunately, the Mozilla funds we previously had are exhausted, so we can
no longer pay for contributions. Thanks to all who participated!
How do I do a release?
----------------------
If you have commit access, a release involves the following steps:
* Create a new entry in the CHANGELOG.txt file and summarise the changes
* Create a new release page in the docs under ``docs/releases`` and add the
changelog there with more information where necessary
* Add a link to the new release notes in ``docs/releases/index.rst``
* Set the new version in ``__init__.py``
* Roll all of these up into a single commit and tag it with the new version
number. Push the commit and tag, and Travis will automatically build and
release the new version to PyPI as long as all tests pass.
The release process for ``channels-redis`` and ``daphne`` is similar, but
they don't have the two steps in ``docs/``.
Deploying
=========
Channels 2 (ASGI) applications deploy similarly to WSGI applications - you load
them into a server, like Daphne, and you can scale the number of server
processes up and down.
The one optional extra requirement for a Channels project is to provision a
:doc:`channel layer </topics/channel_layers>`. Both steps are covered below.
Configuring the ASGI application
--------------------------------
The one setting that Channels needs to run is ``ASGI_APPLICATION``, which tells
Channels what the *root application* of your project is. As discussed in
:doc:`/topics/routing`, this is almost certainly going to be your top-level
(Protocol Type) router.
It should be a dotted path to the instance of the router; this is generally
going to be in a file like ``myproject/routing.py``::
   ASGI_APPLICATION = "myproject.routing.application"
Setting up a channel backend
----------------------------
.. note::
    This step is optional. If you aren't using the channel layer, skip this
    section.
Typically a channel backend will connect to one or more central servers that
serve as the communication layer - for example, the Redis backend connects
to a Redis server. All this goes into the ``CHANNEL_LAYERS`` setting;
here's an example for a remote Redis server::
    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("redis-server-name", 6379)],
            },
        },
    }
To use the Redis backend you have to install it::
   pip install -U channels_redis
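Once configured, application code talks to the layer returned by
``channels.layers.get_channel_layer()``. A minimal sketch of sending and
receiving - here using the in-memory layer directly so it runs standalone; in
a real project you'd call ``get_channel_layer()`` and go through Redis::

    from asgiref.sync import async_to_sync
    from channels.layers import InMemoryChannelLayer

    layer = InMemoryChannelLayer()

    # Send a message onto a named channel from synchronous code...
    async_to_sync(layer.send)("test-channel", {"type": "hello.world"})

    # ...and receive it again (in a real deployment, usually another process).
    message = async_to_sync(layer.receive)("test-channel")
    assert message == {"type": "hello.world"}
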
Run protocol servers
--------------------
In order to talk to the outside world, your Channels/ASGI application needs
to be loaded into a *protocol server*. These can be like WSGI servers and run
your application in a HTTP mode, but they can also bridge to any number of
other protocols (chat protocols, IoT protocols, even radio networks).
All these servers have their own configuration options, but they all have
one thing in common - they will want you to pass them an ASGI application
to run. Because Django needs to run setup for things like models when it loads
in, you can't just pass in the same variable as you configured in
``ASGI_APPLICATION`` above; you need a bit more code to get Django ready.
In your project directory, you'll already have a file called ``wsgi.py`` that
does this to present Django as a WSGI application. Make a new file alongside it
called ``asgi.py`` and put this in it::
"""
ASGI entrypoint. Configures Django and then runs the application
defined in the ASGI_APPLICATION setting.
"""
import os
import django
from channels.routing import get_default_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django.setup()
application = get_default_application()
If you have any customizations in your ``wsgi.py`` to do additional things
on application start, or different ways of loading settings, you can do those
in here as well.
Now you have this file, all you need to do is pass the ``application`` object
inside it to your protocol server as the application it should run::
   daphne -p 8001 myproject.asgi:application
HTTP and WebSocket
------------------
While ASGI is a general protocol and we can't cover all possible servers here,
it's very likely you will want to deploy a Channels project to work over HTTP
and potentially WebSocket, so we'll cover that in some more detail.
The Channels project maintains an official ASGI HTTP/WebSocket server,
`Daphne <https://github.com/django/daphne>`_, and it's this that we'll talk about
configuring. Other HTTP/WebSocket ASGI servers are possible and will work just
as well provided they follow the spec, but will have different configuration.
You can choose to either use Daphne for all requests - HTTP and WebSocket -
or if you are conservative about stability, keep running standard HTTP requests
through a WSGI server and use Daphne only for things WSGI cannot do, like
HTTP long-polling and WebSockets. If you do split, you'll need to put something
in front of Daphne and your WSGI server to work out what requests to send to
each (using HTTP path or domain) - that's not covered here, just know you can
do it.
If you use Daphne for all traffic, it auto-negotiates between HTTP and WebSocket,
so there's no need to have your WebSockets on a separate domain or path (and
they'll be able to share cookies with your normal view code, which isn't
possible if you separate by domain rather than path).
To run Daphne, it just needs to be supplied with an application, much like
a WSGI server would need to be. Make sure you have an ``asgi.py`` file as
outlined above.
Then, you can run Daphne and supply the application path as the argument::

   daphne myproject.asgi:application
You should run Daphne inside either a process supervisor (systemd, supervisord)
or a container orchestration system (kubernetes, nomad) to ensure that it
gets restarted if needed and to allow you to scale the number of processes.
If you want to bind multiple Daphne instances to the same port on a machine,
use a process supervisor that can listen on ports and pass the file descriptors
to launched processes, and then pass the file descriptor with ``--fd NUM``.
You can also specify the port and IP that Daphne binds to::
   daphne -b 0.0.0.0 -p 8001 myproject.asgi:application
You can see more about Daphne and its options
`on GitHub <https://github.com/django/daphne>`_.
Alternative Web Servers
-----------------------
There are also alternative `ASGI <https://asgi.readthedocs.io>`_ servers
that you can use for serving Channels.
To some degree, ASGI web servers should be interchangeable; they should all
have the same basic functionality in terms of serving HTTP and WebSocket requests.
Aspects where servers may differ are in their configuration and defaults,
performance characteristics, support for resource limiting, differing protocol
and socket support, and approaches to process management.
You can see more alternative servers, such as Uvicorn, in the
`ASGI implementations documentation <https://asgi.readthedocs.io/en/latest/implementations.html>`_.
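For example, Uvicorn (assuming you've installed it) can serve the same
``asgi.py`` entrypoint shown earlier::

   pip install uvicorn
   uvicorn myproject.asgi:application --host 0.0.0.0 --port 8001
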
Example Setups
--------------
These are examples of possible setups - they are not guaranteed to work out of
the box, and should be taken more as a guide than a direct tutorial.
Nginx/Supervisor (Ubuntu)
~~~~~~~~~~~~~~~~~~~~~~~~~
This example sets up a Django site on an Ubuntu server, using Nginx as the
main webserver and supervisord to run and manage Daphne.
First, install Nginx and Supervisor::
   $ sudo apt install nginx supervisor
Now, you will need to create the supervisor configuration file (often located
in ``/etc/supervisor/conf.d/``). Here, we're making Supervisor listen on a
TCP port and then handing that socket off to the child processes so they can
all share the same bound port::
    [fcgi-program:asgi]
    # TCP socket used by Nginx backend upstream
    socket=tcp://localhost:8000

    # Directory where your site's project files are located
    directory=/my/app/path

    # Each process needs to have a separate socket file, so we use process_num
    # Make sure to update "mysite.asgi" to match your project name
    command=daphne -u /run/daphne/daphne%(process_num)d.sock --fd 0 --access-log - --proxy-headers mysite.asgi:application

    # Number of processes to startup, roughly the number of CPUs you have
    numprocs=4

    # Give each process a unique name so they can be told apart
    process_name=asgi%(process_num)d

    # Automatically start and recover processes
    autostart=true
    autorestart=true

    # Choose where you want your log to go
    stdout_logfile=/your/log/asgi.log
    redirect_stderr=true
Have supervisor reread and update its jobs::
   $ sudo supervisorctl reread
   $ sudo supervisorctl update
Next, Nginx has to be told to proxy traffic to the running Daphne instances.
Setup your nginx upstream conf file for your project::
    upstream channels-backend {
        server localhost:8000;
    }
    ...
    server {
        ...
        location / {
            try_files $uri @proxy_to_app;
        }
        ...
        location @proxy_to_app {
            proxy_pass http://channels-backend;

            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
        ...
    }
Reload nginx to apply the changes::
   $ sudo service nginx reload
Django Channels
===============
Channels is a project that takes Django and extends its abilities beyond
HTTP - to handle WebSockets, chat protocols, IoT protocols, and more. It's
built on a Python specification called `ASGI <https://asgi.readthedocs.io>`_.
It does this by taking the core of Django and layering a fully asynchronous
layer underneath, running Django itself in a synchronous mode but handling
connections and sockets asynchronously, and giving you the choice to write
in either style.
To get started understanding Channels, read our :doc:`introduction`,
which will walk through how things work. If you're upgrading from Channels 1,
take a look at :doc:`one-to-two` to get an overview of the changes; things
are substantially different.
If you would like complete code examples to read alongside the documentation
or experiment on, the `channels-examples <https://github.com/andrewgodwin/channels-examples>`_
repository contains well-commented example Channels projects.
.. warning::
    This is documentation for the **2.x series** of Channels. If you are looking
    for documentation for the legacy Channels 1, you can select ``1.x`` from the
    versions selector in the bottom-left corner.
Projects
--------
Channels is composed of several packages:
* `Channels <https://github.com/django/channels>`_, the Django integration layer
* `Daphne <https://github.com/django/daphne>`_, the HTTP and Websocket termination server
* `asgiref <https://github.com/django/asgiref>`_, the base ASGI library
* `channels_redis <https://github.com/django/channels_redis>`_, the Redis channel layer backend (optional)
This documentation covers the system as a whole; individual release notes and
instructions can be found in the individual repositories.
.. _topics:
Topics
------
.. toctree::
   :maxdepth: 2

   introduction
   installation
   tutorial/index
   topics/consumers
   topics/routing
   topics/databases
   topics/channel_layers
   topics/sessions
   topics/authentication
   topics/security
   topics/testing
   topics/worker
   deploying
   one-to-two
Reference
---------
.. toctree::
   :maxdepth: 2

   asgi
   channel_layer_spec
   community
   contributing
   support
   releases/index
Installation
============
Channels is available on PyPI - to install it, just run::
   pip install -U channels
Once that's done, you should add ``channels`` to your
``INSTALLED_APPS`` setting::
    INSTALLED_APPS = (
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.sites',
        ...
        'channels',
    )
Then, make a default routing in ``myproject/routing.py``::
    from channels.routing import ProtocolTypeRouter

    application = ProtocolTypeRouter({
        # Empty for now (http->django views is added by default)
    })
And finally, set your ``ASGI_APPLICATION`` setting to point to that routing
object as your root application::
   ASGI_APPLICATION = "myproject.routing.application"
That's it! Once enabled, ``channels`` will integrate itself into Django and
take control of the ``runserver`` command. See :doc:`introduction` for more.
.. note::
    Please be wary of any other third-party apps that require an overloaded or
    replacement ``runserver`` command. Channels provides a separate
    ``runserver`` command and may conflict with it. An example of such a
    conflict is with `whitenoise.runserver_nostatic <http://whitenoise.evans.io/en/stable/django.html#using-whitenoise-in-development>`_
    from `whitenoise <http://whitenoise.evans.io/>`_. In order to
    solve such issues, try moving ``channels`` to the top of your ``INSTALLED_APPS``
    or remove the offending app altogether.
Installing the latest development version
-----------------------------------------
To install the latest version of Channels, clone the repo, change to the repo
directory, and pip install it into your current virtual environment::
   $ git clone git@github.com:django/channels.git
   $ cd channels
   $ <activate your project's virtual environment>
   (environment) $ pip install -e .  # the dot specifies the current repo
Introduction
============
Welcome to Channels! Channels changes Django to weave asynchronous code
underneath and through Django's synchronous core, allowing Django projects
to handle not only HTTP, but protocols that require long-running connections
too - WebSockets, MQTT, chatbots, amateur radio, and more.
It does this while preserving Django's synchronous and easy-to-use nature,
allowing you to choose how you write your code - synchronous in a style like
Django views, fully asynchronous, or a mixture of both. On top of this, it
provides integrations with Django's auth system, session system, and more,
making it easier than ever to extend your HTTP-only project to other protocols.
It also bundles this event-driven architecture with *channel layers*,
a system that allows you to easily communicate between processes, and separate
your project into different processes.
If you haven't yet installed Channels, you may want to read :doc:`installation`
first to get it installed. This introduction isn't a direct tutorial, but
you should be able to use it to follow along and make changes to an existing
Django project if you like.
Turtles All The Way Down
------------------------
Channels operates on the principle of "turtles all the way down" - we have
a single idea of what a channels "application" is, and even the simplest of
*consumers* (the equivalent of Django views) are an entirely valid
:doc:`/asgi` application you can run by themselves.
.. note::
    ASGI is the name for the asynchronous server specification that Channels
    is built on. Like WSGI, it is designed to let you choose between different
    servers and frameworks rather than being locked into Channels and our server
    Daphne. You can learn more at http://asgi.readthedocs.io
Channels gives you the tools to write these basic *consumers* - individual
pieces that might handle chat messaging, or notifications - and tie them
together with URL routing, protocol detection and other handy things to
make a full application.
We treat HTTP and the existing Django views as parts of a bigger whole.
Traditional Django views are still there with Channels and still useable -
we wrap them up in an ASGI application called ``channels.http.AsgiHandler`` -
but you can now also write custom HTTP long-polling handling, or WebSocket
receivers, and have that code sit alongside your existing code. URL routing,
middleware - they are all just ASGI applications.
Our belief is that you want the ability to use safe, synchronous techniques
like Django views for most code, but have the option to drop down to a more
direct, asynchronous interface for complex tasks.
Scopes and Events
------------------
Channels and ASGI split up incoming connections into two components: a *scope*,
and a series of *events*.
The *scope* is a set of details about a single incoming connection - such as
the path a web request was made from, or the originating IP address of a
WebSocket, or the user messaging a chatbot - and persists throughout the
connection.
For HTTP, the scope just lasts a single request. For WebSocket, it lasts for
the lifetime of the socket (but changes if the socket closes and reconnects).
For other protocols, it varies based on how the protocol's ASGI spec is written;
for example, it's likely that a chatbot protocol would keep one scope open
for the entirety of a user's conversation with the bot, even if the underlying
chat protocol is stateless.
During the lifetime of this *scope*, a series of *events* occur. These
represent user interactions - making a HTTP request, for example, or
sending a WebSocket frame. Your Channels or ASGI applications will be
**instantiated once per scope**, and then be fed the stream of *events*
happening within that scope to decide what to do with.
An example with HTTP:
* The user makes a HTTP request.
* We open up a new ``http`` type scope with details of the request's path,
method, headers, etc.
* We send a ``http.request`` event with the HTTP body content.
* The Channels or ASGI application processes this and generates a
``http.response`` event to send back to the browser and close the connection.
* The HTTP request/response is completed and the scope is destroyed.
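In code terms, the scope and events are plain dicts. A rough sketch of the
exchange above (field values are illustrative; note that per the ASGI spec
the response is actually split into ``http.response.start`` and
``http.response.body`` events)::

    # A rough "http" connection scope for a GET request
    scope = {
        "type": "http",
        "method": "GET",
        "path": "/chat/",
        "headers": [(b"host", b"example.com")],
    }

    # The request body arrives as an event within that scope...
    request_event = {"type": "http.request", "body": b"", "more_body": False}

    # ...and the application answers with response events.
    response_events = [
        {"type": "http.response.start", "status": 200,
         "headers": [(b"content-type", b"text/plain")]},
        {"type": "http.response.body", "body": b"Hello, world!"},
    ]
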
An example with a chatbot:
* The user sends a first message to the chatbot.
* This opens a scope containing the user's username, chosen name, and user ID.
* The application is given a ``chat.received_message`` event with the event text.
It does not have to respond, but could send one, two or more other chat messages
back as ``chat.send_message`` events if it wanted to.
* The user sends more messages to the chatbot and more ``chat.received_message``
events are generated.
* After a timeout or when the application process is restarted the scope is
closed.
Within the lifetime of a scope - be that a chat, a HTTP request, a socket
connection or something else - you will have one application instance handling
all the events from it, and you can persist things onto the application
instance as well. You can choose to write a raw ASGI application if you wish,
but Channels gives you an easy-to-use abstraction over them called *consumers*.
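As an illustration, here is a tiny raw ASGI application in the double-callable
style Channels 2 uses - instantiated once per scope, then fed the stream of
events (a sketch, handling only a couple of WebSocket event types)::

    class EchoApplication:
        """A raw ASGI application: one instance per scope."""

        def __init__(self, scope):
            self.scope = scope  # persists for the connection's lifetime

        async def __call__(self, receive, send):
            # Fed the stream of events happening within the scope.
            while True:
                event = await receive()
                if event["type"] == "websocket.connect":
                    await send({"type": "websocket.accept"})
                elif event["type"] == "websocket.receive":
                    await send({"type": "websocket.send",
                                "text": event.get("text", "")})
                elif event["type"] == "websocket.disconnect":
                    break
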
What is a Consumer?
-------------------
A consumer is the basic unit of Channels code. We call it a *consumer* as it
*consumes events*, but you can think of it as its own tiny little application.
When a request or new socket comes in, Channels will follow its routing table -
we'll look at that in a bit - find the right consumer for that incoming
connection, and start up a copy of it.
This means that, unlike Django views, consumers are long-running. They can
also be short-running - after all, HTTP requests can also be served by consumers -
but they're built around the idea of living for a little while (they live for
the duration of a *scope*, as we described above).
A basic consumer looks like this::
    class ChatConsumer(WebsocketConsumer):

        def connect(self):
            self.username = "Anonymous"
            self.accept()
            self.send(text_data="[Welcome %s!]" % self.username)

        def receive(self, *, text_data):
            if text_data.startswith("/name"):
                self.username = text_data[5:].strip()
                self.send(text_data="[set your username to %s]" % self.username)
            else:
                self.send(text_data=self.username + ": " + text_data)

        def disconnect(self, close_code):
            pass
Each different protocol has different kinds of events that happen, and
each type is represented by a different method. You write code that handles
each event, and Channels will take care of scheduling them and running them
all in parallel.
Underneath, Channels is running on a fully asynchronous event loop, and
if you write code like above, it will get called in a synchronous thread.
This means you can safely do blocking operations, like calling the Django ORM::

    class LogConsumer(WebsocketConsumer):

        def connect(self):
            Log.objects.create(
                type="connected",
                client=self.scope["client"],
            )

However, if you want more control and you're willing to work only in
asynchronous functions, you can write fully asynchronous consumers::

    class PingConsumer(AsyncConsumer):

        async def websocket_connect(self, message):
            await self.send({
                "type": "websocket.accept",
            })

        async def websocket_receive(self, message):
            await asyncio.sleep(1)
            await self.send({
                "type": "websocket.send",
                "text": "pong",
            })

You can read more about consumers in :doc:`/topics/consumers`.
Routing and Multiple Protocols
------------------------------
You can combine multiple Consumers (which are, remember, their own ASGI apps)
into one bigger app that represents your project using routing::

    application = URLRouter([
        url(r"^chat/admin/$", AdminChatConsumer),
        url(r"^chat/$", PublicChatConsumer),
    ])

Channels is not just built around the world of HTTP and WebSockets - it also
allows you to build any protocol into a Django environment, by building a
server that maps those protocols into a similar set of events. For example,
you can build a chatbot in a similar style::

    class ChattyBotConsumer(SyncConsumer):

        def telegram_message(self, message):
            """
            Simple echo handler for telegram messages in any chat.
            """
            self.send({
                "type": "telegram.message",
                "text": "You said: %s" % message["text"],
            })

And then use another router to have the one project able to serve both
WebSockets and chat requests::

    application = ProtocolTypeRouter({
        "websocket": URLRouter([
            url(r"^chat/admin/$", AdminChatConsumer),
            url(r"^chat/$", PublicChatConsumer),
        ]),
        "telegram": ChattyBotConsumer,
    })

The goal of Channels is to let you build out your Django projects to work
across any protocol or transport you might encounter in the modern web, while
letting you work with the familiar components and coding style you're used to.
For more information about protocol routing, see :doc:`/topics/routing`.
Cross-Process Communication
---------------------------
Much like a standard WSGI server, your application code that is handling
protocol events runs inside the server process itself - for example, WebSocket
handling code runs inside your WebSocket server process.
Each socket or connection to your overall application is handled by an
*application instance* inside one of these servers. They get called and can
send data back to the client directly.
However, as you build more complex application systems you start needing to
communicate between different *application instances* - for example, if you
are building a chatroom, when one *application instance* receives an incoming
message, it needs to distribute it out to any other instances that represent
people in the chatroom.
You can do this by polling a database, but Channels introduces the idea of
a *channel layer*, a low-level abstraction around a set of transports that
allow you to send information between different processes. Each application
instance has a unique *channel name*, and can join *groups*, allowing both
point-to-point and broadcast messaging.

.. note::

    Channel layers are an optional part of Channels, and can be disabled if you
    want (by setting the ``CHANNEL_LAYERS`` setting to an empty value).

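
To make the idea concrete, here is a toy, in-memory stand-in for a channel
layer - the ``InMemoryChannelLayer`` class, channel names, and event shape
below are illustrative only, not the Channels API (a real deployment would use
a cross-process backend such as ``channels_redis``):

```python
import asyncio
from collections import defaultdict

class InMemoryChannelLayer:
    """Toy channel layer: point-to-point channels plus broadcast groups."""

    def __init__(self):
        self.queues = defaultdict(asyncio.Queue)  # channel name -> message queue
        self.groups = defaultdict(set)            # group name -> channel names

    async def send(self, channel, message):
        # Point-to-point: deliver to a single application instance.
        await self.queues[channel].put(message)

    async def receive(self, channel):
        return await self.queues[channel].get()

    def group_add(self, group, channel):
        self.groups[group].add(channel)

    async def group_send(self, group, message):
        # Broadcast: deliver to every channel that has joined the group.
        for channel in self.groups[group]:
            await self.queues[channel].put(message)

async def demo():
    layer = InMemoryChannelLayer()
    # Two application instances, each with its own unique channel name,
    # join the same group (e.g. two users in one chatroom).
    layer.group_add("chatroom", "specific.channel.one")
    layer.group_add("chatroom", "specific.channel.two")
    # One instance broadcasts an incoming chat message to the whole room.
    await layer.group_send("chatroom", {"type": "chat.message", "text": "hi"})
    return [await layer.receive(name)
            for name in ("specific.channel.one", "specific.channel.two")]

received = asyncio.run(demo())  # both instances receive the same event
```

Real channel layers expose the same shape of API (``send``, ``group_add``,
``group_send``) as awaitables, but deliver messages between processes rather
than within one.
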
You can also send messages to a dedicated process that's listening on its own,
fixed channel name::

    # Inside an async consumer; in synchronous code, wrap the call
    # with asgiref's async_to_sync first.
    await self.channel_layer.send(
        "myproject.thumbnail_notifications",
        {
            "type": "thumbnail.generate",
            "id": 90902949,
        },
    )

You can read more about channel layers in :doc:`/topics/channel_layers`.
Django Integration
------------------
Channels ships with easy drop-in support for common Django features, like
sessions and authentication. You can combine authentication with your
WebSocket views by just adding the right middleware around them::

    application = ProtocolTypeRouter({
        "websocket": AuthMiddlewareStack(
            URLRouter([
                url(r"^front(end)/$", consumers.AsyncChatConsumer),
            ])
        ),
    })

For more, see :doc:`/topics/sessions` and :doc:`/topics/authentication`.
channels-2.4.0/docs/make.bat

@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. xml to make Docutils-native XML files
echo. pseudoxml to make pseudoxml-XML files for display purposes
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
%SPHINXBUILD% 2> nul
if errorlevel 9009 (
echo.
echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
echo.installed, then set the SPHINXBUILD environment variable to point
echo.to the full path of the 'sphinx-build' executable. Alternatively you
echo.may add the Sphinx directory to PATH.
echo.
echo.If you don't have Sphinx installed, grab it from
echo.http://sphinx-doc.org/
exit /b 1
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Channels.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Channels.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdf" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "latexpdfja" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
cd %BUILDDIR%/latex
make all-pdf-ja
cd %BUILDDIR%/..
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
if "%1" == "xml" (
%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The XML files are in %BUILDDIR%/xml.
goto end
)
if "%1" == "pseudoxml" (
%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
goto end
)
:end
channels-2.4.0/docs/one-to-two.rst

What's new in Channels 2?
=========================
Channels 1 and Channels 2 are substantially different codebases, and the upgrade
**is a major one**. While we have attempted to keep things as familiar and
backwards-compatible as possible, major architectural shifts mean you will
need at least some code changes to upgrade.
Requirements
------------
First of all, Channels 2 is *Python 3.5 and up only*.
If you are using Python 2, or a previous version of Python 3, you cannot use
Channels 2 as it relies on the ``asyncio`` library and native Python async
support. This decision was a tough one, but ultimately Channels is a library
built around async functionality and so to not use these features would be
foolish in the long run.
Apart from that, there are no major changed requirements, and in fact Channels 2
deploys do not need separate worker and server processes and so should be easier
to manage.
Conceptual Changes
------------------
The fundamental layout and concepts of how Channels work have been significantly
changed; you'll need to understand how and why to help in upgrading.
Channel Layers and Processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Channels 1 terminated HTTP and WebSocket connections in a separate process
to the one that ran Django code, and shuffled requests and events between them
over a cross-process *channel layer*, based on Redis or similar.
This not only meant that all request data had to be re-serialized over the
network, but that you needed to deploy and scale two separate sets of servers.
Channels 2 changes this by running the Django code in-process via a threadpool,
meaning that the network termination and application logic are combined, like
WSGI.
Application Instances
~~~~~~~~~~~~~~~~~~~~~
Because of this, all processing for a socket happens in the same process,
so ASGI applications are now instantiated once per socket and can use
local variables on ``self`` to store information, rather than the
``channel_session`` storage provided before (that is now gone entirely).
The channel layer is now only used to communicate between processes for things
like broadcast messaging - in particular, you can talk to other application
instances in direct events, rather than having to send directly to client sockets.
This means, for example, to broadcast a chat message, you would now send a
new-chat-message event to every application instance that needed it, and the application
code can handle that event, serialize the message down to the socket format,
and send it out (and apply things like multiplexing).
New Consumers
~~~~~~~~~~~~~
Because of these changes, the way consumers work has also significantly changed.
Channels 2 is now a turtles-all-the-way-down design; every aspect of the system
is designed as a valid ASGI application, including consumers and the routing
system.
The consumer base classes have changed, though if you were using the generic
consumers before, the way they work is broadly similar. However, the way that
user authentication, sessions, multiplexing, and similar features work has
changed.
Full Async
~~~~~~~~~~
Channels 2 is also built on a fundamental async foundation, and all servers
are actually running an asynchronous event loop and only jumping to synchronous
code when you interact with the Django view system or ORM. That means that
you, too, can write fully asynchronous code if you wish.
It's not a requirement, but it's there if you need it. We also provide
convenience methods that let you jump between synchronous and asynchronous
worlds easily, with correct blocking semantics, so you can write most of
a consumer in an async style and then have one method that calls the Django ORM
run synchronously.
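
The blocking-semantics idea can be sketched with plain ``asyncio``: run the
synchronous piece in a worker thread and ``await`` its result, which is
conceptually what Channels' sync/async helpers do. This sketch is illustrative,
not the ``asgiref`` implementation, and ``blocking_orm_call`` is a made-up
stand-in:

```python
import asyncio
import time

def blocking_orm_call():
    """Stand-in for a synchronous, blocking call such as a Django ORM query."""
    time.sleep(0.1)  # simulate blocking I/O
    return "query result"

async def consumer_method():
    loop = asyncio.get_running_loop()
    # Hand the blocking work to a thread pool so the event loop stays free
    # to service other sockets, then await the result from the async side.
    return await loop.run_in_executor(None, blocking_orm_call)

result = asyncio.run(consumer_method())
```
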
Removed Components
------------------
The binding framework has been removed entirely - it was a simplistic
implementation, and it being in the core package prevented people from exploring
their own solutions. It's likely similar concepts and APIs will appear in a
third-party (non-official-Django) package as an option for those who want them.
How to Upgrade
--------------
While this is not an exhaustive guide, here are some rough rules on how to
proceed with an upgrade.
Given the major changes to the architecture and layout of Channels 2, it is
likely that upgrading will be a significant rewrite of your code, depending on
what you are doing.
It is **not** a drop-in replacement; we would have done this if we could,
but changing to ``asyncio`` and Python 3 made it almost impossible to keep
things backwards-compatible, and we wanted to correct some major design
decisions.
Function-based consumers and Routing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Channels 1 allowed you to route by event type (e.g. ``websocket.connect``) and
pass individual functions with routing that looked like this::

    channel_routing = [
        route("websocket.connect", connect_blog, path=r'^/liveblog/(?P<slug>[^/]+)/stream/$'),
    ]

And function-based consumers that looked like this::

    def connect_blog(message, slug):
        ...

You'll need to convert these to be class-based consumers, as routing is now
done once, at connection time, and so all the event handlers have to be together
in a single ASGI application. In addition, URL arguments are no longer passed
down into the individual functions - instead, they will be provided in ``scope``
as the key ``url_route``, a dict with an ``args`` key containing a list of
positional regex groups and a ``kwargs`` key with a dict of the named groups.
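
For instance, the two kinds of regex groups can be seen with plain ``re`` - a
consumer would then read them from ``self.scope["url_route"]`` (the patterns
and the ``scope`` dict below are illustrative):

```python
import re

# Positional groups end up under "args" ...
positional = re.match(r"^/chat/(\w+)/$", "/chat/lobby/")
args = positional.groups()            # -> ('lobby',)

# ... and named groups end up under "kwargs".
named = re.match(r"^/liveblog/(?P<slug>[^/]+)/$", "/liveblog/django-news/")
kwargs = named.groupdict()            # -> {'slug': 'django-news'}

# Roughly the shape a consumer sees:
scope = {"url_route": {"args": args, "kwargs": kwargs}}
```
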
Routing is also now the main entry point, so you will need to change routing
to have a ProtocolTypeRouter with URLRouters nested inside it. See
:doc:`/topics/routing` for more.
channel_session and enforce_ordering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Any use of the ``channel_session`` or ``enforce_ordering`` decorators can be
removed; ordering is now always followed as protocols are handled in the same
process, and ``channel_session`` is not needed as the same application instance
now handles all traffic from a single client.
Anywhere you stored information in the ``channel_session`` can be replaced by
storing it on ``self`` inside a consumer.
HTTP sessions and Django auth
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All :doc:`authentication </topics/authentication>` and
:doc:`sessions </topics/sessions>` are now done with middleware. You can remove
any decorators that handled them, like ``http_session``, ``channel_session_user``
and so on (in fact, there are no decorators in Channels 2 - it's all middleware).
To get auth now, wrap your URLRouter in an ``AuthMiddlewareStack``::

    from channels.routing import ProtocolTypeRouter, URLRouter
    from channels.auth import AuthMiddlewareStack

    application = ProtocolTypeRouter({
        "websocket": AuthMiddlewareStack(
            URLRouter([
                ...
            ])
        ),
    })

You need to replace accesses to ``message.http_session`` with
``self.scope["session"]``, and ``message.user`` with ``self.scope["user"]``.
There is no need to do a handoff like ``channel_session_user_from_http`` any
more - just wrap the auth middleware around and the user will be in the scope
for the lifetime of the connection.
Channel Layers
~~~~~~~~~~~~~~
Channel layers are now an optional part of Channels, and the interface they
need to provide has changed to be async. Only ``channels_redis``, formerly known as
``asgi_redis``, has been updated to match so far.
Settings are still similar to before, but there is no longer a ``ROUTING``
key (the base routing is instead defined with ``ASGI_APPLICATION``)::

    CHANNEL_LAYERS = {
        "default": {
            "BACKEND": "channels_redis.core.RedisChannelLayer",
            "CONFIG": {
                "hosts": [("redis-server-name", 6379)],
            },
        },
    }

All consumers have a ``self.channel_layer`` and ``self.channel_name`` object
that is populated if you've configured a channel layer. Any messages you send
to the ``channel_name`` will now go to the consumer rather than directly to the
client - see the :doc:`/topics/channel_layers` documentation for more.
The method names are largely the same, but they're all now awaitables rather
than synchronous functions, and ``send_group`` is now ``group_send``.
Group objects
~~~~~~~~~~~~~
Group objects no longer exist; instead you should use the ``group_add``,
``group_discard``, and ``group_send`` methods on the ``self.channel_layer``
object inside of a consumer directly. As an example::

    class ChatConsumer(AsyncWebsocketConsumer):

        async def connect(self):
            await self.channel_layer.group_add("chat", self.channel_name)

        async def disconnect(self, close_code):
            await self.channel_layer.group_discard("chat", self.channel_name)

Delay server
~~~~~~~~~~~~
If you used the delay server before to put things on hold for a few seconds,
you can now instead use an ``AsyncConsumer`` and ``asyncio.sleep``::

    class PingConsumer(AsyncConsumer):

        async def websocket_receive(self, message):
            await asyncio.sleep(1)
            await self.send({
                "type": "websocket.send",
                "text": "pong",
            })

Testing
~~~~~~~
The :doc:`testing framework </topics/testing>` has been entirely rewritten to
be async-based.

While this does make writing tests a lot easier and cleaner,
it means you must rewrite any consumer tests entirely - there is no
backwards-compatible interface with the old testing client, as it was
synchronous. You can read more about the new testing framework in the
:doc:`testing documentation </topics/testing>`.
Also of note is that the live test case class has been renamed from
``ChannelLiveServerTestCase`` to ``ChannelsLiveServerTestCase`` - note the extra
``s``.
Exception Handling
~~~~~~~~~~~~~~~~~~
Because the code that's handling a socket is now in the same process as the
socket itself, Channels 2 implements cleaner exception handling than before -
if your application raises an unhandled error, it will close the connection
(HTTP or WebSocket in the case of Daphne) and log the error to console.
Additionally, sending malformed messages down to the client is now caught
and raises exceptions where you're sending, rather than silently failing and
logging to the server console.
channels-2.4.0/docs/releases/1.0.0.rst

1.0.0 Release Notes
===================
Channels 1.0.0 brings together a number of design changes, including some
breaking changes, into our first fully stable release, and also brings the
databinding code out of alpha phase. It was released on 2017/01/08.
The result is a faster, easier to use, and safer Channels, including one major
change that will fix almost all problems with sessions and connect/receive
ordering in a way that needs no persistent storage.
It was unfortunately not possible to make all of the changes backwards
compatible, though most code should not be too affected and the fixes are
generally quite easy.
You **must also update Daphne** to at least 1.0.0 to have this release of
Channels work correctly.
Major Features
--------------
Channels 1.0 introduces a couple of new major features.
WebSocket accept/reject flow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rather than be immediately accepted, WebSockets now pause during the handshake
while they send over a message on ``websocket.connect``, and your application
must either accept or reject the connection before the handshake is completed
and messages can be received.
You **must** update Daphne to at least 1.0.0 to make this work correctly.
This has several advantages:
* You can now reject WebSockets before they even finish connecting, giving
  appropriate error codes to browsers and not letting the browser-side socket
  ever get into a connected state and send messages.
* Combined with Consumer Atomicity (below), it means there is no longer any need
  for the old "slight ordering" mode, as the connect consumer must run to
  completion and accept the socket before any messages can be received and
  forwarded onto ``websocket.receive``.
* Any ``send`` message sent to the WebSocket will implicitly accept the connection,
  meaning only a limited set of ``connect`` consumers need changes (see
  Backwards Incompatible Changes below).
Consumer Atomicity
~~~~~~~~~~~~~~~~~~
Consumers will now buffer messages you try to send until the consumer completes
and then send them once it exits and the outbound part of any decorators have
been run (even if an exception is raised).
This makes the flow of messages much easier to reason about - consumers can now
be reasoned about as atomic blocks that run and then send messages, meaning that
if you send a message to start another consumer you're guaranteed that the
sending consumer has finished running by the time it's acted upon.
If you want to send messages immediately rather than at the end of the consumer,
you can still do that by passing the ``immediately`` argument::

    Channel("thumbnailing-tasks").send({"id": 34245}, immediately=True)

This should be mostly backwards compatible, and may actually fix race
conditions in some apps that were pre-existing.
Databinding Group/Action Overhaul
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, databinding subclasses had to implement
``group_names(instance, action)`` to return what groups to send an instance's
change to of the type ``action``. This had flaws, most notably when what was
actually just a modification to the instance in question changed its
permission status so more clients could see it; to those clients, it should
instead have been "created".
Now, Channels just calls ``group_names(instance)``, and you should return what
groups can see the instance at the current point in time given the instance
you were passed. Channels will actually call the method before and after changes,
comparing the groups you gave, and sending out create, update or delete messages
to clients appropriately.
Existing databinding code will need to be adapted; see the
"Backwards Incompatible Changes" section for more.
Demultiplexer Overhaul
~~~~~~~~~~~~~~~~~~~~~~
Demultiplexers have changed to remove the behaviour where they re-sent messages
onto new channels without special headers, and instead now correctly split out
incoming messages into sub-messages that still look like ``websocket.receive``
messages, and directly dispatch these to the relevant consumer.
They also now forward all ``websocket.connect`` and ``websocket.disconnect``
messages to all of their sub-consumers, so it's much easier to compose things
together from code that also works outside the context of multiplexing.
For more, read the updated :doc:`/generic` docs.
Delay Server
~~~~~~~~~~~~
A built-in delay server, launched with ``manage.py rundelay``, now ships if you
wish to use it. It needs some extra initial setup and uses a database for
persistence; see :doc:`/delay` for more information.
Minor Changes
-------------
* Serializers can now specify fields as ``__all__`` to auto-include all fields,
  and ``exclude`` to remove certain unwanted fields.
* ``runserver`` respects ``FORCE_SCRIPT_NAME``.
* Websockets can now be closed with a specific code by calling ``close(status=4000)``.
* ``enforce_ordering`` no longer has a ``slight`` mode (because of the accept
  flow changes), and is more efficient with session saving.
* ``runserver`` respects ``--nothreading`` and only launches one worker, and takes
  a ``--http-timeout`` option if you want to override it from the default ``60``.
* A new ``@channel_and_http_session`` decorator rehydrates the HTTP session out
  of the channel session if you want to access it inside receive consumers.
* Streaming responses no longer have a chance of being cached.
* ``request.META['SERVER_PORT']`` is now always a string.
* ``http.disconnect`` now has a ``path`` key so you can route it.
* Test client now has a ``send_and_consume`` method.
Backwards Incompatible Changes
------------------------------
Connect Consumers
~~~~~~~~~~~~~~~~~
If you have a custom consumer for ``websocket.connect``, you must ensure that
it either:
* Sends at least one message onto the ``reply_channel`` that generates a
  WebSocket frame (either ``bytes`` or ``text`` is set), either directly
  or via a group.
* Sends a message onto the ``reply_channel`` that is ``{"accept": True}``,
  to accept a connection without sending data.
* Sends a message onto the ``reply_channel`` that is ``{"close": True}``,
  to reject a connection mid-handshake.
Many consumers already do the former, but if your connect consumer does not
send anything you MUST now send an accept message or the socket will remain
in the handshaking phase forever and you'll never get any messages.
All built-in Channels consumers (e.g. in the generic consumers) have been
upgraded to do this.
You **must** update Daphne to at least 1.0.0 to make this work correctly.
Databinding group_names
~~~~~~~~~~~~~~~~~~~~~~~
If you have databinding subclasses, you will have implemented
``group_names(instance, action)``, which returns the groups to use based on the
instance and action provided.
Now, instead, you must implement ``group_names(instance)``, which returns the
groups that can see the instance as it is presented for you; the action
results will be worked out for you. For example, if you want to only show
objects marked as "admin_only" to admins, and objects without it to everyone,
previously you would have done::

    def group_names(self, instance, action):
        if instance.admin_only:
            return ["admins"]
        else:
            return ["admins", "non-admins"]

Because you did nothing based on the ``action`` (and if you did, you would
have got incomplete messages, hence this design change), you can just change
the signature of the method like this::

    def group_names(self, instance):
        if instance.admin_only:
            return ["admins"]
        else:
            return ["admins", "non-admins"]

Now, when an object is updated to have ``admin_only = True``, the clients
in the ``non-admins`` group will get a ``delete`` message, while those in
the ``admins`` group will get an ``update`` message.
Demultiplexers
~~~~~~~~~~~~~~
Demultiplexers have changed from using a ``mapping`` dict, which mapped stream
names to channels, to using a ``consumers`` dict which maps stream names
directly to consumer classes.
You will have to convert over to using direct references to consumers, change
the name of the dict, and then you can remove any channel routing for the old
channels that were in ``mapping`` from your routes.
Additionally, the Demultiplexer now forwards messages as they would look from
a direct connection, meaning that where you previously got a decoded object
through you will now get a correctly-formatted ``websocket.receive`` message
through with the content as a ``text`` key, JSON-encoded. You will also
now have to handle ``websocket.connect`` and ``websocket.disconnect`` messages.
Both of these issues can be solved using the ``JsonWebsocketConsumer`` generic
consumer, which will decode for you and correctly separate connection and
disconnection handling into their own methods.
channels-2.4.0/docs/releases/1.0.1.rst 0000664 0000000 0000000 00000000477 13576505155 0017256 0 ustar 00root root 0000000 0000000 1.0.1 Release Notes
===================
Channels 1.0.1 is a minor bugfix release, released on 2017/01/09.
Changes
-------
* WebSocket generic views now accept connections by default in their connect
handler for better backwards compatibility.
Backwards Incompatible Changes
------------------------------
None.
channels-2.4.0/docs/releases/1.0.2.rst 0000664 0000000 0000000 00000001673 13576505155 0017256 0 ustar 00root root 0000000 0000000 1.0.2 Release Notes
===================
Channels 1.0.2 is a minor bugfix release, released on 2017/01/12.
Changes
-------
* Websockets can now be closed from anywhere using the new ``WebsocketCloseException``,
  available as ``channels.exceptions.WebsocketCloseException(code=None)``. There is
  also a generic ``ChannelSocketException`` you can base exceptions on; if it is
  caught, it gets handed the current ``message`` in a ``run`` method, so you can
  implement custom behaviours.
* Calling ``Channel.send`` or ``Group.send`` from outside a consumer context
  (i.e. in tests or management commands) will once again send the message immediately,
  rather than putting it into the consumer message buffer to be flushed when the
  consumer ends (which never happens).
* The base implementation of databinding now correctly only calls ``group_names(instance)``,
  as documented.
Backwards Incompatible Changes
------------------------------
None.
channels-2.4.0/docs/releases/1.0.3.rst 0000664 0000000 0000000 00000001336 13576505155 0017253 0 ustar 00root root 0000000 0000000 1.0.3 Release Notes
===================
Channels 1.0.3 is a minor bugfix release, released on 2017/02/01.
Changes
-------
* Database connections are no longer force-closed after each test is run.
* Channel sessions are not re-saved if they're empty even if they're marked as
  modified, allowing logout to work correctly.
* WebsocketDemultiplexer now correctly does sessions for the second/third/etc.
  connect and disconnect handlers.
* Request reading timeouts now correctly return 408 rather than erroring out.
* The ``rundelay`` delay server now only polls the database once per second,
  and this interval is configurable with the ``--sleep`` option.
Backwards Incompatible Changes
------------------------------
None.
channels-2.4.0/docs/releases/1.1.0.rst 0000664 0000000 0000000 00000002152 13576505155 0017246 0 ustar 00root root 0000000 0000000 1.1.0 Release Notes
===================
Channels 1.1.0 introduces a couple of major but backwards-compatible changes,
including most notably the inclusion of a standard, framework-agnostic JavaScript
library for easier integration with your site.
Major Changes
-------------
* Channels now includes a JavaScript wrapper that wraps reconnection and
multiplexing for you on the client side. For more on how to use it, see the
javascript documentation.
* Test classes have been moved from ``channels.tests`` to ``channels.test``
to better match Django. Old imports from ``channels.tests`` will continue to
work but will trigger a deprecation warning, and ``channels.tests`` will be
removed completely in version 1.3.
Minor Changes & Bugfixes
------------------------
* Bindings now support non-integer fields for primary keys on models.
* The ``enforce_ordering`` decorator no longer suffers a race condition where
it would drop messages under high load.
* ``runserver`` no longer errors if the ``staticfiles`` app is not enabled in Django.
Backwards Incompatible Changes
------------------------------
None.
1.1.1 Release Notes
===================
Channels 1.1.1 is a bugfix release that fixes a packaging issue with the JavaScript files.
Major Changes
-------------
None.
Minor Changes & Bugfixes
------------------------
* The JavaScript binding introduced in 1.1.0 is now correctly packaged and
included in builds.
Backwards Incompatible Changes
------------------------------
None.
1.1.2 Release Notes
===================
Channels 1.1.2 is a bugfix release for the 1.1 series, released on
April 1st, 2017.
Major Changes
-------------
None.
Minor Changes & Bugfixes
------------------------
* Session name hash changed to SHA-1 to satisfy FIPS-140-2.
* `scheme` key in ASGI-HTTP messages now translates into `request.is_secure()`
correctly.
* WebsocketBridge now exposes the underlying WebSocket as `.socket`.
Backwards Incompatible Changes
------------------------------
* When you upgrade all current channel sessions will be invalidated; you
should make sure you disconnect all WebSockets during upgrade.
1.1.3 Release Notes
===================
Channels 1.1.3 is a bugfix release for the 1.1 series, released on
April 5th, 2017.
Major Changes
-------------
None.
Minor Changes & Bugfixes
------------------------
* ``enforce_ordering`` now works correctly with the new-style process-specific
channels
* ASGI channel layer versions are now explicitly checked for version compatibility
Backwards Incompatible Changes
------------------------------
None.
1.1.4 Release Notes
===================
Channels 1.1.4 is a bugfix release for the 1.1 series, released on
June 15th, 2017.
Major Changes
-------------
None.
Minor Changes & Bugfixes
------------------------
* Pending messages correctly handle retries in backlog situations
* Workers in threading mode now respond to ctrl-C and gracefully exit.
* ``request.meta['QUERY_STRING']`` is now correctly encoded at all times.
* Test client improvements
* ``ChannelServerLiveTestCase`` added, allows an equivalent of the Django
``LiveTestCase``.
* Decorator added to check ``Origin`` headers (``allowed_hosts_only``)
* New ``TEST_CONFIG`` setting in ``CHANNEL_LAYERS`` that allows varying of
the channel layer for tests (e.g. using a different Redis install)
Backwards Incompatible Changes
------------------------------
None.
1.1.5 Release Notes
===================
Channels 1.1.5 is a packaging release for the 1.1 series, released on
June 16th, 2017.
Major Changes
-------------
None.
Minor Changes & Bugfixes
------------------------
* The Daphne dependency requirement was bumped to 1.3.0.
Backwards Incompatible Changes
------------------------------
None.
1.1.6 Release Notes
===================
Channels 1.1.6 is a packaging release for the 1.1 series, released on
June 28th, 2017.
Major Changes
-------------
None.
Minor Changes & Bugfixes
------------------------
* The ``runserver`` ``server_cls`` override no longer fails with more modern
Django versions that pass an ``ipv6`` parameter.
Backwards Incompatible Changes
------------------------------
None.
2.0.0 Release Notes
===================
Channels 2.0 is a major rewrite of Channels, introducing a large amount of
changes to the fundamental design and architecture of Channels. Notably:
* Data is no longer transported over a channel layer between protocol server
and application; instead, applications run inside their protocol servers
(like with WSGI).
* To achieve this, the entire core of channels is now built around Python's
``asyncio`` framework and runs async-native down until it hits either a
Django view or a synchronous consumer.
* Python 2.7 and 3.4 are no longer supported.
More detailed information on the changes and tips on how to port your
applications can be found in our :doc:`/one-to-two` documentation.
Backwards Incompatible Changes
------------------------------
Channels 2 is regrettably not backwards-compatible at all with Channels 1
applications due to the large amount of re-architecting done to the code and
the switch from synchronous to asynchronous runtimes.
A :doc:`migration guide </one-to-two>` is available, and a lot of the basic
concepts are the same, but the basic class structure and imports have changed.
Our apologies for having to make a breaking change like this, but it was the
only way to fix some of the fundamental design issues in Channels 1. Channels 1
will continue to receive security and data-loss fixes for the foreseeable
future, but no new features will be added.
2.0.1 Release Notes
===================
Channels 2.0.1 is a patch release of channels, adding a couple of small
new features and fixing one bug in URL resolution.
As always, when updating Channels make sure to also update its dependencies
(``asgiref`` and ``daphne``) as these also get their own bugfix updates, and
some bugs that may appear to be part of Channels are actually in those packages.
New Features
------------
* There are new async versions of the Websocket generic consumers,
``AsyncWebsocketConsumer`` and ``AsyncJsonWebsocketConsumer``. Read more
about them in :doc:`/topics/consumers`.
* The old ``allowed_hosts_only`` decorator has been removed (it was
accidentally included in the 2.0 release but didn't work) and replaced with
a new ``OriginValidator`` and ``AllowedHostsOriginValidator`` set of
ASGI middleware. Read more in :doc:`/topics/security`.
Bugfixes
--------
* A bug in ``URLRouter`` which didn't allow you to match beyond the first
URL in some situations has been resolved, and a test suite was added for
URL resolution to prevent it happening again.
Backwards Incompatible Changes
------------------------------
None.
2.0.2 Release Notes
===================
Channels 2.0.2 is a patch release of Channels, fixing a bug in the database
connection handling.
As always, when updating Channels make sure to also update its dependencies
(``asgiref`` and ``daphne``) as these also get their own bugfix updates, and
some bugs that may appear to be part of Channels are actually in those packages.
New Features
------------
* There is a new ``channels.db.database_sync_to_async`` wrapper that is like
``sync_to_async`` but also closes database connections for you. You can
read more about usage in :doc:`/topics/databases`.
Bugfixes
--------
* SyncConsumer and all its descendant classes now close database connections
when they exit.
Backwards Incompatible Changes
------------------------------
None.
2.1.0 Release Notes
===================
Channels 2.1 brings a few new major changes to Channels as well as some more
minor fixes. In addition, if you've not yet seen it, we now have a long-form
:doc:`tutorial ` to better introduce some of the concepts
and sync versus async styles of coding.
Major Changes
-------------
Async HTTP Consumer
~~~~~~~~~~~~~~~~~~~
There is a new native-async HTTP consumer class,
``channels.generic.http.AsyncHttpConsumer``. This allows much easier writing
of long-poll endpoints or other long-lived HTTP connection handling that
benefits from native async support.
You can read more about it in the :doc:`/topics/consumers` documentation.
WebSocket Consumers
~~~~~~~~~~~~~~~~~~~
These consumer classes now all have built-in group join and leave functionality,
which will make a consumer join all group names that are in the iterable
``groups`` on the consumer class (this can be a static list or a ``@property``
method).
In addition, the ``accept`` methods on both variants now take an optional
``subprotocol`` argument, which will be sent back to the WebSocket client as
the subprotocol the server has selected. The client's advertised subprotocols
can, as always, be found in the scope as ``scope["subprotocols"]``.
Nested URL Routing
~~~~~~~~~~~~~~~~~~
``URLRouter`` instances can now be nested inside each other and, like Django's
URL handling and ``include``, will strip off the matched part of the URL in the
outer router and leave only the unmatched portion for the inner router, allowing
reusable routing files.
Note that you **cannot** use the Django ``include`` function inside of the
``URLRouter`` as it assumes a bit too much about what it is given as its
left-hand side and will terminate your regular expression/URL pattern wrongly.
Login and Logout
~~~~~~~~~~~~~~~~
As well as overhauling the internals of the ``AuthMiddleware``, there are now
also ``login`` and ``logout`` async functions you can call in consumers to
log users in and out of the current session.
Due to the way cookies are sent back to clients, these come with some caveats;
read more about them and how to use them properly in :doc:`/topics/authentication`.
In-Memory Channel Layer
~~~~~~~~~~~~~~~~~~~~~~~
The in-memory channel layer has been extended to have full expiry and group
support so it should now be suitable for drop-in replacement for most
test scenarios.
Testing
~~~~~~~
The ``ChannelsLiveServerTestCase`` has been rewritten to use a new method for
launching Daphne that should be more resilient (and faster), and now shares
code with the Daphne test suite itself.
Ports are now left up to the operating
system to decide rather than being picked from within a set range. It also now
supports static files when the Django ``staticfiles`` app is enabled.
In addition, the Communicator classes have gained a ``receive_nothing`` method
that allows you to assert that the application didn't send anything, rather
than writing this yourself using exception handling. See more in the
:doc:`/topics/testing` documentation.
Origin header validation
~~~~~~~~~~~~~~~~~~~~~~~~
As well as removing the ``print`` statements that accidentally got into the
last release, this has been overhauled to more correctly match against headers
according to the Origin header spec and align with Django's ``ALLOWED_HOSTS``
setting.
It can now also enforce protocol (``http`` versus ``https``) and port, both
optionally.
Bugfixes & Small Changes
------------------------
* ``print`` statements that accidentally got left in the ``Origin`` validation
code were removed.
* The ``runserver`` command now shows the version of Channels you are running.
* Orphaned tasks that may have caused warnings during test runs or occasionally
live site traffic are now correctly killed off rather than letting them die
later on and print warning messages.
* ``WebsocketCommunicator`` now accepts a query string passed into the
constructor and adds it to the scope rather than just ignoring it.
* Test handlers will correctly handle changing the ``CHANNEL_LAYERS`` setting
via decorators and wipe the internal channel layer cache.
* ``SessionMiddleware`` can be safely nested inside itself rather than causing
a runtime error.
Backwards Incompatible Changes
------------------------------
* The format taken by the ``OriginValidator`` for its domains has changed and
``*.example.com`` is no longer allowed; instead, use ``.example.com`` to match
a domain and all its subdomains.
* If you previously nested ``URLRouter`` instances inside each other both would
have been matching on the full URL before, whereas now they will match on the
unmatched portion of the URL, meaning your URL routes would break if you had
intended this usage.
2.1.1 Release Notes
===================
Channels 2.1.1 is a bugfix release for an important bug in the new async
authentication code.
Major Changes
-------------
None.
Bugfixes & Small Changes
------------------------
Previously, the object in ``scope["user"]`` was one of Django's
SimpleLazyObjects, which then called our ``get_user`` async function via
``async_to_sync``.
This worked fine when called from SyncConsumers, but because
async environments do not run attribute access in an async fashion, when
the body of an async consumer tried to call it, the ``asgiref`` library
flagged an error where the code was trying to call a synchronous function
during an async context.
To fix this, the User object is now loaded non-lazily on application startup.
This introduces a blocking call during the synchronous application
constructor, so the ASGI spec has been updated to recommend that constructors
for ASGI apps are called in a threadpool and Daphne 2.1.1 implements this
and is recommended for use with this release.
Backwards Incompatible Changes
------------------------------
None.
2.1.2 Release Notes
===================
Channels 2.1.2 is another bugfix release in the 2.1 series.
Special thanks to people at the DjangoCon Europe sprints who helped out with
several of these fixes.
Major Changes
-------------
Session and authentication middleware has been overhauled to be non-blocking.
Previously, these middlewares potentially did database or session store access
in the synchronous ASGI constructor, meaning they would block the entire event
loop while doing so.
Instead, they have now been modified to add LazyObjects into the scope in the
places where the session or user will be, and then when the processing goes
through their asynchronous portion, those stores are accessed in a non-blocking
fashion.
This should be an unnoticeable change for end users, but if you see weird
behaviour or an unresolved LazyObject, let us know.
Bugfixes & Small Changes
------------------------
* AsyncHttpConsumer now has a disconnect() method you can override if you
want to perform actions (such as leaving groups) when a long-running HTTP
request disconnects.
* URL routing context now includes default arguments from the URLconf in the
context's ``url_route`` key, alongside captured arguments/groups from the
URL pattern.
* The FORCE_SCRIPT_NAME setting is now respected in ASGI mode, and lets you
override where Django thinks the root URL of your application is mounted.
* ALLOWED_HOSTS is now set correctly during LiveServerTests, meaning you will
no longer get ``400 Bad Request`` errors during these test runs.
Backwards Incompatible Changes
------------------------------
None.
2.1.3 Release Notes
===================
Channels 2.1.3 is another bugfix release in the 2.1 series.
Bugfixes & Small Changes
------------------------
* An ALLOWED_ORIGINS value of "*" will now also allow requests without a Host
header at all (especially important for tests)
* The request.path value is now correct in cases when a server has SCRIPT_NAME
set.
* Errors that happen inside channel listeners inside a runworker or Worker
class are now raised rather than suppressed.
Backwards Incompatible Changes
------------------------------
None.
2.1.4 Release Notes
===================
Channels 2.1.4 is another bugfix release in the 2.1 series.
Bugfixes & Small Changes
------------------------
* Django middleware is now cached rather than instantiated per request
resulting in a significant speed improvement. Some middleware took seconds to
load and as a result Channels was unusable for HTTP serving before.
* ChannelServerLiveTestCase now serves static files again.
* Improved error message resulting from bad Origin headers.
* ``runserver`` logging now goes through the Django logging framework to match
modern Django.
* Generic consumers can now have non-default channel layers - set the
``channel_layer_alias`` property on the consumer class
* Improved error when accessing ``scope['user']`` before it's ready - the user
is not accessible in the constructor of ASGI apps as it needs an async
environment to load in. Previously it raised a generic error when you tried to
access it early; now it tells you more clearly what's happening.
Backwards Incompatible Changes
------------------------------
None.
2.1.5 Release Notes
===================
Channels 2.1.5 is another bugfix release in the 2.1 series.
Bugfixes & Small Changes
------------------------
* Django middleware caching now works on Django 1.11 and Django 2.0.
The previous release only ran on 2.1.
Backwards Incompatible Changes
------------------------------
None.
2.1.6 Release Notes
===================
Channels 2.1.6 is another bugfix release in the 2.1 series.
Bugfixes & Small Changes
------------------------
* HttpCommunicator now extracts query strings correctly from its provided
arguments
* AsyncHttpConsumer provides channel layer attributes following the same
conventions as other consumer classes
* Prevent late-Daphne import errors where importing ``daphne.server`` didn't
work due to a bad linter fix.
Backwards Incompatible Changes
------------------------------
None.
2.1.7 Release Notes
===================
Channels 2.1.7 is another bugfix release in the 2.1 series, and the last
release (at least for a long while) with Andrew Godwin as the primary
maintainer.
Thanks to everyone who has used, supported, and contributed to Channels over
the years, and I hope we can keep it going with community support for a good
while longer.
Bugfixes & Small Changes
------------------------
* HTTP request body size limit is now enforced (the one set by the
``DATA_UPLOAD_MAX_MEMORY_SIZE`` setting)
* ``database_sync_to_async`` now closes old connections before it runs code,
which should prevent some connection errors in long-running pages or tests.
* The auth middleware closes old connections before it runs, to solve similar
old-connection issues.
Backwards Incompatible Changes
------------------------------
None.
2.2.0 Release Notes
===================
Channels 2.2.0 updates the requirements for ASGI version 3, and the supporting
Daphne v2.3 release.
Backwards Incompatible Changes
------------------------------
None.
2.3.0 Release Notes
===================
Channels 2.3.0 updates the ``AsgiHandler`` HTTP request body handling to use a
spooled temporary file, rather than reading the whole request body into memory.
This significantly reduces the maximum memory requirements when serving Django
views, and protects from DoS attacks, whilst still allowing large file
uploads — a combination that had previously been *difficult*.
Many thanks to Ivan Ergunov for his work on the improvements! 🎩
Backwards Incompatible Changes
------------------------------
As a result of the reworked body handling, ``AsgiRequest.__init__()`` is
adjusted to expect a file-like ``stream``, rather than the whole ``body`` as
bytes.
Test cases instantiating requests directly will likely need to be updated to
wrap the provided ``body`` in, e.g., ``io.BytesIO``.
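The wrapping itself is a one-liner; a stdlib-only sketch (the payload is just an example, and the exact request construction depends on your test setup):

```python
import io

body = b'{"action": "save"}'

# Wrap the raw bytes in an in-memory, file-like stream before passing
# them to ``AsgiRequest`` in place of the old ``body`` bytes argument.
stream = io.BytesIO(body)

assert stream.read() == body  # reads like a file
```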
Next Up...
----------
We're looking to address a few issues around ``AsyncHttpConsumer``. Any
human-power available to help on that, truly appreciated. 🙂
2.4.0 Release Notes
===================
Channels 2.4 brings compatibility with Django 3.0's ``async_unsafe()`` checks.
(Specifically we ensure session save calls are made inside an asgiref
``database_sync_to_async()``.)
If you are using Daphne, it is recommended that you install Daphne version
2.4.1 or later for full compatibility with Django 3.0.
Backwards Incompatible Changes
------------------------------
In line with the guidance provided by Django's supported versions policy we now
also drop support for all Django versions before 2.2, which is the current LTS.
Release Notes
=============
.. toctree::
   :maxdepth: 1

   1.0.0
   1.0.1
   1.0.2
   1.0.3
   1.1.0
   1.1.1
   1.1.2
   1.1.3
   1.1.4
   1.1.5
   1.1.6
   2.0.0
   2.0.1
   2.0.2
   2.1.0
   2.1.1
   2.1.2
   2.1.3
   2.1.4
   2.1.5
   2.1.6
   2.1.7
   2.2.0
   2.3.0
   2.4.0
Support
=======
If you have questions about Channels, need debugging help or technical support, you can turn to community resources like:
- `Stack Overflow `_
- The `Django Users mailing list `_ (django-users@googlegroups.com)
- The #django channel on the `PySlackers Slack group `_
If you have a concrete bug or feature request (one that is clear and actionable), please file an issue against the
appropriate GitHub project.
Unfortunately, if you open a GitHub issue with a vague problem (like "it's slow!" or "connections randomly drop!")
we'll have to close it as we don't have the volunteers to answer the number of questions we'd get - please go to
one of the other places above for support from the community at large.
As a guideline, your issue is concrete enough to open an issue if you can provide **exact steps to reproduce** in a fresh,
example project. We need to be able to reproduce it on a *normal, local developer machine* - so saying something doesn't
work in a hosted environment is unfortunately not very useful to us, and we'll close the issue and point you here.
Apologies if this comes off as harsh, but please understand that open source maintenance and support takes up a lot
of time, and if we answered all the issues and support requests there would be no time left to actually work on the code
itself!
Making bugs reproducible
------------------------
If you're struggling with an issue that only happens in a production environment and can't get it to reproduce locally
so either you can fix it or someone can help you, take a step-by-step approach to eliminating the differences between the
environments.
First off, try changing your production environment to see if that helps - for example, if you have Nginx/Apache/etc.
between browsers and Channels, try going direct to the Python server and see if that fixes things. Turn SSL off if you
have it on. Try from different browsers and internet connections. WebSockets are notoriously hard to debug already,
and so you should expect some level of awkwardness from any project involving them.
Next, check package versions between your local and remote environments. You'd be surprised how easy it is to forget
to upgrade something!
Once you've made sure it's none of that, try changing your project. Make a fresh Django project (or use one of the
Channels example projects) and make sure it doesn't have the bug, then work on adding code to it from your project
until the bug appears. Alternately, take your project and remove pieces back down to the basic Django level until
it works.
Network programming is also just difficult in general; you should expect some level of reconnects and dropped connections
as a matter of course. Make sure that what you're seeing isn't just normal for a production application.
How to help the Channels project
--------------------------------
If you'd like to help us with support, the first thing to do is to provide support in the communities mentioned at the
top (Stack Overflow and the mailing list).
If you'd also like to help triage issues, please get in touch and mention you'd like to help out and we can make sure you're
set up and have a good idea of what to do. Most of the work is making sure incoming issues are actually valid and actionable,
and closing those that aren't and redirecting them to this page politely and explaining why.
Some sample response templates are below.
General support request
~~~~~~~~~~~~~~~~~~~~~~~
::
    Sorry, but we can't help out with general support requests here - the issue tracker is for reproducible bugs and
    concrete feature requests only! Please see our support documentation (http://channels.readthedocs.io/en/latest/support.html)
    for more information about where you can get general help.
Non-specific bug/"It doesn't work!"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
    I'm afraid we can't address issues that lack direct steps to reproduce, or that only happen in a production
    environment, as they may not be problems in the project itself. Our support documentation
    (http://channels.readthedocs.io/en/latest/support.html) has details about how to take this sort of problem, diagnose it,
    and either fix it yourself, get help from the community, or make it into an actionable issue that we can handle.

    Sorry we have to direct you away like this, but we get a lot of support requests every week. If you can reduce the problem
    to a clear set of steps to reproduce or an example project that fails in a fresh environment, please re-open the ticket
    with that information.
Problem in application code
~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
    It looks like a problem in your application code rather than in Channels itself, so I'm going to close the ticket.
    If you can trace it down to a problem in Channels itself (with exact steps to reproduce on a fresh or small example
    project - see http://channels.readthedocs.io/en/latest/support.html) please re-open the ticket! Thanks.
Authentication
==============
Channels supports standard Django authentication out-of-the-box for HTTP and
WebSocket consumers, and you can write your own middleware or handling code
if you want to support a different authentication scheme (for example,
tokens in the URL).
Django authentication
---------------------
The ``AuthMiddleware`` in Channels supports standard Django authentication,
where the user details are stored in the session. It allows read-only access
to a user object in the ``scope``.
``AuthMiddleware`` requires ``SessionMiddleware`` to function, which itself
requires ``CookieMiddleware``. For convenience, these are also provided
as a combined callable called ``AuthMiddlewareStack`` that includes all three.
To use the middleware, wrap it around the appropriate level of consumer
in your ``routing.py``::
    from django.conf.urls import url

    from channels.routing import ProtocolTypeRouter, URLRouter
    from channels.auth import AuthMiddlewareStack

    from myapp import consumers

    application = ProtocolTypeRouter({
        "websocket": AuthMiddlewareStack(
            URLRouter([
                url(r"^front(end)/$", consumers.AsyncChatConsumer),
            ])
        ),
    })
While you can wrap the middleware around each consumer individually,
it's recommended you wrap it around a higher-level application component,
like in this case the ``URLRouter``.
Note that the ``AuthMiddleware`` will only work on protocols that provide
HTTP headers in their ``scope`` - by default, this is HTTP and WebSocket.
To access the user, just use ``self.scope["user"]`` in your consumer code::
    class ChatConsumer(WebsocketConsumer):

        def connect(self):
            self.user = self.scope["user"]
Custom Authentication
---------------------
If you have a custom authentication scheme, you can write a custom middleware
to parse the details and put a user object (or whatever other object you need)
into your scope.
Middleware is written as a callable that takes an ASGI application and wraps
it to return another ASGI application. Most authentication can just be done
on the scope, so all you need to do is override the initial constructor
that takes a scope, rather than the event-running coroutine.
Here's a simple example of a middleware that just takes a user ID out of the
query string and uses that::
    from django.contrib.auth.models import User
    from django.db import close_old_connections

    class QueryAuthMiddleware:
        """
        Custom middleware (insecure) that takes user IDs from the query string.
        """

        def __init__(self, inner):
            # Store the ASGI application we were passed
            self.inner = inner

        def __call__(self, scope):
            # Close old database connections to prevent usage of timed out connections
            close_old_connections()
            # Look up user from query string (you should also do things like
            # checking if it is a valid user ID, or if scope["user"] is already
            # populated). Note the query string arrives as bytes.
            user = User.objects.get(id=int(scope["query_string"].decode()))
            # Return the inner application directly and let it run everything else
            return self.inner(dict(scope, user=user))
.. warning::
    Right now you will need to call ``close_old_connections()`` before any
    database code you call inside a middleware's scope-setup method to ensure
    you don't leak idle database connections. We hope to call this automatically
    in future versions of Channels.
The same principles can be applied to authenticate over non-HTTP protocols;
for example, you might want to use someone's chat username from a chat protocol
to turn it into a user.
How to log a user in/out
------------------------
Channels provides direct login and logout functions (much like Django's
``contrib.auth`` package does) as ``channels.auth.login`` and
``channels.auth.logout``.
Within your consumer you can await ``login(scope, user, backend=None)``
to log a user in. This requires that your scope has a ``session`` object;
the best way to do this is to ensure your consumer is wrapped in a
``SessionMiddlewareStack`` or an ``AuthMiddlewareStack``.
You can log out a user with the ``logout(scope)`` async function.
If you are in a WebSocket consumer, or logging in after the first response
has been sent in an HTTP consumer, the session is populated
**but will not be saved automatically** - you must call
``scope["session"].save()`` after login in your consumer code::
from channels.auth import login
class ChatConsumer(AsyncWebsocketConsumer):
...
async def receive(self, text_data):
...
# login the user to this session.
await login(self.scope, user)
# save the session (if the session backend does not access the db you can use ``sync_to_async``)
await database_sync_to_async(self.scope["session"].save)()
When calling ``login(scope, user)``, ``logout(scope)`` or ``get_user(scope)``
from a synchronous function you will need to wrap them in ``async_to_sync``,
as we only provide async versions::
from asgiref.sync import async_to_sync
from channels.auth import login
class SyncChatConsumer(WebsocketConsumer):
...
def receive(self, text_data):
...
async_to_sync(login)(self.scope, user)
self.scope["session"].save()
.. note::
If you are using a long-running consumer (WebSocket or long-polling
HTTP), it is possible that the user will be logged out of their session
elsewhere while your consumer is running. You can periodically use
``get_user(scope)`` to be sure that the user is still logged in.
Channel Layers
==============
Channel layers allow you to talk between different instances of an application.
They're a useful part of making a distributed realtime application if you don't
want to have to shuttle all of your messages or events through a database.
Additionally, they can also be used in combination with a worker process
to make a basic task queue or to offload tasks - read more in
:doc:`/topics/worker`.
Channels does not ship with any channel layers you can use out of the box, as
each one depends on a different way of transporting data across a network. We
would recommend you use ``channels_redis``, which is an official Django-maintained
layer that uses Redis as a transport; it's what the examples here will focus on.
.. note::
Channel layers are an entirely optional part of Channels as of version 2.0.
If you don't want to use them, just leave ``CHANNEL_LAYERS`` unset, or
set it to the empty dict ``{}``.
Messages across channel layers also go to consumers/ASGI application
instances, just like events from the client, and so they now need a
``type`` key as well. See more below.
.. warning::
Channel layers have a purely async interface (for both send and receive);
you will need to wrap them in a converter if you want to call them from
synchronous code (see below).
Configuration
-------------
Channel layers are configured via the ``CHANNEL_LAYERS`` Django setting. It
generally looks something like this::
CHANNEL_LAYERS = {
"default": {
"BACKEND": "channels_redis.core.RedisChannelLayer",
"CONFIG": {
"hosts": [("redis-server-name", 6379)],
},
},
}
You can get the default channel layer from a project with
``channels.layers.get_channel_layer()``, but if you are using consumers a copy
is automatically provided for you on the consumer as ``self.channel_layer``.
Synchronous Functions
---------------------
By default the ``send()``, ``group_send()``, ``group_add()`` and other functions
are async functions, meaning you have to ``await`` them. If you need to call
them from synchronous code, you'll need to use the handy
``asgiref.sync.async_to_sync`` wrapper::
from asgiref.sync import async_to_sync
async_to_sync(channel_layer.send)("channel_name", {...})
What To Send Over The Channel Layer
-----------------------------------
Unlike in Channels 1, the channel layer is only for high-level
application-to-application communication. When you send a message, it is
received by the consumers listening to the group or channel on the other end,
and not transported to that consumer's socket directly.
What this means is that you should send high-level events over the channel
layer, and then have consumers handle those events and do appropriate low-level
networking to their attached client.
For example, the multichat example
in Andrew Godwin's ``channels-examples`` repository sends events like this
over the channel layer::
await self.channel_layer.group_send(
room.group_name,
{
"type": "chat.message",
"room_id": room_id,
"username": self.scope["user"].username,
"message": message,
}
)
And then the consumers define a handling function to receive those events
and turn them into WebSocket frames::
async def chat_message(self, event):
"""
Called when someone has messaged our chat.
"""
# Send a message down to the client
await self.send_json(
{
"msg_type": settings.MSG_TYPE_MESSAGE,
"room": event["room_id"],
"username": event["username"],
"message": event["message"],
},
)
Any consumer based on Channels' ``SyncConsumer`` or ``AsyncConsumer`` will
automatically provide you a ``self.channel_layer`` and ``self.channel_name``
attribute, which contains a pointer to the channel layer instance and the
channel name that will reach the consumer respectively.
Any message sent to that channel name - or to a group the channel name was
added to - will be received by the consumer much like an event from its
connected client, and dispatched to a named method on the consumer. The name
of the method will be the ``type`` of the event with periods replaced by
underscores - so, for example, an event coming in over the channel layer
with a ``type`` of ``chat.join`` will be handled by the method ``chat_join``.
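The type-to-method mapping can be sketched in plain Python. This is an illustrative re-implementation to show the naming convention, not the actual Channels dispatch code:

```python
class MiniConsumer:
    """Dispatch events to methods named after their ``type`` value,
    with periods replaced by underscores, e.g. ``chat.join`` -> ``chat_join``."""

    def dispatch(self, event):
        handler_name = event["type"].replace(".", "_")
        handler = getattr(self, handler_name, None)
        if handler is None:
            raise ValueError("No handler for message type %s" % event["type"])
        return handler(event)


class ChatConsumer(MiniConsumer):
    def chat_join(self, event):
        return "%s joined" % event["user"]


consumer = ChatConsumer()
print(consumer.dispatch({"type": "chat.join", "user": "alice"}))  # alice joined
```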
.. note::
If you are inheriting from the ``AsyncConsumer`` class tree, all your
event handlers, including ones for events over the channel layer, must
be asynchronous (``async def``). If you are in the ``SyncConsumer`` class
tree instead, they must all be synchronous (``def``).
Single Channels
---------------
Each application instance - so, for example, each long-running HTTP request
or open WebSocket - results in a single Consumer instance, and if you have
channel layers enabled, Consumers will generate a unique *channel name* for
themselves, and start listening on it for events.
This means you can send those consumers events from outside the process -
from other consumers, maybe, or from management commands - and they will react
to them and run code just like they would events from their client connection.
The channel name is available on a consumer as ``self.channel_name``. Here's
an example of writing the channel name into a database upon connection,
and then specifying a handler method for events on it::
class ChatConsumer(WebsocketConsumer):
def connect(self):
# Make a database row with our channel name
Clients.objects.create(channel_name=self.channel_name)
def disconnect(self, close_code):
# Note that in some rare cases (power loss, etc) disconnect may fail
# to run; this naive example would leave zombie channel names around.
Clients.objects.filter(channel_name=self.channel_name).delete()
def chat_message(self, event):
# Handles the "chat.message" event when it's sent to us.
self.send(text_data=event["text"])
Note that, because you're mixing event handling from the channel layer and
from the protocol connection, you need to make sure that your type names do not
clash. It's recommended you prefix type names (like we did here with ``chat.``)
to avoid clashes.
To send to a single channel, just find its channel name (for the example above,
we could crawl the database), and use ``channel_layer.send``::
from channels.layers import get_channel_layer
channel_layer = get_channel_layer()
await channel_layer.send("channel_name", {
"type": "chat.message",
"text": "Hello there!",
})
.. _groups:
Groups
------
Obviously, sending to individual channels isn't particularly useful - in most
cases you'll want to send to multiple channels/consumers at once as a broadcast.
Not only for cases like a chat where you want to send incoming messages to
everyone in the room, but even for sending to an individual user who might have
more than one browser tab or device connected.
You can construct your own solution for this if you like, using your existing
datastores, or use the Groups system built-in to some channel layers. Groups
are a broadcast system that:
* Allows you to add and remove channel names from named groups, and send to
those named groups.
* Provides group expiry for clean-up of connections whose disconnect handler
didn't get to run (e.g. power failure)
They do not allow you to enumerate or list the channels in a group; it's a
pure broadcast system. If you need more precise control or need to know who
is connected, you should build your own system or use a suitable third-party
one.
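Conceptually, a group is just a named set of channel names that the layer fans messages out to. A toy in-memory version (illustrative only - the real ``InMemoryChannelLayer`` also handles capacity, expiry, and channel-name generation) might look like:

```python
import asyncio


class TinyGroupLayer:
    """A toy broadcast layer: named groups map to sets of channel names,
    and each channel name has its own message queue."""

    def __init__(self):
        self.channels = {}  # channel name -> asyncio.Queue
        self.groups = {}    # group name -> set of channel names

    async def send(self, channel, message):
        self.channels.setdefault(channel, asyncio.Queue()).put_nowait(message)

    async def receive(self, channel):
        return await self.channels.setdefault(channel, asyncio.Queue()).get()

    async def group_add(self, group, channel):
        self.groups.setdefault(group, set()).add(channel)

    async def group_discard(self, group, channel):
        self.groups.get(group, set()).discard(channel)

    async def group_send(self, group, message):
        # Fan the message out to every channel currently in the group.
        for channel in self.groups.get(group, set()):
            await self.send(channel, message)


async def demo():
    layer = TinyGroupLayer()
    await layer.group_add("chat", "specific.channel.one")
    await layer.group_add("chat", "specific.channel.two")
    await layer.group_send("chat", {"type": "chat.message", "text": "hi"})
    return await layer.receive("specific.channel.one")


print(asyncio.run(demo()))
```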
You use groups by adding a channel to them during connection, and removing it
during disconnection, illustrated here on the WebSocket generic consumer::
# This example uses WebSocket consumer, which is synchronous, and so
# needs the async channel layer functions to be converted.
from asgiref.sync import async_to_sync
class ChatConsumer(WebsocketConsumer):
def connect(self):
async_to_sync(self.channel_layer.group_add)("chat", self.channel_name)
def disconnect(self, close_code):
async_to_sync(self.channel_layer.group_discard)("chat", self.channel_name)
Then, to send to a group, use ``group_send``, like in this small example
which broadcasts chat messages to every connected socket when combined with
the code above::
class ChatConsumer(WebsocketConsumer):
...
def receive(self, text_data):
async_to_sync(self.channel_layer.group_send)(
"chat",
{
"type": "chat.message",
"text": text_data,
},
)
def chat_message(self, event):
self.send(text_data=event["text"])
Using Outside Of Consumers
--------------------------
You'll often want to send to the channel layer from outside of the scope of
a consumer, and so you won't have ``self.channel_layer``. In this case, you
should use the ``get_channel_layer`` function to retrieve it::
from channels.layers import get_channel_layer
channel_layer = get_channel_layer()
Then, once you have it, you can just call methods on it. Remember that
**channel layers only support async methods**, so you can either call it
from your own asynchronous context::
for chat_name in chats:
await channel_layer.group_send(
chat_name,
{"type": "chat.system_message", "text": announcement_text},
)
Or you'll need to use async_to_sync::
from asgiref.sync import async_to_sync
async_to_sync(channel_layer.group_send)("chat", {"type": "chat.force_disconnect"})
Consumers
=========
While Channels is built around a basic low-level spec called
:doc:`ASGI `, it's more designed for interoperability than for writing
complex applications in. So, Channels provides you with Consumers, a rich
abstraction that allows you to make ASGI applications easily.
Consumers do a couple of things in particular:
* Structure your code as a series of functions to be called whenever an
event happens, rather than making you write an event loop.
* Allow you to write synchronous or async code and deal with handoffs
and threading for you.
Of course, you are free to ignore consumers and use the other parts of
Channels - like routing, session handling and authentication - with any
ASGI app, but they're generally the best way to write your application code.
.. _sync_to_async:
Basic Layout
------------
A consumer is a subclass of either ``channels.consumer.AsyncConsumer`` or
``channels.consumer.SyncConsumer``. As these names suggest, one will expect
you to write async-capable code, while the other will run your code
synchronously in a threadpool for you.
Let's look at a basic example of a ``SyncConsumer``::
from channels.consumer import SyncConsumer
class EchoConsumer(SyncConsumer):
def websocket_connect(self, event):
self.send({
"type": "websocket.accept",
})
def websocket_receive(self, event):
self.send({
"type": "websocket.send",
"text": event["text"],
})
This is a very simple WebSocket echo server - it will accept all incoming
WebSocket connections, and then reply to all incoming WebSocket text frames
with the same text.
Consumers are structured around a series of named methods corresponding to the
``type`` value of the messages they are going to receive, with any ``.``
replaced by ``_``. The two handlers above are handling ``websocket.connect``
and ``websocket.receive`` messages respectively.
How did we know what event types we were going to get and what would be
in them (like ``websocket.receive`` having a ``text`` key)? That's because we
designed this against the ASGI WebSocket specification, which tells us how
WebSockets are presented - read more about it in :doc:`ASGI ` - and
protected this application with a router that checks for a scope type of
``websocket`` - see more about that in :doc:`/topics/routing`.
Apart from that, the only other basic API is ``self.send(event)``. This lets
you send events back to the client or protocol server as defined by the
protocol - if you read the WebSocket protocol, you'll see that the dict we
send above is how you send a text frame to the client.
The ``AsyncConsumer`` is laid out very similarly, but all the handler methods
must be coroutines, and ``self.send`` is a coroutine::
from channels.consumer import AsyncConsumer
class EchoConsumer(AsyncConsumer):
async def websocket_connect(self, event):
await self.send({
"type": "websocket.accept",
})
async def websocket_receive(self, event):
await self.send({
"type": "websocket.send",
"text": event["text"],
})
When should you use ``AsyncConsumer`` and when should you use ``SyncConsumer``?
The main thing to consider is what you're talking to. If you call a slow
synchronous function from inside an ``AsyncConsumer`` you're going to hold up
the entire event loop, so they're only useful if you're also calling async
code (for example, using ``aiohttp`` to fetch 20 pages in parallel).
If you're calling any part of Django's ORM or other synchronous code, you
should use a ``SyncConsumer``, as this will run the whole consumer in a thread
and stop your ORM queries blocking the entire server.
We recommend that you **write SyncConsumers by default**, and only use
AsyncConsumers in cases where you know you are doing something that would
be improved by async handling (long-running tasks that could be done in
parallel) *and* you are only using async-native libraries.
If you really want to call a synchronous function from an ``AsyncConsumer``,
take a look at ``asgiref.sync.sync_to_async``, which is the utility that Channels
uses to run ``SyncConsumers`` in threadpools, and can turn any synchronous
callable into an asynchronous coroutine.
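The core idea behind ``sync_to_async`` can be illustrated with the standard library alone: run the blocking callable in a thread pool and ``await`` the result. This sketch omits asgiref's thread-sensitivity and exception-propagation handling:

```python
import asyncio
import functools
import time


def to_async(sync_fn):
    """Wrap a blocking callable so it can be awaited without stalling
    the event loop (runs it in the loop's default thread pool)."""
    @functools.wraps(sync_fn)
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, functools.partial(sync_fn, *args, **kwargs)
        )
    return wrapper


def slow_lookup(name):
    time.sleep(0.05)  # stand-in for a blocking ORM query
    return name.upper()


async def main():
    # The event loop stays free while slow_lookup runs in a thread.
    return await to_async(slow_lookup)("alice")


print(asyncio.run(main()))  # ALICE
```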
.. important::
If you want to call the Django ORM from an ``AsyncConsumer`` (or any other
synchronous code), you should use the ``database_sync_to_async`` adapter
instead. See :doc:`/topics/databases` for more.
Closing Consumers
~~~~~~~~~~~~~~~~~
When the socket or connection attached to your consumer is closed - either by
you or the client - you will likely get an event sent to you (for example,
``http.disconnect`` or ``websocket.disconnect``), and your application instance
will be given a short amount of time to act on it.
Once you have finished doing your post-disconnect cleanup, you need to raise
``channels.exceptions.StopConsumer`` to halt the ASGI application cleanly and
let the server clean it up. If you leave it running - by not raising this
exception - the server will reach its application close timeout (which is
10 seconds by default in Daphne) and then kill your application and raise
a warning.
The generic consumers below do this for you, so this is only needed if you
are writing your own consumer class based on ``AsyncConsumer`` or
``SyncConsumer``. However, if you override their ``__call__`` method, or
block the handling methods that it calls from returning, you may still run into
this; take a look at their source code if you want more information.
Additionally, if you launch your own background coroutines, make sure to also
shut them down when the connection is finished, or you'll leak coroutines into
the server.
Channel Layers
~~~~~~~~~~~~~~
Consumers also let you use Channels' *channel layers*, allowing them to
send messages between each other, either one-to-one or via a broadcast system
called groups.
Consumers will use the channel layer ``default`` unless
the ``channel_layer_alias`` attribute is set when subclassing any
of the provided ``Consumer`` classes.
To use the channel layer ``echo_alias`` we would set it like so::
from channels.consumer import SyncConsumer
class EchoConsumer(SyncConsumer):
channel_layer_alias = "echo_alias"
You can read more in :doc:`/topics/channel_layers`.
.. _scope:
Scope
-----
Consumers receive the connection's ``scope`` when they are initialised, which
contains a lot of the information you'd find on the ``request`` object in a
Django view. It's available as ``self.scope`` inside the consumer's methods.
Scopes are part of the :doc:`ASGI specification `, but here are
some common things you might want to use:
* ``scope["path"]``, the path on the request. *(HTTP and WebSocket)*
* ``scope["headers"]``, raw name/value header pairs from the request *(HTTP and WebSocket)*
* ``scope["method"]``, the method name used for the request. *(HTTP)*
If you enable things like :doc:`authentication`, you'll also be able to access
the user object as ``scope["user"]``, and the URLRouter, for example, will
put captured groups from the URL into ``scope["url_route"]``.
In general, the scope is the place to get connection information and where
middleware will put attributes it wants to let you access (in the same way
that Django's middleware adds things to ``request``).
For a full list of what can occur in a connection scope, look at the basic
ASGI spec for the protocol you are terminating, plus any middleware or routing
code you are using. The web (HTTP and WebSocket) scopes are available in
the ASGI specification for HTTP and WebSocket.
Generic Consumers
-----------------
What you see above is the basic layout of a consumer that works for any
protocol. Much like Django's *generic views*, Channels ships with
*generic consumers* that wrap common functionality up so you don't need to
rewrite it, specifically for HTTP and WebSocket handling.
WebsocketConsumer
~~~~~~~~~~~~~~~~~
Available as ``channels.generic.websocket.WebsocketConsumer``, this
wraps the verbose plain-ASGI message sending and receiving into handling that
just deals with text and binary frames::
from channels.generic.websocket import WebsocketConsumer
class MyConsumer(WebsocketConsumer):
groups = ["broadcast"]
def connect(self):
# Called on connection.
# To accept the connection call:
self.accept()
# Or accept the connection and specify a chosen subprotocol.
# A list of subprotocols specified by the connecting client
# will be available in self.scope['subprotocols']
self.accept("subprotocol")
# To reject the connection, call:
self.close()
def receive(self, text_data=None, bytes_data=None):
# Called with either text_data or bytes_data for each frame
# You can call:
self.send(text_data="Hello world!")
# Or, to send a binary frame:
self.send(bytes_data="Hello world!")
# Want to force-close the connection? Call:
self.close()
# Or add a custom WebSocket error code!
self.close(code=4123)
def disconnect(self, close_code):
# Called when the socket closes
You can also raise ``channels.exceptions.AcceptConnection`` or
``channels.exceptions.DenyConnection`` from anywhere inside the ``connect``
method in order to accept or reject a connection, if you want reusable
authentication or rate-limiting code that doesn't need to use mixins.
A ``WebsocketConsumer``'s channel will automatically be added to (on connect)
and removed from (on disconnect) any groups whose names appear in the
consumer's ``groups`` class attribute. ``groups`` must be an iterable, and a
channel layer with support for groups must be set as the channel backend
(``channels.layers.InMemoryChannelLayer`` and
``channels_redis.core.RedisChannelLayer`` both support groups). If no channel
layer is configured or the channel layer doesn't support groups, connecting
to a ``WebsocketConsumer`` with a non-empty ``groups`` attribute will raise
``channels.exceptions.InvalidChannelLayerError``. See :ref:`groups` for more.
AsyncWebsocketConsumer
~~~~~~~~~~~~~~~~~~~~~~
Available as ``channels.generic.websocket.AsyncWebsocketConsumer``, this has
the exact same methods and signature as ``WebsocketConsumer`` but everything
is async, and the functions you need to write have to be as well::
from channels.generic.websocket import AsyncWebsocketConsumer
class MyConsumer(AsyncWebsocketConsumer):
groups = ["broadcast"]
async def connect(self):
# Called on connection.
# To accept the connection call:
await self.accept()
# Or accept the connection and specify a chosen subprotocol.
# A list of subprotocols specified by the connecting client
# will be available in self.scope['subprotocols']
await self.accept("subprotocol")
# To reject the connection, call:
await self.close()
async def receive(self, text_data=None, bytes_data=None):
# Called with either text_data or bytes_data for each frame
# You can call:
await self.send(text_data="Hello world!")
# Or, to send a binary frame:
await self.send(bytes_data="Hello world!")
# Want to force-close the connection? Call:
await self.close()
# Or add a custom WebSocket error code!
await self.close(code=4123)
async def disconnect(self, close_code):
# Called when the socket closes
JsonWebsocketConsumer
~~~~~~~~~~~~~~~~~~~~~
Available as ``channels.generic.websocket.JsonWebsocketConsumer``, this
works like ``WebsocketConsumer``, except it will auto-encode and decode
to JSON sent as WebSocket text frames.
The only API differences are:
* Your ``receive_json`` method must take a single argument, ``content``, that
is the decoded JSON object.
* ``self.send_json`` takes only a single argument, ``content``, which will be
encoded to JSON for you.
If you want to customise the JSON encoding and decoding, you can override
the ``encode_json`` and ``decode_json`` classmethods.
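For example, to serialize values the stock encoder rejects (such as ``datetime``), the two hooks could be overridden along these lines. The functions below mirror the ``encode_json(content) -> text`` / ``decode_json(text) -> content`` shape but are shown standalone, without the Channels base class:

```python
import json
from datetime import datetime


class DateTimeEncoder(json.JSONEncoder):
    """JSON encoder that renders datetimes as ISO 8601 strings."""
    def default(self, obj):
        if isinstance(obj, datetime):
            return obj.isoformat()
        return super().default(obj)


def encode_json(content):
    return json.dumps(content, cls=DateTimeEncoder)


def decode_json(text):
    return json.loads(text)


text = encode_json({"sent": datetime(2019, 12, 1, 9, 30)})
print(text)  # {"sent": "2019-12-01T09:30:00"}
```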
AsyncJsonWebsocketConsumer
~~~~~~~~~~~~~~~~~~~~~~~~~~
An async version of ``JsonWebsocketConsumer``, available as
``channels.generic.websocket.AsyncJsonWebsocketConsumer``. Note that even
``encode_json`` and ``decode_json`` are async functions.
AsyncHttpConsumer
~~~~~~~~~~~~~~~~~
Available as ``channels.generic.http.AsyncHttpConsumer``, this offers basic
primitives to implement an HTTP endpoint::
from channels.generic.http import AsyncHttpConsumer
class BasicHttpConsumer(AsyncHttpConsumer):
async def handle(self, body):
await asyncio.sleep(10)
await self.send_response(200, b"Your response bytes", headers=[
(b"Content-Type", b"text/plain"),
])
You are expected to implement your own ``handle`` method. The
method receives the whole request body as a single bytestring. Headers
may either be passed as a list of tuples or as a dictionary. The
response body content is expected to be a bytestring.
You can also implement a ``disconnect`` method if you want to run code on
disconnect - for example, to shut down any coroutines you launched. This will
run even on an unclean disconnection, so don't expect that ``handle`` has
finished running cleanly.
If you need more control over the response, e.g. for implementing long
polling, use the lower level ``self.send_headers`` and ``self.send_body``
methods instead. This example already mentions channel layers which will
be explained in detail later::
import json
from channels.generic.http import AsyncHttpConsumer
class LongPollConsumer(AsyncHttpConsumer):
async def handle(self, body):
await self.send_headers(headers=[
(b"Content-Type", b"application/json"),
])
# Headers are only sent after the first body event.
# Set "more_body" to tell the interface server to not
# finish the response yet:
await self.send_body(b"", more_body=True)
async def chat_message(self, event):
# Send JSON and finish the response:
await self.send_body(json.dumps(event).encode("utf-8"))
Of course you can also use those primitives to implement an HTTP endpoint for
Server-sent events::
from datetime import datetime
from channels.generic.http import AsyncHttpConsumer
class ServerSentEventsConsumer(AsyncHttpConsumer):
async def handle(self, body):
await self.send_headers(headers=[
(b"Cache-Control", b"no-cache"),
(b"Content-Type", b"text/event-stream"),
(b"Transfer-Encoding", b"chunked"),
])
while True:
payload = "data: %s\n\n" % datetime.now().isoformat()
await self.send_body(payload.encode("utf-8"), more_body=True)
await asyncio.sleep(1)
Database Access
===============
The Django ORM is a synchronous piece of code, and so if you want to access
it from asynchronous code you need to do special handling to make sure its
connections are closed properly.
If you're using ``SyncConsumer``, or anything based on it - like
``JsonWebsocketConsumer`` - you don't need to do anything special, as all your
code is already run in a synchronous mode and Channels will do the cleanup
for you as part of the ``SyncConsumer`` code.
If you are writing asynchronous code, however, you will need to call
database methods in a safe, synchronous context, using ``database_sync_to_async``.
Database Connections
--------------------
Channels can potentially open a lot more database connections than you may be used to if you are using threaded consumers (synchronous ones) - it can open up to one connection per thread.
By default, the number of threads is set to "the number of CPUs * 5", so you may see up to this number of threads. If you want to change it, set the ``ASGI_THREADS`` environment variable to the maximum number you wish to allow.
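The sizing rule can be made concrete with a small snippet. The ``ASGI_THREADS`` variable and the CPUs * 5 default are as described above; the ``max_worker_threads`` helper itself is hypothetical, for illustration only:

```python
import os


def max_worker_threads():
    """Return the thread cap: ASGI_THREADS if set, else CPUs * 5."""
    env = os.environ.get("ASGI_THREADS")
    if env:
        return int(env)
    return (os.cpu_count() or 1) * 5


os.environ["ASGI_THREADS"] = "4"
print(max_worker_threads())  # 4
```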
To avoid having too many threads idling in connections, you can instead rewrite your code to use async consumers and only dip into threads when you need to use Django's ORM (using ``database_sync_to_async``).
database_sync_to_async
----------------------
``channels.db.database_sync_to_async`` is a version of ``asgiref.sync.sync_to_async``
that also cleans up database connections on exit.
To use it, write your ORM queries in a separate function or method, and then
call it with ``database_sync_to_async`` like so::
from channels.db import database_sync_to_async
async def connect(self):
self.username = await database_sync_to_async(self.get_name)()
def get_name(self):
return User.objects.all()[0].name
You can also use it as a decorator::
from channels.db import database_sync_to_async
async def connect(self):
self.username = await self.get_name()
@database_sync_to_async
def get_name(self):
return User.objects.all()[0].name
Routing
=======
While consumers are valid :doc:`ASGI ` applications, you don't want
to just write one and have that be the only thing you can give to protocol
servers like Daphne. Channels provides routing classes that allow you to
combine and stack your consumers (and any other valid ASGI application) to
dispatch based on what the connection is.
.. important::
Channels routers only work on the *scope* level, not on the level of
individual *events*, which means you can only have one consumer for any
given connection. Routing is to work out what single consumer to give a
connection, not how to spread events from one connection across
multiple consumers.
Routers are themselves valid ASGI applications, and it's possible to nest them.
We suggest that you have a ``ProtocolTypeRouter`` as the root application of
your project - the one that you pass to protocol servers - and nest other,
more protocol-specific routing underneath there.
Channels expects you to be able to define a single *root application*, and
provide the path to it as the ``ASGI_APPLICATION`` setting (think of this as
being analogous to the ``ROOT_URLCONF`` setting in Django). There's no fixed
rule as to where you need to put the routing and the root application,
but we recommend putting them in a project-level file called ``routing.py``,
next to ``urls.py``. You can read more about deploying Channels projects and
settings in :doc:`/deploying`.
Here's an example of what that ``routing.py`` might look like::
from django.conf.urls import url
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack
from chat.consumers import AdminChatConsumer, PublicChatConsumer
from aprs_news.consumers import APRSNewsConsumer
application = ProtocolTypeRouter({
# WebSocket chat handler
"websocket": AuthMiddlewareStack(
URLRouter([
url(r"^chat/admin/$", AdminChatConsumer),
url(r"^chat/$", PublicChatConsumer),
])
),
# Using the third-party project frequensgi, which provides an APRS protocol
"aprs": APRSNewsConsumer,
})
It's possible to have routers from third-party apps, too, or write your own,
but we'll go over the built-in Channels ones here.
ProtocolTypeRouter
------------------
``channels.routing.ProtocolTypeRouter``
This should be the
top level of your ASGI application stack and the main entry in your routing file.
It lets you dispatch to one of a number of other ASGI applications based on the
``type`` value present in the ``scope``. Protocols will define a fixed type
value that their scope contains, so you can use this to distinguish between
incoming connection types.
It takes a single argument - a dictionary mapping type names to ASGI
applications that serve them::
ProtocolTypeRouter({
"http": some_app,
"websocket": some_other_app,
})
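The dispatch itself is essentially a mapping lookup on ``scope["type"]``. A stripped-down sketch (ignoring the ``http`` default and error handling of the real class, and using the ASGI 2 instantiate-with-scope style that Channels 2 expects) looks like:

```python
class TinyProtocolTypeRouter:
    """Dispatch a connection scope to an ASGI app by its ``type`` key."""

    def __init__(self, application_mapping):
        self.application_mapping = application_mapping

    def __call__(self, scope):
        if scope["type"] not in self.application_mapping:
            raise ValueError(
                "No application configured for scope type %r" % scope["type"]
            )
        # Instantiate the chosen application with the scope.
        return self.application_mapping[scope["type"]](scope)


def http_app(scope):
    return "http handler for %s" % scope["path"]


router = TinyProtocolTypeRouter({"http": http_app})
print(router({"type": "http", "path": "/"}))  # http handler for /
```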
If a ``http`` argument is not provided, it will default to the Django view
system's ASGI interface, ``channels.http.AsgiHandler``, which means that for
most projects that aren't doing custom long-poll HTTP handling, you can simply
not specify a ``http`` option and leave it to work the "normal" Django way.
If you want to split HTTP handling between long-poll handlers and Django views,
use a URLRouter with ``channels.http.AsgiHandler`` specified as the last entry
with a match-everything pattern.
URLRouter
---------
``channels.routing.URLRouter``
Routes ``http`` or ``websocket`` type connections via their HTTP path. Takes
a single argument, a list of Django URL objects (either ``path()`` or ``url()``)::
URLRouter([
url(r"^longpoll/$", LongPollConsumer),
url(r"^notifications/(?P<stream>\w+)/$", LongPollConsumer),
url(r"", AsgiHandler),
])
Any captured groups will be provided in ``scope`` as the key ``url_route``, a
dict with a ``kwargs`` key containing a dict of the named regex groups and
an ``args`` key with a list of positional regex groups. Note that named
and unnamed groups cannot be mixed: Positional groups are discarded as soon
as a single named group is matched.
For example, to pull out the named group ``stream`` in the example above, you
would do this::
stream = self.scope["url_route"]["kwargs"]["stream"]
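The captured-group behaviour can be checked with the standard ``re`` module: once a named group is present, ``groupdict()`` carries what becomes ``scope["url_route"]["kwargs"]`` and the positional groups are not used:

```python
import re

match = re.match(r"^notifications/(?P<stream>\w+)/$", "notifications/alerts/")
kwargs = match.groupdict()  # named groups -> scope["url_route"]["kwargs"]
# Positional groups are discarded once a named group matches:
args = () if kwargs else match.groups()
print(kwargs)  # {'stream': 'alerts'}
print(args)    # ()
```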
Please note that ``URLRouter`` nesting will not work properly with
``path()`` routes if inner routers are wrapped by additional middleware.
ChannelNameRouter
-----------------
``channels.routing.ChannelNameRouter``
Routes ``channel`` type scopes based on the value of the ``channel`` key in
their scope. Intended for use with the :doc:`/topics/worker`.
It takes a single argument - a dictionary mapping channel names to ASGI
applications that serve them::
ChannelNameRouter({
"thumbnails-generate": some_app,
"thumbnails-delete": some_other_app,
})
Security
========
This covers basic security for protocols you're serving via Channels and
helpers that we provide.
WebSockets
----------
WebSockets start out life as a HTTP request, including all the cookies
and headers, and so you can use the standard :doc:`/topics/authentication`
code in order to grab current sessions and check user IDs.
There is also a risk of cross-site request forgery (CSRF) with WebSockets though,
as they can be initiated from any site on the internet to your domain, and will
still have the user's cookies and session from your site. If you serve private
data down the socket, you should restrict the sites which are allowed to open
sockets to you.
This is done via the ``channels.security.websocket`` package, and the two
ASGI middlewares it contains, ``OriginValidator`` and
``AllowedHostsOriginValidator``.
``OriginValidator`` lets you restrict the valid options for the ``Origin``
header that is sent with every WebSocket to say where it comes from. Just wrap
it around your WebSocket application code like this, and pass it a list of
valid domains as the second argument. You can pass only a single domain (for example,
``.allowed-domain.com``) or a full origin, in the format ``scheme://domain[:port]``
(for example, ``http://allowed-domain.com:80``). Port is optional, but recommended::
from channels.security.websocket import OriginValidator
application = ProtocolTypeRouter({
"websocket": OriginValidator(
AuthMiddlewareStack(
URLRouter([
...
])
),
[".goodsite.com", "http://.goodsite.com:80", "http://other.site.com"],
),
})
Note: If you want to allow connections from any domain, use the origin ``*``.
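To see why the leading-dot form matters, here is a simplified sketch of host matching (a hypothetical helper; the real ``OriginValidator`` also validates scheme and port):

```python
from urllib.parse import urlparse

def host_matches(pattern, origin):
    # A pattern with a leading dot (".goodsite.com") matches the bare
    # domain and any subdomain; otherwise the host must match exactly.
    host = urlparse(origin).hostname or ""
    if pattern.startswith("."):
        return host == pattern[1:] or host.endswith(pattern)
    return host == pattern

assert host_matches(".goodsite.com", "http://sub.goodsite.com")
assert host_matches(".goodsite.com", "http://goodsite.com:80")
assert not host_matches(".goodsite.com", "http://evilgoodsite.com")
```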
Often, the set of domains you want to restrict to is the same as the Django
``ALLOWED_HOSTS`` setting, which performs a similar security check for the
``Host`` header, and so ``AllowedHostsOriginValidator`` lets you use this
setting without having to re-declare the list::
from channels.security.websocket import AllowedHostsOriginValidator
application = ProtocolTypeRouter({
"websocket": AllowedHostsOriginValidator(
AuthMiddlewareStack(
URLRouter([
...
])
),
),
})
``AllowedHostsOriginValidator`` will also automatically allow local connections
through if the site is in ``DEBUG`` mode, much like Django's host validation.
Sessions
========
Channels supports standard Django sessions using HTTP cookies for both HTTP
and WebSocket. There are some caveats, however.
Basic Usage
-----------
The ``SessionMiddleware`` in Channels supports standard Django sessions,
and like all middleware, should be wrapped around the ASGI application that
needs the session information in its scope (for example, a ``URLRouter`` to
apply it to a whole collection of consumers, or an individual consumer).
``SessionMiddleware`` requires ``CookieMiddleware`` to function.
For convenience, these are also provided as a combined callable called
``SessionMiddlewareStack`` that includes both. All are importable from
``channels.sessions``.
To use the middleware, wrap it around the appropriate level of consumer
in your ``routing.py``::
from channels.routing import ProtocolTypeRouter, URLRouter
from channels.sessions import SessionMiddlewareStack
from myapp import consumers
application = ProtocolTypeRouter({
"websocket": SessionMiddlewareStack(
URLRouter([
url(r"^front(end)/$", consumers.AsyncChatConsumer),
])
),
})
``SessionMiddleware`` will only work on protocols that provide
HTTP headers in their ``scope`` - by default, this is HTTP and WebSocket.
To access the session, use ``self.scope["session"]`` in your consumer code::
class ChatConsumer(WebsocketConsumer):
def connect(self, event):
self.scope["session"]["seed"] = random.randint(1, 1000)
``SessionMiddleware`` respects all the same Django settings as the default
Django session framework, like ``SESSION_COOKIE_NAME`` and ``SESSION_COOKIE_DOMAIN``.
Session Persistence
-------------------
Within HTTP consumers or ASGI applications, session persistence works as you
would expect from Django HTTP views - sessions are saved whenever you send
an HTTP response that does not have status code ``500``.
This is done by overriding any ``http.response.start`` messages to inject
cookie headers into the response as you send it out. If you have set
the ``SESSION_SAVE_EVERY_REQUEST`` setting to ``True``, it will save the
session and send the cookie on every response, otherwise it will only save
whenever the session is modified.
If you are in a WebSocket consumer, however, the session is populated
**but will never be saved automatically** - you must call
``scope["session"].save()`` yourself whenever you want to persist a session
to your session store. If you don't save, the session will still work correctly
inside the consumer (as it's stored as an instance variable), but other
connections or HTTP views won't be able to see the changes.
.. note::
If you are in a long-polling HTTP consumer, you might want to save changes
to the session before you send a response. If you want to do this,
call ``scope["session"].save()``.
Testing
=======
Testing Channels consumers is a little trickier than testing normal Django
views due to their underlying asynchronous nature.
To help with testing, Channels provides test helpers called *Communicators*,
which allow you to wrap up an ASGI application (like a consumer) into its own
event loop and ask it questions.
They do, however, require that you have asynchronous support in your test suite.
While you can do this yourself, we recommend using ``py.test`` with its ``asyncio``
plugin, which is how we'll illustrate tests below.
Setting Up Async Tests
----------------------
Firstly, you need to get ``py.test`` set up with async test support, and
presumably Django test support as well. You can do this by installing the
``pytest-django`` and ``pytest-asyncio`` packages::
pip install -U pytest-django pytest-asyncio
Then, you need to decorate the tests you want to run async with
``pytest.mark.asyncio``. Note that you can't mix this with ``unittest.TestCase``
subclasses; you have to write async tests as top-level test functions in the
native ``py.test`` style::
import pytest
from channels.testing import HttpCommunicator
from myproject.myapp.consumers import MyConsumer
@pytest.mark.asyncio
async def test_my_consumer():
communicator = HttpCommunicator(MyConsumer, "GET", "/test/")
response = await communicator.get_response()
assert response["body"] == b"test response"
assert response["status"] == 200
If you have normal Django views, you can continue to test those with the
standard Django test tools and client. You only need the async setup for
code that's written as consumers.
There are a few variants of the Communicator - a plain one for generic usage,
and one each for HTTP and WebSockets specifically that have shortcut methods.
ApplicationCommunicator
-----------------------
``ApplicationCommunicator`` is the generic test helper for any ASGI application.
It provides several basic methods for interaction as explained below.
You should only need this generic class for non-HTTP/WebSocket tests, though
you might need to fall back to it if you are testing things like HTTP chunked
responses or long-polling, which aren't supported in ``HttpCommunicator`` yet.
.. note::
``ApplicationCommunicator`` is actually provided by the base ``asgiref``
package, but we let you import it from ``channels.testing`` for convenience.
To construct it, pass it an application and a scope::
from channels.testing import ApplicationCommunicator
communicator = ApplicationCommunicator(MyConsumer, {"type": "http", ...})
send_input
~~~~~~~~~~
Call it to send an event to the application::
await communicator.send_input({
"type": "http.request",
"body": b"chunk one \x01 chunk two",
})
receive_output
~~~~~~~~~~~~~~
Call it to receive an event from the application::
event = await communicator.receive_output(timeout=1)
assert event["type"] == "http.response.start"
.. _application_communicator-receive_nothing:
receive_nothing
~~~~~~~~~~~~~~~
Call it to check that there is no event waiting to be received from the
application::
assert await communicator.receive_nothing(timeout=0.1, interval=0.01) is False
# Receive the rest of the http request from above
event = await communicator.receive_output()
assert event["type"] == "http.response.body"
assert event.get("more_body") is True
event = await communicator.receive_output()
assert event["type"] == "http.response.body"
assert event.get("more_body") is None
# Check that there isn't another event
assert await communicator.receive_nothing() is True
# You could continue to send and receive events
# await communicator.send_input(...)
The method has two optional parameters:
* ``timeout``: number of seconds to wait to ensure the queue is empty. Defaults
to 0.1.
* ``interval``: number of seconds to wait for another check for new events.
Defaults to 0.01.
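The behaviour can be pictured as a polling loop over the application's output queue. A simplified stand-alone sketch, not asgiref's actual implementation:

```python
import asyncio

async def receive_nothing(queue, timeout=0.1, interval=0.01):
    # Poll the queue every `interval` seconds for up to `timeout` seconds;
    # True means no event arrived within that window.
    waited = 0.0
    while waited < timeout:
        if not queue.empty():
            return False
        await asyncio.sleep(interval)
        waited += interval
    return queue.empty()

async def demo():
    queue = asyncio.Queue()
    assert await receive_nothing(queue) is True   # nothing queued yet
    await queue.put({"type": "http.response.body"})
    assert await receive_nothing(queue) is False  # an event is waiting

asyncio.run(demo())
```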
wait
~~~~
Call it to wait for an application to exit (you'll need to either do this or wait for
it to send you output before you can see what it did using mocks or inspection)::
await communicator.wait(timeout=1)
If you're expecting your application to raise an exception, use ``pytest.raises``
around ``wait``::
with pytest.raises(ValueError):
await communicator.wait()
HttpCommunicator
----------------
``HttpCommunicator`` is a subclass of ``ApplicationCommunicator`` specifically
tailored for HTTP requests. You need only instantiate it with your desired
options::
from channels.testing import HttpCommunicator
communicator = HttpCommunicator(MyHttpConsumer, "GET", "/test/")
And then wait for its response::
response = await communicator.get_response()
assert response["body"] == b"test response"
You can pass the following arguments to the constructor:
* ``method``: HTTP method name (unicode string, required)
* ``path``: HTTP path (unicode string, required)
* ``body``: HTTP body (bytestring, optional)
The response from the ``get_response`` method will be a dict with the following
keys:
* ``status``: HTTP status code (integer)
* ``headers``: List of headers as (name, value) tuples (both bytestrings)
* ``body``: HTTP response body (bytestring)
WebsocketCommunicator
---------------------
``WebsocketCommunicator`` allows you to more easily test WebSocket consumers.
It provides several convenience methods for interacting with a WebSocket
application, as shown in this example::
from channels.testing import WebsocketCommunicator
communicator = WebsocketCommunicator(SimpleWebsocketApp, "/testws/")
connected, subprotocol = await communicator.connect()
assert connected
# Test sending text
await communicator.send_to(text_data="hello")
response = await communicator.receive_from()
assert response == "hello"
# Close
await communicator.disconnect()
.. note::
All of these methods are coroutines, which means you must ``await`` them.
If you do not, your test will either time out (if you forgot to await a
send) or try comparing things to a coroutine object (if you forgot to
await a receive).
.. important::
If you don't call ``WebsocketCommunicator.disconnect()`` before your test
suite ends, you may find yourself getting ``RuntimeWarnings`` about
things never being awaited, as you will be killing your app off in the
middle of its lifecycle. You do not, however, have to ``disconnect()`` if
your app already raised an error.
You can also pass an ``application`` built with ``URLRouter`` instead of the
plain consumer class. This lets you test applications that require positional
or keyword arguments in the ``scope``::
from channels.testing import WebsocketCommunicator
application = URLRouter([
url(r"^testws/(?P<message>\w+)/$", KwargsWebSocketApp),
])
communicator = WebsocketCommunicator(application, "/testws/test/")
connected, subprotocol = await communicator.connect()
assert connected
# Test on connection welcome message
message = await communicator.receive_from()
assert message == 'test'
# Close
await communicator.disconnect()
.. note::
Since the ``WebsocketCommunicator`` class takes a URL in its constructor,
a single Communicator can only test a single URL. If you want to test
multiple different URLs, use multiple Communicators.
connect
~~~~~~~
Triggers the connection phase of the WebSocket and waits for the application
to either accept or deny the connection. Takes no parameters and returns
either:
* ``(True, <chosen_subprotocol>)`` if the socket was accepted.
  ``chosen_subprotocol`` defaults to ``None``.
* ``(False, <close_code>)`` if the socket was rejected.
  ``close_code`` defaults to ``1000``.
send_to
~~~~~~~
Sends a data frame to the application. Takes exactly one of ``bytes_data``
or ``text_data`` as parameters, and returns nothing::
await communicator.send_to(bytes_data=b"hi\0")
This method will type-check your parameters for you to ensure what you are
sending really is text or bytes.
send_json_to
~~~~~~~~~~~~
Sends a JSON payload to the application as a text frame. Call it with
an object and it will JSON-encode it for you, and return nothing::
await communicator.send_json_to({"hello": "world"})
receive_from
~~~~~~~~~~~~
Receives a frame from the application and gives you either ``bytes`` or
``text`` back depending on the frame type::
response = await communicator.receive_from()
Takes an optional ``timeout`` argument with a number of seconds to wait before
timing out, which defaults to 1. It will typecheck your application's responses
for you as well, to ensure that text frames contain text data, and binary
frames contain binary data.
receive_json_from
~~~~~~~~~~~~~~~~~
Receives a text frame from the application and decodes it for you::
response = await communicator.receive_json_from()
assert response == {"hello": "world"}
Takes an optional ``timeout`` argument with a number of seconds to wait before
timing out, which defaults to 1.
receive_nothing
~~~~~~~~~~~~~~~
Checks that there is no frame waiting to be received from the application. For
details see
:ref:`ApplicationCommunicator <application_communicator-receive_nothing>`.
disconnect
~~~~~~~~~~
Closes the socket from the client side. Takes nothing and returns nothing.
You do not need to call this if the application instance you're testing already
exited (for example, if it errored), but if you do call it, it will just
silently return control to you.
ChannelsLiveServerTestCase
--------------------------
If you just want to run standard Selenium or other tests that require a
webserver to be running for external programs, you can use
``ChannelsLiveServerTestCase``, which is a drop-in replacement for the
standard Django ``LiveServerTestCase``::
from channels.testing import ChannelsLiveServerTestCase
class SomeLiveTests(ChannelsLiveServerTestCase):
def test_live_stuff(self):
call_external_testing_thing(self.live_server_url)
.. note::
You can't use an in-memory database for your live tests. Therefore
include a test database file name in your settings to tell Django to
use a file database if you use SQLite::
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": os.path.join(BASE_DIR, "db.sqlite3"),
"TEST": {
"NAME": os.path.join(BASE_DIR, "db_test.sqlite3"),
},
},
}
serve_static
~~~~~~~~~~~~
Subclass ``ChannelsLiveServerTestCase`` with ``serve_static = True`` in order
to serve static files (comparable to Django's ``StaticLiveServerTestCase``; you
don't need to run ``collectstatic`` before, or as part of, your test setup).
Worker and Background Tasks
===========================
While :doc:`channel layers ` are primarily designed for
communicating between different instances of ASGI applications, they can also
be used to offload work to a set of worker servers listening on fixed channel
names, as a simple, very-low-latency task queue.
.. note::
The worker/background tasks system in Channels is simple and very fast,
and achieves this by not having some features you may find useful, such as
retries or return values.
We recommend you use it for work that does not need guarantees around
being complete (at-most-once delivery), and for work that needs more
guarantees, look into a separate dedicated task queue like Celery.
Setting up background tasks works in two parts - sending the events, and then
setting up the consumers to receive and process the events.
Sending
-------
To send an event, just send it to a fixed channel name. For example, let's say
we want a background process that pre-caches thumbnails::
# Inside a consumer
await self.channel_layer.send(
"thumbnails-generate",
{
"type": "generate",
"id": 123456789,
},
)
Note that the event you send **must** have a ``type`` key, even if only one
type of message is being sent over the channel, as it will turn into an event
a consumer has to handle.
Also remember that if you are sending the event from a synchronous environment,
you have to use the ``asgiref.sync.async_to_sync`` wrapper as specified in
:doc:`channel layers `.
Receiving and Consumers
-----------------------
Channels will present incoming worker tasks to you as events inside a scope
with a ``type`` of ``channel``, and a ``channel`` key matching the channel
name. We recommend you use ProtocolTypeRouter and ChannelNameRouter (see
:doc:`/topics/routing` for more) to arrange your consumers::
application = ProtocolTypeRouter({
...
"channel": ChannelNameRouter({
"thumbnails-generate": consumers.GenerateConsumer,
"thumbnails-delete": consumers.DeleteConsumer,
}),
})
You'll be specifying the ``type`` values of the individual events yourself
when you send them, so decide what your names are going to be and write
consumers to match. For example, here's a basic consumer that expects to
receive an event with ``type`` ``test.print``, and a ``text`` value containing
the text to print::
class PrintConsumer(SyncConsumer):
def test_print(self, message):
print("Test: " + message["text"])
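The mapping from event ``type`` to handler method is a plain substitution - dots become underscores. A minimal sketch of that rule (our own function, mirroring how Channels derives handler names):

```python
# An event's "type" selects the consumer method whose name is the type
# with "." replaced by "_" (simplified; Channels also rejects names
# starting with an underscore).
def handler_name(event):
    if "type" not in event:
        raise ValueError("Message has no 'type' defined")
    return event["type"].replace(".", "_")

assert handler_name({"type": "test.print", "text": "hi"}) == "test_print"
```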
Once you've hooked up the consumers, all you need to do is run a process that
will handle them. In lieu of a protocol server - as there are no connections
involved here - Channels instead provides the ``runworker``
command::
./manage.py runworker thumbnails-generate thumbnails-delete
Note that ``runworker`` will only listen to the channels you pass it on the
command line. If you do not include a channel, or forget to run the worker,
your events will not be received and acted upon.
Tutorial
========
Channels allows you to use WebSockets and other non-HTTP protocols in your
Django site. For example you might want to use WebSockets to allow a page on
your site to immediately receive updates from your Django server without using
HTTP long-polling or other expensive techniques.
In this tutorial we will build a simple chat server, where you can join an
online room, post messages to the room, and have others in the same room see
those messages immediately.
.. toctree::
:maxdepth: 1
part_1
part_2
part_3
part_4
Tutorial Part 1: Basic Setup
============================
In this tutorial we will build a simple chat server. It will have two pages:
* An index view that lets you type the name of a chat room to join.
* A room view that lets you see messages posted in a particular chat room.
The room view will use a WebSocket to communicate with the Django server and
listen for any messages that are posted.
We assume that you are familiar with basic concepts for building a Django site.
If not we recommend you complete `the Django tutorial`_ first and then come back
to this tutorial.
We assume that you have `Django installed`_ already. You can tell Django is
installed and which version by running the following command in a shell prompt
(indicated by the ``$`` prefix)::
$ python3 -m django --version
We also assume that you have :doc:`Channels installed ` already. You can tell
Channels is installed by running the following command::
$ python3 -c 'import channels; print(channels.__version__)'
This tutorial is written for Channels 2.0, which supports Python 3.5+ and Django
1.11+. If the Channels version does not match, you can refer to the tutorial for
your version of Channels by using the version switcher at the bottom left corner
of this page, or update Channels to the newest version.
This tutorial also **uses Docker** to install and run Redis. We use Redis as the
backing store for the channel layer, which is an optional component of the
Channels library that we use in the tutorial. `Install Docker`_ from its
official website - there are official runtimes for Mac OS and Windows that
make it easy to use, and packages for many Linux distributions where it can
run natively.
.. note::
While you can run the standard Django ``runserver`` without the need
for Docker, the channels features we'll be using in later parts of the
tutorial will need Redis to run, and we recommend Docker as the easiest
way to do this.
.. _the Django tutorial: https://docs.djangoproject.com/en/stable/intro/tutorial01/
.. _Django installed: https://docs.djangoproject.com/en/stable/intro/install/
.. _Install Docker: https://www.docker.com/get-docker
Creating a project
------------------
If you don't already have a Django project, you will need to create one.
From the command line, ``cd`` into a directory where you'd like to store your
code, then run the following command::
$ django-admin startproject mysite
This will create a ``mysite`` directory in your current directory with the
following contents::
mysite/
manage.py
mysite/
__init__.py
settings.py
urls.py
wsgi.py
Creating the Chat app
---------------------
We will put the code for the chat server in its own app.
Make sure you're in the same directory as ``manage.py`` and type this command::
$ python3 manage.py startapp chat
That'll create a directory ``chat``, which is laid out like this::
chat/
__init__.py
admin.py
apps.py
migrations/
__init__.py
models.py
tests.py
views.py
For the purposes of this tutorial, we will only be working with ``chat/views.py``
and ``chat/__init__.py``. So remove all other files from the ``chat`` directory.
After removing unnecessary files, the ``chat`` directory should look like::
chat/
__init__.py
views.py
We need to tell our project that the ``chat`` app is installed. Edit the
``mysite/settings.py`` file and add ``'chat'`` to the **INSTALLED_APPS** setting.
It'll look like this::
# mysite/settings.py
INSTALLED_APPS = [
'chat',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
Add the index view
------------------
We will now create the first view, an index view that lets you type the name of
a chat room to join.
Create a ``templates`` directory in your ``chat`` directory. Within the
``templates`` directory you have just created, create another directory called
``chat``, and within that create a file called ``index.html`` to hold the
template for the index view.
Your chat directory should now look like::
chat/
__init__.py
templates/
chat/
index.html
views.py
Put the following code in ``chat/templates/chat/index.html``::
    <!-- chat/templates/chat/index.html -->
    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8"/>
        <title>Chat Rooms</title>
    </head>
    <body>
        What chat room would you like to enter?<br/>
        <input id="room-name-input" type="text" size="100"/><br/>
        <input id="room-name-submit" type="button" value="Enter"/>

        <script>
            document.querySelector('#room-name-input').focus();
            document.querySelector('#room-name-input').onkeyup = function(e) {
                if (e.keyCode === 13) {  // enter, return
                    document.querySelector('#room-name-submit').click();
                }
            };

            document.querySelector('#room-name-submit').onclick = function(e) {
                var roomName = document.querySelector('#room-name-input').value;
                window.location.pathname = '/chat/' + roomName + '/';
            };
        </script>
    </body>
    </html>
Create the view function for the index view.
Put the following code in ``chat/views.py``::
# chat/views.py
from django.shortcuts import render
def index(request):
return render(request, 'chat/index.html', {})
To call the view, we need to map it to a URL - and for this we need a URLconf.
To create a URLconf in the chat directory, create a file called ``urls.py``.
Your app directory should now look like::
chat/
__init__.py
templates/
chat/
index.html
urls.py
views.py
In the ``chat/urls.py`` file include the following code::
# chat/urls.py
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
]
The next step is to point the root URLconf at the **chat.urls** module.
In ``mysite/urls.py``, add an import for **django.conf.urls.include** and
insert an **include()** in the **urlpatterns** list, so you have::
# mysite/urls.py
from django.conf.urls import include
from django.urls import path
from django.contrib import admin
urlpatterns = [
path('chat/', include('chat.urls')),
path('admin/', admin.site.urls),
]
Let's verify that the index view works. Run the following command::
$ python3 manage.py runserver
You'll see the following output on the command line::
Performing system checks...
System check identified no issues (0 silenced).
You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
February 18, 2018 - 22:08:39
Django version 1.11.10, using settings 'mysite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
Go to http://127.0.0.1:8000/chat/ in your browser and you should see the text
"What chat room would you like to enter?" along with a text input to provide a
room name.
Type in "lobby" as the room name and press enter. You should be redirected to
the room view at http://127.0.0.1:8000/chat/lobby/ but we haven't written the
room view yet, so you'll get a "Page not found" error page.
Go to the terminal where you ran the ``runserver`` command and press Control-C
to stop the server.
Integrate the Channels library
------------------------------
So far we've just created a regular Django app; we haven't used the Channels
library at all. Now it's time to integrate Channels.
Let's start by creating a root routing configuration for Channels. A Channels
:doc:`routing configuration ` is similar to a Django URLconf in that it tells Channels
what code to run when an HTTP request is received by the Channels server.
We'll start with an empty routing configuration.
Create a file ``mysite/routing.py`` and include the following code::
# mysite/routing.py
from channels.routing import ProtocolTypeRouter
application = ProtocolTypeRouter({
# (http->django views is added by default)
})
Now add the Channels library to the list of installed apps.
Edit the ``mysite/settings.py`` file and add ``'channels'`` to the
``INSTALLED_APPS`` setting. It'll look like this::
# mysite/settings.py
INSTALLED_APPS = [
'channels',
'chat',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
You'll also need to point Channels at the root routing configuration.
Edit the ``mysite/settings.py`` file again and add the following to the bottom
of it::
# mysite/settings.py
# Channels
ASGI_APPLICATION = 'mysite.routing.application'
With Channels now in the installed apps, it will take control of the
``runserver`` command, replacing the standard Django development server with
the Channels development server.
.. note::
The Channels development server will conflict with any other third-party
apps that require an overloaded or replacement runserver command.
An example of such a conflict is with `whitenoise.runserver_nostatic`_ from
`whitenoise`_. In order to solve such issues, try moving ``channels`` to the
top of your ``INSTALLED_APPS`` or remove the offending app altogether.
.. _whitenoise.runserver_nostatic: https://github.com/evansd/whitenoise/issues/77
.. _whitenoise: https://github.com/evansd/whitenoise
Let's ensure that the Channels development server is working correctly.
Run the following command::
$ python3 manage.py runserver
You'll see the following output on the command line::
Performing system checks...
System check identified no issues (0 silenced).
You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions.
Run 'python manage.py migrate' to apply them.
February 18, 2018 - 22:16:23
Django version 1.11.10, using settings 'mysite.settings'
Starting ASGI/Channels development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
2018-02-18 22:16:23,729 - INFO - server - HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2018-02-18 22:16:23,730 - INFO - server - Configuring endpoint tcp:port=8000:interface=127.0.0.1
2018-02-18 22:16:23,731 - INFO - server - Listening on TCP address 127.0.0.1:8000
.. note::
Ignore the warning about unapplied database migrations.
We won't be using a database in this tutorial.
Notice the line beginning with
``Starting ASGI/Channels development server at http://127.0.0.1:8000/``.
This indicates that the Channels development server has taken over from the
Django development server.
Go to http://127.0.0.1:8000/chat/ in your browser and you should still see the
index page that we created before.
Go to the terminal where you ran the ``runserver`` command and press Control-C
to stop the server.
This tutorial continues in :doc:`Tutorial 2 `.
Tutorial Part 2: Implement a Chat Server
========================================
This tutorial begins where :doc:`Tutorial 1 ` left off.
We'll get the room page working so that you can chat with yourself and others
in the same room.
Add the room view
-----------------
We will now create the second view, a room view that lets you see messages
posted in a particular chat room.
Create a new file ``chat/templates/chat/room.html``.
Your app directory should now look like::
chat/
__init__.py
templates/
chat/
index.html
room.html
urls.py
views.py
Create the view template for the room view in ``chat/templates/chat/room.html``::
    <!-- chat/templates/chat/room.html -->
    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8"/>
        <title>Chat Room</title>
    </head>
    <body>
        <textarea id="chat-log" cols="100" rows="20"></textarea><br/>
        <input id="chat-message-input" type="text" size="100"/><br/>
        <input id="chat-message-submit" type="button" value="Send"/>
    </body>
    <script>
        var roomName = {{ room_name_json }};

        var chatSocket = new WebSocket(
            'ws://' + window.location.host +
            '/ws/chat/' + roomName + '/');

        chatSocket.onmessage = function(e) {
            var data = JSON.parse(e.data);
            var message = data['message'];
            document.querySelector('#chat-log').value += (message + '\n');
        };

        chatSocket.onclose = function(e) {
            console.error('Chat socket closed unexpectedly');
        };

        document.querySelector('#chat-message-input').focus();
        document.querySelector('#chat-message-input').onkeyup = function(e) {
            if (e.keyCode === 13) {  // enter, return
                document.querySelector('#chat-message-submit').click();
            }
        };

        document.querySelector('#chat-message-submit').onclick = function(e) {
            var messageInputDom = document.querySelector('#chat-message-input');
            var message = messageInputDom.value;
            chatSocket.send(JSON.stringify({
                'message': message
            }));
            messageInputDom.value = '';
        };
    </script>
    </html>
Create the view function for the room view in ``chat/views.py``.
Add the imports of ``mark_safe`` and ``json`` and add the ``room`` view function::
# chat/views.py
from django.shortcuts import render
from django.utils.safestring import mark_safe
import json
def index(request):
return render(request, 'chat/index.html', {})
def room(request, room_name):
return render(request, 'chat/room.html', {
'room_name_json': mark_safe(json.dumps(room_name))
})
Create the route for the room view in ``chat/urls.py``::
# chat/urls.py
from django.urls import path
from . import views
urlpatterns = [
path('', views.index, name='index'),
path('<str:room_name>/', views.room, name='room'),
]
Start the Channels development server::
$ python3 manage.py runserver
Go to http://127.0.0.1:8000/chat/ in your browser and to see the index page.
Type in "lobby" as the room name and press enter. You should be redirected to
the room page at http://127.0.0.1:8000/chat/lobby/ which now displays an empty
chat log.
Type the message "hello" and press enter. Nothing happens. In particular the
message does not appear in the chat log. Why?
The room view is trying to open a WebSocket to the URL
``ws://127.0.0.1:8000/ws/chat/lobby/`` but we haven't created a consumer that
accepts WebSocket connections yet. If you open your browser's JavaScript
console, you should see an error that looks like::
WebSocket connection to 'ws://127.0.0.1:8000/ws/chat/lobby/' failed: Unexpected response code: 500
Write your first consumer
-------------------------
When Django accepts an HTTP request, it consults the root URLconf to look up a
view function, and then calls the view function to handle the request.
Similarly, when Channels accepts a WebSocket connection, it consults the root
routing configuration to look up a consumer, and then calls various functions on
the consumer to handle events from the connection.
We will write a basic consumer that accepts WebSocket connections on the path
``/ws/chat/ROOM_NAME/`` that takes any message it receives on the WebSocket and
echoes it back to the same WebSocket.
.. note::
It is good practice to use a common path prefix like ``/ws/`` to distinguish
WebSocket connections from ordinary HTTP connections because it will make
deploying Channels to a production environment in certain configurations
easier.
In particular for large sites it will be possible to configure a
production-grade HTTP server like nginx to route requests based on path to
either (1) a production-grade WSGI server like Gunicorn+Django for ordinary
HTTP requests or (2) a production-grade ASGI server like Daphne+Channels
for WebSocket requests.
Note that for smaller sites you can use a simpler deployment strategy where
Daphne serves all requests - HTTP and WebSocket - rather than having a
separate WSGI server. In this deployment configuration no common path prefix
like ``/ws/`` is necessary.
Create a new file ``chat/consumers.py``. Your app directory should now look like::
chat/
__init__.py
consumers.py
templates/
chat/
index.html
room.html
urls.py
views.py
Put the following code in ``chat/consumers.py``::
# chat/consumers.py
from channels.generic.websocket import WebsocketConsumer
import json
class ChatConsumer(WebsocketConsumer):
def connect(self):
self.accept()
def disconnect(self, close_code):
pass
def receive(self, text_data):
text_data_json = json.loads(text_data)
message = text_data_json['message']
self.send(text_data=json.dumps({
'message': message
}))
This is a synchronous WebSocket consumer that accepts all connections, receives
messages from its client, and echoes those messages back to the same client. For
now it does not broadcast messages to other clients in the same room.
.. note::
Channels also supports writing *asynchronous* consumers for greater
performance. However any asynchronous consumer must be careful to avoid
directly performing blocking operations, such as accessing a Django model.
See the :doc:`/topics/consumers` reference for more information about writing asynchronous
consumers.
We need to create a routing configuration for the ``chat`` app that has a route to
the consumer. Create a new file ``chat/routing.py``. Your app directory should now
look like::
chat/
__init__.py
consumers.py
routing.py
templates/
chat/
index.html
room.html
urls.py
views.py
Put the following code in ``chat/routing.py``::
# chat/routing.py
from django.urls import re_path
from . import consumers
websocket_urlpatterns = [
re_path(r'ws/chat/(?P<room_name>\w+)/$', consumers.ChatConsumer),
]
The next step is to point the root routing configuration at the ``chat.routing``
module. In ``mysite/routing.py``, import ``AuthMiddlewareStack``, ``URLRouter``,
and ``chat.routing``; then insert a ``'websocket'`` key in the
``ProtocolTypeRouter`` dictionary in the following format::
# mysite/routing.py
from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter
import chat.routing
application = ProtocolTypeRouter({
# (http->django views is added by default)
'websocket': AuthMiddlewareStack(
URLRouter(
chat.routing.websocket_urlpatterns
)
),
})
This root routing configuration specifies that when a connection is made to the
Channels development server, the ``ProtocolTypeRouter`` will first inspect the type
of connection. If it is a WebSocket connection (**ws://** or **wss://**), the connection
will be given to the ``AuthMiddlewareStack``.
The ``AuthMiddlewareStack`` will populate the connection's **scope** with a reference to
the currently authenticated user, similar to how Django's
``AuthenticationMiddleware`` populates the **request** object of a view function with
the currently authenticated user. (Scopes will be discussed later in this
tutorial.) Then the connection will be given to the ``URLRouter``.
The ``URLRouter`` will examine the HTTP path of the connection to route it to a
particular consumer, based on the provided ``url`` patterns.
Let's verify that the consumer for the ``/ws/chat/ROOM_NAME/`` path works. Run migrations to
apply database changes (Django's session framework needs the database) and then start the
Channels development server::
$ python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying sessions.0001_initial... OK
$ python3 manage.py runserver
Go to the room page at http://127.0.0.1:8000/chat/lobby/ which now displays an
empty chat log.
Type the message "hello" and press enter. You should now see "hello" echoed in
the chat log.
However if you open a second browser tab to the same room page at
http://127.0.0.1:8000/chat/lobby/ and type in a message, the message will not
appear in the first tab. For that to work, we need to have multiple instances of
the same ``ChatConsumer`` be able to talk to each other. Channels provides a
**channel layer** abstraction that enables this kind of communication between
consumers.
Go to the terminal where you ran the ``runserver`` command and press Control-C to
stop the server.
Enable a channel layer
----------------------
A channel layer is a kind of communication system. It allows multiple consumer
instances to talk with each other, and with other parts of Django.
A channel layer provides the following abstractions:
* A **channel** is a mailbox where messages can be sent to. Each channel has a name.
Anyone who has the name of a channel can send a message to the channel.
* A **group** is a group of related channels. A group has a name. Anyone who has the
name of a group can add/remove a channel to the group by name and send
a message to all channels in the group. It is not possible to enumerate what
channels are in a particular group.
Every consumer instance has an automatically generated unique channel name, and
so can be communicated with via a channel layer.
In our chat application we want to have multiple instances of ``ChatConsumer`` in
the same room communicate with each other. To do that we will have each
ChatConsumer add its channel to a group whose name is based on the room name.
That will allow ChatConsumers to transmit messages to all other ChatConsumers in
the same room.
We will use a channel layer that uses Redis as its backing store. To start a
Redis server on port 6379, run the following command::
$ docker run -p 6379:6379 -d redis:2.8
We need to install channels_redis so that Channels knows how to interface with
Redis. Run the following command::
$ pip3 install channels_redis
Before we can use a channel layer, we must configure it. Edit the
``mysite/settings.py`` file and add a ``CHANNEL_LAYERS`` setting to the bottom.
It should look like::
# mysite/settings.py
# Channels
ASGI_APPLICATION = 'mysite.routing.application'
CHANNEL_LAYERS = {
'default': {
'BACKEND': 'channels_redis.core.RedisChannelLayer',
'CONFIG': {
"hosts": [('127.0.0.1', 6379)],
},
},
}
.. note::
It is possible to have multiple channel layers configured.
However most projects will just use a single ``'default'`` channel layer.
Let's make sure that the channel layer can communicate with Redis. Open a Django
shell and run the following commands::
$ python3 manage.py shell
>>> import channels.layers
>>> channel_layer = channels.layers.get_channel_layer()
>>> from asgiref.sync import async_to_sync
>>> async_to_sync(channel_layer.send)('test_channel', {'type': 'hello'})
>>> async_to_sync(channel_layer.receive)('test_channel')
{'type': 'hello'}
Type Control-D to exit the Django shell.
Now that we have a channel layer, let's use it in ``ChatConsumer``. Put the
following code in ``chat/consumers.py``, replacing the old code::
# chat/consumers.py
from asgiref.sync import async_to_sync
from channels.generic.websocket import WebsocketConsumer
import json
class ChatConsumer(WebsocketConsumer):
def connect(self):
self.room_name = self.scope['url_route']['kwargs']['room_name']
self.room_group_name = 'chat_%s' % self.room_name
# Join room group
async_to_sync(self.channel_layer.group_add)(
self.room_group_name,
self.channel_name
)
self.accept()
def disconnect(self, close_code):
# Leave room group
async_to_sync(self.channel_layer.group_discard)(
self.room_group_name,
self.channel_name
)
# Receive message from WebSocket
def receive(self, text_data):
text_data_json = json.loads(text_data)
message = text_data_json['message']
# Send message to room group
async_to_sync(self.channel_layer.group_send)(
self.room_group_name,
{
'type': 'chat_message',
'message': message
}
)
# Receive message from room group
def chat_message(self, event):
message = event['message']
# Send message to WebSocket
self.send(text_data=json.dumps({
'message': message
}))
When a user posts a message, a JavaScript function will transmit the message
over WebSocket to a ChatConsumer. The ChatConsumer will receive that message and
forward it to the group corresponding to the room name. Every ChatConsumer in
the same group (and thus in the same room) will then receive the message from
the group and forward it over WebSocket back to JavaScript, where it will be
appended to the chat log.
Several parts of the new ``ChatConsumer`` code deserve further explanation:
* ``self.scope['url_route']['kwargs']['room_name']``
* Obtains the ``'room_name'`` parameter from the URL route in ``chat/routing.py``
that opened the WebSocket connection to the consumer.
* Every consumer has a *scope* that contains information about its connection,
including in particular any positional or keyword arguments from the URL
route and the currently authenticated user if any.
* ``self.room_group_name = 'chat_%s' % self.room_name``
* Constructs a Channels group name directly from the user-specified room
name, without any quoting or escaping.
* Group names may only contain letters, digits, hyphens, and periods.
Therefore this example code will fail on room names that have other
characters.
* ``async_to_sync(self.channel_layer.group_add)(...)``
* Joins a group.
* The async_to_sync(...) wrapper is required because ChatConsumer is a
synchronous WebsocketConsumer but it is calling an asynchronous channel
layer method. (All channel layer methods are asynchronous.)
* Group names are restricted to ASCII alphanumerics, hyphens, and periods
only. Since this code constructs a group name directly from the room name,
it will fail if the room name contains any characters that aren't valid in
a group name.
* ``self.accept()``
* Accepts the WebSocket connection.
* If you do not call accept() within the connect() method then the
connection will be rejected and closed. You might want to reject a connection
for example because the requesting user is not authorized to perform the
requested action.
* It is recommended that accept() be called as the *last* action in connect()
if you choose to accept the connection.
* ``async_to_sync(self.channel_layer.group_discard)(...)``
* Leaves a group.
* ``async_to_sync(self.channel_layer.group_send)``
* Sends an event to a group.
* An event has a special ``'type'`` key corresponding to the name of the method
that should be invoked on consumers that receive the event.
Let's verify that the new consumer for the ``/ws/chat/ROOM_NAME/`` path works.
To start the Channels development server, run the following command::
$ python3 manage.py runserver
Open a browser tab to the room page at http://127.0.0.1:8000/chat/lobby/.
Open a second browser tab to the same room page.
In the second browser tab, type the message "hello" and press enter. You should
now see "hello" echoed in the chat log in both the second browser tab and in the
first browser tab.
You now have a basic fully-functional chat server!
This tutorial continues in :doc:`Tutorial 3 </tutorial/part_3>`.
Tutorial Part 3: Rewrite Chat Server as Asynchronous
====================================================
This tutorial begins where :doc:`Tutorial 2 </tutorial/part_2>` left off.
We'll rewrite the consumer code to be asynchronous rather than synchronous
to improve its performance.
Rewrite the consumer to be asynchronous
---------------------------------------
The ``ChatConsumer`` that we have written is currently synchronous. Synchronous
consumers are convenient because they can call regular synchronous I/O functions
such as those that access Django models without writing special code. However
asynchronous consumers can provide a higher level of performance since they
don't need to create additional threads when handling requests.
``ChatConsumer`` only uses async-native libraries (Channels and the channel layer)
and in particular it does not access synchronous Django models. Therefore it can
be rewritten to be asynchronous without complications.
.. note::
Even if ``ChatConsumer`` *did* access Django models or other synchronous code it
would still be possible to rewrite it as asynchronous. Utilities like
``asgiref.sync.sync_to_async`` and
``channels.db.database_sync_to_async`` can be
used to call synchronous code from an asynchronous consumer. The performance
gains however would be less than if it only used async-native libraries.
Let's rewrite ``ChatConsumer`` to be asynchronous.
Put the following code in ``chat/consumers.py``::
# chat/consumers.py
from channels.generic.websocket import AsyncWebsocketConsumer
import json
class ChatConsumer(AsyncWebsocketConsumer):
async def connect(self):
self.room_name = self.scope['url_route']['kwargs']['room_name']
self.room_group_name = 'chat_%s' % self.room_name
# Join room group
await self.channel_layer.group_add(
self.room_group_name,
self.channel_name
)
await self.accept()
async def disconnect(self, close_code):
# Leave room group
await self.channel_layer.group_discard(
self.room_group_name,
self.channel_name
)
# Receive message from WebSocket
async def receive(self, text_data):
text_data_json = json.loads(text_data)
message = text_data_json['message']
# Send message to room group
await self.channel_layer.group_send(
self.room_group_name,
{
'type': 'chat_message',
'message': message
}
)
# Receive message from room group
async def chat_message(self, event):
message = event['message']
# Send message to WebSocket
await self.send(text_data=json.dumps({
'message': message
}))
This new code for ``ChatConsumer`` is very similar to the original code, with the following differences:
* ``ChatConsumer`` now inherits from ``AsyncWebsocketConsumer`` rather than
``WebsocketConsumer``.
* All methods are ``async def`` rather than just ``def``.
* ``await`` is used to call asynchronous functions that perform I/O.
* ``async_to_sync`` is no longer needed when calling methods on the channel layer.
Let's verify that the consumer for the ``/ws/chat/ROOM_NAME/`` path still works.
To start the Channels development server, run the following command::
$ python3 manage.py runserver
Open a browser tab to the room page at http://127.0.0.1:8000/chat/lobby/.
Open a second browser tab to the same room page.
In the second browser tab, type the message "hello" and press enter. You should
now see "hello" echoed in the chat log in both the second browser tab and in the
first browser tab.
Now your chat server is fully asynchronous!
This tutorial continues in :doc:`Tutorial 4 </tutorial/part_4>`.
Tutorial Part 4: Automated Testing
==================================
This tutorial begins where :doc:`Tutorial 3 </tutorial/part_3>` left off.
We've built a simple chat server and now we'll create some automated tests for it.
Testing the views
-----------------
To ensure that the chat server keeps working, we will write some tests.
We will write a suite of end-to-end tests using Selenium to control a Chrome web
browser. These tests will ensure that:
* when a chat message is posted then it is seen by everyone in the same room
* when a chat message is posted then it is not seen by anyone in a different room
`Install the Chrome web browser`_, if you do not already have it.
`Install chromedriver`_.
Install Selenium. Run the following command::
$ pip3 install selenium
.. _Install the Chrome web browser: https://www.google.com/chrome/
.. _Install chromedriver: https://sites.google.com/a/chromium.org/chromedriver/getting-started
Create a new file ``chat/tests.py``. Your app directory should now look like::
chat/
__init__.py
consumers.py
routing.py
templates/
chat/
index.html
room.html
tests.py
urls.py
views.py
Put the following code in ``chat/tests.py``::
# chat/tests.py
from channels.testing import ChannelsLiveServerTestCase
from selenium import webdriver
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.support.wait import WebDriverWait
class ChatTests(ChannelsLiveServerTestCase):
serve_static = True # emulate StaticLiveServerTestCase
@classmethod
def setUpClass(cls):
super().setUpClass()
try:
# NOTE: Requires "chromedriver" binary to be installed in $PATH
cls.driver = webdriver.Chrome()
except:
super().tearDownClass()
raise
@classmethod
def tearDownClass(cls):
cls.driver.quit()
super().tearDownClass()
def test_when_chat_message_posted_then_seen_by_everyone_in_same_room(self):
try:
self._enter_chat_room('room_1')
self._open_new_window()
self._enter_chat_room('room_1')
self._switch_to_window(0)
self._post_message('hello')
WebDriverWait(self.driver, 2).until(lambda _:
'hello' in self._chat_log_value,
'Message was not received by window 1 from window 1')
self._switch_to_window(1)
WebDriverWait(self.driver, 2).until(lambda _:
'hello' in self._chat_log_value,
'Message was not received by window 2 from window 1')
finally:
self._close_all_new_windows()
def test_when_chat_message_posted_then_not_seen_by_anyone_in_different_room(self):
try:
self._enter_chat_room('room_1')
self._open_new_window()
self._enter_chat_room('room_2')
self._switch_to_window(0)
self._post_message('hello')
WebDriverWait(self.driver, 2).until(lambda _:
'hello' in self._chat_log_value,
'Message was not received by window 1 from window 1')
self._switch_to_window(1)
self._post_message('world')
WebDriverWait(self.driver, 2).until(lambda _:
'world' in self._chat_log_value,
'Message was not received by window 2 from window 2')
self.assertTrue('hello' not in self._chat_log_value,
'Message was improperly received by window 2 from window 1')
finally:
self._close_all_new_windows()
# === Utility ===
def _enter_chat_room(self, room_name):
self.driver.get(self.live_server_url + '/chat/')
ActionChains(self.driver).send_keys(room_name + '\n').perform()
WebDriverWait(self.driver, 2).until(lambda _:
room_name in self.driver.current_url)
def _open_new_window(self):
self.driver.execute_script('window.open("about:blank", "_blank");')
self.driver.switch_to_window(self.driver.window_handles[-1])
def _close_all_new_windows(self):
while len(self.driver.window_handles) > 1:
self.driver.switch_to_window(self.driver.window_handles[-1])
self.driver.execute_script('window.close();')
if len(self.driver.window_handles) == 1:
self.driver.switch_to_window(self.driver.window_handles[0])
def _switch_to_window(self, window_index):
self.driver.switch_to_window(self.driver.window_handles[window_index])
def _post_message(self, message):
ActionChains(self.driver).send_keys(message + '\n').perform()
@property
def _chat_log_value(self):
return self.driver.find_element_by_css_selector('#chat-log').get_property('value')
Our test suite extends ``ChannelsLiveServerTestCase`` rather than Django's usual
suites for end-to-end tests (``StaticLiveServerTestCase`` or ``LiveServerTestCase``) so
that URLs inside the Channels routing configuration like ``/ws/chat/ROOM_NAME/``
will work inside the suite.
By default, tests run against an in-memory ``sqlite3`` database, which cannot be
shared with the separate server process that ``ChannelsLiveServerTestCase`` starts,
so the tests will not run correctly. We need to tell Django to store the ``sqlite3``
test database on disk instead. Edit the
``mysite/settings.py`` file and add a ``TEST`` argument to the ``DATABASES`` setting::
# mysite/settings.py
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
'TEST': {
'NAME': os.path.join(BASE_DIR, 'db_test.sqlite3')
}
}
}
To run the tests, run the following command::
$ python3 manage.py test chat.tests
You should see output that looks like::
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
..
----------------------------------------------------------------------
Ran 2 tests in 5.014s
OK
Destroying test database for alias 'default'...
You now have a tested chat server!
What's next?
------------
Congratulations! You've fully implemented a chat server, made it performant by
writing it in asynchronous style, and written automated tests to ensure it won't
break.
This is the end of the tutorial. At this point you should know enough to start
an app of your own that uses Channels and start fooling around.
As you need to learn new tricks, come back to the rest of the documentation.
channels-2.4.0/loadtesting/ 0000775 0000000 0000000 00000000000 13576505155 0015641 5 ustar 00root root 0000000 0000000 channels-2.4.0/loadtesting/2016-09-06/ 0000775 0000000 0000000 00000000000 13576505155 0016702 5 ustar 00root root 0000000 0000000 channels-2.4.0/loadtesting/2016-09-06/README.rst 0000664 0000000 0000000 00000010673 13576505155 0020400 0 ustar 00root root 0000000 0000000 Django Channels Load Testing Results for (2016-09-06)
=====================================================
The goal of these load tests is to see how Channels performs with normal HTTP traffic under heavy load.
In order to handle WebSockets, Channels introduced ASGI, a new interface spec for asynchronous request handling. Also,
Channels implemented this spec with Daphne--an HTTP, HTTP2, and WebSocket protocol server.
The load testing completed has been to compare how well Daphne using 1 worker performs with normal HTTP traffic in
comparison to a WSGI HTTP server. Gunincorn was chosen as its configuration was simple and well-understood.
Summary of Results
~~~~~~~~~~~~~~~~~~
Daphne is not as efficient as its WSGI counterpart. Considering only latency, Daphne can have 10 times the latency
of gunicorn under the same traffic load. Considering only throughput, Daphne can reach 40-50% of gunicorn's total
throughput while still running at 2 times the latency.
The results should not be surprising considering the overhead involved. However, these results represent the simplest
case to test and should not be read as saying that Daphne is always slower than a WSGI server. These results are
a starting point, not a final conclusion.
Some additional things that should be tested:
- More than 1 worker
- A separate server for redis
- Comparison to other WebSocket servers, such as Node's socket.io or Rails' Action Cable
Methodology
~~~~~~~~~~~
In order to control for variances, several measures were taken:
- the same testing tool, ``loadtest``, was used across all tests
- all target machines were identical
- all target code variances were separated into appropriate files in the dir of /testproject in this repo
- all target config variances necessary to the different setups were controlled by supervisord so that human error was limited
- across different test types, the same target machines were used, using the same target code and the same target config
- several tests were run for each setup and test type
Setups
~~~~~~
3 setups were used for this set of tests:
1) Normal Django with Gunicorn (19.6.0)
2) Django Channels with local Redis (0.14.0) and Daphne (0.14.3)
3) Django Channels with IPC (1.1.0) and Daphne (0.14.3)
Latency
~~~~~~~
All target and sources machines were identical ec2 instances m3.2xlarge running Ubuntu 16.04.
In order to ensure that the same number of requests were sent, the rps flag was set to 300.
.. image:: channels-latency.PNG
Throughput
~~~~~~~~~~
The same source machine was used for all tests: ec2 instance m3.large running Ubuntu 16.04.
All target machines were identical ec2 instances m3.2xlarge running Ubuntu 16.04.
For the following tests, loadtest was permitted to autothrottle so as to limit errors; this led to varied latency times.
Gunicorn had a latency of 6 ms; Daphne and Redis, 12 ms; Daphne and IPC, 35 ms.
.. image:: channels-throughput.PNG
Supervisor Configs
~~~~~~~~~~~~~~~~~~
**Gunicorn (19.6.0)**
This is the non-channels config. It's a standard Django environment on one machine, using gunicorn to handle requests.
.. code-block:: bash
[program:gunicorn]
command = gunicorn testproject.wsgi_no_channels -b 0.0.0.0:80
directory = /srv/channels/testproject/
user = root
[group:django_http]
programs=gunicorn
priority=999
**Redis (0.14.0) and Daphne (0.14.3)**
This is the channels config using Redis as the backend. It's on one machine, so it uses a local Redis config.
Also, it runs a single worker, not multiple, as that's the default config.
.. code-block:: bash
[program:daphne]
command = daphne -b 0.0.0.0 -p 80 testproject.asgi:channel_layer
directory = /srv/channels/testproject/
user = root
[program:worker]
command = python manage.py runworker
directory = /srv/channels/testproject/
user = django-channels
[group:django_channels]
programs=daphne,worker
priority=999
**IPC (1.1.0) and Daphne (0.14.3)**
This is the channels config using IPC (Inter Process Communication). It's only possible to have this work on one machine.
.. code-block:: bash
[program:daphne]
command = daphne -b 0.0.0.0 -p 80 testproject.asgi_for_ipc:channel_layer
directory = /srv/channels/testproject/
user = root
[program:worker]
command = python manage.py runworker --settings=testproject.settings.channels_ipc
directory = /srv/channels/testproject/
user = root
[group:django_channels]
programs=daphne,worker
priority=999