--- pika-1.2.0 (source archive, git commit 0537c8f3be313d1e408c5bdfa85c186de175d618) ---

--- pika-1.2.0/.checkignore ---

**/docs
**/examples
**/test
**/utils
setup.py

--- pika-1.2.0/.codeclimate.yml ---

languages:
  - python
exclude_paths:
  - docs/*
  - tests/*
  - utils/*
  - pika/examples/*
  - pika/spec.py

--- pika-1.2.0/.coveragerc ---

[run]
omit = pika/spec.py

--- pika-1.2.0/.github/ISSUE_TEMPLATE.md ---

Thank you for using Pika.

- IMPORTANT -----------------------------------------------------------
  STOP NOW AND READ THIS BEFORE OPENING A NEW ISSUE ON GITHUB
--------------------------------------------------------------------------

Unless you are CERTAIN you have found a reproducible bug in Pika, you must first ask your question, discuss your suspected issue, or propose your new feature on the mailing list:

https://groups.google.com/forum/#!forum/pika-python

Pika's maintainers do NOT use GitHub issues for questions, root cause analysis, conversations, code reviews, etc.

Thank you!

--- pika-1.2.0/.github/PULL_REQUEST_TEMPLATE.md ---

## Proposed Changes

Please describe the big picture of your changes here to communicate to the Pika team why we should accept this pull request. If it fixes a bug or resolves a feature request, be sure to link to that issue.

A pull request that doesn't explain **why** the change was made has a much lower chance of being accepted.
If English isn't your first language, don't worry about it and try to communicate the problem you are trying to solve to the best of your abilities. As long as we can understand the intent, it's all good.

## Types of Changes

What types of changes does your code introduce to this project? _Put an `x` in the boxes that apply._

- [ ] Bugfix (non-breaking change which fixes issue #NNNN)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation (correction or otherwise)
- [ ] Cosmetics (whitespace, appearance)

## Checklist

_Put an `x` in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask on the [`pika-python`](https://groups.google.com/forum/#!forum/pika-python) mailing list. We're here to help! This is simply a reminder of what we are going to look for before merging your code._

- [ ] I have read the `CONTRIBUTING.md` document
- [ ] All tests pass locally with my changes
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation (if appropriate)

## Further Comments

If this is a relatively large or complex change, kick off the discussion by explaining why you chose the solution you did and what alternatives you considered, etc.

--- pika-1.2.0/.gitignore ---

*.pyc
*~
.idea
.coverage
.tox
.DS_Store
.python-version
.pytest_cache/
pika.iml
codegen
pika.egg-info
debug/
examples/pika
examples/blocking/pika
atlassian*xml
build
dist
docs/_build
venv*/
env/
testdata/*.conf

--- pika-1.2.0/.travis.yml ---

language: python

# Turn on sudo mode to facilitate the IPv6 workaround per
# github.com/travis-ci/travis-ci/issues/8711.
# See also the related reference in the before_script section.
sudo: true

addons:
  apt:
    sources:
      - sourceline: deb https://packages.erlang-solutions.com/ubuntu trusty contrib
        key_url: https://packages.erlang-solutions.com/ubuntu/erlang_solutions.asc
    packages:
      - erlang-nox

env:
  global:
    - RABBITMQ_VERSION=3.8.11
    - RABBITMQ_DOWNLOAD_URL="https://github.com/rabbitmq/rabbitmq-server/releases/download/v$RABBITMQ_VERSION/rabbitmq-server-generic-unix-$RABBITMQ_VERSION.tar.xz"
    - RABBITMQ_TAR="rabbitmq-$RABBITMQ_VERSION.tar.xz"
    - PATH=$HOME/.local/bin:$PATH
    - AWS_DEFAULT_REGION=us-east-1
    - secure: "Eghft2UgJmWuCgnqz6O+KV5F9AERzUbKIeXkcw7vsFAVdkB9z01XgqVLhQ6N+n6i8mkiRDkc0Jes6htVtO4Hi6lTTFeDhu661YCXXTFdRdsx+D9v5bgw8Q2bP41xFy0iao7otYqkzFKIo32Q2cUYzMUqXlS661Yai5DXldr3mjM="
    - secure: "LjieH/Yh0ng5gwT6+Pl3rL7RMxxb/wOlogoLG7cS99XKdX6N4WRVFvWbHWwCxoVr0be2AcyQynu4VOn+0jC8iGfQjkJZ7UrJjZCDGWbNjAWrNcY0F9VdretFDy8Vn2sHfBXq8fINqszJkgTnmbQk8dZWUtj0m/RNVnOBeBcsIOU="

stages:
  - test
  - name: coverage
    if: repo = pika/pika
  - name: deploy
    if: tag IS present

cache:
  apt: true
  directories:
    - $HOME/.cache

install:
  - pip install -r test-requirements.txt
  - pip install awscli
  - if [ ! -d "$HOME/.cache" ]; then mkdir "$HOME/.cache"; fi
  - if [ -s "$HOME/.cache/$RABBITMQ_TAR" ]; then echo "[INFO] found cached $RABBITMQ_TAR file"; else wget -O "$HOME/.cache/$RABBITMQ_TAR" "$RABBITMQ_DOWNLOAD_URL"; fi
  - tar -C "$TRAVIS_BUILD_DIR" -xvf "$HOME/.cache/$RABBITMQ_TAR"
  - sed -e "s#PIKA_DIR#$TRAVIS_BUILD_DIR#g" "$TRAVIS_BUILD_DIR/testdata/rabbitmq.conf.in" > "$TRAVIS_BUILD_DIR/testdata/rabbitmq.conf"

before_script:
  # Enable IPv6 for our tests - see github.com/travis-ci/travis-ci/issues/8711
  - echo 0 | sudo tee /proc/sys/net/ipv6/conf/all/disable_ipv6
  - pip freeze
  - /bin/sh -c "RABBITMQ_PID_FILE=$TRAVIS_BUILD_DIR/rabbitmq.pid RABBITMQ_CONFIG_FILE=$TRAVIS_BUILD_DIR/testdata/rabbitmq $TRAVIS_BUILD_DIR/rabbitmq_server-$RABBITMQ_VERSION/sbin/rabbitmq-server &"
  - /bin/sh "$TRAVIS_BUILD_DIR/rabbitmq_server-$RABBITMQ_VERSION/sbin/rabbitmqctl" wait "$TRAVIS_BUILD_DIR/rabbitmq.pid"
  - /bin/sh "$TRAVIS_BUILD_DIR/rabbitmq_server-$RABBITMQ_VERSION/sbin/rabbitmqctl" status

script:
  # See https://github.com/travis-ci/travis-ci/issues/1066 and
  # https://github.com/pika/pika/pull/984#issuecomment-370565220
  # as to why 'set -e' and 'set +e' are added here
  - set -e
  - nosetests
  - PIKA_TEST_TLS=true nosetests
  - set +e

after_success:
  - aws s3 cp .coverage "s3://com-gavinroy-travis/pika/$TRAVIS_BUILD_NUMBER/.coverage.${TRAVIS_PYTHON_VERSION}"

jobs:
  include:
    - python: pypy3
    - python: 3.8
    - python: 3.9
    - stage: coverage
      if: fork = false OR type != pull_request
      python: 3.8
      services: []
      install:
        - pip install awscli coverage codecov
      before_script: []
      script:
        - mkdir coverage
        - aws s3 cp --recursive s3://com-gavinroy-travis/pika/$TRAVIS_BUILD_NUMBER/ coverage
        - cd coverage
        - coverage combine
        - cd ..
        - mv coverage/.coverage .
        - coverage report
      after_success: codecov
    - stage: deploy
      if: repo = pika/pika
      python: 3.8
      services: []
      install: true
      before_script: []
      script: true
      after_success: []
      deploy:
        distributions: sdist bdist_wheel
        provider: pypi
        user: crad
        on:
          tags: true
          all_branches: true
        password:
          secure: "V/JTU/X9C6uUUVGEAWmWWbmKW7NzVVlC/JWYpo05Ha9c0YV0vX4jOfov2EUAphM0WwkD/MRhz4dq3kCU5+cjHxR3aTSb+sbiElsCpaciaPkyrns+0wT5MCMO29Lpnq2qBLc1ePR1ey5aTWC/VibgFJOL7H/3wyvukL6ZaCnktYk="

--- pika-1.2.0/CHANGELOG.rst ---

Version History
===============

1.2.0 2021-02-04
----------------

`GitHub milestone `_

1.1.0 2019-07-16
----------------

`GitHub milestone `_

1.0.1 2019-04-12
----------------

`GitHub milestone `_

- API docstring updates
- Twisted adapter: Add basic_consume Deferred to the call list (`PR `_)

1.0.0 2019-03-26
----------------

`GitHub milestone `_

- ``AsyncioConnection``, ``TornadoConnection`` and ``TwistedProtocolConnection`` are no longer auto-imported (`PR `_)
- ``BlockingConnection.consume`` now returns ``(None, None, None)`` when the inactivity timeout is reached (`PR `_)
- Python 3.7 support (`Issue `_)
- ``all_channels`` parameter of the ``Channel.basic_qos`` method renamed to ``global_qos``
- ``global_`` parameter of the ``Basic.Qos`` spec class renamed to ``global_qos``
- **NOTE:** ``heartbeat_interval`` is removed; use ``heartbeat`` instead.
- **NOTE:** The ``backpressure_detection`` option of ``ConnectionParameters`` and ``URLParameters`` is REMOVED in favor of ``Connection.Blocked`` and ``Connection.Unblocked``. See ``Connection.add_on_connection_blocked_callback``.
- **NOTE:** The legacy ``basic_publish`` method is removed, and ``publish`` is renamed to ``basic_publish``
- **NOTE**: The signature of the following methods has changed from Pika 0.13.0.
  In general, the callback parameter that indicates completion of the method has been moved to the end of the parameter list, to be consistent with other parts of Pika's API and with other libraries in general. The affected methods are:

  - ``basic_cancel``
  - ``basic_consume``
  - ``basic_get``
  - ``basic_qos``
  - ``basic_recover``
  - ``confirm_delivery``
  - ``exchange_bind``
  - ``exchange_declare``
  - ``exchange_delete``
  - ``exchange_unbind``
  - ``flow``
  - ``queue_bind``
  - ``queue_declare``
  - ``queue_delete``
  - ``queue_purge``
  - ``queue_unbind``

**IMPORTANT**: When specifying TLS / SSL options, the ``SSLOptions`` class must be used; a ``dict`` is no longer supported.

0.13.1 2019-02-04
-----------------

`GitHub milestone `_

0.13.0 2019-01-17
-----------------

`GitHub milestone `_

0.12.0 2018-06-19
-----------------

`GitHub milestone `_

This is an interim release prior to version `1.0.0`. It includes the following backported pull requests and commits from the `master` branch:

- `PR #901 `_
- `PR #908 `_
- `PR #910 `_
- `PR #918 `_
- `PR #920 `_
- `PR #924 `_
- `PR #937 `_
- `PR #938 `_
- `PR #933 `_
- `PR #940 `_
- `PR #932 `_
- `PR #928 `_
- `PR #934 `_
- `PR #915 `_
- `PR #946 `_
- `PR #947 `_
- `PR #952 `_
- `PR #956 `_
- `PR #966 `_
- `PR #975 `_
- `PR #978 `_
- `PR #981 `_
- `PR #994 `_
- `PR #1007 `_
- `PR #1045 `_ (manually backported)
- `PR #1011 `_

Commits:

- 3f0e739 - Travis CI fail fast

New features:

``BlockingConnection.consume`` now returns ``(None, None, None)`` when the inactivity timeout is reached (`PR `_)

``BlockingConnection`` now supports the ``add_callback_threadsafe`` method, which allows a function to be executed correctly on the IO loop thread.
The main use-case for this is as follows:

- The application sets up a thread for ``BlockingConnection`` and calls ``basic_consume`` on it
- When a message is received, the work is done on another thread
- When the work is done, the worker uses ``connection.add_callback_threadsafe`` to call the ``basic_ack`` method on the channel instance

Please see ``examples/basic_consumer_threaded.py`` for an example. As always, ``SelectConnection`` and a fully async consumer/publisher is the preferred way of using Pika.

Heartbeats are now sent at an interval equal to 1/2 of the negotiated idle connection timeout. RabbitMQ's default timeout value is 60 seconds, so heartbeats will be sent at a 30-second interval. In addition, Pika's check for an idle connection is done at an interval equal to the timeout value plus 5 seconds to allow for delays. This results in an interval of 65 seconds by default.

0.11.2 2017-11-30
-----------------

`GitHub milestone `_

`0.11.2 `_

- Remove `+` character from platform releases string (`PR `_)

0.11.1 2017-11-27
-----------------

`GitHub milestone `_

`0.11.1 `_

- Fix `BlockingConnection` to ensure event loop exits (`PR `_)
- Heartbeat timeouts will use the client value if specified (`PR `_)
- Allow setting some common TCP options (`PR `_)
- Errors when decoding Unicode are ignored (`PR `_)
- Fix large number encoding (`PR `_)

0.11.0 2017-07-29
-----------------

`GitHub milestone `_

`0.11.0 `_

- Simplify Travis CI configuration for OS X.
- Add `asyncio` connection adapter for Python 3.4 and newer.
- Connection failures that occur after the socket is opened and before the AMQP connection is ready to go are now reported by calling the connection error callback. Previously these were not consistently reported.
- In BaseConnection.close, call _handle_ioloop_stop only if the connection is already closed, to allow the asynchronous close operation to complete gracefully.
- Pass error information from a failed socket connection to the user callbacks on_open_error_callback and on_close_callback with result_code=-1.
- ValueError is raised when a completion callback is passed to an asynchronous (nowait) Channel operation. It's an application error to pass a non-None completion callback with an asynchronous request, because this callback can never be serviced in the asynchronous scenario.
- `Channel.basic_reject` fixed to allow `delivery_tag` to be of type `long` as well as `int`. (by quantum5)
- Implemented support for blocked connection timeouts in `pika.connection.Connection`. This feature is available to all pika adapters. See the `pika.connection.ConnectionParameters` docstring to learn more about `blocked_connection_timeout` configuration.
- Deprecated the `heartbeat_interval` arg in `pika.ConnectionParameters` in favor of the `heartbeat` arg for consistency with the other connection parameters classes `pika.connection.Parameters` and `pika.URLParameters`.
- When the `port` arg is not set explicitly in the `ConnectionParameters` constructor, but the `ssl` arg is set explicitly, then set the port value to the default AMQP SSL port if SSL is enabled, otherwise to the default AMQP plaintext port.
- `URLParameters` will raise ValueError if a non-empty URL scheme other than {amqp | amqps | http | https} is specified.
- `InvalidMinimumFrameSize` and `InvalidMaximumFrameSize` exceptions are deprecated. The pika.connection.Parameters.frame_max property setter now raises the standard `ValueError` exception when the value is out of bounds.
- Removed the deprecated parameter `type` in `Channel.exchange_declare` and `BlockingChannel.exchange_declare` in favor of the `exchange_type` arg that doesn't overshadow the builtin `type` keyword.
- Channel.close() on an OPENING channel transitions it to CLOSING instead of raising ChannelClosed.
- Channel.close() on a CLOSING channel raises `ChannelAlreadyClosing`; it used to raise `ChannelClosed`.
- Connection.channel() raises `ConnectionClosed` if the connection is not in the OPEN state.
- When performing a graceful close on a channel and `Channel.Close` from the broker arrives while waiting for CloseOk, don't release the channel number until CloseOk arrives, to avoid a race condition that may lead to a new channel receiving the CloseOk that was destined for the closing channel.
- The `backpressure_detection` option of `ConnectionParameters` and `URLParameters` is DEPRECATED in favor of `Connection.Blocked` and `Connection.Unblocked`. See `Connection.add_on_connection_blocked_callback`.

0.10.0 2015-09-02
-----------------

`0.10.0 `_

- a9bf96d - LibevConnection: Fixed dict chgd size during iteration (Michael Laing)
- 388c55d - SelectConnection: Fixed KeyError exceptions in IOLoop timeout executions (Shinji Suzuki)
- 4780de3 - BlockingConnection: Add support to make BlockingConnection a Context Manager (@reddec)

0.10.0b2 2015-07-15
-------------------

- f72b58f - Fixed failure to purge _ConsumerCancellationEvt from BlockingChannel._pending_events during basic_cancel. (Vitaly Kruglikov)

0.10.0b1 2015-07-10
-------------------

High-level summary of notable changes:

- Change to 3-Clause BSD License
- Python 3.x support
- Over 150 commits from 19 contributors
- Refactoring of SelectConnection ioloop
- This major release contains certain non-backward-compatible API changes as well as significant performance improvements in the `BlockingConnection` adapter.
- Non-backward-compatible changes in `Channel.add_on_return_callback` callback's signature.
- The `AsyncoreConnection` adapter was retired

**Details**

Python 3.x: this release introduces Python 3.x support. Tested on Python 3.3 and 3.4.

`AsyncoreConnection`: Retired this legacy adapter to reduce maintenance burden; the recommended replacement is the `SelectConnection` adapter.

`SelectConnection`: ioloop was refactored for compatibility with other ioloops.
`Channel.add_on_return_callback`: The callback is now passed the individual parameters channel, method, properties, and body instead of a tuple of those values, for congruence with other similar callbacks.

`BlockingConnection`: This adapter underwent a makeover under the hood and gained significant performance improvements as well as enhanced timer resolution. It is now implemented as a client of the `SelectConnection` adapter.

Below is an overview of the `BlockingConnection` and `BlockingChannel` API changes:

- Recursion: the new implementation eliminates callback recursion that sometimes blew out the stack in the legacy implementation (e.g., publish -> consumer_callback -> publish -> consumer_callback, etc.). While `BlockingConnection.process_data_events` and `BlockingConnection.sleep` may still be called from the scope of the blocking adapter's callbacks in order to process pending I/O, additional callbacks will be suppressed whenever `BlockingConnection.process_data_events` and `BlockingConnection.sleep` are nested in any combination; in that case, the callback information will be buffered and dispatched once nesting unwinds and control returns to the level-zero dispatcher.
- `BlockingConnection.connect`: this method was removed in favor of the constructor as the only way to establish connections; this reduces maintenance burden while improving reliability of the adapter.
- `BlockingConnection.process_data_events`: added the optional parameter `time_limit`.
- `BlockingConnection.add_on_close_callback`: removed; legacy raised `NotImplementedError`.
- `BlockingConnection.add_on_open_callback`: removed; legacy raised `NotImplementedError`.
- `BlockingConnection.add_on_open_error_callback`: removed; legacy raised `NotImplementedError`.
- `BlockingConnection.add_backpressure_callback`: not supported
- `BlockingConnection.set_backpressure_multiplier`: not supported
- `BlockingChannel.add_on_flow_callback`: not supported; per the docstring in channel.py: "Note that newer versions of RabbitMQ will not issue this but instead use TCP backpressure".
- `BlockingChannel.flow`: not supported
- `BlockingChannel.force_data_events`: removed, as it is no longer necessary following the redesign of the adapter.
- Removed the `nowait` parameter from `BlockingChannel` methods, forcing `nowait=False` (former API default) in the implementation; this is more suitable for the blocking nature of the adapter and its error-reporting strategy; this concerns the following methods: `basic_cancel`, `confirm_delivery`, `exchange_bind`, `exchange_declare`, `exchange_delete`, `exchange_unbind`, `queue_bind`, `queue_declare`, `queue_delete`, and `queue_purge`.
- `BlockingChannel.basic_cancel`: returns a sequence instead of None; for a `no_ack=True` consumer, `basic_cancel` returns a sequence of pending messages that arrived before the broker confirmed the cancellation.
- `BlockingChannel.consume`: added new optional kwargs `arguments` and `inactivity_timeout`. Also raises ValueError if the consumer-creation parameters don't match those used to create the existing queue consumer generator, if any; this happens when you break out of the consume loop, then call `BlockingChannel.consume` again with different consumer-creation args without first cancelling the previous queue consumer generator via `BlockingChannel.cancel`. The legacy implementation would silently resume consuming from the existing queue consumer generator even if the subsequent `BlockingChannel.consume` was invoked with a different queue name, etc.
- `BlockingChannel.cancel`: returns 0; the legacy implementation tried to return the number of requeued messages, but this number was not accurate, as it didn't include the messages returned by the Channel class; this count is not generally useful, so returning 0 is a reasonable replacement.
- `BlockingChannel.open`: removed in favor of having a single mechanism for creating a channel (`BlockingConnection.channel`); this reduces maintenance burden while improving reliability of the adapter.
- `BlockingChannel.confirm_delivery`: raises UnroutableError when unroutable messages that were sent prior to this call are returned before we receive Confirm.Select-ok.
- `BlockingChannel.basic_publish`: always returns True when delivery confirmation is not enabled (publisher-acks = off); the legacy implementation returned a bool in this case if `mandatory=True` to indicate whether the message was delivered; however, this was non-deterministic, because Basic.Return is asynchronous and there is no way to know how long to wait for it or its absence. The legacy implementation returned None when publishing with publisher-acks = off and `mandatory=False`. The new implementation always returns True when publishing while publisher-acks = off.
- `BlockingChannel.publish`: a new alternate method (vs. `basic_publish`) for publishing a message with more detailed error reporting via UnroutableError and NackError exceptions.
- `BlockingChannel.start_consuming`: raises pika.exceptions.RecursionError if called from the scope of a `BlockingConnection` or `BlockingChannel` callback.
- `BlockingChannel.get_waiting_message_count`: new method; returns the number of messages that may be retrieved from the current queue consumer generator via `BasicChannel.consume` without blocking.

**Commits**

- 5aaa753 - Fixed SSL import and removed no_ack=True in favor of explicit AMQP message handling based on deferreds (skftn)
- 7f222c2 - Add checkignore for codeclimate (Gavin M. Roy)
- 4dec370 - Implemented BlockingChannel.flow; Implemented BlockingConnection.add_on_connection_blocked_callback; Implemented BlockingConnection.add_on_connection_unblocked_callback. (Vitaly Kruglikov)
- 4804200 - Implemented blocking adapter acceptance test for exchange-to-exchange binding. Added rudimentary validation of BasicProperties passthru in blocking adapter publish tests. Updated CHANGELOG. (Vitaly Kruglikov)
- 4ec07fd - Fixed sending of data in TwistedProtocolConnection (Vitaly Kruglikov)
- a747fb3 - Remove my copyright from forward_server.py test utility. (Vitaly Kruglikov)
- 94246d2 - Return True from basic_publish when pubacks is off. Implemented more blocking adapter acceptance tests. (Vitaly Kruglikov)
- 3ce013d - PIKA-609 Wait for broker to dispatch all messages to client before cancelling consumer in TestBasicCancelWithNonAckableConsumer and TestBasicCancelWithAckableConsumer (Vitaly Kruglikov)
- 293f778 - Created CHANGELOG entry for release 0.10.0. Fixed up callback documentation for basic_get, basic_consume, and add_on_return_callback. (Vitaly Kruglikov)
- 16d360a - Removed the legacy AsyncoreConnection adapter in favor of the recommended SelectConnection adapter. (Vitaly Kruglikov)
- 240a82c - Defer creation of poller's event loop interrupt socket pair until start is called, because some SelectConnection users (e.g., BlockingConnection adapter) don't use the event loop, and these sockets would just get reported as resource leaks. (Vitaly Kruglikov)
- aed5cae - Added EINTR loops in select_connection pollers. Addressed some pylint findings, including an error or two. Wrap socket.send and socket.recv calls in EINTR loops. Use the correct exception for socket.error and select.error and get errno depending on Python version.
  (Vitaly Kruglikov)
- 498f1be - Allow passing exchange, queue and routing_key as text, handle short strings as text in python3 (saarni)
- 9f7f243 - Restored basic_consume, basic_cancel, and add_on_cancel_callback (Vitaly Kruglikov)
- 18c9909 - Reintroduced BlockingConnection.process_data_events. (Vitaly Kruglikov)
- 4b25cb6 - Fixed BlockingConnection/BlockingChannel acceptance and unit tests (Vitaly Kruglikov)
- bfa932f - Facilitate proper connection state after BasicConnection._adapter_disconnect (Vitaly Kruglikov)
- 9a09268 - Fixed BlockingConnection test that was failing with ConnectionClosed error. (Vitaly Kruglikov)
- 5a36934 - Copied synchronous_connection.py from pika-synchronous branch. Fixed pylint findings. Integrated SynchronousConnection with the new ioloop in SelectConnection. Defined dedicated message classes PolledMessage and ConsumerMessage and moved them from BlockingChannel to module-global scope. Got rid of nowait args from BlockingChannel public API methods. Signal unroutable messages via UnroutableError exception. Signal Nack'ed messages via NackError exception. These expose more information about the failure than the legacy basic_publish API. Removed set_timeout and backpressure callback methods. Restored legacy `is_open`, etc. property names (Vitaly Kruglikov)
- 6226dc0 - Remove deprecated --use-mirrors (Gavin M. Roy)
- 1a7112f - Raise ConnectionClosed when sending a frame with no connection (#439) (Gavin M. Roy)
- 9040a14 - Make delivery_tag non-optional (#498) (Gavin M. Roy)
- 86aabc2 - Bump version (Gavin M. Roy)
- 562075a - Update a few testing things (Gavin M. Roy)
- 4954d38 - use unicode_type in blocking_connection.py (Antti Haapala)
- 133d6bc - Let Travis install ordereddict for Python 2.6, and test 3.3, 3.4 too. (Antti Haapala)
- 0d2287d - Pika Python 3 support (Antti Haapala)
- 3125c79 - SSLWantRead is not supported before python 2.7.9 and 3.3 (Will)
- 9a9c46c - Fixed TestDisconnectDuringConnectionStart: it turns out that depending on callback order, it might get either ProbableAuthenticationError or ProbableAccessDeniedError. (Vitaly Kruglikov)
- cd8c9b0 - A fix for the write starvation problem that we see with tornado and pika (Will)
- 8654fbc - SelectConnection - make interrupt socketpair non-blocking (Will)
- 4f3666d - Added copyright in forward_server.py and fixed NameError bug (Vitaly Kruglikov)
- f8ebbbc - ignore docs (Gavin M. Roy)
- a344f78 - Updated codeclimate config (Gavin M. Roy)
- 373c970 - Try and fix pathing issues in codeclimate (Gavin M. Roy)
- 228340d - Ignore codegen (Gavin M. Roy)
- 4db0740 - Add a codeclimate config (Gavin M. Roy)
- 7e989f9 - Slight code re-org, usage comment and better naming of test file. (Will)
- 287be36 - Set up _kqueue member of KQueuePoller before calling super constructor to avoid exception due to missing _kqueue member. Call `self._map_event(event)` instead of `self._map_event(event.filter)`, because `KQueuePoller._map_event()` assumes it's getting an event, not an event filter. (Vitaly Kruglikov)
- 62810fb - Fix issue #412: reset BlockingConnection._read_poller in BlockingConnection._adapter_disconnect() to guard against accidental access to old file descriptor. (Vitaly Kruglikov)
- 03400ce - Rationalise adapter acceptance tests (Will)
- 9414153 - Fix bug selecting non epoll poller (Will)
- 4f063df - Use user heartbeat setting if server proposes none (Pau Gargallo)
- 9d04d6e - Deactivate heartbeats when heartbeat_interval is 0 (Pau Gargallo)
- a52a608 - Bug fix and review comments. (Will)
- e3ebb6f - Fix incorrect x-expires argument in acceptance tests (Will)
- 294904e - Get BlockingConnection into consistent state upon loss of TCP/IP connection with broker and implement acceptance tests for those cases.
  (Vitaly Kruglikov)
- 7f91a68 - Make SelectConnection behave like an ioloop (Will)
- dc9db2b - Perhaps 5 seconds is too aggressive for travis (Gavin M. Roy)
- c23e532 - Lower the stuck test timeout (Gavin M. Roy)
- 1053ebc - Late night bug (Gavin M. Roy)
- cd6c1bf - More BaseConnection._handle_error cleanup (Gavin M. Roy)
- a0ff21c - Fix the test to work with Python 2.6 (Gavin M. Roy)
- 748e8aa - Remove pypy for now (Gavin M. Roy)
- 1c921c1 - Socket close/shutdown cleanup (Gavin M. Roy)
- 5289125 - Formatting update from PR (Gavin M. Roy)
- d235989 - Be more specific when calling getaddrinfo (Gavin M. Roy)
- b5d1b31 - Reflect the method name change in pika.callback (Gavin M. Roy)
- df7d3b7 - Cleanup BlockingConnection in a few places (Gavin M. Roy)
- cd99e1c - Rename method due to use in BlockingConnection (Gavin M. Roy)
- 7e0d1b3 - Use google style with yapf instead of pep8 (Gavin M. Roy)
- 7dc9bab - Refactor socket writing to not use sendall #481 (Gavin M. Roy)
- 4838789 - Dont log the fd #521 (Gavin M. Roy)
- 765107d - Add Connection.Blocked callback registration methods #476 (Gavin M. Roy)
- c15b5c1 - Fix _blocking typo pointed out in #513 (Gavin M. Roy)
- 759ac2c - yapf of codegen (Gavin M. Roy)
- 9dadd77 - yapf cleanup of codegen and spec (Gavin M. Roy)
- ddba7ce - Do not reject consumers with no_ack=True #486 #530 (Gavin M. Roy)
- 4528a1a - yapf reformatting of tests (Gavin M. Roy)
- e7b6d73 - Remove catching AttributeError (#531) (Gavin M. Roy)
- 41ea5ea - Update README badges [skip ci] (Gavin M. Roy)
- 6af987b - Add note on contributing (Gavin M. Roy)
- 161fc0d - yapf formatting cleanup (Gavin M. Roy)
- edcb619 - Add PYPY to travis testing (Gavin M. Roy)
- 2225771 - Change the coverage badge (Gavin M. Roy)
- 8f7d451 - Move to codecov from coveralls (Gavin M. Roy)
- b80407e - Add confirm_delivery to example (Andrew Smith)
- 6637212 - Update base_connection.py (bstemshorn)
- 1583537 - #544 get_waiting_message_count() (markcf)
- 0c9be99 - Fix #535: pass expected reply_code and reply_text from method frame to Connection._on_disconnect from Connection._on_connection_closed (Vitaly Kruglikov)
- d11e73f - Propagate ConnectionClosed exception out of BlockingChannel._send_method() and log ConnectionClosed in BlockingConnection._on_connection_closed() (Vitaly Kruglikov)
- 63d2951 - Fix #541 - make sure connection state is properly reset when BlockingConnection._check_state_on_disconnect raises ConnectionClosed. This supplements the previously-merged PR #450 by getting the connection into consistent state. (Vitaly Kruglikov)
- 71bc0eb - Remove unused self.fd attribute from BaseConnection (Vitaly Kruglikov)
- 8c08f93 - PIKA-532 Removed unnecessary params (Vitaly Kruglikov)
- 6052ecf - PIKA-532 Fix bug in BlockingConnection._handle_timeout that was preventing _on_connection_closed from being called when not closing. (Vitaly Kruglikov)
- 562aa15 - pika: callback: Display exception message when callback fails. (Stuart Longland)
- 452995c - Typo fix in connection.py (Andrew)
- 361c0ad - Added some missing yields (Robert Weidlich)
- 0ab5a60 - Added complete example for python twisted service (Robert Weidlich)
- 4429110 - Add deployment and webhooks (Gavin M. Roy)
- 7e50302 - Fix has_content style in codegen (Andrew Grigorev)
- 28c2214 - Fix the trove categorization (Gavin M. Roy)
- de8b545 - Ensure frames can not be interspersed on send (Gavin M. Roy)
- 8fe6bdd - Fix heartbeat behaviour after connection failure.
  (Kyösti Herrala)
- c123472 - Updating BlockingChannel.basic_get doc (it does not receive a callback like the rest of the adapters) (Roberto Decurnex)
- b5f52fb - Fix number of arguments passed to _on_return callback (Axel Eirola)
- 765139e - Lower default TIMEOUT to 0.01 (bra-fsn)
- 6cc22a5 - Fix confirmation on reconnects (bra-fsn)
- f4faf0a - asynchronous publisher and subscriber examples refactored to follow the StepDown rule (Riccardo Cirimelli)

0.9.14 - 2014-07-11
-------------------

`0.9.14 `_

- 57fe43e - fix test to generate a correct range of random ints (ml)
- 0d68dee - fix async watcher for libev_connection (ml)
- 01710ad - Use default username and password if not specified in URLParameters (Sean Dwyer)
- fae328e - documentation typo (Jeff Fein-Worton)
- afbc9e0 - libev_connection: reset_io_watcher (ml)
- 24332a2 - Fix the manifest (Gavin M. Roy)
- acdfdef - Remove useless test (Gavin M. Roy)
- 7918e1a - Skip libev tests if pyev is not installed or if they are being run in pypy (Gavin M. Roy)
- bb583bf - Remove the deprecated test (Gavin M. Roy)
- aecf3f2 - Don't reject a message if the channel is not open (Gavin M. Roy)
- e37f336 - Remove UTF-8 decoding in spec (Gavin M. Roy)
- ddc35a9 - Update the unittest to reflect removal of force binary (Gavin M. Roy)
- fea2476 - PEP8 cleanup (Gavin M. Roy)
- 9b97956 - Remove force_binary (Gavin M. Roy)
- a42dd90 - Whitespace required (Gavin M. Roy)
- 85867ea - Update the content_frame_dispatcher tests to reflect removal of auto-cast utf-8 (Gavin M. Roy)
- 5a4bd5d - Remove unicode casting (Gavin M. Roy)
- efea53d - Remove force binary and unicode casting (Gavin M. Roy)
- e918d15 - Add methods to remove deprecation warnings from asyncore (Gavin M. Roy)
- 117f62d - Add a coveragerc to ignore the auto generated pika.spec (Gavin M. Roy)
- 52f4485 - Remove pypy tests from travis for now (Gavin M. Roy)
- c3aa958 - Update README.rst (Gavin M. Roy)
- 3e2319f - Delete README.md (Gavin M. Roy)
- c12b0f1 - Move to RST (Gavin M. Roy)
- 704f5be - Badging updates (Gavin M. Roy)
- 7ae33ca - Update for coverage info (Gavin M. Roy)
- ae7ca86 - add libev_adapter_tests.py; modify .travis.yml to install libev and pyev (ml)
- f86aba5 - libev_connection: add **kwargs to _handle_event; suppress default_ioloop reuse warning (ml)
- 603f1cf - async_test_base: add necessary args to _on_cconn_closed (ml)
- 3422007 - add libev_adapter_tests.py (ml)
- 6cbab0c - removed relative imports and importing urlparse from urllib.parse for py3+ (a-tal)
- f808464 - libev_connection: add async watcher; add optional parameters to add_timeout (ml)
- c041c80 - Remove ev all together for now (Gavin M. Roy)
- 9408388 - Update the test descriptions and timeout (Gavin M. Roy)
- 1b552e0 - Increase timeout (Gavin M. Roy)
- 69a1f46 - Remove the pyev requirement for 2.6 testing (Gavin M. Roy)
- fe062d2 - Update package name (Gavin M. Roy)
- 611ad0e - Distribute the LICENSE and README.md (#350) (Gavin M. Roy)
- df5e1d8 - Ensure that the entire frame is written using socket.sendall (#349) (Gavin M. Roy)
- 69ec8cf - Move the libev install to before_install (Gavin M. Roy)
- a75f693 - Update test structure (Gavin M. Roy)
- 636b424 - Update things to ignore (Gavin M. Roy)
- b538c68 - Add tox, nose.cfg, update testing config (Gavin M. Roy)
- a0e7063 - add some tests to increase coverage of pika.connection (Charles Law)
- c76d9eb - Address issue #459 (Gavin M. Roy)
- 86ad2db - Raise exception if positional arg for parameters isn't an instance of Parameters (Gavin M. Roy)
- 14d08e1 - Fix for python 2.6 (Gavin M. Roy)
- bd388a3 - Use the first unused channel number addressing #404, #460 (Gavin M. Roy)
- e7676e6 - removing a debug that was left in last commit (James Mutton)
- 6c93b38 - Fixing connection-closed behavior to detect on attempt to publish (James Mutton)
- c3f0356 - Initialize bytes_written in _handle_write() (Jonathan Kirsch)
- 4510e95 - Fix _handle_write() may not send full frame (Jonathan Kirsch)
- 12b793f - fixed Tornado Consumer example to successfully reconnect (Yang Yang)
- f074444 - remove forgotten import of ordereddict (Pedro Abranches)
- 1ba0aea - fix last merge (Pedro Abranches)
- 10490a6 - change timeouts structure to list to maintain scheduling order (Pedro Abranches)
- 7958394 - save timeouts in ordered dict instead of dict (Pedro Abranches)
- d2746bf - URLParameters and ConnectionParameters accept unicode strings (Allard Hoeve)
- 596d145 - previous fix for AttributeError made parent and child class methods identical, remove duplication (James Mutton)
- 42940dd - UrlParameters Docs: fixed amqps scheme examples (Riccardo Cirimelli)
- 43904ff - Dont test this in PyPy due to sort order issue (Gavin M. Roy)
- d7d293e - Don't leave __repr__ sorting up to chance (Gavin M. Roy)
- 848c594 - Add integration test to travis and fix invocation (Gavin M. Roy)
- 2678275 - Add pypy to travis tests (Gavin M. Roy)
- 1877f3d - Also addresses issue #419 (Gavin M. Roy)
- 470c245 - Address issue #419 (Gavin M. Roy)
- ca3cb59 - Address issue #432 (Gavin M. Roy)
- a3ff6f2 - Default frame max should be AMQP FRAME_MAX (Gavin M. Roy)
- ff3d5cb - Remove max consumer tag test due to change in code. (Gavin M. Roy)
- 6045dda - Catch KeyError (#437) to ensure that an exception is not raised in a race condition (Gavin M. Roy)
- 0b4d53a - Address issue #441 (Gavin M. Roy)
- 180e7c4 - Update license and related files (Gavin M. Roy)
- 256ed3d - Added Jython support. (Erik Olof Gunnar Andersson)
- f73c141 - experimental work around for recursion issue. (Erik Olof Gunnar Andersson)
- a623f69 - Prevent #436 by iterating the keys and not the dict (Gavin M.
Roy) - 755fcae - Add support for authentication_failure_close, connection.blocked (Gavin M. Roy) - c121243 - merge upstream master (Michael Laing) - a08dc0d - add arg to channel.basic_consume (Pedro Abranches) - 10b136d - Documentation fix (Anton Ryzhov) - 9313307 - Fixed minor markup errors. (Jorge Puente Sarrín) - fb3e3cf - Fix the spelling of UnsupportedAMQPFieldException (Garrett Cooper) - 03d5da3 - connection.py: Propagate the force_channel keyword parameter to methods involved in channel creation (Michael Laing) - 7bbcff5 - Documentation fix for basic_publish (JuhaS) - 01dcea7 - Expose no_ack and exclusive to BlockingChannel.consume (Jeff Tang) - d39b6aa - Fix BlockingChannel.basic_consume does not block on non-empty queues (Juhyeong Park) - 6e1d295 - fix for issue 391 and issue 307 (Qi Fan) - d9ffce9 - Update parameters.rst (cacovsky) - 6afa41e - Add additional badges (Gavin M. Roy) - a255925 - Fix return value on dns resolution issue (Laurent Eschenauer) - 3f7466c - libev_connection: tweak docs (Michael Laing) - 0aaed93 - libev_connection: Fix varable naming (Michael Laing) - 0562d08 - libev_connection: Fix globals warning (Michael Laing) - 22ada59 - libev_connection: use globals to track sigint and sigterm watchers as they are created globally within libev (Michael Laing) - 2649b31 - Move badge [skip ci] (Gavin M. Roy) - f70eea1 - Remove pypy and installation attempt of pyev (Gavin M. Roy) - f32e522 - Conditionally skip external connection adapters if lib is not installed (Gavin M. Roy) - cce97c5 - Only install pyev on python 2.7 (Gavin M. Roy) - ff84462 - Add travis ci support (Gavin M. 
Roy) - cf971da - lib_evconnection: improve signal handling; add callback (Michael Laing) - 9adb269 - bugfix in returning a list in Py3k (Alex Chandel) - c41d5b9 - update exception syntax for Py3k (Alex Chandel) - c8506f1 - fix _adapter_connect (Michael Laing) - 67cb660 - Add LibevConnection to README (Michael Laing) - 1f9e72b - Propagate low-level connection errors to the AMQPConnectionError. (Bjorn Sandberg) - e1da447 - Avoid race condition in _on_getok on successive basic_get() when clearing out callbacks (Jeff) - 7a09979 - Add support for upcoming Connection.Blocked/Unblocked (Gavin M. Roy) - 53cce88 - TwistedChannel correctly handles multi-argument deferreds. (eivanov) - 66f8ace - Use uuid when creating unique consumer tag (Perttu Ranta-aho) - 4ee2738 - Limit the growth of Channel._cancelled, use deque instead of list. (Perttu Ranta-aho) - 0369aed - fix adapter references and tweak docs (Michael Laing) - 1738c23 - retry select.select() on EINTR (Cenk Alti) - 1e55357 - libev_connection: reset internal state on reconnect (Michael Laing) - 708559e - libev adapter (Michael Laing) - a6b7c8b - Prioritize EPollPoller and KQueuePoller over PollPoller and SelectPoller (Anton Ryzhov) - 53400d3 - Handle socket errors in PollPoller and EPollPoller Correctly check 'select.poll' availability (Anton Ryzhov) - a6dc969 - Use dict.keys & items instead of iterkeys & iteritems (Alex Chandel) - 5c1b0d0 - Use print function syntax, in examples (Alex Chandel) - ac9f87a - Fixed a typo in the name of the Asyncore Connection adapter (Guruprasad) - dfbba50 - Fixed bug mentioned in Issue #357 (Erik Andersson) - c906a2d - Drop additional flags when getting info for the hostnames, log errors (#352) (Gavin M. Roy) - baf23dd - retry poll() on EINTR (Cenk Alti) - 7cd8762 - Address ticket #352 catching an error when socket.getprotobyname fails (Gavin M. Roy) - 6c3ec75 - Prep for 0.9.14 (Gavin M. Roy) - dae7a99 - Bump to 0.9.14p0 (Gavin M. 
Roy) - 620edc7 - Use default port and virtual host if omitted in URLParameters (Issue #342) (Gavin M. Roy) - 42a8787 - Move the exception handling inside the while loop (Gavin M. Roy) - 10e0264 - Fix connection back pressure detection issue #347 (Gavin M. Roy) - 0bfd670 - Fixed mistake in commit 3a19d65. (Erik Andersson) - da04bc0 - Fixed Unknown state on disconnect error message generated when closing connections. (Erik Andersson) - 3a19d65 - Alternative solution to fix #345. (Erik Andersson) - abf9fa8 - switch to sendall to send entire frame (Dustin Koupal) - 9ce8ce4 - Fixed the async publisher example to work with reconnections (Raphaël De Giusti) - 511028a - Fix typo in TwistedChannel docstring (cacovsky) - 8b69e5a - calls self._adapter_disconnect() instead of self.disconnect() which doesn't actually exist #294 (Mark Unsworth) - 06a5cf8 - add NullHandler to prevent logging warnings (Cenk Alti) - f404a9a - Fix #337 cannot start ioloop after stop (Ralf Nyren) 0.9.13 - 2013-05-15 ------------------- `0.9.13 `_ **Major Changes** - IPv6 Support with thanks to Alessandro Tagliapietra for initial prototype - Officially remove support for <= Python 2.5 even though it was broken already - Drop pika.simplebuffer.SimpleBuffer in favor of the Python stdlib collections.deque object - New default object for receiving content is a "bytes" object which is a str wrapper in Python 2, but paves way for Python 3 support - New "Raw" mode for frame decoding content frames (#334) addresses issues #331, #229 added by Garth Williamson - Connection and Disconnection logic refactored, allowing for cleaner separation of protocol logic and socket handling logic as well as connection state management - New "on_open_error_callback" argument in creating connection objects and new Connection.add_on_open_error_callback method - New Connection.connect method to cleanly allow for reconnection code - Support for all AMQP field types, using protocol specified signed/unsigned unpacking **Backwards 
Incompatible Changes** - Method signature for creating connection objects has new argument "on_open_error_callback" which is positionally before "on_close_callback" - Internal callback variable names in connection.Connection have been renamed and constants used. If you relied on any of these callbacks outside of their internal use, make sure to check out the new constants. - Connection._connect method, which was an internal only method is now deprecated and will raise a DeprecationWarning. If you relied on this method, your code needs to change. - pika.simplebuffer has been removed **Bugfixes** - BlockingConnection consumer generator does not free buffer when exited (#328) - Unicode body payloads in the blocking adapter raises exception (#333) - Support "b" short-short-int AMQP data type (#318) - Docstring type fix in adapters/select_connection (#316) fix by Rikard Hultén - IPv6 not supported (#309) - Stop the HeartbeatChecker when connection is closed (#307) - Unittest fix for SelectConnection (#336) fix by Erik Andersson - Handle condition where no connection or socket exists but SelectConnection needs a timeout for retrying a connection (#322) - TwistedAdapter lagging behind BaseConnection changes (#321) fix by Jan Urbański **Other** - Refactored documentation - Added Twisted Adapter example (#314) by nolinksoft 0.9.12 - 2013-03-18 ------------------- `0.9.12 `_ **Bugfixes** - New timeout id hashing was not unique 0.9.11 - 2013-03-17 ------------------- `0.9.11 `_ **Bugfixes** - Address inconsistent channel close callback documentation and add the signature change to the TwistedChannel class (#305) - Address a missed timeout related internal data structure name change introduced in the SelectConnection 0.9.10 release. Update all connection adapters to use same signature and docstring (#306). 
0.9.10 - 2013-03-16
-------------------

`0.9.10 `_

**Bugfixes**

- Fix timeout in twisted adapter (Submitted by cellscape)
- Fix blocking_connection poll timer resolution to milliseconds (Submitted by cellscape)
- Fix channel._on_close() without a method frame (Submitted by Richard Boulton)
- Addressed exception on close (Issue #279 - fix by patcpsc)
- 'messages' not initialized in BlockingConnection.cancel() (Issue #289 - fix by Mik Kocikowski)
- Make queue_unbind behave like queue_bind (Issue #277)
- Address closing behavioral issues for connections and channels (Issue #275)
- Pass a Method frame to Channel._on_close in Connection._on_disconnect (Submitted by Jan Urbański)
- Fix channel closed callback signature in the Twisted adapter (Submitted by Jan Urbański)
- Don't stop the IOLoop on connection close in the Twisted adapter (Submitted by Jan Urbański)
- Update the asynchronous examples to fix reconnecting and have it work
- Warn if the socket was closed such as if RabbitMQ dies without a Close frame
- Fix URLParameters ssl_options (Issue #296)
- Add state to BlockingConnection addressing (Issue #301)
- Encode unicode body content prior to publishing (Issue #282)
- Fix an issue with unicode keys in BasicProperties headers key (Issue #280)
- Change how timeout ids are generated (Issue #254)
- Address post close state issues in Channel (Issue #302)

**Behavior changes**

- Change core connection communication behavior to prefer outbound writes over reads, addressing a recursion issue
- Update connection on close callbacks, changing callback method signature
- Update channel on close callbacks, changing callback method signature
- Give more info in the ChannelClosed exception
- Change the constructor signature for BlockingConnection, block open/close callbacks
- Disable the use of add_on_open_callback/add_on_close_callback methods in BlockingConnection

0.9.9 - 2013-01-29
------------------

`0.9.9 `_

**Bugfixes**

- Only remove the tornado_connection.TornadoConnection file descriptor from the IOLoop if it's still open (Issue #221)
- Allow messages with no body (Issue #227)
- Allow for empty routing keys (Issue #224)
- Don't raise an exception when trying to send a frame to a closed connection (Issue #229)
- Only send a Connection.CloseOk if the connection is still open. (Issue #236 - Fix by noleaf)
- Fix timeout threshold in blocking connection - (Issue #232 - Fix by Adam Flynn)
- Fix closing connection while a channel is still open (Issue #230 - Fix by Adam Flynn)
- Fixed misleading warning and exception messages in BaseConnection (Issue #237 - Fix by Tristan Penman)
- Pluralised and altered the wording of the AMQPConnectionError exception (Issue #237 - Fix by Tristan Penman)
- Fixed _adapter_disconnect in TornadoConnection class (Issue #237 - Fix by Tristan Penman)
- Fixing hang when closing connection without any channel in BlockingConnection (Issue #244 - Fix by Ales Teska)
- Remove the process_timeouts() call in SelectConnection (Issue #239)
- Change the string validation to basestring for host connection parameters (Issue #231)
- Add a poller to the BlockingConnection to address latency issues introduced in Pika 0.9.8 (Issue #242)
- reply_code and reply_text is not set in ChannelException (Issue #250)
- Add the missing constraint parameter for Channel._on_return callback processing (Issue #257 - Fix by patcpsc)
- Channel callbacks not being removed from callback manager when channel is closed or deleted (Issue #261)

0.9.8 - 2012-11-18
------------------

`0.9.8 `_

**Bugfixes**

- Channel.queue_declare/BlockingChannel.queue_declare not setting up callbacks property for empty queue name (Issue #218)
- Channel.queue_bind/BlockingChannel.queue_bind not allowing empty routing key
- Connection._on_connection_closed calling wrong method in Channel (Issue #219)
- Fix tx_commit and tx_rollback bugs in BlockingChannel (Issue #217)

0.9.7 - 2012-11-11
------------------

`0.9.7 `_

**New features**

- generator based consumer in BlockingChannel (See :doc:`examples/blocking_consumer_generator` for example)

**Changes**

- BlockingChannel._send_method will only wait if explicitly told to

**Bugfixes**

- Added the exchange "type" parameter back but issue a DeprecationWarning
- Dont require a queue name in Channel.queue_declare()
- Fixed KeyError when processing timeouts (Issue # 215 - Fix by Raphael De Giusti)
- Don't try and close channels when the connection is closed (Issue #216 - Fix by Charles Law)
- Dont raise UnexpectedFrame exceptions, log them instead
- Handle multiple synchronous RPC calls made without waiting for the call result (Issues #192, #204, #211)
- Typo in docs (Issue #207 Fix by Luca Wehrstedt)
- Only sleep on connection failure when retry attempts are > 0 (Issue #200)
- Bypass _rpc method and just send frames for Basic.Ack, Basic.Nack, Basic.Reject (Issue #205)

0.9.6 - 2012-10-29
------------------

`0.9.6 `_

**New features**

- URLParameters
- BlockingChannel.start_consuming() and BlockingChannel.stop_consuming()
- Delivery Confirmations
- Improved unittests

**Major bugfix areas**

- Connection handling
- Blocking functionality in the BlockingConnection
- SSL
- UTF-8 Handling

**Removals**

- pika.reconnection_strategies
- pika.channel.ChannelTransport
- pika.log
- pika.template
- examples directory

0.9.5 - 2011-03-29
------------------

`0.9.5 `_

**Changelog**

- Scope changes with adapter IOLoops and CallbackManager allowing for cleaner, multi-threaded operation
- Add support for Confirm.Select with channel.Channel.confirm_delivery()
- Add examples of delivery confirmation to examples (demo_send_confirmed.py)
- Update uses of log.warn with warning.warn for TCP Back-pressure alerting
- License boilerplate updated to simplify license text in source files
- Increment the timeout in select_connection.SelectPoller reducing CPU utilization
- Bug fix in Heartbeat frame delivery addressing issue #35
- Remove abuse of pika.log.method_call through a majority of the code
- Rename of key modules: table to data, frames to frame
- Cleanup of frame module and related classes
- Restructure of tests and test runner
- Update functional tests to respect RABBITMQ_HOST, RABBITMQ_PORT environment variables
- Bug fixes to reconnection_strategies module
- Fix the scale of timeout for PollPoller to be specified in milliseconds
- Remove mutable default arguments in RPC calls
- Add data type validation to RPC calls
- Move optional credentials erasing out of connection.Connection into credentials module
- Add support to allow for additional external credential types
- Add a NullHandler to prevent the 'No handlers could be found for logger "pika"' error message when not using pika.log in a client app at all.
- Clean up all examples to make them easier to read and use
- Move documentation into its own repository https://github.com/pika/documentation
- channel.py

  - Move channel.MAX_CHANNELS constant from connection.CHANNEL_MAX
  - Add default value of None to ChannelTransport.rpc
  - Validate callback and acceptable replies parameters in ChannelTransport.RPC
  - Remove unused connection attribute from Channel

- connection.py

  - Remove unused import of struct
  - Remove direct import of pika.credentials.PlainCredentials
  - Change to import pika.credentials
  - Move CHANNEL_MAX to channel.MAX_CHANNELS
  - Change ConnectionParameters initialization parameter heartbeat to boolean
  - Validate all inbound parameter types in ConnectionParameters
  - Remove the Connection._erase_credentials stub method in favor of letting the Credentials object deal with that itself.
  - Warn if the credentials object intends on erasing the credentials and a reconnection strategy other than NullReconnectionStrategy is specified.
  - Change the default types for callback and acceptable_replies in Connection._rpc
  - Validate the callback and acceptable_replies data types in Connection._rpc

- adapters.blocking_connection.BlockingConnection

  - Addition of _adapter_disconnect to blocking_connection.BlockingConnection
  - Add timeout methods to BlockingConnection addressing issue #41
  - BlockingConnection didn't allow you to register more than one consumer callback because basic_consume was overridden to block immediately. New behavior allows you to do so.
  - Removed overriding of base basic_consume and basic_cancel methods. Now uses underlying Channel versions of those methods.
  - Added start_consuming() method to BlockingChannel to start the consumption loop.
  - Updated stop_consuming() to iterate through all the registered consumers in self._consumers and issue a basic_cancel.

pika-1.2.0/CONTRIBUTING.md

# Contributing

## Test Coverage

To contribute to Pika, please make sure that any new features or changes to existing functionality **include test coverage**.

*Pull requests that add or change code without coverage have a much lower chance of being accepted.*

## Prerequisites

Pika's test suite has a couple of requirements:

* Dependencies from `test-requirements.txt` are installed
* A RabbitMQ node with all defaults is running on `localhost:5672`

## Installing Dependencies

To install the dependencies needed to run Pika tests, use

    pip install -r test-requirements.txt

which on Python 3 might look like this

    pip3 install -r test-requirements.txt

## Running Tests

To run all test suites, use

    nosetests

Note that some tests are OS-specific (e.g. epoll on Linux or kqueue on MacOS and BSD). Those will be skipped automatically.
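The automatic skipping of OS-specific tests mentioned above is typically done with `unittest` skip decorators. This is only an illustrative sketch of that pattern, not Pika's actual test code; the test class and method names are hypothetical:

```python
import select
import unittest


class PollerAvailabilityTests(unittest.TestCase):
    """Illustrative only -- shows how platform-specific tests can self-skip."""

    @unittest.skipUnless(hasattr(select, 'epoll'),
                         'epoll is only available on Linux')
    def test_epoll_poller(self):
        # Runs on Linux; skipped elsewhere.
        poller = select.epoll()
        poller.close()

    @unittest.skipUnless(hasattr(select, 'kqueue'),
                         'kqueue is only available on BSD/macOS')
    def test_kqueue_poller(self):
        # Runs on BSD/macOS; skipped elsewhere.
        kq = select.kqueue()
        kq.close()


suite = unittest.defaultTestLoader.loadTestsFromTestCase(PollerAvailabilityTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Whichever poller the current OS lacks is reported as a skip rather than a failure, so the run is successful on any platform.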
If you would like to run TLS/SSL tests, use the following procedure:

* Create a `rabbitmq.conf` file:

  ```
  sed -e "s#PIKA_DIR#$PWD#g" ./testdata/rabbitmq.conf.in > ./testdata/rabbitmq.conf
  ```

* Start RabbitMQ and use the configuration file you just created. An example command that works with the `generic-unix` package is as follows:

  ```
  $ RABBITMQ_CONFIG_FILE=/path/to/pika/testdata/rabbitmq.conf ./sbin/rabbitmq-server
  ```

* Run the tests indicating that TLS/SSL connections should be used:

  ```
  PIKA_TEST_TLS=true nosetests
  ```

## Code Formatting

Please format your code using [yapf](http://pypi.python.org/pypi/yapf) with `google` style prior to issuing your pull request.

*Note: only format those lines that you have changed in your pull request. If you format an entire file and change code outside of the scope of your PR, it will likely be rejected.*

pika-1.2.0/LICENSE

Copyright (c) 2009-2019, Tony Garnock-Jones, Gavin M. Roy, Pivotal Software, Inc and others. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the Pika project nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

pika-1.2.0/MANIFEST.in

include LICENSE
include README.rst

pika-1.2.0/README.rst

Pika
====

Pika is a RabbitMQ (AMQP 0-9-1) client library for Python.

|Version| |Python versions| |Travis CI Status| |AppVeyor Status| |Coverage| |License| |Docs|

Introduction
------------

Pika is a pure-Python implementation of the AMQP 0-9-1 protocol including RabbitMQ's extensions.

- Python 2.7 and 3.4+ are supported.
- Since threads aren't appropriate to every situation, it doesn't require threads. Pika core takes care not to forbid them, either. The same goes for greenlets, callbacks, continuations, and generators. An instance of Pika's built-in connection adapters isn't thread-safe, however.
- People may be using direct sockets, plain old ``select()``, or any of the wide variety of ways of getting network events to and from a Python application. Pika tries to stay compatible with all of these, and to make adapting it to a new environment as simple as possible.

Documentation
-------------

Pika's documentation can be found at https://pika.readthedocs.io.

Example
-------

Here is the most simple example of use, sending a message with the ``pika.BlockingConnection`` adapter:

.. code :: python

    import pika

    connection = pika.BlockingConnection()
    channel = connection.channel()
    channel.basic_publish(exchange='test', routing_key='test',
                          body=b'Test message.')
    connection.close()

And an example of writing a blocking consumer:

.. code :: python

    import pika

    connection = pika.BlockingConnection()
    channel = connection.channel()

    for method_frame, properties, body in channel.consume('test'):
        # Display the message parts and acknowledge the message
        print(method_frame, properties, body)
        channel.basic_ack(method_frame.delivery_tag)

        # Escape out of the loop after 10 messages
        if method_frame.delivery_tag == 10:
            break

    # Cancel the consumer and return any pending messages
    requeued_messages = channel.cancel()
    print('Requeued %i messages' % requeued_messages)
    connection.close()

Pika provides the following adapters
------------------------------------

- ``pika.adapters.asyncio_connection.AsyncioConnection`` - asynchronous adapter for Python 3 `AsyncIO `_'s I/O loop.
- ``pika.BlockingConnection`` - synchronous adapter on top of library for simple usage.
- ``pika.SelectConnection`` - asynchronous adapter without third-party dependencies.
- ``pika.adapters.gevent_connection.GeventConnection`` - asynchronous adapter for use with `Gevent `_'s I/O loop.
- ``pika.adapters.tornado_connection.TornadoConnection`` - asynchronous adapter for use with `Tornado `_'s I/O loop.
- ``pika.adapters.twisted_connection.TwistedProtocolConnection`` - asynchronous adapter for use with `Twisted `_'s I/O loop.

Multiple connection parameters
------------------------------

You can also pass multiple ``pika.ConnectionParameters`` instances for fault-tolerance as in the code snippet below (host names are just examples, of course). To enable retries, set ``connection_attempts`` and ``retry_delay`` as needed in the last ``pika.ConnectionParameters`` element of the sequence. Retries occur after connection attempts using all of the given connection parameters fail.

.. code :: python

    import pika

    parameters = (
        pika.ConnectionParameters(host='rabbitmq.zone1.yourdomain.com'),
        pika.ConnectionParameters(host='rabbitmq.zone2.yourdomain.com',
                                  connection_attempts=5, retry_delay=1))
    connection = pika.BlockingConnection(parameters)

With non-blocking adapters, such as ``pika.SelectConnection`` and ``pika.adapters.asyncio_connection.AsyncioConnection``, you can request a connection using multiple connection parameter instances via the connection adapter's ``create_connection()`` class method.

Requesting message acknowledgements from another thread
-------------------------------------------------------

The single-threaded usage constraint of an individual Pika connection adapter instance may result in a dropped AMQP/stream connection due to AMQP heartbeat timeout in consumers that take a long time to process an incoming message. A common solution is to delegate processing of the incoming messages to another thread, while the connection adapter's thread continues to service its I/O loop's message pump, permitting AMQP heartbeats and other I/O to be serviced in a timely fashion.

Messages processed in another thread may not be acknowledged directly from that thread, since all accesses to the connection adapter instance must be from a single thread, which is the thread running the adapter's I/O loop. This is accomplished by requesting a callback to be executed in the adapter's I/O loop thread. For example, the callback function's implementation might look like this:

.. code :: python

    def ack_message(channel, delivery_tag):
        """Note that `channel` must be the same Pika channel instance via which
        the message being acknowledged was retrieved (AMQP protocol constraint).
        """
        if channel.is_open:
            channel.basic_ack(delivery_tag)
        else:
            # Channel is already closed, so we can't acknowledge this message;
            # log and/or do something that makes sense for your app in this case.
            pass

The code running in the other thread may request the ``ack_message()`` function to be executed in the connection adapter's I/O loop thread using an adapter-specific mechanism:

- ``pika.BlockingConnection`` abstracts its I/O loop from the application and thus exposes ``pika.BlockingConnection.add_callback_threadsafe()``. Refer to this method's docstring for additional information. For example:

  .. code :: python

      connection.add_callback_threadsafe(functools.partial(ack_message, channel, delivery_tag))

- When using a non-blocking connection adapter, such as ``pika.adapters.asyncio_connection.AsyncioConnection`` or ``pika.SelectConnection``, you use the underlying asynchronous framework's native API for requesting an I/O loop-bound callback from another thread. For example, ``pika.SelectConnection``'s I/O loop provides ``add_callback_threadsafe()``, ``pika.adapters.tornado_connection.TornadoConnection``'s I/O loop has ``add_callback()``, while ``pika.adapters.asyncio_connection.AsyncioConnection``'s I/O loop exposes ``call_soon_threadsafe()``.

This threadsafe callback request mechanism may also be used to delegate publishing of messages, etc., from a background thread to the connection adapter's thread.

Connection recovery
-------------------

Some RabbitMQ clients (Bunny, Java, .NET, Objective-C, Swift) provide a way to automatically recover connection, its channels and topology (e.g. queues, bindings and consumers) after a network failure. Others require connection recovery to be performed by the application code and strive to make it a straightforward process. Pika falls into the second category.

Pika supports multiple connection adapters. They take different approaches to connection recovery.

For the ``pika.BlockingConnection`` adapter, exception handling can be used to check for connection errors. Here is a very basic example:

.. code :: python

    import pika

    while True:
        try:
            connection = pika.BlockingConnection()
            channel = connection.channel()
            channel.basic_consume('test', on_message_callback)
            channel.start_consuming()
        # Don't recover if connection was closed by broker
        except pika.exceptions.ConnectionClosedByBroker:
            break
        # Don't recover on channel errors
        except pika.exceptions.AMQPChannelError:
            break
        # Recover on all other connection errors
        except pika.exceptions.AMQPConnectionError:
            continue

This example can be found in `examples/consume_recover.py`.

Generic operation retry libraries such as `retry `_ can be used. Decorators make it possible to configure some additional recovery behaviours, like delays between retries and limiting the number of retries:

.. code :: python

    from retry import retry


    @retry(pika.exceptions.AMQPConnectionError, delay=5, jitter=(1, 3))
    def consume():
        connection = pika.BlockingConnection()
        channel = connection.channel()
        channel.basic_consume('test', on_message_callback)

        try:
            channel.start_consuming()
        # Don't recover connections closed by server
        except pika.exceptions.ConnectionClosedByBroker:
            pass


    consume()

This example can be found in `examples/consume_recover_retry.py`.

For asynchronous adapters, use ``on_close_callback`` to react to connection failure events. This callback can be used to clean up and recover the connection. An example of recovery using ``on_close_callback`` can be found in `examples/asynchronous_consumer_example.py`.

Contributing
------------

To contribute to Pika, please make sure that any new features or changes to existing functionality **include test coverage**. *Pull requests that add or change code without adequate test coverage will be rejected.*

Additionally, please format your code using `Yapf `_ with ``google`` style prior to issuing your pull request. *Note: only format those lines that you have changed in your pull request. If you format an entire file and change code outside of the scope of your PR, it will likely be rejected.*

Extending to support additional I/O frameworks
----------------------------------------------

New non-blocking adapters may be implemented in either of the following ways:

- By subclassing ``pika.BaseConnection``, implementing its abstract method and passing its constructor an implementation of ``pika.adapters.utils.nbio_interface.AbstractIOServices``. ``pika.BaseConnection`` implements ``pika.connection.Connection``'s abstract methods, including internally-initiated connection logic. For examples, refer to the implementations of ``pika.adapters.asyncio_connection.AsyncioConnection``, ``pika.adapters.gevent_connection.GeventConnection`` and ``pika.adapters.tornado_connection.TornadoConnection``.
- By subclassing ``pika.connection.Connection`` and implementing its abstract methods. This approach facilitates implementation of custom connection-establishment and transport mechanisms. For an example, refer to the implementation of ``pika.adapters.twisted_connection.TwistedProtocolConnection``.

.. |Version| image:: https://img.shields.io/pypi/v/pika.svg?
   :target: http://badge.fury.io/py/pika

.. |Python versions| image:: https://img.shields.io/pypi/pyversions/pika.svg
   :target: https://pypi.python.org/pypi/pika

.. |Travis CI Status| image:: https://img.shields.io/travis/pika/pika.svg?
   :target: https://travis-ci.org/pika/pika

.. |AppVeyor Status| image:: https://ci.appveyor.com/api/projects/status/ql8u3dlls7hxvbqo?svg=true
   :target: https://ci.appveyor.com/project/gmr/pika

.. |Coverage| image:: https://img.shields.io/codecov/c/github/pika/pika.svg?
   :target: https://codecov.io/github/pika/pika?branch=master

.. |License| image:: https://img.shields.io/pypi/l/pika.svg?
   :target: https://pika.readthedocs.io

.. |Docs| image:: https://readthedocs.org/projects/pika/badge/?version=stable
   :target: https://pika.readthedocs.io
   :alt: Documentation Status

pika-1.2.0/appveyor.yml

# Windows build and test of Pika

environment:
  erlang_download_url: "https://github.com/erlang/otp/releases/download/OTP-23.2.1/otp_win64_23.2.3.exe"
  erlang_exe_path: "C:\\Users\\appveyor\\erlang_installer.exe"
  erlang_home_dir: "C:\\Users\\appveyor\\erlang"
  erlang_erts_version: "erts-11.1.5"

  rabbitmq_version: 3.8.11
  rabbitmq_installer_download_url: "https://github.com/rabbitmq/rabbitmq-server/releases/download/v3.8.11/rabbitmq-server-3.8.11.exe"
  rabbitmq_installer_path: "C:\\Users\\appveyor\\rabbitmq-server-3.8.11.exe"

  matrix:
    - PYTHONHOME: "C:\\Python38"
      PIKA_TEST_TLS: false
    - PYTHONHOME: "C:\\Python38"
      PIKA_TEST_TLS: true

cache:
  # RabbitMQ is a pretty big package, so caching it in hopes of expediting the
  # runtime
  - "%erlang_exe_path%"
  - "%rabbitmq_installer_path%"

install:
  - set PYTHONPATH=%PYTHONHOME%
  - set PATH=%PYTHONHOME%\Scripts;%PYTHONHOME%;%PATH%

  # For diagnostics
  - echo %PYTHONPATH%
  - echo %PATH%
  - python --version

  - echo Upgrading pip...
  - python -m pip install --upgrade pip setuptools
  - pip --version

  - echo Installing wheel...
  - pip install wheel

build_script:
  - echo Building distributions...
  - python setup.py sdist bdist bdist_wheel
  - DIR /s *.whl

artifacts:
  - path: 'dist\*.whl'
    name: pika wheel

before_test:
  # Install test requirements
  - echo Installing pika...
  - python setup.py install

  - echo Installing pika test requirements...
  - pip install -r test-requirements.txt

  # List contents of C:\ to help debug caching of rabbitmq artifacts
  # - DIR C:\

  - ps: $webclient=New-Object System.Net.WebClient

  - echo Downloading Erlang...
  - ps: if (-Not (Test-Path "$env:erlang_exe_path")) { $webclient.DownloadFile("$env:erlang_download_url", "$env:erlang_exe_path") } else { Write-Host "Found" $env:erlang_exe_path "in cache."
} - echo Removing all existing versions of Erlang... - ps: Get-ChildItem -Path 'C:\Program Files\erl*\Uninstall.exe' | %{ Start-Process -Wait -NoNewWindow -FilePath $_ -ArgumentList '/S' } - echo Installing Erlang... - start /B /WAIT %erlang_exe_path% /S /D=%erlang_home_dir% - set ERLANG_HOME=%erlang_home_dir% - echo Downloading RabbitMQ... - ps: if (-Not (Test-Path "$env:rabbitmq_installer_path")) { $webclient.DownloadFile("$env:rabbitmq_installer_download_url", "$env:rabbitmq_installer_path") } else { Write-Host "Found" $env:rabbitmq_installer_path "in cache." } - echo Creating directory %AppData%\RabbitMQ... - ps: New-Item -ItemType Directory -ErrorAction Continue -Path "$env:AppData/RabbitMQ" - echo Creating RabbitMQ configuration file in %AppData%\RabbitMQ... - ps: Get-Content C:/Projects/pika/testdata/rabbitmq.conf.in | %{ $_ -replace 'PIKA_DIR', 'C:/projects/pika' } | Set-Content -Path "$env:AppData/RabbitMQ/rabbitmq.conf" - ps: Get-Content "$env:AppData/RabbitMQ/rabbitmq.conf" - echo Creating Erlang cookie files... - ps: '[System.IO.File]::WriteAllText("C:\Users\appveyor\.erlang.cookie", "PIKAISTHEBEST", [System.Text.Encoding]::ASCII)' - ps: '[System.IO.File]::WriteAllText("C:\Windows\System32\config\systemprofile\.erlang.cookie", "PIKAISTHEBEST", [System.Text.Encoding]::ASCII)' - echo Installing and starting RabbitMQ with default config... - start /B /WAIT %rabbitmq_installer_path% /S - ps: (Get-Service -Name RabbitMQ).Status - echo RabbitMQ Service Registry Entry - reg query HKLM\SOFTWARE\Ericsson\Erlang\ErlSrv\1.1\RabbitMQ - echo Waiting for epmd to report that RabbitMQ has started... - ps: C:\projects\pika\testdata\wait-epmd.ps1 - echo Waiting for RabbitMQ to start... - ps: C:\projects\pika\testdata\wait-rabbitmq.ps1 - echo Getting RabbitMQ status... 
- cmd /c "C:\Program Files\RabbitMQ Server\rabbitmq_server-%rabbitmq_version%\sbin\rabbitmqctl.bat" status test_script: - nosetests # Since Pika is source-only there's no need to deploy from Windows deploy: false pika-1.2.0/docs/000077500000000000000000000000001400701476500133725ustar00rootroot00000000000000pika-1.2.0/docs/Makefile000066400000000000000000000126641400701476500150430ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for 
integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pika.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pika.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/pika" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pika" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 
@echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
pika-1.2.0/docs/conf.py000066400000000000000000000017711400701476500146770ustar00rootroot00000000000000# -*- coding: utf-8 -*- import sys sys.path.insert(0, '../') # needs_sphinx = '1.0' extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.intersphinx'] intersphinx_mapping = {'python': ('https://docs.python.org/3/', 'https://docs.python.org/3/objects.inv'), 'tornado': ('http://www.tornadoweb.org/en/stable/', 'http://www.tornadoweb.org/en/stable/objects.inv')} templates_path = ['_templates'] source_suffix = '.rst' master_doc = 'index' project = 'pika' copyright = '2009-2017, Tony Garnock-Jones, Gavin M. Roy, Pivotal Software, Inc and contributors.' import pika release = pika.__version__ version = '.'.join(release.split('.')[0:1]) exclude_patterns = ['_build'] add_function_parentheses = True add_module_names = True show_authors = True pygments_style = 'sphinx' modindex_common_prefix = ['pika'] html_theme = 'default' html_static_path = ['_static'] htmlhelp_basename = 'pikadoc' pika-1.2.0/docs/contributors.rst000066400000000000000000000034011400701476500166570ustar00rootroot00000000000000Contributors ============ The following people have directly contributed code by way of new features and/or bug fixes to Pika: - Gavin M. Roy - Tony Garnock-Jones - Vitaly Kruglikov - Michael Laing - Marek Majkowski - Jan Urbański - Brian K. Jones - Ask Solem - ml - Will - atatsu - Fredrik Svensson - Pedro Abranches - Kyösti Herrala - Erik Andersson - Charles Law - Alex Chandel - Tristan Penman - Raphaël De Giusti - Jozef Van Eenbergen - Josh Braegger - Jason J. W. Williams - James Mutton - Cenk Alti - Asko Soukka - Antti Haapala - Anton Ryzhov - cellscape - cacovsky - bra-fsn - ateska - Roey Berman - Robert Weidlich - Riccardo Cirimelli - Perttu Ranta-aho - Pau Gargallo - Kane - Kamil Kisiel - Jonty Wareing - Jonathan Kirsch - Jacek 'Forger' Całusiński - Garth Williamson - Erik Olof Gunnar Andersson - David Strauss - Anton V.
Yanchenko - Alexey Myasnikov - Alessandro Tagliapietra - Adam Flynn - skftn - saarni - pavlobaron - nonleaf - markcf - george y - eivanov - bstemshorn - a-tal - Yang Yang - Stuart Longland - Sigurd Høgsbro - Sean Dwyer - Samuel Stauffer - Roberto Decurnex - Rikard Hultén - Richard Boulton - Ralf Nyren - Qi Fan - Peter Magnusson - Pankrat - Olivier Le Thanh Duong - Njal Karevoll - Milan Skuhra - Mik Kocikowski - Michael Kenney - Mark Unsworth - Luca Wehrstedt - Laurent Eschenauer - Lars van de Kerkhof - Kyösti Herrala - Juhyeong Park - JuhaS - Josh Hansen - Jorge Puente Sarrín - Jeff Tang - Jeff Fein-Worton - Jeff - Hunter Morris - Guruprasad - Garrett Cooper - Frank Slaughter - Dustin Koupal - Bjorn Sandberg - Axel Eirola - Andrew Smith - Andrew Grigorev - Andrew - Allard Hoeve - A.Shaposhnikov *Contributors listed by commit count.* pika-1.2.0/docs/examples.rst000066400000000000000000000016011400701476500157400ustar00rootroot00000000000000Usage Examples ============== Pika has various methods of use, between the synchronous BlockingConnection adapter and the various asynchronous connection adapters. The following examples illustrate the various ways that you can use Pika in your projects. ..
toctree:: :glob: :maxdepth: 1 examples/using_urlparameters examples/connecting_async examples/blocking_basic_get examples/blocking_consume examples/blocking_consume_recover_multiple_hosts examples/blocking_consumer_generator examples/comparing_publishing_sync_async examples/blocking_delivery_confirmations examples/blocking_publish_mandatory examples/asynchronous_consumer_example examples/asynchronous_publisher_example examples/twisted_example examples/tornado_consumer examples/tls_mutual_authentication examples/tls_server_authentication examples/heartbeat_and_blocked_timeouts pika-1.2.0/docs/examples/000077500000000000000000000000001400701476500152105ustar00rootroot00000000000000pika-1.2.0/docs/examples/asynchronous_consumer_example.rst000066400000000000000000000010271400701476500241230ustar00rootroot00000000000000Asynchronous consumer example ============================= The following example implements a consumer that will respond to RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents an individual action that a consumer can do. `Asynchronous Consumer Example `_ pika-1.2.0/docs/examples/asynchronous_publisher_example.rst000066400000000000000000000010301400701476500242600ustar00rootroot00000000000000Asynchronous publisher example ============================== The following example implements a publisher that will respond to RPC commands sent from RabbitMQ and uses delivery confirmations. It will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ closes the channel. While it may look intimidating, each method is very short and represents an individual action that a publisher can do.
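Both example applications recover by scheduling a fresh connection attempt after a failure. A common refinement, not taken from the examples themselves, is to space successive attempts out with a capped exponential backoff; the delay values below are illustrative assumptions:

```python
import itertools

def backoff_delays(base=1.0, factor=2.0, cap=30.0):
    """Yield reconnect delays: base, base*factor, base*factor**2, ...,
    never exceeding cap. The generator is infinite; callers take only
    as many delays as they need."""
    delay = base
    while True:
        yield min(delay, cap)
        delay *= factor

# The first six delays a reconnecting client would sleep between attempts
print(list(itertools.islice(backoff_delays(), 6)))
# [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Each reconnect attempt pulls the next delay from the generator; after a successful connection the application discards it and starts a fresh one on the next failure.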
`Asynchronous Publisher Example `_ pika-1.2.0/docs/examples/asyncio_consumer.rst000066400000000000000000000012271400701476500213240ustar00rootroot00000000000000Asyncio Consumer ================ The following example implements a consumer using the :class:`Asyncio adapter ` for the `Asyncio library `_ that will respond to RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents an individual action that a consumer can do. `Asyncio Consumer Example `_ pika-1.2.0/docs/examples/blocking_basic_get.rst000066400000000000000000000022541400701476500215350ustar00rootroot00000000000000Using the Blocking Connection to get a message from RabbitMQ ============================================================ .. _example_blocking_basic_get: The :py:meth:`BlockingChannel.basic_get ` method will return a tuple with three members. If the server returns a message, the first item in the tuple will be a :class:`pika.spec.Basic.GetOk` object with the current message count, the redelivered flag, the routing key that was used to put the message in the queue, and the exchange the message was published to. The second item will be a :py:class:`~pika.spec.BasicProperties` object and the third will be the message body. If the server did not return a message, a tuple of None, None, None will be returned.
Example of getting a message and acknowledging it:: import pika connection = pika.BlockingConnection() channel = connection.channel() method_frame, header_frame, body = channel.basic_get('test') if method_frame: print(method_frame, header_frame, body) channel.basic_ack(method_frame.delivery_tag) else: print('No message returned') pika-1.2.0/docs/examples/blocking_consume.rst000066400000000000000000000024311400701476500212630ustar00rootroot00000000000000Using the Blocking Connection to consume messages from RabbitMQ =============================================================== .. _example_blocking_basic_consume: The :py:meth:`BlockingChannel.basic_consume ` method assigns a callback method to be called every time that RabbitMQ delivers messages to your consuming application. When pika calls your method, it will pass in the channel, a :py:class:`pika.spec.Basic.Deliver` object with the delivery tag, the redelivered flag, the routing key that was used to put the message in the queue, and the exchange the message was published to. The third argument will be a :py:class:`pika.spec.BasicProperties` object and the last will be the message body. Example of consuming messages and acknowledging them:: import pika def on_message(channel, method_frame, header_frame, body): print(method_frame.delivery_tag) print(body) print() channel.basic_ack(delivery_tag=method_frame.delivery_tag) connection = pika.BlockingConnection() channel = connection.channel() channel.basic_consume('test', on_message) try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() connection.close() pika-1.2.0/docs/examples/blocking_consume_recover_multiple_hosts.rst000066400000000000000000000110661400701476500261470ustar00rootroot00000000000000Using the Blocking Connection with connection recovery with multiple hosts ========================================================================== ..
_example_blocking_basic_consume_recover_multiple_hosts: RabbitMQ nodes can be `clustered `_. In the absence of failure, clients can connect to any node and perform any operation. In case a node fails, stops, or becomes unavailable, clients should be able to connect to another node and continue. To simplify reconnection to a different node, the connection recovery mechanism should be combined with connection configuration that specifies multiple hosts. The BlockingConnection adapter relies on exception handling to check for connection errors:: import pika import random def on_message(channel, method_frame, header_frame, body): print(method_frame.delivery_tag) print(body) print() channel.basic_ack(delivery_tag=method_frame.delivery_tag) ## Assuming there are three nodes: node1, node2, and node3 node1 = pika.URLParameters('amqp://node1') node2 = pika.URLParameters('amqp://node2') node3 = pika.URLParameters('amqp://node3') all_endpoints = [node1, node2, node3] while True: try: print("Connecting...") ## Shuffle the hosts list before reconnecting. ## This can help balance connections. random.shuffle(all_endpoints) connection = pika.BlockingConnection(all_endpoints) channel = connection.channel() channel.basic_qos(prefetch_count=1) ## This queue is intentionally non-durable. See http://www.rabbitmq.com/ha.html#non-mirrored-queue-behavior-on-node-failure ## to learn more.
channel.queue_declare('recovery-example', durable = False, auto_delete = True) channel.basic_consume('recovery-example', on_message) try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() connection.close() break except pika.exceptions.ConnectionClosedByBroker: # Uncomment this to make the example not attempt recovery # from server-initiated connection closure, including # when the node is stopped cleanly # # break continue # Do not recover on channel errors except pika.exceptions.AMQPChannelError as err: print("Caught a channel error: {}, stopping...".format(err)) break # Recover on all other connection errors except pika.exceptions.AMQPConnectionError: print("Connection was closed, retrying...") continue Generic operation retry libraries such as `retry `_ can prove useful. To run the following example, install the library first with `pip install retry`. In this example, the `retry` decorator is used to set up recovery with delay:: import pika import random from retry import retry def on_message(channel, method_frame, header_frame, body): print(method_frame.delivery_tag) print(body) print() channel.basic_ack(delivery_tag=method_frame.delivery_tag) ## Assuming there are three nodes: node1, node2, and node3 node1 = pika.URLParameters('amqp://node1') node2 = pika.URLParameters('amqp://node2') node3 = pika.URLParameters('amqp://node3') all_endpoints = [node1, node2, node3] @retry(pika.exceptions.AMQPConnectionError, delay=5, jitter=(1, 3)) def consume(): random.shuffle(all_endpoints) connection = pika.BlockingConnection(all_endpoints) channel = connection.channel() channel.basic_qos(prefetch_count=1) ## This queue is intentionally non-durable. See http://www.rabbitmq.com/ha.html#non-mirrored-queue-behavior-on-node-failure ## to learn more.
channel.queue_declare('recovery-example', durable = False, auto_delete = True) channel.basic_consume('recovery-example', on_message) try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() connection.close() except pika.exceptions.ConnectionClosedByBroker: # Uncomment this to make the example not attempt recovery # from server-initiated connection closure, including # when the node is stopped cleanly # except pika.exceptions.ConnectionClosedByBroker: # pass # Re-raise so the retry decorator reconnects raise consume() pika-1.2.0/docs/examples/blocking_consumer_generator.rst000066400000000000000000000072031400701476500235150ustar00rootroot00000000000000Using the BlockingChannel.consume generator to consume messages =============================================================== .. _example_blocking_basic_get: The :py:meth:`BlockingChannel.consume ` method is a generator that will return a tuple of method, properties and body. When you escape out of the loop, be sure to call channel.cancel() to return any unprocessed messages. Example of consuming messages and acknowledging them:: import pika connection = pika.BlockingConnection() channel = connection.channel() # Get ten messages and break out for method_frame, properties, body in channel.consume('test'): # Display the message parts print(method_frame) print(properties) print(body) # Acknowledge the message channel.basic_ack(method_frame.delivery_tag) # Escape out of the loop after 10 messages if method_frame.delivery_tag == 10: break # Cancel the consumer and return any pending messages requeued_messages = channel.cancel() print('Requeued %i messages' % requeued_messages) # Close the channel and the connection channel.close() connection.close() If you have pending messages in the test queue, your output should look something like:: (pika)gmr-0x02:pika gmr$ python blocking_nack.py Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World! Hello World!
Requeued 1894 messages pika-1.2.0/docs/examples/blocking_delivery_confirmations.rst000066400000000000000000000021551400701476500243730ustar00rootroot00000000000000Using Delivery Confirmations with the BlockingConnection ======================================================== The following code demonstrates how to turn on delivery confirmations with the BlockingConnection and how to check for confirmation from RabbitMQ:: import pika # Open a connection to RabbitMQ on localhost using all default parameters connection = pika.BlockingConnection() # Open the channel channel = connection.channel() # Declare the queue channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False) # Turn on delivery confirmations channel.confirm_delivery() # Send a message try: channel.basic_publish(exchange='test', routing_key='test', body='Hello World!', properties=pika.BasicProperties(content_type='text/plain', delivery_mode=1)) print('Message publish was confirmed') except pika.exceptions.UnroutableError: print('Message could not be confirmed') pika-1.2.0/docs/examples/blocking_publish_mandatory.rst000066400000000000000000000022701400701476500233370ustar00rootroot00000000000000Ensuring message delivery with the mandatory flag ================================================= The following example demonstrates how to check if a message is delivered by setting the mandatory flag and handling exceptions when using the BlockingConnection:: import pika import pika.exceptions # Open a connection to RabbitMQ on localhost using all default parameters connection = pika.BlockingConnection() # Open the channel channel = connection.channel() # Declare the queue channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False) # Enable delivery confirmations. This is REQUIRED.
channel.confirm_delivery() # Send a message try: channel.basic_publish(exchange='test', routing_key='test', body='Hello World!', properties=pika.BasicProperties(content_type='text/plain', delivery_mode=1), mandatory=True) print('Message was published') except pika.exceptions.UnroutableError: print('Message was returned') pika-1.2.0/docs/examples/comparing_publishing_sync_async.rst000066400000000000000000000054251400701476500244040ustar00rootroot00000000000000Comparing Message Publishing with BlockingConnection and SelectConnection ========================================================================= For those doing simple, non-asynchronous programming, :py:meth:`pika.adapters.blocking_connection.BlockingConnection` proves to be the easiest way to get up and running with Pika to publish messages. In the following example, a connection is made to RabbitMQ listening to port *5672* on *localhost* using the username *guest* and password *guest* and virtual host */*. Once connected, a channel is opened and a message is published to the *test_exchange* exchange using the *test_routing_key* routing key. The BasicProperties value passed in sets the message to delivery mode *1* (non-persisted) with a content-type of *text/plain*. Once the message is published, the connection is closed:: import pika parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F') connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.basic_publish('test_exchange', 'test_routing_key', 'message body value', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.close() In contrast, using :py:meth:`pika.adapters.select_connection.SelectConnection` and the other asynchronous adapters is more complicated and less pythonic, but when used with other asynchronous services can have tremendous performance improvements. 
In the following code example, all of the same parameters and values are used as were used in the previous example:: import pika # Step #3 def on_open(connection): connection.channel(on_open_callback=on_channel_open) # Step #4 def on_channel_open(channel): channel.basic_publish('test_exchange', 'test_routing_key', 'message body value', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.close() # Step #1: Connect to RabbitMQ parameters = pika.URLParameters('amqp://guest:guest@localhost:5672/%2F') connection = pika.SelectConnection(parameters=parameters, on_open_callback=on_open) try: # Step #2 - Block on the IOLoop connection.ioloop.start() # Catch a Keyboard Interrupt to make sure that the connection is closed cleanly except KeyboardInterrupt: # Gracefully close the connection connection.close() # Start the IOLoop again so Pika can communicate, it will stop on its own when the connection is closed connection.ioloop.start() pika-1.2.0/docs/examples/connecting_async.rst000066400000000000000000000035241400701476500212720ustar00rootroot00000000000000Connecting to RabbitMQ with Callback-Passing Style ================================================== When you connect to RabbitMQ with an asynchronous adapter, you are writing event-oriented code. The connection adapter will block on the IOLoop that is watching to see when pika should read data from and write data to RabbitMQ. Because you're now blocking on the IOLoop, you will receive callback notifications when specific events happen. Example Code ------------ In the example, there are four steps that take place: 1. Set up the connection to RabbitMQ 2. Start the IOLoop 3. Once connected, the on_open method will be called by Pika with a handle to the connection. In this method, a new channel will be opened on the connection. 4.
Once the channel is opened, you can do your other actions, whether they be publishing messages, consuming messages or other RabbitMQ related activities.:: import pika # Step #3 def on_open(connection): connection.channel(on_open_callback=on_channel_open) # Step #4 def on_channel_open(channel): channel.basic_publish('exchange_name', 'routing_key', 'Test Message', pika.BasicProperties(content_type='text/plain', type='example')) # Step #1: Connect to RabbitMQ connection = pika.SelectConnection(on_open_callback=on_open) try: # Step #2 - Block on the IOLoop connection.ioloop.start() # Catch a Keyboard Interrupt to make sure that the connection is closed cleanly except KeyboardInterrupt: # Gracefully close the connection connection.close() # Start the IOLoop again so Pika can communicate, it will stop on its own when the connection is closed connection.ioloop.start() pika-1.2.0/docs/examples/direct_reply_to.rst000066400000000000000000000054771400701476500211460ustar00rootroot00000000000000Direct reply-to example ============================== The following example demonstrates the use of the RabbitMQ "Direct reply-to" feature via `pika.BlockingConnection`. See https://www.rabbitmq.com/direct-reply-to.html for more info about this feature. direct_reply_to.py:: # -*- coding: utf-8 -*- """ This example demonstrates the RabbitMQ "Direct reply-to" usage via `pika.BlockingConnection`. See https://www.rabbitmq.com/direct-reply-to.html for more info about this feature. """ import pika SERVER_QUEUE = 'rpc.server.queue' def main(): """ Here, Client sends "Marco" to RPC Server, and RPC Server replies with "Polo". NOTE Normally, the server would be running separately from the client, but in this very simple example both are running in the same thread and sharing connection and channel. 
""" with pika.BlockingConnection() as conn: channel = conn.channel() # Set up server channel.queue_declare(queue=SERVER_QUEUE, exclusive=True, auto_delete=True) channel.basic_consume(SERVER_QUEUE, on_server_rx_rpc_request) # Set up client # NOTE Client must create its consumer and publish RPC requests on the # same channel to enable the RabbitMQ broker to make the necessary # associations. # # Also, client must create the consumer *before* starting to publish the # RPC requests. # # Client must create its consumer with auto_ack=True, because the reply-to # queue isn't real. channel.basic_consume('amq.rabbitmq.reply-to', on_client_rx_reply_from_server, auto_ack=True) channel.basic_publish( exchange='', routing_key=SERVER_QUEUE, body='Marco', properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to')) channel.start_consuming() def on_server_rx_rpc_request(ch, method_frame, properties, body): print('RPC Server got request: %s' % body) ch.basic_publish('', routing_key=properties.reply_to, body='Polo') ch.basic_ack(delivery_tag=method_frame.delivery_tag) print('RPC Server says good bye') def on_client_rx_reply_from_server(ch, method_frame, properties, body): print('RPC Client got reply: %s' % body) # NOTE A real client might want to make additional RPC requests, but in this # simple example we're closing the channel after getting our first reply # to force control to return from channel.start_consuming() print('RPC Client says bye') ch.close() pika-1.2.0/docs/examples/heartbeat_and_blocked_timeouts.rst000066400000000000000000000043031400701476500241370ustar00rootroot00000000000000Ensuring well-behaved connection with heartbeat and blocked-connection timeouts =============================================================================== This example demonstrates explicit setting of heartbeat and blocked connection timeouts. Starting with RabbitMQ 3.5.5, the broker's default heartbeat timeout decreased from 580 seconds to 60 seconds. 
As a result, applications that perform lengthy processing in the same thread that also runs their Pika connection may experience unexpected dropped connections due to heartbeat timeout. Here, we specify an explicit lower bound for heartbeat timeout. When RabbitMQ broker is running out of certain resources, such as memory and disk space, it may block connections that are performing resource-consuming operations, such as publishing messages. Once a connection is blocked, RabbitMQ stops reading from that connection's socket, so no commands from the client will get through to the broker on that connection until the broker unblocks it. A blocked connection may last for an indefinite period of time, stalling the connection and possibly resulting in a hang (e.g., in BlockingConnection) until the connection is unblocked. Blocked Connection Timeout is intended to interrupt (i.e., drop) a connection that has been blocked longer than the given timeout value. Example of configuring heartbeat and blocked-connection timeouts:: import pika def main(): # NOTE: These parameters work with all Pika connection types params = pika.ConnectionParameters(heartbeat=600, blocked_connection_timeout=300) conn = pika.BlockingConnection(params) chan = conn.channel() chan.basic_publish('', 'my-alphabet-queue', "abc") # If publish causes the connection to become blocked, then this conn.close() # would hang until the connection is unblocked, if ever. 
However, the # blocked_connection_timeout connection parameter would interrupt the wait, # resulting in ConnectionClosed exception from BlockingConnection (or the # on_connection_closed callback call in an asynchronous adapter) conn.close() if __name__ == '__main__': main() pika-1.2.0/docs/examples/tls_mutual_authentication.rst000066400000000000000000000071771400701476500232460ustar00rootroot00000000000000TLS parameters example ====================== This example demonstrates a TLS session with RabbitMQ using mutual authentication (server and client authentication). It was tested against RabbitMQ 3.7.4, using Python 3.6.5 and Pika 1.0.0. See `the RabbitMQ TLS/SSL documentation `_ for certificate generation and RabbitMQ TLS configuration. Please note that the `RabbitMQ TLS (x509 certificate) authentication mechanism `_ must be enabled for these examples to work. tls_example.py:: import logging import pika import ssl logging.basicConfig(level=logging.INFO) context = ssl.create_default_context( cafile="PIKA_DIR/testdata/certs/ca_certificate.pem") context.load_cert_chain("PIKA_DIR/testdata/certs/client_certificate.pem", "PIKA_DIR/testdata/certs/client_key.pem") ssl_options = pika.SSLOptions(context, "localhost") conn_params = pika.ConnectionParameters(port=5671, ssl_options=ssl_options) with pika.BlockingConnection(conn_params) as conn: ch = conn.channel() ch.queue_declare("foobar") ch.basic_publish("", "foobar", "Hello, world!") print(ch.basic_get("foobar")) rabbitmq.config:: # Enable AMQPS listeners.ssl.default = 5671 ssl_options.cacertfile = PIKA_DIR/testdata/certs/ca_certificate.pem ssl_options.certfile = PIKA_DIR/testdata/certs/server_certificate.pem ssl_options.keyfile = PIKA_DIR/testdata/certs/server_key.pem ssl_options.verify = verify_peer ssl_options.fail_if_no_peer_cert = true # Enable HTTPS management.listener.port = 15671 management.listener.ssl = true management.listener.ssl_opts.cacertfile = PIKA_DIR/testdata/certs/ca_certificate.pem 
management.listener.ssl_opts.certfile = PIKA_DIR/testdata/certs/server_certificate.pem management.listener.ssl_opts.keyfile = PIKA_DIR/testdata/certs/server_key.pem To perform mutual authentication with a Twisted connection:: from pika import ConnectionParameters from pika.adapters import twisted_connection from pika.credentials import ExternalCredentials from twisted.internet import defer, protocol, ssl, reactor @defer.inlineCallbacks def publish(connection): channel = yield connection.channel() yield channel.basic_publish( exchange='amq.topic', routing_key='hello.world', body='Hello World!', ) print("published") def connection_ready(conn): conn.ready.addCallback(lambda _: conn) return conn.ready # Load the CA certificate to validate the server's identity with open("PIKA_DIR/testdata/certs/ca_certificate.pem") as fd: ca_cert = ssl.Certificate.loadPEM(fd.read()) # Load the client certificate and key to authenticate with the server with open("PIKA_DIR/testdata/certs/client_key.pem") as fd: client_key = fd.read() with open("PIKA_DIR/testdata/certs/client_certificate.pem") as fd: client_cert = fd.read() client_keypair = ssl.PrivateCertificate.loadPEM(client_key + client_cert) context_factory = ssl.optionsForClientTLS( "localhost", trustRoot=ca_cert, clientCertificate=client_keypair, ) params = ConnectionParameters(credentials=ExternalCredentials()) cc = protocol.ClientCreator( reactor, twisted_connection.TwistedProtocolConnection, params) deferred = cc.connectSSL("localhost", 5671, context_factory) deferred.addCallback(connection_ready) deferred.addCallback(publish) reactor.run() pika-1.2.0/docs/examples/tls_server_authentication.rst000066400000000000000000000045701400701476500232370ustar00rootroot00000000000000TLS parameters example ============================= This example demonstrates a TLS session with RabbitMQ using server authentication.
It was tested against RabbitMQ 3.6.10, using Python 3.6.1 and pre-release Pika `0.11.0` Note the use of `ssl_version=ssl.PROTOCOL_TLSv1`. The recent versions of RabbitMQ disable older versions of SSL due to security vulnerabilities. See https://www.rabbitmq.com/ssl.html for certificate creation and rabbitmq SSL configuration instructions. tls_example.py:: import ssl import pika import logging logging.basicConfig(level=logging.INFO) context = ssl.SSLContext(ssl.PROTOCOL_TLSv1) context.verify_mode = ssl.CERT_REQUIRED context.load_verify_locations('/Users/me/tls-gen/basic/testca/cacert.pem') cp = pika.ConnectionParameters(ssl_options=pika.SSLOptions(context)) conn = pika.BlockingConnection(cp) ch = conn.channel() print(ch.queue_declare("sslq")) ch.publish("", "sslq", "abc") print(ch.basic_get("sslq")) rabbitmq.config:: %% Both the client and rabbitmq server were running on the same machine, a MacBookPro laptop. %% %% rabbitmq.config was created in its default location for OS X: /usr/local/etc/rabbitmq/rabbitmq.config. %% %% The contents of the example rabbitmq.config are for demonstration purposes only. See https://www.rabbitmq.com/ssl.html for instructions about creating the test certificates and the contents of rabbitmq.config. %% %% Note that the {fail_if_no_peer_cert,false} option, states that RabbitMQ should accept clients that don't have a certificate to send to the broker, but through the {verify,verify_peer} option, we state that if the client does send a certificate to the broker, the broker must be able to establish a chain of trust to it. [ {rabbit, [ {ssl_listeners, [{"127.0.0.1", 5671}]}, %% Configuring SSL. %% See http://www.rabbitmq.com/ssl.html for full documentation. %% {ssl_options, [{cacertfile, "/Users/me/tls-gen/basic/testca/cacert.pem"}, {certfile, "/Users/me/tls-gen/basic/server/cert.pem"}, {keyfile, "/Users/me/tls-gen/basic/server/key.pem"}, {verify, verify_peer}, {fail_if_no_peer_cert, false}]} ] } ]. 
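Note that ``ssl.PROTOCOL_TLSv1``, used in the example above, is deprecated in current Python releases. The following is a minimal sketch of building an equivalent client-side context with ``ssl.create_default_context`` instead; the CA bundle path in the comment is the one from the example above and must be adjusted for your own setup:

```python
import ssl

# create_default_context() negotiates the highest TLS version both peers
# support and enables certificate verification and hostname checking by
# default, so the explicit PROTOCOL_TLSv1 / CERT_REQUIRED settings above
# are no longer needed.
context = ssl.create_default_context()

# For a real broker, load the CA certificate that signed the server's
# certificate, e.g.:
#   context.load_verify_locations('/Users/me/tls-gen/basic/testca/cacert.pem')

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
```

The resulting context is passed to Pika the same way as above, via ``pika.ConnectionParameters(ssl_options=pika.SSLOptions(context))``.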
pika-1.2.0/docs/examples/tornado_consumer.rst000066400000000000000000000353301400701476500213270ustar00rootroot00000000000000Tornado Consumer ================ The following example implements a consumer using the :class:`Tornado adapter ` for the `Tornado framework `_ that will respond to RPC commands sent from RabbitMQ. For example, it will reconnect if RabbitMQ closes the connection and will shut down if RabbitMQ cancels the consumer or closes the channel. While it may look intimidating, each method is very short and represents an individual action that a consumer can take. consumer.py:: import logging import pika from pika import adapters from pika.adapters.tornado_connection import TornadoConnection from pika.exchange_type import ExchangeType LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExampleConsumer(object): """This is an example consumer that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission-related issues or socket timeouts. If the channel is closed, it will indicate a problem with one of the commands that were issued and that should surface in the output as well. """ EXCHANGE = 'message' EXCHANGE_TYPE = ExchangeType.topic QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. :param str amqp_url: The AMQP url to connect with """ self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle.
When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return TornadoConnection( pika.URLParameters(self._url), self.on_connection_open, ) def close_connection(self): """This method closes the connection to RabbitMQ.""" LOGGER.info('Closing connection') self._connection.close() def add_on_connection_close_callback(self): """This method adds an on close callback that will be invoked by pika when RabbitMQ closes the connection to the publisher unexpectedly. """ LOGGER.info('Adding connection close callback') self._connection.add_on_close_callback(self.on_connection_closed) def on_connection_closed(self, connection, reason): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param Exception reason: exception representing reason for loss of connection. """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: %s', reason) self._connection.ioloop.call_later(5, self.reconnect) def on_connection_open(self, unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :param pika.SelectConnection _unused_connection: The connection """ LOGGER.info('Connection opened') self.add_on_connection_close_callback() self.open_channel() def reconnect(self): """Will be invoked by the IOLoop timer if the connection is closed. See the on_connection_closed method. 
""" if not self._closing: # Create a new connection self._connection = self.connect() def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reason): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel: The closed channel :param Exception reason: why the channel was closed """ LOGGER.warning('Channel %i was closed: %s', channel, reason) self._connection.close() def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) self._channel.exchange_declare( callback=self.on_exchange_declareok, exchange=exchange_name, exchange_type=self.EXCHANGE_TYPE, ) def on_exchange_declareok(self, unused_frame): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. 
:param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame """ LOGGER.info('Exchange declared') self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare( queue=queue_name, callback=self.on_queue_declareok, ) def on_queue_declareok(self, method_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. :param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind( queue=self.QUEUE, exchange=self.EXCHANGE, routing_key=self.ROUTING_KEY, callback=self.on_bindok, ) def add_on_cancel_callback(self): """Add a callback that will be invoked if RabbitMQ cancels the consumer for some reason. If RabbitMQ does cancel the consumer, on_consumer_cancelled will be invoked by pika. """ LOGGER.info('Adding consumer cancellation callback') self._channel.add_on_cancel_callback(self.on_consumer_cancelled) def on_consumer_cancelled(self, method_frame): """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer receiving messages. :param pika.frame.Method method_frame: The Basic.Cancel frame """ LOGGER.info('Consumer was cancelled remotely, shutting down: %r', method_frame) if self._channel: self._channel.close() def acknowledge_message(self, delivery_tag): """Acknowledge the message delivery from RabbitMQ by sending a Basic.Ack RPC method for the delivery tag. 
:param int delivery_tag: The delivery tag from the Basic.Deliver frame """ LOGGER.info('Acknowledging message %s', delivery_tag) self._channel.basic_ack(delivery_tag) def on_message(self, unused_channel, basic_deliver, properties, body): """Invoked by pika when a message is delivered from RabbitMQ. The channel is passed for your convenience. The basic_deliver object that is passed in carries the exchange, routing key, delivery tag and a redelivered flag for the message. The properties passed in is an instance of BasicProperties with the message properties and the body is the message that was sent. :param pika.channel.Channel unused_channel: The channel object :param pika.Spec.Basic.Deliver: basic_deliver method :param pika.Spec.BasicProperties: properties :param bytes body: The message body """ LOGGER.info('Received message # %s from %s: %s', basic_deliver.delivery_tag, properties.app_id, body) self.acknowledge_message(basic_deliver.delivery_tag) def on_cancelok(self, unused_frame): """This method is invoked by pika when RabbitMQ acknowledges the cancellation of a consumer. At this point we will close the channel. This will invoke the on_channel_closed method once the channel has been closed, which will in-turn close the connection. :param pika.frame.Method unused_frame: The Basic.CancelOk frame """ LOGGER.info('RabbitMQ acknowledged the cancellation of the consumer') self.close_channel() def stop_consuming(self): """Tell RabbitMQ that you would like to stop consuming by sending the Basic.Cancel RPC command. """ if self._channel: LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ') self._channel.basic_cancel(self.on_cancelok, self._consumer_tag) def start_consuming(self): """This method sets up the consumer by first calling add_on_cancel_callback so that the object is notified if RabbitMQ cancels the consumer. It then issues the Basic.Consume RPC command which returns the consumer tag that is used to uniquely identify the consumer with RabbitMQ. 
We keep the value to use it when we want to cancel consuming. The on_message method is passed in as a callback pika will invoke when a message is fully received. """ LOGGER.info('Issuing consumer related RPC commands') self.add_on_cancel_callback() self._consumer_tag = self._channel.basic_consume( on_message_callback=self.on_message, queue=self.QUEUE, ) def on_bindok(self, unused_frame): """Invoked by pika when the Queue.Bind method has completed. At this point we will start consuming messages by calling start_consuming which will invoke the needed RPC commands to start the process. :param pika.frame.Method unused_frame: The Queue.BindOk response frame """ LOGGER.info('Queue bound') self.start_consuming() def close_channel(self): """Call to close the channel with RabbitMQ cleanly by issuing the Channel.Close RPC command. """ LOGGER.info('Closing the channel') self._channel.close() def open_channel(self): """Open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ responds that the channel is open, the on_channel_open callback will be invoked by pika. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def run(self): """Run the example consumer by connecting to RabbitMQ and then starting the IOLoop to block and allow the SelectConnection to operate. """ self._connection = self.connect() self._connection.ioloop.start() def stop(self): """Cleanly shut down the connection to RabbitMQ by stopping the consumer with RabbitMQ. When RabbitMQ confirms the cancellation, on_cancelok will be invoked by pika, which will then close the channel and connection. The IOLoop is started again because this method is invoked when CTRL-C is pressed, raising a KeyboardInterrupt exception. This exception stops the IOLoop which needs to be running for pika to communicate with RabbitMQ. All of the commands issued prior to starting the IOLoop will be buffered but not processed.
""" LOGGER.info('Stopping') self._closing = True self.stop_consuming() self._connection.ioloop.start() LOGGER.info('Stopped') def main(): logging.basicConfig(level=logging.INFO, format=LOG_FORMAT) example = ExampleConsumer('amqp://guest:guest@localhost:5672/%2F') try: example.run() except KeyboardInterrupt: example.stop() if __name__ == '__main__': main() pika-1.2.0/docs/examples/twisted_example.rst000066400000000000000000000004401400701476500211360ustar00rootroot00000000000000Twisted Consumer Example ======================== Example of writing an application using the :py:class:`Twisted connection adapter `::. `Twisted Example `_ pika-1.2.0/docs/examples/using_urlparameters.rst000066400000000000000000000077001400701476500220410ustar00rootroot00000000000000Using URLParameters =================== Pika has two methods of encapsulating the data that lets it know how to connect to RabbitMQ, :py:class:`pika.connection.ConnectionParameters` and :py:class:`pika.connection.URLParameters`. .. note:: If you're connecting to RabbitMQ on localhost on port 5672, with the default virtual host of */* and the default username and password of *guest* and *guest*, you do not need to specify connection parameters when connecting. Using :py:class:`pika.connection.URLParameters` is an easy way to minimize the variables required to connect to RabbitMQ and supports all of the directives that :py:class:`pika.connection.ConnectionParameters` supports. The following is the format for the URLParameters connection value:: scheme://username:password@host:port/virtual_host?key=value&key=value As you can see, by default, the scheme (amqp, amqps), username, password, host, port and virtual host make up the core of the URL and any other parameter is passed in as query string values. Example Connection URLS ----------------------- The default connection URL connects to the / virtual host as guest using the guest password on localhost port 5672. 
Note the forward slash in the URL is encoded to %2F:: amqp://guest:guest@localhost:5672/%2F Connect to a host *rabbit1* as the user *www-data* using the password *rabbit_pwd* on the virtual host *web_messages*:: amqp://www-data:rabbit_pwd@rabbit1/web_messages Connecting via SSL is pretty easy too. To connect via SSL for the previous example, simply change the scheme to *amqps*. If you do not specify a port, Pika will use the default SSL port of 5671:: amqps://www-data:rabbit_pwd@rabbit1/web_messages If you're looking to tweak other parameters, such as enabling heartbeats, simply add the key/value pair as a query string value. The following builds upon the SSL connection, enabling heartbeats every 30 seconds:: amqps://www-data:rabbit_pwd@rabbit1/web_messages?heartbeat=30 Options that are available as query string values: - backpressure_detection: Pass in a value of *t* to enable backpressure detection; it is disabled by default. - channel_max: Alter the default channel maximum by passing in a 32-bit integer value here. - connection_attempts: Alter the default of 1 connection attempt by passing in an integer value here. - frame_max: Alter the default frame maximum size value by passing in a long integer value [#f1]_. - heartbeat: Pass a value greater than zero to enable heartbeats between the server and your application. The integer value you pass here will be the number of seconds between heartbeats. - locale: Set the locale of the client using an underscore-delimited POSIX locale code in ll_CC format (en_US, pt_BR, de_DE). - retry_delay: The number of seconds to wait before attempting to reconnect on a failed connection, if connection_attempts is > 0. - socket_timeout: Change the default socket timeout duration from 0.25 seconds to another integer or float value. Adjust with caution. - ssl_options: A URL-encoded dict of values for the SSL connection.
The available keys are: - ca_certs - cert_reqs - certfile - keyfile - ssl_version For information on what the ssl_options can be set to, reference the `official Python documentation `_. Here is an example of setting the client certificate and key:: amqp://www-data:rabbit_pwd@rabbit1/web_messages?heartbeat=30&ssl_options=%7B%27keyfile%27%3A+%27%2Fetc%2Fssl%2Fmykey.pem%27%2C+%27certfile%27%3A+%27%2Fetc%2Fssl%2Fmycert.pem%27%7D The following example demonstrates how to generate the ssl_options string with `Python's urllib `_:: import urllib urllib.urlencode({'ssl_options': {'certfile': '/etc/ssl/mycert.pem', 'keyfile': '/etc/ssl/mykey.pem'}}) .. rubric:: Footnotes .. [#f1] The AMQP specification states that a server can reject a request for a frame size larger than the value it passes during content negotiation. pika-1.2.0/docs/faq.rst000066400000000000000000000025471400701476500147030ustar00rootroot00000000000000Frequently Asked Questions -------------------------- - Is Pika thread safe? Pika does not have any notion of threading in the code. If you want to use Pika with threading, make sure you have a Pika connection per thread, created in that thread. It is not safe to share one Pika connection across threads, with one exception: you may call the connection method `add_callback_threadsafe` from another thread to schedule a callback within an active pika connection. - How do I report a bug with Pika? The `main Pika repository `_ is hosted on `Github `_ and we use the Issue tracker at `https://github.com/pika/pika/issues `_. - Is there a mailing list for Pika? Yes, Pika's mailing list is available `on Google Groups `_ and the email address is pika-python@googlegroups.com, though traditionally questions about Pika have been asked on the `RabbitMQ mailing list `_. - How can I contribute to Pika? You can `fork the project on Github `_ and issue `Pull Requests `_ when you believe you have something solid to be added to the main repository.
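The thread-safety rule in the FAQ's first answer can be sketched as follows. This is an illustrative pattern, not Pika API code: ``start_worker`` and its internals are hypothetical names, and the ``connection`` and ``channel`` arguments are assumed to be the live objects an existing consumer already holds:

```python
import functools
import threading


def start_worker(connection, channel, delivery_tag, body):
    """Process a delivery on a worker thread, then acknowledge it safely.

    Only add_callback_threadsafe() may be called on the connection from
    another thread; basic_ack() itself must run on the connection's thread.
    """
    def work():
        body.upper()  # stand-in for real, long-running processing
        ack = functools.partial(channel.basic_ack, delivery_tag=delivery_tag)
        # Schedule the ack to run on the connection's own thread instead of
        # calling channel.basic_ack() directly from this worker thread.
        connection.add_callback_threadsafe(ack)

    worker = threading.Thread(target=work)
    worker.start()
    return worker
```

The same pattern applies to any channel operation (publish, nack, etc.): wrap it in a callable and hand it to ``add_callback_threadsafe`` rather than invoking it from the worker thread.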
pika-1.2.0/docs/index.rst000066400000000000000000000014401400701476500152320ustar00rootroot00000000000000Introduction to Pika ==================== Pika is a pure-Python implementation of the AMQP 0-9-1 protocol that tries to stay fairly independent of the underlying network support library. If you have not developed with Pika or RabbitMQ before, the :doc:`intro` documentation is a good place to get started. Installing Pika --------------- Pika is available for download via PyPI and may be installed using easy_install or pip:: pip install pika or:: easy_install pika To install from source, run "python setup.py install" in the root source directory. Using Pika ---------- .. toctree:: :glob: :maxdepth: 1 intro modules/index examples faq contributors version_history Indices and tables ------------------ * :ref:`genindex` * :ref:`modindex` * :ref:`search` pika-1.2.0/docs/intro.rst000066400000000000000000000142471400701476500152670ustar00rootroot00000000000000Introduction to Pika ==================== IO and Event Looping -------------------- As AMQP is a two-way RPC protocol where the client can send requests to the server and the server can send requests to a client, Pika implements or extends IO loops in each of its asynchronous connection adapters. These IO loops are blocking methods which loop and listen for events. Each asynchronous adapter follows the same standard for invoking the IO loop. The IO loop is created when the connection adapter is created. To start an IO loop for any given adapter, call the ``connection.ioloop.start()`` method. If you are using an external IO loop such as Tornado's :class:`~tornado.ioloop.IOLoop` you invoke it normally and then add the Pika Tornado adapter to it. 
Example:: import pika def on_open(connection): # Invoked when the connection is open pass # Create our connection object, passing in the on_open method connection = pika.SelectConnection(on_open_callback=on_open) try: # Loop so we can communicate with RabbitMQ connection.ioloop.start() except KeyboardInterrupt: # Gracefully close the connection connection.close() # Loop until we're fully closed, will stop on its own connection.ioloop.start() .. _intro_to_cps: Continuation-Passing Style -------------------------- Interfacing with Pika asynchronously is done by passing in callback methods you would like to have invoked when a certain event completes. For example, if you are going to declare a queue, you pass in a method that will be called when the RabbitMQ server returns a `Queue.DeclareOk `_ response. In our example below we use the following five easy steps: #. We start by creating our connection object, then starting our event loop. #. When we are connected, the *on_connected* method is called. In that method we create a channel. #. When the channel is created, the *on_channel_open* method is called. In that method we declare a queue. #. When the queue is declared successfully, *on_queue_declared* is called. In that method we call :py:meth:`channel.basic_consume ` telling it to call the handle_delivery for each message RabbitMQ delivers to us. #. When RabbitMQ has a message to send us, it calls the handle_delivery method passing the AMQP Method frame, Header frame, and Body. .. NOTE:: Step #1 is on line #28 and Step #2 is on line #6. This is so that Python knows about the functions we'll call in Steps #2 through #5. .. 
_cps_example: Example:: import pika # Create a global channel variable to hold our channel object in channel = None # Step #2 def on_connected(connection): """Called when we are fully connected to RabbitMQ""" # Open a channel connection.channel(on_open_callback=on_channel_open) # Step #3 def on_channel_open(new_channel): """Called when our channel has opened""" global channel channel = new_channel channel.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False, callback=on_queue_declared) # Step #4 def on_queue_declared(frame): """Called when RabbitMQ has told us our Queue has been declared, frame is the response from RabbitMQ""" channel.basic_consume('test', handle_delivery) # Step #5 def handle_delivery(channel, method, header, body): """Called when we receive a message from RabbitMQ""" print(body) # Step #1: Connect to RabbitMQ using the default parameters parameters = pika.ConnectionParameters() connection = pika.SelectConnection(parameters, on_open_callback=on_connected) try: # Loop so we can communicate with RabbitMQ connection.ioloop.start() except KeyboardInterrupt: # Gracefully close the connection connection.close() # Loop until we're fully closed, will stop on its own connection.ioloop.start() Credentials ----------- The :mod:`pika.credentials` module provides the mechanism by which you pass the username and password to the :py:class:`ConnectionParameters ` class when it is created. Example:: import pika credentials = pika.PlainCredentials('username', 'password') parameters = pika.ConnectionParameters(credentials=credentials) .. _connection_parameters: Connection Parameters --------------------- There are two types of connection parameter classes in Pika to allow you to pass the connection information into a connection adapter, :class:`ConnectionParameters ` and :class:`URLParameters `. Both classes share the same default connection values. .. 
_intro_to_backpressure: TCP Backpressure ---------------- As of RabbitMQ 2.0, client side `Channel.Flow `_ has been removed [#f1]_. Instead, the RabbitMQ broker uses TCP Backpressure to slow your client if it is delivering messages too fast. If you pass in backpressure_detection into your connection parameters, Pika attempts to help you handle this situation by providing a mechanism by which you may be notified if Pika has noticed too many frames have yet to be delivered. By registering a callback function with the :py:meth:`add_backpressure_callback ` method of any connection adapter, your function will be called when Pika sees that a backlog of 10 times the average frame size you have been sending has been exceeded. You may tweak the notification multiplier value by calling the :py:meth:`set_backpressure_multiplier ` method passing any integer value. Example:: import pika parameters = pika.URLParameters('amqp://guest:guest@rabbit-server1:5672/%2F?backpressure_detection=t') .. rubric:: Footnotes .. [#f1] "more effective flow control mechanism that does not require cooperation from clients and reacts quickly to prevent the broker from exhausting memory - see http://lists.rabbitmq.com/pipermail/rabbitmq-announce/attachments/20100825/2c672695/attachment.txt pika-1.2.0/docs/modules/000077500000000000000000000000001400701476500150425ustar00rootroot00000000000000pika-1.2.0/docs/modules/adapters/000077500000000000000000000000001400701476500166455ustar00rootroot00000000000000pika-1.2.0/docs/modules/adapters/asyncio.rst000066400000000000000000000005441400701476500210470ustar00rootroot00000000000000asyncio Connection Adapter ========================== .. automodule:: pika.adapters.asyncio_connection Be sure to check out the :doc:`asynchronous examples ` including the asyncio specific :doc:`consumer ` example. .. 
autoclass:: pika.adapters.asyncio_connection.AsyncioConnection :members: :inherited-members: pika-1.2.0/docs/modules/adapters/blocking.rst000066400000000000000000000005271400701476500211730ustar00rootroot00000000000000BlockingConnection ------------------ .. automodule:: pika.adapters.blocking_connection Be sure to check out examples in :doc:`/examples`. .. autoclass:: pika.adapters.blocking_connection.BlockingConnection :members: :inherited-members: .. autoclass:: pika.adapters.blocking_connection.BlockingChannel :members: :inherited-members: pika-1.2.0/docs/modules/adapters/index.rst000066400000000000000000000007461400701476500205150ustar00rootroot00000000000000Connection Adapters =================== Pika uses connection adapters to provide a flexible method for adapting pika's core communication to different IOLoop implementations. In addition to asynchronous adapters, there is the :class:`BlockingConnection ` adapter that provides a more idiomatic procedural approach to using Pika. Adapters -------- .. toctree:: :glob: :maxdepth: 1 blocking select tornado twisted pika-1.2.0/docs/modules/adapters/select.rst000066400000000000000000000003101400701476500206500ustar00rootroot00000000000000Select Connection Adapter ========================== .. automodule:: pika.adapters.select_connection .. autoclass:: pika.adapters.select_connection.SelectConnection :members: :inherited-members: pika-1.2.0/docs/modules/adapters/tornado.rst000066400000000000000000000005441400701476500210500ustar00rootroot00000000000000Tornado Connection Adapter ========================== .. automodule:: pika.adapters.tornado_connection Be sure to check out the :doc:`asynchronous examples ` including the Tornado specific :doc:`consumer ` example. .. 
autoclass:: pika.adapters.tornado_connection.TornadoConnection :members: :inherited-members: pika-1.2.0/docs/modules/adapters/twisted.rst000066400000000000000000000006371400701476500210700ustar00rootroot00000000000000Twisted Connection Adapter ========================== .. automodule:: pika.adapters.twisted_connection .. autoclass:: pika.adapters.twisted_connection.TwistedProtocolConnection :members: :inherited-members: .. autoclass:: pika.adapters.twisted_connection.TwistedChannel :members: :inherited-members: .. autoclass:: pika.adapters.twisted_connection.ClosableDeferredQueue :members: :inherited-members: pika-1.2.0/docs/modules/channel.rst000066400000000000000000000002241400701476500172020ustar00rootroot00000000000000Channel ======= .. automodule:: pika.channel Channel ------- .. autoclass:: Channel :members: :inherited-members: :member-order: bysource pika-1.2.0/docs/modules/connection.rst000066400000000000000000000002771400701476500177410ustar00rootroot00000000000000Connection ---------- The :class:`~pika.connection.Connection` class implements the base behavior that all connection adapters extend. .. autoclass:: pika.connection.Connection :members: pika-1.2.0/docs/modules/credentials.rst000066400000000000000000000005111400701476500200660ustar00rootroot00000000000000Authentication Credentials ========================== .. automodule:: pika.credentials PlainCredentials ---------------- .. autoclass:: PlainCredentials :members: :inherited-members: :noindex: ExternalCredentials ------------------- .. autoclass:: ExternalCredentials :members: :inherited-members: :noindex: pika-1.2.0/docs/modules/exceptions.rst000066400000000000000000000001261400701476500177540ustar00rootroot00000000000000Exceptions ========== .. 
automodule:: pika.exceptions :members: :undoc-members: pika-1.2.0/docs/modules/index.rst000066400000000000000000000015751400701476500167130ustar00rootroot00000000000000Core Class and Module Documentation =================================== For the end user, Pika is organized into a small set of objects for all communication with RabbitMQ. - A :doc:`connection adapter ` is used to connect to RabbitMQ and manages the connection. - :doc:`Connection parameters ` are used to instruct the :class:`~pika.connection.Connection` object how to connect to RabbitMQ. - :doc:`credentials` are used to encapsulate all authentication information for the :class:`~pika.connection.ConnectionParameters` class. - A :class:`~pika.channel.Channel` object is used to communicate with RabbitMQ via the AMQP RPC methods. - :doc:`exceptions` are raised at various points when using Pika when something goes wrong. .. toctree:: :hidden: :maxdepth: 1 adapters/index channel connection credentials exceptions parameters spec pika-1.2.0/docs/modules/parameters.rst000066400000000000000000000034261400701476500177440ustar00rootroot00000000000000Connection Parameters ===================== To maintain flexibility in how you specify the connection information required for your applications to properly connect to RabbitMQ, pika implements two classes for encapsulating the information, :class:`~pika.connection.ConnectionParameters` and :class:`~pika.connection.URLParameters`. ConnectionParameters -------------------- The classic object for specifying all of the connection parameters required to connect to RabbitMQ, :class:`~pika.connection.ConnectionParameters` provides attributes for tweaking every possible connection option. 
Example:: import pika # Set the connection parameters to connect to rabbit-server1 on port 5672 # on the / virtual host using the username "guest" and password "guest" credentials = pika.PlainCredentials('guest', 'guest') parameters = pika.ConnectionParameters('rabbit-server1', 5672, '/', credentials) .. autoclass:: pika.connection.ConnectionParameters :members: :inherited-members: :member-order: bysource URLParameters ------------- The :class:`~pika.connection.URLParameters` class allows you to pass in an AMQP URL when creating the object and supports the host, port, virtual host, ssl, username and password in the base URL and other options are passed in via query parameters. Example:: import pika # Set the connection parameters to connect to rabbit-server1 on port 5672 # on the / virtual host using the username "guest" and password "guest" parameters = pika.URLParameters('amqp://guest:guest@rabbit-server1:5672/%2F') .. autoclass:: pika.connection.URLParameters :members: :inherited-members: :member-order: bysource pika-1.2.0/docs/modules/spec.rst000066400000000000000000000002011400701476500165170ustar00rootroot00000000000000pika.spec ========= .. 
automodule:: pika.spec :members: :inherited-members: :member-order: bysource :undoc-members: pika-1.2.0/docs/version_history.rst000066400000000000000000001366401400701476500174020ustar00rootroot00000000000000Version History =============== 1.2.0 2021-02-04 ---------------- `GitHub milestone `_ 1.1.0 2019-07-16 ---------------- `GitHub milestone `_ 1.0.1 2019-04-12 ---------------- `GitHub milestone `_ - API docstring updates - Twisted adapter: Add basic_consume Deferred to the call list (`PR `_) 1.0.0 2019-03-26 ---------------- `GitHub milestone `_ - ``AsyncioConnection``, ``TornadoConnection`` and ``TwistedProtocolConnection`` are no longer auto-imported (`PR `_) - ``BlockingConnection.consume`` now returns ``(None, None, None)`` when inactivity timeout is reached (`PR `_) - Python 3.7 support (`Issue `_) - ``all_channels`` parameter of the ``Channel.basic_qos`` method renamed to ``global_qos`` - ``global_`` parameter of the ``Basic.Qos`` spec class renamed to ``global_qos`` - **NOTE:** ``heartbeat_interval`` is removed, use ``heartbeat`` instead. - **NOTE:** The `backpressure_detection` option of `ConnectionParameters` and `URLParameters` property is REMOVED in favor of `Connection.Blocked` and `Connection.Unblocked`. See `Connection.add_on_connection_blocked_callback`. - **NOTE:** The legacy ``basic_publish`` method is removed, and ``publish`` renamed to ``basic_publish`` - **NOTE**: The signature of the following methods has changed from Pika 0.13.0. In general, the callback parameter that indicates completion of the method has been moved to the end of the parameter list to be consistent with other parts of Pika's API and with other libraries in general.
- ``basic_cancel`` - ``basic_consume`` - ``basic_get`` - ``basic_qos`` - ``basic_recover`` - ``confirm_delivery`` - ``exchange_bind`` - ``exchange_declare`` - ``exchange_delete`` - ``exchange_unbind`` - ``flow`` - ``queue_bind`` - ``queue_declare`` - ``queue_delete`` - ``queue_purge`` - ``queue_unbind`` **IMPORTANT**: When specifying TLS / SSL options, the ``SSLOptions`` class must be used, and a ``dict`` is no longer supported. 0.13.1 2019-02-04 ----------------- `GitHub milestone `_ 0.13.0 2019-01-17 ----------------- `GitHub milestone `_ 0.12.0 2018-06-19 ----------------- `GitHub milestone `_ This is an interim release prior to version `1.0.0`. It includes the following backported pull requests and commits from the `master` branch: - `PR #901 `_ - `PR #908 `_ - `PR #910 `_ - `PR #918 `_ - `PR #920 `_ - `PR #924 `_ - `PR #937 `_ - `PR #938 `_ - `PR #933 `_ - `PR #940 `_ - `PR #932 `_ - `PR #928 `_ - `PR #934 `_ - `PR #915 `_ - `PR #946 `_ - `PR #947 `_ - `PR #952 `_ - `PR #956 `_ - `PR #966 `_ - `PR #975 `_ - `PR #978 `_ - `PR #981 `_ - `PR #994 `_ - `PR #1007 `_ - `PR #1045 `_ (manually backported) - `PR #1011 `_ Commits: Travis CI fail fast - 3f0e739 New features: ``BlockingConnection.consume`` now returns ``(None, None, None)`` when inactivity timeout is reached (`PR `_) ``BlockingConnection`` now supports the ``add_callback_threadsafe`` method which allows a function to be executed correctly on the IO loop thread. The main use-case for this is as follows: - Application sets up a thread for ``BlockingConnection`` and calls ``basic_consume`` on it - When a message is received, work is done on another thread - When the work is done, the worker uses ``connection.add_callback_threadsafe`` to call the ``basic_ack`` method on the channel instance. Please see ``examples/basic_consumer_threaded.py`` for an example. As always, ``SelectConnection`` and a fully async consumer/publisher is the preferred method of using Pika. 
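The ``add_callback_threadsafe`` use-case described above boils down to one pattern: worker threads never touch the channel directly; they hand a completion callback back to the connection's I/O-loop thread. Here is a broker-free, stdlib-only sketch of that marshalling pattern — ``MiniLoop``, ``worker`` and the ``acked`` list are illustrative inventions for this sketch, not Pika classes; see ``examples/basic_consumer_threaded.py`` for the real thing::

```python
import queue
import threading

class MiniLoop:
    """Toy stand-in for the connection's I/O-loop thread."""

    def __init__(self):
        self._callbacks = queue.Queue()
        self._stopped = False

    def add_callback_threadsafe(self, callback):
        # Safe to call from any thread: only the queue is shared.
        self._callbacks.put(callback)

    def stop(self):
        self.add_callback_threadsafe(lambda: setattr(self, "_stopped", True))

    def run(self):
        # Every queued callback executes here, on the loop's own thread.
        while not self._stopped:
            self._callbacks.get()()

acked = []

def worker(loop, delivery_tag):
    # ... heavy message processing happens here, off the loop thread ...
    # then the "basic_ack" is marshalled back to the loop thread instead
    # of touching shared state (here: the acked list) directly.
    loop.add_callback_threadsafe(lambda: acked.append(delivery_tag))

loop = MiniLoop()
threads = [threading.Thread(target=worker, args=(loop, tag)) for tag in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()        # all three "acks" are now queued
loop.stop()         # queue the shutdown callback last
loop.run()          # drains: ack, ack, ack, stop
print(sorted(acked))  # [1, 2, 3]
```

With the real adapter, the lambda would invoke ``channel.basic_ack(delivery_tag)`` and the loop would be the ``BlockingConnection`` itself.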
Heartbeats are now sent at an interval equal to 1/2 of the negotiated idle connection timeout. RabbitMQ's default timeout value is 60 seconds, so heartbeats will be sent at a 30 second interval. In addition, Pika's check for an idle connection will be done at an interval equal to the timeout value plus 5 seconds to allow for delays. This results in an interval of 65 seconds by default. 0.11.2 2017-11-30 ----------------- `GitHub milestone `_ `0.11.2 `_ - Remove `+` character from platform releases string (`PR `_) 0.11.1 2017-11-27 ----------------- `GitHub milestone `_ `0.11.1 `_ - Fix `BlockingConnection` to ensure event loop exits (`PR `_) - Heartbeat timeouts will use the client value if specified (`PR `_) - Allow setting some common TCP options (`PR `_) - Errors when decoding Unicode are ignored (`PR `_) - Fix large number encoding (`PR `_) 0.11.0 2017-07-29 ----------------- `GitHub milestone `_ `0.11.0 `_ - Simplify Travis CI configuration for OS X. - Add `asyncio` connection adapter for Python 3.4 and newer. - Connection failures that occur after the socket is opened and before the AMQP connection is ready to go are now reported by calling the connection error callback. Previously these were not consistently reported. - In BaseConnection.close, call _handle_ioloop_stop only if the connection is already closed to allow the asynchronous close operation to complete gracefully. - Pass error information from failed socket connection to user callbacks on_open_error_callback and on_close_callback with result_code=-1. - ValueError is raised when a completion callback is passed to an asynchronous (nowait) Channel operation. It's an application error to pass a non-None completion callback with an asynchronous request, because this callback can never be serviced in the asynchronous scenario. - `Channel.basic_reject` fixed to allow `delivery_tag` to be of type `long` as well as `int`. 
(by quantum5) - Implemented support for blocked connection timeouts in `pika.connection.Connection`. This feature is available to all pika adapters. See `pika.connection.ConnectionParameters` docstring to learn more about `blocked_connection_timeout` configuration. - Deprecated the `heartbeat_interval` arg in `pika.ConnectionParameters` in favor of the `heartbeat` arg for consistency with the other connection parameters classes `pika.connection.Parameters` and `pika.URLParameters`. - When the `port` arg is not set explicitly in the `ConnectionParameters` constructor, but the `ssl` arg is set explicitly, then the port value is set to the default AMQP SSL port if SSL is enabled, otherwise to the default AMQP plaintext port. - `URLParameters` will raise ValueError if a non-empty URL scheme other than {amqp | amqps | http | https} is specified. - `InvalidMinimumFrameSize` and `InvalidMaximumFrameSize` exceptions are deprecated. pika.connection.Parameters.frame_max property setter now raises the standard `ValueError` exception when the value is out of bounds. - Removed deprecated parameter `type` in `Channel.exchange_declare` and `BlockingChannel.exchange_declare` in favor of the `exchange_type` arg that doesn't overshadow the builtin `type` keyword. - Channel.close() on an OPENING channel transitions it to CLOSING instead of raising ChannelClosed. - Channel.close() on a CLOSING channel raises `ChannelAlreadyClosing`; it used to raise `ChannelClosed`. - Connection.channel() raises `ConnectionClosed` if the connection is not in the OPEN state. - When performing graceful close on a channel and `Channel.Close` from the broker arrives while waiting for CloseOk, don't release the channel number until CloseOk arrives, to avoid a race condition that may lead to a new channel receiving the CloseOk that was destined for the closing channel. - The `backpressure_detection` option of `ConnectionParameters` and `URLParameters` property is DEPRECATED in favor of `Connection.Blocked` and `Connection.Unblocked`.
See `Connection.add_on_connection_blocked_callback`. 0.10.0 2015-09-02 ----------------- `0.10.0 `_ - a9bf96d - LibevConnection: Fixed dict chgd size during iteration (Michael Laing) - 388c55d - SelectConnection: Fixed KeyError exceptions in IOLoop timeout executions (Shinji Suzuki) - 4780de3 - BlockingConnection: Add support to make BlockingConnection a Context Manager (@reddec) 0.10.0b2 2015-07-15 ------------------- - f72b58f - Fixed failure to purge _ConsumerCancellationEvt from BlockingChannel._pending_events during basic_cancel. (Vitaly Kruglikov) 0.10.0b1 2015-07-10 ------------------- High-level summary of notable changes: - Change to 3-Clause BSD License - Python 3.x support - Over 150 commits from 19 contributors - Refactoring of SelectConnection ioloop - This major release contains certain non-backward-compatible API changes as well as significant performance improvements in the `BlockingConnection` adapter. - Non-backward-compatible changes in `Channel.add_on_return_callback` callback's signature. - The `AsyncoreConnection` adapter was retired **Details** Python 3.x: this release introduces python 3.x support. Tested on Python 3.3 and 3.4. `AsyncoreConnection`: Retired this legacy adapter to reduce maintenance burden; the recommended replacement is the `SelectConnection` adapter. `SelectConnection`: ioloop was refactored for compatibility with other ioloops. `Channel.add_on_return_callback`: The callback is now passed the individual parameters channel, method, properties, and body instead of a tuple of those values for congruence with other similar callbacks. `BlockingConnection`: This adapter underwent a makeover under the hood and gained significant performance improvements as well as enhanced timer resolution. It is now implemented as a client of the `SelectConnection` adapter. 
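The "client of the `SelectConnection` adapter" design can be pictured with a deliberately tiny sketch — `ToyAsyncCore` and `ToyBlockingFacade` are illustrative inventions, not Pika's actual classes: each blocking call registers a completion callback with an inner async core, then spins that core's loop until the callback fires::

```python
class ToyAsyncCore:
    """Toy async core standing in for SelectConnection's ioloop."""

    def __init__(self):
        self._pending = []

    def call_soon(self, callback, result):
        # Record work to be completed on a later loop pass.
        self._pending.append((callback, result))

    def run_once(self):
        # One iteration of the event loop: dispatch one completion.
        if self._pending:
            callback, result = self._pending.pop(0)
            callback(result)

class ToyBlockingFacade:
    """Blocking wrapper that drives the async core internally."""

    def __init__(self):
        self._core = ToyAsyncCore()

    def queue_declare(self, name):
        done = {}
        # Issue the request against the async core ...
        self._core.call_soon(lambda r: done.update(result=r),
                             "queue.declare-ok: " + name)
        # ... then block the caller by spinning the core's loop
        # until the completion callback has fired.
        while "result" not in done:
            self._core.run_once()
        return done["result"]

facade = ToyBlockingFacade()
print(facade.queue_declare("test"))  # queue.declare-ok: test
```

The real adapter works the same way in spirit: the caller blocks while `SelectConnection`'s ioloop runs underneath until the expected AMQP reply (e.g. `Queue.DeclareOk`) arrives.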
Below is an overview of the `BlockingConnection` and `BlockingChannel` API changes: - Recursion: the new implementation eliminates callback recursion that sometimes blew out the stack in the legacy implementation (e.g., publish -> consumer_callback -> publish -> consumer_callback, etc.). While `BlockingConnection.process_data_events` and `BlockingConnection.sleep` may still be called from the scope of the blocking adapter's callbacks in order to process pending I/O, additional callbacks will be suppressed whenever `BlockingConnection.process_data_events` and `BlockingConnection.sleep` are nested in any combination; in that case, the callback information will be buffered and dispatched once nesting unwinds and control returns to the level-zero dispatcher. - `BlockingConnection.connect`: this method was removed in favor of the constructor as the only way to establish connections; this reduces maintenance burden, while improving reliability of the adapter. - `BlockingConnection.process_data_events`: added the optional parameter `time_limit`. - `BlockingConnection.add_on_close_callback`: removed; legacy raised `NotImplementedError`. - `BlockingConnection.add_on_open_callback`: removed; legacy raised `NotImplementedError`. - `BlockingConnection.add_on_open_error_callback`: removed; legacy raised `NotImplementedError`. - `BlockingConnection.add_backpressure_callback`: not supported - `BlockingConnection.set_backpressure_multiplier`: not supported - `BlockingChannel.add_on_flow_callback`: not supported; per docstring in channel.py: "Note that newer versions of RabbitMQ will not issue this but instead use TCP backpressure". - `BlockingChannel.flow`: not supported - `BlockingChannel.force_data_events`: removed as it is no longer necessary following redesign of the adapter.
- Removed the `nowait` parameter from `BlockingChannel` methods, forcing `nowait=False` (former API default) in the implementation; this is more suitable for the blocking nature of the adapter and its error-reporting strategy; this concerns the following methods: `basic_cancel`, `confirm_delivery`, `exchange_bind`, `exchange_declare`, `exchange_delete`, `exchange_unbind`, `queue_bind`, `queue_declare`, `queue_delete`, and `queue_purge`. - `BlockingChannel.basic_cancel`: returns a sequence instead of None; for a `no_ack=True` consumer, `basic_cancel` returns a sequence of pending messages that arrived before broker confirmed the cancellation. - `BlockingChannel.consume`: added new optional kwargs `arguments` and `inactivity_timeout`. Also, raises ValueError if the consumer creation parameters don't match those used to create the existing queue consumer generator, if any; this happens when you break out of the consume loop, then call `BlockingChannel.consume` again with different consumer-creation args without first cancelling the previous queue consumer generator via `BlockingChannel.cancel`. The legacy implementation would silently resume consuming from the existing queue consumer generator even if the subsequent `BlockingChannel.consume` was invoked with a different queue name, etc. - `BlockingChannel.cancel`: returns 0; the legacy implementation tried to return the number of requeued messages, but this number was not accurate as it didn't include the messages returned by the Channel class; this count is not generally useful, so returning 0 is a reasonable replacement. - `BlockingChannel.open`: removed in favor of having a single mechanism for creating a channel (`BlockingConnection.channel`); this reduces maintenance burden, while improving reliability of the adapter. - `BlockingChannel.confirm_delivery`: raises UnroutableError when unroutable messages that were sent prior to this call are returned before we receive Confirm.Select-ok. 
- `BlockingChannel.basic_publish`: always returns True when delivery confirmation is not enabled (publisher-acks = off); the legacy implementation returned a bool in this case if `mandatory=True` to indicate whether the message was delivered; however, this was non-deterministic, because Basic.Return is asynchronous and there is no way to know how long to wait for it or its absence. The legacy implementation returned None when publishing with publisher-acks = off and `mandatory=False`. The new implementation always returns True when publishing while publisher-acks = off. - `BlockingChannel.publish`: a new alternate method (vs. `basic_publish`) for publishing a message with more detailed error reporting via UnroutableError and NackError exceptions. - `BlockingChannel.start_consuming`: raises pika.exceptions.RecursionError if called from the scope of a `BlockingConnection` or `BlockingChannel` callback. - `BlockingChannel.get_waiting_message_count`: new method; returns the number of messages that may be retrieved from the current queue consumer generator via `BlockingChannel.consume` without blocking. **Commits** - 5aaa753 - Fixed SSL import and removed no_ack=True in favor of explicit AMQP message handling based on deferreds (skftn) - 7f222c2 - Add checkignore for codeclimate (Gavin M. Roy) - 4dec370 - Implemented BlockingChannel.flow; Implemented BlockingConnection.add_on_connection_blocked_callback; Implemented BlockingConnection.add_on_connection_unblocked_callback. (Vitaly Kruglikov) - 4804200 - Implemented blocking adapter acceptance test for exchange-to-exchange binding. Added rudimentary validation of BasicProperties passthru in blocking adapter publish tests. Updated CHANGELOG. (Vitaly Kruglikov) - 4ec07fd - Fixed sending of data in TwistedProtocolConnection (Vitaly Kruglikov) - a747fb3 - Remove my copyright from forward_server.py test utility. (Vitaly Kruglikov) - 94246d2 - Return True from basic_publish when pubacks is off.
Implemented more blocking adapter accceptance tests. (Vitaly Kruglikov) - 3ce013d - PIKA-609 Wait for broker to dispatch all messages to client before cancelling consumer in TestBasicCancelWithNonAckableConsumer and TestBasicCancelWithAckableConsumer (Vitaly Kruglikov) - 293f778 - Created CHANGELOG entry for release 0.10.0. Fixed up callback documentation for basic_get, basic_consume, and add_on_return_callback. (Vitaly Kruglikov) - 16d360a - Removed the legacy AsyncoreConnection adapter in favor of the recommended SelectConnection adapter. (Vitaly Kruglikov) - 240a82c - Defer creation of poller's event loop interrupt socket pair until start is called, because some SelectConnection users (e.g., BlockingConnection adapter) don't use the event loop, and these sockets would just get reported as resource leaks. (Vitaly Kruglikov) - aed5cae - Added EINTR loops in select_connection pollers. Addressed some pylint findings, including an error or two. Wrap socket.send and socket.recv calls in EINTR loops Use the correct exception for socket.error and select.error and get errno depending on python version. (Vitaly Kruglikov) - 498f1be - Allow passing exchange, queue and routing_key as text, handle short strings as text in python3 (saarni) - 9f7f243 - Restored basic_consume, basic_cancel, and add_on_cancel_callback (Vitaly Kruglikov) - 18c9909 - Reintroduced BlockingConnection.process_data_events. (Vitaly Kruglikov) - 4b25cb6 - Fixed BlockingConnection/BlockingChannel acceptance and unit tests (Vitaly Kruglikov) - bfa932f - Facilitate proper connection state after BasicConnection._adapter_disconnect (Vitaly Kruglikov) - 9a09268 - Fixed BlockingConnection test that was failing with ConnectionClosed error. 
(Vitaly Kruglikov) - 5a36934 - Copied synchronous_connection.py from pika-synchronous branch Fixed pylint findings Integrated SynchronousConnection with the new ioloop in SelectConnection Defined dedicated message classes PolledMessage and ConsumerMessage and moved from BlockingChannel to module-global scope. Got rid of nowait args from BlockingChannel public API methods Signal unroutable messages via UnroutableError exception. Signal Nack'ed messages via NackError exception. These expose more information about the failure than legacy basic_publich API. Removed set_timeout and backpressure callback methods Restored legacy `is_open`, etc. property names (Vitaly Kruglikov) - 6226dc0 - Remove deprecated --use-mirrors (Gavin M. Roy) - 1a7112f - Raise ConnectionClosed when sending a frame with no connection (#439) (Gavin M. Roy) - 9040a14 - Make delivery_tag non-optional (#498) (Gavin M. Roy) - 86aabc2 - Bump version (Gavin M. Roy) - 562075a - Update a few testing things (Gavin M. Roy) - 4954d38 - use unicode_type in blocking_connection.py (Antti Haapala) - 133d6bc - Let Travis install ordereddict for Python 2.6, and ttest 3.3, 3.4 too. (Antti Haapala) - 0d2287d - Pika Python 3 support (Antti Haapala) - 3125c79 - SSLWantRead is not supported before python 2.7.9 and 3.3 (Will) - 9a9c46c - Fixed TestDisconnectDuringConnectionStart: it turns out that depending on callback order, it might get either ProbableAuthenticationError or ProbableAccessDeniedError. (Vitaly Kruglikov) - cd8c9b0 - A fix the write starvation problem that we see with tornado and pika (Will) - 8654fbc - SelectConnection - make interrupt socketpair non-blocking (Will) - 4f3666d - Added copyright in forward_server.py and fixed NameError bug (Vitaly Kruglikov) - f8ebbbc - ignore docs (Gavin M. Roy) - a344f78 - Updated codeclimate config (Gavin M. Roy) - 373c970 - Try and fix pathing issues in codeclimate (Gavin M. Roy) - 228340d - Ignore codegen (Gavin M. Roy) - 4db0740 - Add a codeclimate config (Gavin M. 
Roy) - 7e989f9 - Slight code re-org, usage comment and better naming of test file. (Will) - 287be36 - Set up _kqueue member of KQueuePoller before calling super constructor to avoid exception due to missing _kqueue member. Call `self._map_event(event)` instead of `self._map_event(event.filter)`, because `KQueuePoller._map_event()` assumes it's getting an event, not an event filter. (Vitaly Kruglikov) - 62810fb - Fix issue #412: reset BlockingConnection._read_poller in BlockingConnection._adapter_disconnect() to guard against accidental access to old file descriptor. (Vitaly Kruglikov) - 03400ce - Rationalise adapter acceptance tests (Will) - 9414153 - Fix bug selecting non epoll poller (Will) - 4f063df - Use user heartbeat setting if server proposes none (Pau Gargallo) - 9d04d6e - Deactivate heartbeats when heartbeat_interval is 0 (Pau Gargallo) - a52a608 - Bug fix and review comments. (Will) - e3ebb6f - Fix incorrect x-expires argument in acceptance tests (Will) - 294904e - Get BlockingConnection into consistent state upon loss of TCP/IP connection with broker and implement acceptance tests for those cases. (Vitaly Kruglikov) - 7f91a68 - Make SelectConnection behave like an ioloop (Will) - dc9db2b - Perhaps 5 seconds is too agressive for travis (Gavin M. Roy) - c23e532 - Lower the stuck test timeout (Gavin M. Roy) - 1053ebc - Late night bug (Gavin M. Roy) - cd6c1bf - More BaseConnection._handle_error cleanup (Gavin M. Roy) - a0ff21c - Fix the test to work with Python 2.6 (Gavin M. Roy) - 748e8aa - Remove pypy for now (Gavin M. Roy) - 1c921c1 - Socket close/shutdown cleanup (Gavin M. Roy) - 5289125 - Formatting update from PR (Gavin M. Roy) - d235989 - Be more specific when calling getaddrinfo (Gavin M. Roy) - b5d1b31 - Reflect the method name change in pika.callback (Gavin M. Roy) - df7d3b7 - Cleanup BlockingConnection in a few places (Gavin M. Roy) - cd99e1c - Rename method due to use in BlockingConnection (Gavin M. 
Roy) - 7e0d1b3 - Use google style with yapf instead of pep8 (Gavin M. Roy) - 7dc9bab - Refactor socket writing to not use sendall #481 (Gavin M. Roy) - 4838789 - Dont log the fd #521 (Gavin M. Roy) - 765107d - Add Connection.Blocked callback registration methods #476 (Gavin M. Roy) - c15b5c1 - Fix _blocking typo pointed out in #513 (Gavin M. Roy) - 759ac2c - yapf of codegen (Gavin M. Roy) - 9dadd77 - yapf cleanup of codegen and spec (Gavin M. Roy) - ddba7ce - Do not reject consumers with no_ack=True #486 #530 (Gavin M. Roy) - 4528a1a - yapf reformatting of tests (Gavin M. Roy) - e7b6d73 - Remove catching AttributError (#531) (Gavin M. Roy) - 41ea5ea - Update README badges [skip ci] (Gavin M. Roy) - 6af987b - Add note on contributing (Gavin M. Roy) - 161fc0d - yapf formatting cleanup (Gavin M. Roy) - edcb619 - Add PYPY to travis testing (Gavin M. Roy) - 2225771 - Change the coverage badge (Gavin M. Roy) - 8f7d451 - Move to codecov from coveralls (Gavin M. Roy) - b80407e - Add confirm_delivery to example (Andrew Smith) - 6637212 - Update base_connection.py (bstemshorn) - 1583537 - #544 get_waiting_message_count() (markcf) - 0c9be99 - Fix #535: pass expected reply_code and reply_text from method frame to Connection._on_disconnect from Connection._on_connection_closed (Vitaly Kruglikov) - d11e73f - Propagate ConnectionClosed exception out of BlockingChannel._send_method() and log ConnectionClosed in BlockingConnection._on_connection_closed() (Vitaly Kruglikov) - 63d2951 - Fix #541 - make sure connection state is properly reset when BlockingConnection._check_state_on_disconnect raises ConnectionClosed. This supplements the previously-merged PR #450 by getting the connection into consistent state. 
(Vitaly Kruglikov)
- 71bc0eb - Remove unused self.fd attribute from BaseConnection (Vitaly Kruglikov)
- 8c08f93 - PIKA-532 Removed unnecessary params (Vitaly Kruglikov)
- 6052ecf - PIKA-532 Fix bug in BlockingConnection._handle_timeout that was preventing _on_connection_closed from being called when not closing. (Vitaly Kruglikov)
- 562aa15 - pika: callback: Display exception message when callback fails. (Stuart Longland)
- 452995c - Typo fix in connection.py (Andrew)
- 361c0ad - Added some missing yields (Robert Weidlich)
- 0ab5a60 - Added complete example for python twisted service (Robert Weidlich)
- 4429110 - Add deployment and webhooks (Gavin M. Roy)
- 7e50302 - Fix has_content style in codegen (Andrew Grigorev)
- 28c2214 - Fix the trove categorization (Gavin M. Roy)
- de8b545 - Ensure frames can not be interspersed on send (Gavin M. Roy)
- 8fe6bdd - Fix heartbeat behaviour after connection failure. (Kyösti Herrala)
- c123472 - Updating BlockingChannel.basic_get doc (it does not receive a callback like the rest of the adapters) (Roberto Decurnex)
- b5f52fb - Fix number of arguments passed to _on_return callback (Axel Eirola)
- 765139e - Lower default TIMEOUT to 0.01 (bra-fsn)
- 6cc22a5 - Fix confirmation on reconnects (bra-fsn)
- f4faf0a - asynchronous publisher and subscriber examples refactored to follow the StepDown rule (Riccardo Cirimelli)

0.9.14 - 2014-07-11
-------------------

`0.9.14 `_

- 57fe43e - fix test to generate a correct range of random ints (ml)
- 0d68dee - fix async watcher for libev_connection (ml)
- 01710ad - Use default username and password if not specified in URLParameters (Sean Dwyer)
- fae328e - documentation typo (Jeff Fein-Worton)
- afbc9e0 - libev_connection: reset_io_watcher (ml)
- 24332a2 - Fix the manifest (Gavin M. Roy)
- acdfdef - Remove useless test (Gavin M. Roy)
- 7918e1a - Skip libev tests if pyev is not installed or if they are being run in pypy (Gavin M. Roy)
- bb583bf - Remove the deprecated test (Gavin M. Roy)
- aecf3f2 - Don't reject a message if the channel is not open (Gavin M. Roy)
- e37f336 - Remove UTF-8 decoding in spec (Gavin M. Roy)
- ddc35a9 - Update the unittest to reflect removal of force binary (Gavin M. Roy)
- fea2476 - PEP8 cleanup (Gavin M. Roy)
- 9b97956 - Remove force_binary (Gavin M. Roy)
- a42dd90 - Whitespace required (Gavin M. Roy)
- 85867ea - Update the content_frame_dispatcher tests to reflect removal of auto-cast utf-8 (Gavin M. Roy)
- 5a4bd5d - Remove unicode casting (Gavin M. Roy)
- efea53d - Remove force binary and unicode casting (Gavin M. Roy)
- e918d15 - Add methods to remove deprecation warnings from asyncore (Gavin M. Roy)
- 117f62d - Add a coveragerc to ignore the auto generated pika.spec (Gavin M. Roy)
- 52f4485 - Remove pypy tests from travis for now (Gavin M. Roy)
- c3aa958 - Update README.rst (Gavin M. Roy)
- 3e2319f - Delete README.md (Gavin M. Roy)
- c12b0f1 - Move to RST (Gavin M. Roy)
- 704f5be - Badging updates (Gavin M. Roy)
- 7ae33ca - Update for coverage info (Gavin M. Roy)
- ae7ca86 - add libev_adapter_tests.py; modify .travis.yml to install libev and pyev (ml)
- f86aba5 - libev_connection: add **kwargs to _handle_event; suppress default_ioloop reuse warning (ml)
- 603f1cf - async_test_base: add necessary args to _on_cconn_closed (ml)
- 3422007 - add libev_adapter_tests.py (ml)
- 6cbab0c - removed relative imports and importing urlparse from urllib.parse for py3+ (a-tal)
- f808464 - libev_connection: add async watcher; add optional parameters to add_timeout (ml)
- c041c80 - Remove ev all together for now (Gavin M. Roy)
- 9408388 - Update the test descriptions and timeout (Gavin M. Roy)
- 1b552e0 - Increase timeout (Gavin M. Roy)
- 69a1f46 - Remove the pyev requirement for 2.6 testing (Gavin M. Roy)
- fe062d2 - Update package name (Gavin M. Roy)
- 611ad0e - Distribute the LICENSE and README.md (#350) (Gavin M. Roy)
- df5e1d8 - Ensure that the entire frame is written using socket.sendall (#349) (Gavin M. Roy)
- 69ec8cf - Move the libev install to before_install (Gavin M. Roy)
- a75f693 - Update test structure (Gavin M. Roy)
- 636b424 - Update things to ignore (Gavin M. Roy)
- b538c68 - Add tox, nose.cfg, update testing config (Gavin M. Roy)
- a0e7063 - add some tests to increase coverage of pika.connection (Charles Law)
- c76d9eb - Address issue #459 (Gavin M. Roy)
- 86ad2db - Raise exception if positional arg for parameters isn't an instance of Parameters (Gavin M. Roy)
- 14d08e1 - Fix for python 2.6 (Gavin M. Roy)
- bd388a3 - Use the first unused channel number addressing #404, #460 (Gavin M. Roy)
- e7676e6 - removing a debug that was left in last commit (James Mutton)
- 6c93b38 - Fixing connection-closed behavior to detect on attempt to publish (James Mutton)
- c3f0356 - Initialize bytes_written in _handle_write() (Jonathan Kirsch)
- 4510e95 - Fix _handle_write() may not send full frame (Jonathan Kirsch)
- 12b793f - fixed Tornado Consumer example to successfully reconnect (Yang Yang)
- f074444 - remove forgotten import of ordereddict (Pedro Abranches)
- 1ba0aea - fix last merge (Pedro Abranches)
- 10490a6 - change timeouts structure to list to maintain scheduling order (Pedro Abranches)
- 7958394 - save timeouts in ordered dict instead of dict (Pedro Abranches)
- d2746bf - URLParameters and ConnectionParameters accept unicode strings (Allard Hoeve)
- 596d145 - previous fix for AttributeError made parent and child class methods identical, remove duplication (James Mutton)
- 42940dd - UrlParameters Docs: fixed amqps scheme examples (Riccardo Cirimelli)
- 43904ff - Dont test this in PyPy due to sort order issue (Gavin M. Roy)
- d7d293e - Don't leave __repr__ sorting up to chance (Gavin M. Roy)
- 848c594 - Add integration test to travis and fix invocation (Gavin M. Roy)
- 2678275 - Add pypy to travis tests (Gavin M. Roy)
- 1877f3d - Also addresses issue #419 (Gavin M. Roy)
- 470c245 - Address issue #419 (Gavin M. Roy)
- ca3cb59 - Address issue #432 (Gavin M. Roy)
- a3ff6f2 - Default frame max should be AMQP FRAME_MAX (Gavin M. Roy)
- ff3d5cb - Remove max consumer tag test due to change in code. (Gavin M. Roy)
- 6045dda - Catch KeyError (#437) to ensure that an exception is not raised in a race condition (Gavin M. Roy)
- 0b4d53a - Address issue #441 (Gavin M. Roy)
- 180e7c4 - Update license and related files (Gavin M. Roy)
- 256ed3d - Added Jython support. (Erik Olof Gunnar Andersson)
- f73c141 - experimental work around for recursion issue. (Erik Olof Gunnar Andersson)
- a623f69 - Prevent #436 by iterating the keys and not the dict (Gavin M. Roy)
- 755fcae - Add support for authentication_failure_close, connection.blocked (Gavin M. Roy)
- c121243 - merge upstream master (Michael Laing)
- a08dc0d - add arg to channel.basic_consume (Pedro Abranches)
- 10b136d - Documentation fix (Anton Ryzhov)
- 9313307 - Fixed minor markup errors. (Jorge Puente Sarrín)
- fb3e3cf - Fix the spelling of UnsupportedAMQPFieldException (Garrett Cooper)
- 03d5da3 - connection.py: Propagate the force_channel keyword parameter to methods involved in channel creation (Michael Laing)
- 7bbcff5 - Documentation fix for basic_publish (JuhaS)
- 01dcea7 - Expose no_ack and exclusive to BlockingChannel.consume (Jeff Tang)
- d39b6aa - Fix BlockingChannel.basic_consume does not block on non-empty queues (Juhyeong Park)
- 6e1d295 - fix for issue 391 and issue 307 (Qi Fan)
- d9ffce9 - Update parameters.rst (cacovsky)
- 6afa41e - Add additional badges (Gavin M. Roy)
- a255925 - Fix return value on dns resolution issue (Laurent Eschenauer)
- 3f7466c - libev_connection: tweak docs (Michael Laing)
- 0aaed93 - libev_connection: Fix varable naming (Michael Laing)
- 0562d08 - libev_connection: Fix globals warning (Michael Laing)
- 22ada59 - libev_connection: use globals to track sigint and sigterm watchers as they are created globally within libev (Michael Laing)
- 2649b31 - Move badge [skip ci] (Gavin M. Roy)
- f70eea1 - Remove pypy and installation attempt of pyev (Gavin M. Roy)
- f32e522 - Conditionally skip external connection adapters if lib is not installed (Gavin M. Roy)
- cce97c5 - Only install pyev on python 2.7 (Gavin M. Roy)
- ff84462 - Add travis ci support (Gavin M. Roy)
- cf971da - lib_evconnection: improve signal handling; add callback (Michael Laing)
- 9adb269 - bugfix in returning a list in Py3k (Alex Chandel)
- c41d5b9 - update exception syntax for Py3k (Alex Chandel)
- c8506f1 - fix _adapter_connect (Michael Laing)
- 67cb660 - Add LibevConnection to README (Michael Laing)
- 1f9e72b - Propagate low-level connection errors to the AMQPConnectionError. (Bjorn Sandberg)
- e1da447 - Avoid race condition in _on_getok on successive basic_get() when clearing out callbacks (Jeff)
- 7a09979 - Add support for upcoming Connection.Blocked/Unblocked (Gavin M. Roy)
- 53cce88 - TwistedChannel correctly handles multi-argument deferreds. (eivanov)
- 66f8ace - Use uuid when creating unique consumer tag (Perttu Ranta-aho)
- 4ee2738 - Limit the growth of Channel._cancelled, use deque instead of list. (Perttu Ranta-aho)
- 0369aed - fix adapter references and tweak docs (Michael Laing)
- 1738c23 - retry select.select() on EINTR (Cenk Alti)
- 1e55357 - libev_connection: reset internal state on reconnect (Michael Laing)
- 708559e - libev adapter (Michael Laing)
- a6b7c8b - Prioritize EPollPoller and KQueuePoller over PollPoller and SelectPoller (Anton Ryzhov)
- 53400d3 - Handle socket errors in PollPoller and EPollPoller Correctly check 'select.poll' availability (Anton Ryzhov)
- a6dc969 - Use dict.keys & items instead of iterkeys & iteritems (Alex Chandel)
- 5c1b0d0 - Use print function syntax, in examples (Alex Chandel)
- ac9f87a - Fixed a typo in the name of the Asyncore Connection adapter (Guruprasad)
- dfbba50 - Fixed bug mentioned in Issue #357 (Erik Andersson)
- c906a2d - Drop additional flags when getting info for the hostnames, log errors (#352) (Gavin M. Roy)
- baf23dd - retry poll() on EINTR (Cenk Alti)
- 7cd8762 - Address ticket #352 catching an error when socket.getprotobyname fails (Gavin M. Roy)
- 6c3ec75 - Prep for 0.9.14 (Gavin M. Roy)
- dae7a99 - Bump to 0.9.14p0 (Gavin M. Roy)
- 620edc7 - Use default port and virtual host if omitted in URLParameters (Issue #342) (Gavin M. Roy)
- 42a8787 - Move the exception handling inside the while loop (Gavin M. Roy)
- 10e0264 - Fix connection back pressure detection issue #347 (Gavin M. Roy)
- 0bfd670 - Fixed mistake in commit 3a19d65. (Erik Andersson)
- da04bc0 - Fixed Unknown state on disconnect error message generated when closing connections. (Erik Andersson)
- 3a19d65 - Alternative solution to fix #345. (Erik Andersson)
- abf9fa8 - switch to sendall to send entire frame (Dustin Koupal)
- 9ce8ce4 - Fixed the async publisher example to work with reconnections (Raphaël De Giusti)
- 511028a - Fix typo in TwistedChannel docstring (cacovsky)
- 8b69e5a - calls self._adapter_disconnect() instead of self.disconnect() which doesn't actually exist #294 (Mark Unsworth)
- 06a5cf8 - add NullHandler to prevent logging warnings (Cenk Alti)
- f404a9a - Fix #337 cannot start ioloop after stop (Ralf Nyren)

0.9.13 - 2013-05-15
-------------------

`0.9.13 `_

**Major Changes**

- IPv6 Support with thanks to Alessandro Tagliapietra for initial prototype
- Officially remove support for <= Python 2.5 even though it was broken already
- Drop pika.simplebuffer.SimpleBuffer in favor of the Python stdlib collections.deque object
- New default object for receiving content is a "bytes" object which is a str wrapper in Python 2, but paves way for Python 3 support
- New "Raw" mode for frame decoding content frames (#334) addresses issues #331, #229 added by Garth Williamson
- Connection and Disconnection logic refactored, allowing for cleaner separation of protocol logic and socket handling logic as well as connection state management
- New "on_open_error_callback" argument in creating connection objects and new Connection.add_on_open_error_callback method
- New Connection.connect method to cleanly allow for reconnection code
- Support for all AMQP field types, using protocol specified signed/unsigned unpacking

**Backwards Incompatible Changes**

- Method signature for creating connection objects has new argument "on_open_error_callback" which is positionally before "on_close_callback"
- Internal callback variable names in connection.Connection have been renamed and constants used. If you relied on any of these callbacks outside of their internal use, make sure to check out the new constants.
- Connection._connect method, which was an internal only method is now deprecated and will raise a DeprecationWarning. If you relied on this method, your code needs to change.
- pika.simplebuffer has been removed

**Bugfixes**

- BlockingConnection consumer generator does not free buffer when exited (#328)
- Unicode body payloads in the blocking adapter raises exception (#333)
- Support "b" short-short-int AMQP data type (#318)
- Docstring type fix in adapters/select_connection (#316) fix by Rikard Hultén
- IPv6 not supported (#309)
- Stop the HeartbeatChecker when connection is closed (#307)
- Unittest fix for SelectConnection (#336) fix by Erik Andersson
- Handle condition where no connection or socket exists but SelectConnection needs a timeout for retrying a connection (#322)
- TwistedAdapter lagging behind BaseConnection changes (#321) fix by Jan Urbański

**Other**

- Refactored documentation
- Added Twisted Adapter example (#314) by nolinksoft

0.9.12 - 2013-03-18
-------------------

`0.9.12 `_

**Bugfixes**

- New timeout id hashing was not unique

0.9.11 - 2013-03-17
-------------------

`0.9.11 `_

**Bugfixes**

- Address inconsistent channel close callback documentation and add the signature change to the TwistedChannel class (#305)
- Address a missed timeout related internal data structure name change introduced in the SelectConnection 0.9.10 release.
  Update all connection adapters to use same signature and docstring (#306).

0.9.10 - 2013-03-16
-------------------

`0.9.10 `_

**Bugfixes**

- Fix timeout in twisted adapter (Submitted by cellscape)
- Fix blocking_connection poll timer resolution to milliseconds (Submitted by cellscape)
- Fix channel._on_close() without a method frame (Submitted by Richard Boulton)
- Addressed exception on close (Issue #279 - fix by patcpsc)
- 'messages' not initialized in BlockingConnection.cancel() (Issue #289 - fix by Mik Kocikowski)
- Make queue_unbind behave like queue_bind (Issue #277)
- Address closing behavioral issues for connections and channels (Issue #275)
- Pass a Method frame to Channel._on_close in Connection._on_disconnect (Submitted by Jan Urbański)
- Fix channel closed callback signature in the Twisted adapter (Submitted by Jan Urbański)
- Don't stop the IOLoop on connection close for in the Twisted adapter (Submitted by Jan Urbański)
- Update the asynchronous examples to fix reconnecting and have it work
- Warn if the socket was closed such as if RabbitMQ dies without a Close frame
- Fix URLParameters ssl_options (Issue #296)
- Add state to BlockingConnection addressing (Issue #301)
- Encode unicode body content prior to publishing (Issue #282)
- Fix an issue with unicode keys in BasicProperties headers key (Issue #280)
- Change how timeout ids are generated (Issue #254)
- Address post close state issues in Channel (Issue #302)

**Behavior changes**

- Change core connection communication behavior to prefer outbound writes over reads, addressing a recursion issue
- Update connection on close callbacks, changing callback method signature
- Update channel on close callbacks, changing callback method signature
- Give more info in the ChannelClosed exception
- Change the constructor signature for BlockingConnection, block open/close callbacks
- Disable the use of add_on_open_callback/add_on_close_callback methods in BlockingConnection

0.9.9 - 2013-01-29
------------------

`0.9.9 `_

**Bugfixes**

- Only remove the tornado_connection.TornadoConnection file descriptor from the IOLoop if it's still open (Issue #221)
- Allow messages with no body (Issue #227)
- Allow for empty routing keys (Issue #224)
- Don't raise an exception when trying to send a frame to a closed connection (Issue #229)
- Only send a Connection.CloseOk if the connection is still open. (Issue #236 - Fix by noleaf)
- Fix timeout threshold in blocking connection - (Issue #232 - Fix by Adam Flynn)
- Fix closing connection while a channel is still open (Issue #230 - Fix by Adam Flynn)
- Fixed misleading warning and exception messages in BaseConnection (Issue #237 - Fix by Tristan Penman)
- Pluralised and altered the wording of the AMQPConnectionError exception (Issue #237 - Fix by Tristan Penman)
- Fixed _adapter_disconnect in TornadoConnection class (Issue #237 - Fix by Tristan Penman)
- Fixing hang when closing connection without any channel in BlockingConnection (Issue #244 - Fix by Ales Teska)
- Remove the process_timeouts() call in SelectConnection (Issue #239)
- Change the string validation to basestring for host connection parameters (Issue #231)
- Add a poller to the BlockingConnection to address latency issues introduced in Pika 0.9.8 (Issue #242)
- reply_code and reply_text is not set in ChannelException (Issue #250)
- Add the missing constraint parameter for Channel._on_return callback processing (Issue #257 - Fix by patcpsc)
- Channel callbacks not being removed from callback manager when channel is closed or deleted (Issue #261)

0.9.8 - 2012-11-18
------------------

`0.9.8 `_

**Bugfixes**

- Channel.queue_declare/BlockingChannel.queue_declare not setting up callbacks property for empty queue name (Issue #218)
- Channel.queue_bind/BlockingChannel.queue_bind not allowing empty routing key
- Connection._on_connection_closed calling wrong method in Channel (Issue #219)
- Fix tx_commit and tx_rollback bugs in BlockingChannel (Issue #217)

0.9.7 - 2012-11-11
------------------

`0.9.7 `_

**New features**

- generator based consumer in BlockingChannel (See :doc:`examples/blocking_consumer_generator` for example)

**Changes**

- BlockingChannel._send_method will only wait if explicitly told to

**Bugfixes**

- Added the exchange "type" parameter back but issue a DeprecationWarning
- Dont require a queue name in Channel.queue_declare() (Issue # 215 - Fix by Raphael De Giusti)
- Fixed KeyError when processing timeouts (Issue # 215 - Fix by Raphael De Giusti)
- Don't try and close channels when the connection is closed (Issue #216 - Fix by Charles Law)
- Dont raise UnexpectedFrame exceptions, log them instead
- Handle multiple synchronous RPC calls made without waiting for the call result (Issues #192, #204, #211)
- Typo in docs (Issue #207 Fix by Luca Wehrstedt)
- Only sleep on connection failure when retry attempts are > 0 (Issue #200)
- Bypass _rpc method and just send frames for Basic.Ack, Basic.Nack, Basic.Reject (Issue #205)

0.9.6 - 2012-10-29
------------------

`0.9.6 `_

**New features**

- URLParameters
- BlockingChannel.start_consuming() and BlockingChannel.stop_consuming()
- Delivery Confirmations
- Improved unittests

**Major bugfix areas**

- Connection handling
- Blocking functionality in the BlockingConnection
- SSL
- UTF-8 Handling

**Removals**

- pika.reconnection_strategies
- pika.channel.ChannelTransport
- pika.log
- pika.template
- examples directory

0.9.5 - 2011-03-29
------------------

`0.9.5 `_

**Changelog**

- Scope changes with adapter IOLoops and CallbackManager allowing for cleaner, multi-threaded operation
- Add support for Confirm.Select with channel.Channel.confirm_delivery()
- Add examples of delivery confirmation to examples (demo_send_confirmed.py)
- Update uses of log.warn with warning.warn for TCP Back-pressure alerting
- License boilerplate updated to simplify license text in source files
- Increment the timeout in select_connection.SelectPoller reducing CPU utilization
- Bug fix in Heartbeat frame delivery addressing issue #35
- Remove abuse of pika.log.method_call through a majority of the code
- Rename of key modules: table to data, frames to frame
- Cleanup of frame module and related classes
- Restructure of tests and test runner
- Update functional tests to respect RABBITMQ_HOST, RABBITMQ_PORT environment variables
- Bug fixes to reconnection_strategies module
- Fix the scale of timeout for PollPoller to be specified in milliseconds
- Remove mutable default arguments in RPC calls
- Add data type validation to RPC calls
- Move optional credentials erasing out of connection.Connection into credentials module
- Add support to allow for additional external credential types
- Add a NullHandler to prevent the 'No handlers could be found for logger "pika"' error message when not using pika.log in a client app at all.
- Clean up all examples to make them easier to read and use
- Move documentation into its own repository https://github.com/pika/documentation

- channel.py

  - Move channel.MAX_CHANNELS constant from connection.CHANNEL_MAX
  - Add default value of None to ChannelTransport.rpc
  - Validate callback and acceptable replies parameters in ChannelTransport.RPC
  - Remove unused connection attribute from Channel

- connection.py

  - Remove unused import of struct
  - Remove direct import of pika.credentials.PlainCredentials
  - Change to import pika.credentials
  - Move CHANNEL_MAX to channel.MAX_CHANNELS
  - Change ConnectionParameters initialization parameter heartbeat to boolean
  - Validate all inbound parameter types in ConnectionParameters
  - Remove the Connection._erase_credentials stub method in favor of letting the Credentials object deal with that itself.
  - Warn if the credentials object intends on erasing the credentials and a reconnection strategy other than NullReconnectionStrategy is specified.
  - Change the default types for callback and acceptable_replies in Connection._rpc
  - Validate the callback and acceptable_replies data types in Connection._rpc

- adapters.blocking_connection.BlockingConnection

  - Addition of _adapter_disconnect to blocking_connection.BlockingConnection
  - Add timeout methods to BlockingConnection addressing issue #41
  - BlockingConnection didn't allow you register more than one consumer callback because basic_consume was overridden to block immediately. New behavior allows you to do so.
  - Removed overriding of base basic_consume and basic_cancel methods. Now uses underlying Channel versions of those methods.
  - Added start_consuming() method to BlockingChannel to start the consumption loop.
  - Updated stop_consuming() to iterate through all the registered consumers in self._consumers and issue a basic_cancel.

pika-1.2.0/examples/asynchronous_consumer_example.py:

# -*- coding: utf-8 -*-
# pylint: disable=C0111,C0103,R0205
import functools
import logging
import time

import pika
from pika.exchange_type import ExchangeType

LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) '
              '-35s %(lineno) -5d: %(message)s')
LOGGER = logging.getLogger(__name__)


class ExampleConsumer(object):
    """This is an example consumer that will handle unexpected interactions
    with RabbitMQ such as channel and connection closures.

    If RabbitMQ closes the connection, this class will stop and indicate
    that reconnection is necessary. You should look at the output, as
    there are limited reasons why the connection may be closed, which
    usually are tied to permission related issues or socket timeouts.

    If the channel is closed, it will indicate a problem with one of the
    commands that were issued and that should surface in the output as well.
""" EXCHANGE = 'message' EXCHANGE_TYPE = ExchangeType.topic QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. :param str amqp_url: The AMQP url to connect with """ self.should_reconnect = False self.was_consuming = False self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url self._consuming = False # In production, experiment with higher prefetch values # for higher consumer throughput self._prefetch_count = 1 def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection( parameters=pika.URLParameters(self._url), on_open_callback=self.on_connection_open, on_open_error_callback=self.on_connection_open_error, on_close_callback=self.on_connection_closed) def close_connection(self): self._consuming = False if self._connection.is_closing or self._connection.is_closed: LOGGER.info('Connection is closing or already closed') else: LOGGER.info('Closing connection') self._connection.close() def on_connection_open(self, _unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :param pika.SelectConnection _unused_connection: The connection """ LOGGER.info('Connection opened') self.open_channel() def on_connection_open_error(self, _unused_connection, err): """This method is called by pika if the connection to RabbitMQ can't be established. 
        :param pika.SelectConnection _unused_connection: The connection
        :param Exception err: The error

        """
        LOGGER.error('Connection open failed: %s', err)
        self.reconnect()

    def on_connection_closed(self, _unused_connection, reason):
        """This method is invoked by pika when the connection to RabbitMQ is
        closed unexpectedly. Since it is unexpected, we will reconnect to
        RabbitMQ if it disconnects.

        :param pika.connection.Connection connection: The closed connection obj
        :param Exception reason: exception representing reason for loss of
            connection.

        """
        self._channel = None
        if self._closing:
            self._connection.ioloop.stop()
        else:
            LOGGER.warning('Connection closed, reconnect necessary: %s', reason)
            self.reconnect()

    def reconnect(self):
        """Will be invoked if the connection can't be opened or is closed.
        Indicates that a reconnect is necessary then stops the ioloop.

        """
        self.should_reconnect = True
        self.stop()

    def open_channel(self):
        """Open a new channel with RabbitMQ by issuing the Channel.Open RPC
        command. When RabbitMQ responds that the channel is open, the
        on_channel_open callback will be invoked by pika.

        """
        LOGGER.info('Creating a new channel')
        self._connection.channel(on_open_callback=self.on_channel_open)

    def on_channel_open(self, channel):
        """This method is invoked by pika when the channel has been opened.
        The channel object is passed in so we can make use of it.

        Since the channel is now open, we'll declare the exchange to use.

        :param pika.channel.Channel channel: The channel object

        """
        LOGGER.info('Channel opened')
        self._channel = channel
        self.add_on_channel_close_callback()
        self.setup_exchange(self.EXCHANGE)

    def add_on_channel_close_callback(self):
        """This method tells pika to call the on_channel_closed method if
        RabbitMQ unexpectedly closes the channel.

        """
        LOGGER.info('Adding channel close callback')
        self._channel.add_on_close_callback(self.on_channel_closed)

    def on_channel_closed(self, channel, reason):
        """Invoked by pika when RabbitMQ unexpectedly closes the channel.
        Channels are usually closed if you attempt to do something that
        violates the protocol, such as re-declare an exchange or queue with
        different parameters. In this case, we'll close the connection
        to shutdown the object.

        :param pika.channel.Channel: The closed channel
        :param Exception reason: why the channel was closed

        """
        LOGGER.warning('Channel %i was closed: %s', channel, reason)
        self.close_connection()

    def setup_exchange(self, exchange_name):
        """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC
        command. When it is complete, the on_exchange_declareok method will
        be invoked by pika.

        :param str|unicode exchange_name: The name of the exchange to declare

        """
        LOGGER.info('Declaring exchange: %s', exchange_name)
        # Note: using functools.partial is not required, it is demonstrating
        # how arbitrary data can be passed to the callback when it is called
        cb = functools.partial(
            self.on_exchange_declareok, userdata=exchange_name)
        self._channel.exchange_declare(
            exchange=exchange_name,
            exchange_type=self.EXCHANGE_TYPE,
            callback=cb)

    def on_exchange_declareok(self, _unused_frame, userdata):
        """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC
        command.

        :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame
        :param str|unicode userdata: Extra user data (exchange name)

        """
        LOGGER.info('Exchange declared: %s', userdata)
        self.setup_queue(self.QUEUE)

    def setup_queue(self, queue_name):
        """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC
        command. When it is complete, the on_queue_declareok method will
        be invoked by pika.

        :param str|unicode queue_name: The name of the queue to declare.

        """
        LOGGER.info('Declaring queue %s', queue_name)
        cb = functools.partial(self.on_queue_declareok, userdata=queue_name)
        self._channel.queue_declare(queue=queue_name, callback=cb)

    def on_queue_declareok(self, _unused_frame, userdata):
        """Method invoked by pika when the Queue.Declare RPC call made in
        setup_queue has completed.
        In this method we will bind the queue and exchange together with the
        routing key by issuing the Queue.Bind RPC command. When this command
        is complete, the on_bindok method will be invoked by pika.

        :param pika.frame.Method _unused_frame: The Queue.DeclareOk frame
        :param str|unicode userdata: Extra user data (queue name)

        """
        queue_name = userdata
        LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, queue_name,
                    self.ROUTING_KEY)
        cb = functools.partial(self.on_bindok, userdata=queue_name)
        self._channel.queue_bind(
            queue_name,
            self.EXCHANGE,
            routing_key=self.ROUTING_KEY,
            callback=cb)

    def on_bindok(self, _unused_frame, userdata):
        """Invoked by pika when the Queue.Bind method has completed. At this
        point we will set the prefetch count for the channel.

        :param pika.frame.Method _unused_frame: The Queue.BindOk response frame
        :param str|unicode userdata: Extra user data (queue name)

        """
        LOGGER.info('Queue bound: %s', userdata)
        self.set_qos()

    def set_qos(self):
        """This method sets up the consumer prefetch to only be delivered
        one message at a time. The consumer must acknowledge this message
        before RabbitMQ will deliver another one. You should experiment
        with different prefetch values to achieve desired performance.

        """
        self._channel.basic_qos(
            prefetch_count=self._prefetch_count, callback=self.on_basic_qos_ok)

    def on_basic_qos_ok(self, _unused_frame):
        """Invoked by pika when the Basic.QoS method has completed. At this
        point we will start consuming messages by calling start_consuming
        which will invoke the needed RPC commands to start the process.

        :param pika.frame.Method _unused_frame: The Basic.QosOk response frame

        """
        LOGGER.info('QOS set to: %d', self._prefetch_count)
        self.start_consuming()

    def start_consuming(self):
        """This method sets up the consumer by first calling
        add_on_cancel_callback so that the object is notified if RabbitMQ
        cancels the consumer.
        It then issues the Basic.Consume RPC command which returns the
        consumer tag that is used to uniquely identify the consumer with
        RabbitMQ. We keep the value to use it when we want to cancel
        consuming. The on_message method is passed in as a callback pika
        will invoke when a message is fully received.

        """
        LOGGER.info('Issuing consumer related RPC commands')
        self.add_on_cancel_callback()
        self._consumer_tag = self._channel.basic_consume(
            self.QUEUE, self.on_message)
        self.was_consuming = True
        self._consuming = True

    def add_on_cancel_callback(self):
        """Add a callback that will be invoked if RabbitMQ cancels the
        consumer for some reason. If RabbitMQ does cancel the consumer,
        on_consumer_cancelled will be invoked by pika.

        """
        LOGGER.info('Adding consumer cancellation callback')
        self._channel.add_on_cancel_callback(self.on_consumer_cancelled)

    def on_consumer_cancelled(self, method_frame):
        """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer
        receiving messages.

        :param pika.frame.Method method_frame: The Basic.Cancel frame

        """
        LOGGER.info('Consumer was cancelled remotely, shutting down: %r',
                    method_frame)
        if self._channel:
            self._channel.close()

    def on_message(self, _unused_channel, basic_deliver, properties, body):
        """Invoked by pika when a message is delivered from RabbitMQ. The
        channel is passed for your convenience. The basic_deliver object that
        is passed in carries the exchange, routing key, delivery tag and
        a redelivered flag for the message. The properties passed in is an
        instance of BasicProperties with the message properties and the body
        is the message that was sent.
        :param pika.channel.Channel _unused_channel: The channel object
        :param pika.Spec.Basic.Deliver: basic_deliver method
        :param pika.Spec.BasicProperties: properties
        :param bytes body: The message body

        """
        LOGGER.info('Received message # %s from %s: %s',
                    basic_deliver.delivery_tag, properties.app_id, body)
        self.acknowledge_message(basic_deliver.delivery_tag)

    def acknowledge_message(self, delivery_tag):
        """Acknowledge the message delivery from RabbitMQ by sending a
        Basic.Ack RPC method for the delivery tag.

        :param int delivery_tag: The delivery tag from the Basic.Deliver frame

        """
        LOGGER.info('Acknowledging message %s', delivery_tag)
        self._channel.basic_ack(delivery_tag)

    def stop_consuming(self):
        """Tell RabbitMQ that you would like to stop consuming by sending the
        Basic.Cancel RPC command.

        """
        if self._channel:
            LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ')
            cb = functools.partial(
                self.on_cancelok, userdata=self._consumer_tag)
            self._channel.basic_cancel(self._consumer_tag, cb)

    def on_cancelok(self, _unused_frame, userdata):
        """This method is invoked by pika when RabbitMQ acknowledges the
        cancellation of a consumer. At this point we will close the channel.
        This will invoke the on_channel_closed method once the channel has
        been closed, which will in-turn close the connection.

        :param pika.frame.Method _unused_frame: The Basic.CancelOk frame
        :param str|unicode userdata: Extra user data (consumer tag)

        """
        self._consuming = False
        LOGGER.info(
            'RabbitMQ acknowledged the cancellation of the consumer: %s',
            userdata)
        self.close_channel()

    def close_channel(self):
        """Call to close the channel with RabbitMQ cleanly by issuing the
        Channel.Close RPC command.

        """
        LOGGER.info('Closing the channel')
        self._channel.close()

    def run(self):
        """Run the example consumer by connecting to RabbitMQ and then
        starting the IOLoop to block and allow the SelectConnection to
        operate.
""" self._connection = self.connect() self._connection.ioloop.start() def stop(self): """Cleanly shutdown the connection to RabbitMQ by stopping the consumer with RabbitMQ. When RabbitMQ confirms the cancellation, on_cancelok will be invoked by pika, which will then closing the channel and connection. The IOLoop is started again because this method is invoked when CTRL-C is pressed raising a KeyboardInterrupt exception. This exception stops the IOLoop which needs to be running for pika to communicate with RabbitMQ. All of the commands issued prior to starting the IOLoop will be buffered but not processed. """ if not self._closing: self._closing = True LOGGER.info('Stopping') if self._consuming: self.stop_consuming() self._connection.ioloop.start() else: self._connection.ioloop.stop() LOGGER.info('Stopped') class ReconnectingExampleConsumer(object): """This is an example consumer that will reconnect if the nested ExampleConsumer indicates that a reconnect is necessary. """ def __init__(self, amqp_url): self._reconnect_delay = 0 self._amqp_url = amqp_url self._consumer = ExampleConsumer(self._amqp_url) def run(self): while True: try: self._consumer.run() except KeyboardInterrupt: self._consumer.stop() break self._maybe_reconnect() def _maybe_reconnect(self): if self._consumer.should_reconnect: self._consumer.stop() reconnect_delay = self._get_reconnect_delay() LOGGER.info('Reconnecting after %d seconds', reconnect_delay) time.sleep(reconnect_delay) self._consumer = ExampleConsumer(self._amqp_url) def _get_reconnect_delay(self): if self._consumer.was_consuming: self._reconnect_delay = 0 else: self._reconnect_delay += 1 if self._reconnect_delay > 30: self._reconnect_delay = 30 return self._reconnect_delay def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) amqp_url = 'amqp://guest:guest@localhost:5672/%2F' consumer = ReconnectingExampleConsumer(amqp_url) consumer.run() if __name__ == '__main__': main() 
pika-1.2.0/examples/asynchronous_publisher_example.py000066400000000000000000000342531400701476500231640ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import functools import logging import json import pika from pika.exchange_type import ExchangeType LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExamplePublisher(object): """This is an example publisher that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, it will reopen it. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. It uses delivery confirmations and illustrates one way to keep track of messages that have been sent and if they've been confirmed by RabbitMQ. """ EXCHANGE = 'message' EXCHANGE_TYPE = ExchangeType.topic PUBLISH_INTERVAL = 1 QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Setup the example publisher object, passing in the URL we will use to connect to RabbitMQ. :param str amqp_url: The URL for connecting to RabbitMQ """ self._connection = None self._channel = None self._deliveries = None self._acked = None self._nacked = None self._message_number = None self._stopping = False self._url = amqp_url def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. 
:rtype: pika.SelectConnection """ LOGGER.info('Connecting to %s', self._url) return pika.SelectConnection( pika.URLParameters(self._url), on_open_callback=self.on_connection_open, on_open_error_callback=self.on_connection_open_error, on_close_callback=self.on_connection_closed) def on_connection_open(self, _unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :param pika.SelectConnection _unused_connection: The connection """ LOGGER.info('Connection opened') self.open_channel() def on_connection_open_error(self, _unused_connection, err): """This method is called by pika if the connection to RabbitMQ can't be established. :param pika.SelectConnection _unused_connection: The connection :param Exception err: The error """ LOGGER.error('Connection open failed, reopening in 5 seconds: %s', err) self._connection.ioloop.call_later(5, self._connection.ioloop.stop) def on_connection_closed(self, _unused_connection, reason): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param Exception reason: exception representing reason for loss of connection. """ self._channel = None if self._stopping: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reopening in 5 seconds: %s', reason) self._connection.ioloop.call_later(5, self._connection.ioloop.stop) def open_channel(self): """This method will open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ confirms the channel is open by sending the Channel.OpenOK RPC reply, the on_channel_open method will be invoked. 
""" LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reason): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. :param pika.channel.Channel channel: The closed channel :param Exception reason: why the channel was closed """ LOGGER.warning('Channel %i was closed: %s', channel, reason) self._channel = None if not self._stopping: self._connection.close() def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. 
:param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange %s', exchange_name) # Note: using functools.partial is not required, it is demonstrating # how arbitrary data can be passed to the callback when it is called cb = functools.partial( self.on_exchange_declareok, userdata=exchange_name) self._channel.exchange_declare( exchange=exchange_name, exchange_type=self.EXCHANGE_TYPE, callback=cb) def on_exchange_declareok(self, _unused_frame, userdata): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame :param str|unicode userdata: Extra user data (exchange name) """ LOGGER.info('Exchange declared: %s', userdata) self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) self._channel.queue_declare( queue=queue_name, callback=self.on_queue_declareok) def on_queue_declareok(self, _unused_frame): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. :param pika.frame.Method method_frame: The Queue.DeclareOk frame """ LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, self.QUEUE, self.ROUTING_KEY) self._channel.queue_bind( self.QUEUE, self.EXCHANGE, routing_key=self.ROUTING_KEY, callback=self.on_bindok) def on_bindok(self, _unused_frame): """This method is invoked by pika when it receives the Queue.BindOk response from RabbitMQ. 
Since we know we're now setup and bound, it's time to start publishing.""" LOGGER.info('Queue bound') self.start_publishing() def start_publishing(self): """This method will enable delivery confirmations and schedule the first message to be sent to RabbitMQ """ LOGGER.info('Issuing consumer related RPC commands') self.enable_delivery_confirmations() self.schedule_next_message() def enable_delivery_confirmations(self): """Send the Confirm.Select RPC method to RabbitMQ to enable delivery confirmations on the channel. The only way to turn this off is to close the channel and create a new one. When the message is confirmed from RabbitMQ, the on_delivery_confirmation method will be invoked passing in a Basic.Ack or Basic.Nack method from RabbitMQ that will indicate which messages it is confirming or rejecting. """ LOGGER.info('Issuing Confirm.Select RPC command') self._channel.confirm_delivery(self.on_delivery_confirmation) def on_delivery_confirmation(self, method_frame): """Invoked by pika when RabbitMQ responds to a Basic.Publish RPC command, passing in either a Basic.Ack or Basic.Nack frame with the delivery tag of the message that was published. The delivery tag is an integer counter indicating the message number that was sent on the channel via Basic.Publish. Here we're just doing house keeping to keep track of stats and remove message numbers that we expect a delivery confirmation of from the list used to keep track of messages that are pending confirmation. 
:param pika.frame.Method method_frame: Basic.Ack or Basic.Nack frame """ confirmation_type = method_frame.method.NAME.split('.')[1].lower() LOGGER.info('Received %s for delivery tag: %i', confirmation_type, method_frame.method.delivery_tag) if confirmation_type == 'ack': self._acked += 1 elif confirmation_type == 'nack': self._nacked += 1 self._deliveries.remove(method_frame.method.delivery_tag) LOGGER.info( 'Published %i messages, %i have yet to be confirmed, ' '%i were acked and %i were nacked', self._message_number, len(self._deliveries), self._acked, self._nacked) def schedule_next_message(self): """If we are not closing our connection to RabbitMQ, schedule another message to be delivered in PUBLISH_INTERVAL seconds. """ LOGGER.info('Scheduling next message for %0.1f seconds', self.PUBLISH_INTERVAL) self._connection.ioloop.call_later(self.PUBLISH_INTERVAL, self.publish_message) def publish_message(self): """If the class is not stopping, publish a message to RabbitMQ, appending a list of deliveries with the message number that was sent. This list will be used to check for delivery confirmations in the on_delivery_confirmations method. Once the message has been sent, schedule another message to be sent. The main reason I put scheduling in was just so you can get a good idea of how the process is flowing by slowing down and speeding up the delivery intervals by changing the PUBLISH_INTERVAL constant in the class. 
""" if self._channel is None or not self._channel.is_open: return hdrs = {u'مفتاح': u' قيمة', u'键': u'值', u'キー': u'値'} properties = pika.BasicProperties( app_id='example-publisher', content_type='application/json', headers=hdrs) message = u'مفتاح قيمة 键 值 キー 値' self._channel.basic_publish(self.EXCHANGE, self.ROUTING_KEY, json.dumps(message, ensure_ascii=False), properties) self._message_number += 1 self._deliveries.append(self._message_number) LOGGER.info('Published message # %i', self._message_number) self.schedule_next_message() def run(self): """Run the example code by connecting and then starting the IOLoop. """ while not self._stopping: self._connection = None self._deliveries = [] self._acked = 0 self._nacked = 0 self._message_number = 0 try: self._connection = self.connect() self._connection.ioloop.start() except KeyboardInterrupt: self.stop() if (self._connection is not None and not self._connection.is_closed): # Finish closing self._connection.ioloop.start() LOGGER.info('Stopped') def stop(self): """Stop the example by closing the channel and connection. We set a flag here so that we stop scheduling new messages to be published. The IOLoop is started because this method is invoked by the Try/Catch below when KeyboardInterrupt is caught. Starting the IOLoop again will allow the publisher to cleanly disconnect from RabbitMQ. """ LOGGER.info('Stopping') self._stopping = True self.close_channel() self.close_connection() def close_channel(self): """Invoke this command to close the channel with RabbitMQ by sending the Channel.Close RPC command. 
""" if self._channel is not None: LOGGER.info('Closing the channel') self._channel.close() def close_connection(self): """This method closes the connection to RabbitMQ.""" if self._connection is not None: LOGGER.info('Closing connection') self._connection.close() def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) # Connect to localhost:5672 as guest with the password guest and virtual host "/" (%2F) example = ExamplePublisher( 'amqp://guest:guest@localhost:5672/%2F?connection_attempts=3&heartbeat=3600' ) example.run() if __name__ == '__main__': main() pika-1.2.0/examples/asyncio_consumer_example.py000066400000000000000000000411571400701476500217350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import functools import logging import time import pika from pika.adapters.asyncio_connection import AsyncioConnection from pika.exchange_type import ExchangeType LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) class ExampleConsumer(object): """This is an example consumer that will handle unexpected interactions with RabbitMQ such as channel and connection closures. If RabbitMQ closes the connection, this class will stop and indicate that reconnection is necessary. You should look at the output, as there are limited reasons why the connection may be closed, which usually are tied to permission related issues or socket timeouts. If the channel is closed, it will indicate a problem with one of the commands that were issued and that should surface in the output as well. """ EXCHANGE = 'message' EXCHANGE_TYPE = ExchangeType.topic QUEUE = 'text' ROUTING_KEY = 'example.text' def __init__(self, amqp_url): """Create a new instance of the consumer class, passing in the AMQP URL used to connect to RabbitMQ. 
:param str amqp_url: The AMQP url to connect with """ self.should_reconnect = False self.was_consuming = False self._connection = None self._channel = None self._closing = False self._consumer_tag = None self._url = amqp_url self._consuming = False # In production, experiment with higher prefetch values # for higher consumer throughput self._prefetch_count = 1 def connect(self): """This method connects to RabbitMQ, returning the connection handle. When the connection is established, the on_connection_open method will be invoked by pika. :rtype: pika.adapters.asyncio_connection.AsyncioConnection """ LOGGER.info('Connecting to %s', self._url) return AsyncioConnection( parameters=pika.URLParameters(self._url), on_open_callback=self.on_connection_open, on_open_error_callback=self.on_connection_open_error, on_close_callback=self.on_connection_closed) def close_connection(self): self._consuming = False if self._connection.is_closing or self._connection.is_closed: LOGGER.info('Connection is closing or already closed') else: LOGGER.info('Closing connection') self._connection.close() def on_connection_open(self, _unused_connection): """This method is called by pika once the connection to RabbitMQ has been established. It passes the handle to the connection object in case we need it, but in this case, we'll just mark it unused. :param pika.adapters.asyncio_connection.AsyncioConnection _unused_connection: The connection """ LOGGER.info('Connection opened') self.open_channel() def on_connection_open_error(self, _unused_connection, err): """This method is called by pika if the connection to RabbitMQ can't be established. :param pika.adapters.asyncio_connection.AsyncioConnection _unused_connection: The connection :param Exception err: The error """ LOGGER.error('Connection open failed: %s', err) self.reconnect() def on_connection_closed(self, _unused_connection, reason): """This method is invoked by pika when the connection to RabbitMQ is closed unexpectedly. 
Since it is unexpected, we will reconnect to RabbitMQ if it disconnects. :param pika.connection.Connection connection: The closed connection obj :param Exception reason: exception representing reason for loss of connection. """ self._channel = None if self._closing: self._connection.ioloop.stop() else: LOGGER.warning('Connection closed, reconnect necessary: %s', reason) self.reconnect() def reconnect(self): """Will be invoked if the connection can't be opened or is closed. Indicates that a reconnect is necessary then stops the ioloop. """ self.should_reconnect = True self.stop() def open_channel(self): """Open a new channel with RabbitMQ by issuing the Channel.Open RPC command. When RabbitMQ responds that the channel is open, the on_channel_open callback will be invoked by pika. """ LOGGER.info('Creating a new channel') self._connection.channel(on_open_callback=self.on_channel_open) def on_channel_open(self, channel): """This method is invoked by pika when the channel has been opened. The channel object is passed in so we can make use of it. Since the channel is now open, we'll declare the exchange to use. :param pika.channel.Channel channel: The channel object """ LOGGER.info('Channel opened') self._channel = channel self.add_on_channel_close_callback() self.setup_exchange(self.EXCHANGE) def add_on_channel_close_callback(self): """This method tells pika to call the on_channel_closed method if RabbitMQ unexpectedly closes the channel. """ LOGGER.info('Adding channel close callback') self._channel.add_on_close_callback(self.on_channel_closed) def on_channel_closed(self, channel, reason): """Invoked by pika when RabbitMQ unexpectedly closes the channel. Channels are usually closed if you attempt to do something that violates the protocol, such as re-declare an exchange or queue with different parameters. In this case, we'll close the connection to shutdown the object. 
:param pika.channel.Channel: The closed channel :param Exception reason: why the channel was closed """ LOGGER.warning('Channel %i was closed: %s', channel, reason) self.close_connection() def setup_exchange(self, exchange_name): """Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC command. When it is complete, the on_exchange_declareok method will be invoked by pika. :param str|unicode exchange_name: The name of the exchange to declare """ LOGGER.info('Declaring exchange: %s', exchange_name) # Note: using functools.partial is not required, it is demonstrating # how arbitrary data can be passed to the callback when it is called cb = functools.partial( self.on_exchange_declareok, userdata=exchange_name) self._channel.exchange_declare( exchange=exchange_name, exchange_type=self.EXCHANGE_TYPE, callback=cb) def on_exchange_declareok(self, _unused_frame, userdata): """Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC command. :param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame :param str|unicode userdata: Extra user data (exchange name) """ LOGGER.info('Exchange declared: %s', userdata) self.setup_queue(self.QUEUE) def setup_queue(self, queue_name): """Setup the queue on RabbitMQ by invoking the Queue.Declare RPC command. When it is complete, the on_queue_declareok method will be invoked by pika. :param str|unicode queue_name: The name of the queue to declare. """ LOGGER.info('Declaring queue %s', queue_name) cb = functools.partial(self.on_queue_declareok, userdata=queue_name) self._channel.queue_declare(queue=queue_name, callback=cb) def on_queue_declareok(self, _unused_frame, userdata): """Method invoked by pika when the Queue.Declare RPC call made in setup_queue has completed. In this method we will bind the queue and exchange together with the routing key by issuing the Queue.Bind RPC command. When this command is complete, the on_bindok method will be invoked by pika. 
:param pika.frame.Method _unused_frame: The Queue.DeclareOk frame :param str|unicode userdata: Extra user data (queue name) """ queue_name = userdata LOGGER.info('Binding %s to %s with %s', self.EXCHANGE, queue_name, self.ROUTING_KEY) cb = functools.partial(self.on_bindok, userdata=queue_name) self._channel.queue_bind( queue_name, self.EXCHANGE, routing_key=self.ROUTING_KEY, callback=cb) def on_bindok(self, _unused_frame, userdata): """Invoked by pika when the Queue.Bind method has completed. At this point we will set the prefetch count for the channel. :param pika.frame.Method _unused_frame: The Queue.BindOk response frame :param str|unicode userdata: Extra user data (queue name) """ LOGGER.info('Queue bound: %s', userdata) self.set_qos() def set_qos(self): """This method sets up the consumer prefetch to only be delivered one message at a time. The consumer must acknowledge this message before RabbitMQ will deliver another one. You should experiment with different prefetch values to achieve desired performance. """ self._channel.basic_qos( prefetch_count=self._prefetch_count, callback=self.on_basic_qos_ok) def on_basic_qos_ok(self, _unused_frame): """Invoked by pika when the Basic.QoS method has completed. At this point we will start consuming messages by calling start_consuming which will invoke the needed RPC commands to start the process. :param pika.frame.Method _unused_frame: The Basic.QosOk response frame """ LOGGER.info('QOS set to: %d', self._prefetch_count) self.start_consuming() def start_consuming(self): """This method sets up the consumer by first calling add_on_cancel_callback so that the object is notified if RabbitMQ cancels the consumer. It then issues the Basic.Consume RPC command which returns the consumer tag that is used to uniquely identify the consumer with RabbitMQ. We keep the value to use it when we want to cancel consuming. The on_message method is passed in as a callback pika will invoke when a message is fully received. 
""" LOGGER.info('Issuing consumer related RPC commands') self.add_on_cancel_callback() self._consumer_tag = self._channel.basic_consume( self.QUEUE, self.on_message) self.was_consuming = True self._consuming = True def add_on_cancel_callback(self): """Add a callback that will be invoked if RabbitMQ cancels the consumer for some reason. If RabbitMQ does cancel the consumer, on_consumer_cancelled will be invoked by pika. """ LOGGER.info('Adding consumer cancellation callback') self._channel.add_on_cancel_callback(self.on_consumer_cancelled) def on_consumer_cancelled(self, method_frame): """Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer receiving messages. :param pika.frame.Method method_frame: The Basic.Cancel frame """ LOGGER.info('Consumer was cancelled remotely, shutting down: %r', method_frame) if self._channel: self._channel.close() def on_message(self, _unused_channel, basic_deliver, properties, body): """Invoked by pika when a message is delivered from RabbitMQ. The channel is passed for your convenience. The basic_deliver object that is passed in carries the exchange, routing key, delivery tag and a redelivered flag for the message. The properties passed in is an instance of BasicProperties with the message properties and the body is the message that was sent. :param pika.channel.Channel _unused_channel: The channel object :param pika.Spec.Basic.Deliver: basic_deliver method :param pika.Spec.BasicProperties: properties :param bytes body: The message body """ LOGGER.info('Received message # %s from %s: %s', basic_deliver.delivery_tag, properties.app_id, body) self.acknowledge_message(basic_deliver.delivery_tag) def acknowledge_message(self, delivery_tag): """Acknowledge the message delivery from RabbitMQ by sending a Basic.Ack RPC method for the delivery tag. 
        :param int delivery_tag: The delivery tag from the Basic.Deliver frame

        """
        LOGGER.info('Acknowledging message %s', delivery_tag)
        self._channel.basic_ack(delivery_tag)

    def stop_consuming(self):
        """Tell RabbitMQ that you would like to stop consuming by sending the
        Basic.Cancel RPC command.

        """
        if self._channel:
            LOGGER.info('Sending a Basic.Cancel RPC command to RabbitMQ')
            cb = functools.partial(
                self.on_cancelok, userdata=self._consumer_tag)
            self._channel.basic_cancel(self._consumer_tag, cb)

    def on_cancelok(self, _unused_frame, userdata):
        """This method is invoked by pika when RabbitMQ acknowledges the
        cancellation of a consumer. At this point we will close the channel.
        This will invoke the on_channel_closed method once the channel has been
        closed, which will in-turn close the connection.

        :param pika.frame.Method _unused_frame: The Basic.CancelOk frame
        :param str|unicode userdata: Extra user data (consumer tag)

        """
        self._consuming = False
        LOGGER.info(
            'RabbitMQ acknowledged the cancellation of the consumer: %s',
            userdata)
        self.close_channel()

    def close_channel(self):
        """Call to close the channel with RabbitMQ cleanly by issuing the
        Channel.Close RPC command.

        """
        LOGGER.info('Closing the channel')
        self._channel.close()

    def run(self):
        """Run the example consumer by connecting to RabbitMQ and then starting
        the IOLoop to block and allow the AsyncioConnection to operate.

        """
        self._connection = self.connect()
        self._connection.ioloop.run_forever()

    def stop(self):
        """Cleanly shutdown the connection to RabbitMQ by stopping the consumer
        with RabbitMQ. When RabbitMQ confirms the cancellation, on_cancelok
        will be invoked by pika, which will then close the channel and
        connection. The IOLoop is started again because this method is invoked
        when CTRL-C is pressed raising a KeyboardInterrupt exception. This
        exception stops the IOLoop which needs to be running for pika to
        communicate with RabbitMQ.
All of the commands issued prior to starting the IOLoop will be buffered but not processed. """ if not self._closing: self._closing = True LOGGER.info('Stopping') if self._consuming: self.stop_consuming() self._connection.ioloop.run_forever() else: self._connection.ioloop.stop() LOGGER.info('Stopped') class ReconnectingExampleConsumer(object): """This is an example consumer that will reconnect if the nested ExampleConsumer indicates that a reconnect is necessary. """ def __init__(self, amqp_url): self._reconnect_delay = 0 self._amqp_url = amqp_url self._consumer = ExampleConsumer(self._amqp_url) def run(self): while True: try: self._consumer.run() except KeyboardInterrupt: self._consumer.stop() break self._maybe_reconnect() def _maybe_reconnect(self): if self._consumer.should_reconnect: self._consumer.stop() reconnect_delay = self._get_reconnect_delay() LOGGER.info('Reconnecting after %d seconds', reconnect_delay) time.sleep(reconnect_delay) self._consumer = ExampleConsumer(self._amqp_url) def _get_reconnect_delay(self): if self._consumer.was_consuming: self._reconnect_delay = 0 else: self._reconnect_delay += 1 if self._reconnect_delay > 30: self._reconnect_delay = 30 return self._reconnect_delay def main(): logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) amqp_url = 'amqp://guest:guest@localhost:5672/%2F' consumer = ReconnectingExampleConsumer(amqp_url) consumer.run() if __name__ == '__main__': main() pika-1.2.0/examples/basic_consumer_threaded.py000066400000000000000000000052271400701476500214740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import functools import logging import threading import time import pika from pika.exchange_type import ExchangeType LOG_FORMAT = ('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) logging.basicConfig(level=logging.DEBUG, format=LOG_FORMAT) def ack_message(ch, delivery_tag): """Note that `ch` 
must be the same pika channel instance via which the message being ACKed was retrieved (AMQP protocol constraint). """ if ch.is_open: ch.basic_ack(delivery_tag) else: # Channel is already closed, so we can't ACK this message; # log and/or do something that makes sense for your app in this case. pass def do_work(conn, ch, delivery_tag, body): thread_id = threading.get_ident() LOGGER.info('Thread id: %s Delivery tag: %s Message body: %s', thread_id, delivery_tag, body) # Sleeping to simulate 10 seconds of work time.sleep(10) cb = functools.partial(ack_message, ch, delivery_tag) conn.add_callback_threadsafe(cb) def on_message(ch, method_frame, _header_frame, body, args): (conn, thrds) = args delivery_tag = method_frame.delivery_tag t = threading.Thread(target=do_work, args=(conn, ch, delivery_tag, body)) t.start() thrds.append(t) credentials = pika.PlainCredentials('guest', 'guest') # Note: sending a short heartbeat to prove that heartbeats are still # sent even though the worker simulates long-running work parameters = pika.ConnectionParameters( 'localhost', credentials=credentials, heartbeat=5) connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.exchange_declare( exchange="test_exchange", exchange_type=ExchangeType.direct, passive=False, durable=True, auto_delete=False) channel.queue_declare(queue="standard", auto_delete=True) channel.queue_bind( queue="standard", exchange="test_exchange", routing_key="standard_key") # Note: prefetch is set to 1 here as an example only and to keep the number of threads created # to a reasonable amount. 
In production you will want to test with different prefetch values # to find which one provides the best performance and usability for your solution channel.basic_qos(prefetch_count=1) threads = [] on_message_callback = functools.partial(on_message, args=(connection, threads)) channel.basic_consume('standard', on_message_callback) try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() # Wait for all to complete for thread in threads: thread.join() connection.close() pika-1.2.0/examples/blocking_consume_recover_multiple_hosts.py000066400000000000000000000037121400701476500250360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import functools import random import pika from pika.exchange_type import ExchangeType def on_message(ch, method_frame, _header_frame, body, userdata=None): print('Userdata: {} Message body: {}'.format(userdata, body)) ch.basic_ack(delivery_tag=method_frame.delivery_tag) credentials = pika.PlainCredentials('guest', 'guest') params1 = pika.ConnectionParameters( 'localhost', port=5672, credentials=credentials) params2 = pika.ConnectionParameters( 'localhost', port=5673, credentials=credentials) params3 = pika.ConnectionParameters( 'localhost', port=5674, credentials=credentials) params_all = [params1, params2, params3] # Infinite loop while True: try: random.shuffle(params_all) connection = pika.BlockingConnection(params_all) channel = connection.channel() channel.exchange_declare( exchange='test_exchange', exchange_type=ExchangeType.direct, passive=False, durable=True, auto_delete=False) channel.queue_declare(queue='standard', auto_delete=True) channel.queue_bind( queue='standard', exchange='test_exchange', routing_key='standard_key') channel.basic_qos(prefetch_count=1) on_message_callback = functools.partial( on_message, userdata='on_message_userdata') channel.basic_consume('standard', on_message_callback) try: channel.start_consuming() except KeyboardInterrupt: 
channel.stop_consuming() connection.close() break # Do not recover if connection was closed by broker except pika.exceptions.ConnectionClosedByBroker: break # Do not recover on channel errors except pika.exceptions.AMQPChannelError: break # Recover on all other connection errors except pika.exceptions.AMQPConnectionError: continue pika-1.2.0/examples/confirmation.py000066400000000000000000000027111400701476500173230ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205,W0603 import logging import pika from pika import spec ITERATIONS = 100 logging.basicConfig(level=logging.INFO) confirmed = 0 errors = 0 published = 0 def on_open(conn): conn.channel(on_open_callback=on_channel_open) def on_channel_open(channel): global published channel.confirm_delivery(ack_nack_callback=on_delivery_confirmation) for _iteration in range(0, ITERATIONS): channel.basic_publish( 'test', 'test.confirm', 'message body value', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) published += 1 def on_delivery_confirmation(frame): global confirmed, errors if isinstance(frame.method, spec.Basic.Ack): confirmed += 1 logging.info('Received confirmation: %r', frame.method) else: logging.error('Received negative confirmation: %r', frame.method) errors += 1 if (confirmed + errors) == ITERATIONS: logging.info( 'All confirmations received, published %i, confirmed %i with %i errors', published, confirmed, errors) connection.close() parameters = pika.URLParameters( 'amqp://guest:guest@localhost:5672/%2F?connection_attempts=50') connection = pika.SelectConnection( parameters=parameters, on_open_callback=on_open) try: connection.ioloop.start() except KeyboardInterrupt: connection.close() connection.ioloop.start() pika-1.2.0/examples/consume.py000066400000000000000000000032221400701476500163020ustar00rootroot00000000000000"""Basic message consumer example""" import functools import logging import pika from pika.exchange_type import ExchangeType LOG_FORMAT = 
('%(levelname) -10s %(asctime)s %(name) -30s %(funcName) ' '-35s %(lineno) -5d: %(message)s') LOGGER = logging.getLogger(__name__) logging.basicConfig(level=logging.INFO, format=LOG_FORMAT) def on_message(chan, method_frame, header_frame, body, userdata=None): """Called when a message is received. Log message and ack it.""" LOGGER.info('Delivery properties: %s, message metadata: %s', method_frame, header_frame) LOGGER.info('Userdata: %s, message body: %s', userdata, body) chan.basic_ack(delivery_tag=method_frame.delivery_tag) def main(): """Main method.""" credentials = pika.PlainCredentials('guest', 'guest') parameters = pika.ConnectionParameters('localhost', credentials=credentials) connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.exchange_declare( exchange='test_exchange', exchange_type=ExchangeType.direct, passive=False, durable=True, auto_delete=False) channel.queue_declare(queue='standard', auto_delete=True) channel.queue_bind( queue='standard', exchange='test_exchange', routing_key='standard_key') channel.basic_qos(prefetch_count=1) on_message_callback = functools.partial( on_message, userdata='on_message_userdata') channel.basic_consume('standard', on_message_callback) try: channel.start_consuming() except KeyboardInterrupt: channel.stop_consuming() connection.close() if __name__ == '__main__': main() pika-1.2.0/examples/consumer_queued.py000066400000000000000000000037441400701476500200450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import json import threading import pika from pika.exchange_type import ExchangeType body_buffer = [] lock = threading.Lock() print('pika version: %s' % pika.__version__) connection = pika.BlockingConnection( pika.ConnectionParameters(host='localhost')) main_channel = connection.channel() consumer_channel = connection.channel() bind_channel = connection.channel() main_channel.exchange_declare(exchange='com.micex.sten', 
exchange_type=ExchangeType.direct) main_channel.exchange_declare( exchange='com.micex.lasttrades', exchange_type=ExchangeType.direct) queue = main_channel.queue_declare('', exclusive=True).method.queue queue_tickers = main_channel.queue_declare('', exclusive=True).method.queue main_channel.queue_bind( exchange='com.micex.sten', queue=queue, routing_key='order.stop.create') def process_buffer(): if not lock.acquire(False): print('locked!') return try: while body_buffer: body = body_buffer.pop(0) ticker = None if 'ticker' in body['data']['params']['condition']: ticker = body['data']['params']['condition']['ticker'] if not ticker: continue print('got ticker %s, gonna bind it...' % ticker) bind_channel.queue_bind( exchange='com.micex.lasttrades', queue=queue_tickers, routing_key=str(ticker)) print('ticker %s bound ok' % ticker) finally: lock.release() def callback(_ch, _method, _properties, body): body = json.loads(body)['order.stop.create'] body_buffer.append(body) process_buffer() # Note: consuming with automatic acknowledgements has its risks # and is used here for simplicity. # See https://www.rabbitmq.com/confirms.html.
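The acknowledgement note above can be made concrete. Below is a hedged sketch of the manual-acknowledgement alternative; `callback_with_manual_ack` is an illustrative helper and not part of this example.

```python
def callback_with_manual_ack(ch, method, _properties, body):
    """Process a delivery, then explicitly acknowledge it."""
    print('processing %r' % body)
    # Ack only after processing; if the consumer dies before this line,
    # the broker can redeliver the message instead of silently losing it.
    ch.basic_ack(delivery_tag=method.delivery_tag)

# Usage sketch (auto_ack defaults to False):
# consumer_channel.basic_consume(queue, callback_with_manual_ack)
```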
consumer_channel.basic_consume(queue, callback, auto_ack=True) try: consumer_channel.start_consuming() finally: connection.close() pika-1.2.0/examples/consumer_simple.py000066400000000000000000000033211400701476500200350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import json import logging import pika from pika.exchange_type import ExchangeType print('pika version: %s' % pika.__version__) connection = pika.BlockingConnection( pika.ConnectionParameters(host='localhost')) main_channel = connection.channel() consumer_channel = connection.channel() bind_channel = connection.channel() main_channel.exchange_declare(exchange='com.micex.sten', exchange_type=ExchangeType.direct) main_channel.exchange_declare( exchange='com.micex.lasttrades', exchange_type=ExchangeType.direct) queue = main_channel.queue_declare('', exclusive=True).method.queue queue_tickers = main_channel.queue_declare('', exclusive=True).method.queue main_channel.queue_bind( exchange='com.micex.sten', queue=queue, routing_key='order.stop.create') def hello(): print('Hello world') connection.call_later(5, hello) def callback(_ch, _method, _properties, body): body = json.loads(body)['order.stop.create'] ticker = None if 'ticker' in body['data']['params']['condition']: ticker = body['data']['params']['condition']['ticker'] if not ticker: return print('got ticker %s, gonna bind it...' % ticker) bind_channel.queue_bind( exchange='com.micex.lasttrades', queue=queue_tickers, routing_key=str(ticker)) print('ticker %s bound ok' % ticker) logging.basicConfig(level=logging.INFO) # Note: consuming with automatic acknowledgements has its risks # and is used here for simplicity. # See https://www.rabbitmq.com/confirms.html.
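The confirms page linked above also covers the publishing side. Below is a hedged sketch of publisher confirms on a blocking channel; `publish_confirmed` is an illustrative helper, not part of this example, and assumes pika 1.x blocking-channel behavior where, after `confirm_delivery()`, an unroutable `mandatory=True` publish raises `pika.exceptions.UnroutableError`.

```python
def publish_confirmed(channel, exchange, routing_key, body):
    """Publish one message with publisher confirms enabled."""
    channel.confirm_delivery()  # put the channel into confirm mode
    # With confirms on, an unroutable mandatory message surfaces as an
    # exception instead of being dropped silently.
    channel.basic_publish(exchange, routing_key, body, mandatory=True)
```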
consumer_channel.basic_consume(queue, callback, auto_ack=True) try: consumer_channel.start_consuming() finally: connection.close() pika-1.2.0/examples/direct_reply_to.py000066400000000000000000000045321400701476500200250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 """ This example demonstrates RabbitMQ's "Direct reply-to" usage via `pika.BlockingConnection`. See https://www.rabbitmq.com/direct-reply-to.html for more info about this feature. """ import pika SERVER_QUEUE = 'rpc.server.queue' def main(): """ Here, Client sends "Marco" to RPC Server, and RPC Server replies with "Polo". NOTE Normally, the server would be running separately from the client, but in this very simple example both are running in the same thread and sharing connection and channel. """ with pika.BlockingConnection() as conn: channel = conn.channel() # Set up server channel.queue_declare( queue=SERVER_QUEUE, exclusive=True, auto_delete=True) channel.basic_consume(SERVER_QUEUE, on_server_rx_rpc_request) # Set up client # NOTE Client must create its consumer and publish RPC requests on the # same channel to enable the RabbitMQ broker to make the necessary # associations. # # Also, client must create the consumer *before* starting to publish the # RPC requests. # # Client must create its consumer with auto_ack=True, because the reply-to # queue isn't real. 
channel.basic_consume( 'amq.rabbitmq.reply-to', on_client_rx_reply_from_server, auto_ack=True) channel.basic_publish( exchange='', routing_key=SERVER_QUEUE, body='Marco', properties=pika.BasicProperties(reply_to='amq.rabbitmq.reply-to')) channel.start_consuming() def on_server_rx_rpc_request(ch, method_frame, properties, body): print('RPC Server got request: %s' % body) ch.basic_publish('', routing_key=properties.reply_to, body='Polo') ch.basic_ack(delivery_tag=method_frame.delivery_tag) print('RPC Server says good bye') def on_client_rx_reply_from_server(ch, _method_frame, _properties, body): print('RPC Client got reply: %s' % body) # NOTE A real client might want to make additional RPC requests, but in this # simple example we're closing the channel after getting our first reply # to force control to return from channel.start_consuming() print('RPC Client says bye') ch.close() if __name__ == '__main__': main() pika-1.2.0/examples/heartbeat_and_blocked_timeouts.py000066400000000000000000000037001400701476500230270ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 """ This example demonstrates explicit setting of heartbeat and blocked connection timeouts. Starting with RabbitMQ 3.5.5, the broker's default heartbeat timeout decreased from 580 seconds to 60 seconds. As a result, applications that perform lengthy processing in the same thread that also runs their Pika connection may experience unexpected dropped connections due to heartbeat timeout. Here, we specify an explicit lower bound for heartbeat timeout. When the RabbitMQ broker is running out of certain resources, such as memory and disk space, it may block connections that are performing resource-consuming operations, such as publishing messages. Once a connection is blocked, RabbitMQ stops reading from that connection's socket, so no commands from the client will get through to the broker on that connection until the broker unblocks it.
A blocked connection may last for an indefinite period of time, stalling the connection and possibly resulting in a hang (e.g., in BlockingConnection) until the connection is unblocked. Blocked Connection Timeout is intended to interrupt (i.e., drop) a connection that has been blocked longer than the given timeout value. """ import pika def main(): # NOTE: These parameters work with all Pika connection types params = pika.ConnectionParameters( heartbeat=600, blocked_connection_timeout=300) conn = pika.BlockingConnection(params) chan = conn.channel() chan.basic_publish('', 'my-alphabet-queue', "abc") # If publish causes the connection to become blocked, then this conn.close() # would hang until the connection is unblocked, if ever. However, the # blocked_connection_timeout connection parameter would interrupt the wait, # resulting in ConnectionClosed exception from BlockingConnection (or the # on_connection_closed callback call in an asynchronous adapter) conn.close() if __name__ == '__main__': main() pika-1.2.0/examples/producer.py000066400000000000000000000025701400701476500164610ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import json import random import pika from pika.exchange_type import ExchangeType print('pika version: %s' % pika.__version__) connection = pika.BlockingConnection( pika.ConnectionParameters(host='localhost')) main_channel = connection.channel() main_channel.exchange_declare(exchange='com.micex.sten', exchange_type=ExchangeType.direct) main_channel.exchange_declare( exchange='com.micex.lasttrades', exchange_type=ExchangeType.direct) tickers = { 'MXSE.EQBR.LKOH': (1933, 1940), 'MXSE.EQBR.MSNG': (1.35, 1.45), 'MXSE.EQBR.SBER': (90, 92), 'MXSE.EQNE.GAZP': (156, 162), 'MXSE.EQNE.PLZL': (1025, 1040), 'MXSE.EQNL.VTBR': (0.05, 0.06) } def getticker(): # randrange upper bound is exclusive; using len(tickers) ensures the # last ticker can also be selected return list(tickers.keys())[random.randrange(0, len(tickers))] _COUNT_ = 10 for i in range(0, _COUNT_): ticker = getticker() msg = { 'order.stop.create': {
'data': { 'params': { 'condition': { 'ticker': ticker } } } } } main_channel.basic_publish( exchange='com.micex.sten', routing_key='order.stop.create', body=json.dumps(msg), properties=pika.BasicProperties(content_type='application/json')) print('send ticker %s' % ticker) connection.close() pika-1.2.0/examples/publish.py000066400000000000000000000022231400701476500162770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 import logging import pika from pika.exchange_type import ExchangeType logging.basicConfig(level=logging.DEBUG) credentials = pika.PlainCredentials('guest', 'guest') parameters = pika.ConnectionParameters('localhost', credentials=credentials) connection = pika.BlockingConnection(parameters) channel = connection.channel() channel.exchange_declare( exchange="test_exchange", exchange_type=ExchangeType.direct, passive=False, durable=True, auto_delete=False) print("Sending message to create a queue") channel.basic_publish( 'test_exchange', 'standard_key', 'queue:group', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.sleep(5) print("Sending text message to group") channel.basic_publish( 'test_exchange', 'group_key', 'Message to group_key', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.sleep(5) print("Sending text message") channel.basic_publish( 'test_exchange', 'standard_key', 'Message to standard_key', pika.BasicProperties(content_type='text/plain', delivery_mode=1)) connection.close() pika-1.2.0/examples/twisted_service.py000066400000000000000000000222471400701476500200440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # pylint: disable=C0111,C0103,R0205 """ # based on: # - txamqp-helpers by Dan Siemon (March 2010) # http://git.coverfire.com/?p=txamqp-twistd.git;a=tree # - Post by Brian Chandler # https://groups.google.com/forum/#!topic/pika-python/o_deVmGondk # - Pika Documentation # 
https://pika.readthedocs.io/en/latest/examples/twisted_example.html Fire up this test application via `twistd -ny twisted_service.py` The application will answer to requests to exchange "foobar" and any of the routing_key values: "request1", "request2", or "request3" with messages to the same exchange, but with routing_key "response" When a routing_key of "task" is used on the exchange "foobar", the application can asynchronously run a maximum of 2 tasks at once as defined by PREFETCH_COUNT """ import logging import sys from twisted.internet import protocol from twisted.application import internet from twisted.application import service from twisted.internet.defer import inlineCallbacks from twisted.internet import ssl, defer, task from twisted.python import log from twisted.internet import reactor import pika from pika import spec from pika.adapters import twisted_connection from pika.exchange_type import ExchangeType PREFETCH_COUNT = 2 class PikaService(service.MultiService): name = 'amqp' def __init__(self, parameter): service.MultiService.__init__(self) self.parameters = parameter def startService(self): self.connect() service.MultiService.startService(self) def getFactory(self): return self.services[0].factory def connect(self): f = PikaFactory(self.parameters) if self.parameters.ssl_options: s = ssl.ClientContextFactory() serv = internet.SSLClient( # pylint: disable=E1101 host=self.parameters.host, port=self.parameters.port, factory=f, contextFactory=s) else: serv = internet.TCPClient( # pylint: disable=E1101 host=self.parameters.host, port=self.parameters.port, factory=f) serv.factory = f f.service = serv # pylint: disable=W0201 name = '%s%s:%d' % ('ssl:' if self.parameters.ssl_options else '', self.parameters.host, self.parameters.port) serv.__repr__ = lambda: '<%s>' % name serv.setName(name) serv.setServiceParent(self) class PikaProtocol(twisted_connection.TwistedProtocolConnection): connected = False name = 'AMQP:Protocol' def __init__(self, factory,
parameters): super().__init__(parameters) self.factory = factory @inlineCallbacks def connectionReady(self): self._channel = yield self.channel() yield self._channel.basic_qos(prefetch_count=PREFETCH_COUNT) self.connected = True yield self._channel.confirm_delivery() for ( exchange, routing_key, callback, ) in self.factory.read_list: yield self.setup_read(exchange, routing_key, callback) self.send() @inlineCallbacks def read(self, exchange, routing_key, callback): """Add an exchange to the list of exchanges to read from.""" if self.connected: yield self.setup_read(exchange, routing_key, callback) @inlineCallbacks def setup_read(self, exchange, routing_key, callback): """This function does the work to read from an exchange.""" if exchange: yield self._channel.exchange_declare( exchange=exchange, exchange_type=ExchangeType.topic, durable=True, auto_delete=False) yield self._channel.queue_declare(queue=routing_key, durable=True) if exchange: yield self._channel.queue_bind(queue=routing_key, exchange=exchange) yield self._channel.queue_bind( queue=routing_key, exchange=exchange, routing_key=routing_key) ( queue, _consumer_tag, ) = yield self._channel.basic_consume( queue=routing_key, auto_ack=False) d = queue.get() d.addCallback(self._read_item, queue, callback) d.addErrback(self._read_item_err) def _read_item(self, item, queue, callback): """Callback function which is called when an item is read.""" d = queue.get() d.addCallback(self._read_item, queue, callback) d.addErrback(self._read_item_err) ( channel, deliver, _props, msg, ) = item log.msg( '%s (%s): %s' % (deliver.exchange, deliver.routing_key, repr(msg)), system='Pika:<=') d = defer.maybeDeferred(callback, item) d.addCallbacks(lambda _: channel.basic_ack(deliver.delivery_tag), lambda _: channel.basic_nack(deliver.delivery_tag)) @staticmethod def _read_item_err(error): print(error) def send(self): """If connected, send all waiting messages.""" if self.connected: while self.factory.queued_messages: ( exchange, 
r_key, message, ) = self.factory.queued_messages.pop(0) self.send_message(exchange, r_key, message) @inlineCallbacks def send_message(self, exchange, routing_key, msg): """Send a single message.""" log.msg( '%s (%s): %s' % (exchange, routing_key, repr(msg)), system='Pika:=>') yield self._channel.exchange_declare( exchange=exchange, exchange_type=ExchangeType.topic, durable=True, auto_delete=False) prop = spec.BasicProperties(delivery_mode=2) try: yield self._channel.basic_publish( exchange=exchange, routing_key=routing_key, body=msg, properties=prop) except Exception as error: # pylint: disable=W0703 log.msg('Error while sending message: %s' % error, system=self.name) class PikaFactory(protocol.ReconnectingClientFactory): name = 'AMQP:Factory' def __init__(self, parameters): self.parameters = parameters self.client = None self.queued_messages = [] self.read_list = [] def startedConnecting(self, connector): log.msg('Started to connect.', system=self.name) def buildProtocol(self, addr): self.resetDelay() log.msg('Connected', system=self.name) self.client = PikaProtocol(self, self.parameters) return self.client def clientConnectionLost(self, connector, reason): # pylint: disable=W0221 log.msg('Lost connection. Reason: %s' % reason.value, system=self.name) protocol.ReconnectingClientFactory.clientConnectionLost( self, connector, reason) def clientConnectionFailed(self, connector, reason): log.msg( 'Connection failed. 
Reason: %s' % reason.value, system=self.name) protocol.ReconnectingClientFactory.clientConnectionFailed( self, connector, reason) def send_message(self, exchange=None, routing_key=None, message=None): self.queued_messages.append((exchange, routing_key, message)) if self.client is not None: self.client.send() def read_messages(self, exchange, routing_key, callback): """Configure an exchange to be read from.""" self.read_list.append((exchange, routing_key, callback)) if self.client is not None: self.client.read(exchange, routing_key, callback) application = service.Application("pikaapplication") ps = PikaService( pika.ConnectionParameters( host="localhost", virtual_host="/", credentials=pika.PlainCredentials("guest", "guest"))) ps.setServiceParent(application) class TestService(service.Service): def __init__(self): super().__init__() self.amqp = None def task(self, _msg): # pylint: disable=R0201 """ Method for a time-consuming task. This function must return a deferred. If it is successful, a `basic.ack` will be sent to AMQP. If the task was not completed, a `basic.nack` will be sent. In this example it will always return successfully after a 2 second pause.
""" return task.deferLater(reactor, 2, lambda: log.msg("task completed")) def respond(self, msg): self.amqp.send_message('foobar', 'response', msg[3]) def startService(self): amqp_service = self.parent.getServiceNamed("amqp") # pylint: disable=E1111,E1121 self.amqp = amqp_service.getFactory() self.amqp.read_messages("foobar", "request1", self.respond) self.amqp.read_messages("foobar", "request2", self.respond) self.amqp.read_messages("foobar", "request3", self.respond) self.amqp.read_messages("foobar", "task", self.task) ts = TestService() ts.setServiceParent(application) observer = log.PythonLoggingObserver() observer.start() logging.basicConfig(level=logging.INFO, stream=sys.stdout) pika-1.2.0/pika/000077500000000000000000000000001400701476500133665ustar00rootroot00000000000000pika-1.2.0/pika/__init__.py000066400000000000000000000012111400701476500154720ustar00rootroot00000000000000__version__ = '1.2.0' import logging # Add NullHandler before importing Pika modules to prevent logging warnings logging.getLogger(__name__).addHandler(logging.NullHandler()) # pylint: disable=C0413 from pika.connection import ConnectionParameters from pika.connection import URLParameters from pika.connection import SSLOptions from pika.credentials import PlainCredentials from pika.spec import BasicProperties from pika import adapters from pika.adapters import BaseConnection from pika.adapters import BlockingConnection from pika.adapters import SelectConnection from pika.adapters.utils.connection_workflow import AMQPConnectionWorkflow pika-1.2.0/pika/adapters/000077500000000000000000000000001400701476500151715ustar00rootroot00000000000000pika-1.2.0/pika/adapters/__init__.py000066400000000000000000000017421400701476500173060ustar00rootroot00000000000000""" Connection Adapters =================== Pika provides multiple adapters to connect to RabbitMQ: - adapters.asyncio_connection.AsyncioConnection: Native Python3 AsyncIO use - adapters.blocking_connection.BlockingConnection: Enables 
blocking, synchronous operation on top of the library for simple uses. - adapters.gevent_connection.GeventConnection: Connection adapter for use with Gevent. - adapters.select_connection.SelectConnection: A native event-based connection adapter that implements select, kqueue, poll and epoll. - adapters.tornado_connection.TornadoConnection: Connection adapter for use with the Tornado web framework. - adapters.twisted_connection.TwistedProtocolConnection: Connection adapter for use with the Twisted framework """ from pika.adapters.base_connection import BaseConnection from pika.adapters.blocking_connection import BlockingConnection from pika.adapters.select_connection import SelectConnection from pika.adapters.select_connection import IOLoop pika-1.2.0/pika/adapters/asyncio_connection.py000066400000000000000000000220221400701476500214250ustar00rootroot00000000000000"""Use pika with the Asyncio EventLoop""" import asyncio import logging import sys from pika.adapters import base_connection from pika.adapters.utils import nbio_interface, io_services_utils LOGGER = logging.getLogger(__name__) if sys.platform == 'win32': asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy()) class AsyncioConnection(base_connection.BaseConnection): """ The AsyncioConnection runs on the Asyncio EventLoop. """ def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, custom_ioloop=None, internal_connection_workflow=True): """ Create a new instance of the AsyncioConnection class, connecting to RabbitMQ automatically :param pika.connection.Parameters parameters: Connection parameters :param callable on_open_callback: The method to call when the connection is open :param None | method on_open_error_callback: Called if the connection can't be established or connection establishment is interrupted by `Connection.close()`: on_open_error_callback(Connection, exception).
:param None | method on_close_callback: Called when a previously fully open connection is closed: `on_close_callback(Connection, exception)`, where `exception` is either an instance of `exceptions.ConnectionClosed` if closed by user or broker or exception of another type that describes the cause of connection failure. :param None | asyncio.AbstractEventLoop | nbio_interface.AbstractIOServices custom_ioloop: Defaults to asyncio.get_event_loop(). :param bool internal_connection_workflow: True for autonomous connection establishment which is default; False for externally-managed connection workflow via the `create_connection()` factory. """ if isinstance(custom_ioloop, nbio_interface.AbstractIOServices): nbio = custom_ioloop else: nbio = _AsyncioIOServicesAdapter(custom_ioloop) super().__init__( parameters, on_open_callback, on_open_error_callback, on_close_callback, nbio, internal_connection_workflow=internal_connection_workflow) @classmethod def create_connection(cls, connection_configs, on_done, custom_ioloop=None, workflow=None): """Implement :py:classmethod:`pika.adapters.BaseConnection.create_connection()`. """ nbio = _AsyncioIOServicesAdapter(custom_ioloop) def connection_factory(params): """Connection factory.""" if params is None: raise ValueError('Expected pika.connection.Parameters ' 'instance, but got None in params arg.') return cls( parameters=params, custom_ioloop=nbio, internal_connection_workflow=False) return cls._start_connection_workflow( connection_configs=connection_configs, connection_factory=connection_factory, nbio=nbio, workflow=workflow, on_done=on_done) class _AsyncioIOServicesAdapter(io_services_utils.SocketConnectionMixin, io_services_utils.StreamingConnectionMixin, nbio_interface.AbstractIOServices, nbio_interface.AbstractFileDescriptorServices): """Implements :py:class:`.utils.nbio_interface.AbstractIOServices` interface on top of `asyncio`. 
NOTE: :py:class:`.utils.nbio_interface.AbstractFileDescriptorServices` interface is only required by the mixins. """ def __init__(self, loop=None): """ :param asyncio.AbstractEventLoop | None loop: If None, gets default event loop from asyncio. """ self._loop = loop or asyncio.get_event_loop() def get_native_ioloop(self): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.get_native_ioloop()`. """ return self._loop def close(self): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.close()`. """ self._loop.close() def run(self): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.run()`. """ self._loop.run_forever() def stop(self): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.stop()`. """ self._loop.stop() def add_callback_threadsafe(self, callback): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.add_callback_threadsafe()`. """ self._loop.call_soon_threadsafe(callback) def call_later(self, delay, callback): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.call_later()`. """ return _TimerHandle(self._loop.call_later(delay, callback)) def getaddrinfo(self, host, port, on_done, family=0, socktype=0, proto=0, flags=0): """Implement :py:meth:`.utils.nbio_interface.AbstractIOServices.getaddrinfo()`. """ return self._schedule_and_wrap_in_io_ref( self._loop.getaddrinfo( host, port, family=family, type=socktype, proto=proto, flags=flags), on_done) def set_reader(self, fd, on_readable): """Implement :py:meth:`.utils.nbio_interface.AbstractFileDescriptorServices.set_reader()`. """ self._loop.add_reader(fd, on_readable) LOGGER.debug('set_reader(%s, _)', fd) def remove_reader(self, fd): """Implement :py:meth:`.utils.nbio_interface.AbstractFileDescriptorServices.remove_reader()`. """ LOGGER.debug('remove_reader(%s)', fd) return self._loop.remove_reader(fd) def set_writer(self, fd, on_writable): """Implement :py:meth:`.utils.nbio_interface.AbstractFileDescriptorServices.set_writer()`. 
""" self._loop.add_writer(fd, on_writable) LOGGER.debug('set_writer(%s, _)', fd) def remove_writer(self, fd): """Implement :py:meth:`.utils.nbio_interface.AbstractFileDescriptorServices.remove_writer()`. """ LOGGER.debug('remove_writer(%s)', fd) return self._loop.remove_writer(fd) def _schedule_and_wrap_in_io_ref(self, coro, on_done): """Schedule the coroutine to run and return _AsyncioIOReference :param coroutine-obj coro: :param callable on_done: user callback that takes the completion result or exception as its only arg. It will not be called if the operation was cancelled. :rtype: _AsyncioIOReference which is derived from nbio_interface.AbstractIOReference """ if not callable(on_done): raise TypeError( 'on_done arg must be callable, but got {!r}'.format(on_done)) return _AsyncioIOReference( asyncio.ensure_future(coro, loop=self._loop), on_done) class _TimerHandle(nbio_interface.AbstractTimerReference): """This module's adaptation of `nbio_interface.AbstractTimerReference`. """ def __init__(self, handle): """ :param asyncio.Handle handle: """ self._handle = handle def cancel(self): if self._handle is not None: self._handle.cancel() self._handle = None class _AsyncioIOReference(nbio_interface.AbstractIOReference): """This module's adaptation of `nbio_interface.AbstractIOReference`. """ def __init__(self, future, on_done): """ :param asyncio.Future future: :param callable on_done: user callback that takes the completion result or exception as its only arg. It will not be called if the operation was cancelled. 
""" if not callable(on_done): raise TypeError( 'on_done arg must be callable, but got {!r}'.format(on_done)) self._future = future def on_done_adapter(future): """Handle completion callback from the future instance""" # NOTE: Asyncio schedules callback for cancelled futures, but pika # doesn't want that if not future.cancelled(): on_done(future.exception() or future.result()) future.add_done_callback(on_done_adapter) def cancel(self): """Cancel pending operation :returns: False if was already done or cancelled; True otherwise :rtype: bool """ return self._future.cancel() pika-1.2.0/pika/adapters/base_connection.py000066400000000000000000000500551400701476500207010ustar00rootroot00000000000000"""Base class extended by connection adapters. This extends the connection.Connection class to encapsulate connection behavior but still isolate socket and low level communication. """ import abc import functools import logging import pika.compat import pika.exceptions import pika.tcp_socket_opts from pika.adapters.utils import connection_workflow, nbio_interface from pika import connection LOGGER = logging.getLogger(__name__) class BaseConnection(connection.Connection): """BaseConnection class that should be extended by connection adapters. This class abstracts I/O loop and transport services from pika core. """ def __init__(self, parameters, on_open_callback, on_open_error_callback, on_close_callback, nbio, internal_connection_workflow): """Create a new instance of the Connection object. :param None|pika.connection.Parameters parameters: Connection parameters :param None|method on_open_callback: Method to call on connection open :param None | method on_open_error_callback: Called if the connection can't be established or connection establishment is interrupted by `Connection.close()`: on_open_error_callback(Connection, exception). 
:param None | method on_close_callback: Called when a previously fully open connection is closed: `on_close_callback(Connection, exception)`, where `exception` is either an instance of `exceptions.ConnectionClosed` if closed by user or broker or exception of another type that describes the cause of connection failure. :param pika.adapters.utils.nbio_interface.AbstractIOServices nbio: asynchronous services :param bool internal_connection_workflow: True for autonomous connection establishment which is default; False for externally-managed connection workflow via the `create_connection()` factory. :raises: RuntimeError :raises: ValueError """ if parameters and not isinstance(parameters, connection.Parameters): raise ValueError( 'Expected instance of Parameters, not %r' % (parameters,)) self._nbio = nbio self._connection_workflow = None # type: connection_workflow.AMQPConnectionWorkflow self._transport = None # type: pika.adapters.utils.nbio_interface.AbstractStreamTransport self._got_eof = False # transport indicated EOF (connection reset) super(BaseConnection, self).__init__( parameters, on_open_callback, on_open_error_callback, on_close_callback, internal_connection_workflow=internal_connection_workflow) def _init_connection_state(self): """Initialize or reset all of our internal state variables for a given connection. If we disconnect and reconnect, all of our state needs to be wiped. """ super(BaseConnection, self)._init_connection_state() self._connection_workflow = None self._transport = None self._got_eof = False def __repr__(self): # def get_socket_repr(sock): # """Return socket info suitable for use in repr""" # if sock is None: # return None # # sockname = None # peername = None # try: # sockname = sock.getsockname() # except pika.compat.SOCKET_ERROR: # # closed? # pass # else: # try: # peername = sock.getpeername() # except pika.compat.SOCKET_ERROR: # # not connected? 
# pass # # return '%s->%s' % (sockname, peername) # TODO need helpful __repr__ in transports return ('<%s %s transport=%s params=%s>' % ( self.__class__.__name__, self._STATE_NAMES[self.connection_state], self._transport, self.params)) @classmethod @abc.abstractmethod def create_connection(cls, connection_configs, on_done, custom_ioloop=None, workflow=None): """Asynchronously create a connection to an AMQP broker using the given configurations. Will attempt to connect using each config in the given order, including all compatible resolved IP addresses of the hostname supplied in each config, until one is established or all attempts fail. See also `_start_connection_workflow()`. :param sequence connection_configs: A sequence of one or more `pika.connection.Parameters`-based objects. :param callable on_done: as defined in `connection_workflow.AbstractAMQPConnectionWorkflow.start()`. :param object | None custom_ioloop: Provide a custom I/O loop that is native to the specific adapter implementation; if None, the adapter will use a default loop instance, which is typically a singleton. :param connection_workflow.AbstractAMQPConnectionWorkflow | None workflow: Pass an instance of an implementation of the `connection_workflow.AbstractAMQPConnectionWorkflow` interface; defaults to a `connection_workflow.AMQPConnectionWorkflow` instance with default values for optional args. :returns: Connection workflow instance in use. The user should limit their interaction with this object only to it's `abort()` method. :rtype: connection_workflow.AbstractAMQPConnectionWorkflow """ raise NotImplementedError @classmethod def _start_connection_workflow(cls, connection_configs, connection_factory, nbio, workflow, on_done): """Helper function for custom implementations of `create_connection()`. :param sequence connection_configs: A sequence of one or more `pika.connection.Parameters`-based objects. 
        :param callable connection_factory: A function that takes
            `pika.connection.Parameters` as its only arg and returns a brand
            new `pika.connection.Connection`-based adapter instance each time
            it is called. The factory must instantiate the connection with
            `internal_connection_workflow=False`.
        :param pika.adapters.utils.nbio_interface.AbstractIOServices nbio:
        :param connection_workflow.AbstractAMQPConnectionWorkflow | None workflow:
            Pass an instance of an implementation of the
            `connection_workflow.AbstractAMQPConnectionWorkflow` interface;
            defaults to a `connection_workflow.AMQPConnectionWorkflow` instance
            with default values for optional args.
        :param callable on_done: as defined in
            :py:meth:`connection_workflow.AbstractAMQPConnectionWorkflow.start()`.
        :returns: Connection workflow instance in use. The user should limit
            their interaction with this object only to its `abort()` method.
        :rtype: connection_workflow.AbstractAMQPConnectionWorkflow

        """
        if workflow is None:
            workflow = connection_workflow.AMQPConnectionWorkflow()
            LOGGER.debug('Created default connection workflow %r', workflow)

        if isinstance(workflow, connection_workflow.AMQPConnectionWorkflow):
            workflow.set_io_services(nbio)

        def create_connector():
            """`AMQPConnector` factory."""
            return connection_workflow.AMQPConnector(
                lambda params: _StreamingProtocolShim(
                    connection_factory(params)),
                nbio)

        workflow.start(
            connection_configs=connection_configs,
            connector_factory=create_connector,
            native_loop=nbio.get_native_ioloop(),
            on_done=functools.partial(
                cls._unshim_connection_workflow_callback, on_done))

        return workflow

    @property
    def ioloop(self):
        """
        :returns: the native I/O loop instance underlying async services
            selected by user or the default selected by the specialized
            connection adapter (e.g., Twisted reactor,
            `asyncio.SelectorEventLoop`, `select_connection.IOLoop`, etc.)
        :rtype: object

        """
        return self._nbio.get_native_ioloop()

    def _adapter_call_later(self, delay, callback):
        """Implement
        :py:meth:`pika.connection.Connection._adapter_call_later()`.

        """
        return self._nbio.call_later(delay, callback)

    def _adapter_remove_timeout(self, timeout_id):
        """Implement
        :py:meth:`pika.connection.Connection._adapter_remove_timeout()`.

        """
        timeout_id.cancel()

    def _adapter_add_callback_threadsafe(self, callback):
        """Implement
        :py:meth:`pika.connection.Connection._adapter_add_callback_threadsafe()`.

        """
        if not callable(callback):
            raise TypeError(
                'callback must be a callable, but got %r' % (callback,))

        self._nbio.add_callback_threadsafe(callback)

    def _adapter_connect_stream(self):
        """Initiate full-stack connection establishment asynchronously for
        internally-initiated connection bring-up.

        Upon failed completion, we will invoke
        `Connection._on_stream_terminated()`. NOTE: On success, the stack will
        be up already, so there is no corresponding callback.

        """
        self._connection_workflow = connection_workflow.AMQPConnectionWorkflow(
            _until_first_amqp_attempt=True)

        self._connection_workflow.set_io_services(self._nbio)

        def create_connector():
            """`AMQPConnector` factory"""
            return connection_workflow.AMQPConnector(
                lambda _params: _StreamingProtocolShim(self),
                self._nbio)

        self._connection_workflow.start(
            [self.params],
            connector_factory=create_connector,
            native_loop=self._nbio.get_native_ioloop(),
            on_done=functools.partial(
                self._unshim_connection_workflow_callback,
                self._on_connection_workflow_done))

    @staticmethod
    def _unshim_connection_workflow_callback(user_on_done, shim_or_exc):
        """
        :param callable user_on_done: user's `on_done` callback as defined in
            :py:meth:`connection_workflow.AbstractAMQPConnectionWorkflow.start()`.
        :param _StreamingProtocolShim | Exception shim_or_exc:

        """
        result = shim_or_exc
        if isinstance(result, _StreamingProtocolShim):
            result = result.conn

        user_on_done(result)

    def _abort_connection_workflow(self):
        """Asynchronously abort connection workflow. Upon completion,
        `Connection._on_stream_terminated()` will be called with None as the
        error argument.

        Assumption: may be called only while connection is opening.

        """
        assert not self._opened, (
            '_abort_connection_workflow() may be called only when '
            'connection is opening.')

        if self._transport is None:
            # NOTE: this is possible only when user calls Connection.close() to
            # interrupt internally-initiated connection establishment.
            # self._connection_workflow.abort() would not call
            # Connection.close() before pairing of connection with transport.
            assert self._internal_connection_workflow, (
                'Unexpected _abort_connection_workflow() call with '
                'no transport in external connection workflow mode.')

            # This will result in call to _on_connection_workflow_done() upon
            # completion
            self._connection_workflow.abort()
        else:
            # NOTE: we can't use self._connection_workflow.abort() in this case,
            # because it would result in infinite recursion as we're called
            # from Connection.close() and _connection_workflow.abort() calls
            # Connection.close() to abort a connection that's already been
            # paired with a transport. During internally-initiated connection
            # establishment, AMQPConnectionWorkflow will discover that user
            # aborted the connection when it receives
            # pika.exceptions.ConnectionOpenAborted.

            # This completes asynchronously, culminating in call to our method
            # `connection_lost()`
            self._transport.abort()

    def _on_connection_workflow_done(self, conn_or_exc):
        """`AMQPConnectionWorkflow` completion callback.

        :param BaseConnection | Exception conn_or_exc: Our own connection
            instance on success; exception on failure. See
            `AbstractAMQPConnectionWorkflow.start()` for details.
        """
        LOGGER.debug('Full-stack connection workflow completed: %r',
                     conn_or_exc)

        self._connection_workflow = None

        # Notify protocol of failure
        if isinstance(conn_or_exc, Exception):
            self._transport = None

            if isinstance(conn_or_exc,
                          connection_workflow.AMQPConnectionWorkflowAborted):
                LOGGER.info('Full-stack connection workflow aborted: %r',
                            conn_or_exc)
                # So that _handle_connection_workflow_failure() will know it's
                # not a failure
                conn_or_exc = None
            else:
                LOGGER.error('Full-stack connection workflow failed: %r',
                             conn_or_exc)
                if (isinstance(conn_or_exc,
                               connection_workflow.AMQPConnectionWorkflowFailed)
                        and isinstance(
                            conn_or_exc.exceptions[-1], connection_workflow.
                            AMQPConnectorSocketConnectError)):
                    conn_or_exc = pika.exceptions.AMQPConnectionError(
                        conn_or_exc)

            self._handle_connection_workflow_failure(conn_or_exc)
        else:
            # NOTE: On success, the stack will be up already, so there is no
            # corresponding callback.
            assert conn_or_exc is self, \
                'Expected self conn={!r} from workflow, but got {!r}.'.format(
                    self, conn_or_exc)

    def _handle_connection_workflow_failure(self, error):
        """Handle failure of self-initiated stack bring-up and call
        `Connection._on_stream_terminated()` if connection is not in closed
        state yet. Called by adapter layer when the full-stack connection
        workflow fails.

        :param Exception | None error: exception instance describing the reason
            for failure or None if the connection workflow was aborted.

        """
        if error is None:
            LOGGER.info('Self-initiated stack bring-up aborted.')
        else:
            LOGGER.error('Self-initiated stack bring-up failed: %r', error)

        if not self.is_closed:
            self._on_stream_terminated(error)
        else:
            # This may happen when AMQP layer bring up was started but did not
            # complete
            LOGGER.debug('_handle_connection_workflow_failure(): '
                         'suppressing - connection already closed.')

    def _adapter_disconnect_stream(self):
        """Asynchronously bring down the streaming transport layer and invoke
        `Connection._on_stream_terminated()` asynchronously when complete.
        """
        if not self._opened:
            self._abort_connection_workflow()
        else:
            # This completes asynchronously, culminating in call to our method
            # `connection_lost()`
            self._transport.abort()

    def _adapter_emit_data(self, data):
        """Take ownership of data and send it to AMQP server as soon as
        possible.

        :param bytes data:

        """
        self._transport.write(data)

    def _proto_connection_made(self, transport):
        """Introduces transport to protocol after transport is connected.

        :py:class:`.utils.nbio_interface.AbstractStreamProtocol` implementation.

        :param nbio_interface.AbstractStreamTransport transport:
        :raises Exception: Exception-based exception on error

        """
        self._transport = transport

        # Let connection know that stream is available
        self._on_stream_connected()

    def _proto_connection_lost(self, error):
        """Called upon loss or closing of TCP connection.

        :py:class:`.utils.nbio_interface.AbstractStreamProtocol` implementation.

        NOTE: `connection_made()` and `connection_lost()` are each called just
        once and in that order. All other callbacks are called between them.

        :param BaseException | None error: An exception (check for
            `BaseException`) indicates connection failure. None indicates that
            connection was closed on this side, such as when it's aborted or
            when `AbstractStreamProtocol.eof_received()` returns a falsy result.
        :raises Exception: Exception-based exception on error

        """
        self._transport = None

        if error is None:
            # Either result of `eof_received()` or abort
            if self._got_eof:
                error = pika.exceptions.StreamLostError(
                    'Transport indicated EOF')
        else:
            error = pika.exceptions.StreamLostError(
                'Stream connection lost: {!r}'.format(error))

        LOGGER.log(logging.DEBUG if error is None else logging.ERROR,
                   'connection_lost: %r', error)

        self._on_stream_terminated(error)

    def _proto_eof_received(self):  # pylint: disable=R0201
        """Called after the remote peer shuts its write end of the connection.

        :py:class:`.utils.nbio_interface.AbstractStreamProtocol` implementation.
        :returns: A falsy value (including None) will cause the transport to
            close itself, resulting in an eventual `connection_lost()` call
            from the transport. If a truthy value is returned, it will be the
            protocol's responsibility to close/abort the transport.
        :rtype: falsy|truthy
        :raises Exception: Exception-based exception on error

        """
        LOGGER.error('Transport indicated EOF.')

        self._got_eof = True

        # This is how a reset connection will typically present itself
        # when we have nothing to send to the server over plaintext stream.
        #
        # Have transport tear down the connection and invoke our
        # `connection_lost` method
        return False

    def _proto_data_received(self, data):
        """Called to deliver incoming data from the server to the protocol.

        :py:class:`.utils.nbio_interface.AbstractStreamProtocol` implementation.

        :param data: Non-empty data bytes.
        :raises Exception: Exception-based exception on error

        """
        self._on_data_available(data)


class _StreamingProtocolShim(nbio_interface.AbstractStreamProtocol):
    """Shim for callbacks from transport so that BaseConnection can delegate
    to private methods, thus avoiding contamination of the API with methods
    that look public, but aren't.

    """

    # Override AbstractStreamProtocol abstract methods to enable instantiation
    connection_made = None
    connection_lost = None
    eof_received = None
    data_received = None

    def __init__(self, conn):
        """
        :param BaseConnection conn:
        """
        self.conn = conn
        # pylint: disable=W0212
        self.connection_made = conn._proto_connection_made
        self.connection_lost = conn._proto_connection_lost
        self.eof_received = conn._proto_eof_received
        self.data_received = conn._proto_data_received

    def __getattr__(self, attr):
        """Proxy nonexistent attribute requests to our connection instance
        so that AMQPConnectionWorkflow/AMQPConnector may treat the shim as an
        actual connection.
        """
        return getattr(self.conn, attr)

    def __repr__(self):
        return '{}: {!r}'.format(self.__class__.__name__, self.conn)

pika-1.2.0/pika/adapters/blocking_connection.py

"""The blocking connection adapter module implements blocking semantics on top
of Pika's core AMQP driver. While most of the asynchronous expectations are
removed when using the blocking connection adapter, it attempts to remain true
to the asynchronous RPC nature of the AMQP protocol, supporting server sent
RPC commands.

The user facing classes in the module consist of the
:py:class:`~pika.adapters.blocking_connection.BlockingConnection`
and the :class:`~pika.adapters.blocking_connection.BlockingChannel`
classes.

"""
# Suppress too-many-lines
# pylint: disable=C0302

# Disable "access to protected member" warnings: this wrapper implementation
# is a friend of those instances
# pylint: disable=W0212

from collections import namedtuple, deque
import contextlib
import functools
import logging
import threading

import pika.compat as compat
import pika.exceptions as exceptions
import pika.spec
import pika.validators as validators
from pika.adapters.utils import connection_workflow

# NOTE: import SelectConnection after others to avoid circular dependency
from pika.adapters import select_connection
from pika.exchange_type import ExchangeType

LOGGER = logging.getLogger(__name__)


class _CallbackResult(object):
    """ CallbackResult is a non-thread-safe implementation for receiving
    callback results; INTERNAL USE ONLY!
    """
    __slots__ = ('_value_class', '_ready', '_values')

    def __init__(self, value_class=None):
        """
        :param callable value_class: only needed if the CallbackResult instance
            will be used with `set_value_once` and `append_element`. *args and
            **kwargs of the value setter methods will be passed to this class.
        """
        self._value_class = value_class
        self._ready = None
        self._values = None
        self.reset()

    def reset(self):
        """Reset value, but not _value_class"""
        self._ready = False
        self._values = None

    def __bool__(self):
        """
        Called by python runtime to implement truth value testing and the
        built-in operation bool(); NOTE: python 3.x
        """
        return self.is_ready()

    # python 2.x version of __bool__
    __nonzero__ = __bool__

    def __enter__(self):
        """ Entry into context manager that automatically resets the object
        on exit; this usage pattern helps garbage-collection by eliminating
        potential circular references.
        """
        return self

    def __exit__(self, *args, **kwargs):
        """Reset value"""
        self.reset()

    def is_ready(self):
        """
        :returns: True if the object is in a signaled state
        :rtype: bool
        """
        return self._ready

    @property
    def ready(self):
        """True if the object is in a signaled state"""
        return self._ready

    def signal_once(self, *_args, **_kwargs):
        """ Set as ready

        :raises AssertionError: if result was already signalled
        """
        assert not self._ready, '_CallbackResult was already set'
        self._ready = True

    def set_value_once(self, *args, **kwargs):
        """ Set as ready with value; the value may be retrieved via the
        `value` property getter

        :raises AssertionError: if result was already set
        """
        self.signal_once()
        try:
            self._values = (self._value_class(*args, **kwargs),)
        except Exception:
            LOGGER.error(
                "set_value_once failed: value_class=%r; args=%r; kwargs=%r",
                self._value_class, args, kwargs)
            raise

    def append_element(self, *args, **kwargs):
        """Append an element to values"""
        assert not self._ready or isinstance(self._values, list), (
            '_CallbackResult state is incompatible with append_element: '
            'ready=%r; values=%r' % (self._ready, self._values))

        try:
            value = self._value_class(*args, **kwargs)
        except Exception:
            LOGGER.error(
                "append_element failed: value_class=%r; args=%r; kwargs=%r",
                self._value_class, args, kwargs)
            raise

        if self._values is None:
            self._values = [value]
        else:
            self._values.append(value)

        self._ready = True

    @property
    def value(self):
        """
        :returns: a reference to the value that was set via `set_value_once`
        :rtype: object
        :raises AssertionError: if result was not set or value is incompatible
            with `set_value_once`
        """
        assert self._ready, '_CallbackResult was not set'
        assert isinstance(self._values, tuple) and len(self._values) == 1, (
            '_CallbackResult value is incompatible with set_value_once: %r' %
            (self._values,))

        return self._values[0]

    @property
    def elements(self):
        """
        :returns: a reference to the list containing one or more elements that
            were added via `append_element`
        :rtype: list
        :raises AssertionError: if result was not set or value is incompatible
            with `append_element`
        """
        assert self._ready, '_CallbackResult was not set'
        assert isinstance(self._values, list) and self._values, (
            '_CallbackResult value is incompatible with append_element: %r' %
            (self._values,))

        return self._values


class _IoloopTimerContext(object):
    """Context manager for registering and safely unregistering a
    SelectConnection ioloop-based timer
    """

    def __init__(self, duration, connection):
        """
        :param float duration: non-negative timer duration in seconds
        :param select_connection.SelectConnection connection:
        """
        assert hasattr(connection, '_adapter_call_later'), connection
        self._duration = duration
        self._connection = connection
        self._callback_result = _CallbackResult()
        self._timer_handle = None

    def __enter__(self):
        """Register a timer"""
        self._timer_handle = self._connection._adapter_call_later(
            self._duration, self._callback_result.signal_once)
        return self

    def __exit__(self, *_args, **_kwargs):
        """Unregister timer if it hasn't fired yet"""
        if not self._callback_result:
            self._connection._adapter_remove_timeout(self._timer_handle)
            self._timer_handle = None

    def is_ready(self):
        """
        :returns: True if timer has fired, False otherwise
        :rtype: bool
        """
        return self._callback_result.is_ready()


class _TimerEvt(object):
    """Represents a timer created via `BlockingConnection.call_later`"""
    __slots__ = ('timer_id', '_callback')

    def __init__(self, callback):
        """
        :param callback: see callback in `BlockingConnection.call_later`
        """
        self._callback = callback

        # Will be set to timer id returned from the underlying implementation's
        # `_adapter_call_later` method
        self.timer_id = None

    def __repr__(self):
        return '<%s timer_id=%s callback=%s>' % (self.__class__.__name__,
                                                 self.timer_id, self._callback)

    def dispatch(self):
        """Dispatch the user's callback method"""
        LOGGER.debug('_TimerEvt.dispatch: invoking callback=%r', self._callback)
        self._callback()


class _ConnectionBlockedUnblockedEvtBase(object):
    """Base class for `_ConnectionBlockedEvt` and `_ConnectionUnblockedEvt`"""
    __slots__ = ('_callback', '_method_frame')

    def __init__(self, callback, method_frame):
        """
        :param callback: see callback parameter in
            `BlockingConnection.add_on_connection_blocked_callback` and
            `BlockingConnection.add_on_connection_unblocked_callback`
        :param pika.frame.Method method_frame: with method_frame.method of type
            `pika.spec.Connection.Blocked` or `pika.spec.Connection.Unblocked`
        """
        self._callback = callback
        self._method_frame = method_frame

    def __repr__(self):
        return '<%s callback=%s, frame=%s>' % (
            self.__class__.__name__, self._callback, self._method_frame)

    def dispatch(self):
        """Dispatch the user's callback method"""
        self._callback(self._method_frame)


class _ConnectionBlockedEvt(_ConnectionBlockedUnblockedEvtBase):
    """Represents a Connection.Blocked notification from RabbitMQ broker"""


class _ConnectionUnblockedEvt(_ConnectionBlockedUnblockedEvtBase):
    """Represents a Connection.Unblocked notification from RabbitMQ broker"""


class BlockingConnection(object):
    """The BlockingConnection creates a layer on top of Pika's asynchronous
    core providing methods that will block until their expected response has
    returned.
    Due to the asynchronous nature of the `Basic.Deliver` and `Basic.Return`
    calls from RabbitMQ to your application, you can still implement
    continuation-passing style asynchronous methods if you'd like to receive
    messages from RabbitMQ using
    :meth:`basic_consume <BlockingChannel.basic_consume>` or if you want to be
    notified of a delivery failure when using
    :meth:`basic_publish <BlockingChannel.basic_publish>`.

    For more information about communicating with the blocking_connection
    adapter, be sure to check out the
    :class:`BlockingChannel <BlockingChannel>` class which implements the
    :class:`Channel <pika.channel.Channel>` based communication for the
    blocking_connection adapter.

    To prevent recursion/reentrancy, the blocking connection and channel
    implementations queue asynchronously-delivered events received
    in nested context (e.g., while waiting for `BlockingConnection.channel` or
    `BlockingChannel.queue_declare` to complete), dispatching them synchronously
    once nesting returns to the desired context. This concerns all callbacks,
    such as those registered via `BlockingConnection.call_later`,
    `BlockingConnection.add_on_connection_blocked_callback`,
    `BlockingConnection.add_on_connection_unblocked_callback`,
    `BlockingChannel.basic_consume`, etc.

    Blocked Connection deadlock avoidance: when RabbitMQ becomes low on
    resources, it emits Connection.Blocked (AMQP extension) to the client
    connection when client makes a resource-consuming request on that
    connection or its channel (e.g., `Basic.Publish`); subsequently, RabbitMQ
    suspends processing requests from that connection until the affected
    resources are restored. See
    http://www.rabbitmq.com/connection-blocked.html. This may impact
    `BlockingConnection` and `BlockingChannel` operations in a way that users
    might not be expecting.
    For example, if the user dispatches `BlockingChannel.basic_publish` in
    non-publisher-confirmation mode while RabbitMQ is in this low-resource
    state followed by a synchronous request (e.g.,
    `BlockingConnection.channel`, `BlockingChannel.consume`,
    `BlockingChannel.basic_consume`, etc.), the synchronous request will block
    indefinitely (until Connection.Unblocked) waiting for RabbitMQ to reply.
    If the blocked state persists for a long time, the blocking operation will
    appear to hang. In this state, the `BlockingConnection` instance and its
    channels will not dispatch user callbacks.

    SOLUTION: To break this potential deadlock, applications may configure the
    `blocked_connection_timeout` connection parameter when instantiating
    `BlockingConnection`. Upon blocked connection timeout, this adapter will
    raise a ConnectionBlockedTimeout exception. See
    `pika.connection.ConnectionParameters` documentation to learn more about
    the `blocked_connection_timeout` configuration.

    """
    # Connection-closing callback args
    _OnClosedArgs = namedtuple('BlockingConnection__OnClosedArgs',
                               'connection error')

    # Channel-opened callback args
    _OnChannelOpenedArgs = namedtuple('BlockingConnection__OnChannelOpenedArgs',
                                      'channel')

    def __init__(self, parameters=None, _impl_class=None):
        """Create a new instance of the Connection object.

        :param None | pika.connection.Parameters | sequence parameters:
            Connection parameters instance or non-empty sequence of them. If
            None, a `pika.connection.Parameters` instance will be created with
            default settings. See `pika.AMQPConnectionWorkflow` for more
            details about multiple parameter configurations and retries.
        :param _impl_class: for tests/debugging only; implementation class;
            None=default

        :raises RuntimeError:

        """
        # Used for mutual exclusion to avoid race condition between
        # BlockingConnection._cleanup() and another thread calling
        # BlockingConnection.add_callback_threadsafe() against a closed
        # ioloop.
        self._cleanup_mutex = threading.Lock()

        # Used by the _acquire_event_dispatch decorator; when already greater
        # than 0, event dispatch is already acquired higher up the call stack
        self._event_dispatch_suspend_depth = 0

        # Connection-specific events that are ready for dispatch: _TimerEvt,
        # _ConnectionBlockedEvt, _ConnectionUnblockedEvt
        self._ready_events = deque()

        # Channel numbers of channels that are requesting a call to their
        # BlockingChannel._dispatch_events method; See
        # `_request_channel_dispatch`
        self._channels_pending_dispatch = set()

        # Receives on_close_callback args from Connection
        self._closed_result = _CallbackResult(self._OnClosedArgs)

        # Perform connection workflow
        self._impl = None  # so that attribute is created in case below raises
        self._impl = self._create_connection(parameters, _impl_class)
        self._impl.add_on_close_callback(self._closed_result.set_value_once)

    def __repr__(self):
        return '<%s impl=%r>' % (self.__class__.__name__, self._impl)

    def __enter__(self):
        # Prepare `with` context
        return self

    def __exit__(self, exc_type, value, traceback):
        # Close connection after `with` context
        if self.is_open:
            self.close()

    def _cleanup(self):
        """Clean up members that might inhibit garbage collection"""
        with self._cleanup_mutex:
            if self._impl is not None:
                self._impl.ioloop.close()

            self._ready_events.clear()
            self._closed_result.reset()

    @contextlib.contextmanager
    def _acquire_event_dispatch(self):
        """ Context manager that controls access to event dispatcher for
        preventing reentrancy.

        The "as" value is True if the managed code block owns the event
        dispatcher and False if caller higher up in the call stack already
        owns it.
        Only managed code that gets ownership (got True) is permitted to
        dispatch
        """
        try:
            # __enter__ part
            self._event_dispatch_suspend_depth += 1
            yield self._event_dispatch_suspend_depth == 1
        finally:
            # __exit__ part
            self._event_dispatch_suspend_depth -= 1

    def _create_connection(self, configs, impl_class):
        """Run connection workflow, blocking until it completes.

        :param None | pika.connection.Parameters | sequence configs: Connection
            parameters instance or non-empty sequence of them.
        :param None | SelectConnection impl_class: for tests/debugging only;
            implementation class;

        :rtype: impl_class

        :raises: exception on failure
        """
        if configs is None:
            configs = (pika.connection.Parameters(),)

        if isinstance(configs, pika.connection.Parameters):
            configs = (configs,)

        if not configs:
            raise ValueError('Expected a non-empty sequence of connection '
                             'parameters, but got {!r}.'.format(configs))

        # Connection workflow completion args
        # `result` may be an instance of connection on success or exception on
        # failure.
        on_cw_done_result = _CallbackResult(
            namedtuple('BlockingConnection_OnConnectionWorkflowDoneArgs',
                       'result'))

        impl_class = impl_class or select_connection.SelectConnection

        ioloop = select_connection.IOLoop()
        ioloop.activate_poller()
        try:
            impl_class.create_connection(
                configs,
                on_done=on_cw_done_result.set_value_once,
                custom_ioloop=ioloop)

            while not on_cw_done_result.ready:
                ioloop.poll()
                ioloop.process_timeouts()

            if isinstance(on_cw_done_result.value.result, BaseException):
                error = on_cw_done_result.value.result
                LOGGER.error('Connection workflow failed: %r', error)
                raise self._reap_last_connection_workflow_error(error)
            else:
                LOGGER.info('Connection workflow succeeded: %r',
                            on_cw_done_result.value.result)
                return on_cw_done_result.value.result
        except Exception:
            LOGGER.exception('Error in _create_connection().')
            ioloop.close()
            self._cleanup()
            raise

    @staticmethod
    def _reap_last_connection_workflow_error(error):
        """Extract exception value from the last connection attempt

        :param Exception error: error passed by the `AMQPConnectionWorkflow`
            completion callback.

        :returns: Exception value from the last connection attempt
        :rtype: Exception
        """
        if isinstance(error, connection_workflow.AMQPConnectionWorkflowFailed):
            # Extract exception value from the last connection attempt
            error = error.exceptions[-1]
            if isinstance(error,
                          connection_workflow.AMQPConnectorSocketConnectError):
                error = exceptions.AMQPConnectionError(error)
            elif isinstance(error,
                            connection_workflow.AMQPConnectorPhaseErrorBase):
                error = error.exception

        return error

    def _flush_output(self, *waiters):
        """ Flush output and process input while waiting for any of the given
        callbacks to return true. The wait is aborted upon connection-close.
        Otherwise, processing continues until the output is flushed AND at
        least one of the callbacks returns true. If there are no callbacks,
        then processing ends when all output is flushed.
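The completion condition described above (abort on close; otherwise stop only once output is flushed and, if waiters were given, at least one is ready) can be modeled in isolation. This is a standalone sketch: `state` and `make_is_done` are hypothetical stand-ins for the connection internals consulted by the real `_flush_output()`, not pika APIs.

```python
# Standalone model of _flush_output()'s termination predicate. `state` is a
# hypothetical stand-in for the connection's closed flag and outbound buffer.
def make_is_done(state, waiters):
    return lambda: (state['closed'] or
                    (state['write_buffer_size'] == 0 and
                     (not waiters or any(ready() for ready in waiters))))

state = {'closed': False, 'write_buffer_size': 3}
done = make_is_done(state, [lambda: False])
assert not done()           # output pending, waiter not ready: keep looping
state['write_buffer_size'] = 0
assert not done()           # flushed, but no waiter is ready yet
assert make_is_done(state, [])()   # flushed and no waiters: stop processing
state['closed'] = True
assert done()               # connection close aborts the wait unconditionally
```

The real method evaluates such a predicate between `ioloop.poll()` and `process_timeouts()` iterations, which is why a slow waiter simply means more I/O cycles rather than a busy spin.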
        :param waiters: sequence of zero or more callables taking no args and
            returning true when it's time to stop processing. Their results
            are OR'ed together.
        :raises: exceptions passed by impl if opening of connection fails or
            connection closes.
        """
        if self.is_closed:
            raise exceptions.ConnectionWrongStateError()

        # Conditions for terminating the processing loop:
        #   connection closed
        #         OR
        #   empty outbound buffer and no waiters
        #         OR
        #   empty outbound buffer and any waiter is ready
        is_done = (lambda:
                   self._closed_result.ready or
                   ((not self._impl._transport or
                     self._impl._get_write_buffer_size() == 0) and
                    (not waiters or any(ready() for ready in waiters))))

        # Process I/O until our completion condition is satisfied
        while not is_done():
            self._impl.ioloop.poll()
            self._impl.ioloop.process_timeouts()

        if self._closed_result.ready:
            try:
                if not isinstance(self._closed_result.value.error,
                                  exceptions.ConnectionClosedByClient):
                    LOGGER.error('Unexpected connection close detected: %r',
                                 self._closed_result.value.error)
                    raise self._closed_result.value.error
                else:
                    LOGGER.info('User-initiated close: result=%r',
                                self._closed_result.value)
            finally:
                self._cleanup()

    def _request_channel_dispatch(self, channel_number):
        """Called by BlockingChannel instances to request a call to their
        _dispatch_events method or to terminate `process_data_events`;
        BlockingConnection will honor these requests from a safe context.
        :param int channel_number: positive channel number to request a call
            to the channel's `_dispatch_events`; a negative channel number to
            request termination of `process_data_events`
        """
        self._channels_pending_dispatch.add(channel_number)

    def _dispatch_channel_events(self):
        """Invoke the `_dispatch_events` method on open channels that requested
        it
        """
        if not self._channels_pending_dispatch:
            return

        with self._acquire_event_dispatch() as dispatch_acquired:
            if not dispatch_acquired:
                # Nested dispatch or dispatch blocked higher in call stack
                return

            candidates = list(self._channels_pending_dispatch)
            self._channels_pending_dispatch.clear()

            for channel_number in candidates:
                if channel_number < 0:
                    # This was meant to terminate process_data_events
                    continue

                try:
                    impl_channel = self._impl._channels[channel_number]
                except KeyError:
                    continue

                if impl_channel.is_open:
                    impl_channel._get_cookie()._dispatch_events()

    def _on_timer_ready(self, evt):
        """Handle expiry of a timer that was registered via
        `_adapter_call_later()`

        :param _TimerEvt evt:

        """
        self._ready_events.append(evt)

    def _on_threadsafe_callback(self, user_callback):
        """Handle callback that was registered via
        `self._impl._adapter_add_callback_threadsafe`.

        :param user_callback: callback passed to our `add_callback_threadsafe`
            by the application.
        """
        # Turn it into a 0-delay timeout to take advantage of our existing logic
        # that deals with reentrancy
        self.call_later(0, user_callback)

    def _on_connection_blocked(self, user_callback, _impl, method_frame):
        """Handle Connection.Blocked notification from RabbitMQ broker

        :param callable user_callback: callback passed to
            `add_on_connection_blocked_callback`
        :param select_connection.SelectConnection _impl:
        :param pika.frame.Method method_frame: method frame having `method`
            member of type `pika.spec.Connection.Blocked`
        """
        self._ready_events.append(
            _ConnectionBlockedEvt(user_callback, method_frame))

    def _on_connection_unblocked(self, user_callback, _impl, method_frame):
        """Handle Connection.Unblocked notification from RabbitMQ broker

        :param callable user_callback: callback passed to
            `add_on_connection_unblocked_callback`
        :param select_connection.SelectConnection _impl:
        :param pika.frame.Method method_frame: method frame having `method`
            member of type `pika.spec.Connection.Unblocked`
        """
        self._ready_events.append(
            _ConnectionUnblockedEvt(user_callback, method_frame))

    def _dispatch_connection_events(self):
        """Dispatch ready connection events"""
        if not self._ready_events:
            return

        with self._acquire_event_dispatch() as dispatch_acquired:
            if not dispatch_acquired:
                # Nested dispatch or dispatch blocked higher in call stack
                return

            # Limit dispatch to the number of currently ready events to avoid
            # getting stuck in this loop
            for _ in compat.xrange(len(self._ready_events)):
                try:
                    evt = self._ready_events.popleft()
                except IndexError:
                    # Some events (e.g., timers) must have been cancelled
                    break

                evt.dispatch()

    def add_on_connection_blocked_callback(self, callback):
        """RabbitMQ AMQP extension - Add a callback to be notified when the
        connection gets blocked (`Connection.Blocked` received from RabbitMQ)
        due to the broker running low on resources (memory or disk).
        In this state RabbitMQ suspends processing incoming data until the
        connection is unblocked, so it's a good idea for publishers receiving
        this notification to suspend publishing until the connection becomes
        unblocked.

        NOTE: due to the blocking nature of BlockingConnection, if it's sending
        outbound data while the connection is/becomes blocked, the call may
        remain blocked until the connection becomes unblocked, if ever. You
        may use `ConnectionParameters.blocked_connection_timeout` to abort a
        BlockingConnection method call with an exception when the connection
        remains blocked longer than the given timeout value.

        See also `Connection.add_on_connection_unblocked_callback()`

        See also `ConnectionParameters.blocked_connection_timeout`.

        :param callable callback: Callback to call on `Connection.Blocked`,
            having the signature `callback(connection, pika.frame.Method)`,
            where connection is the `BlockingConnection` instance and the
            method frame's `method` member is of type
            `pika.spec.Connection.Blocked`
        """
        self._impl.add_on_connection_blocked_callback(
            functools.partial(self._on_connection_blocked,
                              functools.partial(callback, self)))

    def add_on_connection_unblocked_callback(self, callback):
        """RabbitMQ AMQP extension - Add a callback to be notified when the
        connection gets unblocked (`Connection.Unblocked` frame is received
        from RabbitMQ) letting publishers know it's ok to start publishing
        again.

        :param callable callback: Callback to call on `Connection.Unblocked`,
            having the signature `callback(connection, pika.frame.Method)`,
            where connection is the `BlockingConnection` instance and the
            method frame's `method` member is of type
            `pika.spec.Connection.Unblocked`
        """
        self._impl.add_on_connection_unblocked_callback(
            functools.partial(self._on_connection_unblocked,
                              functools.partial(callback, self)))

    def call_later(self, delay, callback):
        """Create a single-shot timer to fire after delay seconds.
Do not confuse with Tornado's timeout where you pass in the time you want to have your callback called. Only pass in the seconds until it's to be called. NOTE: the timer callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events()` and `BlockingChannel.start_consuming()`. :param float delay: The number of seconds to wait to call callback :param callable callback: The callback method with the signature callback() :returns: Opaque timer id :rtype: int """ validators.require_callback(callback) evt = _TimerEvt(callback=callback) timer_id = self._impl._adapter_call_later( delay, functools.partial(self._on_timer_ready, evt)) evt.timer_id = timer_id return timer_id def add_callback_threadsafe(self, callback): """Requests a call to the given function as soon as possible in the context of this connection's thread. NOTE: This is the only thread-safe method in `BlockingConnection`. All other manipulations of `BlockingConnection` must be performed from the connection's thread. NOTE: the callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events()` and `BlockingChannel.start_consuming()`. For example, a thread may request a call to the `BlockingChannel.basic_ack` method of a `BlockingConnection` that is running in a different thread via:: connection.add_callback_threadsafe( functools.partial(channel.basic_ack, delivery_tag=...)) NOTE: if you know that the requester is running on the same thread as the connection it is more efficient to use the `BlockingConnection.call_later()` method with a delay of 0. 
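        A fuller sketch of the worker-thread pattern described above; the
        names `do_work` and `on_message` are illustrative, not part of Pika's
        API:

```python
import functools
import threading

def do_work(connection, channel, delivery_tag, body):
    # ... long-running work happens here, off the connection's thread ...
    # Channel methods are not thread-safe, so request the ack back on the
    # connection's thread rather than calling basic_ack directly:
    connection.add_callback_threadsafe(
        functools.partial(channel.basic_ack, delivery_tag=delivery_tag))

def on_message(connection, channel, method, properties, body):
    # Hand the message to a worker thread so the consumer loop stays
    # responsive to heartbeats and other I/O.
    threading.Thread(
        target=do_work,
        args=(connection, channel, method.delivery_tag, body)).start()
```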
:param callable callback: The callback method; must be callable :raises pika.exceptions.ConnectionWrongStateError: if connection is closed """ with self._cleanup_mutex: # NOTE: keep in mind that we may be called from another thread and # this mutex only synchronizes us with our connection cleanup logic, # so a simple check for "is_closed" is pretty much all we're allowed # to do here besides calling the only thread-safe method # _adapter_add_callback_threadsafe(). if self.is_closed: raise exceptions.ConnectionWrongStateError( 'BlockingConnection.add_callback_threadsafe() called on ' 'closed or closing connection.') self._impl._adapter_add_callback_threadsafe( functools.partial(self._on_threadsafe_callback, callback)) def remove_timeout(self, timeout_id): """Remove a timer if it's still in the timeout stack :param timeout_id: The opaque timer id to remove """ # Remove from the impl's timeout stack self._impl._adapter_remove_timeout(timeout_id) # Remove from ready events, if the timer fired already for i, evt in enumerate(self._ready_events): if isinstance(evt, _TimerEvt) and evt.timer_id == timeout_id: index_to_remove = i break else: # Not found return del self._ready_events[index_to_remove] def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. 
:param int reply_code: The code number for the close :param str reply_text: The text reason for the close :raises pika.exceptions.ConnectionWrongStateError: if called on a closed connection (NEW in v1.0.0) """ if not self.is_open: msg = '{}.close({}, {!r}) called on closed connection.'.format( self.__class__.__name__, reply_code, reply_text) LOGGER.error(msg) raise exceptions.ConnectionWrongStateError(msg) LOGGER.info('Closing connection (%s): %s', reply_code, reply_text) # Close channels that remain opened for impl_channel in compat.dictvalues(self._impl._channels): channel = impl_channel._get_cookie() if channel.is_open: try: channel.close(reply_code, reply_text) except exceptions.ChannelClosed as exc: # Log and suppress broker-closed channel LOGGER.warning( 'Got ChannelClosed while closing channel ' 'from connection.close: %r', exc) # Close the connection self._impl.close(reply_code, reply_text) self._flush_output(self._closed_result.is_ready) def process_data_events(self, time_limit=0): """Will make sure that data events are processed. Dispatches timer and channel callbacks if not called from the scope of BlockingConnection or BlockingChannel callback. Your app can block on this method. :param float time_limit: suggested upper bound on processing time in seconds. The actual blocking time depends on the granularity of the underlying ioloop. Zero means return as soon as possible. None means there is no limit on processing time and the function will block until I/O produces actionable events. Defaults to 0 for backward compatibility. This parameter is NEW in pika 0.10.0. 
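        For example, a long-lived publisher might interleave its own work with
        servicing of connection I/O; `should_stop` and `do_work` here are
        illustrative application-supplied callables, not part of Pika's API:

```python
def service_loop(connection, should_stop, do_work):
    # Alternate application work with servicing of connection I/O so that
    # heartbeats and pending callbacks are not starved.
    while not should_stop():
        do_work()
        # Dispatch timers/callbacks and service I/O; returns after at most
        # roughly one second even if there is nothing to do.
        connection.process_data_events(time_limit=1)
```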
""" with self._acquire_event_dispatch() as dispatch_acquired: # Check if we can actually process pending events common_terminator = lambda: bool(dispatch_acquired and (self._channels_pending_dispatch or self._ready_events)) if time_limit is None: self._flush_output(common_terminator) else: with _IoloopTimerContext(time_limit, self._impl) as timer: self._flush_output(timer.is_ready, common_terminator) if self._ready_events: self._dispatch_connection_events() if self._channels_pending_dispatch: self._dispatch_channel_events() def sleep(self, duration): """A safer way to sleep than calling time.sleep() directly that would keep the adapter from ignoring frames sent from the broker. The connection will "sleep" or block the number of seconds specified in duration in small intervals. :param float duration: The time to sleep in seconds """ assert duration >= 0, duration deadline = compat.time_now() + duration time_limit = duration # Process events at least once while True: self.process_data_events(time_limit) time_limit = deadline - compat.time_now() if time_limit <= 0: break def channel(self, channel_number=None): """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. :rtype: pika.adapters.blocking_connection.BlockingChannel """ with _CallbackResult(self._OnChannelOpenedArgs) as opened_args: impl_channel = self._impl.channel( channel_number=channel_number, on_open_callback=opened_args.set_value_once) # Create our proxy channel channel = BlockingChannel(impl_channel, self) # Link implementation channel with our proxy channel impl_channel._set_cookie(channel) # Drive I/O until Channel.Open-ok channel._flush_output(opened_args.is_ready) return channel # # Connections state properties # @property def is_closed(self): """ Returns a boolean reporting the current connection state. 
""" return self._impl.is_closed @property def is_open(self): """ Returns a boolean reporting the current connection state. """ return self._impl.is_open # # Properties that reflect server capabilities for the current connection # @property def basic_nack_supported(self): """Specifies if the server supports basic.nack on the active connection. :rtype: bool """ return self._impl.basic_nack @property def consumer_cancel_notify_supported(self): """Specifies if the server supports consumer cancel notification on the active connection. :rtype: bool """ return self._impl.consumer_cancel_notify @property def exchange_exchange_bindings_supported(self): """Specifies if the active connection supports exchange to exchange bindings. :rtype: bool """ return self._impl.exchange_exchange_bindings @property def publisher_confirms_supported(self): """Specifies if the active connection can use publisher confirmations. :rtype: bool """ return self._impl.publisher_confirms # Legacy property names for backward compatibility basic_nack = basic_nack_supported consumer_cancel_notify = consumer_cancel_notify_supported exchange_exchange_bindings = exchange_exchange_bindings_supported publisher_confirms = publisher_confirms_supported class _ChannelPendingEvt(object): """Base class for BlockingChannel pending events""" class _ConsumerDeliveryEvt(_ChannelPendingEvt): """This event represents consumer message delivery `Basic.Deliver`; it contains method, properties, and body of the delivered message. 
""" __slots__ = ('method', 'properties', 'body') def __init__(self, method, properties, body): """ :param spec.Basic.Deliver method: NOTE: consumer_tag and delivery_tag are valid only within source channel :param spec.BasicProperties properties: message properties :param bytes body: message body; empty string if no body """ self.method = method self.properties = properties self.body = body class _ConsumerCancellationEvt(_ChannelPendingEvt): """This event represents server-initiated consumer cancellation delivered to client via Basic.Cancel. After receiving Basic.Cancel, there will be no further deliveries for the consumer identified by `consumer_tag` in `Basic.Cancel` """ __slots__ = ('method_frame',) def __init__(self, method_frame): """ :param pika.frame.Method method_frame: method frame with method of type `spec.Basic.Cancel` """ self.method_frame = method_frame def __repr__(self): return '<%s method_frame=%r>' % (self.__class__.__name__, self.method_frame) @property def method(self): """method of type spec.Basic.Cancel""" return self.method_frame.method class _ReturnedMessageEvt(_ChannelPendingEvt): """This event represents a message returned by broker via `Basic.Return`""" __slots__ = ('callback', 'channel', 'method', 'properties', 'body') def __init__(self, callback, channel, method, properties, body): """ :param callable callback: user's callback, having the signature callback(channel, method, properties, body), where - channel: pika.Channel - method: pika.spec.Basic.Return - properties: pika.spec.BasicProperties - body: bytes :param pika.Channel channel: :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: :param bytes body: """ self.callback = callback self.channel = channel self.method = method self.properties = properties self.body = body def __repr__(self): return ('<%s callback=%r channel=%r method=%r properties=%r ' 'body=%.300r>') % (self.__class__.__name__, self.callback, self.channel, self.method, self.properties, 
                                   self.body)

    def dispatch(self):
        """Dispatch user's callback"""
        self.callback(self.channel, self.method, self.properties, self.body)


class ReturnedMessage(object):
    """Represents a message returned via Basic.Return in
    publisher-acknowledgments mode
    """

    __slots__ = ('method', 'properties', 'body')

    def __init__(self, method, properties, body):
        """
        :param spec.Basic.Return method:
        :param spec.BasicProperties properties: message properties
        :param bytes body: message body; empty string if no body
        """
        self.method = method
        self.properties = properties
        self.body = body


class _ConsumerInfo(object):
    """Information about an active consumer"""

    __slots__ = ('consumer_tag', 'auto_ack', 'on_message_callback',
                 'alternate_event_sink', 'state')

    # Consumer states
    SETTING_UP = 1
    ACTIVE = 2
    TEARING_DOWN = 3
    CANCELLED_BY_BROKER = 4

    def __init__(self, consumer_tag, auto_ack,
                 on_message_callback=None, alternate_event_sink=None):
        """
        NOTE: exactly one of on_message_callback/alternate_event_sink must be
        non-None.

        :param str consumer_tag:
        :param bool auto_ack: the no-ack value for the consumer
        :param callable on_message_callback: The function for dispatching
            messages to user, having the signature:
            on_message_callback(channel, method, properties, body)
            - channel: BlockingChannel
            - method: spec.Basic.Deliver
            - properties: spec.BasicProperties
            - body: bytes
        :param callable alternate_event_sink: if specified, _ConsumerDeliveryEvt
            and _ConsumerCancellationEvt objects will be diverted to this
            callback instead of being deposited in the channel's
            `_pending_events` container.
            Signature: alternate_event_sink(evt)
        """
        assert (on_message_callback is None) != (
            alternate_event_sink is None
        ), ('exactly one of on_message_callback/alternate_event_sink must be non-None',
            on_message_callback, alternate_event_sink)
        self.consumer_tag = consumer_tag
        self.auto_ack = auto_ack
        self.on_message_callback = on_message_callback
        self.alternate_event_sink = alternate_event_sink
        self.state = self.SETTING_UP

    @property
    def setting_up(self):
        """True if in SETTING_UP state"""
        return self.state == self.SETTING_UP

    @property
    def active(self):
        """True if in ACTIVE state"""
        return self.state == self.ACTIVE

    @property
    def tearing_down(self):
        """True if in TEARING_DOWN state"""
        return self.state == self.TEARING_DOWN

    @property
    def cancelled_by_broker(self):
        """True if in CANCELLED_BY_BROKER state"""
        return self.state == self.CANCELLED_BY_BROKER


class _QueueConsumerGeneratorInfo(object):
    """Container for information about the active queue consumer generator """
    __slots__ = ('params', 'consumer_tag', 'pending_events')

    def __init__(self, params, consumer_tag):
        """
        :param tuple params: a three-tuple (queue, auto_ack, exclusive) that
           was used to create the queue consumer
        :param str consumer_tag: consumer tag
        """
        self.params = params
        self.consumer_tag = consumer_tag
        #self.messages = deque()

        # Holds pending events of types _ConsumerDeliveryEvt and
        # _ConsumerCancellationEvt
        self.pending_events = deque()

    def __repr__(self):
        return '<%s params=%r consumer_tag=%r>' % (
            self.__class__.__name__, self.params, self.consumer_tag)


class BlockingChannel(object):
    """The BlockingChannel implements blocking semantics for most things that
    one would use callback-passing-style for with the
    :py:class:`~pika.channel.Channel` class. In addition,
    the `BlockingChannel` class implements a :term:`generator` that allows
    you to :doc:`consume messages ` without using callbacks.
Example of creating a BlockingChannel:: import pika # Create our connection object connection = pika.BlockingConnection() # The returned object will be a synchronous channel channel = connection.channel() """ # Used as value_class with _CallbackResult for receiving Basic.GetOk args _RxMessageArgs = namedtuple( 'BlockingChannel__RxMessageArgs', [ 'channel', # implementation pika.Channel instance 'method', # Basic.GetOk 'properties', # pika.spec.BasicProperties 'body' # str, unicode, or bytes (python 3.x) ]) # For use as value_class with any _CallbackResult that expects method_frame # as the only arg _MethodFrameCallbackResultArgs = namedtuple( 'BlockingChannel__MethodFrameCallbackResultArgs', 'method_frame') # Broker's basic-ack/basic-nack args when delivery confirmation is enabled; # may concern a single or multiple messages _OnMessageConfirmationReportArgs = namedtuple( 'BlockingChannel__OnMessageConfirmationReportArgs', 'method_frame') # For use as value_class with _CallbackResult expecting Channel.Flow # confirmation. 
    _FlowOkCallbackResultArgs = namedtuple(
        'BlockingChannel__FlowOkCallbackResultArgs',
        'active'  # True if broker will start or continue sending; False if not
    )

    _CONSUMER_CANCELLED_CB_KEY = 'blocking_channel_consumer_cancelled'

    def __init__(self, channel_impl, connection):
        """Create a new instance of the Channel

        :param pika.channel.Channel channel_impl: Channel implementation object
            as returned from SelectConnection.channel()
        :param BlockingConnection connection: The connection object

        """
        self._impl = channel_impl
        self._connection = connection

        # A mapping of consumer tags to _ConsumerInfo for active consumers
        self._consumer_infos = dict()

        # Queue consumer generator info of type _QueueConsumerGeneratorInfo
        # created by BlockingChannel.consume
        self._queue_consumer_generator = None

        # Whether RabbitMQ delivery confirmation has been enabled
        self._delivery_confirmation = False

        # Receives message delivery confirmation report (Basic.ack or
        # Basic.nack) from broker when delivery confirmations are enabled
        self._message_confirmation_result = _CallbackResult(
            self._OnMessageConfirmationReportArgs)

        # deque of pending events: _ConsumerDeliveryEvt and
        # _ConsumerCancellationEvt objects that will be returned by
        # `BlockingChannel.get_event()`
        self._pending_events = deque()

        # Holds a ReturnedMessage object representing a message received via
        # Basic.Return in publisher-acknowledgments mode.
self._puback_return = None # self._on_channel_closed() saves the reason exception here self._closing_reason = None # type: None | Exception # Receives Basic.ConsumeOk reply from server self._basic_consume_ok_result = _CallbackResult() # Receives args from Basic.GetEmpty response # http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get self._basic_getempty_result = _CallbackResult( self._MethodFrameCallbackResultArgs) self._impl.add_on_cancel_callback(self._on_consumer_cancelled_by_broker) self._impl.add_callback( self._basic_consume_ok_result.signal_once, replies=[pika.spec.Basic.ConsumeOk], one_shot=False) self._impl.add_on_close_callback(self._on_channel_closed) self._impl.add_callback( self._basic_getempty_result.set_value_once, replies=[pika.spec.Basic.GetEmpty], one_shot=False) LOGGER.info("Created channel=%s", self.channel_number) def __int__(self): """Return the channel object as its channel number NOTE: inherited from legacy BlockingConnection; might be error-prone; use `channel_number` property instead. :rtype: int """ return self.channel_number def __repr__(self): return '<%s impl=%r>' % (self.__class__.__name__, self._impl) def __enter__(self): return self def __exit__(self, exc_type, value, traceback): if self.is_open: self.close() def _cleanup(self): """Clean up members that might inhibit garbage collection""" self._message_confirmation_result.reset() self._pending_events = deque() self._consumer_infos = dict() self._queue_consumer_generator = None @property def channel_number(self): """Channel number""" return self._impl.channel_number @property def connection(self): """The channel's BlockingConnection instance""" return self._connection @property def is_closed(self): """Returns True if the channel is closed. :rtype: bool """ return self._impl.is_closed @property def is_open(self): """Returns True if the channel is open. 
:rtype: bool """ return self._impl.is_open @property def consumer_tags(self): """Property method that returns a list of consumer tags for active consumers :rtype: list """ return compat.dictkeys(self._consumer_infos) _ALWAYS_READY_WAITERS = ((lambda: True),) def _flush_output(self, *waiters): """ Flush output and process input while waiting for any of the given callbacks to return true. The wait is aborted upon channel-close or connection-close. Otherwise, processing continues until the output is flushed AND at least one of the callbacks returns true. If there are no callbacks, then processing ends when all output is flushed. :param waiters: sequence of zero or more callables taking no args and returning true when it's time to stop processing. Their results are OR'ed together. An empty sequence is treated as equivalent to a waiter always returning true. """ if self.is_closed: self._impl._raise_if_not_open() if not waiters: waiters = self._ALWAYS_READY_WAITERS self._connection._flush_output(lambda: self.is_closed, *waiters) if self.is_closed and isinstance(self._closing_reason, exceptions.ChannelClosedByBroker): raise self._closing_reason # pylint: disable=E0702 def _on_puback_message_returned(self, channel, method, properties, body): """Called as the result of Basic.Return from broker in publisher-acknowledgements mode. Saves the info as a ReturnedMessage instance in self._puback_return. 
:param pika.Channel channel: our self._impl channel :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: message properties :param bytes body: returned message body; empty string if no body """ assert channel is self._impl, (channel.channel_number, self.channel_number) assert isinstance(method, pika.spec.Basic.Return), method assert isinstance(properties, pika.spec.BasicProperties), (properties) LOGGER.warning( "Published message was returned: _delivery_confirmation=%s; " "channel=%s; method=%r; properties=%r; body_size=%d; " "body_prefix=%.255r", self._delivery_confirmation, channel.channel_number, method, properties, len(body) if body is not None else None, body) self._puback_return = ReturnedMessage(method, properties, body) def _add_pending_event(self, evt): """Append an event to the channel's list of events that are ready for dispatch to user and signal our connection that this channel is ready for event dispatch :param _ChannelPendingEvt evt: an event derived from _ChannelPendingEvt """ self._pending_events.append(evt) self.connection._request_channel_dispatch(self.channel_number) def _on_channel_closed(self, _channel, reason): """Callback from impl notifying us that the channel has been closed. This may be as the result of user-, broker-, or internal connection clean-up initiated closing or meta-closing of the channel. If it resulted from receiving `Channel.Close` from broker, we will expedite waking up of the event subsystem so that it may respond by raising `ChannelClosed` from user's context. NOTE: We can't raise exceptions in callbacks in order to protect the integrity of the underlying implementation. BlockingConnection's underlying asynchronous connection adapter (SelectConnection) uses callbacks to communicate with us. If BlockingConnection leaks exceptions back into the I/O loop or the asynchronous connection adapter, we interrupt their normal workflow and introduce a high likelihood of state inconsistency. 
See `pika.Channel.add_on_close_callback()` for additional documentation. :param pika.Channel _channel: (unused) :param Exception reason: """ LOGGER.debug('_on_channel_closed: %r; %r', reason, self) self._closing_reason = reason if isinstance(reason, exceptions.ChannelClosedByBroker): self._cleanup() # Request urgent termination of `process_data_events()`, in case # it's executing or next time it will execute self.connection._request_channel_dispatch(-self.channel_number) def _on_consumer_cancelled_by_broker(self, method_frame): """Called by impl when broker cancels consumer via Basic.Cancel. This is a RabbitMQ-specific feature. The circumstances include deletion of queue being consumed as well as failure of a HA node responsible for the queue being consumed. :param pika.frame.Method method_frame: method frame with the `spec.Basic.Cancel` method """ evt = _ConsumerCancellationEvt(method_frame) consumer = self._consumer_infos[method_frame.method.consumer_tag] # Don't interfere with client-initiated cancellation flow if not consumer.tearing_down: consumer.state = _ConsumerInfo.CANCELLED_BY_BROKER if consumer.alternate_event_sink is not None: consumer.alternate_event_sink(evt) else: self._add_pending_event(evt) def _on_consumer_message_delivery(self, _channel, method, properties, body): """Called by impl when a message is delivered for a consumer :param Channel channel: The implementation channel object :param spec.Basic.Deliver method: :param pika.spec.BasicProperties properties: message properties :param bytes body: delivered message body; empty string if no body """ evt = _ConsumerDeliveryEvt(method, properties, body) consumer = self._consumer_infos[method.consumer_tag] if consumer.alternate_event_sink is not None: consumer.alternate_event_sink(evt) else: self._add_pending_event(evt) def _on_consumer_generator_event(self, evt): """Sink for the queue consumer generator's consumer events; append the event to queue consumer generator's pending events buffer. 
:param evt: an object of type _ConsumerDeliveryEvt or _ConsumerCancellationEvt """ self._queue_consumer_generator.pending_events.append(evt) # Schedule termination of connection.process_data_events using a # negative channel number self.connection._request_channel_dispatch(-self.channel_number) def _cancel_all_consumers(self): """Cancel all consumers. NOTE: pending non-ackable messages will be lost; pending ackable messages will be rejected. """ if self._consumer_infos: LOGGER.debug('Cancelling %i consumers', len(self._consumer_infos)) if self._queue_consumer_generator is not None: # Cancel queue consumer generator self.cancel() # Cancel consumers created via basic_consume for consumer_tag in compat.dictkeys(self._consumer_infos): self.basic_cancel(consumer_tag) def _dispatch_events(self): """Called by BlockingConnection to dispatch pending events. `BlockingChannel` schedules this callback via `BlockingConnection._request_channel_dispatch` """ while self._pending_events: evt = self._pending_events.popleft() if type(evt) is _ConsumerDeliveryEvt: # pylint: disable=C0123 consumer_info = self._consumer_infos[evt.method.consumer_tag] consumer_info.on_message_callback(self, evt.method, evt.properties, evt.body) elif type(evt) is _ConsumerCancellationEvt: # pylint: disable=C0123 del self._consumer_infos[evt.method_frame.method.consumer_tag] self._impl.callbacks.process(self.channel_number, self._CONSUMER_CANCELLED_CB_KEY, self, evt.method_frame) else: evt.dispatch() def close(self, reply_code=0, reply_text="Normal shutdown"): """Will invoke a clean shutdown of the channel with the AMQP Broker. 
:param int reply_code: The reply code to close the channel with :param str reply_text: The reply text to close the channel with """ LOGGER.debug('Channel.close(%s, %s)', reply_code, reply_text) self._impl._raise_if_not_open() try: # Cancel remaining consumers self._cancel_all_consumers() # Close the channel self._impl.close(reply_code=reply_code, reply_text=reply_text) self._flush_output(lambda: self.is_closed) finally: self._cleanup() def flow(self, active): """Turn Channel flow control off and on. NOTE: RabbitMQ doesn't support active=False; per https://www.rabbitmq.com/specification.html: "active=false is not supported by the server. Limiting prefetch with basic.qos provides much better control" For more information, please reference: http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow :param bool active: Turn flow on (True) or off (False) :returns: True if broker will start or continue sending; False if not :rtype: bool """ with _CallbackResult(self._FlowOkCallbackResultArgs) as flow_ok_result: self._impl.flow( active=active, callback=flow_ok_result.set_value_once) self._flush_output(flow_ok_result.is_ready) return flow_ok_result.value.active def add_on_cancel_callback(self, callback): """Pass a callback function that will be called when Basic.Cancel is sent by the broker. The callback function should receive a method frame parameter. :param callable callback: a callable for handling broker's Basic.Cancel notification with the call signature: callback(method_frame) where method_frame is of type `pika.frame.Method` with method of type `spec.Basic.Cancel` """ self._impl.callbacks.add( self.channel_number, self._CONSUMER_CANCELLED_CB_KEY, callback, one_shot=False) def add_on_return_callback(self, callback): """Pass a callback function that will be called when a published message is rejected and returned by the server via `Basic.Return`. 
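        For example, a simple handler that just logs returned messages; the
        registration shown in the comment assumes an open `BlockingChannel`
        named `channel`:

```python
def on_returned(channel, method, properties, body):
    # method.reply_code/reply_text explain why the broker returned the
    # message, e.g. 312 NO_ROUTE for an unroutable mandatory publish.
    print('returned (%s %s): %r'
          % (method.reply_code, method.reply_text, body))

# Registration, given an open BlockingChannel named `channel`:
#     channel.add_on_return_callback(on_returned)
```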
:param callable callback: The method to call on callback with the signature callback(channel, method, properties, body), where - channel: pika.Channel - method: pika.spec.Basic.Return - properties: pika.spec.BasicProperties - body: bytes """ self._impl.add_on_return_callback( lambda _channel, method, properties, body: ( self._add_pending_event( _ReturnedMessageEvt( callback, self, method, properties, body)))) def basic_consume(self, queue, on_message_callback, auto_ack=False, exclusive=False, consumer_tag=None, arguments=None): """Sends the AMQP command Basic.Consume to the broker and binds messages for the consumer_tag to the consumer callback. If you do not pass in a consumer_tag, one will be automatically generated for you. Returns the consumer tag. NOTE: the consumer callbacks are dispatched only in the scope of specially-designated methods: see `BlockingConnection.process_data_events` and `BlockingChannel.start_consuming`. For more information about Basic.Consume, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume :param str queue: The queue from which to consume :param callable on_message_callback: Required function for dispatching messages to user, having the signature: on_message_callback(channel, method, properties, body) - channel: BlockingChannel - method: spec.Basic.Deliver - properties: spec.BasicProperties - body: bytes :param bool auto_ack: if set to True, automatic acknowledgement mode will be used (see http://www.rabbitmq.com/confirms.html). This corresponds with the 'no_ack' parameter in the basic.consume AMQP 0.9.1 method :param bool exclusive: Don't allow other consumers on the queue :param str consumer_tag: You may specify your own consumer tag; if left empty, a consumer tag will be generated automatically :param dict arguments: Custom key/value pair arguments for the consumer :returns: consumer tag :rtype: str :raises pika.exceptions.DuplicateConsumerTag: if consumer with given consumer_tag is already present. 
""" validators.require_string(queue, 'queue') validators.require_callback(on_message_callback, 'on_message_callback') return self._basic_consume_impl( queue=queue, on_message_callback=on_message_callback, auto_ack=auto_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments) def _basic_consume_impl(self, queue, auto_ack, exclusive, consumer_tag, arguments=None, on_message_callback=None, alternate_event_sink=None): """The low-level implementation used by `basic_consume` and `consume`. See `basic_consume` docstring for more info. NOTE: exactly one of on_message_callback/alternate_event_sink musts be non-None. This method has one additional parameter alternate_event_sink over the args described in `basic_consume`. :param callable alternate_event_sink: if specified, _ConsumerDeliveryEvt and _ConsumerCancellationEvt objects will be diverted to this callback instead of being deposited in the channel's `_pending_events` container. Signature: alternate_event_sink(evt) :raises pika.exceptions.DuplicateConsumerTag: if consumer with given consumer_tag is already present. 
""" if (on_message_callback is None) == (alternate_event_sink is None): raise ValueError( ('exactly one of on_message_callback/alternate_event_sink must ' 'be non-None', on_message_callback, alternate_event_sink)) if not consumer_tag: # Need a consumer tag to register consumer info before sending # request to broker, because I/O might dispatch incoming messages # immediately following Basic.Consume-ok before _flush_output # returns consumer_tag = self._impl._generate_consumer_tag() if consumer_tag in self._consumer_infos: raise exceptions.DuplicateConsumerTag(consumer_tag) # Create new consumer self._consumer_infos[consumer_tag] = _ConsumerInfo( consumer_tag, auto_ack=auto_ack, on_message_callback=on_message_callback, alternate_event_sink=alternate_event_sink) try: with self._basic_consume_ok_result as ok_result: tag = self._impl.basic_consume( on_message_callback=self._on_consumer_message_delivery, queue=queue, auto_ack=auto_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments) assert tag == consumer_tag, (tag, consumer_tag) self._flush_output(ok_result.is_ready) except Exception: # If channel was closed, self._consumer_infos will be empty if consumer_tag in self._consumer_infos: del self._consumer_infos[consumer_tag] # Schedule termination of connection.process_data_events using a # negative channel number self.connection._request_channel_dispatch(-self.channel_number) raise # NOTE: Consumer could get cancelled by broker immediately after opening # (e.g., queue getting deleted externally) if self._consumer_infos[consumer_tag].setting_up: self._consumer_infos[consumer_tag].state = _ConsumerInfo.ACTIVE return consumer_tag def basic_cancel(self, consumer_tag): """This method cancels a consumer. This does not affect already delivered messages, but it does mean the server will not send any more messages for that consumer. The client may receive an arbitrary number of messages in between sending the cancel method and receiving the cancel-ok reply. 
NOTE: When cancelling an auto_ack=False consumer, this implementation automatically Nacks and suppresses any incoming messages that have not yet been dispatched to the consumer's callback. However, when cancelling a auto_ack=True consumer, this method will return any pending messages that arrived before broker confirmed the cancellation. :param str consumer_tag: Identifier for the consumer; the result of passing a consumer_tag that was created on another channel is undefined (bad things will happen) :returns: (NEW IN pika 0.10.0) empty sequence for a auto_ack=False consumer; for a auto_ack=True consumer, returns a (possibly empty) sequence of pending messages that arrived before broker confirmed the cancellation (this is done instead of via consumer's callback in order to prevent reentrancy/recursion. Each message is four-tuple: (channel, method, properties, body) - channel: BlockingChannel - method: spec.Basic.Deliver - properties: spec.BasicProperties - body: bytes :rtype: list """ try: consumer_info = self._consumer_infos[consumer_tag] except KeyError: LOGGER.warning( "User is attempting to cancel an unknown consumer=%s; " "already cancelled by user or broker?", consumer_tag) return [] try: # Assertion failure here is most likely due to reentrance assert consumer_info.active or consumer_info.cancelled_by_broker, ( consumer_info.state) # Assertion failure here signals disconnect between consumer state # in BlockingChannel and Channel assert (consumer_info.cancelled_by_broker or consumer_tag in self._impl._consumers), consumer_tag auto_ack = consumer_info.auto_ack consumer_info.state = _ConsumerInfo.TEARING_DOWN with _CallbackResult() as cancel_ok_result: # Nack pending messages for auto_ack=False consumer if not auto_ack: pending_messages = self._remove_pending_deliveries( consumer_tag) if pending_messages: # NOTE: we use impl's basic_reject to avoid the # possibility of redelivery before basic_cancel takes # control of nacking. 
# NOTE: we can't use basic_nack with the multiple option # to avoid nacking messages already held by our client. for message in pending_messages: self._impl.basic_reject( message.method.delivery_tag, requeue=True) # Cancel the consumer; impl takes care of rejecting any # additional deliveries that arrive for a auto_ack=False # consumer self._impl.basic_cancel( consumer_tag=consumer_tag, callback=cancel_ok_result.signal_once) # Flush output and wait for Basic.Cancel-ok or # broker-initiated Basic.Cancel self._flush_output( cancel_ok_result.is_ready, lambda: consumer_tag not in self._impl._consumers) if auto_ack: # Return pending messages for auto_ack=True consumer return [(evt.method, evt.properties, evt.body) for evt in self._remove_pending_deliveries(consumer_tag) ] else: # impl takes care of rejecting any incoming deliveries during # cancellation messages = self._remove_pending_deliveries(consumer_tag) assert not messages, messages return [] finally: # NOTE: The entry could be purged if channel or connection closes if consumer_tag in self._consumer_infos: del self._consumer_infos[consumer_tag] # Schedule termination of connection.process_data_events using a # negative channel number self.connection._request_channel_dispatch(-self.channel_number) def _remove_pending_deliveries(self, consumer_tag): """Extract _ConsumerDeliveryEvt objects destined for the given consumer from pending events, discarding the _ConsumerCancellationEvt, if any :param str consumer_tag: :returns: a (possibly empty) sequence of _ConsumerDeliveryEvt destined for the given consumer tag :rtype: list """ remaining_events = deque() unprocessed_messages = [] while self._pending_events: evt = self._pending_events.popleft() if type(evt) is _ConsumerDeliveryEvt: # pylint: disable=C0123 if evt.method.consumer_tag == consumer_tag: unprocessed_messages.append(evt) continue if type(evt) is _ConsumerCancellationEvt: # pylint: disable=C0123 if evt.method_frame.method.consumer_tag == consumer_tag: # A 
broker-initiated Basic.Cancel must have arrived # before our cancel request completed continue remaining_events.append(evt) self._pending_events = remaining_events return unprocessed_messages def start_consuming(self): """Processes I/O events and dispatches timers and `basic_consume` callbacks until all consumers are cancelled. NOTE: this blocking function may not be called from the scope of a pika callback, because dispatching `basic_consume` callbacks from this context would constitute recursion. :raises pika.exceptions.ReentrancyError: if called from the scope of a `BlockingConnection` or `BlockingChannel` callback :raises ChannelClosed: when this channel is closed by broker. """ # Check if called from the scope of an event dispatch callback with self.connection._acquire_event_dispatch() as dispatch_allowed: if not dispatch_allowed: raise exceptions.ReentrancyError( 'start_consuming may not be called from the scope of ' 'another BlockingConnection or BlockingChannel callback') self._impl._raise_if_not_open() # Process events as long as consumers exist on this channel while self._consumer_infos: # This will raise ChannelClosed if channel is closed by broker self._process_data_events(time_limit=None) def stop_consuming(self, consumer_tag=None): """ Cancels all consumers, signalling the `start_consuming` loop to exit. NOTE: pending non-ackable messages will be lost; pending ackable messages will be rejected. """ if consumer_tag: self.basic_cancel(consumer_tag) else: self._cancel_all_consumers() def consume(self, queue, auto_ack=False, exclusive=False, arguments=None, inactivity_timeout=None): """Blocking consumption of a queue instead of via a callback. This method is a generator that yields each message as a tuple of method, properties, and body. The active generator iterator terminates when the consumer is cancelled by client via `BlockingChannel.cancel()` or by broker. 
        Example:
        ::
            for method, properties, body in channel.consume('queue'):
                print(body)
                channel.basic_ack(method.delivery_tag)

        You should call `BlockingChannel.cancel()` when you escape out of the
        generator loop.

        If you don't cancel this consumer, then next call on the same channel
        to `consume()` with the exact same (queue, auto_ack, exclusive)
        parameters will resume the existing consumer generator; however,
        calling with different parameters will result in an exception.

        :param str queue: The queue name to consume
        :param bool auto_ack: Tell the broker to not expect an ack/nack
            response
        :param bool exclusive: Don't allow other consumers on the queue
        :param dict arguments: Custom key/value pair arguments for the consumer
        :param float inactivity_timeout: if a number is given (in seconds),
            will cause the method to yield (None, None, None) after the given
            period of inactivity; this permits pseudo-regular maintenance
            activities to be carried out by the user while waiting for
            messages to arrive. If None is given (default), then the method
            blocks until the next event arrives. NOTE that timing granularity
            is limited by the timer resolution of the underlying
            implementation. NEW in pika 0.10.0.

        :yields: tuple(spec.Basic.Deliver, spec.BasicProperties, str or
            unicode)

        :raises ValueError: if consumer-creation parameters don't match those
            of the existing queue consumer generator, if any.
            NEW in pika 0.10.0
        :raises ChannelClosed: when this channel is closed by broker.
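The ``inactivity_timeout`` behaviour described above boils down to a deadline
loop around the event source. A minimal standalone sketch of that timing
logic, where ``events`` and the polling sleep stand in for the real I/O
processing (no broker needed; names are illustrative, not pika's API):

```python
import time
from collections import deque

def consume_with_timeout(events, inactivity_timeout, clock=time.monotonic):
    """Yield queued events; yield None once after a period of inactivity."""
    while True:
        if events:
            yield events.popleft()
            continue
        deadline = clock() + inactivity_timeout
        while not events:
            if clock() >= deadline:
                yield None  # inactivity sentinel, like (None, None, None)
                return
            # stand-in for process_data_events(time_limit=delta)
            time.sleep(0.001)

queue_ = deque(['msg1', 'msg2'])
gen = consume_with_timeout(queue_, inactivity_timeout=0.05)
```

Draining ``gen`` yields the two queued items and then the ``None`` sentinel
once the deadline passes with no new events.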
""" self._impl._raise_if_not_open() params = (queue, auto_ack, exclusive) if self._queue_consumer_generator is not None: if params != self._queue_consumer_generator.params: raise ValueError( 'Consume with different params not allowed on existing ' 'queue consumer generator; previous params: %r; ' 'new params: %r' % (self._queue_consumer_generator.params, (queue, auto_ack, exclusive))) else: LOGGER.debug('Creating new queue consumer generator; params: %r', params) # Need a consumer tag to register consumer info before sending # request to broker, because I/O might pick up incoming messages # in addition to Basic.Consume-ok consumer_tag = self._impl._generate_consumer_tag() self._queue_consumer_generator = _QueueConsumerGeneratorInfo( params, consumer_tag) try: self._basic_consume_impl( queue=queue, auto_ack=auto_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments, alternate_event_sink=self._on_consumer_generator_event) except Exception: self._queue_consumer_generator = None raise LOGGER.info('Created new queue consumer generator %r', self._queue_consumer_generator) while self._queue_consumer_generator is not None: # Process pending events if self._queue_consumer_generator.pending_events: evt = self._queue_consumer_generator.pending_events.popleft() if type(evt) is _ConsumerCancellationEvt: # pylint: disable=C0123 # Consumer was cancelled by broker self._queue_consumer_generator = None break else: yield (evt.method, evt.properties, evt.body) continue if inactivity_timeout is None: # Wait indefinitely for a message to arrive, while processing # I/O events and triggering ChannelClosed exception when the # channel fails self._process_data_events(time_limit=None) continue # Wait with inactivity timeout wait_start_time = compat.time_now() wait_deadline = wait_start_time + inactivity_timeout delta = inactivity_timeout while (self._queue_consumer_generator is not None and not self._queue_consumer_generator.pending_events): 
self._process_data_events(time_limit=delta) if not self._queue_consumer_generator: # Consumer was cancelled by client break if self._queue_consumer_generator.pending_events: # Got message(s) break delta = wait_deadline - compat.time_now() if delta <= 0.0: # Signal inactivity timeout yield (None, None, None) break def _process_data_events(self, time_limit): """Wrapper for `BlockingConnection.process_data_events()` with common channel-specific logic that raises ChannelClosed if broker closed this channel. NOTE: We need to raise an exception in the context of user's call into our API to protect the integrity of the underlying implementation. BlockingConnection's underlying asynchronous connection adapter (SelectConnection) uses callbacks to communicate with us. If BlockingConnection leaks exceptions back into the I/O loop or the asynchronous connection adapter, we interrupt their normal workflow and introduce a high likelihood of state inconsistency. See `BlockingConnection.process_data_events()` for documentation of args and behavior. :param float time_limit: """ self.connection.process_data_events(time_limit=time_limit) if self.is_closed and isinstance(self._closing_reason, exceptions.ChannelClosedByBroker): LOGGER.debug('Channel close by broker detected, raising %r; %r', self._closing_reason, self) raise self._closing_reason # pylint: disable=E0702 def get_waiting_message_count(self): """Returns the number of messages that may be retrieved from the current queue consumer generator via `BlockingChannel.consume` without blocking. 
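The returned count deliberately ignores a broker cancellation event sitting at
the end of the pending queue, mirroring the adjustment in the method body. A
standalone model of that logic (``CANCEL`` is a hypothetical stand-in for
pika's internal `_ConsumerCancellationEvt`):

```python
from collections import deque

CANCEL = object()  # stand-in for a broker-initiated Basic.Cancel event

def waiting_message_count(pending_events):
    """Number of deliveries retrievable without blocking."""
    count = len(pending_events)
    if count and pending_events[-1] is CANCEL:
        count -= 1  # the trailing cancellation is not a message
    return count
```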
NEW in pika 0.10.0 :returns: The number of waiting messages :rtype: int """ if self._queue_consumer_generator is not None: pending_events = self._queue_consumer_generator.pending_events count = len(pending_events) if count and type(pending_events[-1]) is _ConsumerCancellationEvt: # pylint: disable=C0123 count -= 1 else: count = 0 return count def cancel(self): """Cancel the queue consumer created by `BlockingChannel.consume`, rejecting all pending ackable messages. NOTE: If you're looking to cancel a consumer issued with BlockingChannel.basic_consume then you should call BlockingChannel.basic_cancel. :returns: The number of messages requeued by Basic.Nack. NEW in 0.10.0: returns 0 :rtype: int """ if self._queue_consumer_generator is None: LOGGER.warning('cancel: queue consumer generator is inactive ' '(already cancelled by client or broker?)') return 0 try: _, auto_ack, _ = self._queue_consumer_generator.params if not auto_ack: # Reject messages held by queue consumer generator; NOTE: we # can't use basic_nack with the multiple option to avoid nacking # messages already held by our client. pending_events = self._queue_consumer_generator.pending_events # NOTE `get_waiting_message_count` adjusts for `Basic.Cancel` # from the server at the end (if any) for _ in compat.xrange(self.get_waiting_message_count()): evt = pending_events.popleft() self._impl.basic_reject( evt.method.delivery_tag, requeue=True) self.basic_cancel(self._queue_consumer_generator.consumer_tag) finally: self._queue_consumer_generator = None # Return 0 for compatibility with legacy implementation; the number of # nacked messages is not meaningful since only messages consumed with # auto_ack=False may be nacked, and those arriving after calling # basic_cancel will be rejected automatically by impl channel, so we'll # never know how many of those were nacked. return 0 def basic_ack(self, delivery_tag=0, multiple=False): """Acknowledge one or more messages. 
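The ``multiple`` flag follows AMQP's cumulative-acknowledgement rule. A small
standalone model of which outstanding delivery tags a single ack settles
(illustrative only, not pika's internals):

```python
def settled_tags(outstanding, delivery_tag, multiple):
    """Return the outstanding delivery tags settled by one Basic.Ack."""
    if multiple:
        if delivery_tag == 0:
            return set(outstanding)  # tag 0 + multiple acks everything
        return {tag for tag in outstanding if tag <= delivery_tag}
    return {delivery_tag} & set(outstanding)
```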
        When sent by the client, this method acknowledges one or more messages
        delivered via the Deliver or Get-Ok methods. When sent by server, this
        method acknowledges one or more messages published with the Publish
        method on a channel in confirm mode. The acknowledgement can be for a
        single message or a set of messages up to and including a specific
        message.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
            "up to and including", so that multiple messages can be
            acknowledged with a single method. If set to False, the delivery
            tag refers to a single message. If the multiple field is 1, and
            the delivery tag is zero, this indicates acknowledgement of all
            outstanding messages.
        """
        self._impl.basic_ack(delivery_tag=delivery_tag, multiple=multiple)
        self._flush_output()

    def basic_nack(self, delivery_tag=0, multiple=False, requeue=True):
        """This method allows a client to reject one or more incoming
        messages. It can be used to interrupt and cancel large incoming
        messages, or return untreatable messages to their original queue.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool multiple: If set to True, the delivery tag is treated as
            "up to and including", so that multiple messages can be rejected
            with a single method. If set to False, the delivery tag refers to
            a single message. If the multiple field is 1, and the delivery tag
            is zero, this indicates rejection of all outstanding messages.
        :param bool requeue: If requeue is true, the server will attempt to
            requeue the message. If requeue is false or the requeue attempt
            fails the messages are discarded or dead-lettered.

        """
        self._impl.basic_nack(
            delivery_tag=delivery_tag, multiple=multiple, requeue=requeue)
        self._flush_output()

    def basic_get(self, queue, auto_ack=False):
        """Get a single message from the AMQP broker. Returns a sequence with
        the method frame, message properties, and body.
:param str queue: Name of queue from which to get a message :param bool auto_ack: Tell the broker to not expect a reply :returns: a three-tuple; (None, None, None) if the queue was empty; otherwise (method, properties, body); NOTE: body may be None :rtype: (spec.Basic.GetOk|None, spec.BasicProperties|None, str|None) """ assert not self._basic_getempty_result validators.require_string(queue, 'queue') # NOTE: nested with for python 2.6 compatibility with _CallbackResult(self._RxMessageArgs) as get_ok_result: with self._basic_getempty_result: self._impl.basic_get( queue=queue, auto_ack=auto_ack, callback=get_ok_result.set_value_once) self._flush_output(get_ok_result.is_ready, self._basic_getempty_result.is_ready) if get_ok_result: evt = get_ok_result.value return evt.method, evt.properties, evt.body else: assert self._basic_getempty_result, ( "wait completed without GetOk and GetEmpty") return None, None, None def basic_publish(self, exchange, routing_key, body, properties=None, mandatory=False): """Publish to the channel with the given exchange, routing key, and body. For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish NOTE: mandatory may be enabled even without delivery confirmation, but in the absence of delivery confirmation the synchronous implementation has no way to know how long to wait for the Basic.Return. :param str exchange: The exchange to publish to :param str routing_key: The routing key to bind on :param bytes body: The message body; empty string if no body :param pika.spec.BasicProperties properties: message properties :param bool mandatory: The mandatory flag :raises UnroutableError: raised when a message published in publisher-acknowledgments mode (see `BlockingChannel.confirm_delivery`) is returned via `Basic.Return` followed by `Basic.Ack`. :raises NackError: raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. 
See `BlockingChannel.confirm_delivery`. """ if self._delivery_confirmation: # In publisher-acknowledgments mode with self._message_confirmation_result: self._impl.basic_publish( exchange=exchange, routing_key=routing_key, body=body, properties=properties, mandatory=mandatory) self._flush_output(self._message_confirmation_result.is_ready) conf_method = ( self._message_confirmation_result.value.method_frame.method) if isinstance(conf_method, pika.spec.Basic.Nack): # Broker was unable to process the message due to internal # error LOGGER.warning( "Message was Nack'ed by broker: nack=%r; channel=%s; " "exchange=%s; routing_key=%s; mandatory=%r; ", conf_method, self.channel_number, exchange, routing_key, mandatory) if self._puback_return is not None: returned_messages = [self._puback_return] self._puback_return = None else: returned_messages = [] raise exceptions.NackError(returned_messages) else: assert isinstance(conf_method, pika.spec.Basic.Ack), (conf_method) if self._puback_return is not None: # Unroutable message was returned messages = [self._puback_return] self._puback_return = None raise exceptions.UnroutableError(messages) else: # In non-publisher-acknowledgments mode self._impl.basic_publish( exchange=exchange, routing_key=routing_key, body=body, properties=properties, mandatory=mandatory) self._flush_output() def basic_qos(self, prefetch_size=0, prefetch_count=0, global_qos=False): """Specify quality of service. This method requests a specific quality of service. The QoS can be specified for the current channel or for all channels on the connection. The client can request that messages be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. :param int prefetch_size: This field specifies the prefetch window size. 
            The server will send a message in advance if it is equal to or
            smaller in size than the available prefetch size (and also falls
            into other prefetch limits). May be set to zero, meaning "no
            specific limit", although other prefetch limits may still apply.
            The prefetch-size is ignored if the no-ack option is set in the
            consumer.
        :param int prefetch_count: Specifies a prefetch window in terms of
            whole messages. This field may be used in combination with the
            prefetch-size field; a message will only be sent in advance if
            both prefetch windows (and those at the channel and connection
            level) allow it. The prefetch-count is ignored if the no-ack
            option is set in the consumer.
        :param bool global_qos: Should the QoS apply to all channels on the
            connection.

        """
        with _CallbackResult() as qos_ok_result:
            self._impl.basic_qos(
                callback=qos_ok_result.signal_once,
                prefetch_size=prefetch_size,
                prefetch_count=prefetch_count,
                global_qos=global_qos)
            self._flush_output(qos_ok_result.is_ready)

    def basic_recover(self, requeue=False):
        """This method asks the server to redeliver all unacknowledged
        messages on a specified channel. Zero or more messages may be
        redelivered. This method replaces the asynchronous Recover.

        :param bool requeue: If False, the message will be redelivered to the
            original recipient. If True, the server will attempt to requeue
            the message, potentially then delivering it to an alternative
            subscriber.

        """
        with _CallbackResult() as recover_ok_result:
            self._impl.basic_recover(
                requeue=requeue, callback=recover_ok_result.signal_once)
            self._flush_output(recover_ok_result.is_ready)

    def basic_reject(self, delivery_tag=0, requeue=True):
        """Reject an incoming message. This method allows a client to reject
        a message. It can be used to interrupt and cancel large incoming
        messages, or return untreatable messages to their original queue.

        :param int delivery_tag: The server-assigned delivery tag
        :param bool requeue: If requeue is true, the server will attempt to
            requeue the message.
            If requeue is false or the requeue attempt fails the messages are
            discarded or dead-lettered.

        """
        self._impl.basic_reject(delivery_tag=delivery_tag, requeue=requeue)
        self._flush_output()

    def confirm_delivery(self):
        """Turn on RabbitMQ-proprietary Confirm mode in the channel.

        For more information see:
            https://www.rabbitmq.com/confirms.html
        """
        if self._delivery_confirmation:
            LOGGER.error(
                'confirm_delivery: confirmation was already enabled '
                'on channel=%s', self.channel_number)
            return

        with _CallbackResult() as select_ok_result:
            self._impl.confirm_delivery(
                ack_nack_callback=self._message_confirmation_result.
                set_value_once,
                callback=select_ok_result.signal_once)

            self._flush_output(select_ok_result.is_ready)

        self._delivery_confirmation = True

        # Unroutable messages returned after this point will be in the context
        # of publisher acknowledgments
        self._impl.add_on_return_callback(self._on_puback_message_returned)

    def exchange_declare(self,
                         exchange,
                         exchange_type=ExchangeType.direct,
                         passive=False,
                         durable=False,
                         auto_delete=False,
                         internal=False,
                         arguments=None):
        """This method creates an exchange if it does not already exist, and
        if the exchange exists, verifies that it is of the correct and
        expected class.

        If passive is set, the server will reply with Declare-Ok if the
        exchange already exists with the same name; if the exchange does not
        already exist, the server MUST raise a channel exception with reply
        code 404 (not found).

        :param str exchange: The exchange name consists of a non-empty
            sequence of these characters: letters, digits, hyphen, underscore,
            period, or colon.
:param str exchange_type: The exchange type to use :param bool passive: Perform a declare or just check to see if it exists :param bool durable: Survive a reboot of RabbitMQ :param bool auto_delete: Remove when no more queues are bound to it :param bool internal: Can only be published to by other exchanges :param dict arguments: Custom key/value pair arguments for the exchange :returns: Method frame from the Exchange.Declare-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.DeclareOk` """ validators.require_string(exchange, 'exchange') with _CallbackResult( self._MethodFrameCallbackResultArgs) as declare_ok_result: self._impl.exchange_declare( exchange=exchange, exchange_type=exchange_type, passive=passive, durable=durable, auto_delete=auto_delete, internal=internal, arguments=arguments, callback=declare_ok_result.set_value_once) self._flush_output(declare_ok_result.is_ready) return declare_ok_result.value.method_frame def exchange_delete(self, exchange=None, if_unused=False): """Delete the exchange. :param str exchange: The exchange name :param bool if_unused: only delete if the exchange is unused :returns: Method frame from the Exchange.Delete-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.DeleteOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as delete_ok_result: self._impl.exchange_delete( exchange=exchange, if_unused=if_unused, callback=delete_ok_result.set_value_once) self._flush_output(delete_ok_result.is_ready) return delete_ok_result.value.method_frame def exchange_bind(self, destination, source, routing_key='', arguments=None): """Bind an exchange to another exchange. 
:param str destination: The destination exchange to bind :param str source: The source exchange to bind to :param str routing_key: The routing key to bind on :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Exchange.Bind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.BindOk` """ validators.require_string(destination, 'destination') validators.require_string(source, 'source') with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ bind_ok_result: self._impl.exchange_bind( destination=destination, source=source, routing_key=routing_key, arguments=arguments, callback=bind_ok_result.set_value_once) self._flush_output(bind_ok_result.is_ready) return bind_ok_result.value.method_frame def exchange_unbind(self, destination=None, source=None, routing_key='', arguments=None): """Unbind an exchange from another exchange. :param str destination: The destination exchange to unbind :param str source: The source exchange to unbind from :param str routing_key: The routing key to unbind :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Exchange.Unbind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Exchange.UnbindOk` """ with _CallbackResult( self._MethodFrameCallbackResultArgs) as unbind_ok_result: self._impl.exchange_unbind( destination=destination, source=source, routing_key=routing_key, arguments=arguments, callback=unbind_ok_result.set_value_once) self._flush_output(unbind_ok_result.is_ready) return unbind_ok_result.value.method_frame def queue_declare(self, queue, passive=False, durable=False, exclusive=False, auto_delete=False, arguments=None): """Declare queue, create if needed. This method creates or checks a queue. When creating a new queue the client can specify various properties that control the durability of the queue and its contents, and the level of sharing for the queue. 
Use an empty string as the queue name for the broker to auto-generate one. Retrieve this auto-generated queue name from the returned `spec.Queue.DeclareOk` method frame. :param str queue: The queue name; if empty string, the broker will create a unique queue name :param bool passive: Only check to see if the queue exists and raise `ChannelClosed` if it doesn't :param bool durable: Survive reboots of the broker :param bool exclusive: Only allow access by the current connection :param bool auto_delete: Delete after consumer cancels or disconnects :param dict arguments: Custom key/value arguments for the queue :returns: Method frame from the Queue.Declare-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.DeclareOk` """ validators.require_string(queue, 'queue') with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ declare_ok_result: self._impl.queue_declare( queue=queue, passive=passive, durable=durable, exclusive=exclusive, auto_delete=auto_delete, arguments=arguments, callback=declare_ok_result.set_value_once) self._flush_output(declare_ok_result.is_ready) return declare_ok_result.value.method_frame def queue_delete(self, queue, if_unused=False, if_empty=False): """Delete a queue from the broker. 
:param str queue: The queue to delete :param bool if_unused: only delete if it's unused :param bool if_empty: only delete if the queue is empty :returns: Method frame from the Queue.Delete-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.DeleteOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ delete_ok_result: self._impl.queue_delete( queue=queue, if_unused=if_unused, if_empty=if_empty, callback=delete_ok_result.set_value_once) self._flush_output(delete_ok_result.is_ready) return delete_ok_result.value.method_frame def queue_purge(self, queue): """Purge all of the messages from the specified queue :param str queue: The queue to purge :returns: Method frame from the Queue.Purge-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.PurgeOk` """ with _CallbackResult(self._MethodFrameCallbackResultArgs) as \ purge_ok_result: self._impl.queue_purge( queue=queue, callback=purge_ok_result.set_value_once) self._flush_output(purge_ok_result.is_ready) return purge_ok_result.value.method_frame def queue_bind(self, queue, exchange, routing_key=None, arguments=None): """Bind the queue to the specified exchange :param str queue: The queue to bind to the exchange :param str exchange: The source exchange to bind to :param str routing_key: The routing key to bind on :param dict arguments: Custom key/value pair arguments for the binding :returns: Method frame from the Queue.Bind-ok response :rtype: `pika.frame.Method` having `method` attribute of type `spec.Queue.BindOk` """ validators.require_string(queue, 'queue') validators.require_string(exchange, 'exchange') with _CallbackResult( self._MethodFrameCallbackResultArgs) as bind_ok_result: self._impl.queue_bind( queue=queue, exchange=exchange, routing_key=routing_key, arguments=arguments, callback=bind_ok_result.set_value_once) self._flush_output(bind_ok_result.is_ready) return bind_ok_result.value.method_frame def queue_unbind(self, queue, 
                     exchange=None,
                     routing_key=None,
                     arguments=None):
        """Unbind a queue from an exchange.

        :param str queue: The queue to unbind from the exchange
        :param str exchange: The source exchange to unbind from
        :param str routing_key: The routing key to unbind
        :param dict arguments: Custom key/value pair arguments for the binding

        :returns: Method frame from the Queue.Unbind-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Queue.UnbindOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                unbind_ok_result:
            self._impl.queue_unbind(
                queue=queue,
                exchange=exchange,
                routing_key=routing_key,
                arguments=arguments,
                callback=unbind_ok_result.set_value_once)
            self._flush_output(unbind_ok_result.is_ready)
        return unbind_ok_result.value.method_frame

    def tx_select(self):
        """Select standard transaction mode. This method sets the channel to
        use standard transactions. The client must use this method at least
        once on a channel before using the Commit or Rollback methods.

        :returns: Method frame from the Tx.Select-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.SelectOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                select_ok_result:
            self._impl.tx_select(select_ok_result.set_value_once)

            self._flush_output(select_ok_result.is_ready)
        return select_ok_result.value.method_frame

    def tx_commit(self):
        """Commit a transaction.

        :returns: Method frame from the Tx.Commit-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.CommitOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                commit_ok_result:
            self._impl.tx_commit(commit_ok_result.set_value_once)

            self._flush_output(commit_ok_result.is_ready)
        return commit_ok_result.value.method_frame

    def tx_rollback(self):
        """Rollback a transaction.
        :returns: Method frame from the Tx.Rollback-ok response
        :rtype: `pika.frame.Method` having `method` attribute of type
            `spec.Tx.RollbackOk`

        """
        with _CallbackResult(self._MethodFrameCallbackResultArgs) as \
                rollback_ok_result:
            self._impl.tx_rollback(rollback_ok_result.set_value_once)

            self._flush_output(rollback_ok_result.is_ready)
        return rollback_ok_result.value.method_frame

pika-1.2.0/pika/adapters/gevent_connection.py

"""Use pika with the Gevent IOLoop."""

import functools
import logging
import os
import threading
import weakref

try:
    import queue
except ImportError:  # Python <= v2.7
    import Queue as queue

import gevent
import gevent.hub
import gevent.socket

import pika.compat
from pika.adapters.base_connection import BaseConnection
from pika.adapters.utils.io_services_utils import check_callback_arg
from pika.adapters.utils.nbio_interface import (
    AbstractIOReference,
    AbstractIOServices,
)
from pika.adapters.utils.selector_ioloop_adapter import (
    AbstractSelectorIOLoop,
    SelectorIOServicesAdapter,
)

LOGGER = logging.getLogger(__name__)


class GeventConnection(BaseConnection):
    """Implementation of pika's ``BaseConnection``.

    An async selector-based connection which integrates with Gevent.
    """

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None,
                 custom_ioloop=None,
                 internal_connection_workflow=True):
        """Create a new GeventConnection instance and connect to RabbitMQ on
        Gevent's event-loop.
:param pika.connection.Parameters|None parameters: The connection parameters :param callable|None on_open_callback: The method to call when the connection is open :param callable|None on_open_error_callback: Called if the connection can't be established or connection establishment is interrupted by `Connection.close()`: on_open_error_callback(Connection, exception) :param callable|None on_close_callback: Called when a previously fully open connection is closed: `on_close_callback(Connection, exception)`, where `exception` is either an instance of `exceptions.ConnectionClosed` if closed by user or broker or exception of another type that describes the cause of connection failure :param gevent._interfaces.ILoop|nbio_interface.AbstractIOServices|None custom_ioloop: Use a custom Gevent ILoop. :param bool internal_connection_workflow: True for autonomous connection establishment which is default; False for externally-managed connection workflow via the `create_connection()` factory """ if pika.compat.ON_WINDOWS: raise RuntimeError('GeventConnection is not supported on Windows.') custom_ioloop = (custom_ioloop or _GeventSelectorIOLoop(gevent.get_hub())) if isinstance(custom_ioloop, AbstractIOServices): nbio = custom_ioloop else: nbio = _GeventSelectorIOServicesAdapter(custom_ioloop) super(GeventConnection, self).__init__( parameters, on_open_callback, on_open_error_callback, on_close_callback, nbio, internal_connection_workflow=internal_connection_workflow) @classmethod def create_connection(cls, connection_configs, on_done, custom_ioloop=None, workflow=None): """Implement :py:classmethod:`pika.adapters.BaseConnection.create_connection()`. 
""" custom_ioloop = (custom_ioloop or _GeventSelectorIOLoop(gevent.get_hub())) nbio = _GeventSelectorIOServicesAdapter(custom_ioloop) def connection_factory(params): """Connection factory.""" if params is None: raise ValueError('Expected pika.connection.Parameters ' 'instance, but got None in params arg.') return cls(parameters=params, custom_ioloop=nbio, internal_connection_workflow=False) return cls._start_connection_workflow( connection_configs=connection_configs, connection_factory=connection_factory, nbio=nbio, workflow=workflow, on_done=on_done) class _TSafeCallbackQueue(object): """Dispatch callbacks from any thread to be executed in the main thread efficiently with IO events. """ def __init__(self): """ :param _GeventSelectorIOLoop loop: IO loop to add callbacks to. """ # Thread-safe, blocking queue. self._queue = queue.Queue() # PIPE to trigger an event when the queue is ready. self._read_fd, self._write_fd = os.pipe() # Lock around writes to the PIPE in case some platform/implementation # requires this. self._write_lock = threading.RLock() @property def fd(self): """The file-descriptor to register for READ events in the IO loop.""" return self._read_fd def add_callback_threadsafe(self, callback): """Add an item to the queue from any thread. The configured handler will be invoked with the item in the main thread. :param item: Object to add to the queue. """ self._queue.put(callback) with self._write_lock: # The value written is not important. os.write(self._write_fd, b'\xFF') def run_next_callback(self): """Invoke the next callback from the queue. MUST run in the main thread. If no callback was added to the queue, this will block the IO loop. Performs a blocking READ on the pipe so must only be called when the pipe is ready for reading. """ try: callback = self._queue.get_nowait() except queue.Empty: # Should never happen. LOGGER.warning("Callback queue was empty.") else: # Read the byte from the pipe so the event doesn't re-fire. 
os.read(self._read_fd, 1) callback() class _GeventSelectorIOLoop(AbstractSelectorIOLoop): """Implementation of `AbstractSelectorIOLoop` using the Gevent event loop. Required by implementations of `SelectorIOServicesAdapter`. """ # Gevent's READ and WRITE masks are defined as 1 and 2 respectively. No # ERROR mask is defined. # See http://www.gevent.org/api/gevent.hub.html#gevent._interfaces.ILoop.io READ = 1 WRITE = 2 ERROR = 0 def __init__(self, gevent_hub=None): """ :param gevent._interfaces.ILoop gevent_loop: """ self._hub = gevent_hub or gevent.get_hub() self._io_watchers_by_fd = {} # Used to start/stop the loop. self._waiter = gevent.hub.Waiter() # For adding callbacks from other threads. See `add_callback(..)`. self._callback_queue = _TSafeCallbackQueue() def run_callback_in_main_thread(fd, events): """Swallow the fd and events arguments.""" del fd del events self._callback_queue.run_next_callback() self.add_handler(self._callback_queue.fd, run_callback_in_main_thread, self.READ) def close(self): """Release the loop's resources.""" self._hub.loop.destroy() self._hub = None def start(self): """Run the I/O loop. It will loop until requested to exit. See `stop()`. """ LOGGER.debug("Passing control to Gevent's IOLoop") self._waiter.get() # Block until 'stop()' is called. LOGGER.debug("Control was passed back from Gevent's IOLoop") self._waiter.clear() def stop(self): """Request exit from the ioloop. The loop is NOT guaranteed to stop before this method returns. To invoke `stop()` safely from a thread other than this IOLoop's thread, call it via `add_callback_threadsafe`; e.g., `ioloop.add_callback(ioloop.stop)` """ self._waiter.switch(None) def add_callback(self, callback): """Requests a call to the given function as soon as possible in the context of this IOLoop's thread. NOTE: This is the only thread-safe method in IOLoop. All other manipulations of IOLoop must be performed from the IOLoop's thread. 
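The queue-plus-pipe mechanism that `_TSafeCallbackQueue` implements above can be sketched on its own. `CallbackPipeQueue` and its attribute names are illustrative stand-ins, not part of pika's API; the point is that a selector loop can only wait on file descriptors, so cross-thread work is signalled by making an fd readable.

```python
import os
import queue
import threading

class CallbackPipeQueue:
    """Sketch of the thread-safe queue-plus-pipe wakeup pattern."""

    def __init__(self):
        self._queue = queue.Queue()  # thread-safe callback storage
        self._read_fd, self._write_fd = os.pipe()
        self._write_lock = threading.Lock()

    def add_callback_threadsafe(self, callback):
        self._queue.put(callback)
        with self._write_lock:
            os.write(self._write_fd, b'\xFF')  # value written is unimportant

    def run_next_callback(self):
        os.read(self._read_fd, 1)  # consume one wakeup byte
        self._queue.get_nowait()()

results = []
cbq = CallbackPipeQueue()
worker = threading.Thread(
    target=cbq.add_callback_threadsafe,
    args=(lambda: results.append('ran in main thread'),))
worker.start()
worker.join()
cbq.run_next_callback()  # would normally fire from the IO loop's READ event
print(results)  # ['ran in main thread']
```

In the adapter itself, `run_next_callback()` is invoked by the IO loop's READ handler for the pipe's read end rather than called directly.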
For example, a thread may request a call to the `stop` method of an ioloop that is running in a different thread via `ioloop.add_callback_threadsafe(ioloop.stop)` :param callable callback: The callback method """ if gevent.get_hub() == self._hub: # We're in the main thread; just add the callback. LOGGER.debug("Adding callback from main thread") self._hub.loop.run_callback(callback) else: # This isn't the main thread and Gevent's hub/loop don't provide # any thread-safety so enqueue the callback for it to be registered # in the main thread. LOGGER.debug("Adding callback from another thread") callback = functools.partial(self._hub.loop.run_callback, callback) self._callback_queue.add_callback_threadsafe(callback) def call_later(self, delay, callback): """Add the callback to the IOLoop timer to be called after delay seconds from the time of call on best-effort basis. Returns a handle to the timeout. :param float delay: The number of seconds to wait to call callback :param callable callback: The callback method :returns: handle to the created timeout that may be passed to `remove_timeout()` :rtype: object """ timer = self._hub.loop.timer(delay) timer.start(callback) return timer def remove_timeout(self, timeout_handle): """Remove a timeout :param timeout_handle: Handle of timeout to remove """ timeout_handle.close() def add_handler(self, fd, handler, events): """Start watching the given file descriptor for events :param int fd: The file descriptor :param callable handler: When requested event(s) occur, `handler(fd, events)` will be called. :param int events: The event mask (READ|WRITE) """ io_watcher = self._hub.loop.io(fd, events) self._io_watchers_by_fd[fd] = io_watcher io_watcher.start(handler, fd, events) def update_handler(self, fd, events): """Change the events being watched for. :param int fd: The file descriptor :param int events: The new event mask (READ|WRITE) """ io_watcher = self._io_watchers_by_fd[fd] # Save callback from the original watcher. 
        # Then close the old watcher
        # and create a new one using the saved callback and the new events.
        callback = io_watcher.callback
        io_watcher.close()
        del self._io_watchers_by_fd[fd]
        self.add_handler(fd, callback, events)

    def remove_handler(self, fd):
        """Stop watching the given file descriptor for events

        :param int fd: The file descriptor
        """
        io_watcher = self._io_watchers_by_fd[fd]
        io_watcher.close()
        del self._io_watchers_by_fd[fd]


class _GeventSelectorIOServicesAdapter(SelectorIOServicesAdapter):
    """SelectorIOServicesAdapter implementation using Gevent's DNS resolver."""

    def getaddrinfo(self,
                    host,
                    port,
                    on_done,
                    family=0,
                    socktype=0,
                    proto=0,
                    flags=0):
        """Implement
        :py:meth:`.nbio_interface.AbstractIOServices.getaddrinfo()`.
        """
        resolver = _GeventAddressResolver(native_loop=self._loop,
                                          host=host,
                                          port=port,
                                          family=family,
                                          socktype=socktype,
                                          proto=proto,
                                          flags=flags,
                                          on_done=on_done)
        resolver.start()
        # Return needs an implementation of `AbstractIOReference`.
        return _GeventIOLoopIOHandle(resolver)


class _GeventIOLoopIOHandle(AbstractIOReference):
    """Implement `AbstractIOReference`.

    Only used to wrap the _GeventAddressResolver.
    """

    def __init__(self, subject):
        """
        :param subject: subject of the reference containing a `cancel()` method
        """
        self._cancel = subject.cancel

    def cancel(self):
        """Cancel pending operation

        :returns: False if was already done or cancelled; True otherwise
        :rtype: bool
        """
        return self._cancel()


class _GeventAddressResolver(object):
    """Performs getaddrinfo asynchronously using Gevent's configured resolver
    in a separate greenlet, invoking the provided callback with the result.

    See: http://www.gevent.org/dns.html
    """
    __slots__ = (
        '_loop',
        '_on_done',
        '_greenlet',
        # getaddrinfo(..) args:
        '_ga_host',
        '_ga_port',
        '_ga_family',
        '_ga_socktype',
        '_ga_proto',
        '_ga_flags')

    def __init__(self, native_loop, host, port, family, socktype, proto,
                 flags, on_done):
        """Initialize the `_GeventAddressResolver`.
:param AbstractSelectorIOLoop native_loop: :param host: `see socket.getaddrinfo()` :param port: `see socket.getaddrinfo()` :param family: `see socket.getaddrinfo()` :param socktype: `see socket.getaddrinfo()` :param proto: `see socket.getaddrinfo()` :param flags: `see socket.getaddrinfo()` :param on_done: on_done(records|BaseException) callback for reporting result from the given I/O loop. The single arg will be either an exception object (check for `BaseException`) in case of failure or the result returned by `socket.getaddrinfo()`. """ check_callback_arg(on_done, 'on_done') self._loop = native_loop self._on_done = on_done # Reference to the greenlet performing `getaddrinfo`. self._greenlet = None # getaddrinfo(..) args. self._ga_host = host self._ga_port = port self._ga_family = family self._ga_socktype = socktype self._ga_proto = proto self._ga_flags = flags def start(self): """Start an asynchronous getaddrinfo invocation.""" if self._greenlet is None: self._greenlet = gevent.spawn_raw(self._resolve) else: LOGGER.warning("_GeventAddressResolver already started") def cancel(self): """Cancel the pending resolver.""" changed = False if self._greenlet is not None: changed = True self._stop_greenlet() self._cleanup() return changed def _cleanup(self): """Stop the resolver and release any resources.""" self._stop_greenlet() self._loop = None self._on_done = None def _stop_greenlet(self): """Stop the greenlet performing getaddrinfo if running. Otherwise, this is a no-op. """ if self._greenlet is not None: gevent.kill(self._greenlet) self._greenlet = None def _resolve(self): """Call `getaddrinfo()` and return result via user's callback function on the configured IO loop. """ try: # NOTE(JG): Can't use kwargs with getaddrinfo on Python <= v2.7. 
            result = gevent.socket.getaddrinfo(self._ga_host, self._ga_port,
                                               self._ga_family,
                                               self._ga_socktype,
                                               self._ga_proto, self._ga_flags)
        except Exception as exc:  # pylint: disable=broad-except
            LOGGER.error('Address resolution failed: %r', exc)
            result = exc

        callback = functools.partial(self._dispatch_callback, result)
        self._loop.add_callback(callback)

    def _dispatch_callback(self, result):
        """Invoke the configured completion callback and any subsequent
        cleanup.

        :param result: result from getaddrinfo, or the exception if raised.
        """
        try:
            LOGGER.debug(
                'Invoking async getaddrinfo() completion callback; host=%r',
                self._ga_host)
            self._on_done(result)
        finally:
            self._cleanup()

pika-1.2.0/pika/adapters/select_connection.py

"""A connection adapter that tries to use the best polling method for the
platform pika is running on.
"""

import abc
import collections
import errno
import heapq
import logging
import select
import time
import threading

import pika.compat
from pika.adapters.utils import nbio_interface
from pika.adapters.base_connection import BaseConnection
from pika.adapters.utils.selector_ioloop_adapter import (
    SelectorIOServicesAdapter, AbstractSelectorIOLoop)

LOGGER = logging.getLogger(__name__)

# One of select, epoll, kqueue or poll
SELECT_TYPE = None

# Reason for this unconventional dict initialization is the fact that on some
# platforms select.error is an alias for OSError. We don't want the lambda
# for select.error to win over one for OSError.
_SELECT_ERROR_CHECKERS = {} if pika.compat.PY3: # InterruptedError is undefined in PY2 # pylint: disable=E0602 _SELECT_ERROR_CHECKERS[InterruptedError] = lambda e: True _SELECT_ERROR_CHECKERS[select.error] = lambda e: e.args[0] == errno.EINTR _SELECT_ERROR_CHECKERS[IOError] = lambda e: e.errno == errno.EINTR _SELECT_ERROR_CHECKERS[OSError] = lambda e: e.errno == errno.EINTR # We can reduce the number of elements in the list by looking at super-sub # class relationship because only the most generic ones needs to be caught. # For now the optimization is left out. # Following is better but still incomplete. # _SELECT_ERRORS = tuple(filter(lambda e: not isinstance(e, OSError), # _SELECT_ERROR_CHECKERS.keys()) # + [OSError]) _SELECT_ERRORS = tuple(_SELECT_ERROR_CHECKERS.keys()) def _is_resumable(exc): """Check if caught exception represents EINTR error. :param exc: exception; must be one of classes in _SELECT_ERRORS """ checker = _SELECT_ERROR_CHECKERS.get(exc.__class__, None) if checker is not None: return checker(exc) else: return False class SelectConnection(BaseConnection): """An asynchronous connection adapter that attempts to use the fastest event loop adapter for the given platform. """ def __init__( self, # pylint: disable=R0913 parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, custom_ioloop=None, internal_connection_workflow=True): """Create a new instance of the Connection object. :param pika.connection.Parameters parameters: Connection parameters :param callable on_open_callback: Method to call on connection open :param None | method on_open_error_callback: Called if the connection can't be established or connection establishment is interrupted by `Connection.close()`: on_open_error_callback(Connection, exception). 
:param None | method on_close_callback: Called when a previously fully open connection is closed: `on_close_callback(Connection, exception)`, where `exception` is either an instance of `exceptions.ConnectionClosed` if closed by user or broker or exception of another type that describes the cause of connection failure. :param None | IOLoop | nbio_interface.AbstractIOServices custom_ioloop: Provide a custom I/O Loop object. :param bool internal_connection_workflow: True for autonomous connection establishment which is default; False for externally-managed connection workflow via the `create_connection()` factory. :raises: RuntimeError """ if isinstance(custom_ioloop, nbio_interface.AbstractIOServices): nbio = custom_ioloop else: nbio = SelectorIOServicesAdapter(custom_ioloop or IOLoop()) super(SelectConnection, self).__init__( parameters, on_open_callback, on_open_error_callback, on_close_callback, nbio, internal_connection_workflow=internal_connection_workflow) @classmethod def create_connection(cls, connection_configs, on_done, custom_ioloop=None, workflow=None): """Implement :py:classmethod:`pika.adapters.BaseConnection.create_connection()`. 
""" nbio = SelectorIOServicesAdapter(custom_ioloop or IOLoop()) def connection_factory(params): """Connection factory.""" if params is None: raise ValueError('Expected pika.connection.Parameters ' 'instance, but got None in params arg.') return cls( parameters=params, custom_ioloop=nbio, internal_connection_workflow=False) return cls._start_connection_workflow( connection_configs=connection_configs, connection_factory=connection_factory, nbio=nbio, workflow=workflow, on_done=on_done) def _get_write_buffer_size(self): """ :returns: Current size of output data buffered by the transport :rtype: int """ return self._transport.get_write_buffer_size() class _Timeout(object): """Represents a timeout""" __slots__ = ( 'deadline', 'callback', ) def __init__(self, deadline, callback): """ :param float deadline: timer expiration as non-negative epoch number :param callable callback: callback to call when timeout expires :raises ValueError, TypeError: """ if deadline < 0: raise ValueError( 'deadline must be non-negative epoch number, but got %r' % (deadline,)) if not callable(callback): raise TypeError( 'callback must be a callable, but got %r' % (callback,)) self.deadline = deadline self.callback = callback def __eq__(self, other): """NOTE: not supporting sort stability""" if isinstance(other, _Timeout): return self.deadline == other.deadline return NotImplemented def __ne__(self, other): """NOTE: not supporting sort stability""" result = self.__eq__(other) if result is not NotImplemented: return not result return NotImplemented def __lt__(self, other): """NOTE: not supporting sort stability""" if isinstance(other, _Timeout): return self.deadline < other.deadline return NotImplemented def __gt__(self, other): """NOTE: not supporting sort stability""" if isinstance(other, _Timeout): return self.deadline > other.deadline return NotImplemented def __le__(self, other): """NOTE: not supporting sort stability""" if isinstance(other, _Timeout): return self.deadline <= other.deadline 
return NotImplemented def __ge__(self, other): """NOTE: not supporting sort stability""" if isinstance(other, _Timeout): return self.deadline >= other.deadline return NotImplemented class _Timer(object): """Manage timeouts for use in ioloop""" # Cancellation count threshold for triggering garbage collection of # cancelled timers _GC_CANCELLATION_THRESHOLD = 1024 def __init__(self): self._timeout_heap = [] # Number of canceled timeouts on heap; for scheduling garbage # collection of canceled timeouts self._num_cancellations = 0 def close(self): """Release resources. Don't use the `_Timer` instance after closing it """ # Eliminate potential reference cycles to aid garbage-collection if self._timeout_heap is not None: for timeout in self._timeout_heap: timeout.callback = None self._timeout_heap = None def call_later(self, delay, callback): """Schedule a one-shot timeout given delay seconds. NOTE: you may cancel the timer before dispatch of the callback. Timer Manager cancels the timer upon dispatch of the callback. 
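The heap-plus-lazy-cancellation scheme `_Timer` uses above (deactivate the callback in place, reap the dead entry when it surfaces at the heap top) can be reduced to a standalone sketch. `MiniTimer` is an illustrative simplification, not pika's implementation; it uses the standard `heapq` entry-invalidation recipe with a counter as tie-breaker for equal deadlines.

```python
import heapq
import itertools
import time

class MiniTimer:
    """Sketch of a heap-backed one-shot timer with lazy cancellation."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal deadlines

    def call_later(self, delay, callback):
        # Entries are mutable lists so cancellation can just null the callback.
        entry = [time.monotonic() + delay, next(self._counter), callback]
        heapq.heappush(self._heap, entry)
        return entry

    def remove_timeout(self, entry):
        entry[2] = None  # deactivate; reaped when it reaches the heap top

    def process_timeouts(self):
        now = time.monotonic()
        while self._heap and self._heap[0][0] <= now:
            entry = heapq.heappop(self._heap)
            if entry[2] is not None:
                entry[2]()

fired = []
timer = MiniTimer()
timer.call_later(0.0, lambda: fired.append('a'))
cancelled = timer.call_later(0.0, lambda: fired.append('b'))
timer.remove_timeout(cancelled)
timer.process_timeouts()
print(fired)  # ['a']
```

Deactivating instead of removing avoids an O(n) heap search on every cancellation, at the cost of the periodic garbage collection `_Timer` performs once enough cancellations accumulate.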
:param float delay: Non-negative number of seconds from now until expiration :param callable callback: The callback method, having the signature `callback()` :rtype: _Timeout :raises ValueError, TypeError """ if self._timeout_heap is None: raise ValueError("Timeout closed before call") if delay < 0: raise ValueError( 'call_later: delay must be non-negative, but got %r' % (delay,)) now = pika.compat.time_now() timeout = _Timeout(now + delay, callback) heapq.heappush(self._timeout_heap, timeout) LOGGER.debug( 'call_later: added timeout %r with deadline=%r and ' 'callback=%r; now=%s; delay=%s', timeout, timeout.deadline, timeout.callback, now, delay) return timeout def remove_timeout(self, timeout): """Cancel the timeout :param _Timeout timeout: The timer to cancel """ # NOTE removing from the heap is difficult, so we just deactivate the # timeout and garbage-collect it at a later time; see discussion # in http://docs.python.org/library/heapq.html if timeout.callback is None: LOGGER.debug( 'remove_timeout: timeout was already removed or called %r', timeout) else: LOGGER.debug( 'remove_timeout: removing timeout %r with deadline=%r ' 'and callback=%r', timeout, timeout.deadline, timeout.callback) timeout.callback = None self._num_cancellations += 1 def get_remaining_interval(self): """Get the interval to the next timeout expiration :returns: non-negative number of seconds until next timer expiration; None if there are no timers :rtype: float """ if self._timeout_heap: now = pika.compat.time_now() interval = max(0, self._timeout_heap[0].deadline - now) else: interval = None return interval def process_timeouts(self): """Process pending timeouts, invoking callbacks for those whose time has come """ if self._timeout_heap: now = pika.compat.time_now() # Remove ready timeouts from the heap now to prevent IO starvation # from timeouts added during callback processing ready_timeouts = [] while self._timeout_heap and self._timeout_heap[0].deadline <= now: timeout = 
heapq.heappop(self._timeout_heap) if timeout.callback is not None: ready_timeouts.append(timeout) else: self._num_cancellations -= 1 # Invoke ready timeout callbacks for timeout in ready_timeouts: if timeout.callback is None: # Must have been canceled from a prior callback self._num_cancellations -= 1 continue timeout.callback() timeout.callback = None # Garbage-collect canceled timeouts if they exceed threshold if (self._num_cancellations >= self._GC_CANCELLATION_THRESHOLD and self._num_cancellations > (len(self._timeout_heap) >> 1)): self._num_cancellations = 0 self._timeout_heap = [ t for t in self._timeout_heap if t.callback is not None ] heapq.heapify(self._timeout_heap) class PollEvents(object): """Event flags for I/O""" # Use epoll's constants to keep life easy READ = getattr(select, 'POLLIN', 0x01) # available for read WRITE = getattr(select, 'POLLOUT', 0x04) # available for write ERROR = getattr(select, 'POLLERR', 0x08) # error on associated fd class IOLoop(AbstractSelectorIOLoop): """I/O loop implementation that picks a suitable poller (`select`, `poll`, `epoll`, `kqueue`) to use based on platform. Implements the `pika.adapters.utils.selector_ioloop_adapter.AbstractSelectorIOLoop` interface. """ # READ/WRITE/ERROR per `AbstractSelectorIOLoop` requirements READ = PollEvents.READ WRITE = PollEvents.WRITE ERROR = PollEvents.ERROR def __init__(self): self._timer = _Timer() # Callbacks requested via `add_callback` self._callbacks = collections.deque() self._poller = self._get_poller(self._get_remaining_interval, self.process_timeouts) def close(self): """Release IOLoop's resources. `IOLoop.close` is intended to be called by the application or test code only after `IOLoop.start()` returns. After calling `close()`, no other interaction with the closed instance of `IOLoop` should be performed. 
""" if self._callbacks is not None: self._poller.close() self._timer.close() # Set _callbacks to empty list rather than None so that race from # another thread calling add_callback_threadsafe() won't result in # AttributeError self._callbacks = [] @staticmethod def _get_poller(get_wait_seconds, process_timeouts): """Determine the best poller to use for this environment and instantiate it. :param get_wait_seconds: Function for getting the maximum number of seconds to wait for IO for use by the poller :param process_timeouts: Function for processing timeouts for use by the poller :returns: The instantiated poller instance supporting `_PollerBase` API :rtype: object """ poller = None kwargs = dict( get_wait_seconds=get_wait_seconds, process_timeouts=process_timeouts) if hasattr(select, 'epoll'): if not SELECT_TYPE or SELECT_TYPE == 'epoll': LOGGER.debug('Using EPollPoller') poller = EPollPoller(**kwargs) if not poller and hasattr(select, 'kqueue'): if not SELECT_TYPE or SELECT_TYPE == 'kqueue': LOGGER.debug('Using KQueuePoller') poller = KQueuePoller(**kwargs) if (not poller and hasattr(select, 'poll') and hasattr(select.poll(), 'modify')): # pylint: disable=E1101 if not SELECT_TYPE or SELECT_TYPE == 'poll': LOGGER.debug('Using PollPoller') poller = PollPoller(**kwargs) if not poller: LOGGER.debug('Using SelectPoller') poller = SelectPoller(**kwargs) return poller def call_later(self, delay, callback): """Add the callback to the IOLoop timer to be called after delay seconds from the time of call on best-effort basis. Returns a handle to the timeout. 
:param float delay: The number of seconds to wait to call callback :param callable callback: The callback method :returns: handle to the created timeout that may be passed to `remove_timeout()` :rtype: object """ return self._timer.call_later(delay, callback) def remove_timeout(self, timeout_handle): """Remove a timeout :param timeout_handle: Handle of timeout to remove """ self._timer.remove_timeout(timeout_handle) def add_callback_threadsafe(self, callback): """Requests a call to the given function as soon as possible in the context of this IOLoop's thread. NOTE: This is the only thread-safe method in IOLoop. All other manipulations of IOLoop must be performed from the IOLoop's thread. For example, a thread may request a call to the `stop` method of an ioloop that is running in a different thread via `ioloop.add_callback_threadsafe(ioloop.stop)` :param callable callback: The callback method """ if not callable(callback): raise TypeError( 'callback must be a callable, but got %r' % (callback,)) # NOTE: `deque.append` is atomic self._callbacks.append(callback) # Wake up the IOLoop which may be running in another thread self._poller.wake_threadsafe() LOGGER.debug('add_callback_threadsafe: added callback=%r', callback) # To satisfy `AbstractSelectorIOLoop` requirement add_callback = add_callback_threadsafe def process_timeouts(self): """[Extension] Process pending callbacks and timeouts, invoking those whose time has come. Internal use only. """ # Avoid I/O starvation by postponing new callbacks to the next iteration for _ in pika.compat.xrange(len(self._callbacks)): callback = self._callbacks.popleft() LOGGER.debug('process_timeouts: invoking callback=%r', callback) callback() self._timer.process_timeouts() def _get_remaining_interval(self): """Get the remaining interval to the next callback or timeout expiration. 
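The wait-time rule described here, where pending callbacks force a zero wait and otherwise the earliest timer deadline (capped by a backstop) bounds the poll, can be condensed into a small helper. `get_poll_wait` and its parameters are illustrative names, not pika's API.

```python
import collections

def get_poll_wait(callbacks, timer_deadlines, now, max_wait=5.0):
    """Compute how long poll() may block, per the rules described above."""
    if callbacks:
        return 0.0  # ready callbacks must run without blocking in poll()
    if not timer_deadlines:
        return max_wait  # no timers: fall back to the backstop timeout
    # Otherwise wait until the earliest deadline, never negative, capped.
    return min(max(0.0, min(timer_deadlines) - now), max_wait)

print(get_poll_wait(collections.deque(), [10.5], now=10.0))  # 0.5
```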
:returns: non-negative number of seconds until next callback or timer expiration; None if there are no callbacks and timers :rtype: float """ if self._callbacks: return 0 return self._timer.get_remaining_interval() def add_handler(self, fd, handler, events): """Start watching the given file descriptor for events :param int fd: The file descriptor :param callable handler: When requested event(s) occur, `handler(fd, events)` will be called. :param int events: The event mask using READ, WRITE, ERROR. """ self._poller.add_handler(fd, handler, events) def update_handler(self, fd, events): """Changes the events we watch for :param int fd: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ self._poller.update_handler(fd, events) def remove_handler(self, fd): """Stop watching the given file descriptor for events :param int fd: The file descriptor """ self._poller.remove_handler(fd) def start(self): """[API] Start the main poller loop. It will loop until requested to exit. See `IOLoop.stop`. """ self._poller.start() def stop(self): """[API] Request exit from the ioloop. The loop is NOT guaranteed to stop before this method returns. To invoke `stop()` safely from a thread other than this IOLoop's thread, call it via `add_callback_threadsafe`; e.g., `ioloop.add_callback_threadsafe(ioloop.stop)` """ self._poller.stop() def activate_poller(self): """[Extension] Activate the poller """ self._poller.activate_poller() def deactivate_poller(self): """[Extension] Deactivate the poller """ self._poller.deactivate_poller() def poll(self): """[Extension] Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or `_PollerBase._MAX_POLL_TIMEOUT`, whichever is sooner, and dispatch the corresponding event handlers. 
""" self._poller.poll() class _PollerBase(pika.compat.AbstractBase): # pylint: disable=R0902 """Base class for select-based IOLoop implementations""" # Drop out of the poll loop every _MAX_POLL_TIMEOUT secs as a worst case; # this is only a backstop value; we will run timeouts when they are # scheduled. _MAX_POLL_TIMEOUT = 5 # if the poller uses MS override with 1000 POLL_TIMEOUT_MULT = 1 def __init__(self, get_wait_seconds, process_timeouts): """ :param get_wait_seconds: Function for getting the maximum number of seconds to wait for IO for use by the poller :param process_timeouts: Function for processing timeouts for use by the poller """ self._get_wait_seconds = get_wait_seconds self._process_timeouts = process_timeouts # We guard access to the waking file descriptors to avoid races from # closing them while another thread is calling our `wake()` method. self._waking_mutex = threading.Lock() # fd-to-handler function mappings self._fd_handlers = dict() # event-to-fdset mappings self._fd_events = { PollEvents.READ: set(), PollEvents.WRITE: set(), PollEvents.ERROR: set() } self._processing_fd_event_map = {} # Reentrancy tracker of the `start` method self._running = False self._stopping = False # Create ioloop-interrupt socket pair and register read handler. self._r_interrupt, self._w_interrupt = self._get_interrupt_pair() self.add_handler(self._r_interrupt.fileno(), self._read_interrupt, PollEvents.READ) def close(self): """Release poller's resources. `close()` is intended to be called after the poller's `start()` method returns. After calling `close()`, no other interaction with the closed poller instance should be performed. """ # Unregister and close ioloop-interrupt socket pair; mutual exclusion is # necessary to avoid race condition with `wake_threadsafe` executing in # another thread's context assert not self._running, 'Cannot call close() before start() unwinds.' 
with self._waking_mutex: if self._w_interrupt is not None: self.remove_handler(self._r_interrupt.fileno()) # pylint: disable=E1101 self._r_interrupt.close() self._r_interrupt = None self._w_interrupt.close() self._w_interrupt = None self.deactivate_poller() self._fd_handlers = None self._fd_events = None self._processing_fd_event_map = None def wake_threadsafe(self): """Wake up the poller as soon as possible. As the name indicates, this method is thread-safe. """ with self._waking_mutex: if self._w_interrupt is None: return try: # Send byte to interrupt the poll loop, use send() instead of # os.write for Windows compatibility self._w_interrupt.send(b'X') except pika.compat.SOCKET_ERROR as err: if err.errno != errno.EWOULDBLOCK: raise except Exception as err: # There's nothing sensible to do here, we'll exit the interrupt # loop after POLL_TIMEOUT secs in worst case anyway. LOGGER.warning("Failed to send interrupt to poller: %s", err) raise def _get_max_wait(self): """Get the interval to the next timeout event, or a default interval :returns: maximum number of self.POLL_TIMEOUT_MULT-scaled time units to wait for IO events :rtype: int """ delay = self._get_wait_seconds() if delay is None: delay = self._MAX_POLL_TIMEOUT else: delay = min(delay, self._MAX_POLL_TIMEOUT) return delay * self.POLL_TIMEOUT_MULT def add_handler(self, fileno, handler, events): """Add a new fileno to the set to be monitored :param int fileno: The file descriptor :param callable handler: What is called when an event happens :param int events: The event mask using READ, WRITE, ERROR """ self._fd_handlers[fileno] = handler self._set_handler_events(fileno, events) # Inform the derived class self._register_fd(fileno, events) def update_handler(self, fileno, events): """Set the events to the current events :param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ # Record the change events_cleared, events_set = self._set_handler_events(fileno, events) # 
Inform the derived class self._modify_fd_events( fileno, events=events, events_to_clear=events_cleared, events_to_set=events_set) def remove_handler(self, fileno): """Remove a file descriptor from the set :param int fileno: The file descriptor """ try: del self._processing_fd_event_map[fileno] except KeyError: pass events_cleared, _ = self._set_handler_events(fileno, 0) del self._fd_handlers[fileno] # Inform the derived class self._unregister_fd(fileno, events_to_clear=events_cleared) def _set_handler_events(self, fileno, events): """Set the handler's events to the given events; internal to `_PollerBase`. :param int fileno: The file descriptor :param int events: The event mask (READ, WRITE, ERROR) :returns: a 2-tuple (events_cleared, events_set) :rtype: tuple """ events_cleared = 0 events_set = 0 for evt in (PollEvents.READ, PollEvents.WRITE, PollEvents.ERROR): if events & evt: if fileno not in self._fd_events[evt]: self._fd_events[evt].add(fileno) events_set |= evt else: if fileno in self._fd_events[evt]: self._fd_events[evt].discard(fileno) events_cleared |= evt return events_cleared, events_set def activate_poller(self): """Activate the poller """ # Activate the underlying poller and register current events self._init_poller() fd_to_events = collections.defaultdict(int) for event, file_descriptors in self._fd_events.items(): for fileno in file_descriptors: fd_to_events[fileno] |= event for fileno, events in fd_to_events.items(): self._register_fd(fileno, events) def deactivate_poller(self): """Deactivate the poller """ self._uninit_poller() def start(self): """Start the main poller loop. It will loop until requested to exit. 
        This method is not reentrant and will raise an error if called
        recursively (pika/pika#1095)

        :raises: RuntimeError
        """
        if self._running:
            raise RuntimeError(
                'IOLoop is not reentrant and is already running')

        LOGGER.debug('Entering IOLoop')
        self._running = True
        self.activate_poller()

        try:
            # Run event loop
            while not self._stopping:
                self.poll()
                self._process_timeouts()
        finally:
            try:
                LOGGER.debug('Deactivating poller')
                self.deactivate_poller()
            finally:
                self._stopping = False
                self._running = False

    def stop(self):
        """Request exit from the ioloop. The loop is NOT guaranteed to stop
        before this method returns.
        """
        LOGGER.debug('Stopping IOLoop')
        self._stopping = True
        self.wake_threadsafe()

    @abc.abstractmethod
    def poll(self):
        """Wait for events on interested file descriptors.
        """
        raise NotImplementedError

    @abc.abstractmethod
    def _init_poller(self):
        """Notify the implementation to allocate the poller resource"""
        raise NotImplementedError

    @abc.abstractmethod
    def _uninit_poller(self):
        """Notify the implementation to release the poller resource"""
        raise NotImplementedError

    @abc.abstractmethod
    def _register_fd(self, fileno, events):
        """The base class invokes this method to notify the implementation to
        register the file descriptor with the polling object. The request must
        be ignored if the poller is not activated.

        :param int fileno: The file descriptor
        :param int events: The event mask (READ, WRITE, ERROR)
        """
        raise NotImplementedError

    @abc.abstractmethod
    def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set):
        """The base class invokes this method to notify the implementation to
        modify an already registered file descriptor. The request must be
        ignored if the poller is not activated.
:param int fileno: The file descriptor :param int events: absolute events (READ, WRITE, ERROR) :param int events_to_clear: The events to clear (READ, WRITE, ERROR) :param int events_to_set: The events to set (READ, WRITE, ERROR) """ raise NotImplementedError @abc.abstractmethod def _unregister_fd(self, fileno, events_to_clear): """The base class invokes this method to notify the implementation to unregister the file descriptor being tracked by the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ raise NotImplementedError def _dispatch_fd_events(self, fd_event_map): """ Helper to dispatch callbacks for file descriptors that received events. Before doing so we re-calculate the event mask based on what is currently set in case it has been changed under our feet by a previous callback. We also store a reference to the fd_event_map so that we can detect removal of a fileno during processing of another callback and not generate spurious callbacks on it. :param dict fd_event_map: Map of fds to events received on them. """ # Reset the prior map; if the call is nested, this will suppress the # remaining dispatch in the earlier call. self._processing_fd_event_map.clear() self._processing_fd_event_map = fd_event_map for fileno in pika.compat.dictkeys(fd_event_map): if fileno not in fd_event_map: # the fileno has been removed from the map under our feet. continue events = fd_event_map[fileno] for evt in [PollEvents.READ, PollEvents.WRITE, PollEvents.ERROR]: if fileno not in self._fd_events[evt]: events &= ~evt if events: handler = self._fd_handlers[fileno] handler(fileno, events) @staticmethod def _get_interrupt_pair(): """ Use a socketpair to be able to interrupt the ioloop if called from another thread. Socketpair() is not supported on some OS (Win) so use a pair of simple TCP sockets instead.
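The interrupt-socketpair idea can be demonstrated without pika: register one end of a socketpair with `select()`, and a one-byte write from any other thread wakes the blocked loop. A sketch, assuming `socket.socketpair` is available (it is on POSIX and on modern Python for Windows; pika uses its own compat shim instead):

```python
import select
import socket
import threading

# One end stays in the poller; writing to the other end wakes it.
r_interrupt, w_interrupt = socket.socketpair()
r_interrupt.setblocking(False)

def wake_later():
    # Any thread can interrupt the blocked select() with a single byte.
    w_interrupt.send(b'X')

threading.Timer(0.05, wake_later).start()

# Blocks until the wake-up byte arrives (or the 5s safety timeout elapses).
readable, _, _ = select.select([r_interrupt], [], [], 5.0)
assert r_interrupt in readable
r_interrupt.recv(512)  # drain, mirroring _read_interrupt below
r_interrupt.close()
w_interrupt.close()
```

Draining with a generous `recv(512)` rather than a single-byte read matches `_read_interrupt`: multiple queued wake-ups collapse into one poll cycle.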
The sockets will be closed and garbage collected by Python when the ioloop itself is. """ return pika.compat._nonblocking_socketpair() # pylint: disable=W0212 def _read_interrupt(self, _interrupt_fd, _events): """ Read the interrupt byte(s). We ignore the event mask as we can only get here if there's data to be read on our fd. :param int _interrupt_fd: (unused) The file descriptor to read from :param int _events: (unused) The events generated for this fd """ try: # NOTE Use recv instead of os.read for windows compatibility self._r_interrupt.recv(512) # pylint: disable=E1101 except pika.compat.SOCKET_ERROR as err: if err.errno != errno.EAGAIN: raise class SelectPoller(_PollerBase): """Default behavior is to use Select since it's the widest supported and has all of the methods we need for child classes as well. One should only need to override the update_handler and start methods for additional types. """ # if the poller uses MS specify 1000 POLL_TIMEOUT_MULT = 1 def poll(self): """Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT, whichever is sooner, and dispatch the corresponding event handlers. """ while True: try: if (self._fd_events[PollEvents.READ] or self._fd_events[PollEvents.WRITE] or self._fd_events[PollEvents.ERROR]): read, write, error = select.select( self._fd_events[PollEvents.READ], self._fd_events[PollEvents.WRITE], self._fd_events[PollEvents.ERROR], self._get_max_wait()) else: # NOTE When called without any FDs, select fails on # Windows with error 10022, 'An invalid argument was # supplied'.
time.sleep(self._get_max_wait()) read, write, error = [], [], [] break except _SELECT_ERRORS as error: if _is_resumable(error): continue else: raise # Build an event bit mask for each fileno for which we've received an event fd_event_map = collections.defaultdict(int) for fd_set, evt in zip( (read, write, error), (PollEvents.READ, PollEvents.WRITE, PollEvents.ERROR)): for fileno in fd_set: fd_event_map[fileno] |= evt self._dispatch_fd_events(fd_event_map) def _init_poller(self): """Notify the implementation to allocate the poller resource""" # It's a no op in SelectPoller def _uninit_poller(self): """Notify the implementation to release the poller resource""" # It's a no op in SelectPoller def _register_fd(self, fileno, events): """The base class invokes this method to notify the implementation to register the file descriptor with the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ # It's a no op in SelectPoller def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set): """The base class invokes this method to notify the implementation to modify an already registered file descriptor. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: absolute events (READ, WRITE, ERROR) :param int events_to_clear: The events to clear (READ, WRITE, ERROR) :param int events_to_set: The events to set (READ, WRITE, ERROR) """ # It's a no op in SelectPoller def _unregister_fd(self, fileno, events_to_clear): """The base class invokes this method to notify the implementation to unregister the file descriptor being tracked by the polling object. The request must be ignored if the poller is not activated.
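The bitmask-building loop in `SelectPoller.poll` above — zip the three fd lists returned by `select.select` with their event flags and OR them per fd — works standalone. A sketch with illustrative flag values (not pika's actual `PollEvents` constants):

```python
import collections

READ, WRITE, ERROR = 0x01, 0x02, 0x04  # illustrative, not pika's values

def build_fd_event_map(read_fds, write_fds, error_fds):
    """Combine select()-style result lists into one fd -> event-mask map."""
    fd_event_map = collections.defaultdict(int)
    for fd_set, evt in zip((read_fds, write_fds, error_fds),
                           (READ, WRITE, ERROR)):
        for fileno in fd_set:
            fd_event_map[fileno] |= evt
    return fd_event_map

# fd 4 is both readable and writable; fd 7 only errored.
mask = build_fd_event_map([4], [4], [7])
print(dict(mask))  # {4: 3, 7: 4}
```

The `defaultdict(int)` makes the `|=` accumulation safe for fds that appear in more than one of the three lists.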
:param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ # It's a no op in SelectPoller class KQueuePoller(_PollerBase): # pylint: disable=E1101 """KQueuePoller works on BSD based systems and is faster than select""" def __init__(self, get_wait_seconds, process_timeouts): """Create an instance of the KQueuePoller """ self._kqueue = None super(KQueuePoller, self).__init__(get_wait_seconds, process_timeouts) @staticmethod def _map_event(kevent): """return the event type associated with a kevent object :param kevent kevent: a kevent object as returned by kqueue.control() """ mask = 0 if kevent.filter == select.KQ_FILTER_READ: mask = PollEvents.READ elif kevent.filter == select.KQ_FILTER_WRITE: mask = PollEvents.WRITE if kevent.flags & select.KQ_EV_EOF: # May be set when the peer reader disconnects. We don't check # KQ_EV_EOF for KQ_FILTER_READ because in that case it may be # set before the remaining data is consumed from sockbuf. mask |= PollEvents.ERROR elif kevent.flags & select.KQ_EV_ERROR: mask = PollEvents.ERROR else: LOGGER.critical('Unexpected kevent: %s', kevent) return mask def poll(self): """Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT, whichever is sooner, and dispatch the corresponding event handlers. 
""" while True: try: kevents = self._kqueue.control(None, 1000, self._get_max_wait()) break except _SELECT_ERRORS as error: if _is_resumable(error): continue else: raise fd_event_map = collections.defaultdict(int) for event in kevents: fd_event_map[event.ident] |= self._map_event(event) self._dispatch_fd_events(fd_event_map) def _init_poller(self): """Notify the implementation to allocate the poller resource""" assert self._kqueue is None self._kqueue = select.kqueue() def _uninit_poller(self): """Notify the implementation to release the poller resource""" if self._kqueue is not None: self._kqueue.close() self._kqueue = None def _register_fd(self, fileno, events): """The base class invokes this method to notify the implementation to register the file descriptor with the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ self._modify_fd_events( fileno, events=events, events_to_clear=0, events_to_set=events) def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set): """The base class invoikes this method to notify the implementation to modify an already registered file descriptor. The request must be ignored if the poller is not activated. 
:param int fileno: The file descriptor :param int events: absolute events (READ, WRITE, ERROR) :param int events_to_clear: The events to clear (READ, WRITE, ERROR) :param int events_to_set: The events to set (READ, WRITE, ERROR) """ if self._kqueue is None: return kevents = list() if events_to_clear & PollEvents.READ: kevents.append( select.kevent( fileno, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_DELETE)) if events_to_set & PollEvents.READ: kevents.append( select.kevent( fileno, filter=select.KQ_FILTER_READ, flags=select.KQ_EV_ADD)) if events_to_clear & PollEvents.WRITE: kevents.append( select.kevent( fileno, filter=select.KQ_FILTER_WRITE, flags=select.KQ_EV_DELETE)) if events_to_set & PollEvents.WRITE: kevents.append( select.kevent( fileno, filter=select.KQ_FILTER_WRITE, flags=select.KQ_EV_ADD)) self._kqueue.control(kevents, 0) def _unregister_fd(self, fileno, events_to_clear): """The base class invokes this method to notify the implementation to unregister the file descriptor being tracked by the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ self._modify_fd_events( fileno, events=0, events_to_clear=events_to_clear, events_to_set=0) class PollPoller(_PollerBase): """Poll works on Linux and can have better performance than EPoll in certain scenarios. Both are faster than select. 
""" POLL_TIMEOUT_MULT = 1000 def __init__(self, get_wait_seconds, process_timeouts): """Create an instance of the KQueuePoller """ self._poll = None super(PollPoller, self).__init__(get_wait_seconds, process_timeouts) @staticmethod def _create_poller(): """ :rtype: `select.poll` """ return select.poll() # pylint: disable=E1101 def poll(self): """Wait for events of interest on registered file descriptors until an event of interest occurs or next timer deadline or _MAX_POLL_TIMEOUT, whichever is sooner, and dispatch the corresponding event handlers. """ while True: try: events = self._poll.poll(self._get_max_wait()) break except _SELECT_ERRORS as error: if _is_resumable(error): continue else: raise fd_event_map = collections.defaultdict(int) for fileno, event in events: # NOTE: On OS X, when poll() sets POLLHUP, it's mutually-exclusive with # POLLOUT and it doesn't seem to set POLLERR along with POLLHUP when # socket connection fails, for example. So, we need to at least add # POLLERR when we see POLLHUP if (event & select.POLLHUP) and pika.compat.ON_OSX: event |= select.POLLERR fd_event_map[fileno] |= event self._dispatch_fd_events(fd_event_map) def _init_poller(self): """Notify the implementation to allocate the poller resource""" assert self._poll is None self._poll = self._create_poller() def _uninit_poller(self): """Notify the implementation to release the poller resource""" if self._poll is not None: if hasattr(self._poll, "close"): self._poll.close() self._poll = None def _register_fd(self, fileno, events): """The base class invokes this method to notify the implementation to register the file descriptor with the polling object. The request must be ignored if the poller is not activated. 
:param int fileno: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ if self._poll is not None: self._poll.register(fileno, events) def _modify_fd_events(self, fileno, events, events_to_clear, events_to_set): """The base class invokes this method to notify the implementation to modify an already registered file descriptor. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events: absolute events (READ, WRITE, ERROR) :param int events_to_clear: The events to clear (READ, WRITE, ERROR) :param int events_to_set: The events to set (READ, WRITE, ERROR) """ if self._poll is not None: self._poll.modify(fileno, events) def _unregister_fd(self, fileno, events_to_clear): """The base class invokes this method to notify the implementation to unregister the file descriptor being tracked by the polling object. The request must be ignored if the poller is not activated. :param int fileno: The file descriptor :param int events_to_clear: The events to clear (READ, WRITE, ERROR) """ if self._poll is not None: self._poll.unregister(fileno) class EPollPoller(PollPoller): """EPoll works on Linux and can have better performance than Poll in certain scenarios. Both are faster than select. """ POLL_TIMEOUT_MULT = 1 @staticmethod def _create_poller(): """ :rtype: `select.epoll` """ return select.epoll() # pylint: disable=E1101 pika-1.2.0/pika/adapters/tornado_connection.py """Use pika with the Tornado IOLoop """ import logging from tornado import ioloop from pika.adapters.utils import nbio_interface, selector_ioloop_adapter from pika.adapters import base_connection LOGGER = logging.getLogger(__name__) class TornadoConnection(base_connection.BaseConnection): """The TornadoConnection runs on the Tornado IOLoop.
""" def __init__(self, parameters=None, on_open_callback=None, on_open_error_callback=None, on_close_callback=None, custom_ioloop=None, internal_connection_workflow=True): """Create a new instance of the TornadoConnection class, connecting to RabbitMQ automatically. :param pika.connection.Parameters|None parameters: The connection parameters :param callable|None on_open_callback: The method to call when the connection is open :param callable|None on_open_error_callback: Called if the connection can't be established or connection establishment is interrupted by `Connection.close()`: on_open_error_callback(Connection, exception) :param callable|None on_close_callback: Called when a previously fully open connection is closed: `on_close_callback(Connection, exception)`, where `exception` is either an instance of `exceptions.ConnectionClosed` if closed by user or broker or exception of another type that describes the cause of connection failure :param ioloop.IOLoop|nbio_interface.AbstractIOServices|None custom_ioloop: Override using the global IOLoop in Tornado :param bool internal_connection_workflow: True for autonomous connection establishment which is default; False for externally-managed connection workflow via the `create_connection()` factory """ if isinstance(custom_ioloop, nbio_interface.AbstractIOServices): nbio = custom_ioloop else: nbio = (selector_ioloop_adapter.SelectorIOServicesAdapter( custom_ioloop or ioloop.IOLoop.instance())) super(TornadoConnection, self).__init__( parameters, on_open_callback, on_open_error_callback, on_close_callback, nbio, internal_connection_workflow=internal_connection_workflow) @classmethod def create_connection(cls, connection_configs, on_done, custom_ioloop=None, workflow=None): """Implement :py:classmethod:`pika.adapters.BaseConnection.create_connection()`. 
""" nbio = selector_ioloop_adapter.SelectorIOServicesAdapter( custom_ioloop or ioloop.IOLoop.instance()) def connection_factory(params): """Connection factory.""" if params is None: raise ValueError('Expected pika.connection.Parameters ' 'instance, but got None in params arg.') return cls( parameters=params, custom_ioloop=nbio, internal_connection_workflow=False) return cls._start_connection_workflow( connection_configs=connection_configs, connection_factory=connection_factory, nbio=nbio, workflow=workflow, on_done=on_done) pika-1.2.0/pika/adapters/twisted_connection.py000066400000000000000000001421641400701476500214550ustar00rootroot00000000000000"""Using Pika with a Twisted reactor. The interfaces in this module are Deferred-based when possible. This means that the connection.channel() method and most of the channel methods return Deferreds instead of taking a callback argument and that basic_consume() returns a Twisted DeferredQueue where messages from the server will be stored. Refer to the docstrings for TwistedProtocolConnection.channel() and the TwistedChannel class for details. """ import functools import logging from collections import namedtuple from twisted.internet import (defer, error as twisted_error, reactor, protocol) from twisted.python.failure import Failure import pika.connection from pika import exceptions, spec from pika.adapters.utils import nbio_interface from pika.adapters.utils.io_services_utils import check_callback_arg from pika.exchange_type import ExchangeType # Twistisms # pylint: disable=C0111,C0103 # Other # pylint: disable=too-many-lines LOGGER = logging.getLogger(__name__) class ClosableDeferredQueue(defer.DeferredQueue): """ Like the normal Twisted DeferredQueue, but after close() is called with an exception instance all pending Deferreds are errbacked and further attempts to call get() or put() return a Failure wrapping that exception. 
""" def __init__(self, size=None, backlog=None): self.closed = None super(ClosableDeferredQueue, self).__init__(size, backlog) def put(self, obj): """ Like the original :meth:`DeferredQueue.put` method, but returns an errback if the queue is closed. """ if self.closed: LOGGER.error('Impossible to put to the queue, it is closed.') return defer.fail(self.closed) return defer.DeferredQueue.put(self, obj) def get(self): """ Returns a Deferred that will fire with the next item in the queue, when it's available. The Deferred will errback if the queue is closed. :returns: Deferred that fires with the next item. :rtype: Deferred """ if self.closed: LOGGER.error('Impossible to get from the queue, it is closed.') return defer.fail(self.closed) return defer.DeferredQueue.get(self) def close(self, reason): """Closes the queue. Errback the pending calls to :meth:`get()`. """ if self.closed: LOGGER.warning('Queue was already closed with reason: %s.', self.closed) self.closed = reason while self.waiting: self.waiting.pop().errback(reason) self.pending = [] ReceivedMessage = namedtuple("ReceivedMessage", ["channel", "method", "properties", "body"]) class TwistedChannel(object): """A wrapper around Pika's Channel. Channel methods that normally take a callback argument are wrapped to return a Deferred that fires with whatever would be passed to the callback. If the channel gets closed, all pending Deferreds are errbacked with a ChannelClosed exception. The returned Deferreds fire with whatever arguments the callback to the original method would receive. Some methods like basic_consume and basic_get are wrapped in a special way, see their docstrings for details. 
""" def __init__(self, channel): self._channel = channel self._closed = None self._calls = set() self._consumers = {} # Store Basic.Get calls so we can handle GetEmpty replies self._basic_get_deferred = None self._channel.add_callback(self._on_getempty, [spec.Basic.GetEmpty], False) # We need this mapping to close the ClosableDeferredQueue when a queue # is deleted. self._queue_name_to_consumer_tags = {} # Whether RabbitMQ delivery confirmation has been enabled self._delivery_confirmation = False self._delivery_message_id = None self._deliveries = {} # Holds a ReceivedMessage object representing a message received via # Basic.Return in publisher-acknowledgments mode. self._puback_return = None self.on_closed = defer.Deferred() self._channel.add_on_close_callback(self._on_channel_closed) self._channel.add_on_cancel_callback( self._on_consumer_cancelled_by_broker) def __repr__(self): return '<{cls} channel={chan!r}>'.format( cls=self.__class__.__name__, chan=self._channel) def _on_channel_closed(self, _channel, reason): # enter the closed state self._closed = reason # errback all pending calls for d in self._calls: d.errback(self._closed) # errback all pending deliveries for d in self._deliveries.values(): d.errback(self._closed) # close all open queues for consumer in self._consumers.values(): consumer.close(self._closed) # release references to stored objects self._calls = set() self._deliveries = {} self._consumers = {} self.on_closed.callback(self._closed) def _on_consumer_cancelled_by_broker(self, method_frame): """Called by impl when broker cancels consumer via Basic.Cancel. This is a RabbitMQ-specific feature. The circumstances include deletion of queue being consumed as well as failure of a HA node responsible for the queue being consumed. 
:param pika.frame.Method method_frame: method frame with the `spec.Basic.Cancel` method """ return self._on_consumer_cancelled(method_frame) def _on_consumer_cancelled(self, frame): """Called when the broker cancels a consumer via Basic.Cancel or when the broker responds to a Basic.Cancel request by Basic.CancelOk. :param pika.frame.Method frame: method frame with the `spec.Basic.Cancel` or `spec.Basic.CancelOk` method """ consumer_tag = frame.method.consumer_tag if consumer_tag not in self._consumers: # Could be cancelled by user or broker earlier LOGGER.warning('basic_cancel - consumer not found: %s', consumer_tag) return frame self._consumers[consumer_tag].close(exceptions.ConsumerCancelled()) del self._consumers[consumer_tag] # Remove from the queue-to-ctags index: for ctags in self._queue_name_to_consumer_tags.values(): try: ctags.remove(consumer_tag) except KeyError: continue return frame def _on_getempty(self, _method_frame): """Callback the Basic.Get deferred with None. """ if self._basic_get_deferred is None: LOGGER.warning("Got Basic.GetEmpty but no Basic.Get calls " "were pending.") return self._basic_get_deferred.callback(None) def _wrap_channel_method(self, name): """Wrap Pika's Channel method to make it return a Deferred that fires when the method completes and errbacks if the channel gets closed. If the original method's callback would receive more than one argument, the Deferred fires with a tuple of argument values. """ method = getattr(self._channel, name) @functools.wraps(method) def wrapped(*args, **kwargs): if self._closed: return defer.fail(self._closed) d = defer.Deferred() self._calls.add(d) d.addCallback(self._clear_call, d) def single_argument(*args): """ Make sure that the deferred is called with a single argument. In case the original callback fires with more than one, convert to a tuple. 
""" if len(args) > 1: d.callback(tuple(args)) else: d.callback(*args) kwargs['callback'] = single_argument try: method(*args, **kwargs) except Exception: # pylint: disable=W0703 return defer.fail() return d return wrapped def _clear_call(self, ret, d): self._calls.discard(d) return ret # Public Channel attributes @property def channel_number(self): return self._channel.channel_number @property def connection(self): return self._channel.connection @property def is_closed(self): """Returns True if the channel is closed. :rtype: bool """ return self._channel.is_closed @property def is_closing(self): """Returns True if client-initiated closing of the channel is in progress. :rtype: bool """ return self._channel.is_closing @property def is_open(self): """Returns True if the channel is open. :rtype: bool """ return self._channel.is_open @property def flow_active(self): return self._channel.flow_active @property def consumer_tags(self): return self._channel.consumer_tags # Deferred-equivalents of public Channel methods def callback_deferred(self, deferred, replies): """Pass in a Deferred and a list replies from the RabbitMQ broker which you'd like the Deferred to be callbacked on with the frame as callback value. :param Deferred deferred: The Deferred to callback :param list replies: The replies to callback on """ self._channel.add_callback(deferred.callback, replies) # Public Channel methods def add_on_return_callback(self, callback): """Pass a callback function that will be called when a published message is rejected and returned by the server via `Basic.Return`. :param callable callback: The method to call on callback with the message as only argument. 
The message is a named tuple with the following attributes: channel: this TwistedChannel method: pika.spec.Basic.Return properties: pika.spec.BasicProperties body: bytes """ self._channel.add_on_return_callback( lambda _channel, method, properties, body: callback( ReceivedMessage( channel=self, method=method, properties=properties, body=body, ) ) ) def basic_ack(self, delivery_tag=0, multiple=False): """Acknowledge one or more messages. When sent by the client, this method acknowledges one or more messages delivered via the Deliver or Get-Ok methods. When sent by server, this method acknowledges one or more messages published with the Publish method on a channel in confirm mode. The acknowledgement can be for a single message or a set of messages up to and including a specific message. :param integer delivery_tag: int/long The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. """ return self._channel.basic_ack( delivery_tag=delivery_tag, multiple=multiple) def basic_cancel(self, consumer_tag=''): """This method cancels a consumer. This does not affect already delivered messages, but it does mean the server will not send any more messages for that consumer. The client may receive an arbitrary number of messages in between sending the cancel method and receiving the cancel-ok reply. It may also be sent from the server to the client in the event of the consumer being unexpectedly cancelled (i.e. cancelled for any reason other than the server receiving the corresponding basic.cancel from the client). This allows clients to be notified of the loss of consumers due to events such as queue deletion. 
This method wraps :meth:`Channel.basic_cancel ` and closes any deferred queue associated with that consumer. :param str consumer_tag: Identifier for the consumer :returns: Deferred that fires on the Basic.CancelOk response :rtype: Deferred :raises ValueError: """ wrapped = self._wrap_channel_method('basic_cancel') d = wrapped(consumer_tag=consumer_tag) return d.addCallback(self._on_consumer_cancelled) def basic_consume(self, queue, auto_ack=False, exclusive=False, consumer_tag=None, arguments=None): """Consume from a server queue. Sends the AMQP 0-9-1 command Basic.Consume to the broker and binds messages for the consumer_tag to a :class:`ClosableDeferredQueue`. If you do not pass in a consumer_tag, one will be automatically generated for you. For more information on basic_consume, see: Tutorial 2 at http://www.rabbitmq.com/getstarted.html http://www.rabbitmq.com/confirms.html http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume :param str queue: The queue to consume from. Use the empty string to specify the most recent server-named queue for this channel. :param bool auto_ack: if set to True, automatic acknowledgement mode will be used (see http://www.rabbitmq.com/confirms.html). This corresponds with the 'no_ack' parameter in the basic.consume AMQP 0.9.1 method :param bool exclusive: Don't allow other consumers on the queue :param str consumer_tag: Specify your own consumer tag :param dict arguments: Custom key/value pair arguments for the consumer :returns: Deferred that fires with a tuple ``(queue_object, consumer_tag)``. The Deferred will errback with an instance of :class:`exceptions.ChannelClosed` if the call fails. The queue object is an instance of :class:`ClosableDeferredQueue`, where data received from the queue will be stored. 
Clients should use its :meth:`get() <ClosableDeferredQueue.get>` method to fetch an individual message, which will return a Deferred firing with a namedtuple whose attributes are: - channel: this TwistedChannel - method: pika.spec.Basic.Deliver - properties: pika.spec.BasicProperties - body: bytes :rtype: Deferred """ if self._closed: return defer.fail(self._closed) queue_obj = ClosableDeferredQueue() d = defer.Deferred() self._calls.add(d) def on_consume_ok(frame): consumer_tag = frame.method.consumer_tag self._queue_name_to_consumer_tags.setdefault( queue, set()).add(consumer_tag) self._consumers[consumer_tag] = queue_obj self._calls.discard(d) d.callback((queue_obj, consumer_tag)) def on_message_callback(_channel, method, properties, body): """Add the ReceivedMessage to the queue, while replacing the channel implementation. """ queue_obj.put( ReceivedMessage( channel=self, method=method, properties=properties, body=body, )) try: self._channel.basic_consume( queue=queue, on_message_callback=on_message_callback, auto_ack=auto_ack, exclusive=exclusive, consumer_tag=consumer_tag, arguments=arguments, callback=on_consume_ok, ) except Exception: # pylint: disable=W0703 return defer.fail() return d def basic_get(self, queue, auto_ack=False): """Get a single message from the AMQP broker. If the queue is empty, it will return None. If you want to be notified of Basic.GetEmpty, use the Channel.add_callback method adding your Basic.GetEmpty callback which should expect only one parameter, frame. Due to implementation details, this cannot be called a second time until the callback is executed. For more information on basic_get and its parameters, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get This method wraps :meth:`Channel.basic_get <pika.channel.Channel.basic_get>`. :param str queue: The queue from which to get a message. Use the empty string to specify the most recent server-named queue for this channel.
:param bool auto_ack: Tell the broker to not expect a reply :returns: Deferred that fires with a namedtuple whose attributes are: - channel: this TwistedChannel - method: pika.spec.Basic.GetOk - properties: pika.spec.BasicProperties - body: bytes If the queue is empty, None will be returned. :rtype: Deferred :raises pika.exceptions.DuplicateGetOkCallback: """ if self._basic_get_deferred is not None: raise exceptions.DuplicateGetOkCallback() def create_namedtuple(result): if result is None: return None _channel, method, properties, body = result return ReceivedMessage( channel=self, method=method, properties=properties, body=body, ) def cleanup_attribute(result): self._basic_get_deferred = None return result d = self._wrap_channel_method("basic_get")( queue=queue, auto_ack=auto_ack) d.addCallback(create_namedtuple) d.addBoth(cleanup_attribute) self._basic_get_deferred = d return d def basic_nack(self, delivery_tag=None, multiple=False, requeue=True): """This method allows a client to reject one or more incoming messages. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. :param integer delivery-tag: int/long The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. :param bool requeue: If requeue is true, the server will attempt to requeue the message. If requeue is false or the requeue attempt fails the messages are discarded or dead-lettered. 
""" return self._channel.basic_nack( delivery_tag=delivery_tag, multiple=multiple, requeue=requeue, ) def basic_publish(self, exchange, routing_key, body, properties=None, mandatory=False): """Publish to the channel with the given exchange, routing key and body. This method wraps :meth:`Channel.basic_publish `, but makes sure the channel is not closed before publishing. For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish :param str exchange: The exchange to publish to :param str routing_key: The routing key to bind on :param bytes body: The message body :param pika.spec.BasicProperties properties: Basic.properties :param bool mandatory: The mandatory flag :returns: A Deferred that fires with the result of the channel's basic_publish. :rtype: Deferred :raises UnroutableError: raised when a message published in publisher-acknowledgments mode (see `BlockingChannel.confirm_delivery`) is returned via `Basic.Return` followed by `Basic.Ack`. :raises NackError: raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. See `BlockingChannel.confirm_delivery`. """ if self._closed: return defer.fail(self._closed) result = self._channel.basic_publish( exchange=exchange, routing_key=routing_key, body=body, properties=properties, mandatory=mandatory) if not self._delivery_confirmation: return defer.succeed(result) else: # See http://www.rabbitmq.com/confirms.html#publisher-confirms self._delivery_message_id += 1 self._deliveries[self._delivery_message_id] = defer.Deferred() return self._deliveries[self._delivery_message_id] def basic_qos(self, prefetch_size=0, prefetch_count=0, global_qos=False): """Specify quality of service. This method requests a specific quality of service. The QoS can be specified for the current channel or for all channels on the connection. 
The client can request that messages be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. :param int prefetch_size: This field specifies the prefetch window size. The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls into other prefetch limits). May be set to zero, meaning "no specific limit", although other prefetch limits may still apply. The prefetch-size is ignored by consumers who have enabled the no-ack option. :param int prefetch_count: Specifies a prefetch window in terms of whole messages. This field may be used in combination with the prefetch-size field; a message will only be sent in advance if both prefetch windows (and those at the channel and connection level) allow it. The prefetch-count is ignored by consumers who have enabled the no-ack option. :param bool global_qos: Should the QoS apply to all channels on the connection. :returns: Deferred that fires on the Basic.QosOk response :rtype: Deferred """ return self._wrap_channel_method("basic_qos")( prefetch_size=prefetch_size, prefetch_count=prefetch_count, global_qos=global_qos, ) def basic_reject(self, delivery_tag, requeue=True): """Reject an incoming message. This method allows a client to reject a message. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. :param int delivery_tag: The server-assigned delivery tag :param bool requeue: If requeue is true, the server will attempt to requeue the message. If requeue is false or the requeue attempt fails, the messages are discarded or dead-lettered.
:raises: TypeError """ return self._channel.basic_reject( delivery_tag=delivery_tag, requeue=requeue) def basic_recover(self, requeue=False): """This method asks the server to redeliver all unacknowledged messages on a specified channel. Zero or more messages may be redelivered. This method replaces the asynchronous Recover. :param bool requeue: If False, the message will be redelivered to the original recipient. If True, the server will attempt to requeue the message, potentially then delivering it to an alternative subscriber. :returns: Deferred that fires on the Basic.RecoverOk response :rtype: Deferred """ return self._wrap_channel_method("basic_recover")(requeue=requeue) def close(self, reply_code=0, reply_text="Normal shutdown"): """Invoke a graceful shutdown of the channel with the AMQP Broker. If channel is OPENING, transition to CLOSING and suppress the incoming Channel.OpenOk, if any. :param int reply_code: The reason code to send to broker :param str reply_text: The reason text to send to broker :raises ChannelWrongStateError: if channel is closed or closing """ return self._channel.close(reply_code=reply_code, reply_text=reply_text) def confirm_delivery(self): """Turn on Confirm mode in the channel. Pass in a callback to be notified by the Broker when a message has been confirmed as received or rejected (Basic.Ack, Basic.Nack) from the broker to the publisher. 
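As a usage sketch (the helper below and its exchange/routing-key names are illustrative assumptions, not part of this module): enable confirms once, then chain publishes on the resulting Deferred; each publish's Deferred fires on Basic.Ack and errbacks with NackError or UnroutableError:

```python
def publish_confirmed(channel, body):
    # Enable publisher confirms, then publish a single message once
    # Confirm.SelectOk has arrived.
    d = channel.confirm_delivery()
    d.addCallback(lambda _: channel.basic_publish(
        exchange='', routing_key='work', body=body))
    return d
```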
For more information see: http://www.rabbitmq.com/confirms.html#publisher-confirms :returns: Deferred that fires on the Confirm.SelectOk response :rtype: Deferred """ if self._delivery_confirmation: LOGGER.error('confirm_delivery: confirmation was already enabled.') return defer.succeed(None) wrapped = self._wrap_channel_method('confirm_delivery') d = wrapped(ack_nack_callback=self._on_delivery_confirmation) def set_delivery_confirmation(result): self._delivery_confirmation = True self._delivery_message_id = 0 LOGGER.debug("Delivery confirmation enabled.") return result d.addCallback(set_delivery_confirmation) # Unroutable messages returned after this point will be in the context # of publisher acknowledgments self._channel.add_on_return_callback(self._on_puback_message_returned) return d def _on_delivery_confirmation(self, method_frame): """Invoked by pika when RabbitMQ responds to a Basic.Publish RPC command, passing in either a Basic.Ack or Basic.Nack frame with the delivery tag of the message that was published. The delivery tag is an integer counter indicating the message number that was sent on the channel via Basic.Publish. Here we're just doing house keeping to keep track of stats and remove message numbers that we expect a delivery confirmation of from the list used to keep track of messages that are pending confirmation. 
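The housekeeping described here can be sketched in isolation. A simplified model (assumed here, with pending deliveries keyed by delivery tag) of which tags a single Ack/Nack frame settles:

```python
def tags_settled(pending_tags, delivery_tag, multiple):
    """Return the delivery tags resolved by a single Ack/Nack frame."""
    if multiple:
        # multiple=True means "up to and including" the given tag
        return sorted(tag for tag in pending_tags if tag <= delivery_tag)
    return [delivery_tag]
```

For example, with pending tags {1, 2, 3, 5}, an Ack of 3 with multiple=True settles tags 1, 2 and 3, while multiple=False settles only tag 3.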
:param pika.frame.Method method_frame: Basic.Ack or Basic.Nack frame """ delivery_tag = method_frame.method.delivery_tag if delivery_tag not in self._deliveries: LOGGER.error("Delivery tag %s not found in the pending deliveries", delivery_tag) return if method_frame.method.multiple: tags = [tag for tag in self._deliveries if tag <= delivery_tag] tags.sort() else: tags = [delivery_tag] for tag in tags: d = self._deliveries[tag] del self._deliveries[tag] if isinstance(method_frame.method, pika.spec.Basic.Nack): # Broker was unable to process the message due to internal # error LOGGER.warning( "Message was Nack'ed by broker: nack=%r; channel=%s;", method_frame.method, self.channel_number) if self._puback_return is not None: returned_messages = [self._puback_return] self._puback_return = None else: returned_messages = [] d.errback(exceptions.NackError(returned_messages)) else: assert isinstance(method_frame.method, pika.spec.Basic.Ack) if self._puback_return is not None: # Unroutable message was returned returned_messages = [self._puback_return] self._puback_return = None d.errback(exceptions.UnroutableError(returned_messages)) else: d.callback(method_frame.method) def _on_puback_message_returned(self, channel, method, properties, body): """Called as the result of Basic.Return from broker in publisher-acknowledgements mode. Saves the info as a ReturnedMessage instance in self._puback_return. 
:param pika.Channel channel: our self._impl channel :param pika.spec.Basic.Return method: :param pika.spec.BasicProperties properties: message properties :param bytes body: returned message body; empty string if no body """ assert isinstance(method, spec.Basic.Return), method assert isinstance(properties, spec.BasicProperties), properties LOGGER.warning( "Published message was returned: _delivery_confirmation=%s; " "channel=%s; method=%r; properties=%r; body_size=%d; " "body_prefix=%.255r", self._delivery_confirmation, channel.channel_number, method, properties, len(body) if body is not None else None, body) self._puback_return = ReceivedMessage(channel=self, method=method, properties=properties, body=body) def exchange_bind(self, destination, source, routing_key='', arguments=None): """Bind an exchange to another exchange. :param str destination: The destination exchange to bind :param str source: The source exchange to bind to :param str routing_key: The routing key to bind on :param dict arguments: Custom key/value pair arguments for the binding :raises ValueError: :returns: Deferred that fires on the Exchange.BindOk response :rtype: Deferred """ return self._wrap_channel_method("exchange_bind")( destination=destination, source=source, routing_key=routing_key, arguments=arguments, ) def exchange_declare(self, exchange, exchange_type=ExchangeType.direct, passive=False, durable=False, auto_delete=False, internal=False, arguments=None): """This method creates an exchange if it does not already exist, and if the exchange exists, verifies that it is of the correct and expected class. If passive is set, the server will reply with Declare-Ok if the exchange already exists with the same name, and raise a channel exception with reply code 404 (not found) if it does not.
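A topology-setup sketch (the exchange, queue and routing-key names are hypothetical): declare a durable topic exchange, then bind a queue to it once the declare completes:

```python
def declare_logs_exchange(channel):
    # Declare a durable topic exchange, then chain the queue binding on
    # the Exchange.DeclareOk response.
    d = channel.exchange_declare(
        exchange='logs', exchange_type='topic', durable=True)
    d.addCallback(lambda _: channel.queue_bind(
        queue='audit', exchange='logs', routing_key='app.#'))
    return d
```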
:param str exchange: The exchange name consists of a non-empty sequence of these characters: letters, digits, hyphen, underscore, period, or colon :param str exchange_type: The exchange type to use :param bool passive: Perform a declare or just check to see if it exists :param bool durable: Survive a reboot of RabbitMQ :param bool auto_delete: Remove when no more queues are bound to it :param bool internal: Can only be published to by other exchanges :param dict arguments: Custom key/value pair arguments for the exchange :returns: Deferred that fires on the Exchange.DeclareOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("exchange_declare")( exchange=exchange, exchange_type=exchange_type, passive=passive, durable=durable, auto_delete=auto_delete, internal=internal, arguments=arguments, ) def exchange_delete(self, exchange=None, if_unused=False): """Delete the exchange. :param str exchange: The exchange name :param bool if_unused: only delete if the exchange is unused :returns: Deferred that fires on the Exchange.DeleteOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("exchange_delete")( exchange=exchange, if_unused=if_unused, ) def exchange_unbind(self, destination=None, source=None, routing_key='', arguments=None): """Unbind an exchange from another exchange. :param str destination: The destination exchange to unbind :param str source: The source exchange to unbind from :param str routing_key: The routing key to unbind :param dict arguments: Custom key/value pair arguments for the binding :returns: Deferred that fires on the Exchange.UnbindOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("exchange_unbind")( destination=destination, source=source, routing_key=routing_key, arguments=arguments, ) def flow(self, active): """Turn Channel flow control off and on. Returns a Deferred that will fire with a bool indicating the channel flow state. 
For more information, please reference: http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow :param bool active: Turn flow on or off :returns: Deferred that fires with the channel flow state :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("flow")(active=active) def open(self): """Open the channel""" return self._channel.open() def queue_bind(self, queue, exchange, routing_key=None, arguments=None): """Bind the queue to the specified exchange :param str queue: The queue to bind to the exchange :param str exchange: The source exchange to bind to :param str routing_key: The routing key to bind on :param dict arguments: Custom key/value pair arguments for the binding :returns: Deferred that fires on the Queue.BindOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("queue_bind")( queue=queue, exchange=exchange, routing_key=routing_key, arguments=arguments, ) def queue_declare(self, queue, passive=False, durable=False, exclusive=False, auto_delete=False, arguments=None): """Declare queue, create if needed. This method creates or checks a queue. When creating a new queue the client can specify various properties that control the durability of the queue and its contents, and the level of sharing for the queue. 
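For example (a hypothetical sketch): declaring with an empty queue name yields a server-named, exclusive queue, and the broker-generated name is read from the Queue.DeclareOk frame:

```python
def declare_reply_queue(channel):
    d = channel.queue_declare(queue='', exclusive=True, auto_delete=True)
    # The Queue.DeclareOk frame carries the broker-generated queue name.
    return d.addCallback(lambda frame: frame.method.queue)
```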
Use an empty string as the queue name for the broker to auto-generate one :param str queue: The queue name; if empty string, the broker will create a unique queue name :param bool passive: Only check to see if the queue exists :param bool durable: Survive reboots of the broker :param bool exclusive: Only allow access by the current connection :param bool auto_delete: Delete after consumer cancels or disconnects :param dict arguments: Custom key/value arguments for the queue :returns: Deferred that fires on the Queue.DeclareOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("queue_declare")( queue=queue, passive=passive, durable=durable, exclusive=exclusive, auto_delete=auto_delete, arguments=arguments, ) def queue_delete(self, queue, if_unused=False, if_empty=False): """Delete a queue from the broker. This method wraps :meth:`Channel.queue_delete `, and removes the reference to the queue object after it gets deleted on the server. :param str queue: The queue to delete :param bool if_unused: only delete if it's unused :param bool if_empty: only delete if the queue is empty :returns: Deferred that fires on the Queue.DeleteOk response :rtype: Deferred :raises ValueError: """ wrapped = self._wrap_channel_method('queue_delete') d = wrapped(queue=queue, if_unused=if_unused, if_empty=if_empty) def _clear_consumer(ret, queue_name): for consumer_tag in list( self._queue_name_to_consumer_tags.get(queue_name, set())): self._consumers[consumer_tag].close( exceptions.ConsumerCancelled( "Queue %s was deleted." 
% queue_name)) del self._consumers[consumer_tag] self._queue_name_to_consumer_tags[queue_name].remove( consumer_tag) return ret return d.addCallback(_clear_consumer, queue) def queue_purge(self, queue): """Purge all of the messages from the specified queue :param str queue: The queue to purge :returns: Deferred that fires on the Queue.PurgeOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("queue_purge")(queue=queue) def queue_unbind(self, queue, exchange=None, routing_key=None, arguments=None): """Unbind a queue from an exchange. :param str queue: The queue to unbind from the exchange :param str exchange: The source exchange to bind from :param str routing_key: The routing key to unbind :param dict arguments: Custom key/value pair arguments for the binding :returns: Deferred that fires on the Queue.UnbindOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("queue_unbind")( queue=queue, exchange=exchange, routing_key=routing_key, arguments=arguments, ) def tx_commit(self): """Commit a transaction. :returns: Deferred that fires on the Tx.CommitOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("tx_commit")() def tx_rollback(self): """Rollback a transaction. :returns: Deferred that fires on the Tx.RollbackOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("tx_rollback")() def tx_select(self): """Select standard transaction mode. This method sets the channel to use standard transactions. The client must use this method at least once on a channel before using the Commit or Rollback methods. :returns: Deferred that fires on the Tx.SelectOk response :rtype: Deferred :raises ValueError: """ return self._wrap_channel_method("tx_select")() class _TwistedConnectionAdapter(pika.connection.Connection): """A Twisted-specific implementation of a Pika Connection. 
NOTE: since `base_connection.BaseConnection`'s primary responsibility is management of the transport, we use `pika.connection.Connection` directly as our base class because this adapter uses a different transport management strategy. """ def __init__(self, parameters, on_open_callback, on_open_error_callback, on_close_callback, custom_reactor): super(_TwistedConnectionAdapter, self).__init__( parameters=parameters, on_open_callback=on_open_callback, on_open_error_callback=on_open_error_callback, on_close_callback=on_close_callback, internal_connection_workflow=False) self._reactor = custom_reactor or reactor self._transport = None # to be provided by `connection_made()` def _adapter_call_later(self, delay, callback): """Implement :py:meth:`pika.connection.Connection._adapter_call_later()`. """ check_callback_arg(callback, 'callback') return _TimerHandle(self._reactor.callLater(delay, callback)) def _adapter_remove_timeout(self, timeout_id): """Implement :py:meth:`pika.connection.Connection._adapter_remove_timeout()`. """ timeout_id.cancel() def _adapter_add_callback_threadsafe(self, callback): """Implement :py:meth:`pika.connection.Connection._adapter_add_callback_threadsafe()`. """ check_callback_arg(callback, 'callback') self._reactor.callFromThread(callback) def _adapter_connect_stream(self): """Implement pure virtual :py:meth:`pika.connection.Connection._adapter_connect_stream()` method. NOTE: This should not be called due to our initialization of Connection via `internal_connection_workflow=False` """ raise NotImplementedError def _adapter_disconnect_stream(self): """Implement pure virtual :py:meth:`pika.connection.Connection._adapter_disconnect_stream()` method. """ self._transport.loseConnection() def _adapter_emit_data(self, data): """Implement pure virtual :py:meth:`pika.connection.Connection._adapter_emit_data()` method.
""" self._transport.write(data) def connection_made(self, transport): """Introduces transport to protocol after transport is connected. :param twisted.internet.interfaces.ITransport transport: :raises Exception: Exception-based exception on error """ self._transport = transport # Let connection know that stream is available self._on_stream_connected() def connection_lost(self, error): """Called upon loss or closing of TCP connection. NOTE: `connection_made()` and `connection_lost()` are each called just once and in that order. All other callbacks are called between them. :param Failure: A Twisted Failure instance wrapping an exception. """ self._transport = None error = error.value # drop the Failure wrapper if isinstance(error, twisted_error.ConnectionDone): self._error = error error = None LOGGER.log(logging.DEBUG if error is None else logging.ERROR, 'connection_lost: %r', error) self._on_stream_terminated(error) def data_received(self, data): """Called to deliver incoming data from the server to the protocol. :param data: Non-empty data bytes. :raises Exception: Exception-based exception on error """ self._on_data_available(data) class TwistedProtocolConnection(protocol.Protocol): """A Pika-specific implementation of a Twisted Protocol. Allows using Twisted's non-blocking connectTCP/connectSSL methods for connecting to the server. TwistedProtocolConnection objects have a `ready` instance variable that's a Deferred which fires when the connection is ready to be used (the initial AMQP handshaking has been done). You *have* to wait for this Deferred to fire before requesting a channel. Once the connection is ready, you will be able to use the `closed` instance variable: a Deferred which fires when the connection is closed. Since it's Twisted handling connection establishing it does not accept connect callbacks, you have to implement that within Twisted. 
Also remember that the host, port and ssl values of the connection parameters are ignored because, yet again, Twisted manages the connection. """ def __init__(self, parameters=None, custom_reactor=None): self.ready = defer.Deferred() self.ready.addCallback(lambda _: self.connectionReady()) self.closed = None self._impl = _TwistedConnectionAdapter( parameters=parameters, on_open_callback=self._on_connection_ready, on_open_error_callback=self._on_connection_failed, on_close_callback=self._on_connection_closed, custom_reactor=custom_reactor, ) self._calls = set() def channel(self, channel_number=None): # pylint: disable=W0221 """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify it, but it is recommended that you let Pika manage the channel numbers. :param int channel_number: The channel number to use, defaults to the next available. :returns: a Deferred that fires with an instance of a wrapper around the Pika Channel class. :rtype: Deferred """ d = defer.Deferred() self._impl.channel(channel_number, d.callback) self._calls.add(d) d.addCallback(self._clear_call, d) return d.addCallback(TwistedChannel) @property def is_open(self): # For compatibility with previous releases. return self._impl.is_open @property def is_closed(self): # For compatibility with previous releases.
return self._impl.is_closed def close(self, reply_code=200, reply_text='Normal shutdown'): if not self._impl.is_closed: self._impl.close(reply_code, reply_text) return self.closed # IProtocol methods def dataReceived(self, data): # Pass the bytes to Pika for parsing self._impl.data_received(data) def connectionLost(self, reason=protocol.connectionDone): self._impl.connection_lost(reason) # Let the caller know there's been an error d, self.ready = self.ready, None if d: d.errback(reason) def makeConnection(self, transport): self._impl.connection_made(transport) protocol.Protocol.makeConnection(self, transport) # Our own methods def connectionReady(self): """This method will be called when the underlying connection is ready. """ return self def _on_connection_ready(self, _connection): d, self.ready = self.ready, None if d: self.closed = defer.Deferred() d.callback(None) def _on_connection_failed(self, _connection, _error_message=None): d, self.ready = self.ready, None if d: attempts = self._impl.params.connection_attempts exc = exceptions.AMQPConnectionError(attempts) d.errback(exc) def _on_connection_closed(self, _connection, exception): # errback all pending calls for d in self._calls: d.errback(exception) self._calls = set() d, self.closed = self.closed, None if d: if isinstance(exception, Failure): # Calling `callback` with a Failure instance will trigger the # errback path. exception = exception.value d.callback(exception) def _clear_call(self, ret, d): self._calls.discard(d) return ret class _TimerHandle(nbio_interface.AbstractTimerReference): """This module's adaptation of `nbio_interface.AbstractTimerReference`. 
""" def __init__(self, handle): """ :param twisted.internet.base.DelayedCall handle: """ self._handle = handle def cancel(self): if self._handle is not None: try: self._handle.cancel() except (twisted_error.AlreadyCalled, twisted_error.AlreadyCancelled): pass self._handle = None pika-1.2.0/pika/adapters/utils/000077500000000000000000000000001400701476500163315ustar00rootroot00000000000000pika-1.2.0/pika/adapters/utils/__init__.py000066400000000000000000000000001400701476500204300ustar00rootroot00000000000000pika-1.2.0/pika/adapters/utils/connection_workflow.py000066400000000000000000001020651400701476500230000ustar00rootroot00000000000000"""Implements `AMQPConnectionWorkflow` - the default workflow of performing multiple TCP/[SSL]/AMQP connection attempts with timeouts and retries until one succeeds or all attempts fail. Defines the interface `AbstractAMQPConnectionWorkflow` that facilitates implementing custom connection workflows. """ import functools import logging import socket import pika.compat import pika.exceptions import pika.tcp_socket_opts from pika import __version__ _LOG = logging.getLogger(__name__) class AMQPConnectorException(Exception): """Base exception for this module""" class AMQPConnectorStackTimeout(AMQPConnectorException): """Overall TCP/[SSL]/AMQP stack connection attempt timed out.""" class AMQPConnectorAborted(AMQPConnectorException): """Asynchronous request was aborted""" class AMQPConnectorWrongState(AMQPConnectorException): """AMQPConnector operation requested in wrong state, such as aborting after completion was reported. """ class AMQPConnectorPhaseErrorBase(AMQPConnectorException): """Wrapper for exception that occurred during a particular bring-up phase. """ def __init__(self, exception, *args): """ :param BaseException exception: error that occurred while waiting for a subclass-specific protocol bring-up phase to complete. 
:param args: args for parent class """ super(AMQPConnectorPhaseErrorBase, self).__init__(*args) self.exception = exception def __repr__(self): return '{}: {!r}'.format(self.__class__.__name__, self.exception) class AMQPConnectorSocketConnectError(AMQPConnectorPhaseErrorBase): """Error connecting TCP socket to remote peer""" class AMQPConnectorTransportSetupError(AMQPConnectorPhaseErrorBase): """Error setting up transport after TCP connected but before AMQP handshake. """ class AMQPConnectorAMQPHandshakeError(AMQPConnectorPhaseErrorBase): """Error during AMQP handshake""" class AMQPConnectionWorkflowAborted(AMQPConnectorException): """AMQP Connection workflow was aborted.""" class AMQPConnectionWorkflowWrongState(AMQPConnectorException): """AMQP Connection Workflow operation requested in wrong state, such as aborting after completion was reported. """ class AMQPConnectionWorkflowFailed(AMQPConnectorException): """Indicates that AMQP connection workflow failed. """ def __init__(self, exceptions, *args): """ :param sequence exceptions: Exceptions that occurred during the workflow. :param args: args to pass to base class """ super(AMQPConnectionWorkflowFailed, self).__init__(*args) self.exceptions = tuple(exceptions) def __repr__(self): return ('{}: {} exceptions in all; last exception - {!r}; first ' 'exception - {!r}').format( self.__class__.__name__, len(self.exceptions), self.exceptions[-1], self.exceptions[0] if len(self.exceptions) > 1 else None) class AMQPConnector(object): """Performs a single TCP/[SSL]/AMQP connection workflow. 
""" _STATE_INIT = 0 # start() hasn't been called yet _STATE_TCP = 1 # TCP/IP connection establishment _STATE_TRANSPORT = 2 # [SSL] and transport linkup _STATE_AMQP = 3 # AMQP connection handshake _STATE_TIMEOUT = 4 # overall TCP/[SSL]/AMQP timeout _STATE_ABORTING = 5 # abort() called - aborting workflow _STATE_DONE = 6 # result reported to client def __init__(self, conn_factory, nbio): """ :param callable conn_factory: A function that takes `pika.connection.Parameters` as its only arg and returns a brand new `pika.connection.Connection`-based adapter instance each time it is called. The factory must instantiate the connection with `internal_connection_workflow=False`. :param pika.adapters.utils.nbio_interface.AbstractIOServices nbio: """ self._conn_factory = conn_factory self._nbio = nbio self._addr_record = None # type: tuple self._conn_params = None # type: pika.connection.Parameters self._on_done = None # will be provided via start() # TCP connection timeout # pylint: disable=C0301 self._tcp_timeout_ref = None # type: pika.adapters.utils.nbio_interface.AbstractTimerReference # Overall TCP/[SSL]/AMQP timeout self._stack_timeout_ref = None # type: pika.adapters.utils.nbio_interface.AbstractTimerReference # Current task self._task_ref = None # type: pika.adapters.utils.nbio_interface.AbstractIOReference self._sock = None # type: socket.socket self._amqp_conn = None # type: pika.connection.Connection self._state = self._STATE_INIT def start(self, addr_record, conn_params, on_done): """Asynchronously perform a single TCP/[SSL]/AMQP connection attempt. :param tuple addr_record: a single resolved address record compatible with `socket.getaddrinfo()` format. :param pika.connection.Parameters conn_params: :param callable on_done: Function to call upon completion of the workflow: `on_done(pika.connection.Connection | BaseException)`. 
If exception, it's going to be one of the following: `AMQPConnectorSocketConnectError` `AMQPConnectorTransportSetupError` `AMQPConnectorAMQPHandshakeError` `AMQPConnectorAborted` """ if self._state != self._STATE_INIT: raise AMQPConnectorWrongState( 'Already in progress or finished; state={}'.format(self._state)) self._addr_record = addr_record self._conn_params = conn_params self._on_done = on_done # Create socket and initiate TCP/IP connection self._state = self._STATE_TCP self._sock = socket.socket(*self._addr_record[:3]) self._sock.setsockopt(pika.compat.SOL_TCP, socket.TCP_NODELAY, 1) pika.tcp_socket_opts.set_sock_opts(self._conn_params.tcp_options, self._sock) self._sock.setblocking(False) addr = self._addr_record[4] _LOG.info('Pika version %s connecting to %r', __version__, addr) self._task_ref = self._nbio.connect_socket( self._sock, addr, on_done=self._on_tcp_connection_done) # Start socket connection timeout timer self._tcp_timeout_ref = None if self._conn_params.socket_timeout is not None: self._tcp_timeout_ref = self._nbio.call_later( self._conn_params.socket_timeout, self._on_tcp_connection_timeout) # Start overall TCP/[SSL]/AMQP stack connection timeout timer self._stack_timeout_ref = None if self._conn_params.stack_timeout is not None: self._stack_timeout_ref = self._nbio.call_later( self._conn_params.stack_timeout, self._on_overall_timeout) def abort(self): """Abort the workflow asynchronously. The completion callback will be called with an instance of AMQPConnectorAborted. NOTE: we can't cancel/close synchronously because aborting pika Connection and its transport requires an asynchronous operation. :raises AMQPConnectorWrongState: If called after completion has been reported or the workflow not started yet. 
""" if self._state == self._STATE_INIT: raise AMQPConnectorWrongState('Cannot abort before starting.') if self._state == self._STATE_DONE: raise AMQPConnectorWrongState('Cannot abort after completion was reported') self._state = self._STATE_ABORTING self._deactivate() _LOG.info( 'AMQPConnector: beginning client-initiated asynchronous ' 'abort; %r/%s', self._conn_params.host, self._addr_record) if self._amqp_conn is None: _LOG.debug('AMQPConnector.abort(): no connection, so just ' 'scheduling completion report via I/O loop.') self._nbio.add_callback_threadsafe( functools.partial(self._report_completion_and_cleanup, AMQPConnectorAborted())) else: if not self._amqp_conn.is_closing: # Initiate close of AMQP connection and wait for asynchronous # callback from the Connection instance before reporting # completion to client _LOG.debug('AMQPConnector.abort(): closing Connection.') self._amqp_conn.close( 320, 'Client-initiated abort of AMQP Connection Workflow.') else: # It's already closing, must be due to our timeout processing, # so we'll just piggy back on the callback it registered _LOG.debug('AMQPConnector.abort(): closing of Connection was ' 'already initiated.') assert self._state == self._STATE_TIMEOUT, \ ('Connection is closing, but not in TIMEOUT state; state={}' .format(self._state)) def _close(self): """Cancel asynchronous tasks and clean up to assist garbage collection. Transition to STATE_DONE. """ self._deactivate() if self._sock is not None: self._sock.close() self._sock = None self._conn_factory = None self._nbio = None self._addr_record = None self._on_done = None self._state = self._STATE_DONE def _deactivate(self): """Cancel asynchronous tasks. """ # NOTE: self._amqp_conn requires special handling as it doesn't support # synchronous closing. We special-case it elsewhere in the code where # needed. 
assert self._amqp_conn is None, \ '_deactivate called with self._amqp_conn not None; state={}'.format( self._state) if self._tcp_timeout_ref is not None: self._tcp_timeout_ref.cancel() self._tcp_timeout_ref = None if self._stack_timeout_ref is not None: self._stack_timeout_ref.cancel() self._stack_timeout_ref = None if self._task_ref is not None: self._task_ref.cancel() self._task_ref = None def _report_completion_and_cleanup(self, result): """Clean up and invoke client's `on_done` callback. :param pika.connection.Connection | BaseException result: value to pass to user's `on_done` callback. """ if isinstance(result, BaseException): _LOG.error('AMQPConnector - reporting failure: %r', result) else: _LOG.info('AMQPConnector - reporting success: %r', result) on_done = self._on_done self._close() on_done(result) def _on_tcp_connection_timeout(self): """Handle TCP connection timeout. Reports AMQPConnectorSocketConnectError with socket.timeout inside. """ self._tcp_timeout_ref = None error = AMQPConnectorSocketConnectError( socket.timeout('TCP connection attempt timed out: {!r}/{}'.format( self._conn_params.host, self._addr_record))) self._report_completion_and_cleanup(error) def _on_overall_timeout(self): """Handle overall TCP/[SSL]/AMQP connection attempt timeout by reporting `Timeout` error to the client. Reports AMQPConnectorSocketConnectError if timeout occurred during socket TCP connection attempt. Reports AMQPConnectorTransportSetupError if timeout occurred during transport [SSL] setup attempt. Reports AMQPConnectorAMQPHandshakeError if timeout occurred during AMQP handshake.
""" self._stack_timeout_ref = None prev_state = self._state self._state = self._STATE_TIMEOUT if prev_state == self._STATE_AMQP: msg = ('Timeout while setting up AMQP to {!r}/{}; ssl={}'.format( self._conn_params.host, self._addr_record, bool(self._conn_params.ssl_options))) _LOG.error(msg) # Initiate close of AMQP connection and wait for asynchronous # callback from the Connection instance before reporting completion # to client assert not self._amqp_conn.is_open, \ 'Unexpected open state of {!r}'.format(self._amqp_conn) if not self._amqp_conn.is_closing: self._amqp_conn.close(320, msg) return if prev_state == self._STATE_TCP: error = AMQPConnectorSocketConnectError( AMQPConnectorStackTimeout( 'Timeout while connecting socket to {!r}/{}'.format( self._conn_params.host, self._addr_record))) else: assert prev_state == self._STATE_TRANSPORT error = AMQPConnectorTransportSetupError( AMQPConnectorStackTimeout( 'Timeout while setting up transport to {!r}/{}; ssl={}'. format(self._conn_params.host, self._addr_record, bool(self._conn_params.ssl_options)))) self._report_completion_and_cleanup(error) def _on_tcp_connection_done(self, exc): """Handle completion of asynchronous socket connection attempt. Reports AMQPConnectorSocketConnectError if TCP socket connection failed. 
:param None|BaseException exc: None on success; exception object on failure """ self._task_ref = None if self._tcp_timeout_ref is not None: self._tcp_timeout_ref.cancel() self._tcp_timeout_ref = None if exc is not None: _LOG.error('TCP Connection attempt failed: %r; dest=%r', exc, self._addr_record) self._report_completion_and_cleanup( AMQPConnectorSocketConnectError(exc)) return # We succeeded in making a TCP/IP connection to the server _LOG.debug('TCP connection to broker established: %r.', self._sock) # Now set up the transport self._state = self._STATE_TRANSPORT ssl_context = server_hostname = None if self._conn_params.ssl_options is not None: ssl_context = self._conn_params.ssl_options.context server_hostname = self._conn_params.ssl_options.server_hostname if server_hostname is None: server_hostname = self._conn_params.host self._task_ref = self._nbio.create_streaming_connection( protocol_factory=functools.partial(self._conn_factory, self._conn_params), sock=self._sock, ssl_context=ssl_context, server_hostname=server_hostname, on_done=self._on_transport_establishment_done) self._sock = None # create_streaming_connection() takes ownership def _on_transport_establishment_done(self, result): """Handle asynchronous completion of `AbstractIOServices.create_streaming_connection()` Reports AMQPConnectorTransportSetupError if transport ([SSL]) setup failed. :param sequence|BaseException result: On success, a two-tuple (transport, protocol); on failure, exception instance. """ self._task_ref = None if isinstance(result, BaseException): _LOG.error( 'Attempt to create the streaming transport failed: %r; ' '%r/%s; ssl=%s', result, self._conn_params.host, self._addr_record, bool(self._conn_params.ssl_options)) self._report_completion_and_cleanup( AMQPConnectorTransportSetupError(result)) return # We succeeded in setting up the streaming transport! 
        # result is a two-tuple (transport, protocol)
        _LOG.info('Streaming transport linked up: %r.', result)

        _transport, self._amqp_conn = result

        # AMQP handshake is in progress - initiated during transport link-up
        self._state = self._STATE_AMQP

        # We explicitly remove default handler because it raises an exception.
        self._amqp_conn.add_on_open_error_callback(
            self._on_amqp_handshake_done, remove_default=True)
        self._amqp_conn.add_on_open_callback(self._on_amqp_handshake_done)

    def _on_amqp_handshake_done(self, connection, error=None):
        """Handle completion of AMQP connection handshake attempt.

        NOTE: we handle two types of callbacks - success with just connection
        arg as well as the open-error callback with connection and error

        Reports AMQPConnectorAMQPHandshakeError if AMQP handshake failed.

        :param pika.connection.Connection connection:
        :param BaseException | None error: None on success, otherwise failure

        """
        _LOG.debug(
            'AMQPConnector: AMQP handshake attempt completed; state=%s; '
            'error=%r; %r/%s', self._state, error, self._conn_params.host,
            self._addr_record)

        # Don't need it any more; and _deactivate() checks that it's None
        self._amqp_conn = None

        if self._state == self._STATE_ABORTING:
            # Client-initiated abort takes precedence over timeout
            result = AMQPConnectorAborted()
        elif self._state == self._STATE_TIMEOUT:
            result = AMQPConnectorAMQPHandshakeError(
                AMQPConnectorStackTimeout(
                    'Timeout during AMQP handshake {!r}/{}; ssl={}'.format(
                        self._conn_params.host, self._addr_record,
                        bool(self._conn_params.ssl_options))))
        elif self._state == self._STATE_AMQP:
            if error is None:
                _LOG.debug(
                    'AMQPConnector: AMQP connection established for %r/%s: %r',
                    self._conn_params.host, self._addr_record, connection)
                result = connection
            else:
                _LOG.debug(
                    'AMQPConnector: AMQP connection handshake failed for '
                    '%r/%s: %r', self._conn_params.host, self._addr_record,
                    error)
                result = AMQPConnectorAMQPHandshakeError(error)
        else:
            # We timed out or aborted and initiated closing of the connection,
            # but this callback snuck in
            _LOG.debug(
                'AMQPConnector: Ignoring AMQP handshake completion '
                'notification due to wrong state=%s; error=%r; conn=%r',
                self._state, error, connection)
            return

        self._report_completion_and_cleanup(result)


class AbstractAMQPConnectionWorkflow(pika.compat.AbstractBase):
    """Interface for implementing a custom TCP/[SSL]/AMQP connection workflow.

    """

    def start(self, connection_configs, connector_factory, native_loop,
              on_done):
        """Asynchronously perform the workflow until success or all retries
        are exhausted. Called by the adapter.

        :param sequence connection_configs: A sequence of one or more
            `pika.connection.Parameters`-based objects. Will attempt to connect
            using each config in the given order.
        :param callable connector_factory: call it without args to obtain a new
            instance of `AMQPConnector` for each connection attempt.
            See `AMQPConnector` for details.
        :param native_loop: Native I/O loop passed by app to the adapter or
            obtained by the adapter by default.
        :param callable on_done: Function to call upon completion of the
            workflow:
            `on_done(pika.connection.Connection |
                     AMQPConnectionWorkflowFailed |
                     AMQPConnectionWorkflowAborted)`.
            `Connection`-based adapter on success,
            `AMQPConnectionWorkflowFailed` on failure,
            `AMQPConnectionWorkflowAborted` if workflow was aborted.

        :raises AMQPConnectionWorkflowWrongState: If called in wrong state,
            such as after starting the workflow.

        """
        raise NotImplementedError

    def abort(self):
        """Abort the workflow asynchronously. The completion callback will be
        called with an instance of AMQPConnectionWorkflowAborted.

        NOTE: we can't cancel/close synchronously because aborting pika
        Connection and its transport requires an asynchronous operation.

        :raises AMQPConnectionWorkflowWrongState: If called in wrong state,
            such as before starting or after completion has been reported.
""" raise NotImplementedError class AMQPConnectionWorkflow(AbstractAMQPConnectionWorkflow): """Implements Pika's default workflow for performing multiple TCP/[SSL]/AMQP connection attempts with timeouts and retries until one succeeds or all attempts fail. The workflow: while not success and retries remain: 1. For each given config (pika.connection.Parameters object): A. Perform DNS resolution of the config's host. B. Attempt to establish TCP/[SSL]/AMQP for each resolved address until one succeeds, in which case we're done. 2. If all configs failed but retries remain, resume from beginning after the given retry pause. NOTE: failure of DNS resolution is equivalent to one cycle and will be retried after the pause if retries remain. """ _SOCK_TYPE = socket.SOCK_STREAM _IPPROTO = socket.IPPROTO_TCP _STATE_INIT = 0 _STATE_ACTIVE = 1 _STATE_ABORTING = 2 _STATE_DONE = 3 def __init__(self, _until_first_amqp_attempt=False): """ :param int | float retry_pause: Non-negative number of seconds to wait before retrying the config sequence. Meaningful only if retries is greater than 0. Defaults to 2 seconds. :param bool _until_first_amqp_attempt: INTERNAL USE ONLY; ends workflow after first AMQP handshake attempt, regardless of outcome (success or failure). The automatic connection logic in `pika.connection.Connection` enables this because it's not designed/tested to reset all state properly to handle more than one AMQP handshake attempt. TODO: Do we need getaddrinfo timeout? TODO: Would it be useful to implement exponential back-off? """ self._attempts_remaining = None # supplied by start() self._retry_pause = None # supplied by start() self._until_first_amqp_attempt = _until_first_amqp_attempt # Provided by set_io_services() # pylint: disable=C0301 self._nbio = None # type: pika.adapters.utils.nbio_interface.AbstractIOServices # Current index within `_connection_configs`; initialized when # starting a new connection sequence. 
        self._current_config_index = None

        self._connection_configs = None  # supplied by start()
        self._connector_factory = None  # supplied by start()
        self._on_done = None  # supplied by start()

        self._connector = None  # type: AMQPConnector

        self._task_ref = None  # current cancelable asynchronous task or timer
        self._addrinfo_iter = None

        # Exceptions from all failed connection attempts in this workflow
        self._connection_errors = []

        self._state = self._STATE_INIT

    def set_io_services(self, nbio):
        """Called by the connection adapter only on pika's
        `AMQPConnectionWorkflow` instance to provide it the adapter-specific
        `AbstractIOServices` object before calling the `start()` method.

        NOTE: Custom workflow implementations should use the native I/O loop
        directly because `AbstractIOServices` is private to Pika implementation
        and its interface may change without notice.

        :param pika.adapters.utils.nbio_interface.AbstractIOServices nbio:

        """
        self._nbio = nbio

    def start(
            self,
            connection_configs,
            connector_factory,
            native_loop,  # pylint: disable=W0613
            on_done):
        """Override `AbstractAMQPConnectionWorkflow.start()`.

        NOTE: This implementation uses `connection_attempts` and `retry_delay`
        values from the last element of the given `connection_configs` sequence
        as the overall number of connection attempts of the entire
        `connection_configs` sequence and pause between each sequence.
""" if self._state != self._STATE_INIT: raise AMQPConnectorWrongState( 'Already in progress or finished; state={}'.format(self._state)) try: iter(connection_configs) except Exception as error: raise TypeError( 'connection_configs does not support iteration: {!r}'.format( error)) if not connection_configs: raise ValueError( 'connection_configs is empty: {!r}.'.format(connection_configs)) self._connection_configs = connection_configs self._connector_factory = connector_factory self._on_done = on_done self._attempts_remaining = connection_configs[-1].connection_attempts self._retry_pause = connection_configs[-1].retry_delay self._state = self._STATE_ACTIVE _LOG.debug('Starting AMQP Connection workflow asynchronously.') # Begin from our own I/O loop context to avoid calling back into client # from client's call here self._task_ref = self._nbio.call_later( 0, functools.partial(self._start_new_cycle_async, first=True)) def abort(self): """Override `AbstractAMQPConnectionWorkflow.abort()`. """ if self._state == self._STATE_INIT: raise AMQPConnectorWrongState('Cannot abort before starting.') elif self._state == self._STATE_DONE: raise AMQPConnectorWrongState( 'Cannot abort after completion was reported') self._state = self._STATE_ABORTING self._deactivate() _LOG.info('AMQPConnectionWorkflow: beginning client-initiated ' 'asynchronous abort.') if self._connector is None: _LOG.debug('AMQPConnectionWorkflow.abort(): no connector, so just ' 'scheduling completion report via I/O loop.') self._nbio.add_callback_threadsafe( functools.partial(self._report_completion_and_cleanup, AMQPConnectionWorkflowAborted())) else: _LOG.debug('AMQPConnectionWorkflow.abort(): requesting ' 'connector.abort().') self._connector.abort() def _close(self): """Cancel asynchronous tasks and clean up to assist garbage collection. Transition to _STATE_DONE. 
""" self._deactivate() self._connection_configs = None self._nbio = None self._connector_factory = None self._on_done = None self._connector = None self._addrinfo_iter = None self._connection_errors = None self._state = self._STATE_DONE def _deactivate(self): """Cancel asynchronous tasks. """ if self._task_ref is not None: self._task_ref.cancel() self._task_ref = None def _report_completion_and_cleanup(self, result): """Clean up and invoke client's `on_done` callback. :param pika.connection.Connection | AMQPConnectionWorkflowFailed result: value to pass to user's `on_done` callback. """ if isinstance(result, BaseException): _LOG.error('AMQPConnectionWorkflow - reporting failure: %r', result) else: _LOG.info('AMQPConnectionWorkflow - reporting success: %r', result) on_done = self._on_done self._close() on_done(result) def _start_new_cycle_async(self, first): """Start a new workflow cycle (if any more attempts are left) beginning with the first Parameters object in self._connection_configs. If out of attempts, report `AMQPConnectionWorkflowFailed`. :param bool first: if True, don't delay; otherwise delay next attempt by `self._retry_pause` seconds. """ self._task_ref = None assert self._attempts_remaining >= 0, self._attempts_remaining if self._attempts_remaining <= 0: error = AMQPConnectionWorkflowFailed(self._connection_errors) _LOG.error('AMQP connection workflow failed: %r.', error) self._report_completion_and_cleanup(error) return self._attempts_remaining -= 1 _LOG.debug( 'Beginning a new AMQP connection workflow cycle; attempts ' 'remaining after this: %s', self._attempts_remaining) self._current_config_index = None self._task_ref = self._nbio.call_later( 0 if first else self._retry_pause, self._try_next_config_async) def _try_next_config_async(self): """Attempt to connect using the next Parameters config. If there are no more configs, start a new cycle. 
""" self._task_ref = None if self._current_config_index is None: self._current_config_index = 0 else: self._current_config_index += 1 if self._current_config_index >= len(self._connection_configs): _LOG.debug('_try_next_config_async: starting a new cycle.') self._start_new_cycle_async(first=False) return params = self._connection_configs[self._current_config_index] _LOG.debug('_try_next_config_async: %r:%s', params.host, params.port) # Begin with host address resolution assert self._task_ref is None self._task_ref = self._nbio.getaddrinfo( host=params.host, port=params.port, socktype=self._SOCK_TYPE, proto=self._IPPROTO, on_done=self._on_getaddrinfo_async_done) def _on_getaddrinfo_async_done(self, addrinfos_or_exc): """Handles completion callback from asynchronous `getaddrinfo()`. :param list | BaseException addrinfos_or_exc: resolved address records returned by `getaddrinfo()` or an exception object from failure. """ self._task_ref = None if isinstance(addrinfos_or_exc, BaseException): _LOG.error('getaddrinfo failed: %r.', addrinfos_or_exc) self._connection_errors.append(addrinfos_or_exc) self._start_new_cycle_async(first=False) return _LOG.debug('getaddrinfo returned %s records', len(addrinfos_or_exc)) self._addrinfo_iter = iter(addrinfos_or_exc) self._try_next_resolved_address() def _try_next_resolved_address(self): """Try connecting using next resolved address. If there aren't any left, continue with next Parameters config. 
""" try: addr_record = next(self._addrinfo_iter) except StopIteration: _LOG.debug( '_try_next_resolved_address: continuing with next config.') self._try_next_config_async() return _LOG.debug('Attempting to connect using address record %r', addr_record) self._connector = self._connector_factory() # type: AMQPConnector self._connector.start( addr_record=addr_record, conn_params=self._connection_configs[self._current_config_index], on_done=self._on_connector_done) def _on_connector_done(self, conn_or_exc): """Handle completion of connection attempt by `AMQPConnector`. :param pika.connection.Connection | BaseException conn_or_exc: See `AMQPConnector.start()` for exception details. """ self._connector = None _LOG.debug('Connection attempt completed with %r', conn_or_exc) if isinstance(conn_or_exc, BaseException): self._connection_errors.append(conn_or_exc) if isinstance(conn_or_exc, AMQPConnectorAborted): assert self._state == self._STATE_ABORTING, \ 'Expected _STATE_ABORTING, but got {!r}'.format(self._state) self._report_completion_and_cleanup( AMQPConnectionWorkflowAborted()) elif (self._until_first_amqp_attempt and isinstance(conn_or_exc, AMQPConnectorAMQPHandshakeError)): _LOG.debug('Ending AMQP connection workflow after first failed ' 'AMQP handshake due to _until_first_amqp_attempt.') if isinstance(conn_or_exc.exception, pika.exceptions.ConnectionOpenAborted): error = AMQPConnectionWorkflowAborted else: error = AMQPConnectionWorkflowFailed( self._connection_errors) self._report_completion_and_cleanup(error) else: self._try_next_resolved_address() else: # Success! self._report_completion_and_cleanup(conn_or_exc) pika-1.2.0/pika/adapters/utils/io_services_utils.py000077500000000000000000001505061400701476500224470ustar00rootroot00000000000000"""Utilities for implementing `nbio_interface.AbstractIOServices` for pika connection adapters. 
""" import collections import errno import functools import logging import numbers import os import socket import ssl import sys import traceback from pika.adapters.utils.nbio_interface import (AbstractIOReference, AbstractStreamTransport) import pika.compat import pika.diagnostic_utils # "Try again" error codes for non-blocking socket I/O - send()/recv(). # NOTE: POSIX.1 allows either error to be returned for this case and doesn't require # them to have the same value. _TRY_IO_AGAIN_SOCK_ERROR_CODES = ( errno.EAGAIN, errno.EWOULDBLOCK, ) # "Connection establishment pending" error codes for non-blocking socket # connect() call. # NOTE: EINPROGRESS for Posix and EWOULDBLOCK for Windows _CONNECTION_IN_PROGRESS_SOCK_ERROR_CODES = ( errno.EINPROGRESS, errno.EWOULDBLOCK, ) _LOGGER = logging.getLogger(__name__) # Decorator that logs exceptions escaping from the decorated function _log_exceptions = pika.diagnostic_utils.create_log_exception_decorator(_LOGGER) # pylint: disable=C0103 def check_callback_arg(callback, name): """Raise TypeError if callback is not callable :param callback: callback to check :param name: Name to include in exception text :raises TypeError: """ if not callable(callback): raise TypeError('{} must be callable, but got {!r}'.format( name, callback)) def check_fd_arg(fd): """Raise TypeError if file descriptor is not an integer :param fd: file descriptor :raises TypeError: """ if not isinstance(fd, numbers.Integral): raise TypeError( 'Paramter must be a file descriptor, but got {!r}'.format(fd)) def _retry_on_sigint(func): """Function decorator for retrying on SIGINT. 
""" @functools.wraps(func) def retry_sigint_wrap(*args, **kwargs): """Wrapper for decorated function""" while True: try: return func(*args, **kwargs) except pika.compat.SOCKET_ERROR as error: if error.errno == errno.EINTR: continue else: raise return retry_sigint_wrap class SocketConnectionMixin(object): """Implements `pika.adapters.utils.nbio_interface.AbstractIOServices.connect_socket()` on top of `pika.adapters.utils.nbio_interface.AbstractFileDescriptorServices` and basic `pika.adapters.utils.nbio_interface.AbstractIOServices`. """ def connect_socket(self, sock, resolved_addr, on_done): """Implement :py:meth:`.nbio_interface.AbstractIOServices.connect_socket()`. """ return _AsyncSocketConnector( nbio=self, sock=sock, resolved_addr=resolved_addr, on_done=on_done).start() class StreamingConnectionMixin(object): """Implements `.nbio_interface.AbstractIOServices.create_streaming_connection()` on top of `.nbio_interface.AbstractFileDescriptorServices` and basic `nbio_interface.AbstractIOServices` services. """ def create_streaming_connection(self, protocol_factory, sock, on_done, ssl_context=None, server_hostname=None): """Implement :py:meth:`.nbio_interface.AbstractIOServices.create_streaming_connection()`. 
""" try: return _AsyncStreamConnector( nbio=self, protocol_factory=protocol_factory, sock=sock, ssl_context=ssl_context, server_hostname=server_hostname, on_done=on_done).start() except Exception as error: _LOGGER.error('create_streaming_connection(%s) failed: %r', sock, error) # Close the socket since this function takes ownership try: sock.close() except Exception as error: # pylint: disable=W0703 # We log and suppress the exception from sock.close() so that # the original error from _AsyncStreamConnector constructor will # percolate _LOGGER.error('%s.close() failed: %r', sock, error) raise class _AsyncServiceAsyncHandle(AbstractIOReference): """This module's adaptation of `.nbio_interface.AbstractIOReference` """ def __init__(self, subject): """ :param subject: subject of the reference containing a `cancel()` method """ self._cancel = subject.cancel def cancel(self): """Cancel pending operation :returns: False if was already done or cancelled; True otherwise :rtype: bool """ return self._cancel() class _AsyncSocketConnector(object): """Connects the given non-blocking socket asynchronously using `.nbio_interface.AbstractFileDescriptorServices` and basic `.nbio_interface.AbstractIOServices`. Used for implementing `.nbio_interface.AbstractIOServices.connect_socket()`. 
""" _STATE_NOT_STARTED = 0 # start() not called yet _STATE_ACTIVE = 1 # workflow started _STATE_CANCELED = 2 # workflow aborted by user's cancel() call _STATE_COMPLETED = 3 # workflow completed: succeeded or failed def __init__(self, nbio, sock, resolved_addr, on_done): """ :param AbstractIOServices | AbstractFileDescriptorServices nbio: :param socket.socket sock: non-blocking socket that needs to be connected via `socket.socket.connect()` :param tuple resolved_addr: resolved destination address/port two-tuple which is compatible with the given's socket's address family :param callable on_done: user callback that takes None upon successful completion or exception upon error (check for `BaseException`) as its only arg. It will not be called if the operation was cancelled. :raises ValueError: if host portion of `resolved_addr` is not an IP address or is inconsistent with the socket's address family as validated via `socket.inet_pton()` """ check_callback_arg(on_done, 'on_done') try: socket.inet_pton(sock.family, resolved_addr[0]) except Exception as error: # pylint: disable=W0703 if not hasattr(socket, 'inet_pton'): _LOGGER.debug( 'Unable to check resolved address: no socket.inet_pton().') else: msg = ('Invalid or unresolved IP address ' '{!r} for socket {}: {!r}').format( resolved_addr, sock, error) _LOGGER.error(msg) raise ValueError(msg) self._nbio = nbio self._sock = sock self._addr = resolved_addr self._on_done = on_done self._state = self._STATE_NOT_STARTED self._watching_socket_events = False @_log_exceptions def _cleanup(self): """Remove socket watcher, if any """ if self._watching_socket_events: self._watching_socket_events = False self._nbio.remove_writer(self._sock.fileno()) def start(self): """Start asynchronous connection establishment. 
:rtype: AbstractIOReference """ assert self._state == self._STATE_NOT_STARTED, ( '_AsyncSocketConnector.start(): expected _STATE_NOT_STARTED', self._state) self._state = self._STATE_ACTIVE # Continue the rest of the operation on the I/O loop to avoid calling # user's completion callback from the scope of user's call self._nbio.add_callback_threadsafe(self._start_async) return _AsyncServiceAsyncHandle(self) def cancel(self): """Cancel pending connection request without calling user's completion callback. :returns: False if was already done or cancelled; True otherwise :rtype: bool """ if self._state == self._STATE_ACTIVE: self._state = self._STATE_CANCELED _LOGGER.debug('User canceled connection request for %s to %s', self._sock, self._addr) self._cleanup() return True _LOGGER.debug( '_AsyncSocketConnector cancel requested when not ACTIVE: ' 'state=%s; %s', self._state, self._sock) return False @_log_exceptions def _report_completion(self, result): """Advance to COMPLETED state, remove socket watcher, and invoke user's completion callback. 
        :param BaseException | None result: value to pass in user's callback

        """
        _LOGGER.debug('_AsyncSocketConnector._report_completion(%r); %s',
                      result, self._sock)

        assert isinstance(result, (BaseException, type(None))), (
            '_AsyncSocketConnector._report_completion() expected exception or '
            'None as result.', result)
        assert self._state == self._STATE_ACTIVE, (
            '_AsyncSocketConnector._report_completion() expected '
            '_STATE_ACTIVE', self._state)

        self._state = self._STATE_COMPLETED

        self._cleanup()

        self._on_done(result)

    @_log_exceptions
    def _start_async(self):
        """Called as callback from I/O loop to kick-start the workflow, so
        it's safe to call user's completion callback from here, if needed

        """
        if self._state != self._STATE_ACTIVE:
            # Must have been canceled by user before we were called
            _LOGGER.debug(
                'Abandoning sock=%s connection establishment to %s '
                'due to inactive state=%s', self._sock, self._addr, self._state)
            return

        try:
            self._sock.connect(self._addr)
        except (Exception, pika.compat.SOCKET_ERROR) as error:  # pylint: disable=W0703
            if (isinstance(error, pika.compat.SOCKET_ERROR) and
                    error.errno in _CONNECTION_IN_PROGRESS_SOCK_ERROR_CODES):
                # Connection establishment is pending
                pass
            else:
                _LOGGER.error('%s.connect(%s) failed: %r', self._sock,
                              self._addr, error)
                self._report_completion(error)
                return

        # Get notified when the socket becomes writable
        try:
            self._nbio.set_writer(self._sock.fileno(), self._on_writable)
        except Exception as error:  # pylint: disable=W0703
            _LOGGER.exception('async.set_writer(%s) failed: %r', self._sock,
                              error)
            self._report_completion(error)
            return
        else:
            self._watching_socket_events = True
            _LOGGER.debug('Connection-establishment is in progress for %s.',
                          self._sock)

    @_log_exceptions
    def _on_writable(self):
        """Called when socket connects or fails to. Check for predicament and
        invoke user's completion callback.
""" if self._state != self._STATE_ACTIVE: # This should never happen since we remove the watcher upon # `cancel()` _LOGGER.error( 'Socket connection-establishment event watcher ' 'called in inactive state (ignoring): %s; state=%s', self._sock, self._state) return # The moment of truth... error_code = self._sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) if not error_code: _LOGGER.info('Socket connected: %s', self._sock) result = None else: error_msg = os.strerror(error_code) _LOGGER.error('Socket failed to connect: %s; error=%s (%s)', self._sock, error_code, error_msg) result = pika.compat.SOCKET_ERROR(error_code, error_msg) self._report_completion(result) class _AsyncStreamConnector(object): """Performs asynchronous SSL session establishment, if requested, on the already-connected socket and links the streaming transport to protocol. Used for implementing `.nbio_interface.AbstractIOServices.create_streaming_connection()`. """ _STATE_NOT_STARTED = 0 # start() not called yet _STATE_ACTIVE = 1 # start() called and kicked off the workflow _STATE_CANCELED = 2 # workflow terminated by cancel() request _STATE_COMPLETED = 3 # workflow terminated by success or failure def __init__(self, nbio, protocol_factory, sock, ssl_context, server_hostname, on_done): """ NOTE: We take ownership of the given socket upon successful completion of the constructor. See `AbstractIOServices.create_streaming_connection()` for detailed documentation of the corresponding args. 
:param AbstractIOServices | AbstractFileDescriptorServices nbio: :param callable protocol_factory: :param socket.socket sock: :param ssl.SSLContext | None ssl_context: :param str | None server_hostname: :param callable on_done: """ check_callback_arg(protocol_factory, 'protocol_factory') check_callback_arg(on_done, 'on_done') if not isinstance(ssl_context, (type(None), ssl.SSLContext)): raise ValueError('Expected ssl_context=None | ssl.SSLContext, but ' 'got {!r}'.format(ssl_context)) if server_hostname is not None and ssl_context is None: raise ValueError('Non-None server_hostname must not be passed ' 'without ssl context') # Check that the socket connection establishment had completed in order # to avoid stalling while waiting for the socket to become readable # and/or writable. try: sock.getpeername() except Exception as error: raise ValueError( 'Expected connected socket, but getpeername() failed: ' 'error={!r}; {}; '.format(error, sock)) self._nbio = nbio self._protocol_factory = protocol_factory self._sock = sock self._ssl_context = ssl_context self._server_hostname = server_hostname self._on_done = on_done self._state = self._STATE_NOT_STARTED self._watching_socket = False @_log_exceptions def _cleanup(self, close): """Cancel pending async operations, if any :param bool close: close the socket if true """ _LOGGER.debug('_AsyncStreamConnector._cleanup(%r)', close) if self._watching_socket: _LOGGER.debug( '_AsyncStreamConnector._cleanup(%r): removing RdWr; %s', close, self._sock) self._watching_socket = False self._nbio.remove_reader(self._sock.fileno()) self._nbio.remove_writer(self._sock.fileno()) try: if close: _LOGGER.debug( '_AsyncStreamConnector._cleanup(%r): closing socket; %s', close, self._sock) try: self._sock.close() except Exception as error: # pylint: disable=W0703 _LOGGER.exception('_sock.close() failed: error=%r; %s', error, self._sock) raise finally: self._sock = None self._nbio = None self._protocol_factory = None self._ssl_context = None 
self._server_hostname = None self._on_done = None def start(self): """Kick off the workflow :rtype: AbstractIOReference """ _LOGGER.debug('_AsyncStreamConnector.start(); %s', self._sock) assert self._state == self._STATE_NOT_STARTED, ( '_AsyncStreamConnector.start() expected ' '_STATE_NOT_STARTED', self._state) self._state = self._STATE_ACTIVE # Request callback from I/O loop to start processing so that we don't # end up making callbacks from the caller's scope self._nbio.add_callback_threadsafe(self._start_async) return _AsyncServiceAsyncHandle(self) def cancel(self): """Cancel pending connection request without calling user's completion callback. :returns: False if was already done or cancelled; True otherwise :rtype: bool """ if self._state == self._STATE_ACTIVE: self._state = self._STATE_CANCELED _LOGGER.debug('User canceled streaming linkup for %s', self._sock) # Close the socket, since we took ownership self._cleanup(close=True) return True _LOGGER.debug( '_AsyncStreamConnector cancel requested when not ACTIVE: ' 'state=%s; %s', self._state, self._sock) return False @_log_exceptions def _report_completion(self, result): """Advance to COMPLETED state, cancel async operation(s), and invoke user's completion callback. :param BaseException | tuple result: value to pass in user's callback. 
`tuple(transport, protocol)` on success, exception on error """ _LOGGER.debug('_AsyncStreamConnector._report_completion(%r); %s', result, self._sock) assert isinstance(result, (BaseException, tuple)), ( '_AsyncStreamConnector._report_completion() expected exception or ' 'tuple as result.', result, self._state) assert self._state == self._STATE_ACTIVE, ( '_AsyncStreamConnector._report_completion() expected ' '_STATE_ACTIVE', self._state) self._state = self._STATE_COMPLETED # Notify user try: self._on_done(result) except Exception: _LOGGER.exception('%r: _on_done(%r) failed.', self._report_completion, result) raise finally: # NOTE: Close the socket on error, since we took ownership of it self._cleanup(close=isinstance(result, BaseException)) @_log_exceptions def _start_async(self): """Called as callback from I/O loop to kick-start the workflow, so it's safe to call user's completion callback from here if needed """ _LOGGER.debug('_AsyncStreamConnector._start_async(); %s', self._sock) if self._state != self._STATE_ACTIVE: # Must have been canceled by user before we were called _LOGGER.debug( 'Abandoning streaming linkup due to inactive state ' 'transition; state=%s; %s; .', self._state, self._sock) return # Link up protocol and transport if this is a plaintext linkup; # otherwise kick-off SSL workflow first if self._ssl_context is None: self._linkup() else: _LOGGER.debug('Starting SSL handshake on %s', self._sock) # Wrap our plain socket in ssl socket try: self._sock = self._ssl_context.wrap_socket( self._sock, server_side=False, do_handshake_on_connect=False, suppress_ragged_eofs=False, # False = error on incoming EOF server_hostname=self._server_hostname) except Exception as error: # pylint: disable=W0703 _LOGGER.exception('SSL wrap_socket(%s) failed: %r', self._sock, error) self._report_completion(error) return self._do_ssl_handshake() @_log_exceptions def _linkup(self): """Connection is ready: instantiate and link up transport and protocol, and invoke user's 
completion callback. """ _LOGGER.debug('_AsyncStreamConnector._linkup()') transport = None try: # Create the protocol try: protocol = self._protocol_factory() except Exception as error: _LOGGER.exception('protocol_factory() failed: error=%r; %s', error, self._sock) raise if self._ssl_context is None: # Create plaintext streaming transport try: transport = _AsyncPlaintextTransport( self._sock, protocol, self._nbio) except Exception as error: _LOGGER.exception('PlainTransport() failed: error=%r; %s', error, self._sock) raise else: # Create SSL streaming transport try: transport = _AsyncSSLTransport(self._sock, protocol, self._nbio) except Exception as error: _LOGGER.exception('SSLTransport() failed: error=%r; %s', error, self._sock) raise _LOGGER.debug('_linkup(): created transport %r', transport) # Acquaint protocol with its transport try: protocol.connection_made(transport) except Exception as error: _LOGGER.exception( 'protocol.connection_made(%r) failed: error=%r; %s', transport, error, self._sock) raise _LOGGER.debug('_linkup(): introduced transport to protocol %r; %r', transport, protocol) except Exception as error: # pylint: disable=W0703 result = error else: result = (transport, protocol) self._report_completion(result) @_log_exceptions def _do_ssl_handshake(self): """Perform asynchronous SSL handshake on the already wrapped socket """ _LOGGER.debug('_AsyncStreamConnector._do_ssl_handshake()') if self._state != self._STATE_ACTIVE: _LOGGER.debug( '_do_ssl_handshake: Abandoning streaming linkup due ' 'to inactive state transition; state=%s; %s; .', self._state, self._sock) return done = False try: try: self._sock.do_handshake() except ssl.SSLError as error: if error.errno == ssl.SSL_ERROR_WANT_READ: _LOGGER.debug('SSL handshake wants read; %s.', self._sock) self._watching_socket = True self._nbio.set_reader(self._sock.fileno(), self._do_ssl_handshake) self._nbio.remove_writer(self._sock.fileno()) elif error.errno == ssl.SSL_ERROR_WANT_WRITE: _LOGGER.debug('SSL 
handshake wants write. %s', self._sock) self._watching_socket = True self._nbio.set_writer(self._sock.fileno(), self._do_ssl_handshake) self._nbio.remove_reader(self._sock.fileno()) else: # Outer catch will report it raise else: done = True _LOGGER.info('SSL handshake completed successfully: %s', self._sock) except Exception as error: # pylint: disable=W0703 _LOGGER.exception('SSL do_handshake failed: error=%r; %s', error, self._sock) self._report_completion(error) return if done: # Suspend I/O and link up transport with protocol _LOGGER.debug( '_do_ssl_handshake: removing watchers ahead of linkup: %s', self._sock) self._nbio.remove_reader(self._sock.fileno()) self._nbio.remove_writer(self._sock.fileno()) # So that our `_cleanup()` won't interfere with the transport's # socket watcher configuration. self._watching_socket = False _LOGGER.debug( '_do_ssl_handshake: pre-linkup removal of watchers is done; %s', self._sock) self._linkup() class _AsyncTransportBase( # pylint: disable=W0223 AbstractStreamTransport): """Base class for `_AsyncPlaintextTransport` and `_AsyncSSLTransport`. """ _STATE_ACTIVE = 1 _STATE_FAILED = 2 # connection failed _STATE_ABORTED_BY_USER = 3 # cancel() called _STATE_COMPLETED = 4 # done with connection _MAX_RECV_BYTES = 4096 # per socket.recv() documentation recommendation # Max per consume call to prevent event starvation _MAX_CONSUME_BYTES = 1024 * 100 class RxEndOfFile(OSError): """We raise this internally when EOF (empty read) is detected on input. """ def __init__(self): super(_AsyncTransportBase.RxEndOfFile, self).__init__( -1, 'End of input stream (EOF)') def __init__(self, sock, protocol, nbio): """ :param socket.socket | ssl.SSLSocket sock: connected socket :param pika.adapters.utils.nbio_interface.AbstractStreamProtocol protocol: corresponding protocol in this transport/protocol pairing; the protocol already had its `connection_made()` method called. 
:param AbstractIOServices | AbstractFileDescriptorServices nbio: """ _LOGGER.debug('_AsyncTransportBase.__init__: %s', sock) self._sock = sock self._protocol = protocol self._nbio = nbio self._state = self._STATE_ACTIVE self._tx_buffers = collections.deque() self._tx_buffered_byte_count = 0 def abort(self): """Close connection abruptly without waiting for pending I/O to complete. Will invoke the corresponding protocol's `connection_lost()` method asynchronously (not in context of the abort() call). :raises Exception: Exception-based exception on error """ _LOGGER.info('Aborting transport connection: state=%s; %s', self._state, self._sock) self._initiate_abort(None) def get_protocol(self): """Return the protocol linked to this transport. :rtype: pika.adapters.utils.nbio_interface.AbstractStreamProtocol """ return self._protocol def get_write_buffer_size(self): """ :returns: Current size of output data buffered by the transport :rtype: int """ return self._tx_buffered_byte_count def _buffer_tx_data(self, data): """Buffer the given data until it can be sent asynchronously. :param bytes data: :raises ValueError: if called with empty data """ if not data: _LOGGER.error('write() called with empty data: state=%s; %s', self._state, self._sock) raise ValueError('write() called with empty data {!r}'.format(data)) if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring write() called during inactive state: ' 'state=%s; %s', self._state, self._sock) return self._tx_buffers.append(data) self._tx_buffered_byte_count += len(data) def _consume(self): """Utility method for use by subclasses to ingest data from socket and dispatch it to protocol's `data_received()` method until a socket-specific "try again" exception occurs, the per-event data consumption limit is reached, the transport becomes inactive, or a fatal failure occurs.
Consumes up to `self._MAX_CONSUME_BYTES` to prevent event starvation or until state becomes inactive (e.g., `protocol.data_received()` callback aborts the transport) :raises: Whatever the corresponding `sock.recv()` raises except the socket error with errno.EINTR :raises: Whatever the `protocol.data_received()` callback raises :raises _AsyncTransportBase.RxEndOfFile: upon shutdown of input stream """ bytes_consumed = 0 while (self._state == self._STATE_ACTIVE and bytes_consumed < self._MAX_CONSUME_BYTES): data = self._sigint_safe_recv(self._sock, self._MAX_RECV_BYTES) bytes_consumed += len(data) # Empty data, should disconnect if not data: _LOGGER.error('Socket EOF; %s', self._sock) raise self.RxEndOfFile() # Pass the data to the protocol try: self._protocol.data_received(data) except Exception as error: _LOGGER.exception( 'protocol.data_received() failed: error=%r; %s', error, self._sock) raise def _produce(self): """Utility method for use by subclasses to emit data from tx_buffers. This method sends chunks from `tx_buffers` until all chunks are exhausted or sending is interrupted by an exception. Maintains integrity of `self.tx_buffers`. :raises: whatever the corresponding `sock.send()` raises except the socket error with errno.EINTR """ while self._tx_buffers: num_bytes_sent = self._sigint_safe_send(self._sock, self._tx_buffers[0]) chunk = self._tx_buffers.popleft() if num_bytes_sent < len(chunk): _LOGGER.debug('Partial send, requeuing remaining data; %s of %s', num_bytes_sent, len(chunk)) self._tx_buffers.appendleft(chunk[num_bytes_sent:]) self._tx_buffered_byte_count -= num_bytes_sent assert self._tx_buffered_byte_count >= 0, ( '_AsyncTransportBase._produce() tx buffer size underflow', self._tx_buffered_byte_count, self._state) @staticmethod @_retry_on_sigint def _sigint_safe_recv(sock, max_bytes): """Receive data from socket, retrying on SIGINT.
:param sock: stream or SSL socket :param max_bytes: maximum number of bytes to receive :returns: received data or empty bytes upon end of file :rtype: bytes :raises: whatever the corresponding `sock.recv()` raises except socket error with errno.EINTR """ return sock.recv(max_bytes) @staticmethod @_retry_on_sigint def _sigint_safe_send(sock, data): """Send data to socket, retrying on SIGINT. :param sock: stream or SSL socket :param data: data bytes to send :returns: number of bytes actually sent :rtype: int :raises: whatever the corresponding `sock.send()` raises except socket error with errno.EINTR """ return sock.send(data) @_log_exceptions def _deactivate(self): """Unregister the transport from I/O events """ if self._state == self._STATE_ACTIVE: _LOGGER.info('Deactivating transport: state=%s; %s', self._state, self._sock) self._nbio.remove_reader(self._sock.fileno()) self._nbio.remove_writer(self._sock.fileno()) self._tx_buffers.clear() @_log_exceptions def _close_and_finalize(self): """Close the transport's socket and unlink the transport from references to other assets (protocol, etc.) """ if self._state != self._STATE_COMPLETED: _LOGGER.info('Closing transport socket and unlinking: state=%s; %s', self._state, self._sock) try: self._sock.shutdown(socket.SHUT_RDWR) except pika.compat.SOCKET_ERROR: pass self._sock.close() self._sock = None self._protocol = None self._nbio = None self._state = self._STATE_COMPLETED @_log_exceptions def _initiate_abort(self, error): """Initiate asynchronous abort of the transport that concludes with a call to the protocol's `connection_lost()` method. No flushing of output buffers will take place. :param BaseException | None error: None if being canceled by user, including via falsy return value from protocol.eof_received; otherwise the exception corresponding to the failed connection.
""" _LOGGER.info( '_AsyncTransportBase._initate_abort(): Initiating abrupt ' 'asynchronous transport shutdown: state=%s; error=%r; %s', self._state, error, self._sock) assert self._state != self._STATE_COMPLETED, ( '_AsyncTransportBase._initate_abort() expected ' 'non-_STATE_COMPLETED', self._state) if self._state == self._STATE_COMPLETED: return self._deactivate() # Update state if error is None: # Being aborted by user if self._state == self._STATE_ABORTED_BY_USER: # Abort by user already pending _LOGGER.debug('_AsyncTransportBase._initiate_abort(): ' 'ignoring - user-abort already pending.') return # Notification priority is given to user-initiated abort over # failed connection self._state = self._STATE_ABORTED_BY_USER else: # Connection failed if self._state != self._STATE_ACTIVE: assert self._state == self._STATE_ABORTED_BY_USER, ( '_AsyncTransportBase._initate_abort() expected ' '_STATE_ABORTED_BY_USER', self._state) return self._state = self._STATE_FAILED # Schedule callback from I/O loop to avoid potential reentry into user # code self._nbio.add_callback_threadsafe( functools.partial(self._connection_lost_notify_async, error)) @_log_exceptions def _connection_lost_notify_async(self, error): """Handle aborting of transport either due to socket error or user- initiated `abort()` call. Must be called from an I/O loop callback owned by us in order to avoid reentry into user code from user's API call into the transport. :param BaseException | None error: None if being canceled by user; otherwise the exception corresponding to the the failed connection. 
""" _LOGGER.debug('Concluding transport shutdown: state=%s; error=%r', self._state, error) if self._state == self._STATE_COMPLETED: return if error is not None and self._state != self._STATE_FAILED: # Priority is given to user-initiated abort notification assert self._state == self._STATE_ABORTED_BY_USER, ( '_AsyncTransportBase._connection_lost_notify_async() ' 'expected _STATE_ABORTED_BY_USER', self._state) return # Inform protocol try: self._protocol.connection_lost(error) except Exception as exc: # pylint: disable=W0703 _LOGGER.exception('protocol.connection_lost(%r) failed: exc=%r; %s', error, exc, self._sock) # Re-raise, since we've exhausted our normal failure notification # mechanism (i.e., connection_lost()) raise finally: self._close_and_finalize() class _AsyncPlaintextTransport(_AsyncTransportBase): """Implementation of `nbio_interface.AbstractStreamTransport` for a plaintext connection. """ def __init__(self, sock, protocol, nbio): """ :param socket.socket sock: non-blocking connected socket :param pika.adapters.utils.nbio_interface.AbstractStreamProtocol protocol: corresponding protocol in this transport/protocol pairing; the protocol already had its `connection_made()` method called. :param AbstractIOServices | AbstractFileDescriptorServices nbio: """ super(_AsyncPlaintextTransport, self).__init__(sock, protocol, nbio) # Request to be notified of incoming data; we'll watch for writability # only when our write buffer is non-empty self._nbio.set_reader(self._sock.fileno(), self._on_socket_readable) def write(self, data): """Buffer the given data until it can be sent asynchronously. 
:param bytes data: :raises ValueError: if called with empty data """ if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring write() called during inactive state: ' 'state=%s; %s', self._state, self._sock) return assert data, ('_AsyncPlaintextTransport.write(): empty data from user.', data, self._state) # pika/pika#1286 # NOTE: Modify code to write data to buffer before setting writer. # Otherwise a race condition can occur where ioloop executes writer # while buffer is still empty. tx_buffer_was_empty = self.get_write_buffer_size() == 0 self._buffer_tx_data(data) if tx_buffer_was_empty: self._nbio.set_writer(self._sock.fileno(), self._on_socket_writable) _LOGGER.debug('Turned on writability watcher: %s', self._sock) @_log_exceptions def _on_socket_readable(self): """Ingest data from socket and dispatch it to protocol until exception occurs (typically EAGAIN or EWOULDBLOCK), per-event data consumption limit is reached, transport becomes inactive, or failure. """ if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring readability notification due to inactive ' 'state: state=%s; %s', self._state, self._sock) return try: self._consume() except self.RxEndOfFile: try: keep_open = self._protocol.eof_received() except Exception as error: # pylint: disable=W0703 _LOGGER.exception( 'protocol.eof_received() failed: error=%r; %s', error, self._sock) self._initiate_abort(error) else: if keep_open: _LOGGER.info( 'protocol.eof_received() elected to keep open: %s', self._sock) self._nbio.remove_reader(self._sock.fileno()) else: _LOGGER.info('protocol.eof_received() elected to close: %s', self._sock) self._initiate_abort(None) except (Exception, pika.compat.SOCKET_ERROR) as error: # pylint: disable=W0703 if (isinstance(error, pika.compat.SOCKET_ERROR) and error.errno in _TRY_IO_AGAIN_SOCK_ERROR_CODES): _LOGGER.debug('Recv would block on %s', self._sock) else: _LOGGER.exception( '_AsyncBaseTransport._consume() failed, aborting ' 'connection: error=%r; sock=%s; 
Caller\'s stack:\n%s', error, self._sock, ''.join( traceback.format_exception(*sys.exc_info()))) self._initiate_abort(error) else: if self._state != self._STATE_ACTIVE: # Most likely our protocol's `data_received()` aborted the # transport _LOGGER.debug( 'Leaving Plaintext consumer due to inactive ' 'state: state=%s; %s', self._state, self._sock) @_log_exceptions def _on_socket_writable(self): """Handle writable socket notification """ if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring writability notification due to inactive ' 'state: state=%s; %s', self._state, self._sock) return # We shouldn't be getting called with empty tx buffers assert self._tx_buffers, ( '_AsyncPlaintextTransport._on_socket_writable() called, ' 'but _tx_buffers is empty.', self._state) try: # Transmit buffered data to remote socket self._produce() except (Exception, pika.compat.SOCKET_ERROR) as error: # pylint: disable=W0703 if (isinstance(error, pika.compat.SOCKET_ERROR) and error.errno in _TRY_IO_AGAIN_SOCK_ERROR_CODES): _LOGGER.debug('Send would block on %s', self._sock) else: _LOGGER.exception( '_AsyncBaseTransport._produce() failed, aborting ' 'connection: error=%r; sock=%s; Caller\'s stack:\n%s', error, self._sock, ''.join( traceback.format_exception(*sys.exc_info()))) self._initiate_abort(error) else: if not self._tx_buffers: self._nbio.remove_writer(self._sock.fileno()) _LOGGER.debug('Turned off writability watcher: %s', self._sock) class _AsyncSSLTransport(_AsyncTransportBase): """Implementation of `.nbio_interface.AbstractStreamTransport` for an SSL connection. """ def __init__(self, sock, protocol, nbio): """ :param ssl.SSLSocket sock: non-blocking connected socket :param pika.adapters.utils.nbio_interface.AbstractStreamProtocol protocol: corresponding protocol in this transport/protocol pairing; the protocol already had its `connection_made()` method called. 
:param AbstractIOServices | AbstractFileDescriptorServices nbio: """ super(_AsyncSSLTransport, self).__init__(sock, protocol, nbio) self._ssl_readable_action = self._consume self._ssl_writable_action = None # Bootstrap consumer; we'll take care of producer once data is buffered self._nbio.set_reader(self._sock.fileno(), self._on_socket_readable) # Try reading asap just in case read-ahead caused some data to be buffered self._nbio.add_callback_threadsafe(self._on_socket_readable) def write(self, data): """Buffer the given data until it can be sent asynchronously. :param bytes data: :raises ValueError: if called with empty data """ if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring write() called during inactive state: ' 'state=%s; %s', self._state, self._sock) return assert data, ('_AsyncSSLTransport.write(): empty data from user.', data, self._state) # pika/pika#1286 # NOTE: Modify code to write data to buffer before setting writer. # Otherwise a race condition can occur where ioloop executes writer # while buffer is still empty.
tx_buffer_was_empty = self.get_write_buffer_size() == 0 self._buffer_tx_data(data) if tx_buffer_was_empty and self._ssl_writable_action is None: self._ssl_writable_action = self._produce self._nbio.set_writer(self._sock.fileno(), self._on_socket_writable) _LOGGER.debug('Turned on writability watcher: %s', self._sock) @_log_exceptions def _on_socket_readable(self): """Handle readable socket indication """ if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring readability notification due to inactive ' 'state: state=%s; %s', self._state, self._sock) return if self._ssl_readable_action: try: self._ssl_readable_action() except Exception as error: # pylint: disable=W0703 self._initiate_abort(error) else: _LOGGER.debug( 'SSL readable action was suppressed: ' 'ssl_writable_action=%r; %s', self._ssl_writable_action, self._sock) @_log_exceptions def _on_socket_writable(self): """Handle writable socket notification """ if self._state != self._STATE_ACTIVE: _LOGGER.debug( 'Ignoring writability notification due to inactive ' 'state: state=%s; %s', self._state, self._sock) return if self._ssl_writable_action: try: self._ssl_writable_action() except Exception as error: # pylint: disable=W0703 self._initiate_abort(error) else: _LOGGER.debug( 'SSL writable action was suppressed: ' 'ssl_readable_action=%r; %s', self._ssl_readable_action, self._sock) @_log_exceptions def _consume(self): """[override] Ingest data from socket and dispatch it to protocol until exception occurs (typically ssl.SSLError with SSL_ERROR_WANT_READ/WRITE), per-event data consumption limit is reached, transport becomes inactive, or failure. Update consumer/producer registration. 
:raises Exception: error that signals that connection needs to be aborted """ next_consume_on_readable = True try: super(_AsyncSSLTransport, self)._consume() except ssl.SSLError as error: if error.errno == ssl.SSL_ERROR_WANT_READ: _LOGGER.debug('SSL ingester wants read: %s', self._sock) elif error.errno == ssl.SSL_ERROR_WANT_WRITE: # Looks like SSL re-negotiation _LOGGER.debug('SSL ingester wants write: %s', self._sock) next_consume_on_readable = False else: _LOGGER.exception( '_AsyncBaseTransport._consume() failed, aborting ' 'connection: error=%r; sock=%s; Caller\'s stack:\n%s', error, self._sock, ''.join( traceback.format_exception(*sys.exc_info()))) raise # let outer catch block abort the transport else: if self._state != self._STATE_ACTIVE: # Most likely our protocol's `data_received()` aborted the # transport _LOGGER.debug( 'Leaving SSL consumer due to inactive ' 'state: state=%s; %s', self._state, self._sock) return # Consumer exited without exception; there may still be more, # possibly unprocessed, data records in SSL input buffers that # can be read without waiting for socket to become readable. 
# In case buffered input SSL data records still remain self._nbio.add_callback_threadsafe(self._on_socket_readable) # Update consumer registration if next_consume_on_readable: if not self._ssl_readable_action: self._nbio.set_reader(self._sock.fileno(), self._on_socket_readable) self._ssl_readable_action = self._consume # NOTE: can't use identity check, it fails for instance methods if self._ssl_writable_action == self._consume: # pylint: disable=W0143 self._nbio.remove_writer(self._sock.fileno()) self._ssl_writable_action = None else: # WANT_WRITE if not self._ssl_writable_action: self._nbio.set_writer(self._sock.fileno(), self._on_socket_writable) self._ssl_writable_action = self._consume if self._ssl_readable_action: self._nbio.remove_reader(self._sock.fileno()) self._ssl_readable_action = None # Update producer registration if self._tx_buffers and not self._ssl_writable_action: self._ssl_writable_action = self._produce self._nbio.set_writer(self._sock.fileno(), self._on_socket_writable) @_log_exceptions def _produce(self): """[override] Emit data from tx_buffers until all chunks are exhausted or sending is interrupted by an exception (typically ssl.SSLError with SSL_ERROR_WANT_READ/WRITE). Update consumer/producer registration.
:raises Exception: error that signals that connection needs to be aborted """ next_produce_on_writable = None # None means no need to produce try: super(_AsyncSSLTransport, self)._produce() except ssl.SSLError as error: if error.errno == ssl.SSL_ERROR_WANT_READ: # Looks like SSL re-negotiation _LOGGER.debug('SSL emitter wants read: %s', self._sock) next_produce_on_writable = False elif error.errno == ssl.SSL_ERROR_WANT_WRITE: _LOGGER.debug('SSL emitter wants write: %s', self._sock) next_produce_on_writable = True else: _LOGGER.exception( '_AsyncBaseTransport._produce() failed, aborting ' 'connection: error=%r; sock=%s; Caller\'s stack:\n%s', error, self._sock, ''.join( traceback.format_exception(*sys.exc_info()))) raise # let outer catch block abort the transport else: # No exception, so everything must have been written to the socket assert not self._tx_buffers, ( '_AsyncSSLTransport._produce(): no exception from parent ' 'class, but data remains in _tx_buffers.', len( self._tx_buffers)) # Update producer registration if self._tx_buffers: assert next_produce_on_writable is not None, ( '_AsyncSSLTransport._produce(): next_produce_on_writable is ' 'still None', self._state) if next_produce_on_writable: if not self._ssl_writable_action: self._nbio.set_writer(self._sock.fileno(), self._on_socket_writable) self._ssl_writable_action = self._produce # NOTE: can't use identity check, it fails for instance methods if self._ssl_readable_action == self._produce: # pylint: disable=W0143 self._nbio.remove_reader(self._sock.fileno()) self._ssl_readable_action = None else: # WANT_READ if not self._ssl_readable_action: self._nbio.set_reader(self._sock.fileno(), self._on_socket_readable) self._ssl_readable_action = self._produce if self._ssl_writable_action: self._nbio.remove_writer(self._sock.fileno()) self._ssl_writable_action = None else: # NOTE: can't use identity check, it fails for instance methods if self._ssl_readable_action == self._produce: # pylint: disable=W0143 
self._nbio.remove_reader(self._sock.fileno()) self._ssl_readable_action = None assert self._ssl_writable_action != self._produce, ( # pylint: disable=W0143 '_AsyncSSLTransport._produce(): with empty tx_buffers, ' 'writable_action cannot be _produce when readable is ' '_produce', self._state) else: # NOTE: can't use identity check, it fails for instance methods assert self._ssl_writable_action == self._produce, ( # pylint: disable=W0143 '_AsyncSSLTransport._produce(): with empty tx_buffers, ' 'expected writable_action as _produce when readable_action ' 'is not _produce', 'writable_action:', self._ssl_writable_action, 'readable_action:', self._ssl_readable_action, 'state:', self._state) self._ssl_writable_action = None self._nbio.remove_writer(self._sock.fileno()) # Update consumer registration if not self._ssl_readable_action: self._ssl_readable_action = self._consume self._nbio.set_reader(self._sock.fileno(), self._on_socket_readable) # In case input SSL data records have been buffered self._nbio.add_callback_threadsafe(self._on_socket_readable) elif self._sock.pending(): self._nbio.add_callback_threadsafe(self._on_socket_readable) pika-1.2.0/pika/adapters/utils/nbio_interface.py """Non-blocking I/O interface for pika connection adapters. I/O interface expected by `pika.adapters.base_connection.BaseConnection` NOTE: This API is modeled after asyncio in python3 for a couple of reasons 1. It's a sensible API 2. To make it easy to implement at least on top of the built-in asyncio Furthermore, the API caters to the needs of pika core and lack of generalization is intentional for the sake of reducing complexity of the implementation and testing and lessening the maintenance burden. """ import abc import pika.compat class AbstractIOServices(pika.compat.AbstractBase): """Interface to I/O services required by `pika.adapters.BaseConnection` and related utilities. NOTE: This is not a public API.
Pika users should rely on the native I/O loop APIs (e.g., asyncio event loop, tornado ioloop, twisted reactor, etc.) that correspond to the chosen Connection adapter. """ @abc.abstractmethod def get_native_ioloop(self): """Returns the native I/O loop instance, such as Twisted reactor, asyncio's or tornado's event loop. """ raise NotImplementedError @abc.abstractmethod def close(self): """Release IOLoop's resources. The `close()` method is intended to be called by Pika's own test code only after `start()` returns. After calling `close()`, no other interaction with the closed instance of `IOLoop` should be performed. NOTE: This method is provided for Pika's own test scripts that need to be able to run I/O loops generically to test multiple Connection Adapter implementations. Pika users should use the native I/O loop's API instead. """ raise NotImplementedError @abc.abstractmethod def run(self): """Run the I/O loop. It will loop until requested to exit. See `stop()`. NOTE: the outcome of restarting an instance that had been stopped is UNDEFINED! NOTE: This method is provided for Pika's own test scripts that need to be able to run I/O loops generically to test multiple Connection Adapter implementations (not all of the supported I/O Loop frameworks have methods named start/stop). Pika users should use the native I/O loop's API instead. """ raise NotImplementedError @abc.abstractmethod def stop(self): """Request exit from the ioloop. The loop is NOT guaranteed to stop before this method returns. NOTE: The outcome of calling `stop()` on a non-running instance is UNDEFINED! NOTE: This method is provided for Pika's own test scripts that need to be able to run I/O loops generically to test multiple Connection Adapter implementations (not all of the supported I/O Loop frameworks have methods named start/stop). Pika users should use the native I/O loop's API instead.
To invoke `stop()` safely from a thread other than this IOLoop's thread, call it via `add_callback_threadsafe`; e.g., `ioloop.add_callback_threadsafe(ioloop.stop)` """ raise NotImplementedError @abc.abstractmethod def add_callback_threadsafe(self, callback): """Requests a call to the given function as soon as possible. It will be called from this IOLoop's thread. NOTE: This is the only thread-safe method offered by the IOLoop adapter. All other manipulations of the IOLoop adapter and objects governed by it must be performed from the IOLoop's thread. NOTE: if you know that the requester is running on the same thread as the connection it is more efficient to use the `ioloop.call_later()` method with a delay of 0. :param callable callback: The callback method; must be callable. """ raise NotImplementedError @abc.abstractmethod def call_later(self, delay, callback): """Add the callback to the IOLoop timer to be called after delay seconds from the time of call on best-effort basis. Returns a handle to the timeout. If two are scheduled for the same time, it's undefined which one will be called first. :param float delay: The number of seconds to wait to call callback :param callable callback: The callback method :returns: A handle that can be used to cancel the request. :rtype: AbstractTimerReference """ raise NotImplementedError @abc.abstractmethod def getaddrinfo(self, host, port, on_done, family=0, socktype=0, proto=0, flags=0): """Perform the equivalent of `socket.getaddrinfo()` asynchronously. See `socket.getaddrinfo()` for the standard args. :param callable on_done: user callback that takes the return value of `socket.getaddrinfo()` upon successful completion or exception upon failure (check for `BaseException`) as its only arg. It will not be called if the operation was cancelled. 
:rtype: AbstractIOReference """ raise NotImplementedError @abc.abstractmethod def connect_socket(self, sock, resolved_addr, on_done): """Perform the equivalent of `socket.connect()` on a previously-resolved address asynchronously. IMPLEMENTATION NOTE: Pika's connection logic resolves the addresses prior to making socket connections, so we don't need to burden the implementations of this method with the extra logic of asynchronous DNS resolution. Implementations can use `socket.inet_pton()` to verify the address. :param socket.socket sock: non-blocking socket that needs to be connected via `socket.socket.connect()` :param tuple resolved_addr: resolved destination address/port two-tuple as per `socket.socket.connect()`, except that the first element must be an actual IP address that's consistent with the given socket's address family. :param callable on_done: user callback that takes None upon successful completion or exception (check for `BaseException`) upon error as its only arg. It will not be called if the operation was cancelled. :rtype: AbstractIOReference :raises ValueError: if host portion of `resolved_addr` is not an IP address or is inconsistent with the socket's address family as validated via `socket.inet_pton()` """ raise NotImplementedError @abc.abstractmethod def create_streaming_connection(self, protocol_factory, sock, on_done, ssl_context=None, server_hostname=None): """Perform SSL session establishment, if requested, on the already- connected socket and link the streaming transport/protocol pair. NOTE: This method takes ownership of the socket. :param callable protocol_factory: called without args, returns an instance with the `AbstractStreamProtocol` interface. The protocol's `connection_made(transport)` method will be called to link it to the transport after remaining connection activity (e.g., SSL session establishment), if any, is completed successfully. 
:param socket.socket sock: Already-connected, non-blocking `socket.SOCK_STREAM` socket to be used by the transport. We take ownership of this socket. :param callable on_done: User callback `on_done(BaseException | (transport, protocol))` to be notified when the asynchronous operation completes. An exception arg indicates failure (check for `BaseException`); otherwise the two-tuple will contain the linked transport/protocol pair having AbstractStreamTransport and AbstractStreamProtocol interfaces respectively. :param None | ssl.SSLContext ssl_context: if None, this will proceed as a plaintext connection; otherwise, if not None, SSL session establishment will be performed prior to linking the transport and protocol. :param str | None server_hostname: For use during SSL session establishment to match against the target server's certificate. The value `None` disables this check (which is a huge security risk) :rtype: AbstractIOReference """ raise NotImplementedError class AbstractFileDescriptorServices(pika.compat.AbstractBase): """Interface definition of common non-blocking file descriptor services required by some utility implementations. NOTE: This is not a public API. Pika users should rely on the native I/O loop APIs (e.g., asyncio event loop, tornado ioloop, twisted reactor, etc.) that corresponds to the chosen Connection adapter. """ @abc.abstractmethod def set_reader(self, fd, on_readable): """Call the given callback when the file descriptor is readable. Replace prior reader, if any, for the given file descriptor. :param fd: file descriptor :param callable on_readable: a callback taking no args to be notified when fd becomes readable. """ raise NotImplementedError @abc.abstractmethod def remove_reader(self, fd): """Stop watching the given file descriptor for readability :param fd: file descriptor :returns: True if reader was removed; False if none was registered. 
:rtype: bool """ raise NotImplementedError @abc.abstractmethod def set_writer(self, fd, on_writable): """Call the given callback whenever the file descriptor is writable. Replace prior writer callback, if any, for the given file descriptor. IMPLEMENTATION NOTE: For portability, implementations of `set_writer()` should also watch for indication of error on the socket and treat it as equivalent to the writable indication (e.g., also adding the socket to the `exceptfds` arg of `socket.select()` and calling the `on_writable` callback if `select.select()` indicates that the socket is in error state). Specifically, Windows (unlike POSIX) only indicates error on the socket (but not writable) when connection establishment fails. :param fd: file descriptor :param callable on_writable: a callback taking no args to be notified when fd becomes writable. """ raise NotImplementedError @abc.abstractmethod def remove_writer(self, fd): """Stop watching the given file descriptor for writability :param fd: file descriptor :returns: True if writer was removed; False if none was registered. :rtype: bool """ raise NotImplementedError class AbstractTimerReference(pika.compat.AbstractBase): """Reference to asynchronous operation""" @abc.abstractmethod def cancel(self): """Cancel callback. If already cancelled, has no effect. """ raise NotImplementedError class AbstractIOReference(pika.compat.AbstractBase): """Reference to asynchronous I/O operation""" @abc.abstractmethod def cancel(self): """Cancel pending operation :returns: False if was already done or cancelled; True otherwise :rtype: bool """ raise NotImplementedError class AbstractStreamProtocol(pika.compat.AbstractBase): """Stream protocol interface. It's compatible with a subset of `asyncio.protocols.Protocol` for compatibility with asyncio-based `AbstractIOServices` implementation. """ @abc.abstractmethod def connection_made(self, transport): """Introduces transport to protocol after transport is connected.
:param AbstractStreamTransport transport: :raises Exception: Exception-based exception on error """ raise NotImplementedError @abc.abstractmethod def connection_lost(self, error): """Called upon loss or closing of connection. NOTE: `connection_made()` and `connection_lost()` are each called just once and in that order. All other callbacks are called between them. :param BaseException | None error: An exception (check for `BaseException`) indicates connection failure. None indicates that connection was closed on this side, such as when it's aborted or when `AbstractStreamProtocol.eof_received()` returns a result that doesn't evaluate to True. :raises Exception: Exception-based exception on error """ raise NotImplementedError @abc.abstractmethod def eof_received(self): """Called after the remote peer shuts its write end of the connection. :returns: A falsy value (including None) will cause the transport to close itself, resulting in an eventual `connection_lost()` call from the transport. If a truthy value is returned, it will be the protocol's responsibility to close/abort the transport. :rtype: falsy|truthy :raises Exception: Exception-based exception on error """ raise NotImplementedError @abc.abstractmethod def data_received(self, data): """Called to deliver incoming data to the protocol. :param data: Non-empty data bytes. :raises Exception: Exception-based exception on error """ raise NotImplementedError # pylint: disable=W0511 # TODO Undecided whether we need write flow-control yet, although it seems # like a good idea. # @abc.abstractmethod # def pause_writing(self): # """Called when the transport's write buffer size becomes greater than or # equal to the transport's high-water mark. It won't be called again until # the transport's write buffer gets back to its low-water mark and then # returns to/past the high-water mark again.
# """ # raise NotImplementedError # # @abc.abstractmethod # def resume_writing(self): # """Called when the transport's write buffer size becomes less than or # equal to the transport's low-water mark. # """ # raise NotImplementedError class AbstractStreamTransport(pika.compat.AbstractBase): """Stream transport interface. It's compatible with a subset of `asyncio.transports.Transport` for compatibility with asyncio-based `AbstractIOServices` implementation. """ @abc.abstractmethod def abort(self): """Close connection abruptly without waiting for pending I/O to complete. Will invoke the corresponding protocol's `connection_lost()` method asynchronously (not in context of the abort() call). :raises Exception: Exception-based exception on error """ raise NotImplementedError @abc.abstractmethod def get_protocol(self): """Return the protocol linked to this transport. :rtype: AbstractStreamProtocol :raises Exception: Exception-based exception on error """ raise NotImplementedError @abc.abstractmethod def write(self, data): """Buffer the given data until it can be sent asynchronously. :param bytes data: :raises ValueError: if called with empty data :raises Exception: Exception-based exception on error """ raise NotImplementedError @abc.abstractmethod def get_write_buffer_size(self): """ :returns: Current size of output data buffered by the transport :rtype: int """ raise NotImplementedError # pylint: disable=W0511 # TODO Udecided whether we need write flow-control yet, although it seems # like a good idea. # @abc.abstractmethod # def set_write_buffer_limits(self, high, low): # """Set thresholds for calling the protocol's `pause_writing()` # and `resume_writing()` methods. `low` must be less than or equal to # `high`. # # NOTE The unintuitive order of the args is preserved to match the # corresponding method in `asyncio.WriteTransport`. 
I would expect `low` # to be the first arg, especially since # `asyncio.WriteTransport.get_write_buffer_limits()` returns them in the # opposite order. This seems error-prone. # # See `asyncio.WriteTransport.get_write_buffer_limits()` for more details # about the args. # # :param int high: non-negative high-water mark. # :param int low: non-negative low-water mark. # """ # raise NotImplementedError

# pika-1.2.0/pika/adapters/utils/selector_ioloop_adapter.py

""" Implementation of `nbio_interface.AbstractIOServices` on top of a selector-based I/O loop, such as tornado's and our home-grown select_connection's I/O loops. """ import abc import logging import socket import threading from pika.adapters.utils import nbio_interface, io_services_utils from pika.adapters.utils.io_services_utils import (check_callback_arg, check_fd_arg) LOGGER = logging.getLogger(__name__) class AbstractSelectorIOLoop(object): """Selector-based I/O loop interface expected by `selector_ioloop_adapter.SelectorIOServicesAdapter` NOTE: this interface follows the corresponding methods and attributes of `tornado.ioloop.IOLoop` in order to avoid additional adapter layering when wrapping tornado's IOLoop. """ @property @abc.abstractmethod def READ(self): # pylint: disable=C0103 """The value of the I/O loop's READ flag; READ/WRITE/ERROR may be used with bitwise operators as expected.
Implementation note: the implementations can simply replace these READ/WRITE/ERROR properties with class-level attributes """ @property @abc.abstractmethod def WRITE(self): # pylint: disable=C0103 """The value of the I/O loop's WRITE flag; READ/WRITE/ERROR may be used with bitwise operators as expected """ @property @abc.abstractmethod def ERROR(self): # pylint: disable=C0103 """The value of the I/O loop's ERROR flag; READ/WRITE/ERROR may be used with bitwise operators as expected """ @abc.abstractmethod def close(self): """Release IOLoop's resources. The `close()` method is intended to be called by the application or test code only after `start()` returns. After calling `close()`, no other interaction with the closed instance of `IOLoop` should be performed. """ @abc.abstractmethod def start(self): """Run the I/O loop. It will loop until requested to exit. See `stop()`. """ @abc.abstractmethod def stop(self): """Request exit from the ioloop. The loop is NOT guaranteed to stop before this method returns. To invoke `stop()` safely from a thread other than this IOLoop's thread, call it via `add_callback`; e.g., `ioloop.add_callback(ioloop.stop)` """ @abc.abstractmethod def call_later(self, delay, callback): """Add the callback to the IOLoop timer to be called after delay seconds from the time of call on best-effort basis. Returns a handle to the timeout. :param float delay: The number of seconds to wait to call callback :param callable callback: The callback method :returns: handle to the created timeout that may be passed to `remove_timeout()` :rtype: object """ @abc.abstractmethod def remove_timeout(self, timeout_handle): """Remove a timeout :param timeout_handle: Handle of timeout to remove """ @abc.abstractmethod def add_callback(self, callback): """Requests a call to the given function as soon as possible in the context of this IOLoop's thread. NOTE: This is the only thread-safe method in IOLoop.
All other manipulations of IOLoop must be performed from the IOLoop's thread. For example, a thread may request a call to the `stop` method of an ioloop that is running in a different thread via `ioloop.add_callback(ioloop.stop)` :param callable callback: The callback method """ @abc.abstractmethod def add_handler(self, fd, handler, events): """Start watching the given file descriptor for events :param int fd: The file descriptor :param callable handler: When requested event(s) occur, `handler(fd, events)` will be called. :param int events: The event mask using READ, WRITE, ERROR. """ @abc.abstractmethod def update_handler(self, fd, events): """Changes the events we watch for :param int fd: The file descriptor :param int events: The event mask using READ, WRITE, ERROR """ @abc.abstractmethod def remove_handler(self, fd): """Stop watching the given file descriptor for events :param int fd: The file descriptor """ class SelectorIOServicesAdapter(io_services_utils.SocketConnectionMixin, io_services_utils.StreamingConnectionMixin, nbio_interface.AbstractIOServices, nbio_interface.AbstractFileDescriptorServices): """Implements the :py:class:`.nbio_interface.AbstractIOServices` interface on top of a selector-style native loop having the :py:class:`AbstractSelectorIOLoop` interface, such as :py:class:`pika.adapters.select_connection.IOLoop` and :py:class:`tornado.ioloop.IOLoop`. NOTE: :py:class:`.nbio_interface.AbstractFileDescriptorServices` interface is only required by the mixins. """ def __init__(self, native_loop): """ :param AbstractSelectorIOLoop native_loop: An instance compatible with the `AbstractSelectorIOLoop` interface, but not necessarily derived from it.
""" self._loop = native_loop # Active watchers: maps file descriptors to `_FileDescriptorCallbacks` self._watchers = dict() # Native loop-specific event masks of interest self._readable_mask = self._loop.READ # NOTE: tying ERROR to WRITE is particularly handy for Windows, whose # `select.select()` differs from Posix by reporting # connection-establishment failure only through exceptfds (ERROR event), # while the typical application workflow is to wait for the socket to # become writable when waiting for socket connection to be established. self._writable_mask = self._loop.WRITE | self._loop.ERROR def get_native_ioloop(self): """Implement :py:meth:`.nbio_interface.AbstractIOServices.get_native_ioloop()`. """ return self._loop def close(self): """Implement :py:meth:`.nbio_interface.AbstractIOServices.close()`. """ self._loop.close() def run(self): """Implement :py:meth:`.nbio_interface.AbstractIOServices.run()`. """ self._loop.start() def stop(self): """Implement :py:meth:`.nbio_interface.AbstractIOServices.stop()`. """ self._loop.stop() def add_callback_threadsafe(self, callback): """Implement :py:meth:`.nbio_interface.AbstractIOServices.add_callback_threadsafe()`. """ self._loop.add_callback(callback) def call_later(self, delay, callback): """Implement :py:meth:`.nbio_interface.AbstractIOServices.call_later()`. """ return _TimerHandle(self._loop.call_later(delay, callback), self._loop) def getaddrinfo(self, host, port, on_done, family=0, socktype=0, proto=0, flags=0): """Implement :py:meth:`.nbio_interface.AbstractIOServices.getaddrinfo()`. """ return _SelectorIOLoopIOHandle( _AddressResolver( native_loop=self._loop, host=host, port=port, family=family, socktype=socktype, proto=proto, flags=flags, on_done=on_done).start()) def set_reader(self, fd, on_readable): """Implement :py:meth:`.nbio_interface.AbstractFileDescriptorServices.set_reader()`. 
""" LOGGER.debug('SelectorIOServicesAdapter.set_reader(%s, %r)', fd, on_readable) check_fd_arg(fd) check_callback_arg(on_readable, 'on_readable') try: callbacks = self._watchers[fd] except KeyError: self._loop.add_handler(fd, self._on_reader_writer_fd_events, self._readable_mask) self._watchers[fd] = _FileDescriptorCallbacks(reader=on_readable) LOGGER.debug('set_reader(%s, _) added handler Rd', fd) else: if callbacks.reader is None: assert callbacks.writer is not None self._loop.update_handler( fd, self._readable_mask | self._writable_mask) LOGGER.debug('set_reader(%s, _) updated handler RdWr', fd) else: LOGGER.debug('set_reader(%s, _) replacing reader', fd) callbacks.reader = on_readable def remove_reader(self, fd): """Implement :py:meth:`.nbio_interface.AbstractFileDescriptorServices.remove_reader()`. """ LOGGER.debug('SelectorIOServicesAdapter.remove_reader(%s)', fd) check_fd_arg(fd) try: callbacks = self._watchers[fd] except KeyError: LOGGER.debug('remove_reader(%s) neither was set', fd) return False if callbacks.reader is None: assert callbacks.writer is not None LOGGER.debug('remove_reader(%s) reader wasn\'t set Wr', fd) return False callbacks.reader = None if callbacks.writer is None: del self._watchers[fd] self._loop.remove_handler(fd) LOGGER.debug('remove_reader(%s) removed handler', fd) else: self._loop.update_handler(fd, self._writable_mask) LOGGER.debug('remove_reader(%s) updated handler Wr', fd) return True def set_writer(self, fd, on_writable): """Implement :py:meth:`.nbio_interface.AbstractFileDescriptorServices.set_writer()`. 
""" LOGGER.debug('SelectorIOServicesAdapter.set_writer(%s, %r)', fd, on_writable) check_fd_arg(fd) check_callback_arg(on_writable, 'on_writable') try: callbacks = self._watchers[fd] except KeyError: self._loop.add_handler(fd, self._on_reader_writer_fd_events, self._writable_mask) self._watchers[fd] = _FileDescriptorCallbacks(writer=on_writable) LOGGER.debug('set_writer(%s, _) added handler Wr', fd) else: if callbacks.writer is None: assert callbacks.reader is not None # NOTE: Set the writer func before setting the mask! # Otherwise a race condition can occur where ioloop tries to # call writer when it is still None. callbacks.writer = on_writable self._loop.update_handler( fd, self._readable_mask | self._writable_mask) LOGGER.debug('set_writer(%s, _) updated handler RdWr', fd) else: LOGGER.debug('set_writer(%s, _) replacing writer', fd) callbacks.writer = on_writable def remove_writer(self, fd): """Implement :py:meth:`.nbio_interface.AbstractFileDescriptorServices.remove_writer()`. """ LOGGER.debug('SelectorIOServicesAdapter.remove_writer(%s)', fd) check_fd_arg(fd) try: callbacks = self._watchers[fd] except KeyError: LOGGER.debug('remove_writer(%s) neither was set.', fd) return False if callbacks.writer is None: assert callbacks.reader is not None LOGGER.debug('remove_writer(%s) writer wasn\'t set Rd', fd) return False callbacks.writer = None if callbacks.reader is None: del self._watchers[fd] self._loop.remove_handler(fd) LOGGER.debug('remove_writer(%s) removed handler', fd) else: self._loop.update_handler(fd, self._readable_mask) LOGGER.debug('remove_writer(%s) updated handler Rd', fd) return True def _on_reader_writer_fd_events(self, fd, events): """Handle indicated file descriptor events requested via `set_reader()` and `set_writer()`. :param fd: file descriptor :param events: event mask using native loop's READ/WRITE/ERROR. 
NOTE: depending on the underlying poller mechanism, ERROR may be indicated for certain file descriptor states even though we don't request it. We ignore ERROR here since `set_reader()`/`set_writer()` don't request it. """ callbacks = self._watchers[fd] if events & self._readable_mask and callbacks.reader is None: # NOTE: we check for consistency here ahead of the writer callback # because the writer callback, if any, can change the events being # watched LOGGER.warning( 'READ indicated on fd=%s, but reader callback is None; ' 'events=%s', fd, bin(events)) if events & self._writable_mask: if callbacks.writer is not None: callbacks.writer() else: LOGGER.warning( 'WRITE indicated on fd=%s, but writer callback is None; ' 'events=%s', fd, bin(events)) if events & self._readable_mask: if callbacks.reader is not None: callbacks.reader() else: # Reader callback might have been removed in the scope of writer # callback. pass class _FileDescriptorCallbacks(object): """Holds reader and writer callbacks for a file descriptor""" __slots__ = ('reader', 'writer') def __init__(self, reader=None, writer=None): self.reader = reader self.writer = writer class _TimerHandle(nbio_interface.AbstractTimerReference): """This module's adaptation of `nbio_interface.AbstractTimerReference`. """ def __init__(self, handle, loop): """ :param opaque handle: timer handle from the underlying loop implementation that may be passed to its `remove_timeout()` method :param AbstractSelectorIOLoop loop: the I/O loop instance that created the timeout.
""" self._handle = handle self._loop = loop def cancel(self): if self._loop is not None: self._loop.remove_timeout(self._handle) self._handle = None self._loop = None class _SelectorIOLoopIOHandle(nbio_interface.AbstractIOReference): """This module's adaptation of `nbio_interface.AbstractIOReference` """ def __init__(self, subject): """ :param subject: subject of the reference containing a `cancel()` method """ self._cancel = subject.cancel def cancel(self): """Cancel pending operation :returns: False if was already done or cancelled; True otherwise :rtype: bool """ return self._cancel() class _AddressResolver(object): """Performs getaddrinfo asynchronously using a thread, then reports result via callback from the given I/O loop. NOTE: at this stage, we're using a thread per request, which may prove inefficient and even prohibitive if the app performs many of these operations concurrently. """ NOT_STARTED = 0 ACTIVE = 1 CANCELED = 2 COMPLETED = 3 def __init__(self, native_loop, host, port, family, socktype, proto, flags, on_done): """ :param AbstractSelectorIOLoop native_loop: :param host: `see socket.getaddrinfo()` :param port: `see socket.getaddrinfo()` :param family: `see socket.getaddrinfo()` :param socktype: `see socket.getaddrinfo()` :param proto: `see socket.getaddrinfo()` :param flags: `see socket.getaddrinfo()` :param on_done: on_done(records|BaseException) callback for reporting result from the given I/O loop. The single arg will be either an exception object (check for `BaseException`) in case of failure or the result returned by `socket.getaddrinfo()`. 
""" check_callback_arg(on_done, 'on_done') self._state = self.NOT_STARTED self._result = None self._loop = native_loop self._host = host self._port = port self._family = family self._socktype = socktype self._proto = proto self._flags = flags self._on_done = on_done self._mutex = threading.Lock() self._threading_timer = None def _cleanup(self): """Release resources """ self._loop = None self._threading_timer = None self._on_done = None def start(self): """Start asynchronous DNS lookup. :rtype: nbio_interface.AbstractIOReference """ assert self._state == self.NOT_STARTED, self._state self._state = self.ACTIVE self._threading_timer = threading.Timer(0, self._resolve) self._threading_timer.start() return _SelectorIOLoopIOHandle(self) def cancel(self): """Cancel the pending resolver :returns: False if was already done or cancelled; True otherwise :rtype: bool """ # Try to cancel, but no guarantees with self._mutex: if self._state == self.ACTIVE: LOGGER.debug('Canceling resolver for %s:%s', self._host, self._port) self._state = self.CANCELED # Attempt to cancel, but not guaranteed self._threading_timer.cancel() self._cleanup() return True else: LOGGER.debug( 'Ignoring _AddressResolver cancel request when not ACTIVE; ' '(%s:%s); state=%s', self._host, self._port, self._state) return False def _resolve(self): """Call `socket.getaddrinfo()` and return result via user's callback function on the given I/O loop """ try: # NOTE: on python 2.x, can't pass keyword args to getaddrinfo() result = socket.getaddrinfo(self._host, self._port, self._family, self._socktype, self._proto, self._flags) except Exception as exc: # pylint: disable=W0703 LOGGER.error('Address resolution failed: %r', exc) result = exc self._result = result # Schedule result to be returned to user via user's event loop with self._mutex: if self._state == self.ACTIVE: self._loop.add_callback(self._dispatch_result) else: LOGGER.debug( 'Asynchronous getaddrinfo cancellation detected; ' 'in thread; host=%r', 
self._host) def _dispatch_result(self): """This is called from the user's I/O loop to pass the result to the user via the user's on_done callback """ if self._state == self.ACTIVE: self._state = self.COMPLETED try: LOGGER.debug( 'Invoking asynchronous getaddrinfo() completion callback; ' 'host=%r', self._host) self._on_done(self._result) finally: self._cleanup() else: LOGGER.debug( 'Asynchronous getaddrinfo cancellation detected; ' 'in I/O loop context; host=%r', self._host)

# pika-1.2.0/pika/amqp_object.py

"""Base classes that are extended by low level AMQP frames and higher level AMQP classes and methods. """ class AMQPObject(object): """Base object that is extended by AMQP low level frames and AMQP classes and methods. """ NAME = 'AMQPObject' INDEX = None def __repr__(self): items = list() for key, value in self.__dict__.items(): if getattr(self.__class__, key, None) != value: items.append('%s=%s' % (key, value)) if not items: return "<%s>" % self.NAME return "<%s(%s)>" % (self.NAME, sorted(items)) def __eq__(self, other): if other is not None: return self.__dict__ == other.__dict__ else: return False class Class(AMQPObject): """Is extended by AMQP classes""" NAME = 'Unextended Class' class Method(AMQPObject): """Is extended by AMQP methods""" NAME = 'Unextended Method' synchronous = False def _set_content(self, properties, body): """If the method is a content frame, set the properties and body to be carried as attributes of the class. :param pika.frame.Properties properties: AMQP Basic Properties :param bytes body: The message body """ self._properties = properties # pylint: disable=W0201 self._body = body # pylint: disable=W0201 def get_properties(self): """Return the properties if they are set. :rtype: pika.frame.Properties """ return self._properties def get_body(self): """Return the message body if it is set.
:rtype: str|unicode """ return self._body class Properties(AMQPObject): """Class to encompass message properties (AMQP Basic.Properties)""" NAME = 'Unextended Properties'

# pika-1.2.0/pika/callback.py

"""Callback management class, common area for keeping track of all callbacks in the Pika stack. """ import functools import logging from pika import frame from pika import amqp_object from pika.compat import xrange, canonical_str LOGGER = logging.getLogger(__name__) def name_or_value(value): """Will take Frame objects, classes, etc and attempt to return a valid string identifier for them. :param pika.amqp_object.AMQPObject|pika.frame.Frame|int|str value: The value to sanitize :rtype: str """ # Is it subclass of AMQPObject try: if issubclass(value, amqp_object.AMQPObject): return value.NAME except TypeError: pass # Is it a Pika frame object? if isinstance(value, frame.Method): return value.method.NAME # Is it a Pika frame object (go after Method since Method extends this) if isinstance(value, amqp_object.AMQPObject): return value.NAME # Cast the value to a str (python 2 and python 3); encoding as UTF-8 on Python 2 return canonical_str(value) def sanitize_prefix(function): """Automatically call name_or_value on the prefix passed in.""" @functools.wraps(function) def wrapper(*args, **kwargs): args = list(args) offset = 1 if 'prefix' in kwargs: kwargs['prefix'] = name_or_value(kwargs['prefix']) elif len(args) - 1 >= offset: args[offset] = name_or_value(args[offset]) offset += 1 if 'key' in kwargs: kwargs['key'] = name_or_value(kwargs['key']) elif len(args) - 1 >= offset: args[offset] = name_or_value(args[offset]) return function(*tuple(args), **kwargs) return wrapper def check_for_prefix_and_key(function): """Automatically return false if the key or prefix is not in the callbacks for the instance.
""" @functools.wraps(function) def wrapper(*args, **kwargs): offset = 1 # Sanitize the prefix if 'prefix' in kwargs: prefix = name_or_value(kwargs['prefix']) else: prefix = name_or_value(args[offset]) offset += 1 # Make sure to sanitize the key as well if 'key' in kwargs: key = name_or_value(kwargs['key']) else: key = name_or_value(args[offset]) # Make sure prefix and key are in the stack if prefix not in args[0]._stack or key not in args[0]._stack[prefix]: # pylint: disable=W0212 return False # Execute the method return function(*args, **kwargs) return wrapper class CallbackManager(object): """CallbackManager is a global callback system designed to be a single place where Pika can manage callbacks and process them. It should be referenced by the CallbackManager.instance() method instead of constructing new instances of it. """ CALLS = 'calls' ARGUMENTS = 'arguments' DUPLICATE_WARNING = 'Duplicate callback found for "%s:%s"' CALLBACK = 'callback' ONE_SHOT = 'one_shot' ONLY_CALLER = 'only' def __init__(self): """Create an instance of the CallbackManager""" self._stack = dict() @sanitize_prefix def add(self, prefix, key, callback, one_shot=True, only_caller=None, arguments=None): """Add a callback to the stack for the specified key. If the call is specified as one_shot, it will be removed after being fired The prefix is usually the channel number but the class is generic and prefix and key may be any value. If you pass in only_caller CallbackManager will restrict processing of the callback to only the calling function/object that you specify. :param str|int prefix: Categorize the callback :param str|dict key: The key for the callback :param callable callback: The callback to call :param bool one_shot: Remove this callback after it is called :param object only_caller: Only allow one_caller value to call the event that fires the callback. 
:param dict arguments: Arguments to validate when processing :rtype: tuple(prefix, key) """ # Prep the stack if prefix not in self._stack: self._stack[prefix] = dict() if key not in self._stack[prefix]: self._stack[prefix][key] = list() # Check for a duplicate for callback_dict in self._stack[prefix][key]: if (callback_dict[self.CALLBACK] == callback and callback_dict[self.ARGUMENTS] == arguments and callback_dict[self.ONLY_CALLER] == only_caller): if callback_dict[self.ONE_SHOT] is True: callback_dict[self.CALLS] += 1 LOGGER.debug('Incremented callback reference counter: %r', callback_dict) else: LOGGER.warning(self.DUPLICATE_WARNING, prefix, key) return prefix, key # Create the callback dictionary callback_dict = self._callback_dict(callback, one_shot, only_caller, arguments) self._stack[prefix][key].append(callback_dict) LOGGER.debug('Added: %r', callback_dict) return prefix, key def clear(self): """Clear all the callbacks if there are any defined.""" self._stack = dict() LOGGER.debug('Callbacks cleared') @sanitize_prefix def cleanup(self, prefix): """Remove all callbacks from the stack by a prefix. Returns True if keys were there to be removed :param str or int prefix: The prefix for keeping track of callbacks with :rtype: bool """ LOGGER.debug('Clearing out %r from the stack', prefix) if prefix not in self._stack or not self._stack[prefix]: return False del self._stack[prefix] return True @sanitize_prefix def pending(self, prefix, key): """Return count of callbacks for a given prefix or key or None :param str|int prefix: Categorize the callback :param object|str|dict key: The key for the callback :rtype: None or int """ if not prefix in self._stack or not key in self._stack[prefix]: return None return len(self._stack[prefix][key]) @sanitize_prefix @check_for_prefix_and_key def process(self, prefix, key, caller, *args, **keywords): """Run through and process all the callbacks for the specified keys. 
Caller should be specified at all times so that callbacks which require a specific function to call CallbackManager.process will not be processed. :param str|int prefix: Categorize the callback :param object|str|int key: The key for the callback :param object caller: Who is firing the event :param list args: Any optional arguments :param dict keywords: Optional keyword arguments :rtype: bool """ LOGGER.debug('Processing %s:%s', prefix, key) if prefix not in self._stack or key not in self._stack[prefix]: return False callbacks = list() # Check each callback, append it to the list if it should be called for callback_dict in list(self._stack[prefix][key]): if self._should_process_callback(callback_dict, caller, list(args)): callbacks.append(callback_dict[self.CALLBACK]) if callback_dict[self.ONE_SHOT]: self._use_one_shot_callback(prefix, key, callback_dict) # Call each callback for callback in callbacks: LOGGER.debug('Calling %s for "%s:%s"', callback, prefix, key) try: callback(*args, **keywords) except: LOGGER.exception('Calling %s for "%s:%s" failed', callback, prefix, key) raise return True @sanitize_prefix @check_for_prefix_and_key def remove(self, prefix, key, callback_value=None, arguments=None): """Remove a callback from the stack by prefix, key and optionally the callback itself. If you only pass in prefix and key, all callbacks for that prefix and key will be removed. 
:param str or int prefix: The prefix for keeping track of callbacks with :param str key: The callback key :param callable callback_value: The method defined to call on callback :param dict arguments: Optional arguments to check :rtype: bool """ if callback_value: offsets_to_remove = list() for offset in xrange(len(self._stack[prefix][key]), 0, -1): callback_dict = self._stack[prefix][key][offset - 1] if (callback_dict[self.CALLBACK] == callback_value and self._arguments_match(callback_dict, [arguments])): offsets_to_remove.append(offset - 1) for offset in offsets_to_remove: try: LOGGER.debug('Removing callback #%i: %r', offset, self._stack[prefix][key][offset]) del self._stack[prefix][key][offset] except KeyError: pass self._cleanup_callback_dict(prefix, key) return True @sanitize_prefix @check_for_prefix_and_key def remove_all(self, prefix, key): """Remove all callbacks for the specified prefix and key. :param str prefix: The prefix for keeping track of callbacks with :param str key: The callback key """ del self._stack[prefix][key] self._cleanup_callback_dict(prefix, key) def _arguments_match(self, callback_dict, args): """Validate if the arguments passed in match the expected arguments in the callback_dict. We expect this to be a frame passed in to *args for process or passed in as a list from remove. :param dict callback_dict: The callback dictionary to evaluate against :param list args: The arguments passed in as a list """ if callback_dict[self.ARGUMENTS] is None: return True if not args: return False if isinstance(args[0], dict): return self._dict_arguments_match(args[0], callback_dict[self.ARGUMENTS]) return self._obj_arguments_match( args[0].method if hasattr(args[0], 'method') else args[0], callback_dict[self.ARGUMENTS]) def _callback_dict(self, callback, one_shot, only_caller, arguments): """Return the callback dictionary. 
:param callable callback: The callback to call :param bool one_shot: Remove this callback after it is called :param object only_caller: Only allow one_caller value to call the event that fires the callback. :rtype: dict """ value = { self.CALLBACK: callback, self.ONE_SHOT: one_shot, self.ONLY_CALLER: only_caller, self.ARGUMENTS: arguments } if one_shot: value[self.CALLS] = 1 return value def _cleanup_callback_dict(self, prefix, key=None): """Remove empty dict nodes in the callback stack. :param str or int prefix: The prefix for keeping track of callbacks with :param str key: The callback key """ if key and key in self._stack[prefix] and not self._stack[prefix][key]: del self._stack[prefix][key] if prefix in self._stack and not self._stack[prefix]: del self._stack[prefix] @staticmethod def _dict_arguments_match(value, expectation): """Checks a dict to see if it has attributes that meet the expectation. :param dict value: The dict to evaluate :param dict expectation: The values to check against :rtype: bool """ LOGGER.debug('Comparing %r to %r', value, expectation) for key in expectation: if value.get(key) != expectation[key]: LOGGER.debug('Values in dict do not match for %s', key) return False return True @staticmethod def _obj_arguments_match(value, expectation): """Checks an object to see if it has attributes that meet the expectation. :param object value: The object to evaluate :param dict expectation: The values to check against :rtype: bool """ for key in expectation: if not hasattr(value, key): LOGGER.debug('%r does not have required attribute: %s', type(value), key) return False if getattr(value, key) != expectation[key]: LOGGER.debug('Values in %s do not match for %s', type(value), key) return False return True def _should_process_callback(self, callback_dict, caller, args): """Returns True if the callback should be processed.
        :param dict callback_dict: The callback configuration
        :param object caller: Who is firing the event
        :param list args: Any optional arguments
        :rtype: bool

        """
        if not self._arguments_match(callback_dict, args):
            LOGGER.debug('Arguments do not match for %r, %r', callback_dict,
                         args)
            return False
        return (callback_dict[self.ONLY_CALLER] is None or
                (callback_dict[self.ONLY_CALLER] and
                 callback_dict[self.ONLY_CALLER] == caller))

    def _use_one_shot_callback(self, prefix, key, callback_dict):
        """Process the one-shot callback, decrementing the use counter and
        removing it from the stack if it's now been fully used.

        :param str or int prefix: The prefix for keeping track of callbacks
            with
        :param str key: The callback key
        :param dict callback_dict: The callback dict to process

        """
        LOGGER.debug('Processing use of oneshot callback')
        callback_dict[self.CALLS] -= 1
        LOGGER.debug('%i registered uses left', callback_dict[self.CALLS])
        if callback_dict[self.CALLS] <= 0:
            self.remove(prefix, key, callback_dict[self.CALLBACK],
                        callback_dict[self.ARGUMENTS])

pika-1.2.0/pika/channel.py
"""The Channel class provides a wrapper for interacting with RabbitMQ
implementing the methods and behaviors for an AMQP Channel.

"""
# disable too-many-lines
# pylint: disable=C0302

import collections
import logging
import uuid
from enum import Enum

import pika.frame as frame
import pika.exceptions as exceptions
import pika.spec as spec
import pika.validators as validators
from pika.compat import unicode_type, dictkeys, is_integer
from pika.exchange_type import ExchangeType

LOGGER = logging.getLogger(__name__)

MAX_CHANNELS = 65535  # per AMQP 0.9.1 spec.


class Channel(object):
    """A Channel is the primary communication method for interacting with
    RabbitMQ.
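The one-shot bookkeeping in `_use_one_shot_callback` above (each dispatch decrements a remaining-use counter; the callback is removed once it reaches zero) reduces to a small pure function. The key name `'calls'` mirrors the class's `CALLS` constant; this is an illustrative sketch, not the class's actual code path:

```python
# Sketch of one-shot callback bookkeeping: each use decrements a
# remaining-calls counter, and the callback should be removed from the
# stack once the counter reaches zero.
def use_one_shot(callback_dict):
    callback_dict['calls'] -= 1
    return callback_dict['calls'] <= 0  # True means: remove the callback

registration = {'calls': 2}
assert use_one_shot(registration) is False  # one registered use left
assert use_one_shot(registration) is True   # fully used; remove it
```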
It is recommended that you do not directly invoke the creation of a channel object in your application code but rather construct a channel by calling the active connection's channel() method. """ # Disable pylint messages concerning "method could be a function" # pylint: disable=R0201 CLOSED = 0 OPENING = 1 OPEN = 2 CLOSING = 3 # client-initiated close in progress _STATE_NAMES = { CLOSED: 'CLOSED', OPENING: 'OPENING', OPEN: 'OPEN', CLOSING: 'CLOSING' } _ON_CHANNEL_CLEANUP_CB_KEY = '_on_channel_cleanup' def __init__(self, connection, channel_number, on_open_callback): """Create a new instance of the Channel :param pika.connection.Connection connection: The connection :param int channel_number: The channel number for this instance :param callable on_open_callback: The callback to call on channel open. The callback will be invoked with the `Channel` instance as its only argument. """ if not isinstance(channel_number, int): raise exceptions.InvalidChannelNumber(channel_number) validators.rpc_completion_callback(on_open_callback) self.channel_number = channel_number self.callbacks = connection.callbacks self.connection = connection # Initially, flow is assumed to be active self.flow_active = True self._content_assembler = ContentFrameAssembler() self._blocked = collections.deque(list()) self._blocking = None self._has_on_flow_callback = False self._cancelled = set() self._consumers = dict() self._consumers_with_noack = set() self._on_flowok_callback = None self._on_getok_callback = None self._on_openok_callback = on_open_callback self._state = self.CLOSED # We save the closing reason exception to be passed to on-channel-close # callback at closing of the channel. Exception representing the closing # reason; ChannelClosedByClient or ChannelClosedByBroker on controlled # close; otherwise another exception describing the reason for failure # (most likely connection failure). 
        self._closing_reason = None  # type: None | Exception

        # opaque cookie value set by wrapper layer (e.g., BlockingConnection)
        # via _set_cookie
        self._cookie = None

    def __int__(self):
        """Return the channel object as its channel number

        :rtype: int

        """
        return self.channel_number

    def __repr__(self):
        return '<%s number=%s %s conn=%r>' % (self.__class__.__name__,
                                              self.channel_number,
                                              self._STATE_NAMES[self._state],
                                              self.connection)

    def add_callback(self, callback, replies, one_shot=True):
        """Pass in a callback handler and a list of replies from the RabbitMQ
        broker which you'd like the callback notified of. Callbacks should
        allow for the frame parameter to be passed in.

        :param callable callback: The callback to call
        :param list replies: The replies to get a callback for
        :param bool one_shot: Only handle the first type callback

        """
        for reply in replies:
            self.callbacks.add(self.channel_number, reply, callback, one_shot)

    def add_on_cancel_callback(self, callback):
        """Pass a callback function that will be called when the basic_cancel
        is sent by the server. The callback function should receive a frame
        parameter.

        :param callable callback: The callback to call on Basic.Cancel from
            broker

        """
        self.callbacks.add(self.channel_number, spec.Basic.Cancel, callback,
                           False)

    def add_on_close_callback(self, callback):
        """Pass a callback function that will be called when the channel is
        closed. The callback function will receive the channel and an
        exception describing why the channel was closed.

        If the channel is closed by broker via Channel.Close, the callback will
        receive `ChannelClosedByBroker` as the reason.

        If graceful user-initiated channel closing completes successfully
        (either directly or indirectly by closing a connection containing the
        channel) and closing concludes gracefully without Channel.Close from
        the broker and without loss of connection, the callback will receive
        `ChannelClosedByClient` exception as reason.
If channel was closed due to loss of connection, the callback will receive another exception type describing the failure. :param callable callback: The callback, having the signature: callback(Channel, Exception reason) """ self.callbacks.add(self.channel_number, '_on_channel_close', callback, False, self) def add_on_flow_callback(self, callback): """Pass a callback function that will be called when Channel.Flow is called by the remote server. Note that newer versions of RabbitMQ will not issue this but instead use TCP backpressure :param callable callback: The callback function """ self._has_on_flow_callback = True self.callbacks.add(self.channel_number, spec.Channel.Flow, callback, False) def add_on_return_callback(self, callback): """Pass a callback function that will be called when basic_publish is sent a message that has been rejected and returned by the server. :param callable callback: The function to call, having the signature callback(channel, method, properties, body) where - channel: pika.channel.Channel - method: pika.spec.Basic.Return - properties: pika.spec.BasicProperties - body: bytes """ self.callbacks.add(self.channel_number, '_on_return', callback, False) def basic_ack(self, delivery_tag=0, multiple=False): """Acknowledge one or more messages. When sent by the client, this method acknowledges one or more messages delivered via the Deliver or Get-Ok methods. When sent by server, this method acknowledges one or more messages published with the Publish method on a channel in confirm mode. The acknowledgement can be for a single message or a set of messages up to and including a specific message. :param integer delivery_tag: int/long The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. 
If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. """ self._raise_if_not_open() return self._send_method(spec.Basic.Ack(delivery_tag, multiple)) def basic_cancel(self, consumer_tag='', callback=None): """This method cancels a consumer. This does not affect already delivered messages, but it does mean the server will not send any more messages for that consumer. The client may receive an arbitrary number of messages in between sending the cancel method and receiving the cancel-ok reply. It may also be sent from the server to the client in the event of the consumer being unexpectedly cancelled (i.e. cancelled for any reason other than the server receiving the corresponding basic.cancel from the client). This allows clients to be notified of the loss of consumers due to events such as queue deletion. :param str consumer_tag: Identifier for the consumer :param callable callback: callback(pika.frame.Method) for method Basic.CancelOk. If None, do not expect a Basic.CancelOk response, otherwise, callback must be callable :raises ValueError: """ validators.require_string(consumer_tag, 'consumer_tag') self._raise_if_not_open() nowait = validators.rpc_completion_callback(callback) if consumer_tag in self._cancelled: # We check for cancelled first, because basic_cancel removes # consumers closed with nowait from self._consumers LOGGER.warning('basic_cancel - consumer is already cancelling: %s', consumer_tag) return if consumer_tag not in self._consumers: # Could be cancelled by user or broker earlier LOGGER.warning('basic_cancel - consumer not found: %s', consumer_tag) return LOGGER.debug('Cancelling consumer: %s (nowait=%s)', consumer_tag, nowait) if nowait: # This is our last opportunity while the channel is open to remove # this consumer callback and help gc; unfortunately, this consumer's # self._cancelled and self._consumers_with_noack (if any) entries # will persist until the channel is closed. 
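The "up to and including" semantics of the `multiple` flag described above can be sketched as a pure function over a publisher's outstanding delivery tags (the function name is illustrative, not part of the pika API):

```python
# Sketch of Basic.Ack "multiple" semantics: with multiple=True a
# delivery tag acknowledges every outstanding tag up to and including
# itself; with multiple=False it acknowledges only that single tag.
def acked_tags(outstanding, delivery_tag, multiple):
    if multiple:
        return [tag for tag in outstanding if tag <= delivery_tag]
    return [tag for tag in outstanding if tag == delivery_tag]

assert acked_tags([1, 2, 3, 4], 3, multiple=True) == [1, 2, 3]
assert acked_tags([1, 2, 3, 4], 3, multiple=False) == [3]
```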
del self._consumers[consumer_tag] if callback is not None: self.callbacks.add(self.channel_number, spec.Basic.CancelOk, callback) self._cancelled.add(consumer_tag) self._rpc(spec.Basic.Cancel(consumer_tag=consumer_tag, nowait=nowait), self._on_cancelok if not nowait else None, [(spec.Basic.CancelOk, { 'consumer_tag': consumer_tag })] if not nowait else []) def basic_consume(self, queue, on_message_callback, auto_ack=False, exclusive=False, consumer_tag=None, arguments=None, callback=None): """Sends the AMQP 0-9-1 command Basic.Consume to the broker and binds messages for the consumer_tag to the consumer callback. If you do not pass in a consumer_tag, one will be automatically generated for you. Returns the consumer tag. For more information on basic_consume, see: Tutorial 2 at http://www.rabbitmq.com/getstarted.html http://www.rabbitmq.com/confirms.html http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.consume :param str queue: The queue to consume from. Use the empty string to specify the most recent server-named queue for this channel :param callable on_message_callback: The function to call when consuming with the signature on_message_callback(channel, method, properties, body), where - channel: pika.channel.Channel - method: pika.spec.Basic.Deliver - properties: pika.spec.BasicProperties - body: bytes :param bool auto_ack: if set to True, automatic acknowledgement mode will be used (see http://www.rabbitmq.com/confirms.html). This corresponds with the 'no_ack' parameter in the basic.consume AMQP 0.9.1 method :param bool exclusive: Don't allow other consumers on the queue :param str consumer_tag: Specify your own consumer tag :param dict arguments: Custom key/value pair arguments for the consumer :param callable callback: callback(pika.frame.Method) for method Basic.ConsumeOk. :returns: Consumer tag which may be used to cancel the consumer. 
:rtype: str :raises ValueError: """ validators.require_string(queue, 'queue') validators.require_callback(on_message_callback) self._raise_if_not_open() validators.rpc_completion_callback(callback) # If a consumer tag was not passed, create one if not consumer_tag: consumer_tag = self._generate_consumer_tag() if consumer_tag in self._consumers or consumer_tag in self._cancelled: raise exceptions.DuplicateConsumerTag(consumer_tag) if auto_ack: self._consumers_with_noack.add(consumer_tag) self._consumers[consumer_tag] = on_message_callback rpc_callback = self._on_eventok if callback is None else callback self._rpc( spec.Basic.Consume(queue=queue, consumer_tag=consumer_tag, no_ack=auto_ack, exclusive=exclusive, arguments=arguments or dict()), rpc_callback, [(spec.Basic.ConsumeOk, { 'consumer_tag': consumer_tag })]) return consumer_tag def _generate_consumer_tag(self): """Generate a consumer tag NOTE: this protected method may be called by derived classes :returns: consumer tag :rtype: str """ return 'ctag%i.%s' % (self.channel_number, uuid.uuid4().hex) def basic_get(self, queue, callback, auto_ack=False): """Get a single message from the AMQP broker. If you want to be notified of Basic.GetEmpty, use the Channel.add_callback method adding your Basic.GetEmpty callback which should expect only one parameter, frame. Due to implementation details, this cannot be called a second time until the callback is executed. For more information on basic_get and its parameters, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.get :param str queue: The queue from which to get a message. 
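The consumer-tag scheme used by `_generate_consumer_tag` above (channel number plus a random UUID hex) can be reproduced standalone:

```python
import uuid

# Standalone version of the 'ctag<channel>.<uuid hex>' scheme used by
# _generate_consumer_tag when the caller does not supply its own tag.
def generate_consumer_tag(channel_number):
    return 'ctag%i.%s' % (channel_number, uuid.uuid4().hex)

tag = generate_consumer_tag(1)
assert tag.startswith('ctag1.')
assert len(tag) == len('ctag1.') + 32  # uuid4().hex is 32 hex characters
```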
Use the empty string to specify the most recent server-named queue for this channel :param callable callback: The callback to call with a message that has the signature callback(channel, method, properties, body), where: - channel: pika.channel.Channel - method: pika.spec.Basic.GetOk - properties: pika.spec.BasicProperties - body: bytes :param bool auto_ack: Tell the broker to not expect a reply :raises ValueError: """ validators.require_string(queue, 'queue') validators.require_callback(callback) if self._on_getok_callback is not None: raise exceptions.DuplicateGetOkCallback() self._on_getok_callback = callback # pylint: disable=W0511 # TODO Strangely, not using _rpc for the synchronous Basic.Get. Would # need to extend _rpc to handle Basic.GetOk method, header, and body # frames (or similar) self._send_method(spec.Basic.Get(queue=queue, no_ack=auto_ack)) def basic_nack(self, delivery_tag=0, multiple=False, requeue=True): """This method allows a client to reject one or more incoming messages. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. :param integer delivery-tag: int/long The server-assigned delivery tag :param bool multiple: If set to True, the delivery tag is treated as "up to and including", so that multiple messages can be acknowledged with a single method. If set to False, the delivery tag refers to a single message. If the multiple field is 1, and the delivery tag is zero, this indicates acknowledgement of all outstanding messages. :param bool requeue: If requeue is true, the server will attempt to requeue the message. If requeue is false or the requeue attempt fails the messages are discarded or dead-lettered. """ self._raise_if_not_open() return self._send_method( spec.Basic.Nack(delivery_tag, multiple, requeue)) def basic_publish(self, exchange, routing_key, body, properties=None, mandatory=False): """Publish to the channel with the given exchange, routing key and body. 
For more information on basic_publish and what the parameters do, see: http://www.rabbitmq.com/amqp-0-9-1-reference.html#basic.publish :param str exchange: The exchange to publish to :param str routing_key: The routing key to bind on :param bytes body: The message body :param pika.spec.BasicProperties properties: Basic.properties :param bool mandatory: The mandatory flag """ self._raise_if_not_open() if isinstance(body, unicode_type): body = body.encode('utf-8') properties = properties or spec.BasicProperties() self._send_method( spec.Basic.Publish(exchange=exchange, routing_key=routing_key, mandatory=mandatory), (properties, body)) def basic_qos(self, prefetch_size=0, prefetch_count=0, global_qos=False, callback=None): """Specify quality of service. This method requests a specific quality of service. The QoS can be specified for the current channel or for all channels on the connection. The client can request that messages be sent in advance so that when the client finishes processing a message, the following message is already held locally, rather than needing to be sent down the channel. Prefetching gives a performance improvement. :param int prefetch_size: This field specifies the prefetch window size. The server will send a message in advance if it is equal to or smaller in size than the available prefetch size (and also falls into other prefetch limits). May be set to zero, meaning "no specific limit", although other prefetch limits may still apply. The prefetch-size is ignored by consumers who have enabled the no-ack option. :param int prefetch_count: Specifies a prefetch window in terms of whole messages. This field may be used in combination with the prefetch-size field; a message will only be sent in advance if both prefetch windows (and those at the channel and connection level) allow it. The prefetch-count is ignored by consumers who have enabled the no-ack option. :param bool global_qos: Should the QoS apply to all channels on the connection. 
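`basic_publish` above normalizes text bodies to UTF-8 bytes before framing; that normalization can be sketched on its own (the helper name is illustrative, and this Python 3 sketch uses `str` where the source uses the compat alias `unicode_type`):

```python
# Sketch of basic_publish's body normalization: text is encoded to
# UTF-8 bytes; bytes pass through unchanged.
def prepare_body(body):
    if isinstance(body, str):
        return body.encode('utf-8')
    return body

assert prepare_body('hello') == b'hello'
assert prepare_body(b'hello') == b'hello'
```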
:param callable callback: The callback to call for Basic.QosOk response :raises ValueError: """ self._raise_if_not_open() validators.rpc_completion_callback(callback) validators.zero_or_greater('prefetch_size', prefetch_size) validators.zero_or_greater('prefetch_count', prefetch_count) return self._rpc( spec.Basic.Qos(prefetch_size, prefetch_count, global_qos), callback, [spec.Basic.QosOk]) def basic_reject(self, delivery_tag=0, requeue=True): """Reject an incoming message. This method allows a client to reject a message. It can be used to interrupt and cancel large incoming messages, or return untreatable messages to their original queue. :param integer delivery-tag: int/long The server-assigned delivery tag :param bool requeue: If requeue is true, the server will attempt to requeue the message. If requeue is false or the requeue attempt fails the messages are discarded or dead-lettered. :raises: TypeError """ self._raise_if_not_open() if not is_integer(delivery_tag): raise TypeError('delivery_tag must be an integer') return self._send_method(spec.Basic.Reject(delivery_tag, requeue)) def basic_recover(self, requeue=False, callback=None): """This method asks the server to redeliver all unacknowledged messages on a specified channel. Zero or more messages may be redelivered. This method replaces the asynchronous Recover. :param bool requeue: If False, the message will be redelivered to the original recipient. If True, the server will attempt to requeue the message, potentially then delivering it to an alternative subscriber. 
        :param callable callback: callback(pika.frame.Method) for method
            Basic.RecoverOk
        :raises ValueError:

        """
        self._raise_if_not_open()
        validators.rpc_completion_callback(callback)
        return self._rpc(spec.Basic.Recover(requeue), callback,
                         [spec.Basic.RecoverOk])

    def close(self, reply_code=0, reply_text="Normal shutdown"):
        """Invoke a graceful shutdown of the channel with the AMQP Broker.

        If channel is OPENING, transition to CLOSING and suppress the incoming
        Channel.OpenOk, if any.

        :param int reply_code: The reason code to send to broker
        :param str reply_text: The reason text to send to broker

        :raises ChannelWrongStateError: if channel is closed or closing

        """
        if self.is_closed or self.is_closing:
            # Whoever is calling `close` might expect the on-channel-close-cb
            # to be called, which won't happen when it's already closed.
            self._raise_if_not_open()

        # If channel is OPENING, we will transition it to CLOSING state,
        # causing the _on_openok method to suppress the OPEN state transition
        # and the on-channel-open-callback
        LOGGER.info('Closing channel (%s): %r on %s', reply_code, reply_text,
                    self)

        # Save the reason info so that we may use it in the
        # '_on_channel_close' callback processing
        self._closing_reason = exceptions.ChannelClosedByClient(
            reply_code, reply_text)

        for consumer_tag in dictkeys(self._consumers):
            if consumer_tag not in self._cancelled:
                self.basic_cancel(consumer_tag=consumer_tag)

        # Change state after cancelling consumers to avoid
        # ChannelWrongStateError exception from basic_cancel
        self._set_state(self.CLOSING)

        self._rpc(spec.Channel.Close(reply_code, reply_text, 0, 0),
                  self._on_closeok, [spec.Channel.CloseOk])

    def confirm_delivery(self, ack_nack_callback, callback=None):
        """Turn on Confirm mode in the channel. Pass in a callback to be
        notified by the Broker when a message has been confirmed as received
        or rejected (Basic.Ack, Basic.Nack) from the broker to the publisher.
        For more information see:
            https://www.rabbitmq.com/confirms.html

        :param callable ack_nack_callback: Required callback for delivery
            confirmations that has the following signature:
            callback(pika.frame.Method), where method_frame contains
            either method `spec.Basic.Ack` or `spec.Basic.Nack`.
        :param callable callback: callback(pika.frame.Method) for method
            Confirm.SelectOk
        :raises ValueError:

        """
        if not callable(ack_nack_callback):
            # confirm_delivery requires a callback; it's meaningless without a
            # user callback to receive Basic.Ack/Basic.Nack notifications
            raise ValueError('confirm_delivery requires a callback '
                             'to receive Basic.Ack/Basic.Nack notifications')

        self._raise_if_not_open()
        nowait = validators.rpc_completion_callback(callback)

        if not (self.connection.publisher_confirms and
                self.connection.basic_nack):
            raise exceptions.MethodNotImplemented(
                'Confirm.Select not Supported by Server')

        # Add the ack and nack callback
        self.callbacks.add(self.channel_number, spec.Basic.Ack,
                           ack_nack_callback, False)
        self.callbacks.add(self.channel_number, spec.Basic.Nack,
                           ack_nack_callback, False)

        self._rpc(spec.Confirm.Select(nowait), callback,
                  [spec.Confirm.SelectOk] if not nowait else [])

    @property
    def consumer_tags(self):
        """Property method that returns a list of currently active consumers

        :rtype: list

        """
        return dictkeys(self._consumers)

    def exchange_bind(self,
                      destination,
                      source,
                      routing_key='',
                      arguments=None,
                      callback=None):
        """Bind an exchange to another exchange.
:param str destination: The destination exchange to bind :param str source: The source exchange to bind to :param str routing_key: The routing key to bind on :param dict arguments: Custom key/value pair arguments for the binding :param callable callback: callback(pika.frame.Method) for method Exchange.BindOk :raises ValueError: """ self._raise_if_not_open() validators.require_string(destination, 'destination') validators.require_string(source, 'source') nowait = validators.rpc_completion_callback(callback) return self._rpc( spec.Exchange.Bind(0, destination, source, routing_key, nowait, arguments or dict()), callback, [spec.Exchange.BindOk] if not nowait else []) def exchange_declare(self, exchange, exchange_type=ExchangeType.direct, passive=False, durable=False, auto_delete=False, internal=False, arguments=None, callback=None): """This method creates an exchange if it does not already exist, and if the exchange exists, verifies that it is of the correct and expected class. If passive set, the server will reply with Declare-Ok if the exchange already exists with the same name, and raise an error if not and if the exchange does not already exist, the server MUST raise a channel exception with reply code 404 (not found). 
:param str exchange: The exchange name consists of a non-empty sequence of these characters: letters, digits, hyphen, underscore, period, or colon :param str exchange_type: The exchange type to use :param bool passive: Perform a declare or just check to see if it exists :param bool durable: Survive a reboot of RabbitMQ :param bool auto_delete: Remove when no more queues are bound to it :param bool internal: Can only be published to by other exchanges :param dict arguments: Custom key/value pair arguments for the exchange :param callable callback: callback(pika.frame.Method) for method Exchange.DeclareOk :raises ValueError: """ validators.require_string(exchange, 'exchange') self._raise_if_not_open() nowait = validators.rpc_completion_callback(callback) if isinstance(exchange_type, Enum): exchange_type = exchange_type.value return self._rpc( spec.Exchange.Declare(0, exchange, exchange_type, passive, durable, auto_delete, internal, nowait, arguments or dict()), callback, [spec.Exchange.DeclareOk] if not nowait else []) def exchange_delete(self, exchange=None, if_unused=False, callback=None): """Delete the exchange. :param str exchange: The exchange name :param bool if_unused: only delete if the exchange is unused :param callable callback: callback(pika.frame.Method) for method Exchange.DeleteOk :raises ValueError: """ self._raise_if_not_open() nowait = validators.rpc_completion_callback(callback) return self._rpc(spec.Exchange.Delete(0, exchange, if_unused, nowait), callback, [spec.Exchange.DeleteOk] if not nowait else []) def exchange_unbind(self, destination=None, source=None, routing_key='', arguments=None, callback=None): """Unbind an exchange from another exchange. 
:param str destination: The destination exchange to unbind :param str source: The source exchange to unbind from :param str routing_key: The routing key to unbind :param dict arguments: Custom key/value pair arguments for the binding :param callable callback: callback(pika.frame.Method) for method Exchange.UnbindOk :raises ValueError: """ self._raise_if_not_open() nowait = validators.rpc_completion_callback(callback) return self._rpc( spec.Exchange.Unbind(0, destination, source, routing_key, nowait, arguments), callback, [spec.Exchange.UnbindOk] if not nowait else []) def flow(self, active, callback=None): """Turn Channel flow control off and on. Pass a callback to be notified of the response from the server. active is a bool. Callback should expect a bool in response indicating channel flow state. For more information, please reference: http://www.rabbitmq.com/amqp-0-9-1-reference.html#channel.flow :param bool active: Turn flow on or off :param callable callback: callback(bool) upon completion :raises ValueError: """ self._raise_if_not_open() validators.rpc_completion_callback(callback) self._on_flowok_callback = callback self._rpc(spec.Channel.Flow(active), self._on_flowok, [spec.Channel.FlowOk]) @property def is_closed(self): """Returns True if the channel is closed. :rtype: bool """ return self._state == self.CLOSED @property def is_closing(self): """Returns True if client-initiated closing of the channel is in progress. :rtype: bool """ return self._state == self.CLOSING @property def is_open(self): """Returns True if the channel is open. 
:rtype: bool """ return self._state == self.OPEN def open(self): """Open the channel""" self._set_state(self.OPENING) self._add_callbacks() self._rpc(spec.Channel.Open(), self._on_openok, [spec.Channel.OpenOk]) def queue_bind(self, queue, exchange, routing_key=None, arguments=None, callback=None): """Bind the queue to the specified exchange :param str queue: The queue to bind to the exchange :param str exchange: The source exchange to bind to :param str routing_key: The routing key to bind on :param dict arguments: Custom key/value pair arguments for the binding :param callable callback: callback(pika.frame.Method) for method Queue.BindOk :raises ValueError: """ validators.require_string(queue, 'queue') validators.require_string(exchange, 'exchange') self._raise_if_not_open() nowait = validators.rpc_completion_callback(callback) if routing_key is None: routing_key = queue return self._rpc( spec.Queue.Bind(0, queue, exchange, routing_key, nowait, arguments or dict()), callback, [spec.Queue.BindOk] if not nowait else []) def queue_declare(self, queue, passive=False, durable=False, exclusive=False, auto_delete=False, arguments=None, callback=None): """Declare queue, create if needed. This method creates or checks a queue. When creating a new queue the client can specify various properties that control the durability of the queue and its contents, and the level of sharing for the queue. 
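`queue_bind` above defaults the routing key to the queue name when none is given; that defaulting rule can be sketched as a pure function (the helper name is illustrative):

```python
# Sketch of queue_bind's routing-key default: when no routing key is
# supplied, the queue name itself is used as the binding key.
def effective_routing_key(queue, routing_key=None):
    return queue if routing_key is None else routing_key

assert effective_routing_key('tasks') == 'tasks'
assert effective_routing_key('tasks', 'tasks.high') == 'tasks.high'
```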
Use an empty string as the queue name for the broker to auto-generate one :param str queue: The queue name; if empty string, the broker will create a unique queue name :param bool passive: Only check to see if the queue exists :param bool durable: Survive reboots of the broker :param bool exclusive: Only allow access by the current connection :param bool auto_delete: Delete after consumer cancels or disconnects :param dict arguments: Custom key/value arguments for the queue :param callable callback: callback(pika.frame.Method) for method Queue.DeclareOk :raises ValueError: """ validators.require_string(queue, 'queue') self._raise_if_not_open() nowait = validators.rpc_completion_callback(callback) if queue: condition = (spec.Queue.DeclareOk, {'queue': queue}) else: condition = spec.Queue.DeclareOk replies = [condition] if not nowait else [] return self._rpc( spec.Queue.Declare(0, queue, passive, durable, exclusive, auto_delete, nowait, arguments or dict()), callback, replies) def queue_delete(self, queue, if_unused=False, if_empty=False, callback=None): """Delete a queue from the broker. 
:param str queue: The queue to delete :param bool if_unused: only delete if it's unused :param bool if_empty: only delete if the queue is empty :param callable callback: callback(pika.frame.Method) for method Queue.DeleteOk :raises ValueError: """ self._raise_if_not_open() validators.require_string(queue, 'queue') nowait = validators.rpc_completion_callback(callback) replies = [spec.Queue.DeleteOk] if not nowait else [] return self._rpc( spec.Queue.Delete(0, queue, if_unused, if_empty, nowait), callback, replies) def queue_purge(self, queue, callback=None): """Purge all of the messages from the specified queue :param str queue: The queue to purge :param callable callback: callback(pika.frame.Method) for method Queue.PurgeOk :raises ValueError: """ self._raise_if_not_open() validators.require_string(queue, 'queue') nowait = validators.rpc_completion_callback(callback) replies = [spec.Queue.PurgeOk] if not nowait else [] return self._rpc(spec.Queue.Purge(0, queue, nowait), callback, replies) def queue_unbind(self, queue, exchange=None, routing_key=None, arguments=None, callback=None): """Unbind a queue from an exchange. 
:param str queue: The queue to unbind from the exchange :param str exchange: The source exchange to bind from :param str routing_key: The routing key to unbind :param dict arguments: Custom key/value pair arguments for the binding :param callable callback: callback(pika.frame.Method) for method Queue.UnbindOk :raises ValueError: """ self._raise_if_not_open() validators.require_string(queue, 'queue') validators.rpc_completion_callback(callback) if routing_key is None: routing_key = queue return self._rpc( spec.Queue.Unbind(0, queue, exchange, routing_key, arguments or dict()), callback, [spec.Queue.UnbindOk]) def tx_commit(self, callback=None): """Commit a transaction :param callable callback: The callback for delivery confirmations :raises ValueError: """ self._raise_if_not_open() validators.rpc_completion_callback(callback) return self._rpc(spec.Tx.Commit(), callback, [spec.Tx.CommitOk]) def tx_rollback(self, callback=None): """Rollback a transaction. :param callable callback: The callback for delivery confirmations :raises ValueError: """ self._raise_if_not_open() validators.rpc_completion_callback(callback) return self._rpc(spec.Tx.Rollback(), callback, [spec.Tx.RollbackOk]) def tx_select(self, callback=None): """Select standard transaction mode. This method sets the channel to use standard transactions. The client must use this method at least once on a channel before using the Commit or Rollback methods. :param callable callback: The callback for delivery confirmations :raises ValueError: """ self._raise_if_not_open() validators.rpc_completion_callback(callback) return self._rpc(spec.Tx.Select(), callback, [spec.Tx.SelectOk]) # Internal methods def _add_callbacks(self): """Callbacks that add the required behavior for a channel when connecting and connected to a server. 
""" # Add a callback for Basic.GetEmpty self.callbacks.add(self.channel_number, spec.Basic.GetEmpty, self._on_getempty, False) # Add a callback for Basic.Cancel self.callbacks.add(self.channel_number, spec.Basic.Cancel, self._on_cancel, False) # Deprecated in newer versions of RabbitMQ but still register for it self.callbacks.add(self.channel_number, spec.Channel.Flow, self._on_flow, False) # Add a callback for when the server closes our channel self.callbacks.add(self.channel_number, spec.Channel.Close, self._on_close_from_broker, True) def _add_on_cleanup_callback(self, callback): """For internal use only (e.g., Connection needs to remove closed channels from its channel container). Pass a callback function that will be called when the channel is being cleaned up after all channel-close callbacks callbacks. :param callable callback: The callback to call, having the signature: callback(channel) """ self.callbacks.add(self.channel_number, self._ON_CHANNEL_CLEANUP_CB_KEY, callback, one_shot=True, only_caller=self) def _cleanup(self): """Remove all consumers and any callbacks for the channel.""" self.callbacks.process(self.channel_number, self._ON_CHANNEL_CLEANUP_CB_KEY, self, self) self._consumers = dict() self.callbacks.cleanup(str(self.channel_number)) self._cookie = None def _cleanup_consumer_ref(self, consumer_tag): """Remove any references to the consumer tag in internal structures for consumer state. 
:param str consumer_tag: The consumer tag to cleanup """ self._consumers_with_noack.discard(consumer_tag) self._consumers.pop(consumer_tag, None) self._cancelled.discard(consumer_tag) def _get_cookie(self): """Used by the wrapper implementation (e.g., `BlockingChannel`) to retrieve the cookie that it set via `_set_cookie` :returns: opaque cookie value that was set via `_set_cookie` :rtype: object """ return self._cookie def _handle_content_frame(self, frame_value): """This is invoked by the connection when frames that are not registered with the CallbackManager have been found. This should only be the case when the frames are related to content delivery. The _content_assembler will be invoked which will return the fully formed message in three parts when all of the body frames have been received. :param pika.amqp_object.Frame frame_value: The frame to deliver """ try: response = self._content_assembler.process(frame_value) except exceptions.UnexpectedFrameError: self._on_unexpected_frame(frame_value) return if response: if isinstance(response[0].method, spec.Basic.Deliver): self._on_deliver(*response) elif isinstance(response[0].method, spec.Basic.GetOk): self._on_getok(*response) elif isinstance(response[0].method, spec.Basic.Return): self._on_return(*response) def _on_cancel(self, method_frame): """When the broker cancels a consumer, delete it from our internal dictionary. 
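The three-part assembly that `_handle_content_frame` relies on can be sketched without the frame classes: collect body fragments until their combined length reaches the header's declared `body_size`, and treat overshoot as an error. The class below is a simplified stand-in for `ContentFrameAssembler`, not Pika's implementation:

```python
class ToyAssembler:
    """Accumulate body fragments until the declared size is reached."""

    def __init__(self, body_size):
        self.body_size = body_size
        self.seen = 0
        self.fragments = []

    def feed(self, fragment):
        """Return the joined body when complete, None while partial."""
        self.seen += len(fragment)
        self.fragments.append(fragment)
        if self.seen == self.body_size:
            return b''.join(self.fragments)
        if self.seen > self.body_size:
            # corresponds to raising exceptions.BodyTooLongError
            raise ValueError(
                'body too long: %d > %d' % (self.seen, self.body_size))
        return None
```

The `None` return for partial input mirrors how `process` returns nothing until the final body frame arrives.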
:param pika.frame.Method method_frame: The method frame received """ if method_frame.method.consumer_tag in self._cancelled: # User-initiated cancel is waiting for Cancel-ok return self._cleanup_consumer_ref(method_frame.method.consumer_tag) def _on_cancelok(self, method_frame): """Called in response to a frame from the Broker when the client sends Basic.Cancel :param pika.frame.Method method_frame: The method frame received """ self._cleanup_consumer_ref(method_frame.method.consumer_tag) def _transition_to_closed(self): """Common logic for transitioning the channel to the CLOSED state: Set state to CLOSED, dispatch callbacks registered via `Channel.add_on_close_callback()`, and mop up. Assumes that the channel is not in CLOSED state and that `self._closing_reason` has been set up """ assert not self.is_closed assert self._closing_reason is not None self._set_state(self.CLOSED) try: self.callbacks.process(self.channel_number, '_on_channel_close', self, self, self._closing_reason) finally: self._cleanup() def _on_close_from_broker(self, method_frame): """Handle `Channel.Close` from broker. :param pika.frame.Method method_frame: Method frame with Channel.Close method """ LOGGER.warning('Received remote Channel.Close (%s): %r on %s', method_frame.method.reply_code, method_frame.method.reply_text, self) # Note, we should not be called when channel is already closed assert not self.is_closed # AMQP 0.9.1 requires CloseOk response to Channel.Close; self._send_method(spec.Channel.CloseOk()) # Save the details, possibly overriding user-provided values if # user-initiated close is pending (in which case they will be provided # to user callback when CloseOk arrives). 
self._closing_reason = exceptions.ChannelClosedByBroker( method_frame.method.reply_code, method_frame.method.reply_text) if self.is_closing: # Since we may have already put Channel.Close on the wire, we need # to wait for CloseOk before cleaning up to avoid a race condition # whereby our channel number might get reused before our CloseOk # arrives # # NOTE: if our Channel.Close destined for the broker was blocked by # an earlier synchronous method, this call will drop it and perform # a meta-close (see `_on_close_meta()` which fakes receipt of # `Channel.CloseOk` and dispatches the `'_on_channel_close'` # callbacks. self._drain_blocked_methods_on_remote_close() else: self._transition_to_closed() def _on_close_meta(self, reason): """Handle meta-close request from either a remote Channel.Close from the broker (when a pending Channel.Close method is queued for execution) or a Connection's cleanup logic after sudden connection loss. We use this opportunity to transition to CLOSED state, clean up the channel, and dispatch the on-channel-closed callbacks. :param Exception reason: Exception describing the reason for closing. """ LOGGER.debug('Handling meta-close on %s: %r', self, reason) if not self.is_closed: self._closing_reason = reason self._transition_to_closed() def _on_closeok(self, method_frame): """Invoked when RabbitMQ replies to a Channel.Close method :param pika.frame.Method method_frame: Method frame with Channel.CloseOk method """ LOGGER.info('Received %s on %s', method_frame.method, self) self._transition_to_closed() def _on_deliver(self, method_frame, header_frame, body): """Cope with reentrancy. If a particular consumer is still active when another delivery appears for it, queue the deliveries up until it finally exits. 
:param pika.frame.Method method_frame: The method frame received :param pika.frame.Header header_frame: The header frame received :param bytes body: The body received """ consumer_tag = method_frame.method.consumer_tag if consumer_tag in self._cancelled: if self.is_open and consumer_tag not in self._consumers_with_noack: self.basic_reject(method_frame.method.delivery_tag) return if consumer_tag not in self._consumers: LOGGER.error('Unexpected delivery: %r', method_frame) return self._consumers[consumer_tag](self, method_frame.method, header_frame.properties, body) def _on_eventok(self, method_frame): """Generic events that returned ok that may have internal callbacks. We keep a list of what we've yet to implement so that we don't silently drain events that we don't support. :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug('Discarding frame %r', method_frame) def _on_flow(self, _method_frame_unused): """Called if the server sends a Channel.Flow frame. :param pika.frame.Method method_frame_unused: The Channel.Flow frame """ if self._has_on_flow_callback is False: LOGGER.warning('Channel.Flow received from server') def _on_flowok(self, method_frame): """Called in response to us asking the server to toggle on Channel.Flow :param pika.frame.Method method_frame: The method frame received """ self.flow_active = method_frame.method.active if self._on_flowok_callback: self._on_flowok_callback(method_frame.method.active) self._on_flowok_callback = None else: LOGGER.warning('Channel.FlowOk received with no active callbacks') def _on_getempty(self, method_frame): """When we receive an empty reply do nothing but log it :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug('Received Basic.GetEmpty: %r', method_frame) if self._on_getok_callback is not None: self._on_getok_callback = None def _on_getok(self, method_frame, header_frame, body): """Called in reply to a Basic.Get when there is a message. 
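`_on_deliver` looks the consumer up by tag and dispatches the message, rejecting deliveries for tags the user has already cancelled. A reduced model of that bookkeeping (the channel machinery is omitted; `rejected` stands in for `basic_reject`):

```python
def deliver(consumers, cancelled, tag, message, rejected):
    """Dispatch one delivery: reject-and-drop if the tag was cancelled,
    silently drop if unknown (logged as 'Unexpected delivery' above)."""
    if tag in cancelled:
        rejected.append(message)  # stands in for basic_reject(delivery_tag)
        return None
    if tag not in consumers:
        return None
    return consumers[tag](message)
```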
:param pika.frame.Method method_frame: The method frame received :param pika.frame.Header header_frame: The header frame received :param bytes body: The body received """ if self._on_getok_callback is not None: callback = self._on_getok_callback self._on_getok_callback = None callback(self, method_frame.method, header_frame.properties, body) else: LOGGER.error('Basic.GetOk received with no active callback') def _on_openok(self, method_frame): """Called by our callback handler when we receive a Channel.OpenOk and subsequently calls our _on_openok_callback which was passed into the Channel constructor. The reason we do this is because we want to make sure that the on_open_callback parameter passed into the Channel constructor is not the first callback we make. Suppress the state transition and callback if channel is already in CLOSING state. :param pika.frame.Method method_frame: Channel.OpenOk frame """ # Suppress OpenOk if the user or Connection.Close started closing it # before open completed. if self.is_closing: LOGGER.debug('Suppressing while in closing state: %s', method_frame) else: self._set_state(self.OPEN) if self._on_openok_callback is not None: self._on_openok_callback(self) def _on_return(self, method_frame, header_frame, body): """Called if the server sends a Basic.Return frame. 
:param pika.frame.Method method_frame: The Basic.Return frame :param pika.frame.Header header_frame: The content header frame :param bytes body: The message body """ if not self.callbacks.process(self.channel_number, '_on_return', self, self, method_frame.method, header_frame.properties, body): LOGGER.debug('Basic.Return received from server (%r, %r)', method_frame.method, header_frame.properties) def _on_selectok(self, method_frame): """Called when the broker sends a Confirm.SelectOk frame :param pika.frame.Method method_frame: The method frame received """ LOGGER.debug("Confirm.SelectOk Received: %r", method_frame) def _on_synchronous_complete(self, _method_frame_unused): """This is called when a synchronous command is completed. It will undo the blocking state and send all the frames that stacked up while we were in the blocking state. :param pika.frame.Method method_frame_unused: The method frame received """ LOGGER.debug('%i blocked frames', len(self._blocked)) self._blocking = None # self._blocking must be checked here as a callback could # potentially change the state of that variable during an # iteration of the while loop while self._blocked and self._blocking is None: self._rpc(*self._blocked.popleft()) def _drain_blocked_methods_on_remote_close(self): """This is called when the broker sends a Channel.Close while the client is in CLOSING state. This method checks the blocked method queue for a pending client-initiated Channel.Close method and ensures its callbacks are processed, but does not send the method to the broker. The broker may close the channel before responding to outstanding in-transit synchronous methods, or even before these methods have been sent to the broker. AMQP 0.9.1 obliges the server to drop all methods arriving on a closed channel other than Channel.CloseOk and Channel.Close. 
Since the response to a synchronous method that blocked the channel never arrives, the channel never becomes unblocked, and the Channel.Close, if any, in the blocked queue has no opportunity to be sent, and thus its completion callback would never be called. """ LOGGER.debug( 'Draining %i blocked frames due to broker-requested Channel.Close', len(self._blocked)) while self._blocked: method = self._blocked.popleft()[0] if isinstance(method, spec.Channel.Close): # The desired reason is already in self._closing_reason self._on_close_meta(self._closing_reason) else: LOGGER.debug('Ignoring drained blocked method: %s', method) def _rpc(self, method, callback=None, acceptable_replies=None): """Make a synchronous channel RPC call for a synchronous method frame. If the channel is already in the blocking state, then enqueue the request, but don't send it at this time; it will eventually be sent by `_on_synchronous_complete` after the prior blocking request receives a response. If the channel is not in the blocking state and `acceptable_replies` is not empty, transition the channel to the blocking state and register for `_on_synchronous_complete` before sending the request. NOTE: A callback must be accompanied by non-empty acceptable_replies.
:param pika.amqp_object.Method method: The AMQP method to invoke :param callable callback: The callback for the RPC response :param list|None acceptable_replies: A (possibly empty) sequence of replies this RPC call expects or None """ assert method.synchronous, ( 'Only synchronous-capable methods may be used with _rpc: %r' % (method,)) # Validate we got None or a list of acceptable_replies if not isinstance(acceptable_replies, (type(None), list)): raise TypeError('acceptable_replies should be list or None') if callback is not None: # Validate the callback is callable if not callable(callback): raise TypeError('callback should be None or a callable') # Make sure that callback is accompanied by acceptable replies if not acceptable_replies: raise ValueError( 'Unexpected callback for asynchronous (nowait) operation.') # Make sure the channel is not closed yet if self.is_closed: self._raise_if_not_open() # If the channel is blocking, add subsequent commands to our stack if self._blocking: LOGGER.debug( 'Already in blocking state, so enqueueing method %s; ' 'acceptable_replies=%r', method, acceptable_replies) self._blocked.append([method, callback, acceptable_replies]) return # Note: _send_method can throw exceptions if there are framing errors # or invalid data passed in. Call it here to prevent self._blocking # from being set if an exception is thrown. 
This also prevents # acceptable_replies registering callbacks when exceptions are thrown self._send_method(method) # If acceptable replies are set, add callbacks if acceptable_replies: # Block until a response frame is received for synchronous frames self._blocking = method.NAME LOGGER.debug( 'Entering blocking state on frame %s; acceptable_replies=%r', method, acceptable_replies) for reply in acceptable_replies: if isinstance(reply, tuple): reply, arguments = reply else: arguments = None LOGGER.debug('Adding on_synchronous_complete callback') self.callbacks.add(self.channel_number, reply, self._on_synchronous_complete, arguments=arguments) if callback is not None: LOGGER.debug('Adding passed-in RPC response callback') self.callbacks.add(self.channel_number, reply, callback, arguments=arguments) def _raise_if_not_open(self): """If channel is not in the OPEN state, raises ChannelWrongStateError with `reply_code` and `reply_text` corresponding to current state. :raises exceptions.ChannelWrongStateError: if channel is not in OPEN state. """ if self._state == self.OPEN: return if self._state == self.OPENING: raise exceptions.ChannelWrongStateError('Channel is opening, but is not usable yet.') if self._state == self.CLOSING: raise exceptions.ChannelWrongStateError('Channel is closing.') # Assumed self.CLOSED assert self._state == self.CLOSED raise exceptions.ChannelWrongStateError('Channel is closed.') def _send_method(self, method, content=None): """Shortcut wrapper to send a method through our connection, passing in the channel number :param pika.amqp_object.Method method: The method to send :param tuple content: If set, is a content frame, is tuple of properties and body. """ # pylint: disable=W0212 self.connection._send_method(self.channel_number, method, content) def _set_cookie(self, cookie): """Used by wrapper layer (e.g., `BlockingConnection`) to link the channel implementation back to the proxy. See `_get_cookie`. 
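The `_rpc` / `_on_synchronous_complete` pair implements a one-in-flight serializer: while a synchronous method awaits its reply, later methods are parked in a deque and replayed when the reply lands. A pared-down model of just that queueing discipline (no wire I/O; `send` is a stand-in for `_send_method`):

```python
import collections


class RpcSerializer:
    """Toy model of the channel's blocking-RPC discipline."""

    def __init__(self, send):
        self._send = send
        self._blocking = None  # name of the in-flight synchronous method
        self._blocked = collections.deque()

    def rpc(self, name, synchronous=True):
        # Park the request if another synchronous method is in flight
        if self._blocking is not None:
            self._blocked.append((name, synchronous))
            return
        self._send(name)
        if synchronous:
            self._blocking = name

    def on_complete(self):
        """Reply arrived: unblock, then replay queued methods until one of
        them blocks the channel again (mirrors the while loop above)."""
        self._blocking = None
        while self._blocked and self._blocking is None:
            self.rpc(*self._blocked.popleft())
```

A second synchronous call made while the first is outstanding is not sent until `on_complete` fires, which is exactly why a broker-side close must drain `_blocked` explicitly.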
:param cookie: an opaque value; typically a proxy channel implementation instance (e.g., `BlockingChannel` instance) """ self._cookie = cookie def _set_state(self, connection_state): """Set the channel connection state to the specified state value. :param int connection_state: The connection_state value """ self._state = connection_state def _on_unexpected_frame(self, frame_value): """Invoked when a frame is received that is not setup to be processed. :param pika.frame.Frame frame_value: The frame received """ LOGGER.error('Unexpected frame: %r', frame_value) class ContentFrameAssembler(object): """Handle content related frames, building a message and returning it in three parts upon receipt. """ def __init__(self): """Create a new instance of the content frame assembler. """ self._method_frame = None self._header_frame = None self._seen_so_far = 0 self._body_fragments = list() def process(self, frame_value): """Invoked by the Channel object when passed frames that are not setup in the rpc process and that don't have explicit reply types defined. This includes Basic.Deliver, Basic.GetOk and Basic.Return :param Method|Header|Body frame_value: The frame to process """ if (isinstance(frame_value, frame.Method) and spec.has_content(frame_value.method.INDEX)): self._method_frame = frame_value return None elif isinstance(frame_value, frame.Header): self._header_frame = frame_value if frame_value.body_size == 0: return self._finish() else: return None elif isinstance(frame_value, frame.Body): return self._handle_body_frame(frame_value) else: raise exceptions.UnexpectedFrameError(frame_value) def _finish(self): """Invoked when all of the message has been received :rtype: tuple(pika.frame.Method, pika.frame.Header, str) """ content = (self._method_frame, self._header_frame, b''.join(self._body_fragments)) self._reset() return content def _handle_body_frame(self, body_frame): """Receive body frames and append them to the stack.
When the body size matches, call the finish method. :param Body body_frame: The body frame :raises: pika.exceptions.BodyTooLongError :rtype: tuple(pika.frame.Method, pika.frame.Header, str)|None """ self._seen_so_far += len(body_frame.fragment) self._body_fragments.append(body_frame.fragment) if self._seen_so_far == self._header_frame.body_size: return self._finish() elif self._seen_so_far > self._header_frame.body_size: raise exceptions.BodyTooLongError(self._seen_so_far, self._header_frame.body_size) return None def _reset(self): """Reset the values for processing frames""" self._method_frame = None self._header_frame = None self._seen_so_far = 0 self._body_fragments = list() pika-1.2.0/pika/compat.py000066400000000000000000000160651400701476500152330ustar00rootroot00000000000000"""The compat module provides various Python 2 / Python 3 compatibility functions """ # pylint: disable=C0103 import abc import os import platform import re import socket import sys as _sys import time PY2 = _sys.version_info.major == 2 PY3 = not PY2 RE_NUM = re.compile(r'(\d+).+') ON_LINUX = platform.system() == 'Linux' ON_OSX = platform.system() == 'Darwin' ON_WINDOWS = platform.system() == 'Windows' # Portable Abstract Base Class AbstractBase = abc.ABCMeta('AbstractBase', (object,), {}) if _sys.version_info[:2] < (3, 3): SOCKET_ERROR = socket.error else: # socket.error was deprecated and replaced by OSError in python 3.3 SOCKET_ERROR = OSError try: SOL_TCP = socket.SOL_TCP except AttributeError: SOL_TCP = socket.IPPROTO_TCP if PY3: # these were moved around for Python 3 # pylint: disable=W0611 from urllib.parse import (quote as url_quote, unquote as url_unquote, urlencode, parse_qs as url_parse_qs, urlparse) from io import StringIO # Python 3 does not have basestring anymore; we include # *only* the str here as this is used for textual data. 
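The compat shims that follow wrap `dict.keys()`/`dict.values()` in `list()` because Python 3 returns live views: mutating the dict while iterating a view raises `RuntimeError`, whereas a snapshot list is safe. A quick demonstration of the hazard the wrappers avoid (`purge_small` is an illustrative helper, not part of the module):

```python
def purge_small(dct, threshold):
    """Delete entries below threshold while iterating - safe only because
    we iterate over a snapshot (list) of the keys, as compat.dictkeys does."""
    for key in list(dct.keys()):   # snapshot; a bare dct.keys() view would
        if dct[key] < threshold:   # raise RuntimeError at the del below
            del dct[key]
    return dct
```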
basestring = (str,) # for assertions that the data is either encoded or non-encoded text str_or_bytes = (str, bytes) # xrange is gone, replace it with range xrange = range # the unicode type is str unicode_type = str def time_now(): """ Python 3 supports monotonic time """ return time.monotonic() def dictkeys(dct): """ Returns a list of keys of dictionary dict.keys returns a view that works like .keys in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return list(dct.keys()) def dictvalues(dct): """ Returns a list of values of a dictionary dict.values returns a view that works like .values in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return list(dct.values()) def dict_iteritems(dct): """ Returns an iterator of items (key/value pairs) of a dictionary dict.items returns a view that works like .items in Python 2 *except* any modifications in the dictionary will be visible (and will cause errors if the view is being iterated over while it is modified). """ return dct.items() def dict_itervalues(dct): """ :param dict dct: :returns: an iterator of the values of a dictionary :rtype: iterator """ return dct.values() def byte(*args): """ This is the same as Python 2 `chr(n)` for bytes in Python 3 Returns a single byte `bytes` for the given int argument (we optimize it a bit here by passing the positional argument tuple directly to the bytes constructor. """ return bytes(args) class long(int): """ A marker class that signifies that the integer value should be serialized as `l` instead of `I` """ def __str__(self): return str(int(self)) def __repr__(self): return str(self) + 'L' def canonical_str(value): """ Return the canonical str value for the string. In both Python 3 and Python 2 this is str. 
""" return str(value) def is_integer(value): """ Is value an integer? """ return isinstance(value, int) else: from urllib import (quote as url_quote, unquote as url_unquote, urlencode) # pylint: disable=C0412,E0611 from urlparse import (parse_qs as url_parse_qs, urlparse) # pylint: disable=E0401 from StringIO import StringIO # pylint: disable=E0401 basestring = basestring str_or_bytes = basestring xrange = xrange unicode_type = unicode # pylint: disable=E0602 dictkeys = dict.keys dictvalues = dict.values dict_iteritems = dict.iteritems # pylint: disable=E1101 dict_itervalues = dict.itervalues # pylint: disable=E1101 byte = chr long = long def time_now(): """ Python 2 does not support monotonic time """ return time.time() def canonical_str(value): """ Returns the canonical string value of the given string. In Python 2 this is the value unchanged if it is an str, otherwise it is the unicode value encoded as UTF-8. """ try: return str(value) except UnicodeEncodeError: return str(value.encode('utf-8')) def is_integer(value): """ Is value an integer? 
""" return isinstance(value, (int, long)) def as_bytes(value): """ Returns value as bytes """ if not isinstance(value, bytes): return value.encode('UTF-8') return value def to_digit(value): """ Returns value as in integer """ if value.isdigit(): return int(value) match = RE_NUM.match(value) return int(match.groups()[0]) if match else 0 def get_linux_version(release_str): """ Gets linux version """ ver_str = release_str.split('-')[0] return tuple(map(to_digit, ver_str.split('.')[:3])) HAVE_SIGNAL = os.name == 'posix' EINTR_IS_EXPOSED = _sys.version_info[:2] <= (3, 4) LINUX_VERSION = None if platform.system() == 'Linux': LINUX_VERSION = get_linux_version(platform.release()) _LOCALHOST = '127.0.0.1' _LOCALHOST_V6 = '::1' def _nonblocking_socketpair(family=socket.AF_INET, socket_type=socket.SOCK_STREAM, proto=0): """ Returns a pair of sockets in the manner of socketpair with the additional feature that they will be non-blocking. Prior to Python 3.5, socketpair did not exist on Windows at all. 
""" if family == socket.AF_INET: host = _LOCALHOST elif family == socket.AF_INET6: host = _LOCALHOST_V6 else: raise ValueError('Only AF_INET and AF_INET6 socket address families ' 'are supported') if socket_type != socket.SOCK_STREAM: raise ValueError('Only SOCK_STREAM socket socket_type is supported') if proto != 0: raise ValueError('Only protocol zero is supported') lsock = socket.socket(family, socket_type, proto) try: lsock.bind((host, 0)) lsock.listen(min(socket.SOMAXCONN, 128)) # On IPv6, ignore flow_info and scope_id addr, port = lsock.getsockname()[:2] csock = socket.socket(family, socket_type, proto) try: csock.connect((addr, port)) ssock, _ = lsock.accept() except Exception: csock.close() raise finally: lsock.close() # Make sockets non-blocking to prevent deadlocks # See https://github.com/pika/pika/issues/917 csock.setblocking(False) ssock.setblocking(False) return ssock, csock pika-1.2.0/pika/connection.py000066400000000000000000002506621400701476500161120ustar00rootroot00000000000000"""Core connection objects""" # disable too-many-lines # pylint: disable=C0302 import abc import ast import copy import functools import logging import math import numbers import platform import ssl import pika.callback import pika.channel import pika.compat import pika.credentials import pika.exceptions as exceptions import pika.frame as frame import pika.heartbeat import pika.spec as spec import pika.validators as validators from pika.compat import ( xrange, url_unquote, dictkeys, dict_itervalues, dict_iteritems) PRODUCT = "Pika Python Client Library" LOGGER = logging.getLogger(__name__) class Parameters(object): # pylint: disable=R0902 """Base connection parameters class definition """ # Declare slots to protect against accidental assignment of an invalid # attribute __slots__ = ('_blocked_connection_timeout', '_channel_max', '_client_properties', '_connection_attempts', '_credentials', '_frame_max', '_heartbeat', '_host', '_locale', '_port', '_retry_delay', 
'_socket_timeout', '_stack_timeout', '_ssl_options', '_virtual_host', '_tcp_options') DEFAULT_USERNAME = 'guest' DEFAULT_PASSWORD = 'guest' DEFAULT_BLOCKED_CONNECTION_TIMEOUT = None DEFAULT_CHANNEL_MAX = pika.channel.MAX_CHANNELS DEFAULT_CLIENT_PROPERTIES = None DEFAULT_CREDENTIALS = pika.credentials.PlainCredentials( DEFAULT_USERNAME, DEFAULT_PASSWORD) DEFAULT_CONNECTION_ATTEMPTS = 1 DEFAULT_FRAME_MAX = spec.FRAME_MAX_SIZE DEFAULT_HEARTBEAT_TIMEOUT = None # None accepts server's proposal DEFAULT_HOST = 'localhost' DEFAULT_LOCALE = 'en_US' DEFAULT_PORT = 5672 DEFAULT_RETRY_DELAY = 2.0 DEFAULT_SOCKET_TIMEOUT = 10.0 # socket.connect() timeout DEFAULT_STACK_TIMEOUT = 15.0 # full-stack TCP/[SSl]/AMQP bring-up timeout DEFAULT_SSL = False DEFAULT_SSL_OPTIONS = None DEFAULT_SSL_PORT = 5671 DEFAULT_VIRTUAL_HOST = '/' DEFAULT_TCP_OPTIONS = None def __init__(self): # If not None, blocked_connection_timeout is the timeout, in seconds, # for the connection to remain blocked; if the timeout expires, the # connection will be torn down, triggering the connection's # on_close_callback self._blocked_connection_timeout = None self.blocked_connection_timeout = ( self.DEFAULT_BLOCKED_CONNECTION_TIMEOUT) self._channel_max = None self.channel_max = self.DEFAULT_CHANNEL_MAX self._client_properties = None self.client_properties = self.DEFAULT_CLIENT_PROPERTIES self._connection_attempts = None self.connection_attempts = self.DEFAULT_CONNECTION_ATTEMPTS self._credentials = None self.credentials = self.DEFAULT_CREDENTIALS self._frame_max = None self.frame_max = self.DEFAULT_FRAME_MAX self._heartbeat = None self.heartbeat = self.DEFAULT_HEARTBEAT_TIMEOUT self._host = None self.host = self.DEFAULT_HOST self._locale = None self.locale = self.DEFAULT_LOCALE self._port = None self.port = self.DEFAULT_PORT self._retry_delay = None self.retry_delay = self.DEFAULT_RETRY_DELAY self._socket_timeout = None self.socket_timeout = self.DEFAULT_SOCKET_TIMEOUT self._stack_timeout = None self.stack_timeout = 
self.DEFAULT_STACK_TIMEOUT self._ssl_options = None self.ssl_options = self.DEFAULT_SSL_OPTIONS self._virtual_host = None self.virtual_host = self.DEFAULT_VIRTUAL_HOST self._tcp_options = None self.tcp_options = self.DEFAULT_TCP_OPTIONS def __repr__(self): """Represent the info about the instance. :rtype: str """ return ('<%s host=%s port=%s virtual_host=%s ssl=%s>' % (self.__class__.__name__, self.host, self.port, self.virtual_host, bool(self.ssl_options))) def __eq__(self, other): if isinstance(other, Parameters): return self._host == other._host and self._port == other._port # pylint: disable=W0212 return NotImplemented def __ne__(self, other): result = self.__eq__(other) if result is not NotImplemented: return not result return NotImplemented @property def blocked_connection_timeout(self): """ :returns: blocked connection timeout. Defaults to `DEFAULT_BLOCKED_CONNECTION_TIMEOUT`. :rtype: float|None """ return self._blocked_connection_timeout @blocked_connection_timeout.setter def blocked_connection_timeout(self, value): """ :param value: If not None, blocked_connection_timeout is the timeout, in seconds, for the connection to remain blocked; if the timeout expires, the connection will be torn down, triggering the connection's on_close_callback """ if value is not None: if not isinstance(value, numbers.Real): raise TypeError('blocked_connection_timeout must be a Real ' 'number, but got %r' % (value,)) if value < 0: raise ValueError('blocked_connection_timeout must be >= 0, but ' 'got %r' % (value,)) self._blocked_connection_timeout = value @property def channel_max(self): """ :returns: max preferred number of channels. Defaults to `DEFAULT_CHANNEL_MAX`. 
:rtype: int """ return self._channel_max @channel_max.setter def channel_max(self, value): """ :param int value: max preferred number of channels, between 1 and `channel.MAX_CHANNELS`, inclusive """ if not isinstance(value, numbers.Integral): raise TypeError('channel_max must be an int, but got %r' % (value,)) if value < 1 or value > pika.channel.MAX_CHANNELS: raise ValueError('channel_max must be <= %i and > 0, but got %r' % (pika.channel.MAX_CHANNELS, value)) self._channel_max = value @property def client_properties(self): """ :returns: client properties used to override the fields in the default client properties reported to RabbitMQ via `Connection.StartOk` method. Defaults to `DEFAULT_CLIENT_PROPERTIES`. :rtype: dict|None """ return self._client_properties @client_properties.setter def client_properties(self, value): """ :param value: None or dict of client properties used to override the fields in the default client properties reported to RabbitMQ via `Connection.StartOk` method. """ if not isinstance(value, ( dict, type(None), )): raise TypeError('client_properties must be dict or None, ' 'but got %r' % (value,)) # Copy the mutable object to avoid accidental side-effects self._client_properties = copy.deepcopy(value) @property def connection_attempts(self): """ :returns: number of socket connection attempts. Defaults to `DEFAULT_CONNECTION_ATTEMPTS`. See also `retry_delay`. :rtype: int """ return self._connection_attempts @connection_attempts.setter def connection_attempts(self, value): """ :param int value: number of socket connection attempts of at least 1. See also `retry_delay`. """ if not isinstance(value, numbers.Integral): raise TypeError('connection_attempts must be an int') if value < 1: raise ValueError( 'connection_attempts must be > 0, but got %r' % (value,)) self._connection_attempts = value @property def credentials(self): """ :rtype: one of the classes from `pika.credentials.VALID_TYPES`. Defaults to `DEFAULT_CREDENTIALS`. 
""" return self._credentials @credentials.setter def credentials(self, value): """ :param value: authentication credential object of one of the classes from `pika.credentials.VALID_TYPES` """ if not isinstance(value, tuple(pika.credentials.VALID_TYPES)): raise TypeError('credentials must be an object of type: %r, but ' 'got %r' % (pika.credentials.VALID_TYPES, value)) # Copy the mutable object to avoid accidental side-effects self._credentials = copy.deepcopy(value) @property def frame_max(self): """ :returns: desired maximum AMQP frame size to use. Defaults to `DEFAULT_FRAME_MAX`. :rtype: int """ return self._frame_max @frame_max.setter def frame_max(self, value): """ :param int value: desired maximum AMQP frame size to use between `spec.FRAME_MIN_SIZE` and `spec.FRAME_MAX_SIZE`, inclusive """ if not isinstance(value, numbers.Integral): raise TypeError('frame_max must be an int, but got %r' % (value,)) if value < spec.FRAME_MIN_SIZE: raise ValueError('Min AMQP 0.9.1 Frame Size is %i, but got %r' % ( spec.FRAME_MIN_SIZE, value, )) elif value > spec.FRAME_MAX_SIZE: raise ValueError('Max AMQP 0.9.1 Frame Size is %i, but got %r' % ( spec.FRAME_MAX_SIZE, value, )) self._frame_max = value @property def heartbeat(self): """ :returns: AMQP connection heartbeat timeout value for negotiation during connection tuning or callable which is invoked during connection tuning. None to accept broker's value. 0 turns heartbeat off. Defaults to `DEFAULT_HEARTBEAT_TIMEOUT`. :rtype: int|callable|None """ return self._heartbeat @heartbeat.setter def heartbeat(self, value): """ :param int|None|callable value: Controls AMQP heartbeat timeout negotiation during connection tuning. An integer value always overrides the value proposed by broker. Use 0 to deactivate heartbeats and None to always accept the broker's proposal. If a callable is given, it will be called with the connection instance and the heartbeat timeout proposed by broker as its arguments. 
        The callback should return a non-negative integer that will be used
        to override the broker's proposal.
        """
        if value is not None:
            if not isinstance(value, numbers.Integral) and not callable(value):
                raise TypeError(
                    'heartbeat must be an int or a callable function, '
                    'but got %r' % (value,))
            if not callable(value) and value < 0:
                raise ValueError('heartbeat must be >= 0, but got %r' %
                                 (value,))

        self._heartbeat = value

    @property
    def host(self):
        """
        :returns: hostname or ip address of broker. Defaults to
            `DEFAULT_HOST`.
        :rtype: str
        """
        return self._host

    @host.setter
    def host(self, value):
        """
        :param str value: hostname or ip address of broker
        """
        validators.require_string(value, 'host')
        self._host = value

    @property
    def locale(self):
        """
        :returns: locale value to pass to broker; e.g., 'en_US'. Defaults to
            `DEFAULT_LOCALE`.
        :rtype: str
        """
        return self._locale

    @locale.setter
    def locale(self, value):
        """
        :param str value: locale value to pass to broker; e.g., "en_US"
        """
        validators.require_string(value, 'locale')
        self._locale = value

    @property
    def port(self):
        """
        :returns: port number of broker's listening socket. Defaults to
            `DEFAULT_PORT`.
        :rtype: int
        """
        return self._port

    @port.setter
    def port(self, value):
        """
        :param int value: port number of broker's listening socket
        """
        try:
            self._port = int(value)
        except (TypeError, ValueError):
            raise TypeError('port must be an int, but got %r' % (value,))

    @property
    def retry_delay(self):
        """
        :returns: interval between socket connection attempts; see also
            `connection_attempts`. Defaults to `DEFAULT_RETRY_DELAY`.
        :rtype: float
        """
        return self._retry_delay

    @retry_delay.setter
    def retry_delay(self, value):
        """
        :param int | float value: interval between socket connection attempts;
            see also `connection_attempts`.
        """
        if not isinstance(value, numbers.Real):
            raise TypeError(
                'retry_delay must be a float or int, but got %r' % (value,))
        self._retry_delay = value

    @property
    def socket_timeout(self):
        """
        :returns: socket connect timeout in seconds.
Defaults to `DEFAULT_SOCKET_TIMEOUT`. The value None disables this timeout. :rtype: float|None """ return self._socket_timeout @socket_timeout.setter def socket_timeout(self, value): """ :param int | float | None value: positive socket connect timeout in seconds. None to disable this timeout. """ if value is not None: if not isinstance(value, numbers.Real): raise TypeError('socket_timeout must be a float or int, ' 'but got %r' % (value,)) if value <= 0: raise ValueError( 'socket_timeout must be > 0, but got %r' % (value,)) value = float(value) self._socket_timeout = value @property def stack_timeout(self): """ :returns: full protocol stack TCP/[SSL]/AMQP bring-up timeout in seconds. Defaults to `DEFAULT_STACK_TIMEOUT`. The value None disables this timeout. :rtype: float """ return self._stack_timeout @stack_timeout.setter def stack_timeout(self, value): """ :param int | float | None value: positive full protocol stack TCP/[SSL]/AMQP bring-up timeout in seconds. It's recommended to set this value higher than `socket_timeout`. None to disable this timeout. """ if value is not None: if not isinstance(value, numbers.Real): raise TypeError('stack_timeout must be a float or int, ' 'but got %r' % (value,)) if value <= 0: raise ValueError( 'stack_timeout must be > 0, but got %r' % (value,)) value = float(value) self._stack_timeout = value @property def ssl_options(self): """ :returns: None for plaintext or `pika.SSLOptions` instance for SSL/TLS. :rtype: `pika.SSLOptions`|None """ return self._ssl_options @ssl_options.setter def ssl_options(self, value): """ :param `pika.SSLOptions`|None value: None for plaintext or `pika.SSLOptions` instance for SSL/TLS. Defaults to None. """ if not isinstance(value, (SSLOptions, type(None))): raise TypeError( 'ssl_options must be None or SSLOptions but got %r' % (value,)) self._ssl_options = value @property def virtual_host(self): """ :returns: rabbitmq virtual host name. Defaults to `DEFAULT_VIRTUAL_HOST`. 
:rtype: str """ return self._virtual_host @virtual_host.setter def virtual_host(self, value): """ :param str value: rabbitmq virtual host name """ validators.require_string(value, 'virtual_host') self._virtual_host = value @property def tcp_options(self): """ :returns: None or a dict of options to pass to the underlying socket :rtype: dict|None """ return self._tcp_options @tcp_options.setter def tcp_options(self, value): """ :param dict|None value: None or a dict of options to pass to the underlying socket. Currently supported are TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT and TCP_USER_TIMEOUT. Availability of these may depend on your platform. """ if not isinstance(value, (dict, type(None))): raise TypeError( 'tcp_options must be a dict or None, but got %r' % (value,)) self._tcp_options = value class ConnectionParameters(Parameters): """Connection parameters object that is passed into the connection adapter upon construction. """ # Protect against accidental assignment of an invalid attribute __slots__ = () class _DEFAULT(object): """Designates default parameter value; internal use""" def __init__( # pylint: disable=R0913,R0914 self, host=_DEFAULT, port=_DEFAULT, virtual_host=_DEFAULT, credentials=_DEFAULT, channel_max=_DEFAULT, frame_max=_DEFAULT, heartbeat=_DEFAULT, ssl_options=_DEFAULT, connection_attempts=_DEFAULT, retry_delay=_DEFAULT, socket_timeout=_DEFAULT, stack_timeout=_DEFAULT, locale=_DEFAULT, blocked_connection_timeout=_DEFAULT, client_properties=_DEFAULT, tcp_options=_DEFAULT, **kwargs): """Create a new ConnectionParameters instance. See `Parameters` for default values. 
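        For the callable form of ``heartbeat`` described below, a minimal
        sketch (hypothetical function, not part of pika) that accepts the
        broker's proposal but caps it at 60 seconds:

        ```python
        def negotiate_heartbeat(connection, broker_proposal):
            # Invoked during connection tuning with the connection instance
            # and the broker's proposed timeout (seconds); must return a
            # non-negative integer that overrides the broker's proposal.
            return min(broker_proposal, 60)
        ```

        It would then be passed as ``heartbeat=negotiate_heartbeat``.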
        :param str host: Hostname or IP Address to connect to
        :param int port: TCP port to connect to
        :param str virtual_host: RabbitMQ virtual host to use
        :param pika.credentials.Credentials credentials: auth credentials
        :param int channel_max: Maximum number of channels to allow
        :param int frame_max: The maximum byte size for an AMQP frame
        :param int|None|callable heartbeat: Controls AMQP heartbeat timeout
            negotiation during connection tuning. An integer value always
            overrides the value proposed by broker. Use 0 to deactivate
            heartbeats and None to always accept the broker's proposal. If a
            callable is given, it will be called with the connection instance
            and the heartbeat timeout proposed by broker as its arguments. The
            callback should return a non-negative integer that will be used to
            override the broker's proposal.
        :param `pika.SSLOptions`|None ssl_options: None for plaintext or
            `pika.SSLOptions` instance for SSL/TLS. Defaults to None.
        :param int connection_attempts: Maximum number of retry attempts
        :param int|float retry_delay: Time to wait in seconds, before the next
            connection attempt; see also `connection_attempts`
        :param int|float socket_timeout: Positive socket connect timeout in
            seconds.
        :param int|float stack_timeout: Positive full protocol stack
            (TCP/[SSL]/AMQP) bring-up timeout in seconds. It's recommended to
            set this value higher than `socket_timeout`.
        :param str locale: Set the locale value
        :param int|float|None blocked_connection_timeout: If not None,
            the value is a non-negative timeout, in seconds, for the
            connection to remain blocked (triggered by Connection.Blocked from
            broker); if the timeout expires before connection becomes
            unblocked, the connection will be torn down, triggering the
            adapter-specific mechanism for informing client app about the
            closed connection: passing `ConnectionBlockedTimeout` exception to
            on_close_callback in asynchronous adapters or raising it in
            `BlockingConnection`.
:param client_properties: None or dict of client properties used to override the fields in the default client properties reported to RabbitMQ via `Connection.StartOk` method. :param tcp_options: None or a dict of TCP options to set for socket """ super(ConnectionParameters, self).__init__() if blocked_connection_timeout is not self._DEFAULT: self.blocked_connection_timeout = blocked_connection_timeout if channel_max is not self._DEFAULT: self.channel_max = channel_max if client_properties is not self._DEFAULT: self.client_properties = client_properties if connection_attempts is not self._DEFAULT: self.connection_attempts = connection_attempts if credentials is not self._DEFAULT: self.credentials = credentials if frame_max is not self._DEFAULT: self.frame_max = frame_max if heartbeat is not self._DEFAULT: self.heartbeat = heartbeat if host is not self._DEFAULT: self.host = host if locale is not self._DEFAULT: self.locale = locale if retry_delay is not self._DEFAULT: self.retry_delay = retry_delay if socket_timeout is not self._DEFAULT: self.socket_timeout = socket_timeout if stack_timeout is not self._DEFAULT: self.stack_timeout = stack_timeout if ssl_options is not self._DEFAULT: self.ssl_options = ssl_options # Set port after SSL status is known if port is not self._DEFAULT: self.port = port else: self.port = self.DEFAULT_SSL_PORT if self.ssl_options else self.DEFAULT_PORT if virtual_host is not self._DEFAULT: self.virtual_host = virtual_host if tcp_options is not self._DEFAULT: self.tcp_options = tcp_options if kwargs: raise TypeError('unexpected kwargs: %r' % (kwargs,)) class URLParameters(Parameters): """Connect to RabbitMQ via an AMQP URL in the format:: amqp://username:password@host:port/[?query-string] Ensure that the virtual host is URI encoded when specified. For example if you are using the default "/" virtual host, the value should be `%2f`. See `Parameters` for default values. 
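    Conceptually, such a URL is decomposed with the standard library's URL
    tools after rewriting the scheme (a sketch of the decomposition only, not
    pika's actual code path; the hostname and values below are made up):

    ```python
    from urllib.parse import parse_qs, unquote, urlparse

    url = 'amqp://guest:secret@rabbit.example.com:5672/%2Fprod?heartbeat=30'
    # The amqp(s) scheme is first rewritten to http(s) so urlparse splits
    # the netloc, path and query string correctly.
    parts = urlparse('http' + url[4:])
    virtual_host = unquote(parts.path.split('/')[1])
    query = parse_qs(parts.query)
    ```

    Here ``parts.hostname`` is ``'rabbit.example.com'``, ``virtual_host`` is
    ``'/prod'`` and ``query['heartbeat']`` is ``['30']``.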
Valid query string values are: - channel_max: Override the default maximum channel count value - client_properties: dict of client properties used to override the fields in the default client properties reported to RabbitMQ via `Connection.StartOk` method - connection_attempts: Specify how many times pika should try and reconnect before it gives up - frame_max: Override the default maximum frame size for communication - heartbeat: Desired connection heartbeat timeout for negotiation. If not present the broker's value is accepted. 0 turns heartbeat off. - locale: Override the default `en_US` locale value - ssl_options: None for plaintext; for SSL: dict of public ssl context-related arguments that may be passed to :meth:`ssl.SSLSocket` as kwargs, except `sock`, `server_side`,`do_handshake_on_connect`, `family`, `type`, `proto`, `fileno`. - retry_delay: The number of seconds to sleep before attempting to connect on connection failure. - socket_timeout: Socket connect timeout value in seconds (float or int) - stack_timeout: Positive full protocol stack (TCP/[SSL]/AMQP) bring-up timeout in seconds. It's recommended to set this value higher than `socket_timeout`. - blocked_connection_timeout: Set the timeout, in seconds, that the connection may remain blocked (triggered by Connection.Blocked from broker); if the timeout expires before connection becomes unblocked, the connection will be torn down, triggering the connection's on_close_callback - tcp_options: Set the tcp options for the underlying socket. :param str url: The AMQP URL to connect to """ # Protect against accidental assignment of an invalid attribute __slots__ = ('_all_url_query_values',) # The name of the private function for parsing and setting a given URL query # arg is constructed by catenating the query arg's name to this prefix _SETTER_PREFIX = '_set_url_' def __init__(self, url): """Create a new URLParameters instance. 
:param str url: The URL value """ super(URLParameters, self).__init__() self._all_url_query_values = None # Handle the Protocol scheme # # Fix up scheme amqp(s) to http(s) so urlparse won't barf on python # prior to 2.7. On Python 2.6.9, # `urlparse('amqp://127.0.0.1/%2f?socket_timeout=1')` produces an # incorrect path='/%2f?socket_timeout=1' if url[0:4].lower() == 'amqp': url = 'http' + url[4:] parts = pika.compat.urlparse(url) if parts.scheme == 'https': # Create default context which will get overridden by the # ssl_options URL arg, if any self.ssl_options = pika.SSLOptions( context=ssl.create_default_context()) elif parts.scheme == 'http': self.ssl_options = None elif parts.scheme: raise ValueError('Unexpected URL scheme %r; supported scheme ' 'values: amqp, amqps' % (parts.scheme,)) if parts.hostname is not None: self.host = parts.hostname # Take care of port after SSL status is known if parts.port is not None: self.port = parts.port else: self.port = (self.DEFAULT_SSL_PORT if self.ssl_options else self.DEFAULT_PORT) if parts.username is not None: self.credentials = pika.credentials.PlainCredentials( url_unquote(parts.username), url_unquote(parts.password)) # Get the Virtual Host if len(parts.path) > 1: self.virtual_host = url_unquote(parts.path.split('/')[1]) # Handle query string values, validating and assigning them self._all_url_query_values = pika.compat.url_parse_qs(parts.query) for name, value in dict_iteritems(self._all_url_query_values): try: set_value = getattr(self, self._SETTER_PREFIX + name) except AttributeError: raise ValueError('Unknown URL parameter: %r' % (name,)) try: (value,) = value except ValueError: raise ValueError( 'Expected exactly one value for URL parameter ' '%s, but got %i values: %s' % (name, len(value), value)) set_value(value) def _set_url_blocked_connection_timeout(self, value): """Deserialize and apply the corresponding query string arg""" try: blocked_connection_timeout = float(value) except ValueError as exc: raise 
ValueError( 'Invalid blocked_connection_timeout value %r: %r' % ( value, exc, )) self.blocked_connection_timeout = blocked_connection_timeout def _set_url_channel_max(self, value): """Deserialize and apply the corresponding query string arg""" try: channel_max = int(value) except ValueError as exc: raise ValueError('Invalid channel_max value %r: %r' % ( value, exc, )) self.channel_max = channel_max def _set_url_client_properties(self, value): """Deserialize and apply the corresponding query string arg""" self.client_properties = ast.literal_eval(value) def _set_url_connection_attempts(self, value): """Deserialize and apply the corresponding query string arg""" try: connection_attempts = int(value) except ValueError as exc: raise ValueError('Invalid connection_attempts value %r: %r' % ( value, exc, )) self.connection_attempts = connection_attempts def _set_url_frame_max(self, value): """Deserialize and apply the corresponding query string arg""" try: frame_max = int(value) except ValueError as exc: raise ValueError('Invalid frame_max value %r: %r' % ( value, exc, )) self.frame_max = frame_max def _set_url_heartbeat(self, value): """Deserialize and apply the corresponding query string arg""" try: heartbeat_timeout = int(value) except ValueError as exc: raise ValueError('Invalid heartbeat value %r: %r' % ( value, exc, )) self.heartbeat = heartbeat_timeout def _set_url_locale(self, value): """Deserialize and apply the corresponding query string arg""" self.locale = value def _set_url_retry_delay(self, value): """Deserialize and apply the corresponding query string arg""" try: retry_delay = float(value) except ValueError as exc: raise ValueError('Invalid retry_delay value %r: %r' % ( value, exc, )) self.retry_delay = retry_delay def _set_url_socket_timeout(self, value): """Deserialize and apply the corresponding query string arg""" try: socket_timeout = float(value) except ValueError as exc: raise ValueError('Invalid socket_timeout value %r: %r' % ( value, exc, )) 
self.socket_timeout = socket_timeout def _set_url_stack_timeout(self, value): """Deserialize and apply the corresponding query string arg""" try: stack_timeout = float(value) except ValueError as exc: raise ValueError('Invalid stack_timeout value %r: %r' % ( value, exc, )) self.stack_timeout = stack_timeout def _set_url_ssl_options(self, value): """Deserialize and apply the corresponding query string arg """ opts = ast.literal_eval(value) if opts is None: if self.ssl_options is not None: raise ValueError( 'Specified ssl_options=None URL arg is inconsistent with ' 'the specified https URL scheme.') else: # Older versions of Pika would take the opts dict and pass it # directly as kwargs to the deprecated ssl.wrap_socket method. # Here, we take the valid options and translate them into args # for various SSLContext methods. # # https://docs.python.org/3/library/ssl.html#ssl.wrap_socket # # SSLContext.load_verify_locations(cafile=None, capath=None, cadata=None) try: opt_protocol = ssl.PROTOCOL_TLS except AttributeError: opt_protocol = ssl.PROTOCOL_TLSv1 if 'protocol' in opts: opt_protocol = opts['protocol'] cxt = ssl.SSLContext(protocol=opt_protocol) opt_cafile = opts.get('ca_certs') or opts.get('cafile') opt_capath = opts.get('ca_path') or opts.get('capath') opt_cadata = opts.get('ca_data') or opts.get('cadata') cxt.load_verify_locations(opt_cafile, opt_capath, opt_cadata) # SSLContext.load_cert_chain(certfile, keyfile=None, password=None) if 'certfile' in opts: opt_certfile = opts['certfile'] opt_keyfile = opts.get('keyfile') opt_password = opts.get('password') cxt.load_cert_chain(opt_certfile, opt_keyfile, opt_password) if 'ciphers' in opts: opt_ciphers = opts['ciphers'] cxt.set_ciphers(opt_ciphers) server_hostname = opts.get('server_hostname') self.ssl_options = pika.SSLOptions( context=cxt, server_hostname=server_hostname) def _set_url_tcp_options(self, value): """Deserialize and apply the corresponding query string arg""" self.tcp_options = 
ast.literal_eval(value)


class SSLOptions(object):
    """Class used to provide parameters for optional fine-grained control of
    SSL socket wrapping.

    """

    # Protect against accidental assignment of an invalid attribute
    __slots__ = ('context', 'server_hostname')

    def __init__(self, context, server_hostname=None):
        """
        :param ssl.SSLContext context: SSLContext instance
        :param str|None server_hostname: hostname to pass to
            SSLContext.wrap_socket, used to enable SNI
        """
        if not isinstance(context, ssl.SSLContext):
            raise TypeError(
                'context must be of ssl.SSLContext type, but got {!r}'.format(
                    context))

        self.context = context
        self.server_hostname = server_hostname


class Connection(pika.compat.AbstractBase):
    """This is the core class that implements communication with RabbitMQ.
    This class should not be invoked directly but rather through the use of an
    adapter such as SelectConnection or BlockingConnection.

    """

    # Disable pylint messages concerning "method could be a function"
    # pylint: disable=R0201

    ON_CONNECTION_CLOSED = '_on_connection_closed'
    ON_CONNECTION_ERROR = '_on_connection_error'
    ON_CONNECTION_OPEN_OK = '_on_connection_open_ok'

    CONNECTION_CLOSED = 0
    CONNECTION_INIT = 1
    CONNECTION_PROTOCOL = 2
    CONNECTION_START = 3
    CONNECTION_TUNE = 4
    CONNECTION_OPEN = 5
    CONNECTION_CLOSING = 6  # client-initiated close in progress

    _STATE_NAMES = {
        CONNECTION_CLOSED: 'CLOSED',
        CONNECTION_INIT: 'INIT',
        CONNECTION_PROTOCOL: 'PROTOCOL',
        CONNECTION_START: 'START',
        CONNECTION_TUNE: 'TUNE',
        CONNECTION_OPEN: 'OPEN',
        CONNECTION_CLOSING: 'CLOSING'
    }

    def __init__(self,
                 parameters=None,
                 on_open_callback=None,
                 on_open_error_callback=None,
                 on_close_callback=None,
                 internal_connection_workflow=True):
        """Connection initialization expects an object that has implemented
        the Parameters class and a callback function to notify when we have
        successfully connected to the AMQP Broker.

        Available Parameters classes are the ConnectionParameters class and
        URLParameters class.
:param pika.connection.Parameters parameters: Read-only connection parameters. :param callable on_open_callback: Called when the connection is opened: on_open_callback(connection) :param None | method on_open_error_callback: Called if the connection can't be established or connection establishment is interrupted by `Connection.close()`: on_open_error_callback(Connection, exception). :param None | method on_close_callback: Called when a previously fully open connection is closed: `on_close_callback(Connection, exception)`, where `exception` is either an instance of `exceptions.ConnectionClosed` if closed by user or broker or exception of another type that describes the cause of connection failure. :param bool internal_connection_workflow: True for autonomous connection establishment which is default; False for externally-managed connection workflow via the `create_connection()` factory. """ self.connection_state = self.CONNECTION_CLOSED # Determines whether we invoke the on_open_error_callback or # on_close_callback. So that we don't lose track when state transitions # to CONNECTION_CLOSING as the result of Connection.close() call during # opening. 
self._opened = False # Value to pass to on_open_error_callback or on_close_callback when # connection fails to be established or becomes closed self._error = None # type: Exception # Used to hold timer if configured for Connection.Blocked timeout self._blocked_conn_timer = None self._heartbeat_checker = None # Set our configuration options if parameters is not None: # NOTE: Work around inability to copy ssl.SSLContext contained in # our SSLOptions; ssl.SSLContext fails to implement __getnewargs__ saved_ssl_options = parameters.ssl_options parameters.ssl_options = None try: self.params = copy.deepcopy(parameters) self.params.ssl_options = saved_ssl_options finally: parameters.ssl_options = saved_ssl_options else: self.params = ConnectionParameters() self._internal_connection_workflow = internal_connection_workflow # Define our callback dictionary self.callbacks = pika.callback.CallbackManager() # Attributes that will be properly initialized by _init_connection_state # and/or during connection handshake. self.server_capabilities = None self.server_properties = None self._body_max_length = None self.known_hosts = None self._frame_buffer = None self._channels = None self._init_connection_state() # Add the on connection error callback self.callbacks.add( 0, self.ON_CONNECTION_ERROR, on_open_error_callback or self._default_on_connection_error, False) # On connection callback if on_open_callback: self.add_on_open_callback(on_open_callback) # On connection callback if on_close_callback: self.add_on_close_callback(on_close_callback) self._set_connection_state(self.CONNECTION_INIT) if self._internal_connection_workflow: # Kick off full-stack connection establishment. It will complete # asynchronously. 
self._adapter_connect_stream() else: # Externally-managed connection workflow will proceed asynchronously # using adapter-specific mechanism LOGGER.debug('Using external connection workflow.') def _init_connection_state(self): """Initialize or reset all of the internal state variables for a given connection. On disconnect or reconnect all of the state needs to be wiped. """ # TODO: probably don't need the state recovery logic since we don't # test re-connection sufficiently (if at all), and users should # just create a new instance of Connection when needed. # So, just merge the pertinent logic into the constructor. # Connection state self._set_connection_state(self.CONNECTION_CLOSED) # Negotiated server properties self.server_properties = None # Inbound buffer for decoding frames self._frame_buffer = bytes() # Dict of open channels self._channels = dict() # Data used for Heartbeat checking and back-pressure detection self.bytes_sent = 0 self.bytes_received = 0 self.frames_sent = 0 self.frames_received = 0 self._heartbeat_checker = None # When closing, holds reason why self._error = None # Our starting point once connected, first frame received self._add_connection_start_callback() # Add a callback handler for the Broker telling us to disconnect. # NOTE: As of RabbitMQ 3.6.0, RabbitMQ broker may send Connection.Close # to signal error during connection setup (and wait a longish time # before closing the TCP/IP stream). Earlier RabbitMQ versions # simply closed the TCP/IP stream. 
self.callbacks.add(0, spec.Connection.Close, self._on_connection_close_from_broker) if self.params.blocked_connection_timeout is not None: if self._blocked_conn_timer is not None: # Blocked connection timer was active when teardown was # initiated self._adapter_remove_timeout(self._blocked_conn_timer) self._blocked_conn_timer = None self.add_on_connection_blocked_callback(self._on_connection_blocked) self.add_on_connection_unblocked_callback( self._on_connection_unblocked) def add_on_close_callback(self, callback): """Add a callback notification when the connection has closed. The callback will be passed the connection and an exception instance. The exception will either be an instance of `exceptions.ConnectionClosed` if a fully-open connection was closed by user or broker or exception of another type that describes the cause of connection closure/failure. :param callable callback: Callback to call on close, having the signature: callback(pika.connection.Connection, exception) """ validators.require_callback(callback) self.callbacks.add(0, self.ON_CONNECTION_CLOSED, callback, False) def add_on_connection_blocked_callback(self, callback): """RabbitMQ AMQP extension - Add a callback to be notified when the connection gets blocked (`Connection.Blocked` received from RabbitMQ) due to the broker running low on resources (memory or disk). In this state RabbitMQ suspends processing incoming data until the connection is unblocked, so it's a good idea for publishers receiving this notification to suspend publishing until the connection becomes unblocked. See also `Connection.add_on_connection_unblocked_callback()` See also `ConnectionParameters.blocked_connection_timeout`. 
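        The callback is bound to the connection via ``functools.partial`` (see
        the method body below); a toy sketch of that binding with sentinel
        strings standing in for the real connection and method frame:

        ```python
        import functools

        received = []

        def on_blocked(connection, method_frame):
            # A real handler would suspend publishing here until the
            # corresponding Connection.Unblocked notification arrives.
            received.append((connection, method_frame))

        # The callback manager stores the partial and later invokes it
        # with only the incoming frame.
        bound = functools.partial(on_blocked, 'connection-sentinel')
        bound('blocked-frame-sentinel')
        ```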
:param callable callback: Callback to call on `Connection.Blocked`, having the signature `callback(connection, pika.frame.Method)`, where the method frame's `method` member is of type `pika.spec.Connection.Blocked` """ validators.require_callback(callback) self.callbacks.add( 0, spec.Connection.Blocked, functools.partial(callback, self), one_shot=False) def add_on_connection_unblocked_callback(self, callback): """RabbitMQ AMQP extension - Add a callback to be notified when the connection gets unblocked (`Connection.Unblocked` frame is received from RabbitMQ) letting publishers know it's ok to start publishing again. :param callable callback: Callback to call on `Connection.Unblocked`, having the signature `callback(connection, pika.frame.Method)`, where the method frame's `method` member is of type `pika.spec.Connection.Unblocked` """ validators.require_callback(callback) self.callbacks.add( 0, spec.Connection.Unblocked, functools.partial(callback, self), one_shot=False) def add_on_open_callback(self, callback): """Add a callback notification when the connection has opened. The callback will be passed the connection instance as its only arg. :param callable callback: Callback to call when open """ validators.require_callback(callback) self.callbacks.add(0, self.ON_CONNECTION_OPEN_OK, callback, False) def add_on_open_error_callback(self, callback, remove_default=True): """Add a callback notification when the connection can not be opened. The callback method should accept the connection instance that could not connect, and either a string or an exception as its second arg. 
:param callable callback: Callback to call when can't connect, having the signature _(Connection, Exception) :param bool remove_default: Remove default exception raising callback """ validators.require_callback(callback) if remove_default: self.callbacks.remove(0, self.ON_CONNECTION_ERROR, self._default_on_connection_error) self.callbacks.add(0, self.ON_CONNECTION_ERROR, callback, False) def channel(self, channel_number=None, on_open_callback=None): """Create a new channel with the next available channel number or pass in a channel number to use. Must be non-zero if you would like to specify but it is recommended that you let Pika manage the channel numbers. :param int channel_number: The channel number to use, defaults to the next available. :param callable on_open_callback: The callback when the channel is opened. The callback will be invoked with the `Channel` instance as its only argument. :rtype: pika.channel.Channel """ if not self.is_open: raise exceptions.ConnectionWrongStateError( 'Channel allocation requires an open connection: %s' % self) validators.rpc_completion_callback(on_open_callback) if not channel_number: channel_number = self._next_channel_number() self._channels[channel_number] = self._create_channel( channel_number, on_open_callback) self._add_channel_callbacks(channel_number) self._channels[channel_number].open() return self._channels[channel_number] def close(self, reply_code=200, reply_text='Normal shutdown'): """Disconnect from RabbitMQ. If there are any open channels, it will attempt to close them prior to fully disconnecting. Channels which have active consumers will attempt to send a Basic.Cancel to RabbitMQ to cleanly stop the delivery of messages prior to closing the channel. :param int reply_code: The code number for the close :param str reply_text: The text reason for the close :raises pika.exceptions.ConnectionWrongStateError: if connection is closed or closing. 
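        A minimal sketch of the guard that makes a second ``close()`` call
        illegal (toy constants mirroring the state table above, not the real
        implementation, which raises ``ConnectionWrongStateError``):

        ```python
        # Toy state constants matching Connection's numeric states.
        CONNECTION_CLOSED, CONNECTION_OPEN, CONNECTION_CLOSING = 0, 5, 6

        def guard_close(state):
            # close() is only legal while opening/open; calling it again
            # while closing or closed raises instead of silently passing.
            if state in (CONNECTION_CLOSING, CONNECTION_CLOSED):
                raise RuntimeError('illegal close() while state=%r' % (state,))
            return CONNECTION_CLOSING
        ```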
""" if self.is_closing or self.is_closed: msg = ('Illegal close({}, {!r}) request on {} because it ' 'was called while connection state={}.'.format( reply_code, reply_text, self, self._STATE_NAMES[self.connection_state])) LOGGER.error(msg) raise exceptions.ConnectionWrongStateError(msg) # NOTE The connection is either in opening or open state # Initiate graceful closing of channels that are OPEN or OPENING if self._channels: self._close_channels(reply_code, reply_text) prev_state = self.connection_state # Transition to closing self._set_connection_state(self.CONNECTION_CLOSING) LOGGER.info("Closing connection (%s): %r", reply_code, reply_text) if not self._opened: # It was opening, but not fully open yet, so we won't attempt # graceful AMQP Connection.Close. LOGGER.info('Connection.close() is terminating stream and ' 'bypassing graceful AMQP close, since AMQP is still ' 'opening.') error = exceptions.ConnectionOpenAborted( 'Connection.close() called before connection ' 'finished opening: prev_state={} ({}): {!r}'.format( self._STATE_NAMES[prev_state], reply_code, reply_text)) self._terminate_stream(error) else: self._error = exceptions.ConnectionClosedByClient( reply_code, reply_text) # If there are channels that haven't finished closing yet, then # _on_close_ready will finally be called from _on_channel_cleanup once # all channels have been closed if not self._channels: # We can initiate graceful closing of the connection right away, # since no more channels remain self._on_close_ready() else: LOGGER.info( 'Connection.close is waiting for %d channels to close: %s', len(self._channels), self) # # Connection state properties # @property def is_closed(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_CLOSED @property def is_closing(self): """ Returns True if connection is in the process of closing due to client-initiated `close` request, but closing is not yet complete. 
""" return self.connection_state == self.CONNECTION_CLOSING @property def is_open(self): """ Returns a boolean reporting the current connection state. """ return self.connection_state == self.CONNECTION_OPEN # # Properties that reflect server capabilities for the current connection # @property def basic_nack(self): """Specifies if the server supports basic.nack on the active connection. :rtype: bool """ return self.server_capabilities.get('basic.nack', False) @property def consumer_cancel_notify(self): """Specifies if the server supports consumer cancel notification on the active connection. :rtype: bool """ return self.server_capabilities.get('consumer_cancel_notify', False) @property def exchange_exchange_bindings(self): """Specifies if the active connection supports exchange to exchange bindings. :rtype: bool """ return self.server_capabilities.get('exchange_exchange_bindings', False) @property def publisher_confirms(self): """Specifies if the active connection can use publisher confirmations. :rtype: bool """ return self.server_capabilities.get('publisher_confirms', False) @abc.abstractmethod def _adapter_call_later(self, delay, callback): """Adapters should override to call the callback after the specified number of seconds have elapsed, using a timer, or a thread, or similar. :param float|int delay: The number of seconds to wait to call callback :param callable callback: The callback will be called without args. :returns: Handle that can be passed to `_adapter_remove_timeout()` to cancel the callback. :rtype: object """ raise NotImplementedError @abc.abstractmethod def _adapter_remove_timeout(self, timeout_id): """Adapters should override: Remove a timeout :param opaque timeout_id: The timeout handle to remove """ raise NotImplementedError @abc.abstractmethod def _adapter_add_callback_threadsafe(self, callback): """Requests a call to the given function as soon as possible in the context of this connection's IOLoop thread. 
NOTE: This is the only thread-safe method offered by the connection. All other manipulations of the connection must be performed from the connection's thread. :param callable callback: The callback method; must be callable. """ raise NotImplementedError # # Internal methods for managing the communication process # @abc.abstractmethod def _adapter_connect_stream(self): """Subclasses should override to initiate stream connection workflow asynchronously. Upon failed or aborted completion, they must invoke `Connection._on_stream_terminated()`. NOTE: On success, the stack will be up already, so there is no corresponding callback. """ raise NotImplementedError @abc.abstractmethod def _adapter_disconnect_stream(self): """Asynchronously bring down the streaming transport layer and invoke `Connection._on_stream_terminated()` asynchronously when complete. :raises: NotImplementedError """ raise NotImplementedError @abc.abstractmethod def _adapter_emit_data(self, data): """Take ownership of data and send it to AMQP server as soon as possible. Subclasses must override this :param bytes data: """ raise NotImplementedError def _add_channel_callbacks(self, channel_number): """Add the appropriate callbacks for the specified channel number. :param int channel_number: The channel number for the callbacks """ # pylint: disable=W0212 # This permits us to garbage-collect our reference to the channel # regardless of whether it was closed by client or broker, and do so # after all channel-close callbacks. self._channels[channel_number]._add_on_cleanup_callback( self._on_channel_cleanup) def _add_connection_start_callback(self): """Add a callback for when a Connection.Start frame is received from the broker. 
""" self.callbacks.add(0, spec.Connection.Start, self._on_connection_start) def _add_connection_tune_callback(self): """Add a callback for when a Connection.Tune frame is received.""" self.callbacks.add(0, spec.Connection.Tune, self._on_connection_tune) def _check_for_protocol_mismatch(self, value): """Invoked when starting a connection to make sure it's a supported protocol. :param pika.frame.Method value: The frame to check :raises: ProtocolVersionMismatch """ if ((value.method.version_major, value.method.version_minor) != spec.PROTOCOL_VERSION[0:2]): raise exceptions.ProtocolVersionMismatch(frame.ProtocolHeader(), value) @property def _client_properties(self): """Return the client properties dictionary. :rtype: dict """ properties = { 'product': PRODUCT, 'platform': 'Python %s' % platform.python_version(), 'capabilities': { 'authentication_failure_close': True, 'basic.nack': True, 'connection.blocked': True, 'consumer_cancel_notify': True, 'publisher_confirms': True }, 'information': 'See http://pika.rtfd.org', 'version': pika.__version__ } if self.params.client_properties: properties.update(self.params.client_properties) return properties def _close_channels(self, reply_code, reply_text): """Initiate graceful closing of channels that are in OPEN or OPENING states, passing reply_code and reply_text. :param int reply_code: The code for why the channels are being closed :param str reply_text: The text reason for why the channels are closing """ assert self.is_open, str(self) for channel_number in dictkeys(self._channels): chan = self._channels[channel_number] if not (chan.is_closing or chan.is_closed): chan.close(reply_code, reply_text) def _create_channel(self, channel_number, on_open_callback): """Create a new channel using the specified channel number and calling back the method specified by on_open_callback :param int channel_number: The channel number to use :param callable on_open_callback: The callback when the channel is opened. 
The callback will be invoked with the `Channel` instance as its only argument. """ LOGGER.debug('Creating channel %s', channel_number) return pika.channel.Channel(self, channel_number, on_open_callback) def _create_heartbeat_checker(self): """Create a heartbeat checker instance if there is a heartbeat interval set. :rtype: pika.heartbeat.Heartbeat|None """ if self.params.heartbeat is not None and self.params.heartbeat > 0: LOGGER.debug('Creating a HeartbeatChecker: %r', self.params.heartbeat) return pika.heartbeat.HeartbeatChecker(self, self.params.heartbeat) return None def _remove_heartbeat(self): """Stop the heartbeat checker if it exists """ if self._heartbeat_checker: self._heartbeat_checker.stop() self._heartbeat_checker = None def _deliver_frame_to_channel(self, value): """Deliver the frame to the channel specified in the frame. :param pika.frame.Method value: The frame to deliver """ if not value.channel_number in self._channels: # This should never happen and would constitute breach of the # protocol LOGGER.critical( 'Received %s frame for unregistered channel %i on %s', value.NAME, value.channel_number, self) return # pylint: disable=W0212 self._channels[value.channel_number]._handle_content_frame(value) def _ensure_closed(self): """If the connection is not closed, close it.""" if self.is_open: self.close() def _get_body_frame_max_length(self): """Calculate the maximum amount of bytes that can be in a body frame. :rtype: int """ return (self.params.frame_max - spec.FRAME_HEADER_SIZE - spec.FRAME_END_SIZE) def _get_credentials(self, method_frame): """Get credentials for authentication. 
:param pika.frame.MethodFrame method_frame: The Connection.Start frame :rtype: tuple(str, str) """ (auth_type, response) = self.params.credentials.response_for(method_frame.method) if not auth_type: raise exceptions.AuthenticationError(self.params.credentials.TYPE) self.params.credentials.erase_credentials() return auth_type, response def _has_pending_callbacks(self, value): """Return true if there are any callbacks pending for the specified frame. :param pika.frame.Method value: The frame to check :rtype: bool """ return self.callbacks.pending(value.channel_number, value.method) def _is_method_frame(self, value): """Returns true if the frame is a method frame. :param pika.frame.Frame value: The frame to evaluate :rtype: bool """ return isinstance(value, frame.Method) def _is_protocol_header_frame(self, value): """Returns True if it's a protocol header frame. :rtype: bool """ return isinstance(value, frame.ProtocolHeader) def _next_channel_number(self): """Return the next available channel number or raise an exception. :rtype: int """ limit = self.params.channel_max or pika.channel.MAX_CHANNELS if len(self._channels) >= limit: raise exceptions.NoFreeChannels() for num in xrange(1, len(self._channels) + 1): if num not in self._channels: return num return len(self._channels) + 1 def _on_channel_cleanup(self, channel): """Remove the channel from the dict of channels when Channel.CloseOk is sent. If connection is closing and no more channels remain, proceed to `_on_close_ready`. :param pika.channel.Channel channel: channel instance """ try: del self._channels[channel.channel_number] LOGGER.debug('Removed channel %s', channel.channel_number) except KeyError: LOGGER.error('Channel %r not in channels', channel.channel_number) if self.is_closing: if not self._channels: # Initiate graceful closing of the connection self._on_close_ready() else: # Once Connection enters CLOSING state, all remaining channels # should also be in CLOSING state. 
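The `_next_channel_number` search above can be exercised in isolation. This standalone copy follows the same rules (respect the negotiated `channel_max`, reuse the lowest gap left by a closed channel, otherwise extend past the end); the function and exception names are local to this sketch:

```python
MAX_CHANNELS = 65535  # AMQP 0-9-1 channel numbers run 1..65535

def next_channel_number(channels, channel_max=0):
    """Return the next available channel number, mirroring the logic above."""
    limit = channel_max or MAX_CHANNELS
    if len(channels) >= limit:
        raise RuntimeError('No free channels')
    # Reuse the lowest unused number first, then extend past the end.
    for num in range(1, len(channels) + 1):
        if num not in channels:
            return num
    return len(channels) + 1

print(next_channel_number({1: 'ch', 3: 'ch'}))  # 2 -- reuses the gap
print(next_channel_number({1: 'ch', 2: 'ch'}))  # 3 -- extends
```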
Deviation from this would # prevent Connection from completing its closing procedure. channels_not_in_closing_state = [ chan for chan in dict_itervalues(self._channels) if not chan.is_closing ] if channels_not_in_closing_state: LOGGER.critical( 'Connection in CLOSING state has non-CLOSING ' 'channels: %r', channels_not_in_closing_state) def _on_close_ready(self): """Called when the Connection is in a state that it can close after a close has been requested by client. This happens after all of the channels are closed that were open when the close request was made. """ if self.is_closed: LOGGER.warning('_on_close_ready invoked when already closed') return # NOTE: Assuming self._error is instance of exceptions.ConnectionClosed self._send_connection_close(self._error.reply_code, self._error.reply_text) def _on_stream_connected(self): """Invoked when the socket is connected and it's time to start speaking AMQP with the broker. """ self._set_connection_state(self.CONNECTION_PROTOCOL) # Start the communication with the RabbitMQ Broker self._send_frame(frame.ProtocolHeader()) def _on_blocked_connection_timeout(self): """ Called when the "connection blocked timeout" expires. 
When this happens, we tear down the connection """ self._blocked_conn_timer = None self._terminate_stream( exceptions.ConnectionBlockedTimeout( 'Blocked connection timeout expired.')) def _on_connection_blocked(self, _connection, method_frame): """Handle Connection.Blocked notification from RabbitMQ broker :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ LOGGER.warning('Received %s from broker', method_frame) if self._blocked_conn_timer is not None: # RabbitMQ is not supposed to repeat Connection.Blocked, but it # doesn't hurt to be careful LOGGER.warning( '_blocked_conn_timer %s already set when ' '_on_connection_blocked is called', self._blocked_conn_timer) else: self._blocked_conn_timer = self._adapter_call_later( self.params.blocked_connection_timeout, self._on_blocked_connection_timeout) def _on_connection_unblocked(self, _connection, method_frame): """Handle Connection.Unblocked notification from RabbitMQ broker :param pika.frame.Method method_frame: method frame having `method` member of type `pika.spec.Connection.Blocked` """ LOGGER.info('Received %s from broker', method_frame) if self._blocked_conn_timer is None: # RabbitMQ is supposed to pair Connection.Blocked/Unblocked, but it # doesn't hurt to be careful LOGGER.warning('_blocked_conn_timer was not active when ' '_on_connection_unblocked called') else: self._adapter_remove_timeout(self._blocked_conn_timer) self._blocked_conn_timer = None def _on_connection_close_from_broker(self, method_frame): """Called when the connection is closed remotely via Connection.Close frame from broker. 
:param pika.frame.Method method_frame: The Connection.Close frame """ LOGGER.debug('_on_connection_close_from_broker: frame=%s', method_frame) self._terminate_stream( exceptions.ConnectionClosedByBroker(method_frame.method.reply_code, method_frame.method.reply_text)) def _on_connection_close_ok(self, method_frame): """Called when Connection.CloseOk is received from remote. :param pika.frame.Method method_frame: The Connection.CloseOk frame """ LOGGER.debug('_on_connection_close_ok: frame=%s', method_frame) self._terminate_stream(None) def _default_on_connection_error(self, _connection_unused, error): """Default behavior when the connecting connection cannot connect and user didn't supply own `on_connection_error` callback. :raises: the given error """ raise error def _on_connection_open_ok(self, method_frame): """ This is called once we have tuned the connection with the server and called the Connection.Open on the server and it has replied with Connection.Ok. """ self._opened = True self.known_hosts = method_frame.method.known_hosts # We're now connected at the AMQP level self._set_connection_state(self.CONNECTION_OPEN) # Call our initial callback that we're open self.callbacks.process(0, self.ON_CONNECTION_OPEN_OK, self, self) def _on_connection_start(self, method_frame): """This is called as a callback once we have received a Connection.Start from the server. 
:param pika.frame.Method method_frame: The frame received :raises: UnexpectedFrameError """ self._set_connection_state(self.CONNECTION_START) try: if self._is_protocol_header_frame(method_frame): raise exceptions.UnexpectedFrameError(method_frame) self._check_for_protocol_mismatch(method_frame) self._set_server_information(method_frame) self._add_connection_tune_callback() self._send_connection_start_ok(*self._get_credentials(method_frame)) except Exception as error: # pylint: disable=W0703 LOGGER.exception('Error processing Connection.Start.') self._terminate_stream(error) @staticmethod def _negotiate_integer_value(client_value, server_value): """Negotiates two values. If either of them is 0 or None, returns the other one. If both are positive integers, returns the smallest one. :param int client_value: The client value :param int server_value: The server value :rtype: int """ if client_value is None: client_value = 0 if server_value is None: server_value = 0 # this is consistent with how Java client and Bunny # perform negotiation, see pika/pika#874 if client_value == 0 or server_value == 0: val = max(client_value, server_value) else: val = min(client_value, server_value) return val @staticmethod def _tune_heartbeat_timeout(client_value, server_value): """ Determine heartbeat timeout per AMQP 0-9-1 rules Per https://www.rabbitmq.com/resources/specs/amqp0-9-1.pdf, > Both peers negotiate the limits to the lowest agreed value as follows: > - The server MUST tell the client what limits it proposes. > - The client responds and **MAY reduce those limits** for its connection If the client specifies a value, it always takes precedence. :param client_value: None to accept server_value; otherwise, an integral number in seconds; 0 (zero) to disable heartbeat. :param server_value: integral value of the heartbeat timeout proposed by broker; 0 (zero) to disable heartbeat. 
:returns: the value of the heartbeat timeout to use and return to broker :rtype: int """ if client_value is None: # Accept server's limit timeout = server_value else: timeout = client_value return timeout def _on_connection_tune(self, method_frame): """Once the Broker sends back a Connection.Tune, we will set our tuning variables that have been returned to us and kick off the Heartbeat monitor if required, send our TuneOk and then the Connection.Open RPC call on channel 0. :param pika.frame.Method method_frame: The frame received """ self._set_connection_state(self.CONNECTION_TUNE) # Get our max channels, frames and heartbeat interval self.params.channel_max = Connection._negotiate_integer_value( self.params.channel_max, method_frame.method.channel_max) self.params.frame_max = Connection._negotiate_integer_value( self.params.frame_max, method_frame.method.frame_max) if callable(self.params.heartbeat): ret_heartbeat = self.params.heartbeat(self, method_frame.method.heartbeat) if ret_heartbeat is None or callable(ret_heartbeat): # Enforce callback-specific restrictions on callback's return value raise TypeError('heartbeat callback must not return None ' 'or callable, but got %r' % (ret_heartbeat,)) # Leave it to the heartbeat setter to deal with the rest of the validation self.params.heartbeat = ret_heartbeat # Negotiate heartbeat timeout self.params.heartbeat = self._tune_heartbeat_timeout( client_value=self.params.heartbeat, server_value=method_frame.method.heartbeat) # Calculate the maximum pieces for body frames self._body_max_length = self._get_body_frame_max_length() # Create a new heartbeat checker if needed self._heartbeat_checker = self._create_heartbeat_checker() # Send the TuneOk response with what we've agreed upon self._send_connection_tune_ok() # Send the Connection.Open RPC call for the vhost self._send_connection_open() def _on_data_available(self, data_in): """This is called by our Adapter, passing in the data from the socket.
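Both negotiation helpers above are pure functions, so their rules are easy to check standalone. The copies below mirror them under local names: for `channel_max`/`frame_max`, 0 or None means "no limit" and the other side's value is taken, otherwise the smaller limit wins; for heartbeats, an explicit client value (including 0 to disable) always beats the broker's proposal:

```python
def negotiate_integer_value(client_value, server_value):
    # 0 or None means "no limit": take the other side's value (max);
    # if both sides set a limit, the smaller one wins (min).
    client_value = client_value or 0
    server_value = server_value or 0
    if client_value == 0 or server_value == 0:
        return max(client_value, server_value)
    return min(client_value, server_value)

def tune_heartbeat_timeout(client_value, server_value):
    # None accepts the broker's proposal; any explicit client value,
    # including 0 to disable heartbeats, takes precedence.
    return server_value if client_value is None else client_value

print(negotiate_integer_value(0, 131072))     # 131072 -- server limit applies
print(negotiate_integer_value(8192, 131072))  # 8192   -- smaller limit wins
print(tune_heartbeat_timeout(None, 60))       # 60     -- accept broker value
print(tune_heartbeat_timeout(0, 60))          # 0      -- client disables
```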
As long as we have buffer try and map out frame data. :param str data_in: The data that is available to read """ self._frame_buffer += data_in while self._frame_buffer: consumed_count, frame_value = self._read_frame() if not frame_value: return self._trim_frame_buffer(consumed_count) self._process_frame(frame_value) def _terminate_stream(self, error): """Deactivate heartbeat instance if activated already, and initiate termination of the stream (TCP) connection asynchronously. When connection terminates, the appropriate user callback will be invoked with the given error: "on open error" or "on connection closed". :param Exception | None error: exception instance describing the reason for termination; None for normal closing, such as upon receipt of Connection.CloseOk. """ assert isinstance(error, (type(None), Exception)), \ 'error arg is neither None nor instance of Exception: {!r}.'.format( error) if error is not None: # Save the exception for user callback once the stream closes self._error = error else: assert self._error is not None, ( '_terminate_stream() expected self._error to be set when ' 'passed None error arg.') # So it won't mess with the stack self._remove_heartbeat() # Begin disconnection of stream or termination of connection workflow self._adapter_disconnect_stream() def _on_stream_terminated(self, error): """Handle termination of stack (including TCP layer) or failure to establish the stack. Notify registered ON_CONNECTION_ERROR or ON_CONNECTION_CLOSED callbacks, depending on whether the connection was opening or open. :param Exception | None error: None means that the transport was aborted internally and exception in `self._error` represents the cause. Otherwise it's an exception object that describes the unexpected loss of connection. 
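`_on_data_available` above is a standard buffer-and-consume loop: append the new bytes, try to decode one complete frame, trim exactly the consumed count, and repeat until only a partial frame remains. A toy version over simple length-prefixed messages (not real AMQP framing; all names here are local to the sketch):

```python
import struct

def read_frame(buffer):
    """Toy decoder: 4-byte big-endian length prefix, then the payload.
    Returns (consumed_count, payload), or (0, None) if incomplete."""
    if len(buffer) < 4:
        return 0, None
    (length,) = struct.unpack_from('>I', buffer, 0)
    if len(buffer) < 4 + length:
        return 0, None  # wait for more bytes from the socket
    return 4 + length, bytes(buffer[4:4 + length])

def on_data_available(frame_buffer, data_in):
    """Consume as many complete frames as possible; return leftover + frames."""
    frame_buffer += data_in
    frames = []
    while frame_buffer:
        consumed, payload = read_frame(frame_buffer)
        if payload is None:
            break  # partial frame stays buffered, like _read_frame above
        frame_buffer = frame_buffer[consumed:]  # trim, like _trim_frame_buffer
        frames.append(payload)
    return frame_buffer, frames

buf = b''
buf, frames = on_data_available(buf, b'\x00\x00\x00\x02hi\x00\x00\x00\x01')
print(frames)  # [b'hi'] -- the second frame is incomplete and stays buffered
```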
""" LOGGER.info( 'AMQP stack terminated, failed to connect, or aborted: ' 'opened=%r, error-arg=%r; pending-error=%r', self._opened, error, self._error) if error is not None: if self._error is not None: LOGGER.debug( '_on_stream_terminated(): overriding ' 'pending-error=%r with %r', self._error, error) self._error = error else: assert self._error is not None, ( '_on_stream_terminated() expected self._error to be populated ' 'with reason for terminating stack.') # Stop the heartbeat checker if it exists self._remove_heartbeat() # Remove connection management callbacks self._remove_callbacks(0, [spec.Connection.Close, spec.Connection.Start]) if self.params.blocked_connection_timeout is not None: self._remove_callbacks(0, [spec.Connection.Blocked, spec.Connection.Unblocked]) if not self._opened and isinstance(self._error, (exceptions.StreamLostError, exceptions.ConnectionClosedByBroker)): # Heuristically deduce error based on connection state if self.connection_state == self.CONNECTION_PROTOCOL: LOGGER.error('Probably incompatible Protocol Versions') self._error = exceptions.IncompatibleProtocolError( repr(self._error)) elif self.connection_state == self.CONNECTION_START: LOGGER.error( 'Connection closed while authenticating indicating a ' 'probable authentication error') self._error = exceptions.ProbableAuthenticationError( repr(self._error)) elif self.connection_state == self.CONNECTION_TUNE: LOGGER.error('Connection closed while tuning the connection ' 'indicating a probable permission error when ' 'accessing a virtual host') self._error = exceptions.ProbableAccessDeniedError( repr(self._error)) elif self.connection_state not in [ self.CONNECTION_OPEN, self.CONNECTION_CLOSED, self.CONNECTION_CLOSING ]: LOGGER.warning('Unexpected connection state on disconnect: %i', self.connection_state) # Transition to closed state self._set_connection_state(self.CONNECTION_CLOSED) # Inform our channel proxies, if any are still around for channel in dictkeys(self._channels): if 
channel not in self._channels: continue # pylint: disable=W0212 self._channels[channel]._on_close_meta(self._error) # Inform interested parties if not self._opened: LOGGER.info('Connection setup terminated due to %r', self._error) self.callbacks.process(0, self.ON_CONNECTION_ERROR, self, self, self._error) else: LOGGER.info('Stack terminated due to %r', self._error) self.callbacks.process(0, self.ON_CONNECTION_CLOSED, self, self, self._error) # Reset connection properties self._init_connection_state() def _process_callbacks(self, frame_value): """Process the callbacks for the frame if the frame is a method frame and if it has any callbacks pending. :param pika.frame.Method frame_value: The frame to process :rtype: bool """ if (self._is_method_frame(frame_value) and self._has_pending_callbacks(frame_value)): self.callbacks.process( frame_value.channel_number, # Prefix frame_value.method, # Key self, # Caller frame_value) # Args return True return False def _process_frame(self, frame_value): """Process an inbound frame from the socket. :param pika.frame.Frame|pika.frame.Method frame_value: The frame to process """ # Will receive a frame type of -1 if protocol version mismatch if frame_value.frame_type < 0: return # Keep track of how many frames have been read self.frames_received += 1 # Process any callbacks, if True, exit method if self._process_callbacks(frame_value): return # If a heartbeat is received, update the checker if isinstance(frame_value, frame.Heartbeat): if self._heartbeat_checker: self._heartbeat_checker.received() else: LOGGER.warning('Received heartbeat frame without a heartbeat ' 'checker') # If the frame has a channel number beyond the base channel, deliver it elif frame_value.channel_number > 0: self._deliver_frame_to_channel(frame_value) def _read_frame(self): """Try and read from the frame buffer and decode a frame. 
:rtype tuple: (int, pika.frame.Frame) """ return frame.decode_frame(self._frame_buffer) def _remove_callbacks(self, channel_number, method_classes): """Remove the callbacks for the specified channel number and list of method frames. :param int channel_number: The channel number to remove the callback on :param sequence method_classes: The method classes (derived from `pika.amqp_object.Method`) for the callbacks """ for method_cls in method_classes: self.callbacks.remove(str(channel_number), method_cls) def _rpc(self, channel_number, method, callback=None, acceptable_replies=None): """Make an RPC call for the given callback, channel number and method. acceptable_replies lists out what responses we'll process from the server with the specified callback. :param int channel_number: The channel number for the RPC call :param pika.amqp_object.Method method: The method frame to call :param callable callback: The callback for the RPC response :param list acceptable_replies: The replies this RPC call expects """ # Validate that acceptable_replies is a list or None if acceptable_replies and not isinstance(acceptable_replies, list): raise TypeError('acceptable_replies should be list or None') # Validate the callback is callable if callback is not None: validators.require_callback(callback) for reply in acceptable_replies: self.callbacks.add(channel_number, reply, callback) # Send the rpc call to RabbitMQ self._send_method(channel_number, method) def _send_connection_close(self, reply_code, reply_text): """Send a Connection.Close method frame. 
:param int reply_code: The reason for the close :param str reply_text: The text reason for the close """ self._rpc(0, spec.Connection.Close(reply_code, reply_text, 0, 0), self._on_connection_close_ok, [spec.Connection.CloseOk]) def _send_connection_open(self): """Send a Connection.Open frame""" self._rpc(0, spec.Connection.Open( self.params.virtual_host, insist=True), self._on_connection_open_ok, [spec.Connection.OpenOk]) def _send_connection_start_ok(self, authentication_type, response): """Send a Connection.StartOk frame :param str authentication_type: The auth type value :param str response: The encoded value to send """ self._send_method( 0, spec.Connection.StartOk(self._client_properties, authentication_type, response, self.params.locale)) def _send_connection_tune_ok(self): """Send a Connection.TuneOk frame""" self._send_method( 0, spec.Connection.TuneOk(self.params.channel_max, self.params.frame_max, self.params.heartbeat)) def _send_frame(self, frame_value): """This appends the fully generated frame to send to the broker to the output buffer which will be then sent via the connection adapter. :param pika.frame.Frame|pika.frame.ProtocolHeader frame_value: The frame to write :raises: exceptions.ConnectionClosed """ if self.is_closed: LOGGER.error('Attempted to send frame when closed') raise exceptions.ConnectionWrongStateError( 'Attempted to send a frame on closed connection.') marshaled_frame = frame_value.marshal() self._output_marshaled_frames([marshaled_frame]) def _send_method(self, channel_number, method, content=None): """Constructs a RPC method frame and then sends it to the broker. :param int channel_number: The channel number for the frame :param pika.amqp_object.Method method: The method to send :param tuple content: If set, is a content frame, is tuple of properties and body. 
""" if content: self._send_message(channel_number, method, content) else: self._send_frame(frame.Method(channel_number, method)) def _send_message(self, channel_number, method_frame, content): """Publish a message. :param int channel_number: The channel number for the frame :param pika.object.Method method_frame: The method frame to send :param tuple content: A content frame, which is tuple of properties and body. """ length = len(content[1]) marshaled_body_frames = [] # Note: we construct the Method, Header and Content objects, marshal them # *then* output in case the marshaling operation throws an exception frame_method = frame.Method(channel_number, method_frame) frame_header = frame.Header(channel_number, length, content[0]) marshaled_body_frames.append(frame_method.marshal()) marshaled_body_frames.append(frame_header.marshal()) if content[1]: chunks = int(math.ceil(float(length) / self._body_max_length)) for chunk in xrange(0, chunks): start = chunk * self._body_max_length end = start + self._body_max_length if end > length: end = length frame_body = frame.Body(channel_number, content[1][start:end]) marshaled_body_frames.append(frame_body.marshal()) self._output_marshaled_frames(marshaled_body_frames) def _set_connection_state(self, connection_state): """Set the connection state. 
:param int connection_state: The connection state to set """ LOGGER.debug('New Connection state: %s (prev=%s)', self._STATE_NAMES[connection_state], self._STATE_NAMES[self.connection_state]) self.connection_state = connection_state def _set_server_information(self, method_frame): """Set the server properties and capabilities :param spec.connection.Start method_frame: The Connection.Start frame """ self.server_properties = method_frame.method.server_properties self.server_capabilities = self.server_properties.get( 'capabilities', dict()) if hasattr(self.server_properties, 'capabilities'): del self.server_properties['capabilities'] def _trim_frame_buffer(self, byte_count): """Trim the leading N bytes off the frame buffer and increment the counter that keeps track of how many bytes have been read/used from the socket. :param int byte_count: The number of bytes consumed """ self._frame_buffer = self._frame_buffer[byte_count:] self.bytes_received += byte_count def _output_marshaled_frames(self, marshaled_frames): """Output list of marshaled frames to buffer and update stats :param list marshaled_frames: A list of frames marshaled to bytes """ for marshaled_frame in marshaled_frames: self.bytes_sent += len(marshaled_frame) self.frames_sent += 1 self._adapter_emit_data(marshaled_frame) pika-1.2.0/pika/credentials.py """The credentials classes are used to encapsulate all authentication information for the :class:`~pika.connection.ConnectionParameters` class. The :class:`~pika.credentials.PlainCredentials` class returns the properly formatted username and password to the :class:`~pika.connection.Connection`. To authenticate with Pika, create a :class:`~pika.credentials.PlainCredentials` object passing in the username and password and pass it as the credentials argument value to the :class:`~pika.connection.ConnectionParameters` object.
If you are using :class:`~pika.connection.URLParameters` you do not need a credentials object, one will automatically be created for you. If you are looking to implement SSL certificate style authentication, you would extend the :class:`~pika.credentials.ExternalCredentials` class implementing the required behavior. """ import logging from .compat import as_bytes LOGGER = logging.getLogger(__name__) class PlainCredentials(object): """A credentials object for the default authentication methodology with RabbitMQ. If you do not pass in credentials to the ConnectionParameters object, it will create credentials for 'guest' with the password of 'guest'. If you pass True to erase_on_connect the credentials will not be stored in memory after the Connection attempt has been made. :param str username: The username to authenticate with :param str password: The password to authenticate with :param bool erase_on_connect: erase credentials on connect. """ TYPE = 'PLAIN' def __init__(self, username, password, erase_on_connect=False): """Create a new instance of PlainCredentials :param str username: The username to authenticate with :param str password: The password to authenticate with :param bool erase_on_connect: erase credentials on connect. 
""" self.username = username self.password = password self.erase_on_connect = erase_on_connect def __eq__(self, other): if isinstance(other, PlainCredentials): return (self.username == other.username and self.password == other.password and self.erase_on_connect == other.erase_on_connect) return NotImplemented def __ne__(self, other): result = self.__eq__(other) if result is not NotImplemented: return not result return NotImplemented def response_for(self, start): """Validate that this type of authentication is supported :param spec.Connection.Start start: Connection.Start method :rtype: tuple(str|None, str|None) """ if as_bytes(PlainCredentials.TYPE) not in\ as_bytes(start.mechanisms).split(): return None, None return ( PlainCredentials.TYPE, b'\0' + as_bytes(self.username) + b'\0' + as_bytes(self.password)) def erase_credentials(self): """Called by Connection when it no longer needs the credentials""" if self.erase_on_connect: LOGGER.info("Erasing stored credential values") self.username = None self.password = None class ExternalCredentials(object): """The ExternalCredentials class allows the connection to use EXTERNAL authentication, generally with a client SSL certificate. 
""" TYPE = 'EXTERNAL' def __init__(self): """Create a new instance of ExternalCredentials""" self.erase_on_connect = False def __eq__(self, other): if isinstance(other, ExternalCredentials): return self.erase_on_connect == other.erase_on_connect return NotImplemented def __ne__(self, other): result = self.__eq__(other) if result is not NotImplemented: return not result return NotImplemented def response_for(self, start): # pylint: disable=R0201 """Validate that this type of authentication is supported :param spec.Connection.Start start: Connection.Start method :rtype: tuple(str or None, str or None) """ if as_bytes(ExternalCredentials.TYPE) not in\ as_bytes(start.mechanisms).split(): return None, None return ExternalCredentials.TYPE, b'' def erase_credentials(self): # pylint: disable=R0201 """Called by Connection when it no longer needs the credentials""" LOGGER.debug('Not supported by this Credentials type') # Append custom credential types to this list for validation support VALID_TYPES = [PlainCredentials, ExternalCredentials] pika-1.2.0/pika/data.py000066400000000000000000000230221400701476500146500ustar00rootroot00000000000000"""AMQP Table Encoding/Decoding""" import struct import decimal import calendar from datetime import datetime from pika import exceptions from pika.compat import PY2, basestring from pika.compat import unicode_type, long, as_bytes def encode_short_string(pieces, value): """Encode a string value as short string and append it to pieces list returning the size of the encoded value. :param list pieces: Already encoded values :param str value: String value to encode :rtype: int """ encoded_value = as_bytes(value) length = len(encoded_value) # 4.2.5.3 # Short strings, stored as an 8-bit unsigned integer length followed by zero # or more octets of data. Short strings can carry up to 255 octets of UTF-8 # data, but may not contain binary zero octets. # ... 
# 4.2.5.5 # The server SHOULD validate field names and upon receiving an invalid field # name, it SHOULD signal a connection exception with reply code 503 (syntax # error). # -> validate length (avoid truncated utf-8 / corrupted data), but skip null # byte check. if length > 255: raise exceptions.ShortStringTooLong(encoded_value) pieces.append(struct.pack('B', length)) pieces.append(encoded_value) return 1 + length if PY2: def decode_short_string(encoded, offset): """Decode a short string value from ``encoded`` data at ``offset``. """ length = struct.unpack_from('B', encoded, offset)[0] offset += 1 # Purely for compatibility with the original Python 2 code; the reason # for this conversion is unclear. value = encoded[offset:offset + length] try: value = bytes(value) except UnicodeEncodeError: pass offset += length return value, offset else: def decode_short_string(encoded, offset): """Decode a short string value from ``encoded`` data at ``offset``. """ length = struct.unpack_from('B', encoded, offset)[0] offset += 1 value = encoded[offset:offset + length] try: value = value.decode('utf8') except UnicodeDecodeError: pass offset += length return value, offset def encode_table(pieces, table): """Encode a dict as an AMQP table appending the encoded table to the pieces list passed in. :param list pieces: Already encoded frame pieces :param dict table: The dict to encode :rtype: int """ table = table or {} length_index = len(pieces) pieces.append(None) # placeholder tablesize = 0 for (key, value) in table.items(): tablesize += encode_short_string(pieces, key) tablesize += encode_value(pieces, value) pieces[length_index] = struct.pack('>I', tablesize) return tablesize + 4 def encode_value(pieces, value): # pylint: disable=R0911 """Encode the value passed in and append it to the pieces list returning the size of the encoded value.
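Per the spec excerpt above, a short string on the wire is one unsigned length octet followed by up to 255 octets of UTF-8 data. A self-contained round trip with `struct`, without the `pieces`-list plumbing used by the module:

```python
import struct

def encode_short_string(value):
    data = value.encode('utf-8')
    if len(data) > 255:  # a single length octet caps it at 255 octets
        raise ValueError('short string too long')
    return struct.pack('B', len(data)) + data

def decode_short_string(encoded, offset):
    (length,) = struct.unpack_from('B', encoded, offset)
    start = offset + 1
    value = encoded[start:start + length].decode('utf-8')
    return value, start + length

wire = encode_short_string('queue-name')
print(decode_short_string(wire, 0))  # ('queue-name', 11)
```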
:param list pieces: Already encoded values :param any value: The value to encode :rtype: int """ if PY2: if isinstance(value, basestring): if isinstance(value, unicode_type): value = value.encode('utf-8') pieces.append(struct.pack('>cI', b'S', len(value))) pieces.append(value) return 5 + len(value) else: # support only str on Python 3 if isinstance(value, basestring): value = value.encode('utf-8') pieces.append(struct.pack('>cI', b'S', len(value))) pieces.append(value) return 5 + len(value) if isinstance(value, bytes): pieces.append(struct.pack('>cI', b'x', len(value))) pieces.append(value) return 5 + len(value) if isinstance(value, bool): pieces.append(struct.pack('>cB', b't', int(value))) return 2 if isinstance(value, long): pieces.append(struct.pack('>cq', b'l', value)) return 9 elif isinstance(value, int): try: packed = struct.pack('>ci', b'I', value) pieces.append(packed) return 5 except struct.error: packed = struct.pack('>cq', b'l', long(value)) pieces.append(packed) return 9 elif isinstance(value, decimal.Decimal): value = value.normalize() if value.as_tuple().exponent < 0: decimals = -value.as_tuple().exponent raw = int(value * (decimal.Decimal(10)**decimals)) pieces.append(struct.pack('>cBi', b'D', decimals, raw)) else: # per spec, the "decimals" octet is unsigned (!) 
pieces.append(struct.pack('>cBi', b'D', 0, int(value))) return 6 elif isinstance(value, datetime): pieces.append( struct.pack('>cQ', b'T', calendar.timegm(value.utctimetuple()))) return 9 elif isinstance(value, dict): pieces.append(struct.pack('>c', b'F')) return 1 + encode_table(pieces, value) elif isinstance(value, list): list_pieces = [] for val in value: encode_value(list_pieces, val) piece = b''.join(list_pieces) pieces.append(struct.pack('>cI', b'A', len(piece))) pieces.append(piece) return 5 + len(piece) elif value is None: pieces.append(struct.pack('>c', b'V')) return 1 else: raise exceptions.UnsupportedAMQPFieldException(pieces, value) def decode_table(encoded, offset): """Decode the AMQP table passed in from the encoded value returning the decoded result and the number of bytes read plus the offset. :param str encoded: The binary encoded data to decode :param int offset: The starting byte offset :rtype: tuple """ result = {} tablesize = struct.unpack_from('>I', encoded, offset)[0] offset += 4 limit = offset + tablesize while offset < limit: key, offset = decode_short_string(encoded, offset) value, offset = decode_value(encoded, offset) result[key] = value return result, offset def decode_value(encoded, offset): # pylint: disable=R0912,R0915 """Decode the value passed in returning the decoded value and the number of bytes read in addition to the starting offset. 
:param str encoded: The binary encoded data to decode :param int offset: The starting byte offset :rtype: tuple :raises: pika.exceptions.InvalidFieldTypeException """ # slice to get bytes in Python 3 and str in Python 2 kind = encoded[offset:offset + 1] offset += 1 # Bool if kind == b't': value = struct.unpack_from('>B', encoded, offset)[0] value = bool(value) offset += 1 # Short-Short Int elif kind == b'b': value = struct.unpack_from('>B', encoded, offset)[0] offset += 1 # Short-Short Unsigned Int elif kind == b'B': value = struct.unpack_from('>b', encoded, offset)[0] offset += 1 # Short Int elif kind == b'U': value = struct.unpack_from('>h', encoded, offset)[0] offset += 2 # Short Unsigned Int elif kind == b'u': value = struct.unpack_from('>H', encoded, offset)[0] offset += 2 # Long Int elif kind == b'I': value = struct.unpack_from('>i', encoded, offset)[0] offset += 4 # Long Unsigned Int elif kind == b'i': value = struct.unpack_from('>I', encoded, offset)[0] offset += 4 # Long-Long Int elif kind == b'L': value = long(struct.unpack_from('>q', encoded, offset)[0]) offset += 8 # Long-Long Unsigned Int elif kind == b'l': value = long(struct.unpack_from('>Q', encoded, offset)[0]) offset += 8 # Float elif kind == b'f': value = long(struct.unpack_from('>f', encoded, offset)[0]) offset += 4 # Double elif kind == b'd': value = long(struct.unpack_from('>d', encoded, offset)[0]) offset += 8 # Decimal elif kind == b'D': decimals = struct.unpack_from('B', encoded, offset)[0] offset += 1 raw = struct.unpack_from('>i', encoded, offset)[0] offset += 4 value = decimal.Decimal(raw) * (decimal.Decimal(10)**-decimals) # https://github.com/pika/pika/issues/1205 # Short Signed Int elif kind == b's': value = struct.unpack_from('>h', encoded, offset)[0] offset += 2 # Long String elif kind == b'S': length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 value = encoded[offset:offset + length] try: value = value.decode('utf8') except UnicodeDecodeError: pass offset += length 
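Each `elif` branch above pairs one AMQP field-type octet with a fixed `struct` layout. A minimal stdlib-only sketch (hypothetical values; it mirrors the packing logic of `encode_value`/`decode_value` rather than calling them) shows a boolean (`'t'`) and a long string (`'S'`) field round-tripping:

```python
import struct

# Encode a boolean field as encode_value does: type octet 't' + unsigned byte.
bool_field = struct.pack('>cB', b't', 1)

# Encode a long string field: type octet 'S' + 32-bit big-endian length + bytes.
text = 'hello'.encode('utf8')
str_field = struct.pack('>cI', b'S', len(text)) + text

buf = bool_field + str_field

# Decode, following decode_value's shape: read the type octet, then the payload.
offset = 0
kind = buf[offset:offset + 1]
offset += 1
flag = bool(struct.unpack_from('>B', buf, offset)[0])
offset += 1

kind = buf[offset:offset + 1]
offset += 1
length = struct.unpack_from('>I', buf, offset)[0]
offset += 4
value = buf[offset:offset + length].decode('utf8')

assert flag is True and value == 'hello'
```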
elif kind == b'x': length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 value = encoded[offset:offset + length] offset += length # Field Array elif kind == b'A': length = struct.unpack_from('>I', encoded, offset)[0] offset += 4 offset_end = offset + length value = [] while offset < offset_end: val, offset = decode_value(encoded, offset) value.append(val) # Timestamp elif kind == b'T': value = datetime.utcfromtimestamp( struct.unpack_from('>Q', encoded, offset)[0]) offset += 8 # Field Table elif kind == b'F': (value, offset) = decode_table(encoded, offset) # Null / Void elif kind == b'V': value = None else: raise exceptions.InvalidFieldTypeException(kind) return value, offset pika-1.2.0/pika/diagnostic_utils.py000066400000000000000000000033371400701476500173120ustar00rootroot00000000000000""" Diagnostic utilities """ import functools import sys import traceback def create_log_exception_decorator(logger): """Create a decorator that logs and reraises any exceptions that escape the decorated function :param logging.Logger logger: :returns: the decorator :rtype: callable Usage example import logging from pika.diagnostics_utils import create_log_exception_decorator _log_exception = create_log_exception_decorator(logging.getLogger(__name__)) @_log_exception def my_func_or_method(): raise Exception('Oops!') """ def log_exception(func): """The decorator returned by the parent function :param func: function to be wrapped :returns: the function wrapper :rtype: callable """ @functools.wraps(func) def log_exception_func_wrap(*args, **kwargs): """The wrapper function returned by the decorator. Invokes the function with the given args/kwargs and returns the function's return value. 
If the function exits with an exception, logs the exception traceback and re-raises the :param args: positional args passed to wrapped function :param kwargs: keyword args passed to wrapped function :returns: whatever the wrapped function returns :rtype: object """ try: return func(*args, **kwargs) except: logger.exception( 'Wrapped func exited with exception. Caller\'s stack:\n%s', ''.join(traceback.format_exception(*sys.exc_info()))) raise return log_exception_func_wrap return log_exception pika-1.2.0/pika/exceptions.py000066400000000000000000000236051400701476500161270ustar00rootroot00000000000000"""Pika specific exceptions""" # pylint: disable=C0111,E1136 class AMQPError(Exception): def __repr__(self): return '%s: An unspecified AMQP error has occurred; %s' % ( self.__class__.__name__, self.args) class AMQPConnectionError(AMQPError): def __repr__(self): if len(self.args) == 2: return '{}: ({}) {}'.format(self.__class__.__name__, self.args[0], self.args[1]) else: return '{}: {}'.format(self.__class__.__name__, self.args) class ConnectionOpenAborted(AMQPConnectionError): """Client closed connection while opening.""" class StreamLostError(AMQPConnectionError): """Stream (TCP) connection lost.""" class IncompatibleProtocolError(AMQPConnectionError): def __repr__(self): return ( '%s: The protocol returned by the server is not supported: %s' % ( self.__class__.__name__, self.args, )) class AuthenticationError(AMQPConnectionError): def __repr__(self): return ('%s: Server and client could not negotiate use of the %s ' 'authentication mechanism' % (self.__class__.__name__, self.args[0])) class ProbableAuthenticationError(AMQPConnectionError): def __repr__(self): return ( '%s: Client was disconnected at a connection stage indicating a ' 'probable authentication error: %s' % ( self.__class__.__name__, self.args, )) class ProbableAccessDeniedError(AMQPConnectionError): def __repr__(self): return ( '%s: Client was disconnected at a connection stage indicating a ' 'probable 
denial of access to the specified virtual host: %s' % ( self.__class__.__name__, self.args, )) class NoFreeChannels(AMQPConnectionError): def __repr__(self): return '%s: The connection has run out of free channels' % ( self.__class__.__name__) class ConnectionWrongStateError(AMQPConnectionError): """Connection is in wrong state for the requested operation.""" def __repr__(self): if self.args: return super(ConnectionWrongStateError, self).__repr__() else: return ('%s: The connection is in wrong state for the requested ' 'operation.' % self.__class__.__name__) class ConnectionClosed(AMQPConnectionError): def __init__(self, reply_code, reply_text): """ :param int reply_code: reply-code that was used in user's or broker's `Connection.Close` method. NEW in v1.0.0 :param str reply_text: reply-text that was used in user's or broker's `Connection.Close` method. Human-readable string corresponding to `reply_code`. NEW in v1.0.0 """ super(ConnectionClosed, self).__init__(int(reply_code), str(reply_text)) def __repr__(self): return '{}: ({}) {!r}'.format(self.__class__.__name__, self.reply_code, self.reply_text) @property def reply_code(self): """ NEW in v1.0.0 :rtype: int """ return self.args[0] @property def reply_text(self): """ NEW in v1.0.0 :rtype: str """ return self.args[1] class ConnectionClosedByBroker(ConnectionClosed): """Connection.Close from broker.""" class ConnectionClosedByClient(ConnectionClosed): """Connection was closed at request of Pika client.""" class ConnectionBlockedTimeout(AMQPConnectionError): """RabbitMQ-specific: timed out waiting for connection.unblocked.""" class AMQPHeartbeatTimeout(AMQPConnectionError): """Connection was dropped as result of heartbeat timeout.""" class AMQPChannelError(AMQPError): def __repr__(self): return '{}: {!r}'.format(self.__class__.__name__, self.args) class ChannelWrongStateError(AMQPChannelError): """Channel is in wrong state for the requested operation.""" class ChannelClosed(AMQPChannelError): """The channel closed 
by client or by broker """ def __init__(self, reply_code, reply_text): """ :param int reply_code: reply-code that was used in user's or broker's `Channel.Close` method. One of the AMQP-defined Channel Errors. NEW in v1.0.0 :param str reply_text: reply-text that was used in user's or broker's `Channel.Close` method. Human-readable string corresponding to `reply_code`; NEW in v1.0.0 """ super(ChannelClosed, self).__init__(int(reply_code), str(reply_text)) def __repr__(self): return '{}: ({}) {!r}'.format(self.__class__.__name__, self.reply_code, self.reply_text) @property def reply_code(self): """ NEW in v1.0.0 :rtype: int """ return self.args[0] @property def reply_text(self): """ NEW in v1.0.0 :rtype: str """ return self.args[1] class ChannelClosedByBroker(ChannelClosed): """`Channel.Close` from broker; may be passed as reason to channel's on-closed callback of non-blocking connection adapters or raised by `BlockingConnection`. NEW in v1.0.0 """ class ChannelClosedByClient(ChannelClosed): """Channel closed by client upon receipt of `Channel.CloseOk`; may be passed as reason to channel's on-closed callback of non-blocking connection adapters, but not raised by `BlockingConnection`. NEW in v1.0.0 """ class DuplicateConsumerTag(AMQPChannelError): def __repr__(self): return ('%s: The consumer tag specified already exists for this ' 'channel: %s' % (self.__class__.__name__, self.args[0])) class ConsumerCancelled(AMQPChannelError): def __repr__(self): return '%s: Server cancelled consumer' % self.__class__.__name__ class UnroutableError(AMQPChannelError): """Exception containing one or more unroutable messages returned by broker via Basic.Return. Used by BlockingChannel. 
In publisher-acknowledgements mode, this is raised upon receipt of Basic.Ack from broker; in the event of Basic.Nack from broker, `NackError` is raised instead """ def __init__(self, messages): """ :param sequence(blocking_connection.ReturnedMessage) messages: Sequence of returned unroutable messages """ super(UnroutableError, self).__init__( "%s unroutable message(s) returned" % (len(messages))) self.messages = messages def __repr__(self): return '%s: %i unroutable messages returned by broker' % ( self.__class__.__name__, len(self.messages)) class NackError(AMQPChannelError): """This exception is raised when a message published in publisher-acknowledgements mode is Nack'ed by the broker. Used by BlockingChannel. """ def __init__(self, messages): """ :param sequence(blocking_connection.ReturnedMessage) messages: Sequence of returned unroutable messages """ super(NackError, self).__init__("%s message(s) NACKed" % (len(messages))) self.messages = messages def __repr__(self): return '%s: %i unroutable messages returned by broker' % ( self.__class__.__name__, len(self.messages)) class InvalidChannelNumber(AMQPError): def __repr__(self): return '%s: An invalid channel number has been specified: %s' % ( self.__class__.__name__, self.args[0]) class ProtocolSyntaxError(AMQPError): def __repr__(self): return '%s: An unspecified protocol syntax error occurred' % ( self.__class__.__name__) class UnexpectedFrameError(ProtocolSyntaxError): def __repr__(self): return '%s: Received a frame out of sequence: %r' % ( self.__class__.__name__, self.args[0]) class ProtocolVersionMismatch(ProtocolSyntaxError): def __repr__(self): return '%s: Protocol versions did not match: %r vs %r' % ( self.__class__.__name__, self.args[0], self.args[1]) class BodyTooLongError(ProtocolSyntaxError): def __repr__(self): return ('%s: Received too many bytes for a message delivery: ' 'Received %i, expected %i' % (self.__class__.__name__, self.args[0], self.args[1])) class 
InvalidFrameError(ProtocolSyntaxError): def __repr__(self): return '%s: Invalid frame received: %r' % (self.__class__.__name__, self.args[0]) class InvalidFieldTypeException(ProtocolSyntaxError): def __repr__(self): return '%s: Unsupported field kind %s' % (self.__class__.__name__, self.args[0]) class UnsupportedAMQPFieldException(ProtocolSyntaxError): def __repr__(self): return '%s: Unsupported field kind %s' % (self.__class__.__name__, type(self.args[1])) class MethodNotImplemented(AMQPError): pass class ChannelError(Exception): def __repr__(self): return '%s: An unspecified error occurred with the Channel' % ( self.__class__.__name__) class ReentrancyError(Exception): """The requested operation would result in unsupported recursion or reentrancy. Used by BlockingConnection/BlockingChannel """ class ShortStringTooLong(AMQPError): def __repr__(self): return ('%s: AMQP Short String can contain up to 255 bytes: ' '%.300s' % (self.__class__.__name__, self.args[0])) class DuplicateGetOkCallback(ChannelError): def __repr__(self): return ('%s: basic_get can only be called again after the callback for ' 'the previous basic_get is executed' % self.__class__.__name__) pika-1.2.0/pika/exchange_type.py000066400000000000000000000002131400701476500165570ustar00rootroot00000000000000from enum import Enum class ExchangeType(Enum) : direct = 'direct' fanout = 'fanout' headers = 'headers' topic = 'topic' pika-1.2.0/pika/frame.py000066400000000000000000000171001400701476500150310ustar00rootroot00000000000000"""Frame objects that do the frame demarshaling and marshaling.""" import logging import struct from pika import amqp_object from pika import exceptions from pika import spec from pika.compat import byte LOGGER = logging.getLogger(__name__) class Frame(amqp_object.AMQPObject): """Base Frame object mapping. 
Defines a behavior for all child classes for assignment of core attributes and implementation of the a core _marshal method which child classes use to create the binary AMQP frame. """ NAME = 'Frame' def __init__(self, frame_type, channel_number): """Create a new instance of a frame :param int frame_type: The frame type :param int channel_number: The channel number for the frame """ self.frame_type = frame_type self.channel_number = channel_number def _marshal(self, pieces): """Create the full AMQP wire protocol frame data representation :rtype: bytes """ payload = b''.join(pieces) return struct.pack('>BHI', self.frame_type, self.channel_number, len(payload)) + payload + byte(spec.FRAME_END) def marshal(self): """To be ended by child classes :raises NotImplementedError """ raise NotImplementedError class Method(Frame): """Base Method frame object mapping. AMQP method frames are mapped on top of this class for creating or accessing their data and attributes. """ NAME = 'METHOD' def __init__(self, channel_number, method): """Create a new instance of a frame :param int channel_number: The frame type :param pika.Spec.Class.Method method: The AMQP Class.Method """ Frame.__init__(self, spec.FRAME_METHOD, channel_number) self.method = method def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ pieces = self.method.encode() pieces.insert(0, struct.pack('>I', self.method.INDEX)) return self._marshal(pieces) class Header(Frame): """Header frame object mapping. AMQP content header frames are mapped on top of this class for creating or accessing their data and attributes. 
""" NAME = 'Header' def __init__(self, channel_number, body_size, props): """Create a new instance of a AMQP ContentHeader object :param int channel_number: The channel number for the frame :param int body_size: The number of bytes for the body :param pika.spec.BasicProperties props: Basic.Properties object """ Frame.__init__(self, spec.FRAME_HEADER, channel_number) self.body_size = body_size self.properties = props def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ pieces = self.properties.encode() pieces.insert( 0, struct.pack('>HxxQ', self.properties.INDEX, self.body_size)) return self._marshal(pieces) class Body(Frame): """Body frame object mapping class. AMQP content body frames are mapped on to this base class for getting/setting of attributes/data. """ NAME = 'Body' def __init__(self, channel_number, fragment): """ Parameters: - channel_number: int - fragment: unicode or str """ Frame.__init__(self, spec.FRAME_BODY, channel_number) self.fragment = fragment def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ return self._marshal([self.fragment]) class Heartbeat(Frame): """Heartbeat frame object mapping class. AMQP Heartbeat frames are mapped on to this class for a common access structure to the attributes/data values. 
""" NAME = 'Heartbeat' def __init__(self): """Create a new instance of the Heartbeat frame""" Frame.__init__(self, spec.FRAME_HEARTBEAT, 0) def marshal(self): """Return the AMQP binary encoded value of the frame :rtype: str """ return self._marshal(list()) class ProtocolHeader(amqp_object.AMQPObject): """AMQP Protocol header frame class which provides a pythonic interface for creating AMQP Protocol headers """ NAME = 'ProtocolHeader' def __init__(self, major=None, minor=None, revision=None): """Construct a Protocol Header frame object for the specified AMQP version :param int major: Major version number :param int minor: Minor version number :param int revision: Revision """ self.frame_type = -1 self.major = major or spec.PROTOCOL_VERSION[0] self.minor = minor or spec.PROTOCOL_VERSION[1] self.revision = revision or spec.PROTOCOL_VERSION[2] def marshal(self): """Return the full AMQP wire protocol frame data representation of the ProtocolHeader frame :rtype: str """ return b'AMQP' + struct.pack('BBBB', 0, self.major, self.minor, self.revision) def decode_frame(data_in): # pylint: disable=R0911,R0914 """Receives raw socket data and attempts to turn it into a frame. 
Returns bytes used to make the frame and the frame :param str data_in: The raw data stream :rtype: tuple(bytes consumed, frame) :raises: pika.exceptions.InvalidFrameError """ # Look to see if it's a protocol header frame try: if data_in[0:4] == b'AMQP': major, minor, revision = struct.unpack_from('BBB', data_in, 5) return 8, ProtocolHeader(major, minor, revision) except (IndexError, struct.error): return 0, None # Get the Frame Type, Channel Number and Frame Size try: (frame_type, channel_number, frame_size) = struct.unpack( '>BHL', data_in[0:7]) except struct.error: return 0, None # Get the frame data frame_end = spec.FRAME_HEADER_SIZE + frame_size + spec.FRAME_END_SIZE # We don't have all of the frame yet if frame_end > len(data_in): return 0, None # The Frame termination chr is wrong if data_in[frame_end - 1:frame_end] != byte(spec.FRAME_END): raise exceptions.InvalidFrameError("Invalid FRAME_END marker") # Get the raw frame data frame_data = data_in[spec.FRAME_HEADER_SIZE:frame_end - 1] if frame_type == spec.FRAME_METHOD: # Get the Method ID from the frame data method_id = struct.unpack_from('>I', frame_data)[0] # Get a Method object for this method_id method = spec.methods[method_id]() # Decode the content method.decode(frame_data, 4) # Return the amount of data consumed and the Method object return frame_end, Method(channel_number, method) elif frame_type == spec.FRAME_HEADER: # Return the header class and body size class_id, weight, body_size = struct.unpack_from('>HHQ', frame_data) # Get the Properties type properties = spec.props[class_id]() # Decode the properties out = properties.decode(frame_data[12:]) # Return a Header frame return frame_end, Header(channel_number, body_size, properties) elif frame_type == spec.FRAME_BODY: # Return the amount of data consumed and the Body frame w/ data return frame_end, Body(channel_number, frame_data) elif frame_type == spec.FRAME_HEARTBEAT: # Return the amount of data and a Heartbeat frame return frame_end, 
Heartbeat() raise exceptions.InvalidFrameError("Unknown frame type: %i" % frame_type) pika-1.2.0/pika/heartbeat.py000066400000000000000000000177521400701476500157130ustar00rootroot00000000000000"""Handle AMQP Heartbeats""" import logging import pika.exceptions from pika import frame LOGGER = logging.getLogger(__name__) class HeartbeatChecker(object): """Sends heartbeats to the broker. The provided timeout is used to determine if the connection is stale - no received heartbeats or other activity will close the connection. See the parameter list for more details. """ _STALE_CONNECTION = "No activity or too many missed heartbeats in the last %i seconds" def __init__(self, connection, timeout): """Create an object that will check for activity on the provided connection as well as receive heartbeat frames from the broker. The timeout parameter defines a window within which this activity must happen. If not, the connection is considered dead and closed. The value passed for timeout is also used to calculate an interval at which a heartbeat frame is sent to the broker. The interval is equal to the timeout value divided by two. :param pika.connection.Connection: Connection object :param int timeout: Connection idle timeout. If no activity occurs on the connection nor heartbeat frames received during the timeout window the connection will be closed. The interval used to send heartbeats is calculated from this value by dividing it by two. """ if timeout < 1: raise ValueError('timeout must >= 0, but got %r' % (timeout,)) self._connection = connection # Note: see the following documents: # https://www.rabbitmq.com/heartbeats.html#heartbeats-timeout # https://github.com/pika/pika/pull/1072 # https://groups.google.com/d/topic/rabbitmq-users/Fmfeqe5ocTY/discussion # There is a certain amount of confusion around how client developers # interpret the spec. The spec talks about 2 missed heartbeats as a # *timeout*, plus that any activity on the connection counts for a # heartbeat. 
This is to avoid edge cases and not to depend on network # latency. self._timeout = timeout self._send_interval = float(timeout) / 2 # Note: Pika will calculate the heartbeat / connectivity check interval # by adding 5 seconds to the negotiated timeout to leave a bit of room # for broker heartbeats that may be right at the edge of the timeout # window. This is different behavior from the RabbitMQ Java client and # the spec that suggests a check interval equivalent to two times the # heartbeat timeout value. But, one advantage of adding a small amount # is that bad connections will be detected faster. # https://github.com/pika/pika/pull/1072#issuecomment-397850795 # https://github.com/rabbitmq/rabbitmq-java-client/blob/b55bd20a1a236fc2d1ea9369b579770fa0237615/src/main/java/com/rabbitmq/client/impl/AMQConnection.java#L773-L780 # https://github.com/ruby-amqp/bunny/blob/3259f3af2e659a49c38c2470aa565c8fb825213c/lib/bunny/session.rb#L1187-L1192 self._check_interval = timeout + 5 LOGGER.debug('timeout: %f send_interval: %f check_interval: %f', self._timeout, self._send_interval, self._check_interval) # Initialize counters self._bytes_received = 0 self._bytes_sent = 0 self._heartbeat_frames_received = 0 self._heartbeat_frames_sent = 0 self._idle_byte_intervals = 0 self._send_timer = None self._check_timer = None self._start_send_timer() self._start_check_timer() @property def bytes_received_on_connection(self): """Return the number of bytes received by the connection bytes object. :rtype int """ return self._connection.bytes_received @property def connection_is_idle(self): """Returns true if the byte count hasn't changed in enough intervals to trip the max idle threshold. """ return self._idle_byte_intervals > 0 def received(self): """Called when a heartbeat is received""" LOGGER.debug('Received heartbeat frame') self._heartbeat_frames_received += 1 def _send_heartbeat(self): """Invoked by a timer to send a heartbeat when we need to. 
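The comments above settle on sending a heartbeat at half the negotiated timeout and running the connectivity check five seconds past it; with a hypothetical negotiated timeout of 60 seconds the arithmetic works out to:

```python
# Worked example of HeartbeatChecker's interval arithmetic for a
# negotiated heartbeat timeout of 60 seconds (a hypothetical value).
timeout = 60
send_interval = float(timeout) / 2   # a heartbeat frame is sent every 30.0s
check_interval = timeout + 5         # connection activity is checked every 65s

assert send_interval == 30.0 and check_interval == 65
```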
""" LOGGER.debug('Sending heartbeat frame') self._send_heartbeat_frame() self._start_send_timer() def _check_heartbeat(self): """Invoked by a timer to check for broker heartbeats. Checks to see if we've missed any heartbeats and disconnect our connection if it's been idle too long. """ if self._has_received_data: self._idle_byte_intervals = 0 else: # Connection has not received any data, increment the counter self._idle_byte_intervals += 1 LOGGER.debug( 'Received %i heartbeat frames, sent %i, ' 'idle intervals %i', self._heartbeat_frames_received, self._heartbeat_frames_sent, self._idle_byte_intervals) if self.connection_is_idle: self._close_connection() return self._start_check_timer() def stop(self): """Stop the heartbeat checker""" if self._send_timer: LOGGER.debug('Removing timer for next heartbeat send interval') self._connection._adapter_remove_timeout(self._send_timer) # pylint: disable=W0212 self._send_timer = None if self._check_timer: LOGGER.debug('Removing timer for next heartbeat check interval') self._connection._adapter_remove_timeout(self._check_timer) # pylint: disable=W0212 self._check_timer = None def _close_connection(self): """Close the connection with the AMQP Connection-Forced value.""" LOGGER.info('Connection is idle, %i stale byte intervals', self._idle_byte_intervals) text = HeartbeatChecker._STALE_CONNECTION % self._timeout # Abort the stream connection. There is no point trying to gracefully # close the AMQP connection since lack of heartbeat suggests that the # stream is dead. self._connection._terminate_stream( # pylint: disable=W0212 pika.exceptions.AMQPHeartbeatTimeout(text)) @property def _has_received_data(self): """Returns True if the connection has received data. :rtype: bool """ return self._bytes_received != self.bytes_received_on_connection @staticmethod def _new_heartbeat_frame(): """Return a new heartbeat frame. 
:rtype pika.frame.Heartbeat """ return frame.Heartbeat() def _send_heartbeat_frame(self): """Send a heartbeat frame on the connection. """ LOGGER.debug('Sending heartbeat frame') self._connection._send_frame( # pylint: disable=W0212 self._new_heartbeat_frame()) self._heartbeat_frames_sent += 1 def _start_send_timer(self): """Start a new heartbeat send timer.""" self._send_timer = self._connection._adapter_call_later( # pylint: disable=W0212 self._send_interval, self._send_heartbeat) def _start_check_timer(self): """Start a new heartbeat check timer.""" # Note: update counters now to get current values # at the start of the timeout window. Values will be # checked against the connection's byte count at the # end of the window self._update_counters() self._check_timer = self._connection._adapter_call_later( # pylint: disable=W0212 self._check_interval, self._check_heartbeat) def _update_counters(self): """Update the internal counters for bytes sent and received and the number of frames received """ self._bytes_sent = self._connection.bytes_sent self._bytes_received = self._connection.bytes_received pika-1.2.0/pika/spec.py000066400000000000000000002310751400701476500147020ustar00rootroot00000000000000""" AMQP Specification ================== This module implements the constants and classes that comprise AMQP protocol level constructs. It should rarely be directly referenced outside of Pika's own internal use. .. note:: Auto-generated code by codegen.py, do not edit directly. Pull requests to this file without accompanying ``utils/codegen.py`` changes will be rejected. 
"""

import struct

from pika import amqp_object
from pika import data
from pika.compat import str_or_bytes, unicode_type
from pika.exchange_type import ExchangeType

# Python 3 support for str object
str = bytes

PROTOCOL_VERSION = (0, 9, 1)
PORT = 5672

ACCESS_REFUSED = 403
CHANNEL_ERROR = 504
COMMAND_INVALID = 503
CONNECTION_FORCED = 320
CONTENT_TOO_LARGE = 311
FRAME_BODY = 3
FRAME_END = 206
FRAME_END_SIZE = 1
FRAME_ERROR = 501
FRAME_HEADER = 2
FRAME_HEADER_SIZE = 7
FRAME_HEARTBEAT = 8
FRAME_MAX_SIZE = 131072
FRAME_METHOD = 1
FRAME_MIN_SIZE = 4096
INTERNAL_ERROR = 541
INVALID_PATH = 402
NOT_ALLOWED = 530
NOT_FOUND = 404
NOT_IMPLEMENTED = 540
NO_CONSUMERS = 313
NO_ROUTE = 312
PERSISTENT_DELIVERY_MODE = 2
PRECONDITION_FAILED = 406
REPLY_SUCCESS = 200
RESOURCE_ERROR = 506
RESOURCE_LOCKED = 405
SYNTAX_ERROR = 502
TRANSIENT_DELIVERY_MODE = 1
UNEXPECTED_FRAME = 505


class Connection(amqp_object.Class):

    INDEX = 0x000A  # 10
    NAME = 'Connection'

    class Start(amqp_object.Method):

        INDEX = 0x000A000A  # 10, 10; 655370
        NAME = 'Connection.Start'

        def __init__(self, version_major=0, version_minor=9,
                     server_properties=None, mechanisms='PLAIN',
                     locales='en_US'):
            self.version_major = version_major
            self.version_minor = version_minor
            self.server_properties = server_properties
            self.mechanisms = mechanisms
            self.locales = locales

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.version_major = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.version_minor = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            (self.server_properties, offset) = data.decode_table(encoded, offset)
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.mechanisms = encoded[offset:offset + length]
            try:
                self.mechanisms = str(self.mechanisms)
            except UnicodeEncodeError:
                pass
            offset += length
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.locales = encoded[offset:offset + length]
            try:
                self.locales = str(self.locales)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('B', self.version_major))
            pieces.append(struct.pack('B', self.version_minor))
            data.encode_table(pieces, self.server_properties)
            assert isinstance(self.mechanisms, str_or_bytes),\
                'A non-string value was supplied for self.mechanisms'
            value = self.mechanisms.encode('utf-8') if isinstance(self.mechanisms, unicode_type) else self.mechanisms
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            assert isinstance(self.locales, str_or_bytes),\
                'A non-string value was supplied for self.locales'
            value = self.locales.encode('utf-8') if isinstance(self.locales, unicode_type) else self.locales
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class StartOk(amqp_object.Method):

        INDEX = 0x000A000B  # 10, 11; 655371
        NAME = 'Connection.StartOk'

        def __init__(self, client_properties=None, mechanism='PLAIN',
                     response=None, locale='en_US'):
            self.client_properties = client_properties
            self.mechanism = mechanism
            self.response = response
            self.locale = locale

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            (self.client_properties, offset) = data.decode_table(encoded, offset)
            self.mechanism, offset = data.decode_short_string(encoded, offset)
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.response = encoded[offset:offset + length]
            try:
                self.response = str(self.response)
            except UnicodeEncodeError:
                pass
            offset += length
            self.locale, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            data.encode_table(pieces, self.client_properties)
            assert isinstance(self.mechanism, str_or_bytes),\
                'A non-string value was supplied for self.mechanism'
            data.encode_short_string(pieces, self.mechanism)
            assert isinstance(self.response, str_or_bytes),\
                'A non-string value was supplied for self.response'
            value = self.response.encode('utf-8') if isinstance(self.response, unicode_type) else self.response
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            assert isinstance(self.locale, str_or_bytes),\
                'A non-string value was supplied for self.locale'
            data.encode_short_string(pieces, self.locale)
            return pieces

    class Secure(amqp_object.Method):

        INDEX = 0x000A0014  # 10, 20; 655380
        NAME = 'Connection.Secure'

        def __init__(self, challenge=None):
            self.challenge = challenge

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.challenge = encoded[offset:offset + length]
            try:
                self.challenge = str(self.challenge)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.challenge, str_or_bytes),\
                'A non-string value was supplied for self.challenge'
            value = self.challenge.encode('utf-8') if isinstance(self.challenge, unicode_type) else self.challenge
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class SecureOk(amqp_object.Method):

        INDEX = 0x000A0015  # 10, 21; 655381
        NAME = 'Connection.SecureOk'

        def __init__(self, response=None):
            self.response = response

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.response = encoded[offset:offset + length]
            try:
                self.response = str(self.response)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.response, str_or_bytes),\
                'A non-string value was supplied for self.response'
            value = self.response.encode('utf-8') if isinstance(self.response, unicode_type) else self.response
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class Tune(amqp_object.Method):

        INDEX = 0x000A001E  # 10, 30; 655390
        NAME = 'Connection.Tune'

        def __init__(self, channel_max=0, frame_max=0, heartbeat=0):
            self.channel_max = channel_max
            self.frame_max = frame_max
            self.heartbeat = heartbeat

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.channel_max = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.frame_max = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.heartbeat = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.channel_max))
            pieces.append(struct.pack('>I', self.frame_max))
            pieces.append(struct.pack('>H', self.heartbeat))
            return pieces

    class TuneOk(amqp_object.Method):

        INDEX = 0x000A001F  # 10, 31; 655391
        NAME = 'Connection.TuneOk'

        def __init__(self, channel_max=0, frame_max=0, heartbeat=0):
            self.channel_max = channel_max
            self.frame_max = frame_max
            self.heartbeat = heartbeat

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.channel_max = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.frame_max = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.heartbeat = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.channel_max))
            pieces.append(struct.pack('>I', self.frame_max))
            pieces.append(struct.pack('>H', self.heartbeat))
            return pieces

    class Open(amqp_object.Method):

        INDEX = 0x000A0028  # 10, 40; 655400
        NAME = 'Connection.Open'

        def __init__(self, virtual_host='/', capabilities='', insist=False):
            self.virtual_host = virtual_host
            self.capabilities = capabilities
            self.insist = insist

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.virtual_host, offset = data.decode_short_string(encoded, offset)
            self.capabilities, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.insist = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.virtual_host, str_or_bytes),\
                'A non-string value was supplied for self.virtual_host'
            data.encode_short_string(pieces, self.virtual_host)
            assert isinstance(self.capabilities, str_or_bytes),\
                'A non-string value was supplied for self.capabilities'
            data.encode_short_string(pieces, self.capabilities)
            bit_buffer = 0
            if self.insist:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class OpenOk(amqp_object.Method):

        INDEX = 0x000A0029  # 10, 41; 655401
        NAME = 'Connection.OpenOk'

        def __init__(self, known_hosts=''):
            self.known_hosts = known_hosts

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.known_hosts, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.known_hosts, str_or_bytes),\
                'A non-string value was supplied for self.known_hosts'
            data.encode_short_string(pieces, self.known_hosts)
            return pieces

    class Close(amqp_object.Method):

        INDEX = 0x000A0032  # 10, 50; 655410
        NAME = 'Connection.Close'

        def __init__(self, reply_code=None, reply_text='', class_id=None,
                     method_id=None):
            self.reply_code = reply_code
            self.reply_text = reply_text
            self.class_id = class_id
            self.method_id = method_id

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.reply_text, offset = data.decode_short_string(encoded, offset)
            self.class_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.method_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, str_or_bytes),\
                'A non-string value was supplied for self.reply_text'
            data.encode_short_string(pieces, self.reply_text)
            pieces.append(struct.pack('>H', self.class_id))
            pieces.append(struct.pack('>H', self.method_id))
            return pieces

    class CloseOk(amqp_object.Method):

        INDEX = 0x000A0033  # 10, 51; 655411
        NAME = 'Connection.CloseOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Blocked(amqp_object.Method):

        INDEX = 0x000A003C  # 10, 60; 655420
        NAME = 'Connection.Blocked'

        def __init__(self, reason=''):
            self.reason = reason

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.reason, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.reason, str_or_bytes),\
                'A non-string value was supplied for self.reason'
            data.encode_short_string(pieces, self.reason)
            return pieces

    class Unblocked(amqp_object.Method):

        INDEX = 0x000A003D  # 10, 61; 655421
        NAME = 'Connection.Unblocked'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Channel(amqp_object.Class):

    INDEX = 0x0014  # 20
    NAME = 'Channel'

    class Open(amqp_object.Method):

        INDEX = 0x0014000A  # 20, 10; 1310730
        NAME = 'Channel.Open'

        def __init__(self, out_of_band=''):
            self.out_of_band = out_of_band

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.out_of_band, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.out_of_band, str_or_bytes),\
                'A non-string value was supplied for self.out_of_band'
            data.encode_short_string(pieces, self.out_of_band)
            return pieces

    class OpenOk(amqp_object.Method):

        INDEX = 0x0014000B  # 20, 11; 1310731
        NAME = 'Channel.OpenOk'

        def __init__(self, channel_id=''):
            self.channel_id = channel_id

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            length = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.channel_id = encoded[offset:offset + length]
            try:
                self.channel_id = str(self.channel_id)
            except UnicodeEncodeError:
                pass
            offset += length
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.channel_id, str_or_bytes),\
                'A non-string value was supplied for self.channel_id'
            value = self.channel_id.encode('utf-8') if isinstance(self.channel_id, unicode_type) else self.channel_id
            pieces.append(struct.pack('>I', len(value)))
            pieces.append(value)
            return pieces

    class Flow(amqp_object.Method):

        INDEX = 0x00140014  # 20, 20; 1310740
        NAME = 'Channel.Flow'

        def __init__(self, active=None):
            self.active = active

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.active = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.active:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class FlowOk(amqp_object.Method):

        INDEX = 0x00140015  # 20, 21; 1310741
        NAME = 'Channel.FlowOk'

        def __init__(self, active=None):
            self.active = active

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.active = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            bit_buffer = 0
            if self.active:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class Close(amqp_object.Method):

        INDEX = 0x00140028  # 20, 40; 1310760
        NAME = 'Channel.Close'

        def __init__(self, reply_code=None, reply_text='', class_id=None,
                     method_id=None):
            self.reply_code = reply_code
            self.reply_text = reply_text
            self.class_id = class_id
            self.method_id = method_id

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.reply_text, offset = data.decode_short_string(encoded, offset)
            self.class_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.method_id = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, str_or_bytes),\
                'A non-string value was supplied for self.reply_text'
            data.encode_short_string(pieces, self.reply_text)
            pieces.append(struct.pack('>H', self.class_id))
            pieces.append(struct.pack('>H', self.method_id))
            return pieces

    class CloseOk(amqp_object.Method):

        INDEX = 0x00140029  # 20, 41; 1310761
        NAME = 'Channel.CloseOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Access(amqp_object.Class):

    INDEX = 0x001E  # 30
    NAME = 'Access'

    class Request(amqp_object.Method):

        INDEX = 0x001E000A  # 30, 10; 1966090
        NAME = 'Access.Request'

        def __init__(self, realm='/data', exclusive=False, passive=True,
                     active=True, write=True, read=True):
            self.realm = realm
            self.exclusive = exclusive
            self.passive = passive
            self.active = active
            self.write = write
            self.read = read

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.realm, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.exclusive = (bit_buffer & (1 << 0)) != 0
            self.passive = (bit_buffer & (1 << 1)) != 0
            self.active = (bit_buffer & (1 << 2)) != 0
            self.write = (bit_buffer & (1 << 3)) != 0
            self.read = (bit_buffer & (1 << 4)) != 0
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.realm, str_or_bytes),\
                'A non-string value was supplied for self.realm'
            data.encode_short_string(pieces, self.realm)
            bit_buffer = 0
            if self.exclusive:
                bit_buffer |= 1 << 0
            if self.passive:
                bit_buffer |= 1 << 1
            if self.active:
                bit_buffer |= 1 << 2
            if self.write:
                bit_buffer |= 1 << 3
            if self.read:
                bit_buffer |= 1 << 4
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class RequestOk(amqp_object.Method):

        INDEX = 0x001E000B  # 30, 11; 1966091
        NAME = 'Access.RequestOk'

        def __init__(self, ticket=1):
            self.ticket = ticket

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            return pieces


class Exchange(amqp_object.Class):

    INDEX = 0x0028  # 40
    NAME = 'Exchange'

    class Declare(amqp_object.Method):

        INDEX = 0x0028000A  # 40, 10; 2621450
        NAME = 'Exchange.Declare'

        def __init__(self, ticket=0, exchange=None,
                     type=ExchangeType.direct, passive=False, durable=False,
                     auto_delete=False, internal=False, nowait=False,
                     arguments=None):
            self.ticket = ticket
            self.exchange = exchange
            self.type = type
            self.passive = passive
            self.durable = durable
            self.auto_delete = auto_delete
            self.internal = internal
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.type, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.passive = (bit_buffer & (1 << 0)) != 0
            self.durable = (bit_buffer & (1 << 1)) != 0
            self.auto_delete = (bit_buffer & (1 << 2)) != 0
            self.internal = (bit_buffer & (1 << 3)) != 0
            self.nowait = (bit_buffer & (1 << 4)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.type, str_or_bytes),\
                'A non-string value was supplied for self.type'
            data.encode_short_string(pieces, self.type)
            bit_buffer = 0
            if self.passive:
                bit_buffer |= 1 << 0
            if self.durable:
                bit_buffer |= 1 << 1
            if self.auto_delete:
                bit_buffer |= 1 << 2
            if self.internal:
                bit_buffer |= 1 << 3
            if self.nowait:
                bit_buffer |= 1 << 4
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class DeclareOk(amqp_object.Method):

        INDEX = 0x0028000B  # 40, 11; 2621451
        NAME = 'Exchange.DeclareOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Delete(amqp_object.Method):

        INDEX = 0x00280014  # 40, 20; 2621460
        NAME = 'Exchange.Delete'

        def __init__(self, ticket=0, exchange=None, if_unused=False,
                     nowait=False):
            self.ticket = ticket
            self.exchange = exchange
            self.if_unused = if_unused
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.exchange, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.if_unused = (bit_buffer & (1 << 0)) != 0
            self.nowait = (bit_buffer & (1 << 1)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            bit_buffer = 0
            if self.if_unused:
                bit_buffer |= 1 << 0
            if self.nowait:
                bit_buffer |= 1 << 1
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class DeleteOk(amqp_object.Method):

        INDEX = 0x00280015  # 40, 21; 2621461
        NAME = 'Exchange.DeleteOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Bind(amqp_object.Method):

        INDEX = 0x0028001E  # 40, 30; 2621470
        NAME = 'Exchange.Bind'

        def __init__(self, ticket=0, destination=None, source=None,
                     routing_key='', nowait=False, arguments=None):
            self.ticket = ticket
            self.destination = destination
            self.source = source
            self.routing_key = routing_key
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.destination, offset = data.decode_short_string(encoded, offset)
            self.source, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.destination, str_or_bytes),\
                'A non-string value was supplied for self.destination'
            data.encode_short_string(pieces, self.destination)
            assert isinstance(self.source, str_or_bytes),\
                'A non-string value was supplied for self.source'
            data.encode_short_string(pieces, self.source)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.nowait:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class BindOk(amqp_object.Method):

        INDEX = 0x0028001F  # 40, 31; 2621471
        NAME = 'Exchange.BindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Unbind(amqp_object.Method):

        INDEX = 0x00280028  # 40, 40; 2621480
        NAME = 'Exchange.Unbind'

        def __init__(self, ticket=0, destination=None, source=None,
                     routing_key='', nowait=False, arguments=None):
            self.ticket = ticket
            self.destination = destination
            self.source = source
            self.routing_key = routing_key
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.destination, offset = data.decode_short_string(encoded, offset)
            self.source, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.destination, str_or_bytes),\
                'A non-string value was supplied for self.destination'
            data.encode_short_string(pieces, self.destination)
            assert isinstance(self.source, str_or_bytes),\
                'A non-string value was supplied for self.source'
            data.encode_short_string(pieces, self.source)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.nowait:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class UnbindOk(amqp_object.Method):

        INDEX = 0x00280033  # 40, 51; 2621491
        NAME = 'Exchange.UnbindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Queue(amqp_object.Class):

    INDEX = 0x0032  # 50
    NAME = 'Queue'

    class Declare(amqp_object.Method):

        INDEX = 0x0032000A  # 50, 10; 3276810
        NAME = 'Queue.Declare'

        def __init__(self, ticket=0, queue='', passive=False, durable=False,
                     exclusive=False, auto_delete=False, nowait=False,
                     arguments=None):
            self.ticket = ticket
            self.queue = queue
            self.passive = passive
            self.durable = durable
            self.exclusive = exclusive
            self.auto_delete = auto_delete
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.passive = (bit_buffer & (1 << 0)) != 0
            self.durable = (bit_buffer & (1 << 1)) != 0
            self.exclusive = (bit_buffer & (1 << 2)) != 0
            self.auto_delete = (bit_buffer & (1 << 3)) != 0
            self.nowait = (bit_buffer & (1 << 4)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.passive:
                bit_buffer |= 1 << 0
            if self.durable:
                bit_buffer |= 1 << 1
            if self.exclusive:
                bit_buffer |= 1 << 2
            if self.auto_delete:
                bit_buffer |= 1 << 3
            if self.nowait:
                bit_buffer |= 1 << 4
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class DeclareOk(amqp_object.Method):

        INDEX = 0x0032000B  # 50, 11; 3276811
        NAME = 'Queue.DeclareOk'

        def __init__(self, queue=None, message_count=None,
                     consumer_count=None):
            self.queue = queue
            self.message_count = message_count
            self.consumer_count = consumer_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.consumer_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            pieces.append(struct.pack('>I', self.message_count))
            pieces.append(struct.pack('>I', self.consumer_count))
            return pieces

    class Bind(amqp_object.Method):

        INDEX = 0x00320014  # 50, 20; 3276820
        NAME = 'Queue.Bind'

        def __init__(self, ticket=0, queue='', exchange=None,
                     routing_key='', nowait=False, arguments=None):
            self.ticket = ticket
            self.queue = queue
            self.exchange = exchange
            self.routing_key = routing_key
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.nowait:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class BindOk(amqp_object.Method):

        INDEX = 0x00320015  # 50, 21; 3276821
        NAME = 'Queue.BindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Purge(amqp_object.Method):

        INDEX = 0x0032001E  # 50, 30; 3276830
        NAME = 'Queue.Purge'

        def __init__(self, ticket=0, queue='', nowait=False):
            self.ticket = ticket
            self.queue = queue
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.nowait:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class PurgeOk(amqp_object.Method):

        INDEX = 0x0032001F  # 50, 31; 3276831
        NAME = 'Queue.PurgeOk'

        def __init__(self, message_count=None):
            self.message_count = message_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>I', self.message_count))
            return pieces

    class Delete(amqp_object.Method):

        INDEX = 0x00320028  # 50, 40; 3276840
        NAME = 'Queue.Delete'

        def __init__(self, ticket=0, queue='', if_unused=False,
                     if_empty=False, nowait=False):
            self.ticket = ticket
            self.queue = queue
            self.if_unused = if_unused
            self.if_empty = if_empty
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.if_unused = (bit_buffer & (1 << 0)) != 0
            self.if_empty = (bit_buffer & (1 << 1)) != 0
            self.nowait = (bit_buffer & (1 << 2)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            bit_buffer = 0
            if self.if_unused:
                bit_buffer |= 1 << 0
            if self.if_empty:
                bit_buffer |= 1 << 1
            if self.nowait:
                bit_buffer |= 1 << 2
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class DeleteOk(amqp_object.Method):

        INDEX = 0x00320029  # 50, 41; 3276841
        NAME = 'Queue.DeleteOk'

        def __init__(self, message_count=None):
            self.message_count = message_count

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.message_count = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>I', self.message_count))
            return pieces

    class Unbind(amqp_object.Method):

        INDEX = 0x00320032  # 50, 50; 3276850
        NAME = 'Queue.Unbind'

        def __init__(self, ticket=0, queue='', exchange=None,
                     routing_key='', arguments=None):
            self.ticket = ticket
            self.queue = queue
            self.exchange = exchange
            self.routing_key = routing_key
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            data.encode_table(pieces, self.arguments)
            return pieces

    class UnbindOk(amqp_object.Method):

        INDEX = 0x00320033  # 50, 51; 3276851
        NAME = 'Queue.UnbindOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces


class Basic(amqp_object.Class):

    INDEX = 0x003C  # 60
    NAME = 'Basic'

    class Qos(amqp_object.Method):

        INDEX = 0x003C000A  # 60, 10; 3932170
        NAME = 'Basic.Qos'

        def __init__(self, prefetch_size=0, prefetch_count=0,
                     global_qos=False):
            self.prefetch_size = prefetch_size
            self.prefetch_count = prefetch_count
            self.global_qos = global_qos

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.prefetch_size = struct.unpack_from('>I', encoded, offset)[0]
            offset += 4
            self.prefetch_count = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.global_qos = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>I', self.prefetch_size))
            pieces.append(struct.pack('>H', self.prefetch_count))
            bit_buffer = 0
            if self.global_qos:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class QosOk(amqp_object.Method):

        INDEX = 0x003C000B  # 60, 11; 3932171
        NAME = 'Basic.QosOk'

        def __init__(self):
            pass

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            return self

        def encode(self):
            pieces = list()
            return pieces

    class Consume(amqp_object.Method):

        INDEX = 0x003C0014  # 60, 20; 3932180
        NAME = 'Basic.Consume'

        def __init__(self, ticket=0, queue='', consumer_tag='',
                     no_local=False, no_ack=False, exclusive=False,
                     nowait=False, arguments=None):
            self.ticket = ticket
            self.queue = queue
            self.consumer_tag = consumer_tag
            self.no_local = no_local
            self.no_ack = no_ack
            self.exclusive = exclusive
            self.nowait = nowait
            self.arguments = arguments

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.no_local = (bit_buffer & (1 << 0)) != 0
            self.no_ack = (bit_buffer & (1 << 1)) != 0
            self.exclusive = (bit_buffer & (1 << 2)) != 0
            self.nowait = (bit_buffer & (1 << 3)) != 0
            (self.arguments, offset) = data.decode_table(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
            assert isinstance(self.consumer_tag, str_or_bytes),\
                'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            bit_buffer = 0
            if self.no_local:
                bit_buffer |= 1 << 0
            if self.no_ack:
                bit_buffer |= 1 << 1
            if self.exclusive:
                bit_buffer |= 1 << 2
            if self.nowait:
                bit_buffer |= 1 << 3
            pieces.append(struct.pack('B', bit_buffer))
            data.encode_table(pieces, self.arguments)
            return pieces

    class ConsumeOk(amqp_object.Method):

        INDEX = 0x003C0015  # 60, 21; 3932181
        NAME = 'Basic.ConsumeOk'

        def __init__(self, consumer_tag=None):
            self.consumer_tag = consumer_tag

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            return pieces

    class Cancel(amqp_object.Method):

        INDEX = 0x003C001E  # 60, 30; 3932190
        NAME = 'Basic.Cancel'

        def __init__(self, consumer_tag=None, nowait=False):
            self.consumer_tag = consumer_tag
            self.nowait = nowait

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.nowait = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            bit_buffer = 0
            if self.nowait:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class CancelOk(amqp_object.Method):

        INDEX = 0x003C001F  # 60, 31; 3932191
        NAME = 'Basic.CancelOk'

        def __init__(self, consumer_tag=None):
            self.consumer_tag = consumer_tag

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            return pieces

    class Publish(amqp_object.Method):

        INDEX = 0x003C0028  # 60, 40; 3932200
        NAME = 'Basic.Publish'

        def __init__(self, ticket=0, exchange='', routing_key='',
                     mandatory=False, immediate=False):
            self.ticket = ticket
            self.exchange = exchange
            self.routing_key = routing_key
            self.mandatory = mandatory
            self.immediate = immediate

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.mandatory = (bit_buffer & (1 << 0)) != 0
            self.immediate = (bit_buffer & (1 << 1)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            bit_buffer = 0
            if self.mandatory:
                bit_buffer |= 1 << 0
            if self.immediate:
                bit_buffer |= 1 << 1
            pieces.append(struct.pack('B', bit_buffer))
            return pieces

    class Return(amqp_object.Method):

        INDEX = 0x003C0032  # 60, 50; 3932210
        NAME = 'Basic.Return'

        def __init__(self, reply_code=None, reply_text='', exchange=None,
                     routing_key=None):
            self.reply_code = reply_code
            self.reply_text = reply_text
            self.exchange = exchange
            self.routing_key = routing_key

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.reply_code = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.reply_text, offset = data.decode_short_string(encoded, offset)
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.reply_code))
            assert isinstance(self.reply_text, str_or_bytes),\
                'A non-string value was supplied for self.reply_text'
            data.encode_short_string(pieces, self.reply_text)
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            return pieces

    class Deliver(amqp_object.Method):

        INDEX = 0x003C003C  # 60, 60; 3932220
        NAME = 'Basic.Deliver'

        def __init__(self, consumer_tag=None, delivery_tag=None,
                     redelivered=False, exchange=None, routing_key=None):
            self.consumer_tag = consumer_tag
            self.delivery_tag = delivery_tag
            self.redelivered = redelivered
            self.exchange = exchange
            self.routing_key = routing_key

        @property
        def synchronous(self):
            return False

        def decode(self, encoded, offset=0):
            self.consumer_tag, offset = data.decode_short_string(encoded, offset)
            self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0]
            offset += 8
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.redelivered = (bit_buffer & (1 << 0)) != 0
            self.exchange, offset = data.decode_short_string(encoded, offset)
            self.routing_key, offset = data.decode_short_string(encoded, offset)
            return self

        def encode(self):
            pieces = list()
            assert isinstance(self.consumer_tag, str_or_bytes),\
                'A non-string value was supplied for self.consumer_tag'
            data.encode_short_string(pieces, self.consumer_tag)
            pieces.append(struct.pack('>Q', self.delivery_tag))
            bit_buffer = 0
            if self.redelivered:
                bit_buffer |= 1 << 0
            pieces.append(struct.pack('B', bit_buffer))
            assert isinstance(self.exchange, str_or_bytes),\
                'A non-string value was supplied for self.exchange'
            data.encode_short_string(pieces, self.exchange)
            assert isinstance(self.routing_key, str_or_bytes),\
                'A non-string value was supplied for self.routing_key'
            data.encode_short_string(pieces, self.routing_key)
            return pieces

    class Get(amqp_object.Method):

        INDEX = 0x003C0046  # 60, 70; 3932230
        NAME = 'Basic.Get'

        def __init__(self, ticket=0, queue='', no_ack=False):
            self.ticket = ticket
            self.queue = queue
            self.no_ack = no_ack

        @property
        def synchronous(self):
            return True

        def decode(self, encoded, offset=0):
            self.ticket = struct.unpack_from('>H', encoded, offset)[0]
            offset += 2
            self.queue, offset = data.decode_short_string(encoded, offset)
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            self.no_ack = (bit_buffer & (1 << 0)) != 0
            return self

        def encode(self):
            pieces = list()
            pieces.append(struct.pack('>H', self.ticket))
            assert isinstance(self.queue, str_or_bytes),\
                'A non-string value was supplied for self.queue'
            data.encode_short_string(pieces, self.queue)
bit_buffer = 0 if self.no_ack: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) return pieces class GetOk(amqp_object.Method): INDEX = 0x003C0047 # 60, 71; 3932231 NAME = 'Basic.GetOk' def __init__(self, delivery_tag=None, redelivered=False, exchange=None, routing_key=None, message_count=None): self.delivery_tag = delivery_tag self.redelivered = redelivered self.exchange = exchange self.routing_key = routing_key self.message_count = message_count @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.redelivered = (bit_buffer & (1 << 0)) != 0 self.exchange, offset = data.decode_short_string(encoded, offset) self.routing_key, offset = data.decode_short_string(encoded, offset) self.message_count = struct.unpack_from('>I', encoded, offset)[0] offset += 4 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.redelivered: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) assert isinstance(self.exchange, str_or_bytes),\ 'A non-string value was supplied for self.exchange' data.encode_short_string(pieces, self.exchange) assert isinstance(self.routing_key, str_or_bytes),\ 'A non-string value was supplied for self.routing_key' data.encode_short_string(pieces, self.routing_key) pieces.append(struct.pack('>I', self.message_count)) return pieces class GetEmpty(amqp_object.Method): INDEX = 0x003C0048 # 60, 72; 3932232 NAME = 'Basic.GetEmpty' def __init__(self, cluster_id=''): self.cluster_id = cluster_id @property def synchronous(self): return False def decode(self, encoded, offset=0): self.cluster_id, offset = data.decode_short_string(encoded, offset) return self def encode(self): pieces = list() assert isinstance(self.cluster_id, str_or_bytes),\ 'A non-string value was supplied for self.cluster_id' 
data.encode_short_string(pieces, self.cluster_id) return pieces class Ack(amqp_object.Method): INDEX = 0x003C0050 # 60, 80; 3932240 NAME = 'Basic.Ack' def __init__(self, delivery_tag=0, multiple=False): self.delivery_tag = delivery_tag self.multiple = multiple @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.multiple = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.multiple: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) return pieces class Reject(amqp_object.Method): INDEX = 0x003C005A # 60, 90; 3932250 NAME = 'Basic.Reject' def __init__(self, delivery_tag=None, requeue=True): self.delivery_tag = delivery_tag self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.requeue: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) return pieces class RecoverAsync(amqp_object.Method): INDEX = 0x003C0064 # 60, 100; 3932260 NAME = 'Basic.RecoverAsync' def __init__(self, requeue=False): self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.requeue: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) return pieces class Recover(amqp_object.Method): INDEX = 0x003C006E 
# 60, 110; 3932270 NAME = 'Basic.Recover' def __init__(self, requeue=False): self.requeue = requeue @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.requeue = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.requeue: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) return pieces class RecoverOk(amqp_object.Method): INDEX = 0x003C006F # 60, 111; 3932271 NAME = 'Basic.RecoverOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Nack(amqp_object.Method): INDEX = 0x003C0078 # 60, 120; 3932280 NAME = 'Basic.Nack' def __init__(self, delivery_tag=0, multiple=False, requeue=True): self.delivery_tag = delivery_tag self.multiple = multiple self.requeue = requeue @property def synchronous(self): return False def decode(self, encoded, offset=0): self.delivery_tag = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.multiple = (bit_buffer & (1 << 0)) != 0 self.requeue = (bit_buffer & (1 << 1)) != 0 return self def encode(self): pieces = list() pieces.append(struct.pack('>Q', self.delivery_tag)) bit_buffer = 0 if self.multiple: bit_buffer |= 1 << 0 if self.requeue: bit_buffer |= 1 << 1 pieces.append(struct.pack('B', bit_buffer)) return pieces class Tx(amqp_object.Class): INDEX = 0x005A # 90 NAME = 'Tx' class Select(amqp_object.Method): INDEX = 0x005A000A # 90, 10; 5898250 NAME = 'Tx.Select' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class SelectOk(amqp_object.Method): INDEX = 0x005A000B # 90, 11; 5898251 NAME = 'Tx.SelectOk' def __init__(self): pass @property def synchronous(self): return 
False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Commit(amqp_object.Method): INDEX = 0x005A0014 # 90, 20; 5898260 NAME = 'Tx.Commit' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class CommitOk(amqp_object.Method): INDEX = 0x005A0015 # 90, 21; 5898261 NAME = 'Tx.CommitOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Rollback(amqp_object.Method): INDEX = 0x005A001E # 90, 30; 5898270 NAME = 'Tx.Rollback' def __init__(self): pass @property def synchronous(self): return True def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class RollbackOk(amqp_object.Method): INDEX = 0x005A001F # 90, 31; 5898271 NAME = 'Tx.RollbackOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class Confirm(amqp_object.Class): INDEX = 0x0055 # 85 NAME = 'Confirm' class Select(amqp_object.Method): INDEX = 0x0055000A # 85, 10; 5570570 NAME = 'Confirm.Select' def __init__(self, nowait=False): self.nowait = nowait @property def synchronous(self): return True def decode(self, encoded, offset=0): bit_buffer = struct.unpack_from('B', encoded, offset)[0] offset += 1 self.nowait = (bit_buffer & (1 << 0)) != 0 return self def encode(self): pieces = list() bit_buffer = 0 if self.nowait: bit_buffer |= 1 << 0 pieces.append(struct.pack('B', bit_buffer)) return pieces class SelectOk(amqp_object.Method): INDEX = 0x0055000B # 85, 11; 5570571 NAME = 'Confirm.SelectOk' def __init__(self): pass @property def synchronous(self): return False def decode(self, encoded, offset=0): return self def encode(self): pieces = list() return pieces class 
BasicProperties(amqp_object.Properties): CLASS = Basic INDEX = 0x003C # 60 NAME = 'BasicProperties' FLAG_CONTENT_TYPE = (1 << 15) FLAG_CONTENT_ENCODING = (1 << 14) FLAG_HEADERS = (1 << 13) FLAG_DELIVERY_MODE = (1 << 12) FLAG_PRIORITY = (1 << 11) FLAG_CORRELATION_ID = (1 << 10) FLAG_REPLY_TO = (1 << 9) FLAG_EXPIRATION = (1 << 8) FLAG_MESSAGE_ID = (1 << 7) FLAG_TIMESTAMP = (1 << 6) FLAG_TYPE = (1 << 5) FLAG_USER_ID = (1 << 4) FLAG_APP_ID = (1 << 3) FLAG_CLUSTER_ID = (1 << 2) def __init__(self, content_type=None, content_encoding=None, headers=None, delivery_mode=None, priority=None, correlation_id=None, reply_to=None, expiration=None, message_id=None, timestamp=None, type=None, user_id=None, app_id=None, cluster_id=None): self.content_type = content_type self.content_encoding = content_encoding self.headers = headers self.delivery_mode = delivery_mode self.priority = priority self.correlation_id = correlation_id self.reply_to = reply_to self.expiration = expiration self.message_id = message_id self.timestamp = timestamp self.type = type self.user_id = user_id self.app_id = app_id self.cluster_id = cluster_id def decode(self, encoded, offset=0): flags = 0 flagword_index = 0 while True: partial_flags = struct.unpack_from('>H', encoded, offset)[0] offset += 2 flags = flags | (partial_flags << (flagword_index * 16)) if not (partial_flags & 1): break flagword_index += 1 if flags & BasicProperties.FLAG_CONTENT_TYPE: self.content_type, offset = data.decode_short_string(encoded, offset) else: self.content_type = None if flags & BasicProperties.FLAG_CONTENT_ENCODING: self.content_encoding, offset = data.decode_short_string(encoded, offset) else: self.content_encoding = None if flags & BasicProperties.FLAG_HEADERS: (self.headers, offset) = data.decode_table(encoded, offset) else: self.headers = None if flags & BasicProperties.FLAG_DELIVERY_MODE: self.delivery_mode = struct.unpack_from('B', encoded, offset)[0] offset += 1 else: self.delivery_mode = None if flags & 
BasicProperties.FLAG_PRIORITY: self.priority = struct.unpack_from('B', encoded, offset)[0] offset += 1 else: self.priority = None if flags & BasicProperties.FLAG_CORRELATION_ID: self.correlation_id, offset = data.decode_short_string(encoded, offset) else: self.correlation_id = None if flags & BasicProperties.FLAG_REPLY_TO: self.reply_to, offset = data.decode_short_string(encoded, offset) else: self.reply_to = None if flags & BasicProperties.FLAG_EXPIRATION: self.expiration, offset = data.decode_short_string(encoded, offset) else: self.expiration = None if flags & BasicProperties.FLAG_MESSAGE_ID: self.message_id, offset = data.decode_short_string(encoded, offset) else: self.message_id = None if flags & BasicProperties.FLAG_TIMESTAMP: self.timestamp = struct.unpack_from('>Q', encoded, offset)[0] offset += 8 else: self.timestamp = None if flags & BasicProperties.FLAG_TYPE: self.type, offset = data.decode_short_string(encoded, offset) else: self.type = None if flags & BasicProperties.FLAG_USER_ID: self.user_id, offset = data.decode_short_string(encoded, offset) else: self.user_id = None if flags & BasicProperties.FLAG_APP_ID: self.app_id, offset = data.decode_short_string(encoded, offset) else: self.app_id = None if flags & BasicProperties.FLAG_CLUSTER_ID: self.cluster_id, offset = data.decode_short_string(encoded, offset) else: self.cluster_id = None return self def encode(self): pieces = list() flags = 0 if self.content_type is not None: flags = flags | BasicProperties.FLAG_CONTENT_TYPE assert isinstance(self.content_type, str_or_bytes),\ 'A non-string value was supplied for self.content_type' data.encode_short_string(pieces, self.content_type) if self.content_encoding is not None: flags = flags | BasicProperties.FLAG_CONTENT_ENCODING assert isinstance(self.content_encoding, str_or_bytes),\ 'A non-string value was supplied for self.content_encoding' data.encode_short_string(pieces, self.content_encoding) if self.headers is not None: flags = flags | 
BasicProperties.FLAG_HEADERS data.encode_table(pieces, self.headers) if self.delivery_mode is not None: flags = flags | BasicProperties.FLAG_DELIVERY_MODE pieces.append(struct.pack('B', self.delivery_mode)) if self.priority is not None: flags = flags | BasicProperties.FLAG_PRIORITY pieces.append(struct.pack('B', self.priority)) if self.correlation_id is not None: flags = flags | BasicProperties.FLAG_CORRELATION_ID assert isinstance(self.correlation_id, str_or_bytes),\ 'A non-string value was supplied for self.correlation_id' data.encode_short_string(pieces, self.correlation_id) if self.reply_to is not None: flags = flags | BasicProperties.FLAG_REPLY_TO assert isinstance(self.reply_to, str_or_bytes),\ 'A non-string value was supplied for self.reply_to' data.encode_short_string(pieces, self.reply_to) if self.expiration is not None: flags = flags | BasicProperties.FLAG_EXPIRATION assert isinstance(self.expiration, str_or_bytes),\ 'A non-string value was supplied for self.expiration' data.encode_short_string(pieces, self.expiration) if self.message_id is not None: flags = flags | BasicProperties.FLAG_MESSAGE_ID assert isinstance(self.message_id, str_or_bytes),\ 'A non-string value was supplied for self.message_id' data.encode_short_string(pieces, self.message_id) if self.timestamp is not None: flags = flags | BasicProperties.FLAG_TIMESTAMP pieces.append(struct.pack('>Q', self.timestamp)) if self.type is not None: flags = flags | BasicProperties.FLAG_TYPE assert isinstance(self.type, str_or_bytes),\ 'A non-string value was supplied for self.type' data.encode_short_string(pieces, self.type) if self.user_id is not None: flags = flags | BasicProperties.FLAG_USER_ID assert isinstance(self.user_id, str_or_bytes),\ 'A non-string value was supplied for self.user_id' data.encode_short_string(pieces, self.user_id) if self.app_id is not None: flags = flags | BasicProperties.FLAG_APP_ID assert isinstance(self.app_id, str_or_bytes),\ 'A non-string value was supplied for 
self.app_id' data.encode_short_string(pieces, self.app_id) if self.cluster_id is not None: flags = flags | BasicProperties.FLAG_CLUSTER_ID assert isinstance(self.cluster_id, str_or_bytes),\ 'A non-string value was supplied for self.cluster_id' data.encode_short_string(pieces, self.cluster_id) flag_pieces = list() while True: remainder = flags >> 16 partial_flags = flags & 0xFFFE if remainder != 0: partial_flags |= 1 flag_pieces.append(struct.pack('>H', partial_flags)) flags = remainder if not flags: break return flag_pieces + pieces methods = { 0x000A000A: Connection.Start, 0x000A000B: Connection.StartOk, 0x000A0014: Connection.Secure, 0x000A0015: Connection.SecureOk, 0x000A001E: Connection.Tune, 0x000A001F: Connection.TuneOk, 0x000A0028: Connection.Open, 0x000A0029: Connection.OpenOk, 0x000A0032: Connection.Close, 0x000A0033: Connection.CloseOk, 0x000A003C: Connection.Blocked, 0x000A003D: Connection.Unblocked, 0x0014000A: Channel.Open, 0x0014000B: Channel.OpenOk, 0x00140014: Channel.Flow, 0x00140015: Channel.FlowOk, 0x00140028: Channel.Close, 0x00140029: Channel.CloseOk, 0x001E000A: Access.Request, 0x001E000B: Access.RequestOk, 0x0028000A: Exchange.Declare, 0x0028000B: Exchange.DeclareOk, 0x00280014: Exchange.Delete, 0x00280015: Exchange.DeleteOk, 0x0028001E: Exchange.Bind, 0x0028001F: Exchange.BindOk, 0x00280028: Exchange.Unbind, 0x00280033: Exchange.UnbindOk, 0x0032000A: Queue.Declare, 0x0032000B: Queue.DeclareOk, 0x00320014: Queue.Bind, 0x00320015: Queue.BindOk, 0x0032001E: Queue.Purge, 0x0032001F: Queue.PurgeOk, 0x00320028: Queue.Delete, 0x00320029: Queue.DeleteOk, 0x00320032: Queue.Unbind, 0x00320033: Queue.UnbindOk, 0x003C000A: Basic.Qos, 0x003C000B: Basic.QosOk, 0x003C0014: Basic.Consume, 0x003C0015: Basic.ConsumeOk, 0x003C001E: Basic.Cancel, 0x003C001F: Basic.CancelOk, 0x003C0028: Basic.Publish, 0x003C0032: Basic.Return, 0x003C003C: Basic.Deliver, 0x003C0046: Basic.Get, 0x003C0047: Basic.GetOk, 0x003C0048: Basic.GetEmpty, 0x003C0050: Basic.Ack, 0x003C005A: 
Basic.Reject, 0x003C0064: Basic.RecoverAsync, 0x003C006E: Basic.Recover, 0x003C006F: Basic.RecoverOk, 0x003C0078: Basic.Nack, 0x005A000A: Tx.Select, 0x005A000B: Tx.SelectOk, 0x005A0014: Tx.Commit, 0x005A0015: Tx.CommitOk, 0x005A001E: Tx.Rollback, 0x005A001F: Tx.RollbackOk, 0x0055000A: Confirm.Select, 0x0055000B: Confirm.SelectOk } props = { 0x003C: BasicProperties } def has_content(methodNumber): return methodNumber in ( Basic.Publish.INDEX, Basic.Return.INDEX, Basic.Deliver.INDEX, Basic.GetOk.INDEX, ) pika-1.2.0/pika/tcp_socket_opts.py000066400000000000000000000027511400701476500171500ustar00rootroot00000000000000# pylint: disable=C0111 import logging import socket import pika.compat LOGGER = logging.getLogger(__name__) _SUPPORTED_TCP_OPTIONS = {} try: _SUPPORTED_TCP_OPTIONS['TCP_USER_TIMEOUT'] = socket.TCP_USER_TIMEOUT except AttributeError: if pika.compat.LINUX_VERSION and pika.compat.LINUX_VERSION >= (2, 6, 37): # this is not the timeout value, but the number corresponding # to the constant in tcp.h # https://github.com/torvalds/linux/blob/master/include/uapi/linux/tcp.h# # #define TCP_USER_TIMEOUT 18 /* How long for loss retry before timeout */ _SUPPORTED_TCP_OPTIONS['TCP_USER_TIMEOUT'] = 18 try: _SUPPORTED_TCP_OPTIONS['TCP_KEEPIDLE'] = socket.TCP_KEEPIDLE _SUPPORTED_TCP_OPTIONS['TCP_KEEPCNT'] = socket.TCP_KEEPCNT _SUPPORTED_TCP_OPTIONS['TCP_KEEPINTVL'] = socket.TCP_KEEPINTVL except AttributeError: pass def socket_requires_keepalive(tcp_options): return ('TCP_KEEPIDLE' in tcp_options or 'TCP_KEEPCNT' in tcp_options or 'TCP_KEEPINTVL' in tcp_options) def set_sock_opts(tcp_options, sock): if not tcp_options: return if socket_requires_keepalive(tcp_options): sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1) for key, value in tcp_options.items(): option = _SUPPORTED_TCP_OPTIONS.get(key) if option: sock.setsockopt(pika.compat.SOL_TCP, option, value) else: LOGGER.warning('Unsupported TCP option %s:%s', key, value) 
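The `set_sock_opts` function above translates a user-supplied `tcp_options` dict into `setsockopt` calls, enabling `SO_KEEPALIVE` first whenever any `TCP_KEEP*` key is present. A minimal, self-contained sketch of that behavior follows; the `RecordingSocket` stub and `apply_tcp_options` helper are illustrative only, not part of pika, and the table is reduced to the single Linux `tcp.h` constant used by the module above:

```python
import socket

# pika.compat.SOL_TCP resolves similarly; fall back to IPPROTO_TCP when absent.
SOL_TCP = getattr(socket, 'SOL_TCP', socket.IPPROTO_TCP)

# Reduced option table: TCP_USER_TIMEOUT is option 18 in Linux tcp.h,
# the same fallback value hard-coded in tcp_socket_opts.py above.
SUPPORTED = {'TCP_USER_TIMEOUT': 18}


class RecordingSocket:
    """Hypothetical fake socket that records setsockopt calls."""

    def __init__(self):
        self.calls = []

    def setsockopt(self, level, option, value):
        self.calls.append((level, option, value))


def apply_tcp_options(tcp_options, sock):
    # Mirrors set_sock_opts: keepalive first if any TCP_KEEP* key is present,
    # then each supported option; unsupported keys would be logged and skipped.
    if any(k in tcp_options for k in ('TCP_KEEPIDLE', 'TCP_KEEPCNT', 'TCP_KEEPINTVL')):
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    for key, value in tcp_options.items():
        option = SUPPORTED.get(key)
        if option:
            sock.setsockopt(SOL_TCP, option, value)


sock = RecordingSocket()
apply_tcp_options({'TCP_USER_TIMEOUT': 5000}, sock)
print(sock.calls)
```

Recording calls on a stub socket keeps the sketch deterministic and avoids platform-dependent constants such as `TCP_KEEPIDLE`, which does not exist on every OS.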
pika-1.2.0/pika/validators.py

"""
Common validation functions
"""

from pika.compat import basestring


def require_string(value, value_name):
    """Require that value is a string

    :raises: TypeError

    """
    if not isinstance(value, basestring):
        raise TypeError('%s must be a str or unicode str, but got %r' % (
            value_name,
            value,
        ))


def require_callback(callback, callback_name='callback'):
    """Require that callback is callable and is not None

    :raises: TypeError

    """
    if not callable(callback):
        raise TypeError('callback %s must be callable, but got %r' % (
            callback_name,
            callback,
        ))


def rpc_completion_callback(callback):
    """Verify callback is callable if not None

    :returns: boolean indicating nowait
    :rtype: bool
    :raises: TypeError

    """
    if callback is None:
        # No callback means we will not expect a response
        # i.e. nowait=True
        return True

    if callable(callback):
        # nowait=False
        return False
    else:
        raise TypeError('completion callback must be callable if not None')


def zero_or_greater(name, value):
    """Verify that value is zero or greater. If not, 'name' will be used in
    error message

    :raises: ValueError

    """
    if int(value) < 0:
        errmsg = '{} must be >= 0, but got {}'.format(name, value)
        raise ValueError(errmsg)


pika-1.2.0/pylintrc

[MASTER]

# Specify a configuration file.
#rcfile=

# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=

# Profiled execution.
profile=no

# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=CVS

# Pickle collected data for later comparisons.
persistent=no

# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=

# Deprecated. It was used to include message's id in output. Use --msg-template
# instead.
#include-ids=no

# Deprecated. It was used to include symbolic ids of messages in output. Use
# --msg-template instead.
#symbols=no

# Use multiple processes to speed up Pylint.
#jobs=1

# Allow loading of arbitrary C extensions. Extensions are imported into the
# active Python interpreter and may run arbitrary code.
#unsafe-load-any-extension=no

# A comma-separated list of package or module names from where C extensions may
# be loaded. Extensions are loading into the active Python interpreter and may
# run arbitrary code
#extension-pkg-whitelist=

# Allow optimization of some AST trees. This will activate a peephole AST
# optimizer, which will apply various small optimizations. For instance, it can
# be used to obtain the result of joining multiple strings with the addition
# operator. Joining a lot of strings can lead to a maximum recursion error in
# Pylint and this flag can prevent that. It has one side effect, the resulting
# AST will be different than the one from reality.
#optimize-ast=no


[MESSAGES CONTROL]

# Only show warnings with the listed confidence levels. Leave empty to show
# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED
confidence=

# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time. See also the "--disable" option for examples.
#enable=

# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once).You can also use "--disable=all" to
# disable everything first and then reenable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use"--disable=all --enable=classes
# --disable=W"
#
# disable R0205: Class 'Foo' inherits from object, can be safely removed from
#                bases in python3 (useless-object-inheritance)
#
# disable C0302: too many lines in module
#
# disable W0511: TODOs
#
# disable R0801: similar lines in files
disable=R0801,R1705,R0205,C0302,W0511


[REPORTS]

# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html. You can also give a reporter class, eg
# mypackage.mymodule.MyReporterClass.
output-format=text

# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no

# Tells whether to display a full report or only the messages
reports=no

# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)

# Add a comment according to your evaluation note. This is used by the global
# evaluation report (RP0004).
comment=no

# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details
msg-template={msg_id}, {line:3d}:{column:2d} - {msg} ({symbol})
#msg-template=


[BASIC]

# Required attributes for module, separated by a comma
required-attributes=

# List of builtins function names that should not be used, separated by a comma
bad-functions=map,filter,input

# Good variable names which should always be accepted, separated by a comma
good-names=i,j,k,ex,fd,Run,_

# Bad variable names which should always be refused, separated by a comma
bad-names=foo,bar,baz,toto,tutu,tata

# Colon-delimited sets of names that determine each other's naming style when
# the name regexes allow several styles.
name-group=

# Include a hint for the correct naming format with invalid-name
include-naming-hint=no

# Regular expression matching correct function names
function-rgx=[a-z_][a-z0-9_]{2,40}$

# Naming hint for function names
function-name-hint=[a-z_][a-z0-9_]{2,40}$

# Regular expression matching correct variable names
variable-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for variable names
variable-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression matching correct constant names
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$

# Naming hint for constant names
const-name-hint=(([A-Z_][A-Z0-9_]*)|(__.*__))$

# Regular expression matching correct attribute names
attr-rgx=[a-z_][a-z0-9_]{2,40}$

# Naming hint for attribute names
attr-name-hint=[a-z_][a-z0-9_]{2,40}$

# Regular expression matching correct argument names
argument-rgx=[a-z_][a-z0-9_]{2,30}$

# Naming hint for argument names
argument-name-hint=[a-z_][a-z0-9_]{2,30}$

# Regular expression matching correct class attribute names
class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,40}|(__.*__))$

# Naming hint for class attribute names
class-attribute-name-hint=([A-Za-z_][A-Za-z0-9_]{2,40}|(__.*__))$

# Regular expression matching correct inline iteration names
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$

# Naming hint for inline iteration names
inlinevar-name-hint=[A-Za-z_][A-Za-z0-9_]*$

# Regular expression matching correct class names
class-rgx=[A-Z_][a-zA-Z0-9]+$

# Naming hint for class names
class-name-hint=[A-Z_][a-zA-Z0-9]+$

# Regular expression matching correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$

# Naming hint for module names
module-name-hint=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$

# Regular expression matching correct method names
method-rgx=[a-z_][a-z0-9_]{2,40}$

# Naming hint for method names
method-name-hint=[a-z_][a-z0-9_]{2,40}$

# Regular expression which should only match function or class names that do
# not require a docstring.
no-docstring-rgx=__.*__

# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=-1


[FORMAT]

# Maximum number of characters on a single line.
max-line-length=100

# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=^\s*(# )?<?https?://\S+>?$

# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=no

# List of optional constructs for which whitespace checking is disabled
no-space-check=trailing-comma,dict-separator

# Maximum number of lines in a module
max-module-lines=1000

# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
# tab).
indent-string='    '

# Number of spaces of indent required inside a hanging or continued line.
indent-after-paren=4

# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
expected-line-ending-format=


[LOGGING]

# Logging modules to check that the string format arguments are in logging
# function parameter format
logging-modules=logging


[MISCELLANEOUS]

# List of note tags to take in consideration, separated by a comma.
notes=FIXME,XXX,TODO


[SIMILARITIES]

# Minimum lines number of a similarity.
min-similarity-lines=4

# Ignore comments when computing similarities.
ignore-comments=yes

# Ignore docstrings when computing similarities.
ignore-docstrings=yes

# Ignore imports when computing similarities.
ignore-imports=no


[SPELLING]

# Spelling dictionary name. Available dictionaries: none. To make it working
# install python-enchant package.
spelling-dict=

# List of comma separated words that should not be checked.
spelling-ignore-words=

# A path to a file that contains private dictionary; one word per line.
spelling-private-dict-file=

# Tells whether to store unknown words to indicated private dictionary in
# --spelling-private-dict-file option instead of raising a message.
spelling-store-unknown-words=no


[TYPECHECK]

# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes

# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime
# and thus existing member attributes cannot be deduced by static analysis
ignored-modules=

# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
ignored-classes=SQLObject

# When zope mode is activated, add a predefined set of Zope acquired attributes
# to generated-members.
zope=no

# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent


[VARIABLES]

# Tells whether we should check for unused import in __init__ files.
init-import=no

# A regular expression matching the name of dummy variables (i.e. expectedly
# not used).
dummy-variables-rgx=_|_$|dummy

# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=

# List of strings which can identify a callback function by name. A callback
# name must start or end with one of those strings.
callbacks=cb_,_cb


[CLASSES]

# List of interface methods to ignore, separated by a comma. This is used for
# instance to not check methods defines in Zope's Interface base class.
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by

# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp

# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls

# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs

# List of member names, which should be excluded from the protected access
# warning.
exclude-protected=_asdict,_fields,_replace,_source,_make


[DESIGN]

# Maximum number of arguments for function / method
max-args=10

# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*

# Maximum number of locals for function / method body
max-locals=15

# Maximum number of return / yield for function / method body
max-returns=6

# Maximum number of branch for function / method body
max-branches=20

# Maximum number of statements in function / method body
max-statements=50

# Maximum number of parents for a class (see R0901).
max-parents=7

# Maximum number of attributes for a class (see R0902).
max-attributes=20

# Minimum number of public methods for a class (see R0903).
min-public-methods=0

# Maximum number of public methods for a class (see R0904).
max-public-methods=40


[IMPORTS]

# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,TERMIOS,Bastion,rexec

# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=

# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=

# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=


[EXCEPTIONS]

# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception


pika-1.2.0/setup.cfg

[bdist_wheel]
universal = 1

[nosetests]
cover-branches = 1
cover-erase = 1
cover-html = 1
cover-html-dir = build/coverage
cover-package = pika
cover-tests = 1
logging-level = DEBUG
stop = 1
tests=tests/unit,tests/acceptance
verbosity = 3
with-coverage = 1
detailed-errors = 1


pika-1.2.0/setup.py

import setuptools
import os

# Conditionally include additional modules for docs
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
requirements = list()
if on_rtd:
    requirements.append('gevent')
    requirements.append('tornado')
    requirements.append('twisted')

long_description = ('Pika is a pure-Python implementation of the AMQP 0-9-1 '
                    'protocol that tries to stay fairly independent of the '
                    'underlying network support library. Pika was developed '
                    'primarily for use with RabbitMQ, but should also work '
                    'with other AMQP 0-9-1 brokers.')

setuptools.setup(
    name='pika',
    version='1.2.0',
    description='Pika Python AMQP Client Library',
    long_description=open('README.rst').read(),
    maintainer='Gavin M.
Roy', maintainer_email='gavinmroy@gmail.com', url='https://pika.readthedocs.io', packages=setuptools.find_packages(include=['pika', 'pika.*']), license='BSD', install_requires=requirements, package_data={'': ['LICENSE', 'README.rst']}, extras_require={ 'gevent': ['gevent'], 'tornado': ['tornado'], 'twisted': ['twisted'], }, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Natural Language :: English', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: Jython', 'Programming Language :: Python :: Implementation :: PyPy', 'Topic :: Communications', 'Topic :: Internet', 'Topic :: Software Development :: Libraries', 'Topic :: Software Development :: Libraries :: Python Modules', 'Topic :: System :: Networking' ], zip_safe=True) pika-1.2.0/test-requirements.txt000066400000000000000000000002131400701476500166770ustar00rootroot00000000000000coverage codecov gevent mock nose tornado twisted enum34; python_version == '2.7' or (python_version >= '3.0' and python_version <= '3.4') pika-1.2.0/testdata/000077500000000000000000000000001400701476500142535ustar00rootroot00000000000000pika-1.2.0/testdata/certs/000077500000000000000000000000001400701476500153735ustar00rootroot00000000000000pika-1.2.0/testdata/certs/ca_certificate.pem000066400000000000000000000022701400701476500210240ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDUDCCAjigAwIBAgIUJfxfGSJyCxB7m5iZpxOARMVhqQswDQYJKoZIhvcNAQEL BQAwMTEgMB4GA1UEAwwXVExTR2VuU2VsZlNpZ25lZHRSb290Q0ExDTALBgNVBAcM 
BCQkJCQwHhcNMjAwNjA5MTUzNzI5WhcNMzAwNjA3MTUzNzI5WjAxMSAwHgYDVQQD DBdUTFNHZW5TZWxmU2lnbmVkdFJvb3RDQTENMAsGA1UEBwwEJCQkJDCCASIwDQYJ KoZIhvcNAQEBBQADggEPADCCAQoCggEBAMFZ06vJ8BDCNzG0XsfCDIAoOcCSjSYw /NHHYjudepPRBfnxKIeKrOFrelmO27BGWZfXLq/lunE0jBXpYrIhXy6UI0nZdmsv KYBUxUkfzBbRhefPdhtiXhSGRS/G1o9/QafgVtLrPp8aajVlW1kJ5rrKrZZNIjWx hVNyfpoakTXNV5fnGFWz2BJCqa7re3+Gz516T1vVIiZFjDglAEpv/eFrz2b2Lj/J vxmmsNjXZrFZ990Ln4ESDxust7fD4TI8aAHvjXRxmQgsi+uhtOmwJYJ2YCc6JG3N D2qCe/RblBqMny2bz6nwOATh78GKoG17duX2336Fkb2BLU/ATVrJEoMCAwEAAaNg MF4wCwYDVR0PBAQDAgEGMB0GA1UdDgQWBBQSXvRXFGHRhSlXeDKvVwS5750CgDAf BgNVHSMEGDAWgBQSXvRXFGHRhSlXeDKvVwS5750CgDAPBgNVHRMBAf8EBTADAQH/ MA0GCSqGSIb3DQEBCwUAA4IBAQBOKeiAFNcKGef1jbGpO2eQTXtQZnIjixPaTW7W wLSMxKi7sJgbsj9zZDSO/MIYFh/s95bKv5KFXB0l5sh6Uhlw7jKs2eOjO1EmpaJO eV61yTYinVImvGP4t0WGftrADZKjxRT38+ClGataM7lyScVvubvsulrzPNPazMi2 AVE3j1BmW1cNUdQyVLScJmjYBywIHCB7cBqt5vidj9fktlZ1oyQBMjMvmcBRCrRR t82r2ay0NP2sEXN7oFSDKwkRKpV3auzquUsnKr4UkT7jr9lvZ5a/aW4yMskT8G+D RdMnkOwp+8Ve0B89Oxta2LcEFL1DMQq+M8bYQZ7+PBb6nx4f -----END CERTIFICATE----- pika-1.2.0/testdata/certs/ca_key.pem000066400000000000000000000032541400701476500173350ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEwAIBADANBgkqhkiG9w0BAQEFAASCBKowggSmAgEAAoIBAQDBWdOryfAQwjcx tF7HwgyAKDnAko0mMPzRx2I7nXqT0QX58SiHiqzha3pZjtuwRlmX1y6v5bpxNIwV 6WKyIV8ulCNJ2XZrLymAVMVJH8wW0YXnz3YbYl4UhkUvxtaPf0Gn4FbS6z6fGmo1 ZVtZCea6yq2WTSI1sYVTcn6aGpE1zVeX5xhVs9gSQqmu63t/hs+dek9b1SImRYw4 JQBKb/3ha89m9i4/yb8ZprDY12axWffdC5+BEg8brLe3w+EyPGgB7410cZkILIvr obTpsCWCdmAnOiRtzQ9qgnv0W5QajJ8tm8+p8DgE4e/BiqBte3bl9t9+hZG9gS1P wE1ayRKDAgMBAAECggEBAIqRTdm4B4x7JANDOQoAT+Uo7vrMDMxqH0ZYllYTLl3x V08hPeKlx/BljnHQqDFUubDQTm1RPqUS+7JYaJswv34qPIAYkX2Pdza5igo195YC 4uyXChXmEXa0I7Fx1yNQGEWvyYmvflmYHLXXxfex2OHVj0JAVVwVtW9whrr6f0cG e4SPnmP/EUBXZ24NfC95ldhQiZudgH29JbUwhWRTwYEUHP8qUuuWfRJuHraxbpTC k7eCztUQGRfTfT8awPP8pbRurONC8RaafKeyFENAEjzZNioAvi3Vv8DJXAP6ndS+ yjZGTey7I/rDjwN+DB/BOXExonmZistvs95bCPwJCIECgYEA6mMJsyvzhhMOQW8z 
0VHafrPKs3TfwU29Ez+SHevsf+xPGrG8t+r3IzPgI0ZE3bD4imoQ95t1MYWKRf+y 6b+IM2UMTgisYhfbgGNV/9GiLr/GLNU/R49QsyrblcoNrjkNEpox1lBXaxDYTxrx fgbQB3tb6l5+q6WfUkmMeNWHnDsCgYEA0y4WzkXjyRVDzP0FF/JXCntVKdrMPLlT TN/cIArrEE9t6EYUx57xLAEZFuY/e+4ap8e1xTMaaXgQ+ZNO1zAXANlMlpM7qdPH rJghhUZWqm0pZ1KNgKdvmPQfnOGUe7VNUqRUXfjKM0KSOsvwecCNYqu7ryqZ0iOZ IKuvq2gpJlkCgYEA4lJzYVEFOudXkpOAI5S4ODP/fL1T+eHIurddrgrLONLDp3EM W0NFE/bZbPZDNRXXSEAW1iCETyc1V1YKOm85YvclpIv3eFi1GQnSVszjn+SJxWy8 R6r5L6golECgaSSpnNbLXLgDUVzYobnQifKmGTNik7Je+ftZFinyvBLjeVUCgYEA tz7zAyKaOc59+s5DMThUVwAWMi9tsfOOWNKXjCZtOsXxtO+68Ez3MRvyzXAV/k/q SVR+YhOqA4LwF+C/NPLBwzbLwo0X5JGkXhvUWnVilpgKqWF08AJaT/rlw5fq5D26 Ts/RdYmAy2IkyWhVzxBKnygtwB3TRAknwrW3xaCotGECgYEAkNfYTAEpAYwMqCUz MCZEXZqTjEszd7frPxFT6A6Ae7Nu8PMZRW+zRJ1n+9FX5b1u6DqqE82I2/LILBtu 5fjUCKJPUP19GDltcUDZZnn7u0MCm7uV+/KwIdZltA/ABrfGaG7OHSP4s0DCl9cT FeWvwJqn34bsSxyc2grZ5vRPzeU= -----END PRIVATE KEY----- pika-1.2.0/testdata/certs/client_certificate.pem000066400000000000000000000022241400701476500217160ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDNTCCAh2gAwIBAgIBAjANBgkqhkiG9w0BAQsFADAxMSAwHgYDVQQDDBdUTFNH ZW5TZWxmU2lnbmVkdFJvb3RDQTENMAsGA1UEBwwEJCQkJDAeFw0yMDA2MDkxNTM3 MjlaFw0zMDA2MDcxNTM3MjlaMCgxFTATBgNVBAMMDHNob3N0YWtvdmljaDEPMA0G A1UECgwGY2xpZW50MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAol1V N8FOk7Q0DmAceUpblZZKeIpG/IB/uFVLrbGMk+fLVRI3dUfth6fTVTPPW1899PnG WEEXACZuzBuzTIOW5C7OxaWTdDRJKvjjgtxW3FFzyBxu1gLBY3uH91fSeUJvzsfC h5faCZEmK3Z4XxmQJ1lIS0mBb4GmSPLy3Bs3x+f3bpYRjGAGoyBg/5H+/4a8BhFF RJFmb9uQTA8YywYtdIaST6TnDlwK1lilFGiB1o6CEdxE9rSW+okL+WajkRvub4l4 vLd0j2Pjy5/4o5nKkHNnmbSC6W/Xxe1tF9RH1QhcRlv2rYRbHljnwVq9QWOz57Az xkrk8nKgJXHDesfeuwIDAQABo2EwXzAJBgNVHRMEAjAAMAsGA1UdDwQEAwIFoDAT BgNVHSUEDDAKBggrBgEFBQcDAjAwBgNVHREEKTAnggxzaG9zdGFrb3ZpY2iCDHNo b3N0YWtvdmljaIIJbG9jYWxob3N0MA0GCSqGSIb3DQEBCwUAA4IBAQCm+R3lbsUE rhXcZWDNBc+1JsQlBaPEUjdmflOOD+FHaX/uj/4f+xXO6l86LPgjp6cz2i6iO+H7 IBt2OBZMFIDi/FA4qM6MqZt4zT1BwE/lFUJsxhAhTRNCxg3RXA2mfJwSFM+8eiS4 
5z9xALef1w+02kAvKg/fikBziyyW07rmCbWGSzAHfLxB1JOREsNOv3Fd7gTVMyXs 8yeGizQbW++C53Jou5l2UqWdyxDBGvVoKDn9EXg26ulVbxGs3xylrZ3UKkOi+sdr xzo38Dp1w7om/MrR2xMH1mxk6H1/JztcuuY5hmGTSYd9dNkxATXDN773Cccd/GS7 j88yi6reMkr/ -----END CERTIFICATE----- pika-1.2.0/testdata/certs/client_key.pem000066400000000000000000000032501400701476500202240ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCiXVU3wU6TtDQO YBx5SluVlkp4ikb8gH+4VUutsYyT58tVEjd1R+2Hp9NVM89bXz30+cZYQRcAJm7M G7NMg5bkLs7FpZN0NEkq+OOC3FbcUXPIHG7WAsFje4f3V9J5Qm/Ox8KHl9oJkSYr dnhfGZAnWUhLSYFvgaZI8vLcGzfH5/dulhGMYAajIGD/kf7/hrwGEUVEkWZv25BM DxjLBi10hpJPpOcOXArWWKUUaIHWjoIR3ET2tJb6iQv5ZqORG+5viXi8t3SPY+PL n/ijmcqQc2eZtILpb9fF7W0X1EfVCFxGW/athFseWOfBWr1BY7PnsDPGSuTycqAl ccN6x967AgMBAAECggEAC748X5/3ku9BpJiG9q7OGO+Zy0YVBEY29poUsydYR4pI Rorev/jH0TvuKQdqQ+2LiaBXHCL5CuW8tb57JVmPHEnWYq0rEQiHEjiWG+zby2uI uDx2N7xTSGKy3szXSDXp6EbCZxQwjOiWniYfZqFur6nhdLCIUAxMfpIzxn+hdS/Q U0cNv5Y/JMMyT/W3YoNsTcqj04WeP5YhAk6YpHMEmbu1wnjfuxSf3+DGpl1Unsj6 9ro3N9mZOBK1TeRSPic5hADEpkooLURP4IaWBgEKFbOi81j9rZbyV2Z5uWoe4ZOE CQqV/NyxyV3/bhM2gTglpp2jmw+p86+JMeaH6G2EoQKBgQDONyMxkhPyxSt5Bcri VECjIYzpU6uQodDe1nS4+N4jEpSey0KqWl5Or7hadBqI4eN72VJNJJriUFmw+t50 AYmw2XGJ0PG2Qmm0n+ycMSlr/lfQ6IKJJqzPTn/aixa4d7vUfJRKM4l0P5wB43Wk 9Az+VsSk9K3p7uXCgxNKMmruPQKBgQDJkA5dInrPOMfOGOKVa2Rn5TjgQMCU7f6Q KqkfD/XdMOOBHg66pznZSc30QlS5b4E8Pr1+uvDRTpcxe75nKfTFFMgufF0MwIdo lycEfUqPB8DRuWhcWU26/+rPsXl+UGxql/NdcB0br97FrH8YfqrOsV0SWHUs8+pc XlVNe+sIVwKBgQCxUhQ/Md9ZaFYjcOmuiMgz/kuO71Wdvqc+lqYz0DwjaHzHtvyS Q7bIbq1VinSus57LBmqQzyMn6/PUDURv+EqP+cp5uWO/V4hRuxrYjCWUKVcV2nk0 uj7q3BNwtx1IbhzjcGSLEZnmjjP2I8Mrnnf11GKvfX52o+iJw/A4YvYz+QKBgQDG RZmLjgZGXzFUj/AbUVekR7xqA+gs1+voPr68GoQdACFa+ok8nJDwKISauMEE5CW5 cHIQ/q0zB178wx/p9UCcuTOtXpJdn+nTPZSY+vJjvhmzc/Gvnf0zbNi7U3YSheQP +sbfbBCGErtNscAYBUnaJmhKSo+BF7K7B+RbYwEw4wKBgE6V0MIHA4HUltygJWHL 42Zneaw1PlMniYTi+yR1L6g/yVk7etqF+cvABTl2gibEqkOoL1hOfHqcnRLlPC/p 
E2dvldULFJCjJuWq7DSuUvdoubIDAGOHeaTVSUpqhuXEZYUBx0mZGOvPsZtZbGb9 5GithT9zse1eZlqb5zQrQ1lc -----END PRIVATE KEY----- pika-1.2.0/testdata/certs/server_certificate.pem000066400000000000000000000023551400701476500217530ustar00rootroot00000000000000-----BEGIN CERTIFICATE----- MIIDdzCCAl+gAwIBAgIBATANBgkqhkiG9w0BAQsFADAxMSAwHgYDVQQDDBdUTFNH ZW5TZWxmU2lnbmVkdFJvb3RDQTENMAsGA1UEBwwEJCQkJDAeFw0yMDA2MDkxNTM3 MjlaFw0zMDA2MDcxNTM3MjlaMCgxFTATBgNVBAMMDHNob3N0YWtvdmljaDEPMA0G A1UECgwGc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAnJHB R2EhGh1BW7+bQskXoQcb5+bttHvPMTTTIlj6O4f7ONfnIE+nR7htRRL31Qmnhwnj bkEJfwwV6KZCBIGjnIBM7oxjPJxYt2s02KgzzYWaIOg80/5/RftCpnYEyxoRaGZ2 eOyGO8h5qowrCmgqAQkL+nnXoKI3+DrPvhGtjV4KSW3iUJr3H1gnilDVUtrJg8rV 3qaYleJYg9PmNAM5t0CejkxggQlgrjxxqGZWcdvzKSTgPTZVHq0ZkT8ADmk2WK40 zOdmjmlE1cpdurkGBW/bCZ+1XzKPQNz7dqSR6ShrbN+IwokoKonKuClVjaNdjiXX Q2diCLwP42rzyBflLwIDAQABo4GiMIGfMAkGA1UdEwQCMAAwCwYDVR0PBAQDAgWg MBMGA1UdJQQMMAoGCCsGAQUFBwMBMDAGA1UdEQQpMCeCDHNob3N0YWtvdmljaIIM c2hvc3Rha292aWNogglsb2NhbGhvc3QwHQYDVR0OBBYEFF2N1RpeuLglunCu2pPT DdQ/57v5MB8GA1UdIwQYMBaAFBJe9FcUYdGFKVd4Mq9XBLnvnQKAMA0GCSqGSIb3 DQEBCwUAA4IBAQAkQCoNyEOQNDIPdebHdli8fIx3stHPsZIRPo/Be7yJwqZndnGX LqT/tq3xmGb0vvMvkJwIxk5asZ5nCfW++eKadXYSAU1pX0LePBaRmkpsxrzoaQ31 QnLyVEC8rfuljhVa5JYRghVqYyDAkqF7bYx3En94+2yBqnoz0n1Kzq2wbeBQEbrY O5drU3Q+7LxwNmTbKuTdia40BPGvu9iLYS5M1p4Q6O9MaRAbM9TwxRf8IIGr4q7p l7IOweqDc8lIgBx3DxMs05j4wIkfXxc+lS4fjyc7ffHoPg4yVQe5lA5rkZvA61oM acyBqEdfjRv4QJaZZGBUptZ+TIRQUkdzCeUW -----END CERTIFICATE----- pika-1.2.0/testdata/certs/server_key.pem000066400000000000000000000032501400701476500202540ustar00rootroot00000000000000-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCckcFHYSEaHUFb v5tCyRehBxvn5u20e88xNNMiWPo7h/s41+cgT6dHuG1FEvfVCaeHCeNuQQl/DBXo pkIEgaOcgEzujGM8nFi3azTYqDPNhZog6DzT/n9F+0KmdgTLGhFoZnZ47IY7yHmq jCsKaCoBCQv6edegojf4Os++Ea2NXgpJbeJQmvcfWCeKUNVS2smDytXeppiV4liD 0+Y0Azm3QJ6OTGCBCWCuPHGoZlZx2/MpJOA9NlUerRmRPwAOaTZYrjTM52aOaUTV 
yl26uQYFb9sJn7VfMo9A3Pt2pJHpKGts34jCiSgqicq4KVWNo12OJddDZ2IIvA/j avPIF+UvAgMBAAECggEAC+li5QVUuWHkaRCIxWn7LDsYOmptRz/sIXg9X+2ZDBJq YIa8hM7KkUBMv+aTiFe1sSZlcSvthwbqW8raVvFt+ygfsj5oBmJ2tW2olZsmZcp2 UW6Zwh8om7Bj/7oW30TacjDpboXAKKg16W3EkIQgPffsO2AtsYDl9TK0P2ek5o5U jlMU2pouJgFXAavOq9PfYW+WP8hnSIzkOmT0ruboeIJCT5nu8s7omd7HHoWCybV7 1Vi3hnl04Fl+8vpgzHrrPW3acPvE/67/v6IeTZQstxLVd6R3nsHqYH+ePd8v2CmA hMaYhDOUOznZt+dS+npzZwu5ihh+RVwhjrpMbgJgwQKBgQDLnkWTspZMM1UB4jxP EGG23XPly7mWDlduwTi/jrjrFLlvTY3dOi2EqnIyrPWtB44HT1qpCJgd4UjA7VDI d1vRV02bXnJDZ3wR94gzmLmSKl1r13ZXyMm4P0G6ILjFiz5xTbVfIJd67P4wfeYG +1jk9mglZFe4NSaIvkonyIxnPwKBgQDE2PmGFbLjzSEuvmyc3qH7ruppyn9jNk7N C1nJT1yeJRvvzJeQ5WpriC+mhvXHv4VlY+BZ9Jfh6ptR8PsQhlQpHuUs81MLxHMH mrJnwsgxC3YDc8hC024EyJ158VEQEkROnoruPP9P2yqd790WaVkPDkmGcZg4DmdF e83EE1J2EQKBgQCWAiJML7Oeq+qimqPHs8/pQrkRwMcXD7XGOL+wEFuXhQsgPsiX BTdnl5LOVrIgKYKvS/0ErvoyeTh6Odvb9GNGlMTuA+S2V9UF/5DuQkPktSViP7hF 7/z8qk2n7Fdz4aO9IXzPellfDJ4v53DCEAZrmEUd1xDw+udKsrY7HPqjTQKBgBfs et8F8hjnjFnfANpL4InzJ0A2QScwqYEYGRBzWcFKp0uMpPsSNs3c3lWx31sodrDs 445rQK5PUhMyY4ENolrhC19cL0Kl3IkXDwm3TZdQWkZvIu6kPaHNM/5nCsAWSED5 2c68kRcGfvZ0+XzIzu8agGsbkTF25qw2MLrc0k/RAoGAVEpMYL5QAVVLqU0nTtWz MXW4ChwLq0Uat8KP8Ws8p0LO4E0jZ+f0Ye5+bGKxFbFEirT2fba1H21lu/E/PKct VCzxBQaEbxURxwiMGaJK9+C9XwyjFpkDdBWSzX5KVLSwjWpBuiG0akGIkCEs5ZrS f6SSbCf6GwIel2AT6DR5Ba0= -----END PRIVATE KEY----- pika-1.2.0/testdata/rabbitmq.conf.in000066400000000000000000000007361400701476500173360ustar00rootroot00000000000000listeners.tcp.default = 5672 listeners.ssl.default = 5671 num_acceptors.tcp = 10 num_acceptors.ssl = 10 reverse_dns_lookups = false loopback_users.guest = true ssl_options.verify = verify_peer ssl_options.fail_if_no_peer_cert = true ssl_options.cacertfile = PIKA_DIR/testdata/certs/ca_certificate.pem ssl_options.certfile = PIKA_DIR/testdata/certs/server_certificate.pem ssl_options.keyfile = PIKA_DIR/testdata/certs/server_key.pem log.console = false log.console.level = debug 
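The rabbitmq.conf.in above enables mutual TLS on port 5671: `ssl_options.verify = verify_peer` together with `fail_if_no_peer_cert = true` means the broker verifies the client's certificate, so a test client must both trust the CA and present the client certificate/key from the testdata/certs directory. A minimal sketch of building a matching client-side `ssl.SSLContext` with only the standard library follows; the `ca_file`/`cert_file`/`key_file` parameters are placeholders for the testdata paths (with pika, the resulting context would be wrapped in `pika.SSLOptions` and passed via `ConnectionParameters`):

```python
import ssl

def make_client_tls_context(ca_file=None, cert_file=None, key_file=None):
    # The broker verifies the client cert (verify_peer + fail_if_no_peer_cert),
    # so the client must present one; SERVER_AUTH also makes the client
    # verify the broker's certificate against the given CA.
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=ca_file)
    if cert_file is not None:
        ctx.load_cert_chain(cert_file, key_file)
    return ctx

# e.g. make_client_tls_context('testdata/certs/ca_certificate.pem',
#                              'testdata/certs/client_certificate.pem',
#                              'testdata/certs/client_key.pem')
ctx = make_client_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```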
pika-1.2.0/testdata/wait-epmd.ps1000066400000000000000000000013261400701476500165710ustar00rootroot00000000000000$running = $false [int]$count = 1 $epmd = [System.IO.Path]::Combine($env:ERLANG_HOME, $env:erlang_erts_version, "bin", "epmd.exe") Do { Write-Host '[INFO] epmd -names output:' & $epmd -names $running = & $epmd -names | Select-String -CaseSensitive -SimpleMatch -Quiet -Pattern 'name rabbit at port 25672' if ($running -eq $true) { Write-Host '[INFO] epmd reports that RabbitMQ is at port 25672' break } if ($count -gt 120) { throw '[ERROR] too many tries waiting for epmd to report RabbitMQ on port 25672' } Write-Host "[INFO] epmd NOT reporting yet that RabbitMQ is at port 25672, count: $count" $count = $count + 1 Start-Sleep -Seconds 5 } While ($true) pika-1.2.0/testdata/wait-rabbitmq.ps1000066400000000000000000000011571400701476500174470ustar00rootroot00000000000000[int]$count = 1 Do { $proc_id = (Get-Process -Name erl).Id if (-Not ($proc_id -is [array])) { & "C:\Program Files\RabbitMQ Server\rabbitmq_server-$env:rabbitmq_version\sbin\rabbitmqctl.bat" wait -t 300000 -P $proc_id if ($LASTEXITCODE -ne 0) { throw "[ERROR] rabbitmqctl wait returned error: $LASTEXITCODE" } break } if ($count -gt 120) { throw '[ERROR] too many tries waiting for just one erl process to be running' } Write-Host '[INFO] multiple erl instances running still' $count = $count + 1 Start-Sleep -Seconds 5 } While ($true) pika-1.2.0/tests/000077500000000000000000000000001400701476500136045ustar00rootroot00000000000000pika-1.2.0/tests/acceptance/000077500000000000000000000000001400701476500156725ustar00rootroot00000000000000pika-1.2.0/tests/acceptance/async_adapter_tests.py000066400000000000000000001267051400701476500223160ustar00rootroot00000000000000 # too-many-lines # pylint: disable=C0302 # Suppress pylint messages concerning missing class and method docstrings # pylint: disable=C0111 # Suppress pylint warning about attribute defined outside __init__ # pylint: disable=W0201 # Suppress 
pylint warning about access to protected member # pylint: disable=W0212 # Suppress pylint warning about unused argument # pylint: disable=W0613 # invalid-name # pylint: disable=C0103 import functools import socket import threading import uuid import pika from pika.adapters.utils import connection_workflow from pika import spec from pika.compat import as_bytes, time_now import pika.connection import pika.exceptions from pika.exchange_type import ExchangeType import pika.frame from tests.base import async_test_base from tests.base.async_test_base import (AsyncTestCase, BoundQueueTestCase, AsyncAdapters) class TestA_Connect(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Connect, open channel and disconnect" def begin(self, channel): self.stop() class TestConstructAndImmediatelyCloseConnection(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Construct and immediately close connection." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): connection_class = self.connection.__class__ params = self.new_connection_params() @async_test_base.make_stop_on_error_with_self(self) def on_opened(connection): self.fail('Connection should have aborted, but got ' 'on_opened({!r})'.format(connection)) @async_test_base.make_stop_on_error_with_self(self) def on_open_error(connection, error): self.assertIsInstance(error, pika.exceptions.ConnectionOpenAborted) self.stop() conn = connection_class(params, on_open_callback=on_opened, on_open_error_callback=on_open_error, custom_ioloop=self.connection.ioloop) conn.close() class TestCloseConnectionDuringAMQPHandshake(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Close connection during AMQP handshake." 
@async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): base_class = self.connection.__class__ # type: pika.adapters.BaseConnection params = self.new_connection_params() class MyConnectionClass(base_class): # Cause an exception if _on_stream_connected doesn't exist base_class._on_stream_connected # pylint: disable=W0104 @async_test_base.make_stop_on_error_with_self(self) def _on_stream_connected(self, *args, **kwargs): # Now that AMQP handshake has begun, schedule imminent closing # of the connection self._nbio.add_callback_threadsafe(self.close) return super(MyConnectionClass, self)._on_stream_connected( *args, **kwargs) @async_test_base.make_stop_on_error_with_self(self) def on_opened(connection): self.fail('Connection should have aborted, but got ' 'on_opened({!r})'.format(connection)) @async_test_base.make_stop_on_error_with_self(self) def on_open_error(connection, error): self.assertIsInstance(error, pika.exceptions.ConnectionOpenAborted) self.stop() conn = MyConnectionClass(params, on_open_callback=on_opened, on_open_error_callback=on_open_error, custom_ioloop=self.connection.ioloop) conn.close() class TestSocketConnectTimeoutWithTinySocketTimeout(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Force socket.connect() timeout with very tiny socket_timeout." 
@async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): connection_class = self.connection.__class__ params = self.new_connection_params() # socket_timeout expects something > 0 params.socket_timeout = 0.0000000000000000001 @async_test_base.make_stop_on_error_with_self(self) def on_opened(connection): self.fail('Socket connection should have timed out, but got ' 'on_opened({!r})'.format(connection)) @async_test_base.make_stop_on_error_with_self(self) def on_open_error(connection, error): self.assertIsInstance(error, pika.exceptions.AMQPConnectionError) self.stop() connection_class( params, on_open_callback=on_opened, on_open_error_callback=on_open_error, custom_ioloop=self.connection.ioloop) class TestStackConnectionTimeoutWithTinyStackTimeout(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Force stack bring-up timeout with very tiny stack_timeout." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): connection_class = self.connection.__class__ params = self.new_connection_params() # stack_timeout expects something > 0 params.stack_timeout = 0.0000000000000000001 @async_test_base.make_stop_on_error_with_self(self) def on_opened(connection): self.fail('Stack connection should have timed out, but got ' 'on_opened({!r})'.format(connection)) def on_open_error(connection, exception): error = None if not isinstance(exception, pika.exceptions.AMQPConnectionError): error = AssertionError( 'Expected AMQPConnectionError, but got {!r}'.format( exception)) self.stop(error) connection_class( params, on_open_callback=on_opened, on_open_error_callback=on_open_error, custom_ioloop=self.connection.ioloop) class TestCreateConnectionViaDefaultConnectionWorkflow(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Connect via adapter's create_connection() method with single config." 
@async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): configs = [self.parameters] connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection @async_test_base.make_stop_on_error_with_self(self) def on_done(conn): self.assertIsInstance(conn, connection_class) conn.add_on_close_callback(on_my_connection_closed) conn.close() @async_test_base.make_stop_on_error_with_self(self) def on_my_connection_closed(_conn, error): self.assertIsInstance(error, pika.exceptions.ConnectionClosedByClient) self.stop() workflow = connection_class.create_connection(configs, on_done, self.connection.ioloop) self.assertIsInstance( workflow, connection_workflow.AbstractAMQPConnectionWorkflow) class TestCreateConnectionViaCustomConnectionWorkflow(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Connect via adapter's create_connection() method using custom workflow." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): configs = [self.parameters] connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection @async_test_base.make_stop_on_error_with_self(self) def on_done(conn): self.assertIsInstance(conn, connection_class) self.assertIs(conn.i_was_here, MyWorkflow) conn.add_on_close_callback(on_my_connection_closed) conn.close() @async_test_base.make_stop_on_error_with_self(self) def on_my_connection_closed(_conn, error): self.assertIsInstance(error, pika.exceptions.ConnectionClosedByClient) self.stop() class MyWorkflow(connection_workflow.AMQPConnectionWorkflow): if not hasattr(connection_workflow.AMQPConnectionWorkflow, '_report_completion_and_cleanup'): raise AssertionError('_report_completion_and_cleanup not in ' 'AMQPConnectionWorkflow.') def _report_completion_and_cleanup(self, result): """Override implementation to tag the presumed connection""" result.i_was_here = MyWorkflow super(MyWorkflow, self)._report_completion_and_cleanup(result) original_workflow = MyWorkflow() workflow = 
connection_class.create_connection( configs, on_done, self.connection.ioloop, workflow=original_workflow) self.assertIs(workflow, original_workflow) class TestCreateConnectionMultipleConfigsDefaultConnectionWorkflow( AsyncTestCase, AsyncAdapters): DESCRIPTION = "Connect via adapter's create_connection() method with multiple configs." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): good_params = self.parameters connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection sock = socket.socket() self.addCleanup(sock.close) sock.bind(('127.0.0.1', 0)) bad_host, bad_port = sock.getsockname() sock.close() # so that attempt to connect will fail immediately bad_params = pika.ConnectionParameters(host=bad_host, port=bad_port) @async_test_base.make_stop_on_error_with_self(self) def on_done(conn): self.assertIsInstance(conn, connection_class) self.assertEqual(conn.params.host, good_params.host) self.assertEqual(conn.params.port, good_params.port) self.assertNotEqual((conn.params.host, conn.params.port), (bad_host, bad_port)) conn.add_on_close_callback(on_my_connection_closed) conn.close() @async_test_base.make_stop_on_error_with_self(self) def on_my_connection_closed(_conn, error): self.assertIsInstance(error, pika.exceptions.ConnectionClosedByClient) self.stop() workflow = connection_class.create_connection([bad_params, good_params], on_done, self.connection.ioloop) self.assertIsInstance( workflow, connection_workflow.AbstractAMQPConnectionWorkflow) class TestCreateConnectionRetriesWithDefaultConnectionWorkflow( AsyncTestCase, AsyncAdapters): DESCRIPTION = "Connect via adapter's create_connection() method with multiple retries." 
@async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): base_class = self.connection.__class__ # type: pika.adapters.BaseConnection first_config = self.parameters second_config = self.new_connection_params() # Configure retries (default connection workflow keys off the last # config in the sequence) second_config.retry_delay = 0.001 second_config.connection_attempts = 2 # MyConnectionClass will use connection_attempts to distinguish between # first and second configs self.assertNotEqual(first_config.connection_attempts, second_config.connection_attempts) logger = self.logger class MyConnectionClass(base_class): got_second_config = False def __init__(self, parameters, *args, **kwargs): logger.info('Entered MyConnectionClass constructor: %s', parameters) if (parameters.connection_attempts == second_config.connection_attempts): MyConnectionClass.got_second_config = True logger.info('Got second config.') raise Exception('Reject second config.') if not MyConnectionClass.got_second_config: logger.info('Still on first attempt with first config.') raise Exception('Still on first attempt with first config.') logger.info('Start of retry cycle detected.') super(MyConnectionClass, self).__init__(parameters, *args, **kwargs) @async_test_base.make_stop_on_error_with_self(self) def on_done(conn): self.assertIsInstance(conn, MyConnectionClass) self.assertEqual(conn.params.connection_attempts, first_config.connection_attempts) conn.add_on_close_callback(on_my_connection_closed) conn.close() @async_test_base.make_stop_on_error_with_self(self) def on_my_connection_closed(_conn, error): self.assertIsInstance(error, pika.exceptions.ConnectionClosedByClient) self.stop() MyConnectionClass.create_connection([first_config, second_config], on_done, self.connection.ioloop) class TestCreateConnectionConnectionWorkflowSocketConnectionFailure( AsyncTestCase, AsyncAdapters): DESCRIPTION = "Connect via adapter's create_connection() fails to connect socket." 
@async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection sock = socket.socket() self.addCleanup(sock.close) sock.bind(('127.0.0.1', 0)) bad_host, bad_port = sock.getsockname() sock.close() # so that attempt to connect will fail immediately bad_params = pika.ConnectionParameters(host=bad_host, port=bad_port) @async_test_base.make_stop_on_error_with_self(self) def on_done(exc): self.assertIsInstance( exc, connection_workflow.AMQPConnectionWorkflowFailed) self.assertIsInstance( exc.exceptions[-1], connection_workflow.AMQPConnectorSocketConnectError) self.stop() connection_class.create_connection([bad_params,], on_done, self.connection.ioloop) class TestCreateConnectionAMQPHandshakeTimesOutDefaultWorkflow(AsyncTestCase, AsyncAdapters): DESCRIPTION = "AMQP handshake timeout handling in adapter's create_connection()." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): base_class = self.connection.__class__ # type: pika.adapters.BaseConnection params = self.parameters workflow = None # type: connection_workflow.AMQPConnectionWorkflow class MyConnectionClass(base_class): # Cause an exception if _on_stream_connected doesn't exist base_class._on_stream_connected # pylint: disable=W0104 @async_test_base.make_stop_on_error_with_self(self) def _on_stream_connected(self, *args, **kwargs): # Now that AMQP handshake has begun, simulate imminent stack # timeout in AMQPConnector connector = workflow._connector # type: connection_workflow.AMQPConnector connector._stack_timeout_ref.cancel() connector._stack_timeout_ref = connector._nbio.call_later( 0, connector._on_overall_timeout) return super(MyConnectionClass, self)._on_stream_connected( *args, **kwargs) @async_test_base.make_stop_on_error_with_self(self) def on_done(error): self.assertIsInstance( error, connection_workflow.AMQPConnectionWorkflowFailed) self.assertIsInstance( 
error.exceptions[-1], connection_workflow.AMQPConnectorAMQPHandshakeError) self.assertIsInstance( error.exceptions[-1].exception, connection_workflow.AMQPConnectorStackTimeout) self.stop() workflow = MyConnectionClass.create_connection([params], on_done, self.connection.ioloop) class TestCreateConnectionAndImmediatelyAbortDefaultConnectionWorkflow( AsyncTestCase, AsyncAdapters): DESCRIPTION = "Immediately abort workflow initiated via adapter's create_connection()." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): configs = [self.parameters] connection_class = self.connection.__class__ # type: pika.adapters.BaseConnection @async_test_base.make_stop_on_error_with_self(self) def on_done(exc): self.assertIsInstance( exc, connection_workflow.AMQPConnectionWorkflowAborted) self.stop() workflow = connection_class.create_connection(configs, on_done, self.connection.ioloop) workflow.abort() class TestCreateConnectionAndAsynchronouslyAbortDefaultConnectionWorkflow( AsyncTestCase, AsyncAdapters): DESCRIPTION = "Asynchronously abort workflow initiated via adapter's create_connection()." 
    @async_test_base.stop_on_error_in_async_test_case_method
    def begin(self, channel):
        configs = [self.parameters]

        connection_class = self.connection.__class__  # type: pika.adapters.BaseConnection

        @async_test_base.make_stop_on_error_with_self(self)
        def on_done(exc):
            self.assertIsInstance(
                exc, connection_workflow.AMQPConnectionWorkflowAborted)
            self.stop()

        workflow = connection_class.create_connection(configs, on_done,
                                                      self.connection.ioloop)

        self.connection._nbio.add_callback_threadsafe(workflow.abort)


class TestConfirmSelect(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = "Receive confirmation of Confirm.Select"

    def begin(self, channel):
        channel.confirm_delivery(ack_nack_callback=self.ack_nack_callback,
                                 callback=self.on_complete)

    @staticmethod
    def ack_nack_callback(frame):
        pass

    def on_complete(self, frame):
        self.assertIsInstance(frame.method, spec.Confirm.SelectOk)
        self.stop()


class TestBlockingNonBlockingBlockingRPCWontStall(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = ("Verify that a sequence of blocking, non-blocking, blocking "
                   "RPC requests won't stall")

    def begin(self, channel):
        # Queue declaration params table: queue name, nowait value
        self._expected_queue_params = (
            ("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, False),
            ("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, True),
            ("blocking-non-blocking-stall-check-" + uuid.uuid1().hex, False)
        )

        self._declared_queue_names = []

        for queue, nowait in self._expected_queue_params:
            cb = self._queue_declare_ok_cb if not nowait else None
            channel.queue_declare(queue=queue,
                                  auto_delete=True,
                                  arguments={'x-expires': self.TIMEOUT * 1000},
                                  callback=cb)

    def _queue_declare_ok_cb(self, declare_ok_frame):
        self._declared_queue_names.append(declare_ok_frame.method.queue)

        if len(self._declared_queue_names) == 2:
            # Initiate check for creation of queue declared with nowait=True
            self.channel.queue_declare(queue=self._expected_queue_params[1][0],
                                       passive=True,
                                       callback=self._queue_declare_ok_cb)
        elif len(self._declared_queue_names) == 3:
            self.assertSequenceEqual(
                sorted(self._declared_queue_names),
                sorted(item[0] for item in self._expected_queue_params))
            self.stop()


class TestConsumeCancel(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = "Consume and cancel"

    def begin(self, channel):
        self.queue_name = self.__class__.__name__ + ':' + uuid.uuid1().hex
        channel.queue_declare(self.queue_name,
                              callback=self.on_queue_declared)

    def on_queue_declared(self, frame):
        for i in range(0, 100):
            msg_body = '{}:{}:{}'.format(self.__class__.__name__, i,
                                         time_now())
            self.channel.basic_publish('', self.queue_name, msg_body)
        self.ctag = self.channel.basic_consume(self.queue_name,
                                               self.on_message,
                                               auto_ack=True)

    def on_message(self, _channel, _frame, _header, body):
        self.channel.basic_cancel(self.ctag, callback=self.on_cancel)

    def on_cancel(self, _frame):
        self.channel.queue_delete(self.queue_name,
                                  callback=self.on_deleted)

    def on_deleted(self, _frame):
        self.stop()


class TestExchangeDeclareAndDelete(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = "Create and delete an exchange"

    X_TYPE = ExchangeType.direct

    def begin(self, channel):
        self.name = self.__class__.__name__ + ':' + uuid.uuid1().hex
        channel.exchange_declare(self.name,
                                 exchange_type=self.X_TYPE,
                                 passive=False,
                                 durable=False,
                                 auto_delete=True,
                                 callback=self.on_exchange_declared)

    def on_exchange_declared(self, frame):
        self.assertIsInstance(frame.method, spec.Exchange.DeclareOk)
        self.channel.exchange_delete(self.name,
                                     callback=self.on_exchange_delete)

    def on_exchange_delete(self, frame):
        self.assertIsInstance(frame.method, spec.Exchange.DeleteOk)
        self.stop()


class TestExchangeRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters):
    DESCRIPTION = "should close chan: re-declared exchange w/ diff params"

    X_TYPE1 = ExchangeType.direct
    X_TYPE2 = ExchangeType.topic

    def begin(self, channel):
        self.name = self.__class__.__name__ + ':' + uuid.uuid1().hex
        self.channel.add_on_close_callback(self.on_channel_closed)
channel.exchange_declare(self.name, exchange_type=self.X_TYPE1, passive=False, durable=False, auto_delete=True, callback=self.on_exchange_declared) def on_cleanup_channel(self, channel): channel.exchange_delete(self.name) self.stop() def on_channel_closed(self, _channel, _reason): self.connection.channel(on_open_callback=self.on_cleanup_channel) def on_exchange_declared(self, frame): self.channel.exchange_declare(self.name, exchange_type=self.X_TYPE2, passive=False, durable=False, auto_delete=True, callback=self.on_bad_result) def on_bad_result(self, frame): self.channel.exchange_delete(self.name) raise AssertionError("Should not have received an Exchange.DeclareOk") class TestNoDeadlockWhenClosingChannelWithPendingBlockedRequestsAndConcurrentChannelCloseFromBroker( AsyncTestCase, AsyncAdapters): DESCRIPTION = ("No deadlock when closing a channel with pending blocked " "requests and concurrent Channel.Close from broker.") # To observe the behavior that this is testing, comment out this line # in pika/channel.py - _on_close: # # self._drain_blocked_methods_on_remote_close() # # With the above line commented out, this test will hang def begin(self, channel): base_exch_name = self.__class__.__name__ + ':' + uuid.uuid1().hex self.channel.add_on_close_callback(self.on_channel_closed) for i in range(0, 99): # Passively declare a non-existent exchange to force Channel.Close # from broker exch_name = base_exch_name + ':' + str(i) cb = functools.partial(self.on_bad_result, exch_name) channel.exchange_declare(exch_name, exchange_type=ExchangeType.direct, passive=True, callback=cb) channel.close() def on_channel_closed(self, _channel, _reason): # The close is expected because the requested exchange doesn't exist self.stop() def on_bad_result(self, exch_name, frame): self.fail("Should not have received an Exchange.DeclareOk") class TestClosingAChannelPermitsBlockedRequestToComplete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Closing a channel permits blocked requests to 
complete." @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): self._queue_deleted = False channel.add_on_close_callback(self.on_channel_closed) q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex # NOTE we pass callback to make it a blocking request channel.queue_declare(q_name, exclusive=True, callback=lambda _frame: None) self.assertIsNotNone(channel._blocking) # The Queue.Delete should block on completion of Queue.Declare channel.queue_delete(q_name, callback=self.on_queue_deleted) self.assertTrue(channel._blocked) # This Channel.Close should allow the blocked Queue.Delete to complete # Before closing the channel channel.close() def on_queue_deleted(self, _frame): # Getting this callback shows that the blocked request was processed self._queue_deleted = True @async_test_base.stop_on_error_in_async_test_case_method def on_channel_closed(self, _channel, _reason): self.assertTrue(self._queue_deleted) self.stop() class TestQueueUnnamedDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete an unnamed queue" @async_test_base.stop_on_error_in_async_test_case_method def begin(self, channel): channel.queue_declare(queue='', passive=False, durable=False, exclusive=True, auto_delete=False, arguments={'x-expires': self.TIMEOUT * 1000}, callback=self.on_queue_declared) @async_test_base.stop_on_error_in_async_test_case_method def on_queue_declared(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeclareOk) self.channel.queue_delete(frame.method.queue, callback=self.on_queue_delete) @async_test_base.stop_on_error_in_async_test_case_method def on_queue_delete(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeleteOk) self.stop() class TestQueueNamedDeclareAndDelete(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Create and delete a named queue" def begin(self, channel): self._q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex channel.queue_declare(self._q_name, passive=False, 
durable=False, exclusive=True, auto_delete=True, arguments={'x-expires': self.TIMEOUT * 1000}, callback=self.on_queue_declared) def on_queue_declared(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeclareOk) # Frame's method's queue is encoded (impl detail) self.assertEqual(frame.method.queue, self._q_name) self.channel.queue_delete(frame.method.queue, callback=self.on_queue_delete) def on_queue_delete(self, frame): self.assertIsInstance(frame.method, spec.Queue.DeleteOk) self.stop() class TestQueueRedeclareWithDifferentValues(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Should close chan: re-declared queue w/ diff params" def begin(self, channel): self._q_name = self.__class__.__name__ + ':' + uuid.uuid1().hex self.channel.add_on_close_callback(self.on_channel_closed) channel.queue_declare(self._q_name, passive=False, durable=False, exclusive=True, auto_delete=True, arguments={'x-expires': self.TIMEOUT * 1000}, callback=self.on_queue_declared) def on_channel_closed(self, _channel, _reason): self.stop() def on_queue_declared(self, frame): self.channel.queue_declare(self._q_name, passive=False, durable=True, exclusive=False, auto_delete=True, arguments={'x-expires': self.TIMEOUT * 1000}, callback=self.on_bad_result) def on_bad_result(self, frame): self.channel.queue_delete(self._q_name) raise AssertionError("Should not have received a Queue.DeclareOk") class TestTX1_Select(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Receive confirmation of Tx.Select" def begin(self, channel): channel.tx_select(callback=self.on_complete) def on_complete(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.stop() class TestTX2_Commit(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Start a transaction, and commit it" def begin(self, channel): channel.tx_select(callback=self.on_selectok) def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) 
self.channel.tx_commit(callback=self.on_commitok) def on_commitok(self, frame): self.assertIsInstance(frame.method, spec.Tx.CommitOk) self.stop() class TestTX2_CommitFailure(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Close the channel: commit without a TX" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) self.channel.tx_commit(callback=self.on_commitok) def on_channel_closed(self, _channel, _reason): self.stop() def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) @staticmethod def on_commitok(frame): raise AssertionError("Should not have received a Tx.CommitOk") class TestTX3_Rollback(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Start a transaction, then rollback" def begin(self, channel): channel.tx_select(callback=self.on_selectok) def on_selectok(self, frame): self.assertIsInstance(frame.method, spec.Tx.SelectOk) self.channel.tx_rollback(callback=self.on_rollbackok) def on_rollbackok(self, frame): self.assertIsInstance(frame.method, spec.Tx.RollbackOk) self.stop() class TestTX3_RollbackFailure(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Close the channel: rollback without a TX" def begin(self, channel): self.channel.add_on_close_callback(self.on_channel_closed) self.channel.tx_rollback(callback=self.on_commitok) def on_channel_closed(self, _channel, _reason): self.stop() @staticmethod def on_commitok(frame): raise AssertionError("Should not have received a Tx.RollbackOk") class TestZ_PublishAndConsume(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Publish a message and consume it" def on_ready(self, frame): self.ctag = self.channel.basic_consume(self.queue, self.on_message) self.msg_body = "%s: %i" % (self.__class__.__name__, time_now()) self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) def on_cancelled(self, frame): self.assertIsInstance(frame.method, 
spec.Basic.CancelOk) self.stop() def on_message(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.Deliver) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.channel.basic_cancel(self.ctag, callback=self.on_cancelled) class TestZ_PublishAndConsumeBig(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Publish a big message and consume it" @staticmethod def _get_msg_body(): return '\n'.join(["%s" % i for i in range(0, 2097152)]) def on_ready(self, frame): self.ctag = self.channel.basic_consume(self.queue, self.on_message) self.msg_body = self._get_msg_body() self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) def on_cancelled(self, frame): self.assertIsInstance(frame.method, spec.Basic.CancelOk) self.stop() def on_message(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.Deliver) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.channel.basic_cancel(self.ctag, callback=self.on_cancelled) class TestZ_PublishAndGet(BoundQueueTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Publish a message and get it" def on_ready(self, frame): self.msg_body = "%s: %i" % (self.__class__.__name__, time_now()) self.channel.basic_publish(self.exchange, self.routing_key, self.msg_body) self.channel.basic_get(self.queue, self.on_get) def on_get(self, channel, method, header, body): self.assertIsInstance(method, spec.Basic.GetOk) self.assertEqual(body, as_bytes(self.msg_body)) self.channel.basic_ack(method.delivery_tag) self.stop() class TestZ_AccessDenied(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Unknown vhost results in ProbableAccessDeniedError." 
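TestZ_AccessDenied below records the error handed to its `on_open_error` callback in an instance attribute and asserts on it only after the I/O loop has returned from `start()`. The same capture-then-assert pattern, reduced to a stdlib-only sketch — the class and error type here are illustrative, not pika's API:

```python
class CallbackDriver:
    """Stand-in for an async component that reports failure via a callback."""

    def __init__(self, on_error):
        self._on_error = on_error

    def run(self):
        # Simulate an access-denied style failure surfacing asynchronously;
        # run() plays the role of the event loop running to completion.
        self._on_error(PermissionError('vhost not found'))


error_captured = []
driver = CallbackDriver(error_captured.append)
driver.run()
# Assertions happen only after the "loop" has returned, never inside
# the callback itself, so a failed assertion cannot be swallowed by
# the asynchronous layer.
assert isinstance(error_captured[0], PermissionError)
```

Deferring the assertion until after the loop exits is what lets these tests fail cleanly instead of leaking exceptions into the adapter's callback machinery.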
def start(self, *args, **kwargs): # pylint: disable=W0221 self.parameters.virtual_host = str(uuid.uuid4()) self.error_captured = None super(TestZ_AccessDenied, self).start(*args, **kwargs) self.assertIsInstance(self.error_captured, pika.exceptions.ProbableAccessDeniedError) def on_open_error(self, connection, error): self.error_captured = error self.stop() def on_open(self, connection): super(TestZ_AccessDenied, self).on_open(connection) self.stop() class TestBlockedConnectionTimesOut(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Verify that blocked connection terminates on timeout" def start(self, *args, **kwargs): # pylint: disable=W0221 self.parameters.blocked_connection_timeout = 0.001 self.on_closed_error = None super(TestBlockedConnectionTimesOut, self).start(*args, **kwargs) self.assertIsInstance(self.on_closed_error, pika.exceptions.ConnectionBlockedTimeout) def begin(self, channel): # Simulate Connection.Blocked channel.connection._on_connection_blocked( channel.connection, pika.frame.Method(0, spec.Connection.Blocked( 'Testing blocked connection timeout'))) def on_closed(self, connection, error): """called when the connection has finished closing""" self.on_closed_error = error self.stop() # acknowledge that closed connection is expected super(TestBlockedConnectionTimesOut, self).on_closed(connection, error) class TestBlockedConnectionUnblocks(AsyncTestCase, AsyncAdapters): # pylint: disable=C0103 DESCRIPTION = "Verify that blocked-unblocked connection closes normally" def start(self, *args, **kwargs): # pylint: disable=W0221 self.parameters.blocked_connection_timeout = 0.001 self.on_closed_error = None super(TestBlockedConnectionUnblocks, self).start(*args, **kwargs) self.assertIsInstance(self.on_closed_error, pika.exceptions.ConnectionClosedByClient) self.assertEqual( (self.on_closed_error.reply_code, self.on_closed_error.reply_text), (200, 'Normal shutdown')) def begin(self, channel): # Simulate Connection.Blocked 
channel.connection._on_connection_blocked( channel.connection, pika.frame.Method(0, spec.Connection.Blocked( 'Testing blocked connection unblocks'))) # Simulate Connection.Unblocked channel.connection._on_connection_unblocked( channel.connection, pika.frame.Method(0, spec.Connection.Unblocked())) # Schedule shutdown after blocked connection timeout would expire channel.connection._adapter_call_later(0.005, self.on_cleanup_timer) def on_cleanup_timer(self): self.stop() def on_closed(self, connection, error): """called when the connection has finished closing""" self.on_closed_error = error super(TestBlockedConnectionUnblocks, self).on_closed(connection, error) class TestAddCallbackThreadsafeRequestBeforeIOLoopStarts(AsyncTestCase, AsyncAdapters): DESCRIPTION = ( "Test _adapter_add_callback_threadsafe request before ioloop starts.") def _run_ioloop(self, *args, **kwargs): # pylint: disable=W0221 """We intercept this method from AsyncTestCase in order to call _adapter_add_callback_threadsafe before AsyncTestCase starts the ioloop. 
""" self.my_start_time = time_now() # Request a callback from our current (ioloop's) thread self.connection._adapter_add_callback_threadsafe( self.on_requested_callback) return super( TestAddCallbackThreadsafeRequestBeforeIOLoopStarts, self)._run_ioloop( *args, **kwargs) def start(self, *args, **kwargs): # pylint: disable=W0221 self.loop_thread_ident = threading.current_thread().ident self.my_start_time = None self.got_callback = False super(TestAddCallbackThreadsafeRequestBeforeIOLoopStarts, self).start(*args, **kwargs) self.assertTrue(self.got_callback) def begin(self, channel): self.stop() def on_requested_callback(self): self.assertEqual(threading.current_thread().ident, self.loop_thread_ident) self.assertLess(time_now() - self.my_start_time, 0.25) self.got_callback = True class TestAddCallbackThreadsafeFromIOLoopThread(AsyncTestCase, AsyncAdapters): DESCRIPTION = ( "Test _adapter_add_callback_threadsafe request from same thread.") def start(self, *args, **kwargs): # pylint: disable=W0221 self.loop_thread_ident = threading.current_thread().ident self.my_start_time = None self.got_callback = False super(TestAddCallbackThreadsafeFromIOLoopThread, self).start(*args, **kwargs) self.assertTrue(self.got_callback) def begin(self, channel): self.my_start_time = time_now() # Request a callback from our current (ioloop's) thread channel.connection._adapter_add_callback_threadsafe( self.on_requested_callback) def on_requested_callback(self): self.assertEqual(threading.current_thread().ident, self.loop_thread_ident) self.assertLess(time_now() - self.my_start_time, 0.25) self.got_callback = True self.stop() class TestAddCallbackThreadsafeFromAnotherThread(AsyncTestCase, AsyncAdapters): DESCRIPTION = ( "Test _adapter_add_callback_threadsafe request from another thread.") def start(self, *args, **kwargs): # pylint: disable=W0221 self.loop_thread_ident = threading.current_thread().ident self.my_start_time = None self.got_callback = False 
super(TestAddCallbackThreadsafeFromAnotherThread, self).start(*args, **kwargs) self.assertTrue(self.got_callback) def begin(self, channel): self.my_start_time = time_now() # Request a callback from ioloop while executing in another thread timer = threading.Timer( 0, lambda: channel.connection._adapter_add_callback_threadsafe( self.on_requested_callback)) self.addCleanup(timer.cancel) timer.start() def on_requested_callback(self): self.assertEqual(threading.current_thread().ident, self.loop_thread_ident) self.assertLess(time_now() - self.my_start_time, 0.25) self.got_callback = True self.stop() class TestIOLoopStopBeforeIOLoopStarts(AsyncTestCase, AsyncAdapters): DESCRIPTION = "Test ioloop.stop() before ioloop starts causes ioloop to exit quickly." def _run_ioloop(self, *args, **kwargs): # pylint: disable=W0221 """We intercept this method from AsyncTestCase in order to call ioloop.stop() before AsyncTestCase starts the ioloop. """ # Request ioloop to stop before it starts my_start_time = time_now() self.stop_ioloop_only() super( TestIOLoopStopBeforeIOLoopStarts, self)._run_ioloop(*args, **kwargs) self.assertLess(time_now() - my_start_time, 0.25) # Resume I/O loop to facilitate normal course of events and closure # of connection in order to prevent reporting of a socket resource leak # from an unclosed connection. 
        super(
            TestIOLoopStopBeforeIOLoopStarts,
            self)._run_ioloop(*args, **kwargs)

    def begin(self, channel):
        self.stop()


class TestViabilityOfMultipleTimeoutsWithSameDeadlineAndCallback(AsyncTestCase,
                                                                 AsyncAdapters):  # pylint: disable=C0103
    DESCRIPTION = "Test viability of multiple timeouts with same deadline and callback"

    def begin(self, channel):
        timer1 = channel.connection._adapter_call_later(0, self.on_my_timer)
        timer2 = channel.connection._adapter_call_later(0, self.on_my_timer)
        self.assertIsNot(timer1, timer2)
        channel.connection._adapter_remove_timeout(timer1)
        # Wait for timer2 to fire

    def on_my_timer(self):
        self.stop()

pika-1.2.0/tests/acceptance/blocking_adapter_test.py

"""blocking adapter test"""
from datetime import datetime
import functools
import logging
import socket
import threading
import unittest
import uuid

import pika
from pika.adapters import blocking_connection
from pika.compat import as_bytes, time_now
import pika.connection
import pika.exceptions
from pika.exchange_type import ExchangeType

from tests.misc.forward_server import ForwardServer
from tests.misc.test_utils import retry_assertion

# too-many-lines
# pylint: disable=C0302

# Disable warning about access to protected member
# pylint: disable=W0212

# Disable warning Attribute defined outside __init__
# pylint: disable=W0201

# Disable warning Missing docstring
# pylint: disable=C0111

# Disable warning Too many public methods
# pylint: disable=R0904

# Disable warning Invalid variable name
# pylint: disable=C0103


LOGGER = logging.getLogger(__name__)

PARAMS_URL_TEMPLATE = (
    'amqp://guest:guest@127.0.0.1:%(port)s/%%2f?socket_timeout=1')
DEFAULT_URL = PARAMS_URL_TEMPLATE % {'port': 5672}
DEFAULT_PARAMS = pika.URLParameters(DEFAULT_URL)
DEFAULT_TIMEOUT = 15


def setUpModule():
    logging.basicConfig(level=logging.DEBUG)


class BlockingTestCaseBase(unittest.TestCase):

    TIMEOUT = DEFAULT_TIMEOUT

    def _connect(self, url=DEFAULT_URL,
connection_class=pika.BlockingConnection, impl_class=None): parameters = pika.URLParameters(url) return self._connect_params(parameters, connection_class, impl_class) def _connect_params(self, parameters, connection_class=pika.BlockingConnection, impl_class=None): connection = connection_class(parameters, _impl_class=impl_class) self.addCleanup(lambda: connection.close() if connection.is_open else None) # We use impl's timer directly in order to get a callback regardless # of BlockingConnection's event dispatch modality connection._impl._adapter_call_later(self.TIMEOUT, # pylint: disable=E1101 self._on_test_timeout) # Patch calls into I/O loop to fail test if exceptions are # leaked back through SelectConnection or the I/O loop. self._instrument_io_loop_exception_leak_detection(connection) return connection def _instrument_io_loop_exception_leak_detection(self, connection): """Instrument the given connection to detect and fail test when an exception is leaked through the I/O loop NOTE: BlockingConnection's underlying asynchronous connection adapter (SelectConnection) uses callbacks to communicate with its user ( BlockingConnection in this case). If BlockingConnection leaks exceptions back into the I/O loop or the asynchronous connection adapter, we interrupt their normal workflow and introduce a high likelihood of state inconsistency. """ # Patch calls into I/O loop to fail test if exceptions are # leaked back through SelectConnection or the I/O loop. 
real_poll = connection._impl.ioloop.poll def my_poll(*args, **kwargs): try: return real_poll(*args, **kwargs) except BaseException as exc: self.fail('Unwanted exception leaked into asynchronous layer ' 'via ioloop.poll(): {!r}'.format(exc)) connection._impl.ioloop.poll = my_poll self.addCleanup(setattr, connection._impl.ioloop, 'poll', real_poll) real_process_timeouts = connection._impl.ioloop.process_timeouts def my_process_timeouts(*args, **kwargs): try: return real_process_timeouts(*args, **kwargs) except AssertionError: # Our test timeout logic and unit test assert* routines rely # on being able to pass AssertionError raise except BaseException as exc: self.fail('Unwanted exception leaked into asynchronous layer ' 'via ioloop.process_timeouts(): {!r}'.format(exc)) connection._impl.ioloop.process_timeouts = my_process_timeouts self.addCleanup(setattr, connection._impl.ioloop, 'process_timeouts', real_process_timeouts) def _on_test_timeout(self): """Called when test times out""" LOGGER.info('%s TIMED OUT (%s)', datetime.utcnow(), self) self.fail('Test timed out') @retry_assertion(TIMEOUT/2) def _assert_exact_message_count_with_retries(self, channel, queue, expected_count): frame = channel.queue_declare(queue, passive=True) self.assertEqual(frame.method.message_count, expected_count) class TestCreateAndCloseConnection(BlockingTestCaseBase): def test(self): """BlockingConnection: Create and close connection""" connection = self._connect() self.assertIsInstance(connection, pika.BlockingConnection) self.assertTrue(connection.is_open) self.assertFalse(connection.is_closed) self.assertFalse(connection._impl.is_closing) connection.close() self.assertTrue(connection.is_closed) self.assertFalse(connection.is_open) self.assertFalse(connection._impl.is_closing) class TestCreateConnectionWithNoneSocketAndStackTimeouts(BlockingTestCaseBase): def test(self): """ BlockingConnection: create a connection with socket and stack timeouts both None """ params = 
pika.URLParameters(DEFAULT_URL) params.socket_timeout = None params.stack_timeout = None with self._connect_params(params) as connection: self.assertTrue(connection.is_open) class TestCreateConnectionFromTwoConfigsFirstUnreachable(BlockingTestCaseBase): def test(self): """ BlockingConnection: create a connection from two configs, first unreachable """ # Reserve a port for use in connect sock = socket.socket() self.addCleanup(sock.close) sock.bind(('127.0.0.1', 0)) port = sock.getsockname()[1] sock.close() bad_params = pika.URLParameters(PARAMS_URL_TEMPLATE % {"port": port}) good_params = pika.URLParameters(DEFAULT_URL) with self._connect_params([bad_params, good_params]) as connection: self.assertNotEqual(connection._impl.params.port, bad_params.port) self.assertEqual(connection._impl.params.port, good_params.port) class TestCreateConnectionFromTwoUnreachableConfigs(BlockingTestCaseBase): def test(self): """ BlockingConnection: creating a connection from two unreachable \ configs raises AMQPConnectionError """ # Reserve a port for use in connect sock = socket.socket() self.addCleanup(sock.close) sock.bind(('127.0.0.1', 0)) port = sock.getsockname()[1] sock.close() bad_params = pika.URLParameters(PARAMS_URL_TEMPLATE % {"port": port}) with self.assertRaises(pika.exceptions.AMQPConnectionError): self._connect_params([bad_params, bad_params]) class TestMultiCloseConnectionRaisesWrongState(BlockingTestCaseBase): def test(self): """BlockingConnection: Close connection twice raises ConnectionWrongStateError""" connection = self._connect() self.assertIsInstance(connection, pika.BlockingConnection) self.assertTrue(connection.is_open) self.assertFalse(connection.is_closed) self.assertFalse(connection._impl.is_closing) connection.close() self.assertTrue(connection.is_closed) self.assertFalse(connection.is_open) self.assertFalse(connection._impl.is_closing) with self.assertRaises(pika.exceptions.ConnectionWrongStateError): connection.close() class 
TestConnectionContextManagerClosesConnection(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager closes connection""" with self._connect() as connection: self.assertIsInstance(connection, pika.BlockingConnection) self.assertTrue(connection.is_open) self.assertTrue(connection.is_closed) class TestConnectionContextManagerExitSurvivesClosedConnection(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager exit survives closed connection""" with self._connect() as connection: self.assertTrue(connection.is_open) connection.close() self.assertTrue(connection.is_closed) self.assertTrue(connection.is_closed) class TestConnectionContextManagerClosesConnectionAndPassesOriginalException(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager closes connection and passes original exception""" # pylint: disable=C0301 class MyException(Exception): pass with self.assertRaises(MyException): with self._connect() as connection: self.assertTrue(connection.is_open) raise MyException() self.assertTrue(connection.is_closed) class TestConnectionContextManagerClosesConnectionAndPassesSystemException(BlockingTestCaseBase): def test(self): """BlockingConnection: connection context manager closes connection and passes system exception""" # pylint: disable=C0301 with self.assertRaises(SystemExit): with self._connect() as connection: self.assertTrue(connection.is_open) raise SystemExit() self.assertTrue(connection.is_closed) class TestLostConnectionResultsInIsClosedConnectionAndChannel(BlockingTestCaseBase): def test(self): connection = self._connect() channel = connection.channel() # Simulate the server dropping the socket connection connection._impl._transport._sock.shutdown(socket.SHUT_RDWR) with self.assertRaises(pika.exceptions.StreamLostError): # Changing QoS should result in ConnectionClosed channel.basic_qos() # Now check is_open/is_closed on channel and connection 
self.assertFalse(channel.is_open) self.assertTrue(channel.is_closed) self.assertFalse(connection.is_open) self.assertTrue(connection.is_closed) class TestInvalidExchangeTypeRaisesConnectionClosed(BlockingTestCaseBase): def test(self): """BlockingConnection: ConnectionClosed raised when creating exchange with invalid type""" # pylint: disable=C0301 # This test exploits behavior specific to RabbitMQ whereby the broker # closes the connection if an attempt is made to declare an exchange # with an invalid exchange type connection = self._connect() ch = connection.channel() exg_name = ("TestInvalidExchangeTypeRaisesConnectionClosed_" + uuid.uuid1().hex) with self.assertRaises(pika.exceptions.ConnectionClosed) as ex_cm: # Attempt to create an exchange with invalid exchange type ch.exchange_declare(exg_name, exchange_type='ZZwwInvalid') self.assertEqual(ex_cm.exception.args[0], 503) class TestCreateAndCloseConnectionWithChannelAndConsumer(BlockingTestCaseBase): def test(self): """BlockingConnection: Create and close connection with channel and consumer""" # pylint: disable=C0301 connection = self._connect() ch = connection.channel() q_name = ( 'TestCreateAndCloseConnectionWithChannelAndConsumer_q' + uuid.uuid1().hex) body1 = 'a' * 1024 # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Publish the message to the queue by way of default exchange ch.basic_publish(exchange='', routing_key=q_name, body=body1) # Create a consumer that uses automatic ack mode ch.basic_consume(q_name, lambda *x: None, auto_ack=True, exclusive=False, arguments=None) connection.close() self.assertTrue(connection.is_closed) self.assertFalse(connection.is_open) self.assertFalse(connection._impl.is_closing) self.assertFalse(connection._impl._channels) self.assertFalse(ch._consumer_infos) self.assertFalse(ch._impl._consumers) class TestUsingInvalidQueueArgument(BlockingTestCaseBase): def test(self): 
"""BlockingConnection raises expected exception when invalid queue parameter is used """ connection = self._connect() ch = connection.channel() with self.assertRaises(TypeError): ch.queue_declare(queue=[1, 2, 3]) class TestSuddenBrokerDisconnectBeforeChannel(BlockingTestCaseBase): def test(self): """BlockingConnection resets properly on TCP/IP drop during channel() """ with ForwardServer(remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) as fwd: self.connection = self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}) # Once outside the context, the connection is broken # BlockingConnection should raise ConnectionClosed with self.assertRaises(pika.exceptions.StreamLostError): self.connection.channel() self.assertTrue(self.connection.is_closed) self.assertFalse(self.connection.is_open) self.assertIsNone(self.connection._impl._transport) class TestNoAccessToConnectionAfterConnectionLost(BlockingTestCaseBase): def test(self): """BlockingConnection no access file descriptor after StreamLostError """ with ForwardServer(remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) as fwd: self.connection = self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}) # Once outside the context, the connection is broken # BlockingConnection should raise ConnectionClosed with self.assertRaises(pika.exceptions.StreamLostError): self.connection.channel() self.assertTrue(self.connection.is_closed) self.assertFalse(self.connection.is_open) self.assertIsNone(self.connection._impl._transport) # Attempt to operate on the connection once again after ConnectionClosed with self.assertRaises(pika.exceptions.ConnectionWrongStateError): self.connection.channel() class TestConnectWithDownedBroker(BlockingTestCaseBase): def test(self): """ BlockingConnection to downed broker results in AMQPConnectionError """ # Reserve a port for use in connect sock = socket.socket() self.addCleanup(sock.close) 
sock.bind(('127.0.0.1', 0)) port = sock.getsockname()[1] sock.close() with self.assertRaises(pika.exceptions.AMQPConnectionError): self.connection = self._connect( PARAMS_URL_TEMPLATE % {"port": port}) class TestDisconnectDuringConnectionStart(BlockingTestCaseBase): def test(self): """ BlockingConnection TCP/IP connection loss in CONNECTION_START """ fwd = ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) fwd.start() self.addCleanup(lambda: fwd.stop() if fwd.running else None) class MySelectConnection(pika.SelectConnection): assert hasattr(pika.SelectConnection, '_on_connection_start') def _on_connection_start(self, *args, **kwargs): # pylint: disable=W0221 fwd.stop() return super(MySelectConnection, self)._on_connection_start( *args, **kwargs) with self.assertRaises(pika.exceptions.ProbableAuthenticationError): self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}, impl_class=MySelectConnection) class TestDisconnectDuringConnectionTune(BlockingTestCaseBase): def test(self): """ BlockingConnection TCP/IP connection loss in CONNECTION_TUNE """ fwd = ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) fwd.start() self.addCleanup(lambda: fwd.stop() if fwd.running else None) class MySelectConnection(pika.SelectConnection): assert hasattr(pika.SelectConnection, '_on_connection_tune') def _on_connection_tune(self, *args, **kwargs): # pylint: disable=W0221 fwd.stop() return super(MySelectConnection, self)._on_connection_tune( *args, **kwargs) with self.assertRaises(pika.exceptions.ProbableAccessDeniedError): self._connect( PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]}, impl_class=MySelectConnection) class TestDisconnectDuringConnectionProtocol(BlockingTestCaseBase): def test(self): """ BlockingConnection TCP/IP connection loss in CONNECTION_PROTOCOL """ fwd = ForwardServer( remote_addr=(DEFAULT_PARAMS.host, DEFAULT_PARAMS.port), local_linger_args=(1, 0)) 
        fwd.start()
        self.addCleanup(lambda: fwd.stop() if fwd.running else None)

        class MySelectConnection(pika.SelectConnection):
            assert hasattr(pika.SelectConnection, '_on_stream_connected')

            def _on_stream_connected(self, *args, **kwargs):  # pylint: disable=W0221
                fwd.stop()
                return super(MySelectConnection, self)._on_stream_connected(
                    *args, **kwargs)

        with self.assertRaises(pika.exceptions.IncompatibleProtocolError):
            self._connect(
                PARAMS_URL_TEMPLATE % {"port": fwd.server_address[1]},
                impl_class=MySelectConnection)


class TestProcessDataEvents(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.process_data_events"""
        connection = self._connect()

        # Try with time_limit=0
        start_time = time_now()
        connection.process_data_events(time_limit=0)
        elapsed = time_now() - start_time
        self.assertLess(elapsed, 0.25)

        # Try with time_limit=0.005
        start_time = time_now()
        connection.process_data_events(time_limit=0.005)
        elapsed = time_now() - start_time
        self.assertGreaterEqual(elapsed, 0.005)
        self.assertLess(elapsed, 0.25)


class TestConnectionRegisterForBlockAndUnblock(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection register for Connection.Blocked/Unblocked"""
        connection = self._connect()

        # NOTE: I haven't figured out yet how to coerce RabbitMQ to emit
        # Connection.Blocked and Connection.Unblocked from the test, so we'll
        # just call the registration functions for now and simulate incoming
        # blocked/unblocked frames
        blocked_buffer = []
        connection.add_on_connection_blocked_callback(
            lambda conn, frame: blocked_buffer.append((conn, frame)))

        # Simulate dispatch of blocked connection
        blocked_frame = pika.frame.Method(
            0,
            pika.spec.Connection.Blocked('reason'))
        connection._impl._process_frame(blocked_frame)
        connection.sleep(0)  # facilitate dispatch of pending events
        self.assertEqual(len(blocked_buffer), 1)
        conn, frame = blocked_buffer[0]
        self.assertIs(conn, connection)
        self.assertIs(frame, blocked_frame)

        unblocked_buffer = []
        connection.add_on_connection_unblocked_callback(
            lambda conn, frame: unblocked_buffer.append((conn, frame)))

        # Simulate dispatch of unblocked connection
        unblocked_frame = pika.frame.Method(0, pika.spec.Connection.Unblocked())
        connection._impl._process_frame(unblocked_frame)
        connection.sleep(0)  # facilitate dispatch of pending events
        self.assertEqual(len(unblocked_buffer), 1)
        conn, frame = unblocked_buffer[0]
        self.assertIs(conn, connection)
        self.assertIs(frame, unblocked_frame)


class TestBlockedConnectionTimeout(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection Connection.Blocked timeout """
        url = DEFAULT_URL + '&blocked_connection_timeout=0.001'
        conn = self._connect(url=url)

        # NOTE: I haven't figured out yet how to coerce RabbitMQ to emit
        # Connection.Blocked and Connection.Unblocked from the test, so we'll
        # simulate it for now

        # Simulate Connection.Blocked
        conn._impl._on_connection_blocked(
            conn._impl,
            pika.frame.Method(
                0,
                pika.spec.Connection.Blocked('TestBlockedConnectionTimeout')))

        # Wait for connection teardown
        with self.assertRaises(pika.exceptions.ConnectionBlockedTimeout):
            while True:
                conn.process_data_events(time_limit=1)


class TestAddCallbackThreadsafeFromSameThread(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.add_callback_threadsafe from same thread"""
        connection = self._connect()

        # Test timer completion
        start_time = time_now()
        rx_callback = []
        connection.add_callback_threadsafe(
            lambda: rx_callback.append(time_now()))
        while not rx_callback:
            connection.process_data_events(time_limit=None)
        self.assertEqual(len(rx_callback), 1)
        elapsed = time_now() - start_time
        self.assertLess(elapsed, 0.25)


class TestAddCallbackThreadsafeFromAnotherThread(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.add_callback_threadsafe from another thread"""
        connection = self._connect()

        # Test timer completion
        start_time = time_now()
        rx_callback = []
        timer = threading.Timer(
            0,
            functools.partial(connection.add_callback_threadsafe,
                              lambda: rx_callback.append(time_now())))
        self.addCleanup(timer.cancel)
        timer.start()
        while not rx_callback:
            connection.process_data_events(time_limit=None)
        self.assertEqual(len(rx_callback), 1)
        elapsed = time_now() - start_time
        self.assertLess(elapsed, 0.25)


class TestAddCallbackThreadsafeOnClosedConnectionRaisesWrongState(
        BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.add_callback_threadsafe on closed connection
        raises ConnectionWrongStateError"""
        connection = self._connect()
        connection.close()

        with self.assertRaises(pika.exceptions.ConnectionWrongStateError):
            connection.add_callback_threadsafe(lambda: None)


class TestAddTimeoutRemoveTimeout(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.call_later and remove_timeout"""
        connection = self._connect()

        # Test timer completion
        start_time = time_now()
        rx_callback = []
        timer_id = connection.call_later(
            0.005,
            lambda: rx_callback.append(time_now()))
        while not rx_callback:
            connection.process_data_events(time_limit=None)
        self.assertEqual(len(rx_callback), 1)
        elapsed = time_now() - start_time
        self.assertLess(elapsed, 0.25)

        # Test removing triggered timeout
        connection.remove_timeout(timer_id)

        # Test aborted timer
        rx_callback = []
        timer_id = connection.call_later(
            0.001,
            lambda: rx_callback.append(time_now()))
        connection.remove_timeout(timer_id)
        connection.process_data_events(time_limit=0.1)
        self.assertFalse(rx_callback)

        # Make sure _TimerEvt repr doesn't crash
        evt = blocking_connection._TimerEvt(lambda: None)
        repr(evt)


class TestViabilityOfMultipleTimeoutsWithSameDeadlineAndCallback(
        BlockingTestCaseBase):

    def test(self):
        """BlockingConnection viability of multiple timeouts with same deadline
        and callback"""
        connection = self._connect()

        rx_callback = []

        def callback():
            rx_callback.append(1)

        timer1 = connection.call_later(0, callback)
        timer2 = connection.call_later(0, callback)

        self.assertIsNot(timer1, timer2)

        connection.remove_timeout(timer1)

        # Wait for second timer to fire
        start_wait_time = time_now()
        while not rx_callback and time_now() - start_wait_time < 0.25:
            connection.process_data_events(time_limit=0.001)

        self.assertListEqual(rx_callback, [1])


class TestRemoveTimeoutFromTimeoutCallback(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.remove_timeout from timeout callback"""
        connection = self._connect()

        # Test timer completion
        timer_id1 = connection.call_later(5, lambda: 0/0)

        rx_timer2 = []

        def on_timer2():
            connection.remove_timeout(timer_id1)
            connection.remove_timeout(timer_id2)
            rx_timer2.append(1)

        timer_id2 = connection.call_later(0, on_timer2)

        while not rx_timer2:
            connection.process_data_events(time_limit=None)

        self.assertFalse(connection._ready_events)


class TestSleep(BlockingTestCaseBase):

    def test(self):
        """BlockingConnection.sleep"""
        connection = self._connect()

        # Try with duration=0
        start_time = time_now()
        connection.sleep(duration=0)
        elapsed = time_now() - start_time
        self.assertLess(elapsed, 0.25)

        # Try with duration=0.005
        start_time = time_now()
        connection.sleep(duration=0.005)
        elapsed = time_now() - start_time
        self.assertGreaterEqual(elapsed, 0.005)
        self.assertLess(elapsed, 0.25)


class TestConnectionProperties(BlockingTestCaseBase):

    def test(self):
        """Test BlockingConnection properties"""
        connection = self._connect()
        self.assertTrue(connection.is_open)
        self.assertFalse(connection._impl.is_closing)
        self.assertFalse(connection.is_closed)

        self.assertTrue(connection.basic_nack_supported)
        self.assertTrue(connection.consumer_cancel_notify_supported)
        self.assertTrue(connection.exchange_exchange_bindings_supported)
        self.assertTrue(connection.publisher_confirms_supported)

        connection.close()
        self.assertFalse(connection.is_open)
        self.assertFalse(connection._impl.is_closing)
        self.assertTrue(connection.is_closed)


class TestCreateAndCloseChannel(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Create and close channel"""
        connection = self._connect()

        ch = connection.channel()
        self.assertIsInstance(ch, blocking_connection.BlockingChannel)
        self.assertTrue(ch.is_open)
        self.assertFalse(ch.is_closed)
        self.assertFalse(ch._impl.is_closing)

        self.assertIs(ch.connection, connection)

        ch.close()
        self.assertTrue(ch.is_closed)
        self.assertFalse(ch.is_open)
        self.assertFalse(ch._impl.is_closing)


class TestExchangeDeclareAndDelete(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test exchange_declare and exchange_delete"""
        connection = self._connect()

        ch = connection.channel()

        name = "TestExchangeDeclareAndDelete_" + uuid.uuid1().hex

        # Declare a new exchange
        frame = ch.exchange_declare(name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, name)

        self.assertIsInstance(frame.method, pika.spec.Exchange.DeclareOk)

        # Check if it exists by declaring it passively
        frame = ch.exchange_declare(name, passive=True)
        self.assertIsInstance(frame.method, pika.spec.Exchange.DeclareOk)

        # Delete the exchange
        frame = ch.exchange_delete(name)
        self.assertIsInstance(frame.method, pika.spec.Exchange.DeleteOk)

        # Verify that it's been deleted
        with self.assertRaises(pika.exceptions.ChannelClosedByBroker) as cm:
            ch.exchange_declare(name, passive=True)

        self.assertEqual(cm.exception.args[0], 404)


class TestExchangeBindAndUnbind(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test exchange_bind and exchange_unbind"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestExchangeBindAndUnbind_q' + uuid.uuid1().hex
        src_exg_name = 'TestExchangeBindAndUnbind_src_exg_' + uuid.uuid1().hex
        dest_exg_name = 'TestExchangeBindAndUnbind_dest_exg_' + uuid.uuid1().hex
        routing_key = 'TestExchangeBindAndUnbind'

        # Place channel in publisher-acknowledgments mode so that we may test
        # whether the queue is reachable by publishing with mandatory=True
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare both exchanges
        ch.exchange_declare(src_exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, src_exg_name)
        ch.exchange_declare(dest_exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, dest_exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Bind the queue to the destination exchange
        ch.queue_bind(q_name, exchange=dest_exg_name, routing_key=routing_key)

        # Verify that the queue is unreachable without exchange-exchange binding
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.basic_publish(src_exg_name, routing_key, body='',
                             mandatory=True)

        # Bind the exchanges
        frame = ch.exchange_bind(destination=dest_exg_name,
                                 source=src_exg_name,
                                 routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Exchange.BindOk)

        # Publish a message via the source exchange
        ch.basic_publish(src_exg_name, routing_key,
                         body='TestExchangeBindAndUnbind',
                         mandatory=True)

        # Check that the queue now has one message
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=1)

        # Unbind the exchanges
        frame = ch.exchange_unbind(destination=dest_exg_name,
                                   source=src_exg_name,
                                   routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Exchange.UnbindOk)

        # Verify that the queue is now unreachable via the source exchange
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.basic_publish(src_exg_name, routing_key, body='',
                             mandatory=True)


class TestQueueDeclareAndDelete(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test queue_declare and queue_delete"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestQueueDeclareAndDelete_' + uuid.uuid1().hex

        # Declare a new queue
        frame = ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        self.assertIsInstance(frame.method, pika.spec.Queue.DeclareOk)

        # Check if it exists by declaring it passively
        frame = ch.queue_declare(q_name, passive=True)
        self.assertIsInstance(frame.method, pika.spec.Queue.DeclareOk)

        # Delete the queue
        frame = ch.queue_delete(q_name)
        self.assertIsInstance(frame.method, pika.spec.Queue.DeleteOk)

        # Verify that it's been deleted
        with self.assertRaises(pika.exceptions.ChannelClosedByBroker) as cm:
            ch.queue_declare(q_name, passive=True)

        self.assertEqual(cm.exception.args[0], 404)


class TestPassiveQueueDeclareOfUnknownQueueRaisesChannelClosed(
        BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: ChannelClosed raised when passive-declaring unknown queue"""  # pylint: disable=C0301
        connection = self._connect()
        ch = connection.channel()

        q_name = ("TestPassiveQueueDeclareOfUnknownQueueRaisesChannelClosed_q_" +
                  uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ChannelClosedByBroker) as ex_cm:
            ch.queue_declare(q_name, passive=True)

        self.assertEqual(ex_cm.exception.args[0], 404)


class TestQueueBindAndUnbindAndPurge(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel: Test queue_bind and queue_unbind"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestQueueBindAndUnbindAndPurge_q' + uuid.uuid1().hex
        exg_name = 'TestQueueBindAndUnbindAndPurge_exg_' + uuid.uuid1().hex
        routing_key = 'TestQueueBindAndUnbindAndPurge'

        # Place channel in publisher-acknowledgments mode so that we may test
        # whether the queue is reachable by publishing with mandatory=True
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Bind the queue to the exchange using routing key
        frame = ch.queue_bind(q_name, exchange=exg_name,
                              routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Queue.BindOk)

        # Check that the queue is empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Deposit a message in the queue
        ch.basic_publish(exg_name, routing_key,
                         body='TestQueueBindAndUnbindAndPurge',
                         mandatory=True)

        # Check that the queue now has one message
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)

        # Unbind the queue
        frame = ch.queue_unbind(queue=q_name, exchange=exg_name,
                                routing_key=routing_key)
        self.assertIsInstance(frame.method, pika.spec.Queue.UnbindOk)

        # Verify that the queue is now unreachable via that binding
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.basic_publish(exg_name, routing_key,
                             body='TestQueueBindAndUnbindAndPurge-2',
                             mandatory=True)

        # Purge the queue and verify that 1 message was purged
        frame = ch.queue_purge(q_name)
        self.assertIsInstance(frame.method, pika.spec.Queue.PurgeOk)
        self.assertEqual(frame.method.message_count, 1)

        # Verify that the queue is now empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicGet(BlockingTestCaseBase):

    def tearDown(self):
        LOGGER.info('%s TEARING DOWN (%s)', datetime.utcnow(), self)

    def test(self):
        """BlockingChannel.basic_get"""
        LOGGER.info('%s STARTED (%s)', datetime.utcnow(), self)

        connection = self._connect()
        LOGGER.info('%s CONNECTED (%s)', datetime.utcnow(), self)

        ch = connection.channel()
        LOGGER.info('%s CREATED CHANNEL (%s)', datetime.utcnow(), self)

        q_name = 'TestBasicGet_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()
        LOGGER.info('%s ENABLED PUB-ACKS (%s)', datetime.utcnow(), self)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))
        LOGGER.info('%s DECLARED QUEUE (%s)', datetime.utcnow(), self)

        # Verify result of getting a message from an empty queue
        msg = ch.basic_get(q_name, auto_ack=False)
        self.assertTupleEqual(msg, (None, None, None))
        LOGGER.info('%s GOT FROM EMPTY QUEUE (%s)', datetime.utcnow(), self)

        body = 'TestBasicGet'

        # Deposit a message in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name, body=body,
                         mandatory=True)
        LOGGER.info('%s PUBLISHED (%s)', datetime.utcnow(), self)

        # Get the message; use rx_-prefixed names so the received body isn't
        # compared against itself
        (rx_method, rx_properties, rx_body) = ch.basic_get(q_name,
                                                           auto_ack=False)
        LOGGER.info('%s GOT FROM NON-EMPTY QUEUE (%s)',
                    datetime.utcnow(), self)
        self.assertIsInstance(rx_method, pika.spec.Basic.GetOk)
        self.assertEqual(rx_method.delivery_tag, 1)
        self.assertFalse(rx_method.redelivered)
        self.assertEqual(rx_method.exchange, '')
        self.assertEqual(rx_method.routing_key, q_name)
        self.assertEqual(rx_method.message_count, 0)

        self.assertIsInstance(rx_properties, pika.BasicProperties)
        self.assertIsNone(rx_properties.headers)
        self.assertEqual(rx_body, as_bytes(body))

        # Ack it
        ch.basic_ack(delivery_tag=rx_method.delivery_tag)
        LOGGER.info('%s ACKED (%s)', datetime.utcnow(), self)

        # Verify that the queue is now empty
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestBasicReject(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_reject"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicReject_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Deposit two messages in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicReject1', mandatory=True)
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicReject2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicReject1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicReject2'))

        # Nack the second message
        ch.basic_reject(rx_method.delivery_tag, requeue=True)

        # Verify that exactly one message is present in the queue, namely the
        # second one
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=1)
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicReject2'))


class TestBasicRejectNoRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_reject with requeue=False"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicRejectNoRequeue_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Deposit two messages in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicRejectNoRequeue1', mandatory=True)
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicRejectNoRequeue2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicRejectNoRequeue1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicRejectNoRequeue2'))

        # Nack the second message
        ch.basic_reject(rx_method.delivery_tag, requeue=False)

        # Verify that no messages are present in the queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestBasicNack(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack single message"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNack_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Deposit two messages in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicNack1', mandatory=True)
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicNack2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack2'))

        # Nack the second message
        ch.basic_nack(rx_method.delivery_tag, multiple=False, requeue=True)

        # Verify that exactly one message is present in the queue, namely the
        # second one
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=1)
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNack2'))


class TestBasicNackNoRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack with requeue=False"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNackNoRequeue_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Deposit two messages in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicNackNoRequeue1', mandatory=True)
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicNackNoRequeue2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackNoRequeue1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackNoRequeue2'))

        # Nack the second message
        ch.basic_nack(rx_method.delivery_tag, requeue=False)

        # Verify that no messages are present in the queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=0)


class TestBasicNackMultiple(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_nack multiple messages"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestBasicNackMultiple_q' + uuid.uuid1().hex

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Deposit two messages in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicNackMultiple1', mandatory=True)
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicNackMultiple2', mandatory=True)

        # Get the messages
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple1'))

        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple2'))

        # Nack both messages via the "multiple" option
        ch.basic_nack(rx_method.delivery_tag, multiple=True, requeue=True)

        # Verify that both messages are present in the queue
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=2)
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple1'))
        (rx_method, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestBasicNackMultiple2'))


class TestBasicRecoverWithRequeue(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.basic_recover with requeue=True.

        NOTE: the requeue=False option is not supported by RabbitMQ broker as
        of this writing (using RabbitMQ 3.5.1)
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestBasicRecoverWithRequeue_q' + uuid.uuid1().hex)

        # Place channel in publisher-acknowledgments mode so that the message
        # may be delivered synchronously to the queue by publishing it with
        # mandatory=True
        ch.confirm_delivery()

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Deposit two messages in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicRecoverWithRequeue1', mandatory=True)
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestBasicRecoverWithRequeue2', mandatory=True)

        rx_messages = []
        num_messages = 0
        for msg in ch.consume(q_name, auto_ack=False):
            num_messages += 1

            if num_messages == 2:
                ch.basic_recover(requeue=True)

            if num_messages > 2:
                rx_messages.append(msg)

            if num_messages == 4:
                break
        else:
            self.fail('consumer aborted prematurely')

        # Get the messages
        (_, _, rx_body) = rx_messages[0]
        self.assertEqual(rx_body, as_bytes('TestBasicRecoverWithRequeue1'))

        (_, _, rx_body) = rx_messages[1]
        self.assertEqual(rx_body, as_bytes('TestBasicRecoverWithRequeue2'))


class TestTxCommit(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.tx_commit"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestTxCommit_q' + uuid.uuid1().hex

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Select standard transaction mode
        frame = ch.tx_select()
        self.assertIsInstance(frame.method, pika.spec.Tx.SelectOk)

        # Deposit a message in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestTxCommit1', mandatory=True)

        # Verify that queue is still empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Commit the transaction
        ch.tx_commit()

        # Verify that the queue has the expected message
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 1)

        (_, _, rx_body) = ch.basic_get(q_name, auto_ack=False)
        self.assertEqual(rx_body, as_bytes('TestTxCommit1'))


class TestTxRollback(BlockingTestCaseBase):

    def test(self):
        """BlockingChannel.tx_rollback"""
        connection = self._connect()

        ch = connection.channel()

        q_name = 'TestTxRollback_q' + uuid.uuid1().hex

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Select standard transaction mode
        frame = ch.tx_select()
        self.assertIsInstance(frame.method, pika.spec.Tx.SelectOk)

        # Deposit a message in the queue via default exchange
        ch.basic_publish(exchange='', routing_key=q_name,
                         body='TestTxRollback1', mandatory=True)

        # Verify that queue is still empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Roll back the transaction
        ch.tx_rollback()

        # Verify that the queue continues to be empty
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestBasicConsumeFromUnknownQueueRaisesChannelClosed(BlockingTestCaseBase):

    def test(self):
        """ChannelClosed raised when consuming from unknown queue"""
        connection = self._connect()
        ch = connection.channel()

        q_name = ("TestBasicConsumeFromUnknownQueueRaisesChannelClosed_q_" +
                  uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ChannelClosedByBroker) as ex_cm:
            ch.basic_consume(q_name, lambda *args: None)

        self.assertEqual(ex_cm.exception.args[0], 404)


class TestPublishAndBasicPublishWithPubacksUnroutable(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.publish and basic_publish unroutable message with pubacks"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestPublishAndBasicPublishUnroutable_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestPublishAndBasicPublishUnroutable'

        # Place channel in publisher-acknowledgments mode so that publishing
        # with mandatory=True will be synchronous
        res = ch.confirm_delivery()
        self.assertIsNone(res)

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Verify unroutable message handling using basic_publish
        msg2_headers = dict(
            test_name='TestPublishAndBasicPublishWithPubacksUnroutable')
        msg2_properties = pika.spec.BasicProperties(headers=msg2_headers)
        with self.assertRaises(pika.exceptions.UnroutableError) as cm:
            ch.basic_publish(exg_name, routing_key=routing_key, body='',
                             properties=msg2_properties, mandatory=True)
        (msg,) = cm.exception.messages
        self.assertIsInstance(msg, blocking_connection.ReturnedMessage)
        self.assertIsInstance(msg.method, pika.spec.Basic.Return)
        self.assertEqual(msg.method.reply_code, 312)
        self.assertEqual(msg.method.exchange, exg_name)
        self.assertEqual(msg.method.routing_key, routing_key)
        self.assertIsInstance(msg.properties, pika.BasicProperties)
        self.assertEqual(msg.properties.headers, msg2_headers)
        self.assertEqual(msg.body, as_bytes(''))


class TestConfirmDeliveryAfterUnroutableMessage(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel.confirm_delivery following unroutable message"""
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestConfirmDeliveryAfterUnroutableMessage_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestConfirmDeliveryAfterUnroutableMessage'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(lambda *args: returned_messages.append(args))

        # Emit unroutable message without pubacks
        ch.basic_publish(exg_name, routing_key=routing_key, body='',
                         mandatory=True)

        # Select delivery confirmations
        ch.confirm_delivery()

        # Verify that unroutable message is in pending events
        self.assertEqual(len(ch._pending_events), 1)
        self.assertIsInstance(ch._pending_events[0],
                              blocking_connection._ReturnedMessageEvt)
        # Verify that repr of _ReturnedMessageEvt instance doesn't crash
        repr(ch._pending_events[0])

        # Dispatch events
        connection.process_data_events()

        self.assertEqual(len(ch._pending_events), 0)

        # Verify that unroutable message was dispatched
        ((channel, method, properties, body,),) = returned_messages  # pylint: disable=E0632
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes(''))


class TestUnroutableMessagesReturnedInNonPubackMode(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: unroutable messages are returned in non-puback mode"""  # pylint: disable=C0301
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestUnroutableMessageReturnedInNonPubackMode_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestUnroutableMessageReturnedInNonPubackMode'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(
            lambda *args: returned_messages.append(args))

        # Emit unroutable messages without pubacks
        ch.basic_publish(exg_name, routing_key=routing_key, body='msg1',
                         mandatory=True)
        ch.basic_publish(exg_name, routing_key=routing_key, body='msg2',
                         mandatory=True)

        # Process I/O until Basic.Return are dispatched
        while len(returned_messages) < 2:
            connection.process_data_events()

        self.assertEqual(len(returned_messages), 2)

        self.assertEqual(len(ch._pending_events), 0)

        # Verify returned messages
        (channel, method, properties, body,) = returned_messages[0]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg1'))

        (channel, method, properties, body,) = returned_messages[1]
        self.assertIs(channel, ch)
        self.assertIsInstance(method, pika.spec.Basic.Return)
        self.assertEqual(method.reply_code, 312)
        self.assertEqual(method.exchange, exg_name)
        self.assertEqual(method.routing_key, routing_key)
        self.assertIsInstance(properties, pika.BasicProperties)
        self.assertEqual(body, as_bytes('msg2'))


class TestUnroutableMessageReturnedInPubackMode(BlockingTestCaseBase):

    def test(self):  # pylint: disable=R0914
        """BlockingChannel: unroutable messages are returned in puback mode"""
        connection = self._connect()

        ch = connection.channel()

        exg_name = ('TestUnroutableMessageReturnedInPubackMode_exg_' +
                    uuid.uuid1().hex)
        routing_key = 'TestUnroutableMessageReturnedInPubackMode'

        # Declare a new exchange
        ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct)
        self.addCleanup(connection.channel().exchange_delete, exg_name)

        # Select delivery confirmations
        ch.confirm_delivery()

        # Register on-return callback
        returned_messages = []
        ch.add_on_return_callback(
            lambda *args: returned_messages.append(args))

        # Emit unroutable messages with pubacks
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.basic_publish(exg_name, routing_key=routing_key, body='msg1',
                             mandatory=True)
        with self.assertRaises(pika.exceptions.UnroutableError):
            ch.basic_publish(exg_name, routing_key=routing_key, body='msg2',
                             mandatory=True)

        # Verify that unroutable messages are already in pending events
self.assertEqual(len(ch._pending_events), 2) self.assertIsInstance(ch._pending_events[0], blocking_connection._ReturnedMessageEvt) self.assertIsInstance(ch._pending_events[1], blocking_connection._ReturnedMessageEvt) # Verify that repr of _ReturnedMessageEvt instance does not crash repr(ch._pending_events[0]) repr(ch._pending_events[1]) # Dispatch events connection.process_data_events() self.assertEqual(len(ch._pending_events), 0) # Verify returned messages (channel, method, properties, body,) = returned_messages[0] self.assertIs(channel, ch) self.assertIsInstance(method, pika.spec.Basic.Return) self.assertEqual(method.reply_code, 312) self.assertEqual(method.exchange, exg_name) self.assertEqual(method.routing_key, routing_key) self.assertIsInstance(properties, pika.BasicProperties) self.assertEqual(body, as_bytes('msg1')) (channel, method, properties, body,) = returned_messages[1] self.assertIs(channel, ch) self.assertIsInstance(method, pika.spec.Basic.Return) self.assertEqual(method.reply_code, 312) self.assertEqual(method.exchange, exg_name) self.assertEqual(method.routing_key, routing_key) self.assertIsInstance(properties, pika.BasicProperties) self.assertEqual(body, as_bytes('msg2')) class TestBasicPublishDeliveredWhenPendingUnroutable(BlockingTestCaseBase): def test(self): # pylint: disable=R0914 """BlockingChannel.basic_publish msg delivered despite pending unroutable message""" # pylint: disable=C0301 connection = self._connect() ch = connection.channel() q_name = ('TestBasicPublishDeliveredWhenPendingUnroutable_q' + uuid.uuid1().hex) exg_name = ('TestBasicPublishDeliveredWhenPendingUnroutable_exg_' + uuid.uuid1().hex) routing_key = 'TestBasicPublishDeliveredWhenPendingUnroutable' # Declare a new exchange ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct) self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(lambda:
self._connect().channel().queue_delete(q_name)) # Bind the queue to the exchange using routing key ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key) # Attempt to send an unroutable message in the queue via basic_publish ch.basic_publish(exg_name, routing_key='', body='unroutable-message', mandatory=True) # Flush connection to force Basic.Return connection.channel().close() # Deposit a routable message in the queue ch.basic_publish(exg_name, routing_key=routing_key, body='routable-message', mandatory=True) # Wait for the queue to get the routable message self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=1) msg = ch.basic_get(q_name) # Check the first message self.assertIsInstance(msg, tuple) rx_method, rx_properties, rx_body = msg self.assertIsInstance(rx_method, pika.spec.Basic.GetOk) self.assertEqual(rx_method.delivery_tag, 1) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, exg_name) self.assertEqual(rx_method.routing_key, routing_key) self.assertIsInstance(rx_properties, pika.BasicProperties) self.assertEqual(rx_body, as_bytes('routable-message')) # There shouldn't be any more events now self.assertFalse(ch._pending_events) # Ack the message ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) # Verify that the queue is now empty self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) class TestPublishAndConsumeWithPubacksAndQosOfOne(BlockingTestCaseBase): def test(self): # pylint: disable=R0914,R0915 """BlockingChannel.basic_publish, publish, basic_consume, QoS, \ Basic.Cancel from broker """ connection = self._connect() ch = connection.channel() q_name = 'TestPublishAndConsumeAndQos_q' + uuid.uuid1().hex exg_name = 'TestPublishAndConsumeAndQos_exg_' + uuid.uuid1().hex routing_key = 'TestPublishAndConsumeAndQos' # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous res = 
ch.confirm_delivery() self.assertIsNone(res) # Declare a new exchange ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct) self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Bind the queue to the exchange using routing key ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key) # Deposit a message in the queue msg1_headers = dict( test_name='TestPublishAndConsumeWithPubacksAndQosOfOne') msg1_properties = pika.spec.BasicProperties(headers=msg1_headers) ch.basic_publish(exg_name, routing_key=routing_key, body='via-basic_publish', properties=msg1_properties, mandatory=True) # Deposit another message in the queue ch.basic_publish(exg_name, routing_key, body='via-publish', mandatory=True) # Check that the queue now has two messages frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 2) # Configure QoS for one message ch.basic_qos(prefetch_size=0, prefetch_count=1, global_qos=False) # Create a consumer rx_messages = [] consumer_tag = ch.basic_consume( q_name, lambda *args: rx_messages.append(args), auto_ack=False, exclusive=False, arguments=None) # Wait for first message to arrive while not rx_messages: connection.process_data_events(time_limit=None) self.assertEqual(len(rx_messages), 1) # Check the first message msg = rx_messages[0] self.assertIsInstance(msg, tuple) rx_ch, rx_method, rx_properties, rx_body = msg self.assertIs(rx_ch, ch) self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.consumer_tag, consumer_tag) self.assertEqual(rx_method.delivery_tag, 1) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, exg_name) self.assertEqual(rx_method.routing_key, routing_key) self.assertIsInstance(rx_properties, pika.BasicProperties) self.assertEqual(rx_properties.headers, msg1_headers) 
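# NOTE (illustrative aside, not part of the test suite): the assertions in
# these tests repeatedly inspect `ch._pending_events` and then drain it via
# `connection.process_data_events()`. The sketch below is a broker-free toy
# model of that buffer-then-dispatch contract; `ToyChannel` and its attribute
# names are hypothetical stand-ins, not pika's actual internals.

```python
from collections import deque

class ToyChannel:
    """Toy model: buffered events are dispatched only when the loop runs."""

    def __init__(self):
        self._pending_events = deque()  # events received but not dispatched
        self.returned = []              # user-visible dispatch results

    def _on_frame(self, payload):
        # A frame arriving from the broker is buffered, not dispatched yet.
        self._pending_events.append(payload)

    def process_data_events(self):
        # Dispatch everything buffered so far, analogous to what
        # BlockingConnection.process_data_events() does for pending events.
        while self._pending_events:
            self.returned.append(self._pending_events.popleft())

ch = ToyChannel()
ch._on_frame('msg1')
ch._on_frame('msg2')
assert len(ch._pending_events) == 2 and not ch.returned  # buffered only
ch.process_data_events()
assert not ch._pending_events
assert ch.returned == ['msg1', 'msg2']  # dispatched in arrival order
```

# This is why the tests can assert on pending-event counts before calling
# process_data_events(): nothing reaches user callbacks until the loop runs.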
self.assertEqual(rx_body, as_bytes('via-basic_publish')) # There shouldn't be any more events now self.assertFalse(ch._pending_events) # Ack the message so that the next one can arrive (we configured QoS # with prefetch_count=1) ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) # Get the second message while len(rx_messages) < 2: connection.process_data_events(time_limit=None) self.assertEqual(len(rx_messages), 2) msg = rx_messages[1] self.assertIsInstance(msg, tuple) rx_ch, rx_method, rx_properties, rx_body = msg self.assertIs(rx_ch, ch) self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.consumer_tag, consumer_tag) self.assertEqual(rx_method.delivery_tag, 2) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, exg_name) self.assertEqual(rx_method.routing_key, routing_key) self.assertIsInstance(rx_properties, pika.BasicProperties) self.assertEqual(rx_body, as_bytes('via-publish')) # There shouldn't be any more events now self.assertFalse(ch._pending_events) ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) # Verify that the queue is now empty self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) # Attempt to consume again with a short timeout connection.process_data_events(time_limit=0.005) self.assertEqual(len(rx_messages), 2) # Delete the queue and wait for consumer cancellation rx_cancellations = [] ch.add_on_cancel_callback(rx_cancellations.append) ch.queue_delete(q_name) ch.start_consuming() self.assertEqual(len(rx_cancellations), 1) frame, = rx_cancellations # pylint: disable=E0632 self.assertEqual(frame.method.consumer_tag, consumer_tag) class TestBasicConsumeWithAckFromAnotherThread(BlockingTestCaseBase): def test(self): # pylint: disable=R0914,R0915 """BlockingChannel.basic_consume with ack from another thread and \ requesting basic_ack via add_callback_threadsafe """ # This test simulates processing of a message on another thread 
and # then requesting an ACK to be dispatched on the connection's thread # via BlockingConnection.add_callback_threadsafe connection = self._connect() ch = connection.channel() q_name = 'TestBasicConsumeWithAckFromAnotherThread_q' + uuid.uuid1().hex exg_name = ('TestBasicConsumeWithAckFromAnotherThread_exg' + uuid.uuid1().hex) routing_key = 'TestBasicConsumeWithAckFromAnotherThread' # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous (for convenience) res = ch.confirm_delivery() self.assertIsNone(res) # Declare a new exchange ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct) self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Bind the queue to the exchange using routing key ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key) # Publish 2 messages with mandatory=True for synchronous processing ch.basic_publish(exg_name, routing_key, body='msg1', mandatory=True) ch.basic_publish(exg_name, routing_key, body='last-msg', mandatory=True) # Configure QoS for one message so that the 2nd message will be # delivered only after the 1st one is ACKed ch.basic_qos(prefetch_size=0, prefetch_count=1, global_qos=False) # Create a consumer rx_messages = [] def ackAndEnqueueMessageViaAnotherThread(rx_ch, rx_method, rx_properties, # pylint: disable=W0613 rx_body): LOGGER.debug( '%s: Got message body=%r; delivery-tag=%r', datetime.now(), rx_body, rx_method.delivery_tag) # Request ACK dispatch via add_callback_threadsafe from other # thread; if last message, cancel consumer so that start_consuming # can return def processOnConnectionThread(): LOGGER.debug('%s: ACKing message body=%r; delivery-tag=%r', datetime.now(), rx_body, rx_method.delivery_tag) ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) rx_messages.append(rx_body) # NOTE on 
python3, `b'last-msg' != 'last-msg'` if rx_body == b'last-msg': LOGGER.debug('%s: Canceling consumer consumer-tag=%r', datetime.now(), rx_method.consumer_tag) rx_ch.basic_cancel(rx_method.consumer_tag) # Spawn a thread to initiate ACKing timer = threading.Timer(0, lambda: connection.add_callback_threadsafe( processOnConnectionThread)) self.addCleanup(timer.cancel) timer.start() consumer_tag = ch.basic_consume( q_name, ackAndEnqueueMessageViaAnotherThread, auto_ack=False, exclusive=False, arguments=None) # Wait for both messages LOGGER.debug('%s: calling start_consuming(); consumer tag=%r', datetime.now(), consumer_tag) ch.start_consuming() LOGGER.debug('%s: Returned from start_consuming(); consumer tag=%r', datetime.now(), consumer_tag) self.assertEqual(len(rx_messages), 2) self.assertEqual(rx_messages[0], b'msg1') self.assertEqual(rx_messages[1], b'last-msg') class TestConsumeGeneratorWithAckFromAnotherThread(BlockingTestCaseBase): def test(self): # pylint: disable=R0914,R0915 """BlockingChannel.consume and requesting basic_ack from another \ thread via add_callback_threadsafe """ connection = self._connect() ch = connection.channel() q_name = ('TestConsumeGeneratorWithAckFromAnotherThread_q' + uuid.uuid1().hex) exg_name = ('TestConsumeGeneratorWithAckFromAnotherThread_exg' + uuid.uuid1().hex) routing_key = 'TestConsumeGeneratorWithAckFromAnotherThread' # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous (for convenience) res = ch.confirm_delivery() self.assertIsNone(res) # Declare a new exchange ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct) self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Bind the queue to the exchange using routing key ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key) # Publish 2 messages with 
mandatory=True for synchronous processing ch.basic_publish(exg_name, routing_key, body='msg1', mandatory=True) ch.basic_publish(exg_name, routing_key, body='last-msg', mandatory=True) # Configure QoS for one message so that the 2nd message will be # delivered only after the 1st one is ACKed ch.basic_qos(prefetch_size=0, prefetch_count=1, global_qos=False) # Create a consumer rx_messages = [] def ackAndEnqueueMessageViaAnotherThread(rx_ch, rx_method, rx_properties, # pylint: disable=W0613 rx_body): LOGGER.debug( '%s: Got message body=%r; delivery-tag=%r', datetime.now(), rx_body, rx_method.delivery_tag) # Request ACK dispatch via add_callback_threadsafe from other # thread; if last message, cancel consumer so that consumer # generator completes def processOnConnectionThread(): LOGGER.debug('%s: ACKing message body=%r; delivery-tag=%r', datetime.now(), rx_body, rx_method.delivery_tag) ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) rx_messages.append(rx_body) # NOTE on python3, `b'last-msg' != 'last-msg'` if rx_body == b'last-msg': LOGGER.debug('%s: Canceling consumer consumer-tag=%r', datetime.now(), rx_method.consumer_tag) # NOTE Need to use cancel() for the consumer generator # instead of basic_cancel() rx_ch.cancel() # Spawn a thread to initiate ACKing timer = threading.Timer(0, lambda: connection.add_callback_threadsafe( processOnConnectionThread)) self.addCleanup(timer.cancel) timer.start() for method, properties, body in ch.consume(q_name, auto_ack=False): ackAndEnqueueMessageViaAnotherThread(rx_ch=ch, rx_method=method, rx_properties=properties, rx_body=body) self.assertEqual(len(rx_messages), 2) self.assertEqual(rx_messages[0], b'msg1') self.assertEqual(rx_messages[1], b'last-msg') class TestTwoBasicConsumersOnSameChannel(BlockingTestCaseBase): def test(self): # pylint: disable=R0914 """BlockingChannel: two basic_consume consumers on same channel """ connection = self._connect() ch = connection.channel() exg_name = 
'TestPublishAndConsumeAndQos_exg_' + uuid.uuid1().hex q1_name = 'TestTwoBasicConsumersOnSameChannel_q1' + uuid.uuid1().hex q2_name = 'TestTwoBasicConsumersOnSameChannel_q2' + uuid.uuid1().hex q1_routing_key = 'TestTwoBasicConsumersOnSameChannel1' q2_routing_key = 'TestTwoBasicConsumersOnSameChannel2' # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous ch.confirm_delivery() # Declare a new exchange ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct) self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare the two new queues and bind them to the exchange ch.queue_declare(q1_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q1_name)) ch.queue_bind(q1_name, exchange=exg_name, routing_key=q1_routing_key) ch.queue_declare(q2_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q2_name)) ch.queue_bind(q2_name, exchange=exg_name, routing_key=q2_routing_key) # Deposit messages in the queues q1_tx_message_bodies = ['q1_message+%s' % (i,) for i in pika.compat.xrange(100)] for message_body in q1_tx_message_bodies: ch.basic_publish(exg_name, q1_routing_key, body=message_body, mandatory=True) q2_tx_message_bodies = ['q2_message+%s' % (i,) for i in pika.compat.xrange(150)] for message_body in q2_tx_message_bodies: ch.basic_publish(exg_name, q2_routing_key, body=message_body, mandatory=True) # Create the consumers q1_rx_messages = [] q1_consumer_tag = ch.basic_consume( q1_name, lambda *args: q1_rx_messages.append(args), auto_ack=False, exclusive=False, arguments=None) q2_rx_messages = [] q2_consumer_tag = ch.basic_consume( q2_name, lambda *args: q2_rx_messages.append(args), auto_ack=False, exclusive=False, arguments=None) # Wait for all messages to be delivered while (len(q1_rx_messages) < len(q1_tx_message_bodies) or len(q2_rx_messages) < len(q2_tx_message_bodies)): connection.process_data_events(time_limit=None) 
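# NOTE (illustrative aside): the wait loop above pumps
# process_data_events(time_limit=None) until a condition holds, which can hang
# forever if the broker wedges. A hedged sketch of a bounded variant follows;
# `wait_for` and `fake_pump` are hypothetical helpers, not pika APIs.

```python
import time

def wait_for(condition, pump, timeout=5.0, tick=0.1):
    """Run pump(tick) until condition() is true or timeout elapses."""
    deadline = time.monotonic() + timeout
    while not condition():
        if time.monotonic() >= deadline:
            raise TimeoutError('condition not met within %.1fs' % timeout)
        pump(tick)  # in real code: connection.process_data_events(time_limit=tick)

# Stand-in for process_data_events: delivers one queued item per call.
inbox, received = ['a', 'b', 'c'], []
def fake_pump(time_limit):
    if inbox:
        received.append(inbox.pop(0))

wait_for(lambda: len(received) == 3, fake_pump)
assert received == ['a', 'b', 'c']

# The timeout path raises instead of spinning forever.
try:
    wait_for(lambda: False, lambda tick: None, timeout=0.05, tick=0.01)
    timed_out = False
except TimeoutError:
    timed_out = True
assert timed_out
```

# A deadline like this keeps a stalled broker from turning a failing
# acceptance test into an indefinite hang.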
self.assertEqual(len(q2_rx_messages), len(q2_tx_message_bodies)) # Verify the messages def validate_messages(rx_messages, routing_key, consumer_tag, tx_message_bodies): self.assertEqual(len(rx_messages), len(tx_message_bodies)) for msg, expected_body in zip(rx_messages, tx_message_bodies): self.assertIsInstance(msg, tuple) rx_ch, rx_method, rx_properties, rx_body = msg self.assertIs(rx_ch, ch) self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.consumer_tag, consumer_tag) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, exg_name) self.assertEqual(rx_method.routing_key, routing_key) self.assertIsInstance(rx_properties, pika.BasicProperties) self.assertEqual(rx_body, as_bytes(expected_body)) # Validate q1 consumed messages validate_messages(rx_messages=q1_rx_messages, routing_key=q1_routing_key, consumer_tag=q1_consumer_tag, tx_message_bodies=q1_tx_message_bodies) # Validate q2 consumed messages validate_messages(rx_messages=q2_rx_messages, routing_key=q2_routing_key, consumer_tag=q2_consumer_tag, tx_message_bodies=q2_tx_message_bodies) # There shouldn't be any more events now self.assertFalse(ch._pending_events) class TestBasicCancelPurgesPendingConsumerCancellationEvt(BlockingTestCaseBase): def test(self): """BlockingChannel.basic_cancel purges pending _ConsumerCancellationEvt""" # pylint: disable=C0301 connection = self._connect() ch = connection.channel() q_name = ('TestBasicCancelPurgesPendingConsumerCancellationEvt_q' + uuid.uuid1().hex) ch.queue_declare(q_name) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) ch.basic_publish('', routing_key=q_name, body='via-publish', mandatory=True) # Create a consumer. 
Not passing a 'callback' to test client-generated # consumer tags rx_messages = [] consumer_tag = ch.basic_consume( q_name, lambda *args: rx_messages.append(args)) # Wait for the published message to arrive, but don't consume it while not ch._pending_events: # Issue synchronous command that forces processing of incoming I/O connection.channel().close() self.assertEqual(len(ch._pending_events), 1) self.assertIsInstance(ch._pending_events[0], blocking_connection._ConsumerDeliveryEvt) # Delete the queue and wait for broker-initiated consumer cancellation ch.queue_delete(q_name) while len(ch._pending_events) < 2: # Issue synchronous command that forces processing of incoming I/O connection.channel().close() self.assertEqual(len(ch._pending_events), 2) self.assertIsInstance(ch._pending_events[1], blocking_connection._ConsumerCancellationEvt) # Issue consumer cancellation and verify that the pending # _ConsumerCancellationEvt instance was removed messages = ch.basic_cancel(consumer_tag) self.assertEqual(messages, []) self.assertEqual(len(ch._pending_events), 0) class TestBasicPublishWithoutPubacks(BlockingTestCaseBase): def test(self): # pylint: disable=R0914,R0915 """BlockingChannel.basic_publish without pubacks""" connection = self._connect() ch = connection.channel() q_name = 'TestBasicPublishWithoutPubacks_q' + uuid.uuid1().hex exg_name = 'TestBasicPublishWithoutPubacks_exg_' + uuid.uuid1().hex routing_key = 'TestBasicPublishWithoutPubacks' # Declare a new exchange ch.exchange_declare(exg_name, exchange_type=ExchangeType.direct) self.addCleanup(connection.channel().exchange_delete, exg_name) # Declare a new queue ch.queue_declare(q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Bind the queue to the exchange using routing key ch.queue_bind(q_name, exchange=exg_name, routing_key=routing_key) # Deposit a message in the queue with mandatory=True msg1_headers = dict( test_name='TestBasicPublishWithoutPubacks') 
msg1_properties = pika.spec.BasicProperties(headers=msg1_headers) ch.basic_publish(exg_name, routing_key=routing_key, body='via-basic_publish_mandatory=True', properties=msg1_properties, mandatory=True) # Deposit a message in the queue with mandatory=False ch.basic_publish(exg_name, routing_key=routing_key, body='via-basic_publish_mandatory=False', mandatory=False) # Wait for the messages to arrive in queue self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=2) # Create a consumer. Not passing a 'callback' to test client-generated # consumer tags rx_messages = [] consumer_tag = ch.basic_consume( q_name, lambda *args: rx_messages.append(args), auto_ack=False, exclusive=False, arguments=None) # Wait for first message to arrive while not rx_messages: connection.process_data_events(time_limit=None) self.assertGreaterEqual(len(rx_messages), 1) # Check the first message msg = rx_messages[0] self.assertIsInstance(msg, tuple) rx_ch, rx_method, rx_properties, rx_body = msg self.assertIs(rx_ch, ch) self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.consumer_tag, consumer_tag) self.assertEqual(rx_method.delivery_tag, 1) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, exg_name) self.assertEqual(rx_method.routing_key, routing_key) self.assertIsInstance(rx_properties, pika.BasicProperties) self.assertEqual(rx_properties.headers, msg1_headers) self.assertEqual(rx_body, as_bytes('via-basic_publish_mandatory=True')) # There shouldn't be any more events now self.assertFalse(ch._pending_events) # Ack the message so that the next one can arrive (we configured QoS # with prefetch_count=1) ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) # Get the second message while len(rx_messages) < 2: connection.process_data_events(time_limit=None) self.assertEqual(len(rx_messages), 2) msg = rx_messages[1] self.assertIsInstance(msg, tuple) rx_ch, rx_method, rx_properties, rx_body = msg 
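# NOTE (illustrative aside): the consumers in these tests capture each
# delivery with `lambda *args: rx_messages.append(args)` and later unpack the
# tuple as (channel, method, properties, body). A minimal broker-free
# illustration of that capture-and-unpack idiom, using stand-in values rather
# than real pika dispatcher arguments:

```python
rx_messages = []

# Callback records the full positional-argument tuple for each delivery.
on_message = lambda *args: rx_messages.append(args)

# Simulate two deliveries; in real use pika passes (channel, method,
# properties, body) for each Basic.Deliver.
on_message('ch', 'method-1', 'props', b'body-1')
on_message('ch', 'method-2', 'props', b'body-2')

rx_ch, rx_method, rx_properties, rx_body = rx_messages[0]
assert rx_body == b'body-1'
assert len(rx_messages) == 2
```

# Recording raw tuples keeps the callback trivial and defers all checking to
# the test body, which is why the assertions here unpack before comparing.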
self.assertIs(rx_ch, ch) self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.consumer_tag, consumer_tag) self.assertEqual(rx_method.delivery_tag, 2) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, exg_name) self.assertEqual(rx_method.routing_key, routing_key) self.assertIsInstance(rx_properties, pika.BasicProperties) self.assertEqual(rx_body, as_bytes('via-basic_publish_mandatory=False')) # There shouldn't be any more events now self.assertFalse(ch._pending_events) ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) # Verify that the queue is now empty self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) # Attempt to consume again with a short timeout connection.process_data_events(time_limit=0.005) self.assertEqual(len(rx_messages), 2) class TestPublishFromBasicConsumeCallback(BlockingTestCaseBase): def test(self): """BlockingChannel.basic_publish from basic_consume callback """ connection = self._connect() ch = connection.channel() src_q_name = ( 'TestPublishFromBasicConsumeCallback_src_q' + uuid.uuid1().hex) dest_q_name = ( 'TestPublishFromBasicConsumeCallback_dest_q' + uuid.uuid1().hex) # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous ch.confirm_delivery() # Declare source and destination queues ch.queue_declare(src_q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(src_q_name)) ch.queue_declare(dest_q_name, auto_delete=True) self.addCleanup(lambda: self._connect().channel().queue_delete(dest_q_name)) # Deposit a message in the source queue ch.basic_publish('', routing_key=src_q_name, body='via-publish', mandatory=True) # Create a consumer def on_consume(channel, method, props, body): channel.basic_publish( '', routing_key=dest_q_name, body=body, properties=props, mandatory=True) channel.basic_ack(method.delivery_tag) ch.basic_consume(src_q_name, 
on_consume, auto_ack=False, exclusive=False, arguments=None) # Consume from destination queue for _, _, rx_body in ch.consume(dest_q_name, auto_ack=True): self.assertEqual(rx_body, as_bytes('via-publish')) break else: self.fail('failed to consume a message from destination q') class TestStopConsumingFromBasicConsumeCallback(BlockingTestCaseBase): def test(self): """BlockingChannel.stop_consuming from basic_consume callback """ connection = self._connect() ch = connection.channel() q_name = ( 'TestStopConsumingFromBasicConsumeCallback_q' + uuid.uuid1().hex) # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous ch.confirm_delivery() # Declare the queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(connection.channel().queue_delete, q_name) # Deposit two messages in the queue ch.basic_publish('', routing_key=q_name, body='via-publish1', mandatory=True) ch.basic_publish('', routing_key=q_name, body='via-publish2', mandatory=True) # Create a consumer def on_consume(channel, method, props, body): # pylint: disable=W0613 channel.stop_consuming() channel.basic_ack(method.delivery_tag) ch.basic_consume(q_name, on_consume, auto_ack=False, exclusive=False, arguments=None) ch.start_consuming() ch.close() ch = connection.channel() # Verify that only the second message is present in the queue _, _, rx_body = ch.basic_get(q_name) self.assertEqual(rx_body, as_bytes('via-publish2')) msg = ch.basic_get(q_name) self.assertTupleEqual(msg, (None, None, None)) class TestCloseChannelFromBasicConsumeCallback(BlockingTestCaseBase): def test(self): """BlockingChannel.close from basic_consume callback """ connection = self._connect() ch = connection.channel() q_name = ( 'TestCloseChannelFromBasicConsumeCallback_q' + uuid.uuid1().hex) # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous ch.confirm_delivery() # Declare the queue ch.queue_declare(q_name,
auto_delete=False) self.addCleanup(connection.channel().queue_delete, q_name) # Deposit two messages in the queue ch.basic_publish('', routing_key=q_name, body='via-publish1', mandatory=True) ch.basic_publish('', routing_key=q_name, body='via-publish2', mandatory=True) # Create a consumer def on_consume(channel, method, props, body): # pylint: disable=W0613 channel.close() ch.basic_consume(q_name, on_consume, auto_ack=False, exclusive=False, arguments=None) ch.start_consuming() self.assertTrue(ch.is_closed) # Verify that both messages are present in the queue ch = connection.channel() _, _, rx_body = ch.basic_get(q_name) self.assertEqual(rx_body, as_bytes('via-publish1')) _, _, rx_body = ch.basic_get(q_name) self.assertEqual(rx_body, as_bytes('via-publish2')) class TestCloseConnectionFromBasicConsumeCallback(BlockingTestCaseBase): def test(self): """BlockingConnection.close from basic_consume callback """ connection = self._connect() ch = connection.channel() q_name = ( 'TestCloseConnectionFromBasicConsumeCallback_q' + uuid.uuid1().hex) # Place channel in publisher-acknowledgments mode so that publishing # with mandatory=True will be synchronous ch.confirm_delivery() # Declare the queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Deposit two messages in the queue ch.basic_publish('', routing_key=q_name, body='via-publish1', mandatory=True) ch.basic_publish('', routing_key=q_name, body='via-publish2', mandatory=True) # Create a consumer def on_consume(channel, method, props, body): # pylint: disable=W0613 connection.close() ch.basic_consume(q_name, on_consume, auto_ack=False, exclusive=False, arguments=None) ch.start_consuming() self.assertTrue(ch.is_closed) self.assertTrue(connection.is_closed) # Verify that both messages are present in the queue ch = self._connect().channel() _, _, rx_body = ch.basic_get(q_name) self.assertEqual(rx_body, as_bytes('via-publish1')) _, _, rx_body = 
ch.basic_get(q_name) self.assertEqual(rx_body, as_bytes('via-publish2')) class TestStartConsumingRaisesChannelClosedOnSameChannelFailure(BlockingTestCaseBase): def test(self): """start_consuming() exits with ChannelClosed exception on same channel failure """ connection = self._connect() # Fail test if exception leaks back into I/O loop self._instrument_io_loop_exception_leak_detection(connection) ch = connection.channel() q_name = ( 'TestStartConsumingPassesChannelClosedOnSameChannelFailure_q' + uuid.uuid1().hex) # Declare the queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) ch.basic_consume(q_name, lambda *args, **kwargs: None, auto_ack=False, exclusive=False, arguments=None) # Schedule a callback that will cause a channel error on the consumer's # channel by publishing to an unknown exchange. This will cause the # broker to close our channel. connection.add_callback_threadsafe( lambda: ch.basic_publish( exchange=q_name, routing_key='123', body=b'Nope this is wrong')) with self.assertRaises(pika.exceptions.ChannelClosedByBroker): ch.start_consuming() class TestStartConsumingReturnsAfterCancelFromBroker(BlockingTestCaseBase): def test(self): """start_consuming() returns after Cancel from broker """ connection = self._connect() ch = connection.channel() q_name = ( 'TestStartConsumingExitsOnCancelFromBroker_q' + uuid.uuid1().hex) # Declare the queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) consumer_tag = ch.basic_consume(q_name, lambda *args, **kwargs: None, auto_ack=False, exclusive=False, arguments=None) # Schedule a callback that will run while start_consuming() is # executing and delete the queue.
This will cause the broker to cancel # our consumer connection.add_callback_threadsafe( lambda: self._connect().channel().queue_delete(q_name)) ch.start_consuming() self.assertNotIn(consumer_tag, ch._consumer_infos) class TestNonPubAckPublishAndConsumeHugeMessage(BlockingTestCaseBase): def test(self): """BlockingChannel.publish/consume huge message""" connection = self._connect() ch = connection.channel() q_name = 'TestPublishAndConsumeHugeMessage_q' + uuid.uuid1().hex body = 'a' * 1000000 # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Publish a message to the queue by way of default exchange ch.basic_publish(exchange='', routing_key=q_name, body=body) LOGGER.info('Published message body size=%s', len(body)) # Consume the message for rx_method, rx_props, rx_body in ch.consume(q_name, auto_ack=False, exclusive=False, arguments=None): self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.delivery_tag, 1) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, '') self.assertEqual(rx_method.routing_key, q_name) self.assertIsInstance(rx_props, pika.BasicProperties) self.assertEqual(rx_body, as_bytes(body)) # Ack the message ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) break # There shouldn't be any more events now self.assertFalse(ch._queue_consumer_generator.pending_events) # Verify that the queue is now empty ch.close() ch = connection.channel() self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) class TestNonPubAckPublishAndConsumeManyMessages(BlockingTestCaseBase): def test(self): """BlockingChannel non-pub-ack publish/consume many messages""" connection = self._connect() ch = connection.channel() q_name = ('TestNonPubackPublishAndConsumeManyMessages_q' + uuid.uuid1().hex) body = 'b' * 1024 num_messages_to_publish = 500 # Declare a new queue 
ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) for _ in pika.compat.xrange(num_messages_to_publish): # Publish a message to the queue by way of default exchange ch.basic_publish(exchange='', routing_key=q_name, body=body) # Consume the messages num_consumed = 0 for rx_method, rx_props, rx_body in ch.consume(q_name, auto_ack=False, exclusive=False, arguments=None): num_consumed += 1 self.assertIsInstance(rx_method, pika.spec.Basic.Deliver) self.assertEqual(rx_method.delivery_tag, num_consumed) self.assertFalse(rx_method.redelivered) self.assertEqual(rx_method.exchange, '') self.assertEqual(rx_method.routing_key, q_name) self.assertIsInstance(rx_props, pika.BasicProperties) self.assertEqual(rx_body, as_bytes(body)) # Ack the message ch.basic_ack(delivery_tag=rx_method.delivery_tag, multiple=False) if num_consumed >= num_messages_to_publish: break # There shouldn't be any more events now self.assertFalse(ch._queue_consumer_generator.pending_events) ch.close() self.assertIsNone(ch._queue_consumer_generator) # Verify that the queue is now empty ch = connection.channel() self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) class TestBasicCancelWithNonAckableConsumer(BlockingTestCaseBase): def test(self): """BlockingChannel user cancels non-ackable consumer via basic_cancel""" connection = self._connect() ch = connection.channel() q_name = ( 'TestBasicCancelWithNonAckableConsumer_q' + uuid.uuid1().hex) body1 = 'a' * 1024 body2 = 'b' * 2048 # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Publish two messages to the queue by way of default exchange ch.basic_publish(exchange='', routing_key=q_name, body=body1) ch.basic_publish(exchange='', routing_key=q_name, body=body2) # Wait for queue to contain both messages self._assert_exact_message_count_with_retries(channel=ch, 
queue=q_name, expected_count=2) # Create a consumer that uses automatic ack mode consumer_tag = ch.basic_consume(q_name, lambda *x: None, auto_ack=True, exclusive=False, arguments=None) # Wait for all messages to be sent by broker to client self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) # Cancel the consumer messages = ch.basic_cancel(consumer_tag) # Both messages should have been on their way when we cancelled self.assertEqual(len(messages), 2) _, _, rx_body1 = messages[0] self.assertEqual(rx_body1, as_bytes(body1)) _, _, rx_body2 = messages[1] self.assertEqual(rx_body2, as_bytes(body2)) ch.close() ch = connection.channel() # Verify that the queue is now empty frame = ch.queue_declare(q_name, passive=True) self.assertEqual(frame.method.message_count, 0) class TestBasicCancelWithAckableConsumer(BlockingTestCaseBase): def test(self): """BlockingChannel user cancels ackable consumer via basic_cancel""" connection = self._connect() ch = connection.channel() q_name = ( 'TestBasicCancelWithAckableConsumer_q' + uuid.uuid1().hex) body1 = 'a' * 1024 body2 = 'b' * 2048 # Declare a new queue ch.queue_declare(q_name, auto_delete=False) self.addCleanup(lambda: self._connect().channel().queue_delete(q_name)) # Publish two messages to the queue by way of default exchange ch.basic_publish(exchange='', routing_key=q_name, body=body1) ch.basic_publish(exchange='', routing_key=q_name, body=body2) # Wait for queue to contain both messages self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=2) # Create an ackable consumer consumer_tag = ch.basic_consume(q_name, lambda *x: None, auto_ack=False, exclusive=False, arguments=None) # Wait for all messages to be sent by broker to client self._assert_exact_message_count_with_retries(channel=ch, queue=q_name, expected_count=0) # Cancel the consumer messages = ch.basic_cancel(consumer_tag) # Both messages should have been on their way when we cancelled 
        self.assertEqual(len(messages), 0)

        ch.close()

        ch = connection.channel()

        # Verify that canceling the ackable consumer restored both messages
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=2)


class TestUnackedMessageAutoRestoredToQueueOnChannelClose(BlockingTestCaseBase):
    def test(self):
        """BlockingChannel unacked message restored to q on channel close"""
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestUnackedMessageAutoRestoredToQueueOnChannelClose_q' +
                  uuid.uuid1().hex)
        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Publish two messages to the queue by way of default exchange
        ch.basic_publish(exchange='', routing_key=q_name, body=body1)
        ch.basic_publish(exchange='', routing_key=q_name, body=body2)

        # Consume the events, but don't ack
        rx_messages = []
        ch.basic_consume(q_name,
                         lambda *args: rx_messages.append(args),
                         auto_ack=False,
                         exclusive=False,
                         arguments=None)
        while len(rx_messages) != 2:
            connection.process_data_events(time_limit=None)

        self.assertEqual(rx_messages[0][1].delivery_tag, 1)
        self.assertEqual(rx_messages[1][1].delivery_tag, 2)

        # Verify no more ready messages in queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Closing channel should restore messages back to queue
        ch.close()

        # Verify that there are two messages in q now
        ch = connection.channel()
        self._assert_exact_message_count_with_retries(channel=ch,
                                                      queue=q_name,
                                                      expected_count=2)


class TestNoAckMessageNotRestoredToQueueOnChannelClose(BlockingTestCaseBase):
    def test(self):
        """BlockingChannel no-ack message not restored to q on channel close"""
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestNoAckMessageNotRestoredToQueueOnChannelClose_q' +
                  uuid.uuid1().hex)
        body1 = 'a' * 1024
        body2 = 'b' * 2048

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Publish two messages to the queue by way of default exchange
        ch.basic_publish(exchange='', routing_key=q_name, body=body1)
        ch.basic_publish(exchange='', routing_key=q_name, body=body2)

        # Consume in no-ack mode, so there is nothing for the client to ack
        num_messages = 0
        for rx_method, _, _ in ch.consume(q_name, auto_ack=True,
                                          exclusive=False):
            num_messages += 1
            self.assertEqual(rx_method.delivery_tag, num_messages)
            if num_messages == 2:
                break
        else:
            self.fail('expected 2 messages, but consumed %i' %
                      (num_messages,))

        # Verify no more ready messages in queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)

        # Closing channel should not restore no-ack messages back to queue
        ch.close()

        # Verify that there are no messages in q now
        ch = connection.channel()
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.message_count, 0)


class TestConsumeGeneratorInactivityTimeout(BlockingTestCaseBase):
    def test(self):
        """BlockingChannel consume returns 3-tuple of None values on inactivity
        timeout
        """
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestConsumeGeneratorInactivityTimeout_q' +
                  uuid.uuid1().hex)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)

        # Expect to get only (None, None, None) upon inactivity timeout, since
        # there are no messages in queue
        for msg in ch.consume(q_name, inactivity_timeout=0.1):
            self.assertEqual(msg, (None, None, None))
            break
        else:
            self.fail('expected (None, None, None), but iterator stopped')


class TestConsumeGeneratorInterruptedByCancelFromBroker(BlockingTestCaseBase):
    def test(self):
        """BlockingChannel consume generator is interrupted by broker's Cancel
        """
        connection = self._connect()
        self.assertTrue(connection.consumer_cancel_notify_supported)

        ch = connection.channel()

        q_name = ('TestConsumeGeneratorInterruptedByCancelFromBroker_q' +
                  uuid.uuid1().hex)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)

        queue_deleted = False
        for _ in ch.consume(q_name, auto_ack=False, inactivity_timeout=0.001):
            if not queue_deleted:
                # Delete the queue to force Basic.Cancel from the broker
                ch.queue_delete(q_name)
                queue_deleted = True

        self.assertIsNone(ch._queue_consumer_generator)


class TestConsumeGeneratorCancelEncountersCancelFromBroker(BlockingTestCaseBase):
    def test(self):
        """BlockingChannel consume generator cancel called when broker's Cancel
        is enqueued
        """
        connection = self._connect()
        self.assertTrue(connection.consumer_cancel_notify_supported)

        ch = connection.channel()

        q_name = ('TestConsumeGeneratorCancelEncountersCancelFromBroker_q' +
                  uuid.uuid1().hex)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=True)

        for _ in ch.consume(q_name, auto_ack=False, inactivity_timeout=0.001):
            # Delete the queue to force Basic.Cancel from the broker
            ch.queue_delete(q_name)

            # Wait for server's Basic.Cancel
            while not ch._queue_consumer_generator.pending_events:
                connection.process_data_events()

            # Confirm it's Basic.Cancel
            self.assertIsInstance(
                ch._queue_consumer_generator.pending_events[0],
                blocking_connection._ConsumerCancellationEvt)

            # Now attempt to cancel the consumer generator
            ch.cancel()
            self.assertIsNone(ch._queue_consumer_generator)


class TestConsumeGeneratorPassesChannelClosedOnSameChannelFailure(BlockingTestCaseBase):
    def test(self):
        """consume() exits with ChannelClosed exception on same channel failure
        """
        connection = self._connect()

        # Fail test if exception leaks back into I/O loop
        self._instrument_io_loop_exception_leak_detection(connection)

        ch = connection.channel()

        q_name = (
            'TestConsumeGeneratorPassesChannelClosedOnSameChannelFailure_q' +
            uuid.uuid1().hex)

        # Declare the queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Schedule a callback that will cause a channel error on the consumer's
        # channel by publishing to an unknown exchange. This will cause the
        # broker to close our channel.
        connection.add_callback_threadsafe(
            lambda: ch.basic_publish(
                exchange=q_name,
                routing_key='123',
                body=b'Nope this is wrong'))

        with self.assertRaises(pika.exceptions.ChannelClosedByBroker):
            for _ in ch.consume(q_name):
                pass


class TestChannelFlow(BlockingTestCaseBase):
    def test(self):
        """BlockingChannel Channel.Flow activate and deactivate"""
        connection = self._connect()

        ch = connection.channel()

        q_name = ('TestChannelFlow_q' + uuid.uuid1().hex)

        # Declare a new queue
        ch.queue_declare(q_name, auto_delete=False)
        self.addCleanup(lambda: self._connect().channel().queue_delete(q_name))

        # Verify zero active consumers on the queue
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.consumer_count, 0)

        # Create consumer
        ch.basic_consume(q_name, lambda *args: None)

        # Verify one active consumer on the queue now
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.consumer_count, 1)

        # Activate flow from default state (active by default)
        active = ch.flow(True)
        self.assertEqual(active, True)

        # Verify still one active consumer on the queue now
        frame = ch.queue_declare(q_name, passive=True)
        self.assertEqual(frame.method.consumer_count, 1)


class TestChannelRaisesWrongStateWhenDeclaringQueueOnClosedChannel(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: Declaring queue on closed channel raises
        ChannelWrongStateError"""
        q_name = (
            'TestChannelRaisesWrongStateWhenDeclaringQueueOnClosedChannel_q' +
            uuid.uuid1().hex)

        channel = self._connect().channel()
        channel.close()
        with self.assertRaises(pika.exceptions.ChannelWrongStateError):
            channel.queue_declare(q_name)


class TestChannelRaisesWrongStateWhenClosingClosedChannel(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: Closing closed channel raises
        ChannelWrongStateError"""
        channel = self._connect().channel()
        channel.close()
        with self.assertRaises(pika.exceptions.ChannelWrongStateError):
            channel.close()


class TestChannelContextManagerClosesChannel(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: channel context manager closes channel on
        exit"""
        with self._connect().channel() as channel:
            self.assertTrue(channel.is_open)

        self.assertTrue(channel.is_closed)


class TestChannelContextManagerExitSurvivesClosedChannel(BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: channel context manager exit survives closed
        channel"""
        with self._connect().channel() as channel:
            self.assertTrue(channel.is_open)
            channel.close()
            self.assertTrue(channel.is_closed)

        self.assertTrue(channel.is_closed)


class TestChannelContextManagerDoesNotSuppressChannelClosedByBroker(
        BlockingTestCaseBase):
    def test(self):
        """BlockingConnection: channel context manager doesn't suppress
        ChannelClosedByBroker exception"""
        exg_name = (
            "TestChannelContextManagerDoesNotSuppressChannelClosedByBroker" +
            uuid.uuid1().hex)

        with self.assertRaises(pika.exceptions.ChannelClosedByBroker):
            with self._connect().channel() as channel:
                # Passively declaring non-existent exchange should force broker
                # to close channel
                channel.exchange_declare(exg_name, passive=True)

        self.assertTrue(channel.is_closed)


if __name__ == '__main__':
    unittest.main()


pika-1.2.0/tests/acceptance/enforce_one_basicget_test.py

import unittest

from mock import MagicMock

from pika.frame import Method, Header
from pika.exceptions import DuplicateGetOkCallback
from pika.channel import Channel
from pika.connection import Connection


class OnlyOneBasicGetTestCase(unittest.TestCase):
    def setUp(self):
        self.channel = Channel(MagicMock(Connection)(), 0, None)
        self.channel._state = Channel.OPEN
        self.callback = MagicMock()

    def test_two_basic_get_with_callback(self):
        self.channel.basic_get('test-queue', self.callback)
        self.channel._on_getok(MagicMock(Method)(), MagicMock(Header)(), '')
        self.channel.basic_get('test-queue', self.callback)
        self.channel._on_getok(MagicMock(Method)(), MagicMock(Header)(), '')
        self.assertEqual(self.callback.call_count, 2)

    def test_two_basic_get_without_callback(self):
        self.channel.basic_get('test-queue', self.callback)
        with self.assertRaises(DuplicateGetOkCallback):
            self.channel.basic_get('test-queue', self.callback)


if __name__ == '__main__':
    unittest.main()


pika-1.2.0/tests/acceptance/io_services_tests.py

"""
Tests of nbio_interface.AbstractIOServices adaptations
"""

import collections
import errno
import logging
import os
import platform
import socket
import time
import unittest

import pika.compat
from pika.adapters.utils import nbio_interface
from pika.compat import ON_WINDOWS

from tests.misc.forward_server import ForwardServer
from tests.stubs.io_services_test_stubs import IOServicesTestStubs


# too-many-lines
# pylint: disable=C0302

# Suppress missing-docstring to allow test method names to be printed by our
# test runner
# pylint: disable=C0111

# invalid-name
# pylint: disable=C0103

# protected-access
# pylint: disable=W0212

# too-many-locals
# pylint: disable=R0914


class AsyncServicesTestBase(unittest.TestCase):

    @property
    def logger(self):
        """Return the logger for tests to use"""
        return logging.getLogger(self.__class__.__module__ + '.'
+ self.__class__.__name__) def create_nonblocking_tcp_socket(self): """Create a TCP stream socket and schedule cleanup to close it """ sock = socket.socket() sock.setblocking(False) self.addCleanup(sock.close) return sock def create_nonblocking_socketpair(self): """Creates a non-blocking socket pair and schedules cleanup to close them :returns: two-tuple of connected non-blocking sockets """ pair = pika.compat._nonblocking_socketpair() # pylint: disable=W0212 self.addCleanup(pair[0].close) self.addCleanup(pair[1].close) return pair def create_blocking_socketpair(self): """Creates a blocking socket pair and schedules cleanup to close them :returns: two-tuple of connected non-blocking sockets """ pair = self.create_nonblocking_socketpair() pair[0].setblocking(True) # pylint: disable=E1101 pair[1].setblocking(True) return pair @staticmethod def safe_connect_nonblocking_socket(sock, addr_pair): """Initiate socket connection, suppressing EINPROGRESS/EWOULDBLOCK :param socket.socket sock :param addr_pair: two tuple of address string and port integer """ try: sock.connect(addr_pair) except pika.compat.SOCKET_ERROR as error: # EINPROGRESS for posix and EWOULDBLOCK for windows if error.errno not in (errno.EINPROGRESS, errno.EWOULDBLOCK,): raise def get_dead_socket_address(self): """ :return: socket address pair (ip-addr, port) that will refuse connection """ s1, s2 = pika.compat._nonblocking_socketpair() # pylint: disable=W0212 s2.close() self.addCleanup(s1.close) return s1.getsockname() # pylint: disable=E1101 class TestGetNativeIOLoop(AsyncServicesTestBase, IOServicesTestStubs): def start(self): native_loop = self.create_nbio().get_native_ioloop() self.assertIsNotNone(self._native_loop) self.assertIs(native_loop, self._native_loop) class TestRunWithStopFromThreadsafeCallback(AsyncServicesTestBase, IOServicesTestStubs): def start(self): loop = self.create_nbio() bucket = [] def callback(): loop.stop() bucket.append('I was called') loop.add_callback_threadsafe(callback) 
loop.run() self.assertEqual(bucket, ['I was called']) class TestCallLaterDoesNotCallAheadOfTime(AsyncServicesTestBase, IOServicesTestStubs): def start(self): loop = self.create_nbio() bucket = [] def callback(): loop.stop() bucket.append('I was here') start_time = pika.compat.time_now() loop.call_later(0.1, callback) loop.run() self.assertGreaterEqual(round(pika.compat.time_now() - start_time, 3), 0.1) self.assertEqual(bucket, ['I was here']) class TestCallLaterCancelReturnsNone(AsyncServicesTestBase, IOServicesTestStubs): def start(self): loop = self.create_nbio() self.assertIsNone(loop.call_later(0, lambda: None).cancel()) class TestCallLaterCancelTwiceFromOwnCallback(AsyncServicesTestBase, IOServicesTestStubs): def start(self): loop = self.create_nbio() bucket = [] def callback(): timer.cancel() timer.cancel() loop.stop() bucket.append('I was here') timer = loop.call_later(0.1, callback) loop.run() self.assertEqual(bucket, ['I was here']) class TestCallLaterCallInOrder(AsyncServicesTestBase, IOServicesTestStubs): def start(self): loop = self.create_nbio() bucket = [] loop.call_later(0.3, lambda: bucket.append(3) or loop.stop()) loop.call_later(0, lambda: bucket.append(1)) loop.call_later(0.15, lambda: bucket.append(2)) loop.run() self.assertEqual(bucket, [1, 2, 3]) class TestCallLaterCancelledDoesNotCallBack(AsyncServicesTestBase, IOServicesTestStubs): def start(self): loop = self.create_nbio() bucket = [] timer1 = loop.call_later(0, lambda: bucket.append(1)) timer1.cancel() loop.call_later(0.15, lambda: bucket.append(2) or loop.stop()) loop.run() self.assertEqual(bucket, [2]) class SocketWatcherTestBase(AsyncServicesTestBase): WatcherActivity = collections.namedtuple( "io_services_test_WatcherActivity", ['readable', 'writable']) def _check_socket_watchers_fired(self, sock, expected): # pylint: disable=R0914 """Registers reader and writer for the given socket, runs the event loop until either one fires and asserts against expectation. 
:param AsyncServicesTestBase | IOServicesTestStubs self: :param socket.socket sock: :param WatcherActivity expected: What's expected by caller """ # provided by IOServicesTestStubs mixin nbio = self.create_nbio() # pylint: disable=E1101 stops_requested = [] def stop_loop(): if not stops_requested: nbio.stop() stops_requested.append(1) reader_bucket = [False] def on_readable(): self.logger.debug('on_readable() called.') reader_bucket.append(True) stop_loop() writer_bucket = [False] def on_writable(): self.logger.debug('on_writable() called.') writer_bucket.append(True) stop_loop() timeout_bucket = [] def on_timeout(): timeout_bucket.append(True) stop_loop() timeout_timer = nbio.call_later(5, on_timeout) nbio.set_reader(sock.fileno(), on_readable) nbio.set_writer(sock.fileno(), on_writable) try: nbio.run() finally: timeout_timer.cancel() nbio.remove_reader(sock.fileno()) nbio.remove_writer(sock.fileno()) if timeout_bucket: raise AssertionError('which_socket_watchers_fired() timed out.') readable = reader_bucket[-1] writable = writer_bucket[-1] if readable != expected.readable: raise AssertionError( 'Expected readable={!r}, but got {!r} (writable={!r})'.format( expected.readable, readable, writable)) if writable != expected.writable: raise AssertionError( 'Expected writable={!r}, but got {!r} (readable={!r})'.format( expected.writable, writable, readable)) class TestSocketWatchersUponConnectionAndNoIncomingData(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, _s2 = self.create_blocking_socketpair() expected = self.WatcherActivity(readable=False, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersUponConnectionAndIncomingData( SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.send(b'abc') expected = self.WatcherActivity(readable=True, writable=True) self._check_socket_watchers_fired(s1, expected) class 
TestSocketWatchersWhenFailsToConnect(SocketWatcherTestBase, IOServicesTestStubs): def start(self): sock = self.create_nonblocking_tcp_socket() self.safe_connect_nonblocking_socket(sock, self.get_dead_socket_address()) # NOTE: Unlike POSIX, Windows select doesn't indicate as # readable/writable a socket that failed to connect - it reflects the # failure only via exceptfds, which native ioloop's usually attribute to # the writable indication. expected = self.WatcherActivity(readable=False if ON_WINDOWS else True, writable=True) self._check_socket_watchers_fired(sock, expected) class TestSocketWatchersAfterRemotePeerCloses(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.close() expected = self.WatcherActivity(readable=True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterRemotePeerClosesWithIncomingData( SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.send(b'abc') s2.close() expected = self.WatcherActivity(readable=True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterRemotePeerShutsRead(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.shutdown(socket.SHUT_RD) expected = self.WatcherActivity(readable=False, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterRemotePeerShutsWrite(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.shutdown(socket.SHUT_WR) expected = self.WatcherActivity(readable=True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterRemotePeerShutsWriteWithIncomingData( SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.send(b'abc') s2.shutdown(socket.SHUT_WR) expected = 
self.WatcherActivity(readable=True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterRemotePeerShutsReadWrite(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, s2 = self.create_blocking_socketpair() s2.shutdown(socket.SHUT_RDWR) expected = self.WatcherActivity(readable=True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterLocalPeerShutsRead(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, _s2 = self.create_blocking_socketpair() s1.shutdown(socket.SHUT_RD) # pylint: disable=E1101 # NOTE: Unlike POSIX, Windows select doesn't indicate as readable socket # that was shut down locally with SHUT_RD. expected = self.WatcherActivity(readable=False if ON_WINDOWS else True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterLocalPeerShutsWrite(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, _s2 = self.create_blocking_socketpair() s1.shutdown(socket.SHUT_WR) # pylint: disable=E1101 expected = self.WatcherActivity(readable=False, writable=True) self._check_socket_watchers_fired(s1, expected) class TestSocketWatchersAfterLocalPeerShutsReadWrite(SocketWatcherTestBase, IOServicesTestStubs): def start(self): s1, _s2 = self.create_blocking_socketpair() s1.shutdown(socket.SHUT_RDWR) # pylint: disable=E1101 # NOTE: Unlike POSIX, Windows select doesn't indicate as readable socket # that was shut down locally with SHUT_RDWR. 
expected = self.WatcherActivity(readable=False if ON_WINDOWS else True, writable=True) self._check_socket_watchers_fired(s1, expected) class TestGetaddrinfoWWWGoogleDotComPort80(AsyncServicesTestBase, IOServicesTestStubs): def start(self): # provided by IOServicesTestStubs mixin nbio = self.create_nbio() result_bucket = [] def on_done(result): result_bucket.append(result) nbio.stop() ref = nbio.getaddrinfo('www.google.com', 80, socktype=socket.SOCK_STREAM, on_done=on_done) nbio.run() self.assertEqual(len(result_bucket), 1) result = result_bucket[0] self.logger.debug('TestGetaddrinfoWWWGoogleDotComPort80: result=%r', result) self.assertIsInstance(result, list) self.assertEqual(len(result[0]), 5) for family, socktype, proto, canonname, sockaddr in result: self.assertIn(family, [socket.AF_INET, socket.AF_INET6]) self.assertEqual(socktype, socket.SOCK_STREAM) if pika.compat.ON_WINDOWS: self.assertEqual(proto, socket.IPPROTO_IP) else: self.assertEqual(proto, socket.IPPROTO_TCP) self.assertEqual(canonname, '') # AI_CANONNAME not requested ipaddr, port = sockaddr[:2] self.assertIsInstance(ipaddr, str) self.assertGreater(len(ipaddr), 0) socket.inet_pton(family, ipaddr) self.assertEqual(port, 80) self.assertEqual(ref.cancel(), False) class TestGetaddrinfoNonExistentHost(AsyncServicesTestBase, IOServicesTestStubs): def start(self): # provided by IOServicesTestStubs mixin nbio = self.create_nbio() result_bucket = [] def on_done(result): result_bucket.append(result) nbio.stop() ref = nbio.getaddrinfo('www.google.comSSS', 80, socktype=socket.SOCK_STREAM, proto=socket.IPPROTO_TCP, on_done=on_done) nbio.run() self.assertEqual(len(result_bucket), 1) result = result_bucket[0] self.assertIsInstance(result, socket.gaierror) self.assertEqual(ref.cancel(), False) class TestGetaddrinfoCancelBeforeLoopRun(AsyncServicesTestBase, IOServicesTestStubs): def start(self): # NOTE: this test elicits an occasional asyncio # `RuntimeError: Event loop is closed` message on the terminal, # 
        # presumably when the `getaddrinfo()` executing in the thread pool
        # finally completes and attempts to set the value on the future, but
        # our cleanup logic will have closed the loop before then.

        # Provided by IOServicesTestStubs mixin
        nbio = self.create_nbio()

        on_done_bucket = []

        def on_done(result):
            on_done_bucket.append(result)

        ref = nbio.getaddrinfo('www.google.com', 80,
                               socktype=socket.SOCK_STREAM,
                               on_done=on_done)
        self.assertEqual(ref.cancel(), True)

        nbio.add_callback_threadsafe(nbio.stop)
        nbio.run()

        self.assertFalse(on_done_bucket)


class TestGetaddrinfoCancelAfterLoopRun(AsyncServicesTestBase,
                                        IOServicesTestStubs):

    def start(self):
        # NOTE: this test elicits an occasional asyncio
        # `RuntimeError: Event loop is closed` message on the terminal,
        # presumably when the `getaddrinfo()` executing in the thread pool
        # finally completes and attempts to set the value on the future, but
        # our cleanup logic will have closed the loop before then.

        # Provided by IOServicesTestStubs mixin
        nbio = self.create_nbio()

        on_done_bucket = []

        def on_done(result):
            self.logger.error(
                'Unexpected completion of cancelled getaddrinfo()')
            on_done_bucket.append(result)

        # NOTE: there is some probability that getaddrinfo() will have
        # completed and added its completion reporting callback quickly, so we
        # add our cancellation callback before requesting getaddrinfo() in
        # order to avoid the race condition whereby it invokes our completion
        # callback before we had a chance to cancel it.
        cancel_result_bucket = []

        def cancel_and_stop_from_loop():
            self.logger.debug('Cancelling getaddrinfo() from loop callback.')
            cancel_result_bucket.append(getaddr_ref.cancel())
            nbio.stop()

        nbio.add_callback_threadsafe(cancel_and_stop_from_loop)

        getaddr_ref = nbio.getaddrinfo('www.google.com', 80,
                                       socktype=socket.SOCK_STREAM,
                                       on_done=on_done)
        nbio.run()

        self.assertEqual(cancel_result_bucket, [True])
        self.assertFalse(on_done_bucket)


class SocketConnectorTestBase(AsyncServicesTestBase):

    def set_up_sockets_for_connect(self, family):
        """
        :param IOServicesTestStubs | SocketConnectorTestBase self:
        :return: two-tuple (lsock, csock), where lsock is the listening socket
            and csock is the socket that can be connected to the listening
            socket.
        :rtype: tuple
        """
        # Create listener
        lsock = socket.socket(family, socket.SOCK_STREAM)
        self.addCleanup(lsock.close)
        ipaddr = (pika.compat._LOCALHOST_V6
                  if family == socket.AF_INET6 else pika.compat._LOCALHOST)
        lsock.bind((ipaddr, 0))
        lsock.listen(1)
        # NOTE: don't even need to accept for this test, connection completes
        # from backlog

        # Create connection initiator
        csock = socket.socket(family, socket.SOCK_STREAM)
        self.addCleanup(csock.close)
        csock.setblocking(False)

        return lsock, csock

    def check_successful_connect(self, family):
        """
        :param IOServicesTestStubs | SocketConnectorTestBase self:
        """
        # provided by IOServicesTestStubs mixin
        nbio = self.create_nbio()  # pylint: disable=E1101

        lsock, csock = self.set_up_sockets_for_connect(family)

        # Initiate connection
        on_done_result_bucket = []

        def on_done(result):
            on_done_result_bucket.append(result)
            nbio.stop()

        connect_ref = nbio.connect_socket(csock, lsock.getsockname(), on_done)
        nbio.run()

        self.assertEqual(on_done_result_bucket, [None])
        self.assertEqual(csock.getpeername(), lsock.getsockname())
        self.assertEqual(connect_ref.cancel(), False)

    def check_failed_connect(self, family):
        """
        :param IOServicesTestStubs | SocketConnectorTestBase self:
        """
        # provided by IOServicesTestStubs mixin
        nbio = self.create_nbio()  # pylint: disable=E1101

        lsock, csock = self.set_up_sockets_for_connect(family)

        laddr = lsock.getsockname()

        # Close the listener to force failure
        lsock.close()

        # Initiate connection
        on_done_result_bucket = []

        def on_done(result):
            on_done_result_bucket.append(result)
            nbio.stop()

        connect_ref = nbio.connect_socket(csock, laddr, on_done)
        nbio.run()

        self.assertEqual(len(on_done_result_bucket), 1)
        self.assertIsInstance(on_done_result_bucket[0], Exception)
        with self.assertRaises(Exception):
            csock.getpeername()  # raises when not connected
        self.assertEqual(connect_ref.cancel(), False)

    def check_cancel_connect(self, family):
        """
        :param IOServicesTestStubs | SocketConnectorTestBase self:
        """
        # provided by IOServicesTestStubs mixin
        nbio = self.create_nbio()  # pylint: disable=E1101

        lsock, csock = self.set_up_sockets_for_connect(family)

        # Initiate connection
        on_done_result_bucket = []

        def on_done(result):
            on_done_result_bucket.append(result)
            self.fail('Got done callbacks on cancelled connection request.')

        connect_ref = nbio.connect_socket(csock, lsock.getsockname(), on_done)
        self.assertEqual(connect_ref.cancel(), True)

        # Now let the loop run for an iteration
        nbio.add_callback_threadsafe(nbio.stop)
        nbio.run()

        self.assertFalse(on_done_result_bucket)
        with self.assertRaises(Exception):
            csock.getpeername()
        self.assertEqual(connect_ref.cancel(), False)


class TestConnectSocketIPv4Success(SocketConnectorTestBase,
                                   IOServicesTestStubs):

    def start(self):
        self.check_successful_connect(family=socket.AF_INET)


class TestConnectSocketIPv4Fail(SocketConnectorTestBase, IOServicesTestStubs):

    def start(self):
        self.check_failed_connect(socket.AF_INET)


class TestConnectSocketToDisconnectedPeer(SocketConnectorTestBase,
                                          IOServicesTestStubs):

    def start(self):
        """Differs from `TestConnectSocketIPv4Fail` in that this test attempts
        to connect to the address of a socket whose peer had disconnected
        from it.
        `TestConnectSocketIPv4Fail` attempts to connect to a closed socket
        that was previously listening. We want to see what happens in this
        case because we're seeing strange behavior in TestConnectSocketIPv4Fail
        when testing with Twisted on Linux, such that the reactor calls the
        descriptor's `connectionLost()` method, but not its `write()` method.
        """
        nbio = self.create_nbio()

        csock = self.create_nonblocking_tcp_socket()

        badaddr = self.get_dead_socket_address()

        # Initiate connection
        on_done_result_bucket = []

        def on_done(result):
            on_done_result_bucket.append(result)
            nbio.stop()

        connect_ref = nbio.connect_socket(csock, badaddr, on_done)
        nbio.run()

        self.assertEqual(len(on_done_result_bucket), 1)
        self.assertIsInstance(on_done_result_bucket[0], Exception)
        with self.assertRaises(Exception):
            csock.getpeername()  # raises when not connected
        self.assertEqual(connect_ref.cancel(), False)


class TestConnectSocketIPv4Cancel(SocketConnectorTestBase,
                                  IOServicesTestStubs):

    def start(self):
        self.check_cancel_connect(socket.AF_INET)


class TestConnectSocketIPv6Success(SocketConnectorTestBase,
                                   IOServicesTestStubs):

    def start(self):
        self.check_successful_connect(family=socket.AF_INET6)


class TestConnectSocketIPv6Fail(SocketConnectorTestBase, IOServicesTestStubs):

    def start(self):
        self.check_failed_connect(socket.AF_INET6)


class StreamingTestBase(AsyncServicesTestBase):
    pass


class TestStreamConnectorTxRx(StreamingTestBase, IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        original_data = tuple(
            os.urandom(1000) for _ in pika.compat.xrange(1000))
        original_data_length = sum(len(s) for s in original_data)

        my_protocol_bucket = []

        logger = self.logger

        class TestStreamConnectorTxRxStreamProtocol(
                nbio_interface.AbstractStreamProtocol):

            def __init__(self):
                self.transport = None  # type: nbio_interface.AbstractStreamTransport
                self.connection_lost_error_bucket = []
                self.eof_rx = False
                self.all_rx_data = b''

                my_protocol_bucket.append(self)

            def connection_made(self, transport):
                logger.info('connection_made(%r)', transport)
                self.transport = transport

                for chunk in original_data:
                    self.transport.write(chunk)

            def connection_lost(self, error):
                logger.info('connection_lost(%r)', error)
                self.connection_lost_error_bucket.append(error)

                nbio.stop()

            def eof_received(self):
                logger.info('eof_received()')
                self.eof_rx = True
                # False tells transport to close the sock and call
                # connection_lost(None)
                return False

            def data_received(self, data):
                # logger.info('data_received: len=%s', len(data))
                self.all_rx_data += data
                if (self.transport.get_write_buffer_size() == 0 and
                        len(self.all_rx_data) >= original_data_length):
                    self.transport.abort()

        streaming_connection_result_bucket = []
        socket_connect_done_result_bucket = []

        with ForwardServer(remote_addr=None) as echo:
            sock = self.create_nonblocking_tcp_socket()

            logger.info('created sock=%s', sock)

            def on_streaming_creation_done(result):
                logger.info('on_streaming_creation_done(%r)', result)
                streaming_connection_result_bucket.append(result)

            def on_socket_connect_done(result):
                logger.info('on_socket_connect_done(%r)', result)
                socket_connect_done_result_bucket.append(result)

                nbio.create_streaming_connection(
                    TestStreamConnectorTxRxStreamProtocol,
                    sock,
                    on_streaming_creation_done)

            nbio.connect_socket(sock, echo.server_address,
                                on_socket_connect_done)

            nbio.run()

        self.assertEqual(socket_connect_done_result_bucket, [None])

        my_proto = my_protocol_bucket[0]  # type: TestStreamConnectorTxRxStreamProtocol

        transport, protocol = streaming_connection_result_bucket[0]
        self.assertIsInstance(transport,
                              nbio_interface.AbstractStreamTransport)
        self.assertIs(protocol, my_proto)
        self.assertIs(transport, my_proto.transport)

        self.assertEqual(my_proto.connection_lost_error_bucket, [None])

        self.assertFalse(my_proto.eof_rx)

        self.assertEqual(len(my_proto.all_rx_data), original_data_length)
        self.assertEqual(my_proto.all_rx_data, b''.join(original_data))


class TestStreamConnectorRaisesValueErrorFromUnconnectedSocket(
        StreamingTestBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        with self.assertRaises(ValueError) as exc_ctx:
            nbio.create_streaming_connection(
                lambda: None,  # dummy protocol factory
                self.create_nonblocking_tcp_socket(),
                lambda result: None)  # dummy on_done callback

        self.assertIn('getpeername() failed', exc_ctx.exception.args[0])


class TestStreamConnectorBrokenPipe(StreamingTestBase, IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        my_protocol_bucket = []

        logger = self.logger

        streaming_connection_result_bucket = []
        socket_connect_done_result_bucket = []

        echo = ForwardServer(remote_addr=None)
        echo.start()
        self.addCleanup(lambda: echo.stop() if echo.running else None)

        class TestStreamConnectorTxRxStreamProtocol(
                nbio_interface.AbstractStreamProtocol):

            def __init__(self):
                self.transport = None  # type: nbio_interface.AbstractStreamTransport
                self.connection_lost_error_bucket = []
                self.eof_rx = False
                self.all_rx_data = b''

                my_protocol_bucket.append(self)

                self._timer_ref = None

            def connection_made(self, transport):
                logger.info('connection_made(%r)', transport)
                self.transport = transport

                # Simulate Broken Pipe
                echo.stop()

                self._on_write_timer()

            def connection_lost(self, error):
                logger.info('connection_lost(%r)', error)
                self.connection_lost_error_bucket.append(error)

                self._timer_ref.cancel()
                nbio.stop()

            def eof_received(self):
                logger.info('eof_received()')
                self.eof_rx = True

                # Force write
                self.transport.write(b'eof_received')

                # False tells transport to close the sock and call
                # connection_lost(None)
                return True  # Don't close sock, let writer logic detect error

            def data_received(self, data):
                logger.info('data_received: len=%s', len(data))
                self.all_rx_data += data

            def _on_write_timer(self):
                self.transport.write(b'_on_write_timer')
                self._timer_ref = nbio.call_later(0.01, self._on_write_timer)

        sock = self.create_nonblocking_tcp_socket()

        logger.info('created sock=%s', sock)

        def on_streaming_creation_done(result):
            logger.info('on_streaming_creation_done(%r)',
                        result)
            streaming_connection_result_bucket.append(result)

        def on_socket_connect_done(result):
            logger.info('on_socket_connect_done(%r)', result)
            socket_connect_done_result_bucket.append(result)

            nbio.create_streaming_connection(
                TestStreamConnectorTxRxStreamProtocol,
                sock,
                on_streaming_creation_done)

        nbio.connect_socket(sock, echo.server_address,
                            on_socket_connect_done)

        nbio.run()

        self.assertEqual(socket_connect_done_result_bucket, [None])

        my_proto = my_protocol_bucket[0]  # type: TestStreamConnectorTxRxStreamProtocol

        error = my_proto.connection_lost_error_bucket[0]
        self.assertIsInstance(error, pika.compat.SOCKET_ERROR)
        # NOTE: we occasionally see EPROTOTYPE on OSX
        self.assertIn(error.errno,
                      [errno.EPIPE, errno.ECONNRESET, errno.EPROTOTYPE])


class TestStreamConnectorEOFReceived(StreamingTestBase, IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        original_data = [b'A' * 1000]

        my_protocol_bucket = []

        logger = self.logger

        streaming_connection_result_bucket = []

        class TestStreamConnectorTxRxStreamProtocol(
                nbio_interface.AbstractStreamProtocol):

            def __init__(self):
                self.transport = None  # type: nbio_interface.AbstractStreamTransport
                self.connection_lost_error_bucket = []
                self.eof_rx = False
                self.all_rx_data = b''

                my_protocol_bucket.append(self)

            def connection_made(self, transport):
                logger.info('connection_made(%r)', transport)
                self.transport = transport

                for chunk in original_data:
                    self.transport.write(chunk)

            def connection_lost(self, error):
                logger.info('connection_lost(%r)', error)
                self.connection_lost_error_bucket.append(error)

                nbio.stop()

            def eof_received(self):
                logger.info('eof_received()')
                self.eof_rx = True
                # False tells transport to close the sock and call
                # connection_lost(None)
                return False

            def data_received(self, data):
                # logger.info('data_received: len=%s', len(data))
                self.all_rx_data += data

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        logger.info('created local_sock=%s, remote_sock=%s', local_sock,
                    remote_sock)

        def on_streaming_creation_done(result):
            logger.info('on_streaming_creation_done(%r)', result)
            streaming_connection_result_bucket.append(result)

            # Simulate EOF
            remote_sock.shutdown(socket.SHUT_WR)

        nbio.create_streaming_connection(
            TestStreamConnectorTxRxStreamProtocol,
            local_sock,
            on_streaming_creation_done)

        nbio.run()

        my_proto = my_protocol_bucket[0]  # type: TestStreamConnectorTxRxStreamProtocol

        self.assertTrue(my_proto.eof_rx)
        self.assertEqual(my_proto.connection_lost_error_bucket, [None])

        # Verify that stream connector closed "local socket"
        # First, purge remote sock in case some or all sent data was delivered
        remote_sock.recv(sum(len(chunk) for chunk in original_data))
        self.assertEqual(remote_sock.recv(1), b'')


class TestStreamConnectorProtocolInterfaceFailsBase(StreamingTestBase):
    """Base test class for streaming protocol method fails"""

    def linkup_streaming_connection(self,
                                    nbio,
                                    sock,
                                    on_create_done,
                                    proto_constructor_exc=None,
                                    proto_connection_made_exc=None,
                                    proto_eof_received_exc=None,
                                    proto_data_received_exc=None):
        """Links up transport and protocol. On protocol.connection_lost(),
        requests stop of ioloop.

        :param nbio_interface.AbstractIOServices nbio:
        :param socket.socket sock: connected socket
        :param on_create_done: `create_streaming_connection()` completion
            function.
        :param proto_constructor_exc: None or exception to raise in
            constructor
        :param proto_connection_made_exc: None or exception to raise in
            `connection_made()`
        :param proto_eof_received_exc: None or exception to raise in
            `eof_received()`
        :param proto_data_received_exc: None or exception to raise in
            `data_received()`
        :return: return value of `create_streaming_connection()`
        :rtype: nbio_interface.AbstractIOReference
        """
        logger = self.logger

        class TestStreamConnectorProtocol(
                nbio_interface.AbstractStreamProtocol):

            def __init__(self):
                self.transport = None  # type: nbio_interface.AbstractStreamTransport
                self.connection_lost_error_bucket = []
                self.eof_rx = False
                self.all_rx_data = b''

                if proto_constructor_exc is not None:
                    logger.info('Raising proto_constructor_exc=%r',
                                proto_constructor_exc)
                    raise proto_constructor_exc  # pylint: disable=E0702

            def connection_made(self, transport):
                logger.info('connection_made(%r)', transport)
                self.transport = transport

                if proto_connection_made_exc is not None:
                    logger.info('Raising proto_connection_made_exc=%r',
                                proto_connection_made_exc)
                    raise proto_connection_made_exc  # pylint: disable=E0702

            def connection_lost(self, error):
                logger.info('connection_lost(%r), stopping ioloop', error)
                self.connection_lost_error_bucket.append(error)

                nbio.stop()

            def eof_received(self):
                logger.info('eof_received()')
                self.eof_rx = True

                if proto_eof_received_exc is not None:
                    logger.info('Raising proto_eof_received_exc=%r',
                                proto_eof_received_exc)
                    raise proto_eof_received_exc  # pylint: disable=E0702

                # False tells transport to close the sock and call
                # connection_lost(None)
                return False

            def data_received(self, data):
                logger.info('data_received: len=%s', len(data))
                self.all_rx_data += data

                if proto_data_received_exc is not None:
                    logger.info('Raising proto_data_received_exc=%r',
                                proto_data_received_exc)
                    raise proto_data_received_exc  # pylint: disable=E0702

        return nbio.create_streaming_connection(TestStreamConnectorProtocol,
                                                sock,
                                                on_create_done)


class TestStreamConnectorProtocolConstructorFails(
        TestStreamConnectorProtocolInterfaceFailsBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        class ProtocolConstructorError(Exception):
            pass

        result_bucket = []

        def on_completed(result):
            result_bucket.append(result)
            nbio.stop()

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        self.linkup_streaming_connection(
            nbio,
            local_sock,
            on_completed,
            proto_constructor_exc=ProtocolConstructorError)

        nbio.run()

        self.assertIsInstance(result_bucket[0], ProtocolConstructorError)

        # Verify that stream connector closed "local socket"
        self.assertEqual(remote_sock.recv(1), b'')


class TestStreamConnectorConnectionMadeFails(
        TestStreamConnectorProtocolInterfaceFailsBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        class ConnectionMadeError(Exception):
            pass

        result_bucket = []

        def on_completed(result):
            result_bucket.append(result)
            nbio.stop()

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        self.linkup_streaming_connection(
            nbio,
            local_sock,
            on_completed,
            proto_connection_made_exc=ConnectionMadeError)

        nbio.run()

        self.assertIsInstance(result_bucket[0], ConnectionMadeError)

        # Verify that stream connector closed "local socket"
        self.assertEqual(remote_sock.recv(1), b'')


class TestStreamConnectorEOFReceivedFails(
        TestStreamConnectorProtocolInterfaceFailsBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        class EOFReceivedError(Exception):
            pass

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        linkup_result_bucket = []

        def on_linkup_completed(result):
            linkup_result_bucket.append(result)

            # Simulate EOF
            remote_sock.shutdown(socket.SHUT_WR)

        self.linkup_streaming_connection(
            nbio,
            local_sock,
            on_linkup_completed,
            proto_eof_received_exc=EOFReceivedError)

        nbio.run()

        _transport, proto = linkup_result_bucket[0]

        self.assertTrue(proto.eof_rx)
        self.assertIsInstance(proto.connection_lost_error_bucket[0],
                              EOFReceivedError)

        # Verify that stream connector closed "local socket"
        self.assertEqual(remote_sock.recv(1), b'')


class TestStreamConnectorDataReceivedFails(
        TestStreamConnectorProtocolInterfaceFailsBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        class DataReceivedError(Exception):
            pass

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        linkup_result_bucket = []

        def on_linkup_completed(result):
            linkup_result_bucket.append(result)

            # Simulate EOF
            remote_sock.shutdown(socket.SHUT_WR)

        self.linkup_streaming_connection(
            nbio,
            local_sock,
            on_linkup_completed,
            proto_data_received_exc=DataReceivedError)

        remote_sock.send(b'abc')

        nbio.run()

        _transport, proto = linkup_result_bucket[0]

        self.assertFalse(proto.eof_rx)

        self.assertIsInstance(proto.connection_lost_error_bucket[0],
                              DataReceivedError)

        # Verify that stream connector closed "local socket"
        self.assertEqual(remote_sock.recv(1), b'')


class TestStreamConnectorAbortTransport(
        TestStreamConnectorProtocolInterfaceFailsBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        linkup_result_bucket = []

        def on_linkup_completed(result):
            linkup_result_bucket.append(result)

            # Abort the transport
            result[0].abort()

        self.linkup_streaming_connection(nbio, local_sock, on_linkup_completed)

        nbio.run()

        _transport, proto = linkup_result_bucket[0]

        self.assertFalse(proto.eof_rx)
        self.assertIsNone(proto.connection_lost_error_bucket[0])

        # Verify that stream connector closed "local socket"
        self.assertEqual(remote_sock.recv(1), b'')


class TestStreamConnectorCancelLinkup(
        TestStreamConnectorProtocolInterfaceFailsBase,
        IOServicesTestStubs):

    def start(self):
        nbio = self.create_nbio()

        local_sock, remote_sock = self.create_nonblocking_socketpair()
        remote_sock.settimeout(10)

        linkup_result_bucket = []

        def on_linkup_completed(result):
            linkup_result_bucket.append(result)

        ref = self.linkup_streaming_connection(nbio,
                                               local_sock,
                                               on_linkup_completed)

        # NOTE: cancel() completes without callback
        ref.cancel()

        nbio.add_callback_threadsafe(nbio.stop)

        nbio.run()

        self.assertEqual(linkup_result_bucket, [])

        # Verify that stream connector closed "local socket"
        self.assertEqual(remote_sock.recv(1), b'')

pika-1.2.0/tests/acceptance/twisted_adapter_tests.py

# Disable warning Missing docstring
# pylint: disable=C0111

# Disable warning Invalid variable name
# pylint: disable=C0103

# Suppress pylint warning about access to protected member
# pylint: disable=W0212

# Suppress no-member: Twisted's reactor methods are not easily discoverable
# pylint: disable=E1101

"""twisted adapter test"""

import functools
import unittest

import mock
from nose.twistedtools import reactor, deferred
from twisted.internet import defer, error as twisted_error
from twisted.python.failure import Failure

from pika.adapters.twisted_connection import (
    ClosableDeferredQueue, ReceivedMessage, TwistedChannel,
    _TwistedConnectionAdapter, TwistedProtocolConnection, _TimerHandle)
from pika import spec
from pika.exceptions import (
    AMQPConnectionError, ConsumerCancelled, DuplicateGetOkCallback,
    NackError, UnroutableError, ChannelClosedByBroker)
from pika.frame import Method


class TestCase(unittest.TestCase):
    """Imported from twisted.trial.unittest.TestCase

    We only want the assertFailure implementation, using the class directly
    hides some assertion errors.
    """

    def assertFailure(self, d, *expectedFailures):
        """
        Fail if C{deferred} does not errback with one of C{expectedFailures}.
        Returns the original Deferred with callbacks added. You will need
        to return this Deferred from your test case.
""" def _cb(ignore): raise self.failureException( "did not catch an error, instead got %r" % (ignore,)) def _eb(failure): if failure.check(*expectedFailures): return failure.value else: output = ('\nExpected: %r\nGot:\n%s' % (expectedFailures, str(failure))) raise self.failureException(output) return d.addCallbacks(_cb, _eb) class ClosableDeferredQueueTestCase(TestCase): @deferred(timeout=5.0) def test_put_closed(self): # Verify that the .put() method errbacks when the queue is closed. q = ClosableDeferredQueue() q.closed = RuntimeError("testing") d = self.assertFailure(q.put(None), RuntimeError) d.addCallback(lambda e: self.assertEqual(e.args[0], "testing")) return d @deferred(timeout=5.0) def test_get_closed(self): # Verify that the .get() method errbacks when the queue is closed. q = ClosableDeferredQueue() q.closed = RuntimeError("testing") d = self.assertFailure(q.get(), RuntimeError) d.addCallback(lambda e: self.assertEqual(e.args[0], "testing")) return d def test_close(self): # Verify that the queue can be closed. q = ClosableDeferredQueue() q.close("testing") self.assertEqual(q.closed, "testing") self.assertEqual(q.waiting, []) self.assertEqual(q.pending, []) def test_close_waiting(self): # Verify that the deferred waiting for new data are errbacked when the # queue is closed. q = ClosableDeferredQueue() d = q.get() q.close(RuntimeError("testing")) self.assertTrue(q.closed) self.assertEqual(q.waiting, []) self.assertEqual(q.pending, []) return self.assertFailure(d, RuntimeError) def test_close_twice(self): # If a queue it called twice, it must not crash. q = ClosableDeferredQueue() q.close("testing") self.assertEqual(q.closed, "testing") q.close("testing") self.assertEqual(q.closed, "testing") class TwistedChannelTestCase(TestCase): def setUp(self): self.pika_channel = mock.Mock() self.channel = TwistedChannel(self.pika_channel) # This is only needed on Python2 for functools.wraps to work. 
wrapped = ( "basic_cancel", "basic_get", "basic_qos", "basic_recover", "exchange_bind", "exchange_unbind", "exchange_declare", "exchange_delete", "confirm_delivery", "flow", "queue_bind", "queue_declare", "queue_delete", "queue_purge", "queue_unbind", "tx_commit", "tx_rollback", "tx_select", ) for meth_name in wrapped: getattr(self.pika_channel, meth_name).__name__ = meth_name def test_repr(self): self.pika_channel.__repr__ = lambda _s: "" self.assertEqual( repr(self.channel), ">", ) @deferred(timeout=5.0) def test_on_close(self): # Verify that the channel can be closed and that pending calls and # consumers are errbacked. self.pika_channel.add_on_close_callback.assert_called_with( self.channel._on_channel_closed) calls = self.channel._calls = [defer.Deferred()] consumers = self.channel._consumers = { "test-delivery-tag": mock.Mock() } error = RuntimeError("testing") self.channel._on_channel_closed(None, error) consumers["test-delivery-tag"].close.assert_called_once_with(error) self.assertEqual(len(self.channel._calls), 0) self.assertEqual(len(self.channel._consumers), 0) return self.assertFailure(calls[0], RuntimeError) @deferred(timeout=5.0) def test_basic_consume(self): # Verify that the basic_consume method works properly. 
d = self.channel.basic_consume(queue="testqueue") self.pika_channel.basic_consume.assert_called_once() kwargs = self.pika_channel.basic_consume.call_args_list[0][1] self.assertEqual(kwargs["queue"], "testqueue") on_message = kwargs["on_message_callback"] def check_cb(result): queue, _consumer_tag = result # Make sure the queue works queue_get_d = queue.get() queue_get_d.addCallback( self.assertEqual, (self.channel, "testmethod", "testprops", "testbody") ) # Simulate reception of a message on_message("testchan", "testmethod", "testprops", "testbody") return queue_get_d d.addCallback(check_cb) # Simulate a ConsumeOk from the server frame = Method(1, spec.Basic.ConsumeOk(consumer_tag="testconsumertag")) kwargs["callback"](frame) return d @deferred(timeout=5.0) def test_basic_consume_while_closed(self): # Verify that a Failure is returned when the channel's basic_consume # is called and the channel is closed. error = RuntimeError("testing") self.channel._on_channel_closed(None, error) d = self.channel.basic_consume(queue="testqueue") return self.assertFailure(d, RuntimeError) @deferred(timeout=5.0) def test_basic_consume_failure(self): # Verify that a Failure is returned when the channel's basic_consume # method fails. self.pika_channel.basic_consume.side_effect = RuntimeError() d = self.channel.basic_consume(queue="testqueue") return self.assertFailure(d, RuntimeError) def test_basic_consume_errback_on_close(self): # Verify Deferreds that haven't had their callback invoked errback when # the channel closes. d = self.channel.basic_consume(queue="testqueue") self.channel._on_channel_closed( self, ChannelClosedByBroker(404, "NOT FOUND")) return self.assertFailure(d, ChannelClosedByBroker) @deferred(timeout=5.0) def test_queue_delete(self): # Verify that the consumers are cleared when a queue is deleted. 
queue_obj = mock.Mock() self.channel._consumers = { "test-delivery-tag": queue_obj, } self.channel._queue_name_to_consumer_tags["testqueue"] = set([ "test-delivery-tag" ]) self.channel._calls = set() self.pika_channel.queue_delete.__name__ = "queue_delete" d = self.channel.queue_delete(queue="testqueue") self.pika_channel.queue_delete.assert_called_once() call_kw = self.pika_channel.queue_delete.call_args_list[0][1] self.assertEqual(call_kw["queue"], "testqueue") def check(_): self.assertEqual(len(self.channel._consumers), 0) queue_obj.close.assert_called_once() close_call_args = queue_obj.close.call_args_list[0][0] self.assertEqual(len(close_call_args), 1) self.assertTrue(isinstance(close_call_args[0], ConsumerCancelled)) d.addCallback(check) # Simulate a server response self.assertEqual(len(self.channel._calls), 1) list(self.channel._calls)[0].callback(None) return d @deferred(timeout=5.0) def test_wrapped_method(self): # Verify that the wrapped method is called and the result is properly # transmitted via the Deferred. self.pika_channel.queue_declare.__name__ = "queue_declare" d = self.channel.queue_declare(queue="testqueue") self.pika_channel.queue_declare.assert_called_once() call_kw = self.pika_channel.queue_declare.call_args_list[0][1] self.assertIn("queue", call_kw) self.assertEqual(call_kw["queue"], "testqueue") self.assertIn("callback", call_kw) self.assertTrue(callable(call_kw["callback"])) call_kw["callback"]("testresult") d.addCallback(self.assertEqual, "testresult") return d @deferred(timeout=5.0) def test_wrapped_method_while_closed(self): # Verify that a Failure is returned when one of the channel's wrapped # methods is called and the channel is closed. 
error = RuntimeError("testing") self.channel._on_channel_closed(None, error) self.pika_channel.queue_declare.__name__ = "queue_declare" d = self.channel.queue_declare(queue="testqueue") return self.assertFailure(d, RuntimeError) @deferred(timeout=5.0) def test_wrapped_method_multiple_args(self): # Verify that multiple arguments to the callback are properly converted # to a tuple for the Deferred's result. self.pika_channel.queue_declare.__name__ = "queue_declare" d = self.channel.queue_declare(queue="testqueue") call_kw = self.pika_channel.queue_declare.call_args_list[0][1] call_kw["callback"]("testresult-1", "testresult-2") d.addCallback(self.assertEqual, ("testresult-1", "testresult-2")) return d @deferred(timeout=5.0) def test_wrapped_method_failure(self): # Verify that exceptions are properly handled in wrapped methods. error = RuntimeError("testing") self.pika_channel.queue_declare.__name__ = "queue_declare" self.pika_channel.queue_declare.side_effect = error d = self.channel.queue_declare(queue="testqueue") return self.assertFailure(d, RuntimeError) def test_method_not_wrapped(self): # Test that only methods that can be wrapped are wrapped. result = self.channel.basic_ack() self.assertFalse(isinstance(result, defer.Deferred)) self.pika_channel.basic_ack.assert_called_once() def test_passthrough(self): # Check the simple attribute passthroughs attributes = ( "channel_number", "connection", "is_closed", "is_closing", "is_open", "flow_active", "consumer_tags", ) for name in attributes: value = "testvalue-{}".format(name) setattr(self.pika_channel, name, value) self.assertEqual(getattr(self.channel, name), value) def test_callback_deferred(self): # Check that the deferred will be called back. d = defer.Deferred() replies = [spec.Basic.CancelOk] self.channel.callback_deferred(d, replies) self.pika_channel.add_callback.assert_called_with( d.callback, replies) def test_add_on_return_callback(self): # Check that the deferred contains the right value. 
cb = mock.Mock() self.channel.add_on_return_callback(cb) self.pika_channel.add_on_return_callback.assert_called_once() self.pika_channel.add_on_return_callback.call_args[0][0]( "testchannel", "testmethod", "testprops", "testbody") cb.assert_called_once() self.assertEqual(len(cb.call_args[0]), 1) self.assertEqual( cb.call_args[0][0], (self.channel, "testmethod", "testprops", "testbody") ) @deferred(timeout=5.0) def test_basic_cancel(self): # Verify that basic_cancels calls clean up the consumer queue. queue_obj = mock.Mock() queue_obj_2 = mock.Mock() self.channel._consumers["test-consumer"] = queue_obj self.channel._consumers["test-consumer-2"] = queue_obj_2 self.channel._queue_name_to_consumer_tags.update({ "testqueue": set(["test-consumer"]), "testqueue-2": set(["test-consumer-2"]), }) d = self.channel.basic_cancel("test-consumer") def check(result): self.assertTrue(isinstance(result, Method)) queue_obj.close.assert_called_once() self.assertTrue(isinstance( queue_obj.close.call_args[0][0], ConsumerCancelled)) self.assertEqual(len(self.channel._consumers), 1) queue_obj_2.close.assert_not_called() self.assertEqual( self.channel._queue_name_to_consumer_tags["testqueue"], set()) d.addCallback(check) self.pika_channel.basic_cancel.assert_called_once() self.pika_channel.basic_cancel.call_args[1]["callback"]( Method(1, spec.Basic.CancelOk(consumer_tag="test-consumer")) ) return d @deferred(timeout=5.0) def test_basic_cancel_no_consumer(self): # Verify that basic_cancel does not crash if there is no consumer. d = self.channel.basic_cancel("test-consumer") def check(result): self.assertTrue(isinstance(result, Method)) d.addCallback(check) self.pika_channel.basic_cancel.assert_called_once() self.pika_channel.basic_cancel.call_args[1]["callback"]( Method(1, spec.Basic.CancelOk(consumer_tag="test-consumer")) ) return d def test_consumer_cancelled_by_broker(self): # Verify that server-originating cancels are handled. 
        self.pika_channel.add_on_cancel_callback.assert_called_with(
            self.channel._on_consumer_cancelled_by_broker)
        queue_obj = mock.Mock()
        self.channel._consumers["test-consumer"] = queue_obj
        self.channel._queue_name_to_consumer_tags["testqueue"] = set([
            "test-consumer"])
        self.channel._on_consumer_cancelled_by_broker(
            Method(1, spec.Basic.Cancel(consumer_tag="test-consumer"))
        )
        queue_obj.close.assert_called_once()
        self.assertTrue(isinstance(
            queue_obj.close.call_args[0][0], ConsumerCancelled))
        self.assertEqual(self.channel._consumers, {})
        self.assertEqual(
            self.channel._queue_name_to_consumer_tags["testqueue"], set())

    @deferred(timeout=5.0)
    def test_basic_get(self):
        # Verify that the basic_get method works properly.
        d = self.channel.basic_get(queue="testqueue")
        self.pika_channel.basic_get.assert_called_once()
        kwargs = self.pika_channel.basic_get.call_args_list[0][1]
        self.assertEqual(kwargs["queue"], "testqueue")

        def check_cb(result):
            self.assertEqual(
                result,
                (self.channel, "testmethod", "testprops", "testbody")
            )

        d.addCallback(check_cb)
        # Simulate reception of a message
        kwargs["callback"](
            "testchannel", "testmethod", "testprops", "testbody")
        return d

    def test_basic_get_twice(self):
        # Verify that the basic_get method raises the proper exception when
        # called twice.
        self.channel.basic_get(queue="testqueue")
        self.assertRaises(
            DuplicateGetOkCallback, self.channel.basic_get, "testqueue")

    @deferred(timeout=5.0)
    def test_basic_get_empty(self):
        # Verify that the basic_get method works when the queue is empty.
        self.pika_channel.add_callback.assert_called_with(
            self.channel._on_getempty, [spec.Basic.GetEmpty], False)
        d = self.channel.basic_get(queue="testqueue")
        self.channel._on_getempty("testmethod")
        d.addCallback(self.assertIsNone)
        return d

    def test_basic_nack(self):
        # Verify that basic_nack is transmitted properly.
        self.channel.basic_nack("testdeliverytag")
        self.pika_channel.basic_nack.assert_called_once_with(
            delivery_tag="testdeliverytag", multiple=False, requeue=True)

    @deferred(timeout=5.0)
    def test_basic_publish(self):
        # Verify that basic_publish wraps properly.
        args = [object()]
        kwargs = {"routing_key": object(), "body": object()}
        d = self.channel.basic_publish(*args, **kwargs)
        kwargs.update(dict(
            # Args are converted to kwargs
            exchange=args[0],
            # Defaults
            mandatory=False,
            properties=None,
        ))
        self.pika_channel.basic_publish.assert_called_once_with(**kwargs)
        return d

    @deferred(timeout=5.0)
    def test_basic_publish_closed(self):
        # Verify that a Failure is returned when the channel's basic_publish
        # is called and the channel is closed.
        self.channel._on_channel_closed(None, RuntimeError("testing"))
        d = self.channel.basic_publish(None, None, None)
        self.pika_channel.basic_publish.assert_not_called()
        d = self.assertFailure(d, RuntimeError)
        d.addCallback(lambda e: self.assertEqual(e.args[0], "testing"))
        return d

    def _test_wrapped_func(self, func, kwargs, do_callback=False):
        func.assert_called_once()
        call_kw = dict(
            (key, value) for key, value in func.call_args[1].items()
            if key != "callback"
        )
        self.assertEqual(kwargs, call_kw)
        if do_callback:
            func.call_args[1]["callback"](do_callback)

    @deferred(timeout=5.0)
    def test_basic_qos(self):
        # Verify that basic_qos wraps properly.
        kwargs = {"prefetch_size": 2}
        d = self.channel.basic_qos(**kwargs)
        # Defaults
        kwargs.update(dict(prefetch_count=0, global_qos=False))
        self._test_wrapped_func(self.pika_channel.basic_qos, kwargs, True)
        return d

    def test_basic_reject(self):
        # Verify that basic_reject is transmitted properly.
        self.channel.basic_reject("testdeliverytag")
        self.pika_channel.basic_reject.assert_called_once_with(
            delivery_tag="testdeliverytag", requeue=True)

    @deferred(timeout=5.0)
    def test_basic_recover(self):
        # Verify that basic_recover wraps properly.
        d = self.channel.basic_recover()
        self._test_wrapped_func(
            self.pika_channel.basic_recover, {"requeue": False}, True)
        return d

    def test_close(self):
        # Verify that close wraps properly.
        self.channel.close()
        self.pika_channel.close.assert_called_once_with(
            reply_code=0, reply_text="Normal shutdown")

    @deferred(timeout=5.0)
    def test_confirm_delivery(self):
        # Verify that confirm_delivery works
        d = self.channel.confirm_delivery()
        self.pika_channel.confirm_delivery.assert_called_once()
        self.assertEqual(
            self.pika_channel.confirm_delivery.call_args[1][
                "ack_nack_callback"],
            self.channel._on_delivery_confirmation)

        def send_message(_result):
            d = self.channel.basic_publish("testexch", "testrk", "testbody")
            frame = Method(1, spec.Basic.Ack(delivery_tag=1))
            self.channel._on_delivery_confirmation(frame)
            return d

        def check_response(frame_method):
            self.assertTrue(isinstance(frame_method, spec.Basic.Ack))

        d.addCallback(send_message)
        d.addCallback(check_response)
        # Simulate Confirm.SelectOk
        self.pika_channel.confirm_delivery.call_args[1]["callback"](None)
        return d

    @deferred(timeout=5.0)
    def test_confirm_delivery_nacked(self):
        # Verify that messages can be nacked when delivery
        # confirmation is on.
        d = self.channel.confirm_delivery()

        def send_message(_result):
            d = self.channel.basic_publish("testexch", "testrk", "testbody")
            frame = Method(1, spec.Basic.Nack(delivery_tag=1))
            self.channel._on_delivery_confirmation(frame)
            return d

        def check_response(error):
            self.assertIsInstance(error.value, NackError)
            self.assertEqual(len(error.value.messages), 0)

        d.addCallback(send_message)
        d.addCallbacks(self.fail, check_response)
        # Simulate Confirm.SelectOk
        self.pika_channel.confirm_delivery.call_args[1]["callback"](None)
        return d

    @deferred(timeout=5.0)
    def test_confirm_delivery_returned(self):
        # Verify handling of unroutable messages.
        d = self.channel.confirm_delivery()
        self.pika_channel.add_on_return_callback.assert_called_once()
        return_cb = self.pika_channel.add_on_return_callback.call_args[0][0]

        def send_message(_result):
            d = self.channel.basic_publish("testexch", "testrk", "testbody")
            # Send the Basic.Return frame
            method = spec.Basic.Return(
                exchange="testexch", routing_key="testrk")
            return_cb(self.channel, method, spec.BasicProperties(),
                      "testbody")
            # Send the Basic.Ack frame
            frame = Method(1, spec.Basic.Ack(delivery_tag=1))
            self.channel._on_delivery_confirmation(frame)
            return d

        def check_response(error):
            self.assertIsInstance(error.value, UnroutableError)
            self.assertEqual(len(error.value.messages), 1)
            msg = error.value.messages[0]
            self.assertEqual(msg.body, "testbody")

        d.addCallbacks(send_message, self.fail)
        d.addCallbacks(self.fail, check_response)
        # Simulate Confirm.SelectOk
        self.pika_channel.confirm_delivery.call_args[1]["callback"](None)
        return d

    @deferred(timeout=5.0)
    def test_confirm_delivery_returned_nacked(self):
        # Verify that messages can be nacked when delivery
        # confirmation is on.
        d = self.channel.confirm_delivery()
        self.pika_channel.add_on_return_callback.assert_called_once()
        return_cb = self.pika_channel.add_on_return_callback.call_args[0][0]

        def send_message(_result):
            d = self.channel.basic_publish("testexch", "testrk", "testbody")
            # Send the Basic.Return frame
            method = spec.Basic.Return(
                exchange="testexch", routing_key="testrk")
            return_cb(self.channel, method, spec.BasicProperties(),
                      "testbody")
            # Send the Basic.Nack frame
            frame = Method(1, spec.Basic.Nack(delivery_tag=1))
            self.channel._on_delivery_confirmation(frame)
            return d

        def check_response(error):
            self.assertTrue(isinstance(error.value, NackError))
            self.assertEqual(len(error.value.messages), 1)
            msg = error.value.messages[0]
            self.assertEqual(msg.body, "testbody")

        d.addCallback(send_message)
        d.addCallbacks(self.fail, check_response)
        self.pika_channel.confirm_delivery.call_args[1]["callback"](None)
        return d

    @deferred(timeout=5.0)
    def test_confirm_delivery_multiple(self):
        # Verify that multiple messages can be acked at once when
        # delivery confirmation is on.
        d = self.channel.confirm_delivery()

        def send_message(_result):
            d1 = self.channel.basic_publish("testexch", "testrk", "testbody1")
            d2 = self.channel.basic_publish("testexch", "testrk", "testbody2")
            frame = Method(1, spec.Basic.Ack(delivery_tag=2, multiple=True))
            self.channel._on_delivery_confirmation(frame)
            return defer.DeferredList([d1, d2])

        def check_response(results):
            self.assertTrue(len(results), 2)
            for is_ok, result in results:
                self.assertTrue(is_ok)
                self.assertTrue(isinstance(result, spec.Basic.Ack))

        d.addCallback(send_message)
        d.addCallback(check_response)
        self.pika_channel.confirm_delivery.call_args[1]["callback"](None)
        return d

    @deferred(timeout=5.0)
    def test_delivery_confirmation_errback_on_close(self):
        # Verify deliveries that haven't had their callback invoked errback
        # when the channel closes.
        d = self.channel.confirm_delivery()
        # Simulate Confirm.SelectOk
        self.pika_channel.confirm_delivery.call_args[1]["callback"](None)

        def send_message_and_close_channel(_result):
            d = self.channel.basic_publish("testexch", "testrk", "testbody")
            self.channel._on_channel_closed(None, RuntimeError("testing"))
            self.assertEqual(len(self.channel._deliveries), 0)
            return d

        d.addCallback(send_message_and_close_channel)
        return self.assertFailure(d, RuntimeError)


class TwistedProtocolConnectionTestCase(TestCase):

    def setUp(self):
        self.conn = TwistedProtocolConnection()
        self.conn._impl = mock.Mock()

    @deferred(timeout=5.0)
    def test_connection(self):
        # Verify that the connection opening is properly wrapped.
        transport = mock.Mock()
        self.conn.connectionMade = mock.Mock()
        self.conn.makeConnection(transport)
        self.conn._impl.connection_made.assert_called_once_with(
            transport)
        self.conn.connectionMade.assert_called_once()
        d = self.conn.ready
        self.conn._on_connection_ready(None)
        return d

    @deferred(timeout=5.0)
    def test_channel(self):
        # Verify that the request for a channel works properly.
        channel = mock.Mock()
        self.conn._impl.channel.side_effect = lambda n, cb: cb(channel)
        d = self.conn.channel()
        self.conn._impl.channel.assert_called_once()

        def check(result):
            self.assertTrue(isinstance(result, TwistedChannel))

        d.addCallback(check)
        return d

    @deferred(timeout=5.0)
    def test_channel_errback_if_connection_closed(self):
        # Verify calls to channel() that haven't had their callback invoked
        # errback when the connection closes.
        self.conn._on_connection_ready("dummy")
        d = self.conn.channel()
        self.conn._on_connection_closed("test conn", RuntimeError("testing"))
        self.assertEqual(len(self.conn._calls), 0)
        return self.assertFailure(d, RuntimeError)

    def test_dataReceived(self):
        # Verify that the data is transmitted to the callback method.
        self.conn.dataReceived("testdata")
        self.conn._impl.data_received.assert_called_once_with("testdata")

    @deferred(timeout=5.0)
    def test_connectionLost(self):
        # Verify that the "ready" Deferred errbacks on connectionLost, and
        # that the underlying implementation callback is called.
        ready_d = self.conn.ready
        error = RuntimeError("testreason")
        self.conn.connectionLost(error)
        self.conn._impl.connection_lost.assert_called_with(error)
        self.assertIsNone(self.conn.ready)
        return self.assertFailure(ready_d, RuntimeError)

    def test_connectionLost_twice(self):
        # Verify that calling connectionLost twice will not cause an
        # AlreadyCalled error on the Deferred.
        ready_d = self.conn.ready
        error = RuntimeError("testreason")
        self.conn.connectionLost(error)
        self.assertTrue(ready_d.called)
        ready_d.addErrback(lambda f: None)  # silence the error
        self.assertIsNone(self.conn.ready)
        # A second call must not raise AlreadyCalled
        self.conn.connectionLost(error)

    @deferred(timeout=5.0)
    def test_on_connection_ready(self):
        # Verify that the "ready" Deferred is resolved on
        # _on_connection_ready.
        d = self.conn.ready
        self.conn._on_connection_ready("testresult")
        self.assertTrue(d.called)
        d.addCallback(functools.partial(self.assertIsInstance,
                                        cls=TwistedProtocolConnection))
        return d

    def test_on_connection_ready_twice(self):
        # Verify that calling _on_connection_ready twice will not cause an
        # AlreadyCalled error on the Deferred.
        d = self.conn.ready
        self.conn._on_connection_ready("testresult")
        self.assertTrue(d.called)
        # A second call must not raise AlreadyCalled
        self.conn._on_connection_ready("testresult")

    @deferred(timeout=5.0)
    def test_on_connection_ready_method(self):
        # Verify that the connectionReady method is called when the "ready"
        # Deferred is resolved.
        d = self.conn.ready
        self.conn.connectionReady = mock.Mock()
        self.conn._on_connection_ready("testresult")
        self.conn.connectionReady.assert_called_once()
        return d

    @deferred(timeout=5.0)
    def test_on_connection_failed(self):
        # Verify that the "ready" Deferred errbacks on _on_connection_failed.
        d = self.conn.ready
        self.conn._on_connection_failed(None)
        return self.assertFailure(d, AMQPConnectionError)

    def test_on_connection_failed_twice(self):
        # Verify that calling _on_connection_failed twice will not cause an
        # AlreadyCalled error on the Deferred.
        d = self.conn.ready
        self.conn._on_connection_failed(None)
        self.assertTrue(d.called)
        d.addErrback(lambda f: None)  # silence the error
        # A second call must not raise AlreadyCalled
        self.conn._on_connection_failed(None)

    @deferred(timeout=5.0)
    def test_on_connection_closed(self):
        # Verify that the "closed" Deferred is resolved on
        # _on_connection_closed.
        self.conn._on_connection_ready("dummy")
        d = self.conn.closed
        self.conn._on_connection_closed("test conn", "test reason")
        self.assertTrue(d.called)
        d.addCallback(self.assertEqual, "test reason")
        return d

    def test_on_connection_closed_twice(self):
        # Verify that calling _on_connection_closed twice will not cause an
        # AlreadyCalled error on the Deferred.
        self.conn._on_connection_ready("dummy")
        d = self.conn.closed
        self.conn._on_connection_closed("test conn", "test reason")
        self.assertTrue(d.called)
        # A second call must not raise AlreadyCalled
        self.conn._on_connection_closed("test conn", "test reason")

    @deferred(timeout=5.0)
    def test_on_connection_closed_Failure(self):
        # Verify that the _on_connection_closed method can be called with
        # a Failure instance without triggering the errback path.
        self.conn._on_connection_ready("dummy")
        error = RuntimeError()
        d = self.conn.closed
        self.conn._on_connection_closed("test conn", Failure(error))
        self.assertTrue(d.called)

        def _check_cb(result):
            self.assertEqual(result, error)

        def _check_eb(_failure):
            self.fail("The errback path should not have been triggered")

        d.addCallbacks(_check_cb, _check_eb)
        return d

    def test_close(self):
        # Verify that the close method is properly wrapped.
        self.conn._impl.is_closed = False
        self.conn.closed = "TESTING"
        value = self.conn.close()
        self.assertEqual(value, "TESTING")
        self.conn._impl.close.assert_called_once_with(200, "Normal shutdown")

    def test_close_twice(self):
        # Verify that the close method is only transmitted when open.
        self.conn._impl.is_closed = True
        self.conn.close()
        self.conn._impl.close.assert_not_called()


class TwistedConnectionAdapterTestCase(TestCase):

    def setUp(self):
        self.conn = _TwistedConnectionAdapter(
            None, None, None, None, None
        )

    def tearDown(self):
        if self.conn._transport is None:
            self.conn._transport = mock.Mock()
        self.conn.close()

    def test_adapter_disconnect_stream(self):
        # Verify that the underlying transport is aborted.
        transport = mock.Mock()
        self.conn.connection_made(transport)
        self.conn._adapter_disconnect_stream()
        transport.loseConnection.assert_called_once()

    def test_adapter_emit_data(self):
        # Verify that the data is transmitted to the underlying transport.
        transport = mock.Mock()
        self.conn.connection_made(transport)
        self.conn._adapter_emit_data("testdata")
        transport.write.assert_called_with("testdata")

    def test_timeout(self):
        # Verify that timeouts are registered and cancelled properly.
        callback = mock.Mock()
        timer_id = self.conn._adapter_call_later(5, callback)
        self.assertEqual(len(reactor.getDelayedCalls()), 1)
        self.conn._adapter_remove_timeout(timer_id)
        self.assertEqual(len(reactor.getDelayedCalls()), 0)
        callback.assert_not_called()

    @deferred(timeout=5.0)
    def test_call_threadsafe(self):
        # Verify that the method is actually called using the reactor's
        # callFromThread method.
        callback = mock.Mock()
        self.conn._adapter_add_callback_threadsafe(callback)
        d = defer.Deferred()

        def check():
            callback.assert_called_once()
            d.callback(None)

        # Give time to run the callFromThread call
        reactor.callLater(0.1, check)
        return d

    def test_connection_made(self):
        # Verify the connection callback
        transport = mock.Mock()
        self.conn.connection_made(transport)
        self.assertEqual(self.conn._transport, transport)
        self.assertEqual(
            self.conn.connection_state, self.conn.CONNECTION_PROTOCOL)

    def test_connection_lost(self):
        # Verify that the correct callback is called and that the
        # attributes are reinitialized.
        self.conn._on_stream_terminated = mock.Mock()
        error = Failure(RuntimeError("testreason"))
        self.conn.connection_lost(error)
        self.conn._on_stream_terminated.assert_called_with(error.value)
        self.assertIsNone(self.conn._transport)

    def test_connection_lost_connectiondone(self):
        # When the ConnectionDone is transmitted, consider it an expected
        # disconnection.
        self.conn._on_stream_terminated = mock.Mock()
        error = Failure(twisted_error.ConnectionDone())
        self.conn.connection_lost(error)
        self.assertEqual(self.conn._error, error.value)
        self.conn._on_stream_terminated.assert_called_with(None)
        self.assertIsNone(self.conn._transport)

    def test_data_received(self):
        # Verify that the received data is forwarded to the Connection.
        data = b"test data"
        self.conn._on_data_available = mock.Mock()
        self.conn.data_received(data)
        self.conn._on_data_available.assert_called_once_with(data)


class TimerHandleTestCase(TestCase):

    def setUp(self):
        self.handle = mock.Mock()
        self.timer = _TimerHandle(self.handle)

    def test_cancel(self):
        # Verify that the cancel call is properly transmitted.
        self.timer.cancel()
        self.handle.cancel.assert_called_once()
        self.assertIsNone(self.timer._handle)

    def test_cancel_twice(self):
        # Verify that cancel() can be called twice.
        self.timer.cancel()
        self.timer.cancel()  # This must not traceback

    def test_cancel_already_called(self):
        # Verify that the timer gracefully handles AlreadyCalled errors.
        self.handle.cancel.side_effect = twisted_error.AlreadyCalled()
        self.timer.cancel()
        self.handle.cancel.assert_called_once()

    def test_cancel_already_cancelled(self):
        # Verify that the timer gracefully handles AlreadyCancelled errors.
        self.handle.cancel.side_effect = twisted_error.AlreadyCancelled()
        self.timer.cancel()
        self.handle.cancel.assert_called_once()

pika-1.2.0/tests/base/__init__.py
pika-1.2.0/tests/base/async_test_base.py

"""Base test classes for async_adapter_tests.py

"""

import datetime
import functools
import os
import select
import sys
import logging
import platform
import unittest
import uuid

try:
    from unittest import mock  # pylint: disable=C0412
except ImportError:
    import mock

import pika
import pika.compat
from pika import adapters
from pika.adapters import select_connection
from pika.exchange_type import ExchangeType

from tests.wrappers.threaded_test_wrapper import create_run_in_thread_decorator

# invalid-name
# pylint: disable=C0103

# Suppress pylint warnings concerning attribute defined outside __init__
# pylint: disable=W0201

# Suppress pylint messages concerning missing docstrings
# pylint: disable=C0111

# protected-access
# pylint: disable=W0212


TEST_TIMEOUT = 15

# Decorator for running our tests in threads with timeout
# NOTE: we give it a little more time to give our I/O loop-based timeout logic
# sufficient time to mop up.
run_test_in_thread_with_timeout = create_run_in_thread_decorator(  # pylint: disable=C0103
    TEST_TIMEOUT * 1.1)


def make_stop_on_error_with_self(the_self=None):
    """Create a decorator that stops test if the decorated method exits with
    exception and causes the test to fail by re-raising that exception after
    ioloop exits.

    :param None | AsyncTestCase the_self: if None, will use the first arg of
        decorated method if it is an instance of AsyncTestCase, raising
        exception otherwise.

    """
    def stop_on_error_with_self_decorator(fun):
        @functools.wraps(fun)
        def stop_on_error_wrapper(*args, **kwargs):
            this = the_self
            if this is None and args and isinstance(args[0], AsyncTestCase):
                this = args[0]

            if not isinstance(this, AsyncTestCase):
                raise AssertionError('Decorated method is not an AsyncTestCase '
                                     'instance method: {!r}'.format(fun))

            try:
                return fun(*args, **kwargs)
            except Exception as error:  # pylint: disable=W0703
                this.logger.exception('Stopping test due to failure in %r: %r',
                                      fun, error)
                this.stop(error)

        return stop_on_error_wrapper

    return stop_on_error_with_self_decorator


# Decorator that stops test if AsyncTestCase-based method exits with
# exception and causes the test to fail by re-raising that exception after
# ioloop exits.
#
# NOTE: only use it to decorate instance methods where self arg is a
# AsyncTestCase instance.
stop_on_error_in_async_test_case_method = make_stop_on_error_with_self()


def enable_tls():
    if 'PIKA_TEST_TLS' in os.environ and \
            os.environ['PIKA_TEST_TLS'].lower() == 'true':
        return True
    return False


class AsyncTestCase(unittest.TestCase):
    DESCRIPTION = ""
    ADAPTER = None
    TIMEOUT = TEST_TIMEOUT

    def setUp(self):
        self.logger = logging.getLogger(self.__class__.__name__)
        self.parameters = self.new_connection_params()
        self._timed_out = False
        self._conn_open_error = None
        self._public_stop_requested = False
        self._conn_closed_reason = None
        self._public_stop_error_in = None  # exception passed to our stop()

        super(AsyncTestCase, self).setUp()

    def new_connection_params(self):
        """
        :rtype: pika.ConnectionParameters

        """
        if enable_tls():
            return self._new_tls_connection_params()
        else:
            return self._new_plaintext_connection_params()

    def _new_tls_connection_params(self):
        """
        :rtype: pika.ConnectionParameters

        """
        self.logger.info('testing using TLS/SSL connection to port 5671')
        url = 'amqps://localhost:5671/%2F?ssl_options=%7B%27ca_certs%27%3A%27testdata%2Fcerts%2Fca_certificate.pem%27%2C%27keyfile%27%3A%27testdata%2Fcerts%2Fclient_key.pem%27%2C%27certfile%27%3A%27testdata%2Fcerts%2Fclient_certificate.pem%27%7D'
        params = pika.URLParameters(url)
        return params

    @staticmethod
    def _new_plaintext_connection_params():
        """
        :rtype: pika.ConnectionParameters

        """
        return pika.ConnectionParameters(host='localhost', port=5672)

    def tearDown(self):
        self._stop()

    def shortDescription(self):
        method_desc = super(AsyncTestCase, self).shortDescription()
        if self.DESCRIPTION:
            return "%s (%s)" % (self.DESCRIPTION, method_desc)
        else:
            return method_desc

    def begin(self, channel):  # pylint: disable=R0201,W0613
        """Extend to start the actual tests on the channel"""
        self.fail("AsyncTestCase.begin_test not extended")

    def start(self, adapter, ioloop_factory):
        self.logger.info('start at %s', datetime.datetime.utcnow())
        self.adapter = adapter or self.ADAPTER

        self.connection = self.adapter(self.parameters, self.on_open,
                                       self.on_open_error, self.on_closed,
                                       custom_ioloop=ioloop_factory())
        try:
            self.timeout = self.connection._adapter_call_later(self.TIMEOUT,
                                                               self.on_timeout)
            self._run_ioloop()

            self.assertFalse(self._timed_out)
            self.assertIsNone(self._conn_open_error)

            # Catch unexpected loss of connection
            self.assertTrue(
                self._public_stop_requested,
                'Unexpected end of test; connection close reason: '
                '{!r}'.format(self._conn_closed_reason))

            if self._public_stop_error_in is not None:
                raise self._public_stop_error_in  # pylint: disable=E0702
        finally:
            self.connection._nbio.close()
            self.connection = None

    def stop_ioloop_only(self):
        """Request stopping of the connection's ioloop to end the test without
        closing the connection

        """
        self._safe_remove_test_timeout()
        self.connection._nbio.stop()

    def stop(self, error=None):
        """close the connection and stop the ioloop

        :param None | Exception error: if not None, will raise the given
            exception after ioloop exits.

        """
        if error is not None:
            if self._public_stop_error_in is None:
                self.logger.error('stop(): stopping with error=%r.', error)
            else:
                self.logger.error('stop(): replacing pending error=%r with %r',
                                  self._public_stop_error_in, error)
            self._public_stop_error_in = error

        self.logger.info('Stopping test')
        self._public_stop_requested = True

        if self.connection.is_open:
            self.connection.close()  # NOTE: on_closed() will stop the ioloop
        elif self.connection.is_closed:
            self.logger.info(
                'Connection already closed, so just stopping ioloop')
            self._stop()

    def _run_ioloop(self):
        """Some tests need to subclass this in order to bootstrap their test
        logic after we instantiate the connection and assign it to
        `self.connection`, but before we run the ioloop

        """
        self.connection._nbio.run()

    def _safe_remove_test_timeout(self):
        if hasattr(self, 'timeout') and self.timeout is not None:
            self.logger.info("Removing timeout")
            self.connection._adapter_remove_timeout(self.timeout)
            self.timeout = None

    def _stop(self):
        if hasattr(self, 'connection') and self.connection is not None:
            self._safe_remove_test_timeout()
            self.logger.info("Stopping ioloop")
            self.connection._nbio.stop()

    def on_closed(self, connection, error):
        """called when the connection has finished closing"""
        self.logger.info('on_closed: %r %r', connection, error)
        self._conn_closed_reason = error
        self._stop()

    def on_open(self, connection):
        self.logger.debug('on_open: %r', connection)
        self.channel = connection.channel(
            on_open_callback=self.on_channel_opened)

    def on_open_error(self, connection, error):
        self._conn_open_error = error
        self.logger.error('on_open_error: %r %r', connection, error)
        self._stop()

    def on_channel_opened(self, channel):
        self.begin(channel)

    def on_timeout(self):
        """called when stuck waiting for connection to close"""
        self.logger.error('%s timed out; on_timeout called at %s', self,
                          datetime.datetime.utcnow())
        self.timeout = None  # the dispatcher should have removed it
        self._timed_out = True
        # initiate cleanup
        self.stop()


class BoundQueueTestCase(AsyncTestCase):

    def start(self, adapter, ioloop_factory):
        # PY3 compat encoding
        self.exchange = 'e-' + self.__class__.__name__ + ':' + uuid.uuid1().hex
        self.queue = 'q-' + self.__class__.__name__ + ':' + uuid.uuid1().hex
        self.routing_key = self.__class__.__name__
        super(BoundQueueTestCase, self).start(adapter, ioloop_factory)

    def begin(self, channel):
        self.channel.exchange_declare(self.exchange,
                                      exchange_type=ExchangeType.direct,
                                      passive=False,
                                      durable=False,
                                      auto_delete=True,
                                      callback=self.on_exchange_declared)

    def on_exchange_declared(self, frame):  # pylint: disable=W0613
        self.channel.queue_declare(self.queue,
                                   passive=False,
                                   durable=False,
                                   exclusive=True,
                                   auto_delete=True,
                                   arguments={'x-expires': self.TIMEOUT * 1000},
                                   callback=self.on_queue_declared)

    def on_queue_declared(self, frame):  # pylint: disable=W0613
        self.channel.queue_bind(self.queue,
                                self.exchange,
                                self.routing_key,
                                callback=self.on_ready)

    def on_ready(self, frame):
        raise NotImplementedError


#
# In order to write test cases that will be tested using all the Async Adapters
# write a class that inherits both from one of TestCase classes above and
# from the AsyncAdapters class below. This allows you to avoid duplicating the
# test methods for each adapter in each test class.
#

class AsyncAdapters(object):

    def start(self, adapter_class, ioloop_factory):
        """

        :param adapter_class: pika connection adapter class to test.
        :param ioloop_factory: to be called without args to instantiate a
           non-shared ioloop to be passed as the `custom_ioloop` arg to the
           `adapter_class` constructor. This is needed because some of the
           adapters default to using a singleton ioloop, which results in
           tests errors after prior tests close the ioloop to release
           resources, in order to eliminate ResourceWarning warnings
           concerning unclosed sockets from our adapters.
        :return:
        """
        raise NotImplementedError

    @run_test_in_thread_with_timeout
    def test_with_select_default(self):
        """SelectConnection:DefaultPoller"""
        with mock.patch.multiple(select_connection, SELECT_TYPE=None):
            self.start(adapters.SelectConnection, select_connection.IOLoop)

    @run_test_in_thread_with_timeout
    def test_with_select_select(self):
        """SelectConnection:select"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='select'):
            self.start(adapters.SelectConnection, select_connection.IOLoop)

    @unittest.skipIf(
        not hasattr(select, 'poll') or
        not hasattr(select.poll(), 'modify'),  # pylint: disable=E1101
        "poll not supported")
    @run_test_in_thread_with_timeout
    def test_with_select_poll(self):
        """SelectConnection:poll"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='poll'):
            self.start(adapters.SelectConnection, select_connection.IOLoop)

    @unittest.skipIf(not hasattr(select, 'epoll'), "epoll not supported")
    @run_test_in_thread_with_timeout
    def test_with_select_epoll(self):
        """SelectConnection:epoll"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='epoll'):
            self.start(adapters.SelectConnection, select_connection.IOLoop)

    @unittest.skipIf(not hasattr(select, 'kqueue'), "kqueue not supported")
    @run_test_in_thread_with_timeout
    def test_with_select_kqueue(self):
        """SelectConnection:kqueue"""
        with mock.patch.multiple(select_connection, SELECT_TYPE='kqueue'):
            self.start(adapters.SelectConnection, select_connection.IOLoop)

    @unittest.skipIf(pika.compat.ON_WINDOWS, "Windows not supported")
    @run_test_in_thread_with_timeout
    def test_with_gevent(self):
        """GeventConnection"""
        import gevent
        from pika.adapters.gevent_connection import GeventConnection
        from pika.adapters.gevent_connection import _GeventSelectorIOLoop

        def ioloop_factory():
            return _GeventSelectorIOLoop(gevent.get_hub())

        self.start(GeventConnection, ioloop_factory)

    @run_test_in_thread_with_timeout
    def test_with_tornado(self):
        """TornadoConnection"""
        import tornado.ioloop
        from pika.adapters.tornado_connection import TornadoConnection
        ioloop_factory = tornado.ioloop.IOLoop
        self.start(TornadoConnection, ioloop_factory)

    @unittest.skipIf(sys.version_info < (3, 4),
                     'Asyncio is available only with Python 3.4+')
    @run_test_in_thread_with_timeout
    def test_with_asyncio(self):
        """AsyncioConnection"""
        import asyncio
        from pika.adapters.asyncio_connection import AsyncioConnection
        ioloop_factory = asyncio.new_event_loop
        self.start(AsyncioConnection, ioloop_factory)

pika-1.2.0/tests/misc/__init__.py
pika-1.2.0/tests/misc/forward_server.py

"""TCP/IP forwarding/echo service for testing."""

from __future__ import print_function

import array
from datetime import datetime
import errno
from functools import partial
import logging
import multiprocessing
import os
import socket
import struct
import sys
import threading
import traceback

import pika.compat

if pika.compat.PY3:
    def buffer(object, offset, size):  # pylint: disable=W0622
        """array etc.
        have the buffer protocol"""
        return object[offset:offset + size]

try:
    import SocketServer
except ImportError:
    import socketserver as SocketServer  # pylint: disable=F0401


def _trace(fmt, *args):
    """Format and output the text to stderr"""
    print((fmt % args) + "\n", end="", file=sys.stderr)


class ForwardServer(object):  # pylint: disable=R0902
    """ Implement a TCP/IP forwarding/echo service for testing. Listens for an
    incoming TCP/IP connection, accepts it, then connects to the given remote
    address and forwards data back and forth between the two endpoints.

    This is similar to a subset of `netcat` functionality, but without
    dependency on any specific flavor of netcat

    Connection forwarding example; forward local connection to default
      rabbitmq addr, connect to rabbit via forwarder, then disconnect
      forwarder, then attempt another pika operation to see what happens

        with ForwardServer(("localhost", 5672)) as fwd:
            params = pika.ConnectionParameters(
                host=fwd.server_address[0],
                port=fwd.server_address[1])
            conn = pika.BlockingConnection(params)

        # Once outside the context, the forwarder is disconnected

        # Let's see what happens in pika with a disconnected server
        channel = conn.channel()

    Echo server example
        def produce(sock):
            sock.sendall("12345")
            sock.shutdown(socket.SHUT_WR)

        with ForwardServer(None) as echo:
            sock = socket.socket()
            sock.connect(echo.server_address)

            worker = threading.Thread(target=produce,
                                      args=[sock])
            worker.start()

            data = sock.makefile().read()
            assert data == "12345", data

            worker.join()

    """
    # Amount of time, in seconds, we're willing to wait for the subprocess
    _SUBPROC_TIMEOUT = 10

    def __init__(
            self,  # pylint: disable=R0913
            remote_addr,
            remote_addr_family=socket.AF_INET,
            remote_socket_type=socket.SOCK_STREAM,
            server_addr=("127.0.0.1", 0),
            server_addr_family=socket.AF_INET,
            server_socket_type=socket.SOCK_STREAM,
            local_linger_args=None):
        """
        :param tuple remote_addr: remote server's IP address, whose structure
            depends on remote_addr_family; pair (host-or-ip-addr,
            port-number). Pass None to have ForwardServer behave as echo
            server.
        :param remote_addr_family: socket.AF_INET (the default),
            socket.AF_INET6 or socket.AF_UNIX.
        :param remote_socket_type: only socket.SOCK_STREAM is supported at
            this time
        :param server_addr: optional address for binding this server's
            listening socket; the format depends on server_addr_family;
            defaults to ("127.0.0.1", 0)
        :param server_addr_family: Address family for this server's listening
            socket; socket.AF_INET (the default), socket.AF_INET6 or
            socket.AF_UNIX; defaults to socket.AF_INET
        :param server_socket_type: only socket.SOCK_STREAM is supported at
            this time
        :param tuple local_linger_args: SO_LINGER sockoverride for the local
            connection sockets, to be configured after connection is accepted.
            None for default, which is to not change the SO_LINGER option.
            Otherwise, its a two-tuple, where the first element is the
            `l_onoff` switch, and the second element is the `l_linger` value,
            in seconds
        """
        self._logger = logging.getLogger(__name__)

        self._remote_addr = remote_addr
        self._remote_addr_family = remote_addr_family
        assert remote_socket_type == socket.SOCK_STREAM, remote_socket_type
        self._remote_socket_type = remote_socket_type

        assert server_addr is not None
        self._server_addr = server_addr

        assert server_addr_family is not None
        self._server_addr_family = server_addr_family

        assert server_socket_type == socket.SOCK_STREAM, server_socket_type
        self._server_socket_type = server_socket_type

        self._local_linger_args = local_linger_args

        self._subproc = None

    @property
    def running(self):
        """Property: True if ForwardServer is active"""
        return self._subproc is not None

    @property
    def server_address_family(self):
        """Property: Get listening socket's address family

        NOTE: undefined before server starts and after it shuts down
        """
        assert self._server_addr_family is not None, "Not in context"

        return self._server_addr_family

    @property
    def server_address(self):
        """ Property: Get listening socket's address; the returned
        value depends on the listening socket's address family

        NOTE: undefined before server starts and after it shuts down
        """
        assert self._server_addr is not None, "Not in context"

        return self._server_addr

    def __enter__(self):
        """ Context manager entry. Starts the forwarding server

        :returns: self
        """
        return self.start()

    def __exit__(self, *args):
        """ Context manager exit; stops the forwarding server
        """
        self.stop()

    def start(self):
        """ Start the server

        NOTE: The context manager is the recommended way to use
        ForwardServer. start()/stop() are alternatives to the context manager
        use case and are mutually exclusive with it.

        :returns: self
        """
        queue = multiprocessing.Queue()

        self._subproc = multiprocessing.Process(
            target=_run_server,
            kwargs=dict(
                local_addr=self._server_addr,
                local_addr_family=self._server_addr_family,
                local_socket_type=self._server_socket_type,
                local_linger_args=self._local_linger_args,
                remote_addr=self._remote_addr,
                remote_addr_family=self._remote_addr_family,
                remote_socket_type=self._remote_socket_type,
                queue=queue))
        self._subproc.daemon = True
        self._subproc.start()

        try:
            # Get server socket info from subprocess
            self._server_addr_family, self._server_addr = queue.get(
                block=True, timeout=self._SUBPROC_TIMEOUT)
            queue.close()
        except Exception:  # pylint: disable=W0703
            try:
                self._logger.exception(
                    "Failed while waiting for local socket info")
                # Preserve primary exception and traceback
                raise
            finally:
                # Clean up
                try:
                    self.stop()
                except Exception:  # pylint: disable=W0703
                    # Suppress secondary exception in favor of the primary
                    self._logger.exception(
                        "Emergency subprocess shutdown failed")

        return self

    def stop(self):
        """Stop the server

        NOTE: The context manager is the recommended way to use
        ForwardServer. start()/stop() are alternatives to the context manager
        use case and are mutually exclusive with it.
        """
        self._logger.info("ForwardServer STOPPING")

        try:
            self._subproc.terminate()
            self._subproc.join(timeout=self._SUBPROC_TIMEOUT)
            if self._subproc.is_alive():
                self._logger.error(
                    "ForwardServer failed to terminate, killing it")
                os.kill(self._subproc.pid)
                self._subproc.join(timeout=self._SUBPROC_TIMEOUT)
                assert not self._subproc.is_alive(), self._subproc

            # Log subprocess's exit code; NOTE: negative signal.SIGTERM
            # (usually -15) is normal on POSIX systems - it corresponds to
            # SIGTERM
            exit_code = self._subproc.exitcode
            self._logger.info("ForwardServer terminated with exitcode=%s",
                              exit_code)
        finally:
            self._subproc = None


def _run_server(local_addr, local_addr_family, local_socket_type,  # pylint: disable=R0913
                local_linger_args, remote_addr, remote_addr_family,
                remote_socket_type, queue):
    """ Run the server; executed in the subprocess

    :param local_addr: listening address
    :param local_addr_family: listening address family; one of socket.AF_*
    :param local_socket_type: listening socket type; typically
        socket.SOCK_STREAM
    :param tuple local_linger_args: SO_LINGER sockoverride for the local
        connection sockets, to be configured after connection is accepted.
        Pass None to not change SO_LINGER. Otherwise, its a two-tuple, where
        the first element is the `l_onoff` switch, and the second element is
        the `l_linger` value in seconds
    :param remote_addr: address of the target server. Pass None to have
        ForwardServer behave as echo server
    :param remote_addr_family: address family for connecting to target server;
        one of socket.AF_*
    :param remote_socket_type: socket type for connecting to target server;
        typically socket.SOCK_STREAM
    :param multiprocessing.Queue queue: queue for depositing the forwarding
        server's actual listening socket address family and bound address. The
        parent process waits for this.
    """
    # NOTE: We define _ThreadedTCPServer class as a closure in order to
    # override some of its class members dynamically
    # NOTE: we add `object` to the base classes because `_ThreadedTCPServer`
    # isn't derived from `object`, which prevents `super` from working
    # properly
    class _ThreadedTCPServer(SocketServer.ThreadingMixIn,
                             SocketServer.TCPServer,
                             object):
        """Threaded streaming server for forwarding"""

        # Override TCPServer's class members
        address_family = local_addr_family
        socket_type = local_socket_type
        allow_reuse_address = True

        def __init__(self):

            handler_class_factory = partial(
                _TCPHandler,
                local_linger_args=local_linger_args,
                remote_addr=remote_addr,
                remote_addr_family=remote_addr_family,
                remote_socket_type=remote_socket_type)

            super(_ThreadedTCPServer, self).__init__(
                local_addr,
                handler_class_factory,
                bind_and_activate=True)

    server = _ThreadedTCPServer()

    # Send server socket info back to parent process
    queue.put([server.socket.family, server.server_address])
    queue.close()

    server.serve_forever()


# NOTE: we add `object` to the base classes because `StreamRequestHandler`
# isn't derived from `object`, which prevents `super` from working properly
class _TCPHandler(SocketServer.StreamRequestHandler, object):
    """TCP/IP session handler instantiated by TCPServer upon incoming
    connection. Implements forwarding/echo of the incoming connection.
    """

    _SOCK_RX_BUF_SIZE = 16 * 1024

    def __init__(self,  # pylint: disable=R0913
                 request,
                 client_address,
                 server,
                 local_linger_args,
                 remote_addr,
                 remote_addr_family,
                 remote_socket_type):
        """
        :param request: for super
        :param client_address: for super
        :param server: for super
        :param tuple local_linger_args: SO_LINGER sockoverride for the local
            connection sockets, to be configured after connection is accepted.
            Pass None to not change SO_LINGER. Otherwise, its a two-tuple,
            where the first element is the `l_onoff` switch, and the second
            element is the `l_linger` value in seconds
        :param remote_addr: address of the target server.
Pass None to have ForwardServer behave as echo server. :param remote_addr_family: address family for connecting to target server; one of socket.AF_* :param remote_socket_type: socket type for connecting to target server; typically socket.SOCK_STREAM :param **kwargs: kwargs for super class """ self._local_linger_args = local_linger_args self._remote_addr = remote_addr self._remote_addr_family = remote_addr_family self._remote_socket_type = remote_socket_type super(_TCPHandler, self).__init__( request=request, client_address=client_address, server=server) def handle(self): # pylint: disable=R0912 """Connect to remote and forward data between local and remote""" local_sock = self.connection if self._local_linger_args is not None: # Set SO_LINGER socket options on local socket l_onoff, l_linger = self._local_linger_args local_sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', l_onoff, l_linger)) if self._remote_addr is not None: # Forwarding set-up remote_dest_sock = remote_src_sock = socket.socket( family=self._remote_addr_family, type=self._remote_socket_type, proto=socket.IPPROTO_IP) remote_dest_sock.connect(self._remote_addr) _trace("%s _TCPHandler connected to remote %s", datetime.utcnow(), remote_dest_sock.getpeername()) else: # Echo set-up # NOTE: Use pika.compat._nonblocking_socketpair() since # socket.socketpair() isn't available on Windows under python 2 yet. 
remote_dest_sock, remote_src_sock = \ pika.compat._nonblocking_socketpair() # We rely on blocking I/O remote_dest_sock.setblocking(True) remote_src_sock.setblocking(True) try: local_forwarder = threading.Thread( target=self._forward, args=( local_sock, remote_dest_sock, )) local_forwarder.setDaemon(True) local_forwarder.start() try: self._forward(remote_src_sock, local_sock) finally: # Wait for local forwarder thread to exit local_forwarder.join() finally: try: try: _safe_shutdown_socket(remote_dest_sock, socket.SHUT_RDWR) finally: if remote_src_sock is not remote_dest_sock: _safe_shutdown_socket(remote_src_sock, socket.SHUT_RDWR) finally: remote_dest_sock.close() if remote_src_sock is not remote_dest_sock: remote_src_sock.close() def _forward(self, src_sock, dest_sock): # pylint: disable=R0912 """Forward from src_sock to dest_sock""" src_peername = src_sock.getpeername() _trace("%s forwarding from %s to %s", datetime.utcnow(), src_peername, dest_sock.getpeername()) try: # NOTE: python 2.6 doesn't support bytearray with recv_into, so # we use array.array instead; this is only okay as long as the # array instance isn't shared across threads. 
See # http://bugs.python.org/issue7827 and # groups.google.com/forum/#!topic/comp.lang.python/M6Pqr-KUjQw rx_buf = array.array("B", [0] * self._SOCK_RX_BUF_SIZE) while True: try: nbytes = src_sock.recv_into(rx_buf) except pika.compat.SOCKET_ERROR as exc: if exc.errno == errno.EINTR: continue elif exc.errno == errno.ECONNRESET: # Source peer forcibly closed connection _trace("%s errno.ECONNRESET from %s", datetime.utcnow(), src_peername) break else: _trace("%s Unexpected errno=%s from %s\n%s", datetime.utcnow(), exc.errno, src_peername, "".join(traceback.format_stack())) raise if not nbytes: # Source input EOF _trace("%s EOF on %s", datetime.utcnow(), src_peername) break try: dest_sock.sendall(buffer(rx_buf, 0, nbytes)) except pika.compat.SOCKET_ERROR as exc: if exc.errno == errno.EPIPE: # Destination peer closed its end of the connection _trace("%s Destination peer %s closed its end of " "the connection: errno.EPIPE", datetime.utcnow(), dest_sock.getpeername()) break elif exc.errno == errno.ECONNRESET: # Destination peer forcibly closed connection _trace("%s Destination peer %s forcibly closed " "connection: errno.ECONNRESET", datetime.utcnow(), dest_sock.getpeername()) break else: _trace("%s Unexpected errno=%s in sendall to %s\n%s", datetime.utcnow(), exc.errno, dest_sock.getpeername(), "".join( traceback.format_stack())) raise except: _trace("forward failed\n%s", "".join(traceback.format_exc())) raise finally: _trace("%s done forwarding from %s", datetime.utcnow(), src_peername) try: # Let source peer know we're done receiving _safe_shutdown_socket(src_sock, socket.SHUT_RD) finally: # Let destination peer know we're done sending _safe_shutdown_socket(dest_sock, socket.SHUT_WR) def echo(port=0): """ This function implements a simple echo server for testing the Forwarder class. :param int port: port number on which to listen We run this function and it prints out the listening socket binding. Then, we run Forwarder and point it at this echo "server". 
Then, we run telnet and point it at forwarder and see if whatever we type gets echoed back to us. This function waits for the client to connect and exits after the client closes the connection """ lsock = socket.socket() lsock.bind(("", port)) lsock.listen(1) _trace("Listening on sockname=%s", lsock.getsockname()) sock, remote_addr = lsock.accept() try: _trace("Connection from peer=%s", remote_addr) while True: try: data = sock.recv(4 * 1024) # pylint: disable=E1101 except pika.compat.SOCKET_ERROR as exc: if exc.errno == errno.EINTR: continue else: raise if not data: break sock.sendall(data) # pylint: disable=E1101 finally: try: _safe_shutdown_socket(sock, socket.SHUT_RDWR) finally: sock.close() def _safe_shutdown_socket(sock, how=socket.SHUT_RDWR): """ Shutdown a socket, suppressing ENOTCONN """ try: sock.shutdown(how) except pika.compat.SOCKET_ERROR as exc: if exc.errno != errno.ENOTCONN: raise pika-1.2.0/tests/misc/test_utils.py000066400000000000000000000050671400701476500173200ustar00rootroot00000000000000"""Acceptance test utils""" import functools import logging import time import traceback import pika.compat def retry_assertion(timeout_sec, retry_interval_sec=0.1): """Creates a decorator that retries the decorated function or method only upon `AssertionError` exception at the given retry interval not to exceed the overall given timeout. :param float timeout_sec: overall timeout in seconds :param float retry_interval_sec: amount of time to sleep between retries in seconds. :returns: decorator that implements the following behavior 1. This decorator guarantees to call the decorated function or method at least once. 2. It passes through all exceptions besides `AssertionError`, preserving the original exception and its traceback. 3. If no exception, it returns the return value from the decorated function/method. 4. It sleeps `time.sleep(retry_interval_sec)` between retries. 5. It checks for expiry of the overall timeout before sleeping. 6. 
If the overall timeout is exceeded, it re-raises the latest `AssertionError`, preserving its original traceback """ def retry_assertion_decorator(func): """Decorator""" @functools.wraps(func) def retry_assertion_wrap(*args, **kwargs): """The wrapper""" num_attempts = 0 start_time = pika.compat.time_now() while True: num_attempts += 1 try: result = func(*args, **kwargs) except AssertionError: now = pika.compat.time_now() # Compensate for time adjustment if now < start_time: start_time = now if (now - start_time) > timeout_sec: logging.exception( 'Exceeded retry timeout of %s sec in %s attempts ' 'with func %r. Caller\'s stack:\n%s', timeout_sec, num_attempts, func, ''.join(traceback.format_stack())) raise logging.debug('Attempt %s failed; retrying %r in %s sec.', num_attempts, func, retry_interval_sec) time.sleep(retry_interval_sec) else: logging.debug('%r succeeded at attempt %s', func, num_attempts) return result return retry_assertion_wrap return retry_assertion_decorator pika-1.2.0/tests/stubs/000077500000000000000000000000001400701476500147445ustar00rootroot00000000000000pika-1.2.0/tests/stubs/__init__.py000066400000000000000000000000001400701476500170430ustar00rootroot00000000000000pika-1.2.0/tests/stubs/io_services_test_stubs.py000066400000000000000000000077301400701476500221160ustar00rootroot00000000000000""" Test stubs for running tests against all supported adaptations of nbio_interface.AbstractIOServices and variations such as without SSL and with SSL. 
Usage example:

```
import unittest

from ..io_services_test_stubs import IOServicesTestStubs


class TestGetNativeIOLoop(unittest.TestCase,
                          IOServicesTestStubs):

    def start(self):
        native_loop = self.create_nbio().get_native_ioloop()
        self.assertIsNotNone(self._native_loop)
        self.assertIs(native_loop, self._native_loop)
```

"""

import sys
import unittest

from tests.wrappers.threaded_test_wrapper import run_in_thread_with_timeout

# Suppress missing-docstring to allow test method names to be printed by the
# test runner
# pylint: disable=C0111

# invalid-name
# pylint: disable=C0103


class IOServicesTestStubs(object):
    """Provides a stub test method for each combination of parameters we wish
    to test
    """
    # Overridden by framework-specific test methods
    _nbio_factory = None
    _native_loop = None
    _use_ssl = None

    def start(self):
        """Subclasses must override to run the test. This method is called
        from a thread.
        """
        raise NotImplementedError

    def create_nbio(self):
        """Create the configured AbstractIOServices adaptation and schedule
        it to be closed automatically when the test terminates.

        :param unittest.TestCase self:
        :rtype: pika.adapters.utils.nbio_interface.AbstractIOServices
        """
        nbio = self._nbio_factory()
        self.addCleanup(nbio.close)  # pylint: disable=E1101
        return nbio

    def _run_start(self, nbio_factory, native_loop, use_ssl=False):
        """Called by framework-specific test stubs to initialize test
        parameters and execute the `self.start()` method.

        :param nbio_interface.AbstractIOServices _() nbio_factory: function
            to call to create an instance of `AbstractIOServices` adaptation.
        :param native_loop: native loop implementation instance
        :param bool use_ssl: Whether to test with SSL instead of Plaintext
            transport. Defaults to Plaintext.
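        The dispatch pattern is the same for every framework-specific stub
        below: pass a zero-argument factory plus the native loop it wraps.
        Sketched here with stand-in classes (not pika's actual adapters):

        ```python
        # Stand-in classes; illustrative only, not pika's adapter API
        class _FakeLoop(object):
            pass

        class _FakeAdapter(object):
            def __init__(self, loop):
                self._loop = loop

            def get_native_ioloop(self):
                return self._loop

        class _Harness(object):
            def _run_start(self, nbio_factory, native_loop, use_ssl=False):
                self._nbio_factory = nbio_factory
                self._native_loop = native_loop
                self._use_ssl = use_ssl
                self.start()

            def start(self):
                # What a concrete test's start() typically verifies
                nbio = self._nbio_factory()
                assert nbio.get_native_ioloop() is self._native_loop

        loop = _FakeLoop()
        harness = _Harness()
        harness._run_start(nbio_factory=lambda: _FakeAdapter(loop),
                           native_loop=loop)
        ```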
""" self._nbio_factory = nbio_factory self._native_loop = native_loop self._use_ssl = use_ssl self.start() # Suppress missing-docstring to allow test method names to be printed by our # test runner # pylint: disable=C0111 @run_in_thread_with_timeout def test_with_select_connection_io_services(self): # Test entry point for `select_connection.IOLoop`-based async services # implementation. from pika.adapters.select_connection import IOLoop from pika.adapters.utils.selector_ioloop_adapter import ( SelectorIOServicesAdapter) native_loop = IOLoop() self._run_start( nbio_factory=lambda: SelectorIOServicesAdapter(native_loop), native_loop=native_loop) @run_in_thread_with_timeout def test_with_tornado_io_services(self): # Test entry point for `tornado.ioloop.IOLoop`-based async services # implementation. from tornado.ioloop import IOLoop from pika.adapters.utils.selector_ioloop_adapter import ( SelectorIOServicesAdapter) native_loop = IOLoop() self._run_start( nbio_factory=lambda: SelectorIOServicesAdapter(native_loop), native_loop=native_loop) @unittest.skipIf(sys.version_info < (3, 4), "Asyncio is available only with Python 3.4+") @run_in_thread_with_timeout def test_with_asyncio_io_services(self): # Test entry point for `asyncio` event loop-based io services # implementation. 
import asyncio from pika.adapters.asyncio_connection import ( _AsyncioIOServicesAdapter) native_loop = asyncio.new_event_loop() self._run_start( nbio_factory=lambda: _AsyncioIOServicesAdapter(native_loop), native_loop=native_loop) pika-1.2.0/tests/unit/000077500000000000000000000000001400701476500145635ustar00rootroot00000000000000pika-1.2.0/tests/unit/amqp_object_tests.py000066400000000000000000000045131400701476500206460ustar00rootroot00000000000000import unittest from pika import amqp_object class AMQPObjectTests(unittest.TestCase): def test_base_name(self): self.assertEqual(amqp_object.AMQPObject().NAME, 'AMQPObject') def test_repr_no_items(self): obj = amqp_object.AMQPObject() self.assertEqual(repr(obj), '') def test_repr_items(self): obj = amqp_object.AMQPObject() setattr(obj, 'foo', 'bar') setattr(obj, 'baz', 'qux') self.assertEqual(repr(obj), "") def test_equality(self): a = amqp_object.AMQPObject() b = amqp_object.AMQPObject() self.assertEqual(a, b) setattr(a, "a_property", "test") self.assertNotEqual(a, b) setattr(b, "a_property", "test") self.assertEqual(a, b) class ClassTests(unittest.TestCase): def test_base_name(self): self.assertEqual(amqp_object.Class().NAME, 'Unextended Class') def test_equality(self): a = amqp_object.Class() b = amqp_object.Class() self.assertEqual(a, b) class MethodTests(unittest.TestCase): def test_base_name(self): self.assertEqual(amqp_object.Method().NAME, 'Unextended Method') def test_set_content_body(self): properties = amqp_object.Properties() body = 'This is a test' obj = amqp_object.Method() obj._set_content(properties, body) self.assertEqual(obj._body, body) def test_set_content_properties(self): properties = amqp_object.Properties() body = 'This is a test' obj = amqp_object.Method() obj._set_content(properties, body) self.assertEqual(obj._properties, properties) def test_get_body(self): properties = amqp_object.Properties() body = 'This is a test' obj = amqp_object.Method() obj._set_content(properties, body) 
self.assertEqual(obj.get_body(), body) def test_get_properties(self): properties = amqp_object.Properties() body = 'This is a test' obj = amqp_object.Method() obj._set_content(properties, body) self.assertEqual(obj.get_properties(), properties) class PropertiesTests(unittest.TestCase): def test_base_name(self): self.assertEqual(amqp_object.Properties().NAME, 'Unextended Properties') pika-1.2.0/tests/unit/base_connection_tests.py000066400000000000000000000064261400701476500215200ustar00rootroot00000000000000""" Tests for pika.base_connection.BaseConnection """ import socket import unittest import mock import pika import pika.tcp_socket_opts from pika.adapters import base_connection # pylint: disable=C0111,W0212,C0103 # If this is missing, set it manually. We need it to test tcp opt setting. try: TCP_KEEPIDLE = socket.TCP_KEEPIDLE except AttributeError: TCP_KEEPIDLE = 4 class ConstructibleBaseConnection(base_connection.BaseConnection): """Adds dummy overrides for `BaseConnection`'s abstract methods so that we can instantiate and test it. 
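    The same trick works for any abstract base: supply a trivial override so
    the class can be constructed under test. A generic sketch with `abc`
    (stand-in class names, not pika's BaseConnection):

    ```python
    import abc

    class _AbstractConn(abc.ABC):
        @abc.abstractmethod
        def create_connection(self):
            raise NotImplementedError

    class _ConstructibleConn(_AbstractConn):
        # Dummy override whose only purpose is to permit instantiation
        def create_connection(self):
            raise NotImplementedError

    try:
        _AbstractConn()  # abstract method still unimplemented here
        abstract_raised = False
    except TypeError:
        abstract_raised = True

    conn = _ConstructibleConn()  # succeeds thanks to the dummy override
    ```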
""" @classmethod def create_connection(cls, *args, **kwargs): # pylint: disable=W0221 raise NotImplementedError class BaseConnectionTests(unittest.TestCase): def setUp(self): with mock.patch.object(ConstructibleBaseConnection, '_adapter_connect_stream'): self.connection = ConstructibleBaseConnection( None, None, None, None, None, internal_connection_workflow=True) self.connection._set_connection_state( ConstructibleBaseConnection.CONNECTION_OPEN) def test_repr(self): text = repr(self.connection) self.assertTrue(text.startswith(', , } # but got these 3: # {, , } @classmethod def tearDownClass(cls): # Now check against what was made available to us by # IOServicesTestStubs if cls._native_loop_classes != _SUPPORTED_LOOP_CLASSES: raise AssertionError( 'Expected these {} native I/O loop classes from ' 'IOServicesTestStubs: {!r}, but got these {}: {!r}'.format( len(_SUPPORTED_LOOP_CLASSES), _SUPPORTED_LOOP_CLASSES, len(cls._native_loop_classes), cls._native_loop_classes)) def setUp(self): self._runner_thread_id = threading.current_thread().ident def start(self): nbio = self.create_nbio() native_loop = nbio.get_native_ioloop() self.assertIsNotNone(self._native_loop) self.assertIs(native_loop, self._native_loop) self._native_loop_classes.add(native_loop.__class__) # Check that we're called from a different thread than the one that # set up this test. 
self.assertNotEqual(threading.current_thread().ident, self._runner_thread_id) # And make sure the loop actually works using this rudimentary test nbio.add_callback_threadsafe(nbio.stop) nbio.run() pika-1.2.0/tests/unit/select_connection_ioloop_tests.py000066400000000000000000001133701400701476500234430ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests for SelectConnection IOLoops """ from __future__ import print_function import errno import datetime import functools import logging import os import platform import select import signal import socket import sys import time import threading import unittest try: from unittest import mock except ImportError: import mock import pika from pika import compat from pika.adapters import select_connection # protected-access # pylint: disable=W0212 # missing-docstring # pylint: disable=C0111 # invalid-name # pylint: disable=C0103 # attribute-defined-outside-init # pylint: disable=W0201 LOGGER = logging.getLogger(__name__) EPOLL_SUPPORTED = hasattr(select, 'epoll') POLL_SUPPORTED = hasattr(select, 'poll') and hasattr(select.poll(), 'modify') KQUEUE_SUPPORTED = hasattr(select, 'kqueue') POLLIN = getattr(select, 'POLLIN', 0) or 1 POLLOUT = getattr(select, 'POLLOUT', 0) or 4 POLLERR = getattr(select, 'POLLERR', 0) or 8 POLLHUP = getattr(select, 'POLLHUP', 0) or 16 POLLNVAL = getattr(select, 'POLLNVAL', 0) or 32 POLLPRI = getattr(select, 'POLLPRI', 0) or 2 def _trace_stderr(fmt, *args): """Format and output the text to stderr""" print((fmt % args) + "\n", end="", file=sys.stderr) def _fd_events_to_str(events): str_events = '{}: '.format(events) if events & POLLIN: str_events += "In." if events & POLLOUT: str_events += "Out." if events & POLLERR: str_events += "Err." if events & POLLHUP: str_events += "Hup." if events & POLLNVAL: str_events += "Inval." if events & POLLPRI: str_events += "Pri." 
    remaining_events = events & ~(POLLIN | POLLOUT | POLLERR | POLLHUP |
                                  POLLNVAL | POLLPRI)

    if remaining_events:
        str_events += '+{}'.format(bin(remaining_events))

    return str_events


class IOLoopBaseTest(unittest.TestCase):
    SELECT_POLLER = None
    TIMEOUT = 1.5

    def setUp(self):
        select_type_patch = mock.patch.multiple(
            select_connection, SELECT_TYPE=self.SELECT_POLLER)
        select_type_patch.start()
        self.addCleanup(select_type_patch.stop)

        self.ioloop = select_connection.IOLoop()
        self.addCleanup(setattr, self, 'ioloop', None)
        self.addCleanup(self.ioloop.close)

        activate_poller_patch = mock.patch.object(
            self.ioloop._poller,
            'activate_poller',
            wraps=self.ioloop._poller.activate_poller)
        activate_poller_patch.start()
        self.addCleanup(activate_poller_patch.stop)

        deactivate_poller_patch = mock.patch.object(
            self.ioloop._poller,
            'deactivate_poller',
            wraps=self.ioloop._poller.deactivate_poller)
        deactivate_poller_patch.start()
        self.addCleanup(deactivate_poller_patch.stop)

    def shortDescription(self):
        method_desc = super(IOLoopBaseTest, self).shortDescription()
        return '%s (%s)' % (method_desc, self.SELECT_POLLER)

    def start(self):
        """Setup timeout handler for detecting 'no-activity' and start
        polling.
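        The watchdog pattern used throughout these tests looks like this in
        miniature (sketched with a toy loop class, not
        select_connection.IOLoop):

        ```python
        import time

        class _MiniLoop(object):
            # Toy stand-in for IOLoop: runs queued [deadline, callback] timers
            def __init__(self):
                self._timers = []
                self._running = False

            def call_later(self, delay, callback):
                timer = [time.monotonic() + delay, callback]
                self._timers.append(timer)
                return timer

            def remove_timeout(self, timer):
                self._timers.remove(timer)

            def stop(self):
                self._running = False

            def start(self):
                self._running = True
                while self._running and self._timers:
                    now = time.monotonic()
                    for timer in [t for t in self._timers if t[0] <= now]:
                        self._timers.remove(timer)
                        timer[1]()
                    time.sleep(0.001)

        fired = []
        loop = _MiniLoop()
        # Watchdog: fails the "test" if nothing stops the loop in time
        fail_timer = loop.call_later(5.0, lambda: fired.append("timeout"))
        # Real activity: stops the loop well before the watchdog fires
        loop.call_later(0.01, lambda: (fired.append("work"), loop.stop()))
        loop.start()
        loop.remove_timeout(fail_timer)  # mirrors addCleanup(remove_timeout, ...)
        ```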
""" fail_timer = self.ioloop.call_later(self.TIMEOUT, self.on_timeout) self.addCleanup(self.ioloop.remove_timeout, fail_timer) self.ioloop.start() self.ioloop._poller.activate_poller.assert_called_once_with() # pylint: disable=E1101 self.ioloop._poller.deactivate_poller.assert_called_once_with() # pylint: disable=E1101 def on_timeout(self): """Called when stuck waiting for connection to close""" self.ioloop.stop() # force the ioloop to stop self.fail('Test timed out') class IOLoopCloseClosesSubordinateObjectsTestSelect(IOLoopBaseTest): """ Test ioloop being closed """ SELECT_POLLER = 'select' def start_test(self): with mock.patch.multiple(self.ioloop, _timer=mock.DEFAULT, _poller=mock.DEFAULT, _callbacks=mock.DEFAULT) as mocks: self.ioloop.close() mocks['_timer'].close.assert_called_once_with() mocks['_poller'].close.assert_called_once_with() self.assertEqual(self.ioloop._callbacks, []) class IOLoopCloseAfterStartReturnsTest(IOLoopBaseTest): """ Test IOLoop.close() after normal return from start(). """ SELECT_POLLER = 'select' def start_test(self): self.ioloop.stop() # so start will terminate quickly self.start() self.ioloop.close() self.assertEqual(self.ioloop._callbacks, []) class IOLoopStartReentrancyNotAllowedTestSelect(IOLoopBaseTest): """ Test calling IOLoop.start() while arleady in start() raises exception. """ SELECT_POLLER = 'select' def start_test(self): callback_completed = [] def call_close_from_callback(): with self.assertRaises(RuntimeError) as cm: self.ioloop.start() self.assertEqual(cm.exception.args[0], 'IOLoop is not reentrant and is already running') self.ioloop.stop() callback_completed.append(1) self.ioloop.add_callback_threadsafe(call_close_from_callback) self.start() self.assertEqual(callback_completed, [1]) class IOLoopCloseBeforeStartReturnsTestSelect(IOLoopBaseTest): """ Test calling IOLoop.close() before return from start() raises exception. 
""" SELECT_POLLER = 'select' def start_test(self): callback_completed = [] def call_close_from_callback(): with self.assertRaises(AssertionError) as cm: self.ioloop.close() self.assertEqual(cm.exception.args[0], 'Cannot call close() before start() unwinds.') self.ioloop.stop() callback_completed.append(1) self.ioloop.add_callback_threadsafe(call_close_from_callback) self.start() self.assertEqual(callback_completed, [1]) class IOLoopThreadStopTestSelect(IOLoopBaseTest): """ Test ioloop being stopped by another Thread. """ SELECT_POLLER = 'select' def start_test(self): """Starts a thread that stops ioloop after a while and start polling""" timer = threading.Timer( 0.1, lambda: self.ioloop.add_callback_threadsafe(self.ioloop.stop)) self.addCleanup(timer.cancel) timer.start() self.start() # NOTE: Normal return from `start()` constitutes success @unittest.skipIf(not POLL_SUPPORTED, 'poll not supported') class IOLoopThreadStopTestPoll(IOLoopThreadStopTestSelect): """Same as IOLoopThreadStopTestSelect but uses 'poll' syscall.""" SELECT_POLLER = 'poll' @unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported') class IOLoopThreadStopTestEPoll(IOLoopThreadStopTestSelect): """Same as IOLoopThreadStopTestSelect but uses 'epoll' syscall.""" SELECT_POLLER = 'epoll' @unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported') class IOLoopThreadStopTestKqueue(IOLoopThreadStopTestSelect): """Same as IOLoopThreadStopTestSelect but uses 'kqueue' syscall.""" SELECT_POLLER = 'kqueue' class IOLoopAddCallbackAfterCloseDoesNotRaiseTestSelect(IOLoopBaseTest): """ Test ioloop add_callback_threadsafe() after ioloop close doesn't raise exception. 
""" SELECT_POLLER = 'select' def start_test(self): # Simulate closing after start returns self.ioloop.stop() # so that start() returns ASAP self.start() # NOTE: Normal return from `start()` constitutes success self.ioloop.close() # Expect: add_callback_threadsafe() won't raise after ioloop.close() self.ioloop.add_callback_threadsafe(lambda: None) # TODO FUTURE - fix this flaky test @unittest.skipIf(platform.python_implementation() == 'PyPy', 'test is flaky on PyPy') class IOLoopTimerTestSelect(IOLoopBaseTest): """Set a bunch of very short timers to fire in reverse order and check that they fire in order of time, not """ NUM_TIMERS = 5 TIMER_INTERVAL = 0.25 SELECT_POLLER = 'select' def set_timers(self): """Set timers that timers that fires in succession with the specified interval. """ self.timer_stack = list() for i in range(self.NUM_TIMERS, 0, -1): deadline = i * self.TIMER_INTERVAL self.ioloop.call_later( deadline, functools.partial(self.on_timer, i)) self.timer_stack.append(i) def start_test(self): """Set timers and start ioloop.""" self.set_timers() self.start() def on_timer(self, val): """A timeout handler that verifies that the given parameter matches what is expected. """ self.assertEqual(val, self.timer_stack.pop()) if not self.timer_stack: self.ioloop.stop() def test_normal(self): """Setup 5 timeout handlers and observe them get invoked one by one.""" self.start_test() def test_timer_for_deleting_itself(self): """Verifies that an attempt to delete a timeout within the corresponding handler generates no exceptions. 
""" self.timer_stack = list() handle_holder = [] self.timer_got_fired = False self.handle = self.ioloop.call_later( 0.1, functools.partial( self._on_timer_delete_itself, handle_holder)) handle_holder.append(self.handle) self.start() self.assertTrue(self.timer_got_called) def _on_timer_delete_itself(self, handle_holder): """A timeout handler that tries to remove itself.""" self.assertEqual(self.handle, handle_holder.pop()) # This removal here should not raise exception by itself nor # in the caller SelectPoller._process_timeouts(). self.timer_got_called = True self.ioloop.remove_timeout(self.handle) self.ioloop.stop() def test_timer_delete_another(self): """Verifies that an attempt by a timeout handler to delete another, that is ready to run, cancels the execution of the latter without generating an exception. This should pose no issues. """ holder_for_target_timer = [] self.ioloop.call_later( 0.01, functools.partial( self._on_timer_delete_another, holder_for_target_timer)) timer_2 = self.ioloop.call_later(0.02, self._on_timer_no_call) holder_for_target_timer.append(timer_2) time.sleep(0.03) # so that timer_1 and timer_2 fires at the same time. self.start() self.assertTrue(self.deleted_another_timer) self.assertTrue(self.concluded) def _on_timer_delete_another(self, holder): """A timeout handler that tries to remove another timeout handler that is ready to run. This should pose no issues. """ target_timer = holder[0] self.ioloop.remove_timeout(target_timer) self.deleted_another_timer = True def _on_timer_conclude(): """A timeout handler that is called to verify outcome of calling or not calling of previously set handlers. 
""" self.concluded = True self.assertTrue(self.deleted_another_timer) self.assertIsNone(target_timer.callback) self.ioloop.stop() self.ioloop.call_later(0.01, _on_timer_conclude) def _on_timer_no_call(self): """A timeout handler that is used when it's assumed not be called.""" self.fail('deleted timer callback was called.') @unittest.skipIf(not POLL_SUPPORTED, 'poll not supported') class IOLoopTimerTestPoll(IOLoopTimerTestSelect): """Same as IOLoopTimerTestSelect but uses 'poll' syscall""" SELECT_POLLER = 'poll' @unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported') class IOLoopTimerTestEPoll(IOLoopTimerTestSelect): """Same as IOLoopTimerTestSelect but uses 'epoll' syscall""" SELECT_POLLER = 'epoll' @unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported') class IOLoopTimerTestKqueue(IOLoopTimerTestSelect): """Same as IOLoopTimerTestSelect but uses 'kqueue' syscall""" SELECT_POLLER = 'kqueue' class IOLoopSleepTimerTestSelect(IOLoopTimerTestSelect): """Sleep until all the timers should have passed and check they still fire in deadline order""" def start_test(self): """ Setup timers, sleep and start polling """ self.set_timers() time.sleep(self.NUM_TIMERS * self.TIMER_INTERVAL) self.start() @unittest.skipIf(not POLL_SUPPORTED, 'poll not supported') class IOLoopSleepTimerTestPoll(IOLoopSleepTimerTestSelect): """Same as IOLoopSleepTimerTestSelect but uses 'poll' syscall""" SELECT_POLLER = 'poll' @unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported') class IOLoopSleepTimerTestEPoll(IOLoopSleepTimerTestSelect): """Same as IOLoopSleepTimerTestSelect but uses 'epoll' syscall""" SELECT_POLLER = 'epoll' @unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported') class IOLoopSleepTimerTestKqueue(IOLoopSleepTimerTestSelect): """Same as IOLoopSleepTimerTestSelect but uses 'kqueue' syscall""" SELECT_POLLER = 'kqueue' class IOLoopSocketBaseSelect(IOLoopBaseTest): """A base class for setting up a communicating pair of sockets.""" SELECT_POLLER = 'select' 
    READ_SIZE = 1024

    def save_sock(self, sock):
        """Store 'sock' in self.sock_map and return the fileno."""
        fd_ = sock.fileno()
        self.sock_map[fd_] = sock
        return fd_

    def setUp(self):
        super(IOLoopSocketBaseSelect, self).setUp()
        self.sock_map = dict()
        self.create_accept_socket()

    def tearDown(self):
        for fd_ in self.sock_map:
            self.ioloop.remove_handler(fd_)
            self.sock_map[fd_].close()
        super(IOLoopSocketBaseSelect, self).tearDown()

    def create_accept_socket(self):
        """Create a socket and set up the 'accept' handler"""
        listen_sock = socket.socket()
        listen_sock.setblocking(0)
        listen_sock.bind(('localhost', 0))
        listen_sock.listen(1)

        fd_ = self.save_sock(listen_sock)
        self.listen_addr = listen_sock.getsockname()
        self.ioloop.add_handler(fd_, self.do_accept, self.ioloop.READ)

    def create_write_socket(self, on_connected):
        """Create a write socket and set up the 'connected' handler"""
        write_sock = socket.socket()
        write_sock.setblocking(0)
        err = write_sock.connect_ex(self.listen_addr)
        # NOTE we get errno.EWOULDBLOCK 10035 on Windows
        self.assertIn(err, (errno.EINPROGRESS, errno.EWOULDBLOCK))

        fd_ = self.save_sock(write_sock)
        self.ioloop.add_handler(fd_, on_connected, self.ioloop.WRITE)
        return write_sock

    def do_accept(self, fd_, events):
        """Create socket from the given fd_ and set up the 'read' handler"""
        self.assertEqual(events, self.ioloop.READ)
        listen_sock = self.sock_map[fd_]
        read_sock, _ = listen_sock.accept()
        fd_ = self.save_sock(read_sock)
        self.ioloop.add_handler(fd_, self.do_read, self.ioloop.READ)

    def connected(self, _fd, _events):
        """Create socket from given _fd and respond to 'connected'.
        Implementation is subclass's responsibility.
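        The non-blocking connect handshake used by create_write_socket can be
        exercised in isolation (a sketch; connect_ex typically returns
        EINPROGRESS on POSIX, EWOULDBLOCK on Windows, or 0 when the loopback
        connect completes immediately):

        ```python
        import errno
        import select
        import socket

        listener = socket.socket()
        listener.bind(("127.0.0.1", 0))
        listener.listen(1)

        client = socket.socket()
        client.setblocking(False)
        err = client.connect_ex(listener.getsockname())
        assert err in (0, errno.EINPROGRESS, errno.EWOULDBLOCK)

        # Writability is the readiness event that signals connect completion
        _, writable, _ = select.select([], [client], [], 5.0)
        so_error = client.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)

        client.close()
        listener.close()
        ```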
""" self.fail("IOLoopSocketBase.connected not extended") def do_read(self, fd_, events): """ read from fd and check the received content """ self.assertEqual(events, self.ioloop.READ) # NOTE Use socket.recv instead of os.read for Windows compatibility self.verify_message(self.sock_map[fd_].recv(self.READ_SIZE)) def verify_message(self, _msg): """ See if 'msg' matches what is expected. This is a stub. Real implementation is subclass's responsibility """ self.fail("IOLoopSocketBase.verify_message not extended") def on_timeout(self): """called when stuck waiting for connection to close""" # force the ioloop to stop self.ioloop.stop() self.fail('Test timed out') @unittest.skipIf(not POLL_SUPPORTED, 'poll not supported') class IOLoopSocketBasePoll(IOLoopSocketBaseSelect): """Same as IOLoopSocketBaseSelect but uses 'poll' syscall""" SELECT_POLLER = 'poll' @unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported') class IOLoopSocketBaseEPoll(IOLoopSocketBaseSelect): """Same as IOLoopSocketBaseSelect but uses 'epoll' syscall""" SELECT_POLLER = 'epoll' @unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported') class IOLoopSocketBaseKqueue(IOLoopSocketBaseSelect): """ Same as IOLoopSocketBaseSelect but uses 'kqueue' syscall """ SELECT_POLLER = 'kqueue' class IOLoopSimpleMessageTestCaseSelect(IOLoopSocketBaseSelect): """Test read/write by creating a pair of sockets, writing to one end and reading from the other """ def start(self): """Create a pair of sockets and poll""" self.create_write_socket(self.connected) super(IOLoopSimpleMessageTestCaseSelect, self).start() def connected(self, fd, events): """Respond to 'connected' event by writing to the write-side.""" self.assertEqual(events, self.ioloop.WRITE) # NOTE Use socket.send instead of os.write for Windows compatibility self.sock_map[fd].send(b'X') self.ioloop.update_handler(fd, 0) def verify_message(self, msg): """Make sure we get what is expected and stop polling """ self.assertEqual(msg, b'X') self.ioloop.stop() 
def start_test(self): """Simple message Test""" self.start() @unittest.skipIf(not POLL_SUPPORTED, 'poll not supported') class IOLoopSimpleMessageTestCasetPoll(IOLoopSimpleMessageTestCaseSelect): """Same as IOLoopSimpleMessageTestCaseSelect but uses 'poll' syscall""" SELECT_POLLER = 'poll' @unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported') class IOLoopSimpleMessageTestCasetEPoll(IOLoopSimpleMessageTestCaseSelect): """Same as IOLoopSimpleMessageTestCaseSelect but uses 'epoll' syscall""" SELECT_POLLER = 'epoll' @unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported') class IOLoopSimpleMessageTestCasetKqueue(IOLoopSimpleMessageTestCaseSelect): """Same as IOLoopSimpleMessageTestCaseSelect but uses 'kqueue' syscall""" SELECT_POLLER = 'kqueue' class IOLoopEintrTestCaseSelect(IOLoopBaseTest): """ Tests if EINTR is properly caught and polling gets resumed. """ SELECT_POLLER = 'select' MSG_CONTENT = b'hello' @staticmethod def signal_handler(signum, interrupted_stack): """A signal handler that gets called in response to os.kill(signal.SIGUSR1).""" pass def _eintr_read_handler(self, fileno, events): """Read from within poll loop that gets receives eintr error.""" self.assertEqual(events, self.ioloop.READ) sock = socket.fromfd( os.dup(fileno), socket.AF_INET, socket.SOCK_STREAM) self.addCleanup(sock.close) mesg = sock.recv(256) self.assertEqual(mesg, self.MSG_CONTENT) self.poller.stop() self._eintr_read_handler_is_called = True def _eintr_test_fail(self): """This function gets called when eintr-test failed to get _eintr_read_handler called.""" self.poller.stop() self.fail('Eintr-test timed out') @unittest.skipUnless(compat.HAVE_SIGNAL, "This platform doesn't support posix signals") @mock.patch('pika.adapters.select_connection._is_resumable') def test_eintr( self, is_resumable_mock, is_resumable_raw=pika.adapters.select_connection._is_resumable): """Test that poll() is properly restarted after receiving EINTR error. 
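        Since PEP 475 (Python 3.5), the interpreter itself retries system
        calls on EINTR, which is the resumability that `_is_resumable` guards
        on older runtimes. A POSIX-only sketch showing that a signal
        delivered mid-select does not abort the wait:

        ```python
        import select
        import signal
        import socket
        import threading

        got = []
        signal.signal(signal.SIGALRM, lambda signum, frame: got.append(signum))

        r_sock, w_sock = socket.socketpair()
        signal.setitimer(signal.ITIMER_REAL, 0.05)  # SIGALRM mid-select
        # Writer wakes us at 0.2s; the 0.05s signal must not break the wait
        threading.Timer(0.2, w_sock.send, args=(b"X",)).start()

        readable, _, _ = select.select([r_sock], [], [], 5.0)
        data = r_sock.recv(1)
        r_sock.close()
        w_sock.close()
        ```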
        The class of the exception raised to signal the error differs from
        one implementation of the polling mechanism to another."""
        is_resumable_mock.side_effect = is_resumable_raw

        timer = select_connection._Timer()
        self.poller = self.ioloop._get_poller(timer.get_remaining_interval,
                                              timer.process_timeouts)
        self.addCleanup(self.poller.close)

        sockpair = self.poller._get_interrupt_pair()
        self.addCleanup(sockpair[0].close)
        self.addCleanup(sockpair[1].close)

        self._eintr_read_handler_is_called = False
        self.poller.add_handler(sockpair[0].fileno(),
                                self._eintr_read_handler, self.ioloop.READ)
        self.ioloop.call_later(self.TIMEOUT, self._eintr_test_fail)

        original_signal_handler = \
            signal.signal(signal.SIGUSR1, self.signal_handler)
        self.addCleanup(signal.signal, signal.SIGUSR1,
                        original_signal_handler)

        tmr_k = threading.Timer(0.1,
                                lambda: os.kill(os.getpid(), signal.SIGUSR1))
        self.addCleanup(tmr_k.cancel)
        tmr_w = threading.Timer(0.2,
                                lambda: sockpair[1].send(self.MSG_CONTENT))
        self.addCleanup(tmr_w.cancel)
        tmr_k.start()
        tmr_w.start()

        self.poller.start()

        self.assertTrue(self._eintr_read_handler_is_called)
        if pika.compat.EINTR_IS_EXPOSED:
            self.assertEqual(is_resumable_mock.call_count, 1)
        else:
            self.assertEqual(is_resumable_mock.call_count, 0)


@unittest.skipIf(not POLL_SUPPORTED, 'poll not supported')
class IOLoopEintrTestCasePoll(IOLoopEintrTestCaseSelect):
    """Same as IOLoopEintrTestCaseSelect but uses poll syscall"""
    SELECT_POLLER = 'poll'


@unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported')
class IOLoopEintrTestCaseEPoll(IOLoopEintrTestCaseSelect):
    """Same as IOLoopEintrTestCaseSelect but uses epoll syscall"""
    SELECT_POLLER = 'epoll'


@unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported')
class IOLoopEintrTestCaseKqueue(IOLoopEintrTestCaseSelect):
    """Same as IOLoopEintrTestCaseSelect but uses kqueue syscall"""
    SELECT_POLLER = 'kqueue'


class SelectPollerTestPollWithoutSockets(unittest.TestCase):
    def start_test(self):
        timer = select_connection._Timer()
        poller = select_connection.SelectPoller(
            get_wait_seconds=timer.get_remaining_interval,
            process_timeouts=timer.process_timeouts)
        self.addCleanup(poller.close)

        timer_call_container = []
        timer.call_later(0.00001, lambda: timer_call_container.append(1))
        poller.poll()

        delay = poller._get_wait_seconds()
        self.assertIsNotNone(delay)
        deadline = pika.compat.time_now() + delay

        while True:
            poller._process_timeouts()

            if pika.compat.time_now() < deadline:
                self.assertEqual(timer_call_container, [])
            else:
                # One last time in case deadline reached after previous
                # processing cycle
                poller._process_timeouts()
                break

        self.assertEqual(timer_call_container, [1])


class PollerTestCaseSelect(unittest.TestCase):
    SELECT_POLLER = 'select'

    def setUp(self):
        select_type_patch = mock.patch.multiple(
            select_connection, SELECT_TYPE=self.SELECT_POLLER)
        select_type_patch.start()
        self.addCleanup(select_type_patch.stop)

        timer = select_connection._Timer()
        self.addCleanup(timer.close)
        self.poller = select_connection.IOLoop._get_poller(
            timer.get_remaining_interval, timer.process_timeouts)
        self.addCleanup(self.poller.close)

    def test_poller_close(self):
        self.poller.close()
        self.assertIsNone(self.poller._r_interrupt)
        self.assertIsNone(self.poller._w_interrupt)
        self.assertIsNone(self.poller._fd_handlers)
        self.assertIsNone(self.poller._fd_events)
        self.assertIsNone(self.poller._processing_fd_event_map)


@unittest.skipIf(not POLL_SUPPORTED, 'poll not supported')
class PollerTestCasePoll(PollerTestCaseSelect):
    """Same as PollerTestCaseSelect but uses poll syscall"""
    SELECT_POLLER = 'poll'


@unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported')
class PollerTestCaseEPoll(PollerTestCaseSelect):
    """Same as PollerTestCaseSelect but uses epoll syscall"""
    SELECT_POLLER = 'epoll'


@unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported')
class PollerTestCaseKqueue(PollerTestCaseSelect):
    """Same as PollerTestCaseSelect but uses kqueue syscall"""
    SELECT_POLLER = 'kqueue'


class DefaultPollerSocketEventsTestCase(unittest.TestCase):
    """This test suite outputs diagnostic information useful for debugging the
    IOLoop poller's fd watcher
    """

    DEFAULT_TEST_TIMEOUT = 15

    IOLOOP_CLS = select_connection.IOLoop
    READ = IOLOOP_CLS.READ
    WRITE = IOLOOP_CLS.WRITE
    ERROR = IOLOOP_CLS.ERROR

    def create_ioloop_with_timeout(self):
        """Create IOLoop with test timeout and schedule cleanup to close it"""
        ioloop = select_connection.IOLoop()
        self.addCleanup(ioloop.close)

        def _on_test_timeout():
            """Called when test times out"""
            LOGGER.info('%s TIMED OUT (%s)', datetime.datetime.utcnow(), self)
            self.fail('Test timed out')

        ioloop.call_later(self.DEFAULT_TEST_TIMEOUT, _on_test_timeout)

        return ioloop

    def create_nonblocking_tcp_socket(self):
        """Create a TCP stream socket and schedule cleanup to close it"""
        sock = socket.socket()
        sock.setblocking(False)
        self.addCleanup(sock.close)
        return sock

    def create_nonblocking_socketpair(self):
        """Creates a non-blocking socket pair and schedules cleanup to close
        them

        :returns: two-tuple of connected non-blocking sockets
        """
        pair = pika.compat._nonblocking_socketpair()
        self.addCleanup(pair[0].close)
        self.addCleanup(pair[1].close)
        return pair

    def create_blocking_socketpair(self):
        """Creates a blocking socket pair and schedules cleanup to close them

        :returns: two-tuple of connected blocking sockets
        """
        pair = self.create_nonblocking_socketpair()
        pair[0].setblocking(True)  # pylint: disable=E1101
        pair[1].setblocking(True)
        return pair

    @staticmethod
    def safe_connect_nonblocking_socket(sock, addr_pair):
        """Initiate socket connection, suppressing EINPROGRESS/EWOULDBLOCK

        :param socket.socket sock:
        :param addr_pair: two-tuple of address string and port integer
        """
        try:
            sock.connect(addr_pair)
        except pika.compat.SOCKET_ERROR as error:
            # EINPROGRESS for posix and EWOULDBLOCK for windows
            if error.errno not in (errno.EINPROGRESS, errno.EWOULDBLOCK):
                raise

    def get_dead_socket_address(self):
        """
        :return: socket address pair (ip-addr, port) that will
        refuse connection
        """
        s1, s2 = pika.compat._nonblocking_socketpair()
        s2.close()
        self.addCleanup(s1.close)
        return s1.getsockname()  # pylint: disable=E1101

    def which_events_are_set_with_varying_eventmasks(self,
                                                     sock,
                                                     requested_eventmasks,
                                                     msg_prefix):
        """Common logic for which_events_are_set_* tests. Runs the event loop
        while varying eventmasks at each socket event callback

        :param sock:
        :param requested_eventmasks: a mutable list of eventmasks to apply
            after each socket event callback
        :param msg_prefix: Message prefix to apply when printing watched vs.
            indicated events.
        """
        ioloop = self.create_ioloop_with_timeout()

        def handle_socket_events(_fd, in_events):
            socket_error = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            socket_error = 0 if socket_error == 0 else '{} ({})'.format(
                socket_error, os.strerror(socket_error))

            _trace_stderr('[%s] %s: watching=%s; indicated=%s; sockerr=%s',
                          ioloop._poller.__class__.__name__,
                          msg_prefix,
                          _fd_events_to_str(requested_eventmasks[0]),
                          _fd_events_to_str(in_events),
                          socket_error)

            # NOTE: ERROR may be added automatically by some pollers
            # without being requested.
            self.assertTrue(
                in_events & (requested_eventmasks[0] | self.ERROR),
                'watching={}; indicated={}'.format(
                    _fd_events_to_str(requested_eventmasks[0]),
                    _fd_events_to_str(in_events)))

            requested_eventmasks.pop(0)

            if requested_eventmasks:
                ioloop.update_handler(sock.fileno(), requested_eventmasks[0])
            else:
                ioloop.stop()

        ioloop.add_handler(sock.fileno(),
                           handle_socket_events,
                           requested_eventmasks[0])

        ioloop.start()

    def test_which_events_are_set_when_failed_to_connect(self):
        msg_prefix = '@ Failed to connect'

        sock = self.create_nonblocking_tcp_socket()

        self.safe_connect_nonblocking_socket(sock,
                                             self.get_dead_socket_address())

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE | self.ERROR
        ]

        # NOTE: on OS X, we just get POLLHUP when requesting WRITE in this case
        # with PollPoller. It's documented in `man poll` on OS X as mutually
        # exclusive with POLLOUT; so it looks like PollPoller on OS X needs to
        # translate POLLHUP to POLLERR and we need to request ERROR just in
        # case.

        # NOTE: Unlike POSIX, Windows select doesn't indicate as
        # readable/writable a socket that failed to connect - it reflects the
        # failure only via exceptfds.
        if platform.system() == 'Windows':
            _trace_stderr(
                '%s: setting `ERROR` on all event filters on '
                'Windows, because its `select()` does not indicate a socket '
                'that failed to connect as readable or writable.',
                msg_prefix)
            for i in pika.compat.xrange(len(requested_eventmasks)):
                requested_eventmasks[i] |= self.ERROR

        self.which_events_are_set_with_varying_eventmasks(
            sock=sock,
            requested_eventmasks=requested_eventmasks,
            msg_prefix=msg_prefix)

    def test_which_events_are_set_after_remote_end_closes(self):
        s1, s2 = self.create_blocking_socketpair()
        s2.close()

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE
        ]

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Remote closed')

    def test_which_events_are_set_after_remote_end_closes_with_pending_data(
            self):
        s1, s2 = self.create_blocking_socketpair()
        s2.send(b'abc')
        s2.close()

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE
        ]

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Remote closed with pending data')

    def test_which_events_are_set_after_remote_shuts_rd(self):
        s1, s2 = self.create_blocking_socketpair()
        s2.shutdown(socket.SHUT_RD)

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.WRITE
        ]

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Remote shut RD')

    def test_which_events_are_set_after_remote_shuts_wr(self):
        s1, s2 = self.create_blocking_socketpair()
        s2.shutdown(socket.SHUT_WR)

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE
        ]

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Remote shut WR')

    def test_which_events_are_set_after_remote_shuts_wr_with_pending_data(
            self):
        s1, s2 = self.create_blocking_socketpair()
        s2.send(b'abc')
        s2.shutdown(socket.SHUT_WR)

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE
        ]

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Remote shut WR with pending data')

    def test_which_events_are_set_after_remote_shuts_rdwr(self):
        s1, s2 = self.create_blocking_socketpair()
        s2.shutdown(socket.SHUT_RDWR)

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE
        ]

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Remote shut RDWR')

    def test_which_events_are_set_after_local_shuts_rd(self):
        msg_prefix = '@ Local shut RD'

        s1, _s2 = self.create_blocking_socketpair()
        s1.shutdown(socket.SHUT_RD)  # pylint: disable=E1101

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE
        ]

        # NOTE: Unlike POSIX, Windows select doesn't indicate as readable a
        # socket that was shut down locally with SHUT_RD.
        if platform.system() == 'Windows':
            _trace_stderr(
                '%s: removing check for solo READ on Windows, '
                'because its `select()` does not indicate a socket shut '
                'locally with SHUT_RD as readable.', msg_prefix)
            requested_eventmasks.remove(self.READ)

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix=msg_prefix)

    def test_which_events_are_set_after_local_shuts_wr(self):
        s1, _s2 = self.create_blocking_socketpair()
        s1.shutdown(socket.SHUT_WR)  # pylint: disable=E1101

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.WRITE | self.ERROR
        ]

        # NOTE: on OS X, we just get POLLHUP when requesting WRITE in this case
        # with PollPoller. It's documented in `man poll` on OS X as mutually
        # exclusive with POLLOUT; so it looks like PollPoller on OS X needs to
        # translate POLLHUP to POLLERR and we need to request ERROR just in
        # case.

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix='@ Local shut WR')

    def test_which_events_are_set_after_local_shuts_rdwr(self):
        msg_prefix = '@ Local shut RDWR'

        s1, _s2 = self.create_blocking_socketpair()
        s1.shutdown(socket.SHUT_RDWR)  # pylint: disable=E1101

        requested_eventmasks = [
            self.READ | self.WRITE | self.ERROR,
            self.READ | self.WRITE,
            self.READ,
            self.WRITE | self.ERROR
        ]

        # NOTE: on OS X, we just get POLLHUP when requesting WRITE in this case
        # with PollPoller. It's documented in `man poll` on OS X as mutually
        # exclusive with POLLOUT; so it looks like PollPoller on OS X needs to
        # translate POLLHUP to POLLERR and we need to request ERROR just in
        # case.

        # NOTE: Unlike POSIX, Windows select doesn't indicate as readable a
        # socket that was shut down locally with SHUT_RDWR.
        if platform.system() == 'Windows':
            _trace_stderr(
                '%s: removing check for solo READ on Windows, '
                'because its `select()` does not indicate a socket shut '
                'locally with SHUT_RDWR as readable.', msg_prefix)
            requested_eventmasks.remove(self.READ)

        self.which_events_are_set_with_varying_eventmasks(
            sock=s1,
            requested_eventmasks=requested_eventmasks,
            msg_prefix=msg_prefix)


@mock.patch.multiple(select_connection, SELECT_TYPE='select')
class SelectPollerSocketEventsTestCase(DefaultPollerSocketEventsTestCase):
    """Runs `DefaultPollerSocketEventsTestCase` tests with forced use of
    SelectPoller
    """
    pass


@unittest.skipIf(not POLL_SUPPORTED, 'poll not supported')
@mock.patch.multiple(select_connection, SELECT_TYPE='poll')
class PollPollerSocketEventsTestCase(DefaultPollerSocketEventsTestCase):
    """Same as DefaultPollerSocketEventsTestCase but uses poll syscall"""
    pass


@unittest.skipIf(not EPOLL_SUPPORTED, 'epoll not supported')
@mock.patch.multiple(select_connection, SELECT_TYPE='epoll')
class EpollPollerSocketEventsTestCase(DefaultPollerSocketEventsTestCase):
    """Same as DefaultPollerSocketEventsTestCase but uses epoll syscall"""
    pass


@unittest.skipIf(not KQUEUE_SUPPORTED, 'kqueue not supported')
@mock.patch.multiple(select_connection, SELECT_TYPE='kqueue')
class KqueuePollerSocketEventsTestCase(DefaultPollerSocketEventsTestCase):
    """Same as DefaultPollerSocketEventsTestCase but uses kqueue syscall"""
    pass

pika-1.2.0/tests/unit/select_connection_timer_tests.py

# -*- coding: utf-8 -*-
"""
Tests for SelectConnection _Timer and _Timeout classes

"""

import math
import time
import unittest

import mock

import pika.compat
from pika.adapters import select_connection

# Suppress protected-access
# pylint: disable=W0212
# Suppress missing-docstring
# pylint: disable=C0111
# Suppress invalid-name
# pylint: disable=C0103


def test_now():
    # pika/pika#1184
    # Note that time is a float, and these tests depend on
    # exact math. Round up the value to ensure that
    # CI doesn't fail because of something like this:
    #     raise self.failureException('6.000000000000028 != 6')
    # https://travis-ci.org/pika/pika/jobs/489828602
    return math.ceil(pika.compat.time_now())


class ChildTimeout(select_connection._Timeout):
    def __init__(self, *args, **kwargs):
        super(ChildTimeout, self).__init__(*args, **kwargs)
        self.extra = 'e'

    def __eq__(self, other):
        if isinstance(other, ChildTimeout):
            return self.extra == other.extra and super(
                ChildTimeout, self).__eq__(other)
        return NotImplemented


class TimeoutClassTests(unittest.TestCase):
    """Test select_connection._Timeout class"""

    def test_properties(self):
        now = test_now()
        cb = lambda: None
        timeout = select_connection._Timeout(now + 5.3, cb)
        self.assertIs(timeout.callback, cb)
        self.assertEqual(timeout.deadline, now + 5.3)

    def test_non_negative_deadline(self):
        select_connection._Timeout(0, lambda: None)
        select_connection._Timeout(5, lambda: None)

        with self.assertRaises(ValueError) as cm:
            select_connection._Timeout(-1, lambda: None)
        self.assertIn('deadline must be non-negative epoch number',
                      cm.exception.args[0])

    def test_non_callable_callback_raises(self):
        with self.assertRaises(TypeError) as cm:
            select_connection._Timeout(5, None)
        self.assertIn('callback must be a callable, but got',
                      cm.exception.args[0])

        with self.assertRaises(TypeError) as cm:
            select_connection._Timeout(5, dict())
        self.assertIn('callback must be a callable, but got',
                      cm.exception.args[0])

    def test_eq(self):
        # Comparison should be by deadline only
        self.assertEqual(
            select_connection._Timeout(5, lambda: None),
            select_connection._Timeout(5, lambda: 5))
        self.assertEqual(
            select_connection._Timeout(5, lambda: 5),
            select_connection._Timeout(5, lambda: None))
        self.assertEqual(
            select_connection._Timeout(5, lambda: None),
            ChildTimeout(5, lambda: 5))
        self.assertEqual(
            ChildTimeout(5, lambda: 5),
            select_connection._Timeout(5, lambda: None))

        class Foreign(object):
            def __eq__(self, other):
                return 'foobar'

        self.assertEqual(
            select_connection._Timeout(5, lambda: None) == Foreign(),
            'foobar')
        self.assertEqual(
            Foreign() == select_connection._Timeout(5, lambda: None),
            'foobar')

    def test_ne(self):
        # Comparison should be by deadline only
        self.assertNotEqual(
            select_connection._Timeout(5, lambda: None),
            select_connection._Timeout(10, lambda: None))
        self.assertNotEqual(
            select_connection._Timeout(10, lambda: None),
            select_connection._Timeout(5, lambda: None))
        self.assertNotEqual(
            select_connection._Timeout(5, lambda: None),
            ChildTimeout(10, lambda: None))
        self.assertNotEqual(
            ChildTimeout(10, lambda: None),
            select_connection._Timeout(5, lambda: None))
        self.assertNotEqual(
            select_connection._Timeout(5, lambda: None),
            dict(deadline=5, callback=lambda: None))
        self.assertNotEqual(
            dict(deadline=5, callback=lambda: None),
            select_connection._Timeout(5, lambda: None))

        class Foreign(object):
            def __ne__(self, other):
                return 'foobar'

        self.assertEqual(
            select_connection._Timeout(5, lambda: None) != Foreign(),
            'foobar')
        self.assertEqual(
            Foreign() != select_connection._Timeout(5, lambda: None),
            'foobar')

    def test_lt(self):
        # Comparison should be by deadline only
        self.assertLess(
            select_connection._Timeout(5, lambda: None),
            select_connection._Timeout(10, lambda: None))
        self.assertLess(
            select_connection._Timeout(5, lambda: None),
            ChildTimeout(10, lambda: None))

        class Foreign(object):
            def __gt__(self, other):
                return 'foobar'

        self.assertEqual(
            select_connection._Timeout(5, lambda: None) < Foreign(),
            'foobar')

        self.assertFalse(
            select_connection._Timeout(5, lambda: None) <
            select_connection._Timeout(5, lambda: None))
        self.assertFalse(
            select_connection._Timeout(5, lambda: None) <
            select_connection._Timeout(1, lambda: None))

    def test_gt(self):
        # Comparison should be by deadline only
        self.assertGreater(
            select_connection._Timeout(10, lambda: None),
            select_connection._Timeout(5, lambda: None))
        self.assertGreater(
            select_connection._Timeout(10, lambda: None),
            ChildTimeout(5, lambda: None))

        class Foreign(object):
            def __lt__(self, other):
                return 'foobar'

        self.assertEqual(
            select_connection._Timeout(5, lambda: None) > Foreign(),
            'foobar')

        self.assertFalse(
            select_connection._Timeout(5, lambda: None) >
            select_connection._Timeout(5, lambda: None))
        self.assertFalse(
            select_connection._Timeout(1, lambda: None) >
            select_connection._Timeout(5, lambda: None))

    def test_le(self):
        # Comparison should be by deadline only
        self.assertLessEqual(
            select_connection._Timeout(5, lambda: None),
            select_connection._Timeout(10, lambda: None))
        self.assertLessEqual(
            select_connection._Timeout(5, lambda: None),
            select_connection._Timeout(5, lambda: None))
        self.assertLessEqual(
            select_connection._Timeout(5, lambda: None),
            ChildTimeout(10, lambda: None))
        self.assertLessEqual(
            select_connection._Timeout(5, lambda: None),
            ChildTimeout(5, lambda: None))

        class Foreign(object):
            def __ge__(self, other):
                return 'foobar'

        self.assertEqual(
            select_connection._Timeout(5, lambda: None) <= Foreign(),
            'foobar')

        self.assertFalse(
            select_connection._Timeout(5, lambda: None) <=
            select_connection._Timeout(1, lambda: None))

    def test_ge(self):
        # Comparison should be by deadline only
        self.assertGreaterEqual(
            select_connection._Timeout(10, lambda: None),
            select_connection._Timeout(5, lambda: None))
        self.assertGreaterEqual(
            select_connection._Timeout(5, lambda: None),
            select_connection._Timeout(5, lambda: None))
        self.assertGreaterEqual(
            select_connection._Timeout(10, lambda: None),
            ChildTimeout(5, lambda: None))
        self.assertGreaterEqual(
            select_connection._Timeout(5, lambda: None),
            ChildTimeout(5, lambda: None))

        class Foreign(object):
            def __le__(self, other):
                return 'foobar'

        self.assertEqual(
            select_connection._Timeout(5, lambda: None) >= Foreign(),
            'foobar')

        self.assertFalse(
            select_connection._Timeout(1, lambda: None) >=
            select_connection._Timeout(5, lambda: None))


class TimerClassTests(unittest.TestCase):
    """Test select_connection._Timer class"""

    def test_close_empty(self):
        timer = \
            select_connection._Timer()
        timer.close()
        self.assertIsNone(timer._timeout_heap)

    def test_close_non_empty(self):
        timer = select_connection._Timer()
        t1 = timer.call_later(10, lambda: 10)
        t2 = timer.call_later(20, lambda: 20)
        timer.close()
        self.assertIsNone(timer._timeout_heap)
        self.assertIsNone(t1.callback)
        self.assertIsNone(t2.callback)

    def test_no_timeouts_remaining_interval_is_none(self):
        timer = select_connection._Timer()
        self.assertIsNone(timer.get_remaining_interval())

    def test_call_later_non_negative_delay_check(self):
        now = test_now()

        # 0 delay is okay
        with mock.patch('pika.compat.time_now', return_value=now):
            timer = select_connection._Timer()
            timer.call_later(0, lambda: None)
            self.assertEqual(timer._timeout_heap[0].deadline, now)
            self.assertEqual(timer.get_remaining_interval(), 0)

        # Positive delay is okay
        with mock.patch('pika.compat.time_now', return_value=now):
            timer = select_connection._Timer()
            timer.call_later(0.5, lambda: None)
            self.assertEqual(timer._timeout_heap[0].deadline, now + 0.5)
            self.assertEqual(timer.get_remaining_interval(), 0.5)

        # Negative delay raises ValueError
        timer = select_connection._Timer()
        with self.assertRaises(ValueError) as cm:
            timer.call_later(-5, lambda: None)
        self.assertIn('call_later: delay must be non-negative, but got',
                      cm.exception.args[0])

    def test_call_later_single_timer_expires(self):
        now = test_now()
        with mock.patch('pika.compat.time_now', return_value=now):
            bucket = []
            timer = select_connection._Timer()
            timer.call_later(5, lambda: bucket.append(1))

            # Nothing is ready to expire
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertEqual(timer.get_remaining_interval(), 5)

        # Advance time by 5 seconds and expect the timer to expire
        with mock.patch('pika.compat.time_now', return_value=now + 5):
            self.assertEqual(timer.get_remaining_interval(), 0)
            timer.process_timeouts()
            self.assertEqual(bucket, [1])
            self.assertEqual(len(timer._timeout_heap), 0)
            self.assertIsNone(timer.get_remaining_interval())

    def test_call_later_multiple_timers(self):
        now = test_now()
        bucket = []
        timer = select_connection._Timer()

        with mock.patch('pika.compat.time_now', return_value=now):
            timer.call_later(5, lambda: bucket.append(1))
            timer.call_later(5, lambda: bucket.append(2))
            timer.call_later(10, lambda: bucket.append(3))

            # Nothing is ready to fire yet
            self.assertEqual(timer.get_remaining_interval(), 5)
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertEqual(timer.get_remaining_interval(), 5)

        # Advance time by 6 seconds and expect first two timers to expire
        with mock.patch('pika.compat.time_now', return_value=now + 6):
            self.assertEqual(timer.get_remaining_interval(), 0)
            timer.process_timeouts()
            self.assertEqual(bucket, [1, 2])
            self.assertEqual(len(timer._timeout_heap), 1)
            self.assertEqual(timer.get_remaining_interval(), 4)

        # Advance time by 10 seconds and expect the 3rd timeout to expire
        with mock.patch('pika.compat.time_now', return_value=now + 10):
            self.assertEqual(timer.get_remaining_interval(), 0)
            timer.process_timeouts()
            self.assertEqual(bucket, [1, 2, 3])
            self.assertEqual(len(timer._timeout_heap), 0)
            self.assertIsNone(timer.get_remaining_interval())

    def test_add_and_remove_timeout(self):
        now = test_now()
        bucket = []
        timer = select_connection._Timer()

        with mock.patch('pika.compat.time_now', return_value=now):
            timer.call_later(10, lambda: bucket.append(3))  # t3
            t2 = timer.call_later(6, lambda: bucket.append(2))
            t1 = timer.call_later(5, lambda: bucket.append(1))

            # Nothing is ready to fire yet
            self.assertEqual(timer.get_remaining_interval(), 5)
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertEqual(timer.get_remaining_interval(), 5)

            # Cancel t1 and t2 that haven't expired yet
            timer.remove_timeout(t1)
            self.assertIsNone(t1.callback)
            self.assertEqual(timer._num_cancellations, 1)
            timer.remove_timeout(t2)
            self.assertIsNone(t2.callback)
            self.assertEqual(timer._num_cancellations, 2)
            self.assertEqual(timer.get_remaining_interval(), 5)
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertEqual(timer._num_cancellations, 2)
            self.assertEqual(timer.get_remaining_interval(), 5)
            self.assertEqual(len(timer._timeout_heap), 3)

        # Advance time by 6 seconds to expire t1 and t2 and verify they don't
        # fire
        with mock.patch('pika.compat.time_now', return_value=now + 6):
            self.assertEqual(timer.get_remaining_interval(), 0)
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertEqual(timer._num_cancellations, 0)
            self.assertEqual(len(timer._timeout_heap), 1)
            self.assertEqual(timer.get_remaining_interval(), 4)

        # Advance time by 10 seconds to expire t3 and verify it fires
        with mock.patch('pika.compat.time_now', return_value=now + 10):
            self.assertEqual(timer.get_remaining_interval(), 0)
            timer.process_timeouts()
            self.assertEqual(bucket, [3])
            self.assertEqual(len(timer._timeout_heap), 0)
            self.assertIsNone(timer.get_remaining_interval())

    def test_gc_of_unexpired_timeouts(self):
        now = test_now()
        bucket = []
        timer = select_connection._Timer()

        with mock.patch.multiple(select_connection._Timer,
                                 _GC_CANCELLATION_THRESHOLD=1):
            with mock.patch('pika.compat.time_now', return_value=now):
                t3 = timer.call_later(10, lambda: bucket.append(3))
                t2 = timer.call_later(6, lambda: bucket.append(2))
                t1 = timer.call_later(5, lambda: bucket.append(1))

                # Cancel t1 and check that it doesn't trigger GC because it's
                # not greater than half the timeouts
                timer.remove_timeout(t1)
                self.assertEqual(timer._num_cancellations, 1)
                timer.process_timeouts()
                self.assertEqual(timer._num_cancellations, 1)
                self.assertEqual(bucket, [])
                self.assertEqual(len(timer._timeout_heap), 3)
                self.assertEqual(timer.get_remaining_interval(), 5)

                # Cancel t3 and verify GC since it's now greater than half of
                # total timeouts
                timer.remove_timeout(t3)
                self.assertEqual(timer._num_cancellations, 2)
                timer.process_timeouts()
                self.assertEqual(bucket, [])
                self.assertEqual(len(timer._timeout_heap), 1)
                self.assertIs(t2, timer._timeout_heap[0])
                self.assertEqual(timer.get_remaining_interval(), 6)
                self.assertEqual(timer._num_cancellations, 0)

    def test_add_timeout_from_another_timeout(self):
        now = test_now()
        bucket = []
        timer = select_connection._Timer()

        with mock.patch('pika.compat.time_now', return_value=now):
            t1 = timer.call_later(
                5,
                lambda: bucket.append(
                    timer.call_later(0, lambda: bucket.append(2))))

        # Advance time by 10 seconds and verify that t1 fires and creates t2,
        # but timer manager defers firing of t2 to next `process_timeouts` in
        # order to avoid IO starvation
        with mock.patch('pika.compat.time_now', return_value=now + 10):
            timer.process_timeouts()
            t2 = bucket.pop()
            self.assertIsInstance(t2, select_connection._Timeout)
            self.assertIsNot(t2, t1)
            self.assertEqual(bucket, [])
            self.assertEqual(len(timer._timeout_heap), 1)
            self.assertIs(t2, timer._timeout_heap[0])
            self.assertEqual(timer.get_remaining_interval(), 0)

            # t2 should now fire
            timer.process_timeouts()
            self.assertEqual(bucket, [2])
            self.assertEqual(timer.get_remaining_interval(), None)

    def test_cancel_unexpired_timeout_from_another_timeout(self):
        now = test_now()
        bucket = []
        timer = select_connection._Timer()

        with mock.patch('pika.compat.time_now', return_value=now):
            t2 = timer.call_later(10, lambda: bucket.append(2))
            t1 = timer.call_later(5, lambda: timer.remove_timeout(t2))
            self.assertIs(t1, timer._timeout_heap[0])

        # Advance time by 6 seconds and check that t2 is cancelled, but not
        # removed from timeout heap
        with mock.patch('pika.compat.time_now', return_value=now + 6):
            timer.process_timeouts()
            self.assertIsNone(t2.callback)
            self.assertEqual(timer.get_remaining_interval(), 4)
            self.assertIs(t2, timer._timeout_heap[0])
            self.assertEqual(timer._num_cancellations, 1)

        # Advance time by 10 seconds and verify that t2 is removed without
        # firing
        with mock.patch('pika.compat.time_now', return_value=now + 10):
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertIsNone(timer.get_remaining_interval())
            self.assertEqual(len(timer._timeout_heap), 0)
            self.assertEqual(timer._num_cancellations, 0)

    def test_cancel_expired_timeout_from_another_timeout(self):
        now = test_now()
        bucket = []
        timer = select_connection._Timer()

        with mock.patch('pika.compat.time_now', return_value=now):
            t2 = timer.call_later(10, lambda: bucket.append(2))
            t1 = timer.call_later(
                5,
                lambda: (self.assertEqual(timer._num_cancellations, 0),
                         timer.remove_timeout(t2)))
            self.assertIs(t1, timer._timeout_heap[0])

        # Advance time by 10 seconds and check that t2 is cancelled and
        # removed from timeout heap
        with mock.patch('pika.compat.time_now', return_value=now + 10):
            timer.process_timeouts()
            self.assertEqual(bucket, [])
            self.assertIsNone(t2.callback)
            self.assertIsNone(timer.get_remaining_interval())
            self.assertEqual(len(timer._timeout_heap), 0)
            self.assertEqual(timer._num_cancellations, 0)

pika-1.2.0/tests/unit/spec_tests.py

# -*- coding: utf8 -*-
"""
Tests for pika.spec

"""
import unittest

from pika import spec
from pika.compat import long


class BasicPropertiesTests(unittest.TestCase):
    def test_equality(self):
        a = spec.BasicProperties(content_type='text/plain')
        self.assertEqual(a, a)
        self.assertNotEqual(a, None)

        b = spec.BasicProperties()
        self.assertNotEqual(a, b)
        b.content_type = 'text/plain'
        self.assertEqual(a, b)

        a.correlation_id = 'abc123'
        self.assertNotEqual(a, b)

        b.correlation_id = 'abc123'
        self.assertEqual(a, b)

    def test_headers_repr(self):
        hdr = 'timestamp_in_ms'
        v = long(912598613)
        h = {hdr: v}
        p = spec.BasicProperties(content_type='text/plain', headers=h)
        self.assertEqual(repr(p.headers[hdr]), '912598613L')

pika-1.2.0/tests/unit/threaded_test_wrapper_test.py

"""
Tests for threaded_test_wrapper.py

"""
from __future__ import print_function

import sys
import threading
import time
import unittest

try:
    from unittest import mock
except \
    ImportError:
    import mock

import pika.compat

from tests.wrappers import threaded_test_wrapper
from tests.wrappers.threaded_test_wrapper import (_ThreadedTestWrapper,
                                                  run_in_thread_with_timeout)

# Suppress invalid-name, since our test names are descriptive and quite long
# pylint: disable=C0103

# Suppress missing-docstring to allow test method names to be printed by the
# test runner
# pylint: disable=C0111


class ThreadedTestWrapperSelfChecks(unittest.TestCase):
    """Tests for threaded_test_wrapper.py."""

    def start(self):
        """Each of the tests in this test case patches this method to run its
        own test
        """
        raise NotImplementedError

    def test_propagation_of_failure_from_test_execution_thread(self):
        class SelfCheckExceptionHandling(Exception):
            pass

        caller_thread_id = threading.current_thread().ident

        @run_in_thread_with_timeout
        def my_errant_function(*_args, **_kwargs):
            if threading.current_thread().ident != caller_thread_id:
                raise SelfCheckExceptionHandling()

        # Suppress error output by redirecting to stringio_stderr
        stringio_stderr = pika.compat.StringIO()
        try:
            with mock.patch.object(_ThreadedTestWrapper, '_stderr',
                                   stringio_stderr):
                with self.assertRaises(AssertionError) as exc_ctx:
                    my_errant_function()

            self.assertIn('raise SelfCheckExceptionHandling()',
                          exc_ctx.exception.args[0])
            expected_tail = 'SelfCheckExceptionHandling\n'
            self.assertEqual(exc_ctx.exception.args[0][-len(expected_tail):],
                             expected_tail)

            self.assertIn('raise SelfCheckExceptionHandling()',
                          stringio_stderr.getvalue())
            self.assertEqual(stringio_stderr.getvalue()[-len(expected_tail):],
                             expected_tail)
        except Exception:
            try:
                print('This stderr was captured from our thread wrapper:\n',
                      stringio_stderr.getvalue(),
                      file=sys.stderr)
            except Exception:  # pylint: disable=W0703
                pass

            raise

    def test_handling_of_test_execution_thread_timeout(self):
        # Suppress error output by redirecting to our stringio_stderr object
        stringio_stderr = pika.compat.StringIO()

        @run_in_thread_with_timeout
        def my_sleeper(*_args, **_kwargs):
            time.sleep(1.1)

        # Redirect _ThreadedTestWrapper error output to our StringIO instance
        with mock.patch.object(_ThreadedTestWrapper, '_stderr',
                               stringio_stderr):
            # Patch DEFAULT_TEST_TIMEOUT to much smaller value than sleep in
            # my_start()
            with mock.patch.object(threaded_test_wrapper,
                                   'DEFAULT_TEST_TIMEOUT',
                                   0.01):
                # Redirect start() call from thread to our own my_start()
                with self.assertRaises(AssertionError) as exc_ctx:
                    my_sleeper()

        self.assertEqual(len(stringio_stderr.getvalue()), 0)
        self.assertIn('The test timed out.', exc_ctx.exception.args[0])

    def test_integrity_of_args_and_return_value(self):
        args_bucket = []
        kwargs_bucket = []
        value_to_return = dict()

        @run_in_thread_with_timeout
        def my_guinea_pig(*args, **kwargs):
            args_bucket.append(args)
            kwargs_bucket.append(kwargs)
            return value_to_return

        arg0 = dict()
        arg1 = tuple()
        kwarg0 = list()

        result = my_guinea_pig(arg0, arg1, kwarg0=kwarg0)

        self.assertIs(result, value_to_return)

        args_ut = args_bucket[0]
        self.assertEqual(len(args_ut), 2, repr(args_ut))
        self.assertIs(args_ut[0], arg0)
        self.assertIs(args_ut[1], arg1)

        kwargs_ut = kwargs_bucket[0]
        self.assertEqual(len(kwargs_ut), 1, repr(kwargs_ut))
        self.assertIn('kwarg0', kwargs_ut, repr(kwargs_ut))
        self.assertIs(kwargs_ut['kwarg0'], kwarg0)

    def test_skip_test_is_passed_through(self):
        @run_in_thread_with_timeout
        def my_test_skipper():
            raise unittest.SkipTest('I SKIP')

        with self.assertRaises(unittest.SkipTest) as ctx:
            my_test_skipper()

        self.assertEqual(ctx.exception.args[0], 'I SKIP')

pika-1.2.0/tests/unit/tornado_tests.py

"""
Tests for pika.adapters.tornado_connection

"""
import unittest

import mock

from pika.adapters import tornado_connection
from pika.adapters.utils import selector_ioloop_adapter

# missing-docstring
# pylint: disable=C0111

# invalid-name
# pylint: disable=C0103


class TornadoConnectionTests(unittest.TestCase):

    @mock.patch('pika.adapters.base_connection.BaseConnection.__init__')
    def test_tornado_connection_call_parent(self, mock_init):
        _SelectorIOServicesAdapter = (
            selector_ioloop_adapter.SelectorIOServicesAdapter)

        bucket = []

        def construct_io_services_adapter(ioloop):
            adapter = _SelectorIOServicesAdapter(ioloop)
            bucket.append(adapter)
            return adapter

        with mock.patch('pika.adapters.utils.selector_ioloop_adapter'
                        '.SelectorIOServicesAdapter',
                        side_effect=construct_io_services_adapter):
            tornado_connection.TornadoConnection()

        mock_init.assert_called_once_with(
            None, None, None, None,
            bucket[0],
            internal_connection_workflow=True)

        self.assertIs(bucket[0].get_native_ioloop(),
                      tornado_connection.ioloop.IOLoop.instance())

pika-1.2.0/tests/wrappers/__init__.py

pika-1.2.0/tests/wrappers/threaded_test_wrapper.py

"""
Implements run_in_thread_with_timeout decorator for running tests that might
deadlock.

"""
from __future__ import print_function

import functools
import os
import sys
import threading
import traceback
import unittest

MODULE_PID = os.getpid()

DEFAULT_TEST_TIMEOUT = 15


def create_run_in_thread_decorator(test_timeout=None):
    """Create a decorator that will run the decorated method in a thread via
    `_ThreadedTestWrapper` and return the value that is returned by the given
    function, unless it exits with exception or times out, in which case
    AssertionError will be raised

    :param int | float | None test_timeout: maximum number of seconds to wait
        for test to complete. If None, `DEFAULT_TEST_TIMEOUT` will be used.
        NOTE: we handle default this way to facilitate patching of the timeout
        in our self-tests.
:return: decorator """ def run_in_thread_with_timeout_decorator(fun): """Create a wrapper that will run the decorated method in a thread via `_ThreadedTestWrapper` and return the value that is returned by the given function, unless it exits with exception or times out, in which case AssertionError will be raised :param fun: function to run in thread :return: wrapper function """ @functools.wraps(fun) def run_in_thread_with_timeout_wrapper(*args, **kwargs): """ :param args: positional args to pass to wrapped function :param kwargs: keyword args to pass to wrapped function :return: value returned by the function, unless it exits with exception or times out :raises AssertionError: if wrapped function exits with exception or times out """ runner = _ThreadedTestWrapper( functools.partial(fun, *args, **kwargs), test_timeout) return runner.kick_off() return run_in_thread_with_timeout_wrapper return run_in_thread_with_timeout_decorator run_in_thread_with_timeout = create_run_in_thread_decorator() # pylint: disable=C0103 class _ThreadedTestWrapper(object): """Runs user's function in a thread. Then wait on the thread to terminate up to the given `test_timeout` seconds, raising `AssertionError` if user's function exits with exception or times out. """ # We use the saved member when printing to facilitate patching by our # self-tests _stderr = sys.stderr def __init__(self, fun, test_timeout): """ :param callable fun: the function to run in thread, no args. :param int | float test_timeout: maximum number of seconds to wait for thread to exit. 
""" self._fun = fun if test_timeout is None: # NOTE: we handle default here to facilitate patching of # DEFAULT_TEST_TIMEOUT in our self-tests self._test_timeout = DEFAULT_TEST_TIMEOUT else: self._test_timeout = test_timeout # Save possibly-patched class-level _stderr value in instance so in case # user's function times out and later exits with exception, our # exception handler in `_thread_entry` won't inadvertently output to the # wrong object. self._stderr = self._stderr self._fun_result = None # result returned by function being run self._exc_info = None def kick_off(self): """Run user's function in a thread. Then wait on the thread to terminate up to self._test_timeout seconds, raising `AssertionError` if user's function exits with exception or times out. :return: the value returned by function if function exited without exception and didn't time out :raises AssertionError: if user's function timed out or exited with exception. """ try: runner = threading.Thread(target=self._thread_entry) # `daemon = True` so that the script won't wait for thread's exit runner.daemon = True runner.start() runner.join(self._test_timeout) if runner.is_alive(): raise AssertionError('The test timed out.') if self._exc_info is not None: if isinstance(self._exc_info[1], unittest.SkipTest): raise self._exc_info[1] # Fail the test because the thread running the test's start() # failed raise AssertionError(self._exc_info_to_str(self._exc_info)) return self._fun_result finally: # Facilitate garbage collection self._exc_info = None self._fun = None def _thread_entry(self): """Our test-execution thread entry point that calls the test's `start()` method. Here, we catch all exceptions from `start()`, save the `exc_info` for processing by `_kick_off()`, and print the stack trace to `sys.stderr`. 
""" try: self._fun_result = self._fun() except: # pylint: disable=W0702 self._exc_info = sys.exc_info() del self._fun_result # to force exception on inadvertent access if not isinstance(self._exc_info[1], unittest.SkipTest): print( 'ERROR start() of test {} failed:\n{}'.format( self, self._exc_info_to_str(self._exc_info)), end='', file=self._stderr) @staticmethod def _exc_info_to_str(exc_info): """Convenience method for converting the value returned by `sys.exc_info()` to a string. :param tuple exc_info: Value returned by `sys.exc_info()`. :return: A string representation of the given `exc_info`. :rtype: str """ return ''.join(traceback.format_exception(*exc_info)) pika-1.2.0/utils/000077500000000000000000000000001400701476500136025ustar00rootroot00000000000000pika-1.2.0/utils/codegen.py000066400000000000000000000357121400701476500155700ustar00rootroot00000000000000""" codegen.py generates pika/spec.py The required spec json file can be found at https://github.com/rabbitmq/rabbitmq-codegen . 
After cloning it run the following to generate a spec.py file: python2 ./codegen.py ../../rabbitmq-codegen """ from __future__ import nested_scopes import os import re import sys if sys.version_info.major != 2: sys.exit('Python 2 is required at this time') RABBITMQ_CODEGEN_PATH = sys.argv[1] PIKA_SPEC = '../pika/spec.py' print('codegen-path: %s' % RABBITMQ_CODEGEN_PATH) sys.path.append(RABBITMQ_CODEGEN_PATH) import amqp_codegen DRIVER_METHODS = { "Exchange.Bind": ["Exchange.BindOk"], "Exchange.Unbind": ["Exchange.UnbindOk"], "Exchange.Declare": ["Exchange.DeclareOk"], "Exchange.Delete": ["Exchange.DeleteOk"], "Queue.Declare": ["Queue.DeclareOk"], "Queue.Bind": ["Queue.BindOk"], "Queue.Purge": ["Queue.PurgeOk"], "Queue.Delete": ["Queue.DeleteOk"], "Queue.Unbind": ["Queue.UnbindOk"], "Basic.Qos": ["Basic.QosOk"], "Basic.Get": ["Basic.GetOk", "Basic.GetEmpty"], "Basic.Ack": [], "Basic.Reject": [], "Basic.Recover": ["Basic.RecoverOk"], "Basic.RecoverAsync": [], "Tx.Select": ["Tx.SelectOk"], "Tx.Commit": ["Tx.CommitOk"], "Tx.Rollback": ["Tx.RollbackOk"] } def fieldvalue(v): if isinstance(v, unicode): return repr(v.encode('ascii')) elif isinstance(v, dict): return repr(None) elif isinstance(v, list): return repr(None) else: return repr(v) def normalize_separators(s): s = s.replace('-', '_') s = s.replace(' ', '_') return s def pyize(s): s = normalize_separators(s) if s in ('global', 'class'): s += '_' if s == 'global_': s = 'global_qos' return s def camel(s): return normalize_separators(s).title().replace('_', '') amqp_codegen.AmqpMethod.structName = lambda m: camel(m.klass.name) + '.' + camel(m.name) amqp_codegen.AmqpClass.structName = lambda c: camel(c.name) + "Properties" def constantName(s): return '_'.join(re.split('[- ]', s.upper())) def flagName(c, f): if c: return c.structName() + '.' 
+ constantName('flag_' + f.name) else: return constantName('flag_' + f.name) def generate(specPath): spec = amqp_codegen.AmqpSpec(specPath) def genSingleDecode(prefix, cLvalue, unresolved_domain): type = spec.resolveDomain(unresolved_domain) if type == 'shortstr': print(prefix + "%s, offset = data.decode_short_string(encoded, offset)" % cLvalue) elif type == 'longstr': print(prefix + "length = struct.unpack_from('>I', encoded, offset)[0]") print(prefix + "offset += 4") print(prefix + "%s = encoded[offset:offset + length]" % cLvalue) print(prefix + "try:") print(prefix + " %s = str(%s)" % (cLvalue, cLvalue)) print(prefix + "except UnicodeEncodeError:") print(prefix + " pass") print(prefix + "offset += length") elif type == 'octet': print(prefix + "%s = struct.unpack_from('B', encoded, offset)[0]" % cLvalue) print(prefix + "offset += 1") elif type == 'short': print(prefix + "%s = struct.unpack_from('>H', encoded, offset)[0]" % cLvalue) print(prefix + "offset += 2") elif type == 'long': print(prefix + "%s = struct.unpack_from('>I', encoded, offset)[0]" % cLvalue) print(prefix + "offset += 4") elif type == 'longlong': print(prefix + "%s = struct.unpack_from('>Q', encoded, offset)[0]" % cLvalue) print(prefix + "offset += 8") elif type == 'timestamp': print(prefix + "%s = struct.unpack_from('>Q', encoded, offset)[0]" % cLvalue) print(prefix + "offset += 8") elif type == 'bit': raise Exception("Can't decode bit in genSingleDecode") elif type == 'table': print( Exception(prefix + "(%s, offset) = data.decode_table(encoded, offset)" % cLvalue)) else: raise Exception("Illegal domain in genSingleDecode", type) def genSingleEncode(prefix, cValue, unresolved_domain): type = spec.resolveDomain(unresolved_domain) if type == 'shortstr': print( prefix + "assert isinstance(%s, str_or_bytes),\\\n%s 'A non-string value was supplied for %s'" % (cValue, prefix, cValue)) print(prefix + "data.encode_short_string(pieces, %s)" % cValue) elif type == 'longstr': print( prefix + "assert 
isinstance(%s, str_or_bytes),\\\n%s 'A non-string value was supplied for %s'" % (cValue, prefix, cValue)) print( prefix + "value = %s.encode('utf-8') if isinstance(%s, unicode_type) else %s" % (cValue, cValue, cValue)) print(prefix + "pieces.append(struct.pack('>I', len(value)))") print(prefix + "pieces.append(value)") elif type == 'octet': print(prefix + "pieces.append(struct.pack('B', %s))" % cValue) elif type == 'short': print(prefix + "pieces.append(struct.pack('>H', %s))" % cValue) elif type == 'long': print(prefix + "pieces.append(struct.pack('>I', %s))" % cValue) elif type == 'longlong': print(prefix + "pieces.append(struct.pack('>Q', %s))" % cValue) elif type == 'timestamp': print(prefix + "pieces.append(struct.pack('>Q', %s))" % cValue) elif type == 'bit': raise Exception("Can't encode bit in genSingleEncode") elif type == 'table': print(Exception(prefix + "data.encode_table(pieces, %s)" % cValue)) else: raise Exception("Illegal domain in genSingleEncode", type) def genDecodeMethodFields(m): print(" def decode(self, encoded, offset=0):") bitindex = None for f in m.arguments: if spec.resolveDomain(f.domain) == 'bit': if bitindex is None: bitindex = 0 if bitindex >= 8: bitindex = 0 if not bitindex: print( " bit_buffer = struct.unpack_from('B', encoded, offset)[0]" ) print(" offset += 1") print(" self.%s = (bit_buffer & (1 << %d)) != 0" % (pyize(f.name), bitindex)) bitindex += 1 else: bitindex = None genSingleDecode(" ", "self.%s" % (pyize(f.name),), f.domain) print(" return self") print('') def genDecodeProperties(c): print(" def decode(self, encoded, offset=0):") print(" flags = 0") print(" flagword_index = 0") print(" while True:") print( " partial_flags = struct.unpack_from('>H', encoded, offset)[0]" ) print(" offset += 2") print( " flags = flags | (partial_flags << (flagword_index * 16))" ) print(" if not (partial_flags & 1):") print(" break") print(" flagword_index += 1") for f in c.fields: if spec.resolveDomain(f.domain) == 'bit': print(" self.%s = 
(flags & %s) != 0" % (pyize(f.name), flagName(c, f))) else: print(" if flags & %s:" % (flagName(c, f),)) genSingleDecode(" ", "self.%s" % (pyize(f.name),), f.domain) print(" else:") print(" self.%s = None" % (pyize(f.name),)) print(" return self") print('') def genEncodeMethodFields(m): print(" def encode(self):") print(" pieces = list()") bitindex = None def finishBits(): if bitindex is not None: print(" pieces.append(struct.pack('B', bit_buffer))") for f in m.arguments: if spec.resolveDomain(f.domain) == 'bit': if bitindex is None: bitindex = 0 print(" bit_buffer = 0") if bitindex >= 8: finishBits() print(" bit_buffer = 0") bitindex = 0 print(" if self.%s:" % pyize(f.name)) print(" bit_buffer |= 1 << %d" % bitindex) bitindex += 1 else: finishBits() bitindex = None genSingleEncode(" ", "self.%s" % (pyize(f.name),), f.domain) finishBits() print(" return pieces") print('') def genEncodeProperties(c): print(" def encode(self):") print(" pieces = list()") print(" flags = 0") for f in c.fields: if spec.resolveDomain(f.domain) == 'bit': print(" if self.%s: flags = flags | %s" % (pyize( f.name), flagName(c, f))) else: print(" if self.%s is not None:" % (pyize(f.name),)) print(" flags = flags | %s" % (flagName(c, f),)) genSingleEncode(" ", "self.%s" % (pyize(f.name),), f.domain) print(" flag_pieces = list()") print(" while True:") print(" remainder = flags >> 16") print(" partial_flags = flags & 0xFFFE") print(" if remainder != 0:") print(" partial_flags |= 1") print( " flag_pieces.append(struct.pack('>H', partial_flags))") print(" flags = remainder") print(" if not flags:") print(" break") print(" return flag_pieces + pieces") print('') def fieldDeclList(fields): return ''.join([ ", %s=%s" % (pyize(f.name), fieldvalue(f.defaultvalue)) for f in fields ]) def fieldInitList(prefix, fields): if fields: return ''.join(["%sself.%s = %s\n" % (prefix, pyize(f.name), pyize(f.name)) \ for f in fields]) else: return '%spass\n' % (prefix,) print("""\"\"\" AMQP Specification 
================== This module implements the constants and classes that comprise AMQP protocol level constructs. It should rarely be directly referenced outside of Pika's own internal use. .. note:: Auto-generated code by codegen.py, do not edit directly. Pull requests to this file without accompanying ``utils/codegen.py`` changes will be rejected. \"\"\" import struct from pika import amqp_object from pika import data from pika.compat import str_or_bytes, unicode_type # Python 3 support for str object str = bytes """) print("PROTOCOL_VERSION = (%d, %d, %d)" % (spec.major, spec.minor, spec.revision)) print("PORT = %d" % spec.port) print('') # Append some constants that arent in the spec json file spec.constants.append(('FRAME_MAX_SIZE', 131072, '')) spec.constants.append(('FRAME_HEADER_SIZE', 7, '')) spec.constants.append(('FRAME_END_SIZE', 1, '')) spec.constants.append(('TRANSIENT_DELIVERY_MODE', 1, '')) spec.constants.append(('PERSISTENT_DELIVERY_MODE', 2, '')) constants = {} for c, v, cls in spec.constants: constants[constantName(c)] = v for key in sorted(constants.keys()): print("%s = %s" % (key, constants[key])) print('') for c in spec.allClasses(): print('') print('class %s(amqp_object.Class):' % (camel(c.name),)) print('') print(" INDEX = 0x%.04X # %d" % (c.index, c.index)) print(" NAME = %s" % (fieldvalue(camel(c.name)),)) print('') for m in c.allMethods(): print(' class %s(amqp_object.Method):' % (camel(m.name),)) print('') methodid = m.klass.index << 16 | m.index print(" INDEX = 0x%.08X # %d, %d; %d" % (methodid, m.klass.index, m.index, methodid)) print(" NAME = %s" % (fieldvalue(m.structName(),))) print('') print( " def __init__(self%s):" % (fieldDeclList(m.arguments),)) print(fieldInitList(' ', m.arguments)) print(" @property") print(" def synchronous(self):") print(" return %s" % m.isSynchronous) print('') genDecodeMethodFields(m) genEncodeMethodFields(m) for c in spec.allClasses(): if c.fields: print('') print('class %s(amqp_object.Properties):' % 
(c.structName(),)) print('') print(" CLASS = %s" % (camel(c.name),)) print(" INDEX = 0x%.04X # %d" % (c.index, c.index)) print(" NAME = %s" % (fieldvalue(c.structName(),))) print('') index = 0 if c.fields: for f in c.fields: if index % 16 == 15: index += 1 shortnum = index / 16 partialindex = 15 - (index % 16) bitindex = shortnum * 16 + partialindex print(' %s = (1 << %d)' % (flagName(None, f), bitindex)) index += 1 print('') print(" def __init__(self%s):" % (fieldDeclList(c.fields),)) print(fieldInitList(' ', c.fields)) genDecodeProperties(c) genEncodeProperties(c) print("methods = {") print(',\n'.join([ " 0x%08X: %s" % (m.klass.index << 16 | m.index, m.structName()) for m in spec.allMethods() ])) print("}") print('') print("props = {") print(',\n'.join([ " 0x%04X: %s" % (c.index, c.structName()) for c in spec.allClasses() if c.fields ])) print("}") print('') print('') print("def has_content(methodNumber):") print(' return methodNumber in (') for m in spec.allMethods(): if m.hasContent: print(' %s.INDEX,' % m.structName()) print(' )') if __name__ == "__main__": with open(PIKA_SPEC, 'w') as handle: sys.stdout = handle generate(['%s/amqp-rabbitmq-0.9.1.json' % RABBITMQ_CODEGEN_PATH])
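The `threaded_test_wrapper` module above runs each test in a daemon thread, joins with a timeout, and re-raises any captured exception as `AssertionError`. A minimal standalone sketch of that pattern is below; the names (`run_with_timeout`, `target`, `result`) are mine, not pika's, and this omits the `SkipTest` pass-through and stderr handling that `_ThreadedTestWrapper` performs:

```python
import sys
import threading
import traceback


def run_with_timeout(fun, timeout=15):
    """Run fun() in a daemon thread; re-raise its failure or fail on timeout."""
    result = {}

    def target():
        try:
            result['value'] = fun()
        except:  # noqa: E722 - capture everything, as _thread_entry does
            result['exc_info'] = sys.exc_info()

    runner = threading.Thread(target=target)
    # daemon=True so a hung test thread won't keep the interpreter alive,
    # mirroring kick_off()
    runner.daemon = True
    runner.start()
    runner.join(timeout)

    if runner.is_alive():
        raise AssertionError('The test timed out.')
    if 'exc_info' in result:
        # Convert the captured traceback to text, like _exc_info_to_str()
        raise AssertionError(
            ''.join(traceback.format_exception(*result['exc_info'])))
    return result['value']
```

The key design point, which the self-tests above exercise, is that the caller thread (not the worker) raises the `AssertionError`, so the test framework attributes the failure to the right test.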
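`genEncodeProperties`/`genDecodeProperties` in codegen.py emit the AMQP content-header property-flags scheme: flags are sent as 16-bit words holding 15 flag bits each, with the low bit of every word acting as a continuation marker. The following is an illustrative standalone sketch of that wire logic (the function names are mine; the loop bodies mirror the code codegen.py prints into the generated `encode()`/`decode()`):

```python
import struct


def encode_flag_words(flags):
    """Pack a flags integer into big-endian 16-bit words, 15 flag bits per
    word; the low bit of each word signals that another word follows."""
    pieces = []
    while True:
        remainder = flags >> 16
        partial_flags = flags & 0xFFFE  # mask off the continuation bit slot
        if remainder != 0:
            partial_flags |= 1  # more flag words follow
        pieces.append(struct.pack('>H', partial_flags))
        flags = remainder
        if not flags:
            break
    return b''.join(pieces)


def decode_flag_words(encoded, offset=0):
    """Read flag words until one without the continuation bit is seen.
    Returns (flags, new_offset)."""
    flags = 0
    flagword_index = 0
    while True:
        partial_flags = struct.unpack_from('>H', encoded, offset)[0]
        offset += 2
        flags = flags | (partial_flags << (flagword_index * 16))
        if not (partial_flags & 1):
            break
        flagword_index += 1
    return flags, offset
```

Note that the generated flag constants (via `flagName`) never occupy bit 0 of a word, which is why the decoder can leave the continuation bits in `flags`: tests like `flags & FLAG_CONTENT_TYPE` are unaffected by them.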
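Similarly, `genEncodeMethodFields`/`genDecodeMethodFields` pack runs of consecutive AMQP `bit` arguments into shared octets, eight bits per byte, least-significant bit first. A hedged standalone sketch of that packing (my own helper names; the per-bit statements match what codegen.py emits into the generated method `encode()`/`decode()`):

```python
import struct


def pack_bits(values):
    """Pack a run of boolean 'bit' arguments, 8 per octet, low bit first."""
    pieces = []
    bit_buffer = 0
    bitindex = 0
    for value in values:
        if bitindex >= 8:
            # Current octet is full; flush it and start a new bit buffer
            pieces.append(struct.pack('B', bit_buffer))
            bit_buffer = 0
            bitindex = 0
        if value:
            bit_buffer |= 1 << bitindex
        bitindex += 1
    pieces.append(struct.pack('B', bit_buffer))
    return b''.join(pieces)


def unpack_bits(encoded, count, offset=0):
    """Inverse of pack_bits(); returns (values, new_offset)."""
    values = []
    bit_buffer = 0
    bitindex = 8  # force a read of the first octet
    for _ in range(count):
        if bitindex >= 8:
            bit_buffer = struct.unpack_from('B', encoded, offset)[0]
            offset += 1
            bitindex = 0
        values.append((bit_buffer & (1 << bitindex)) != 0)
        bitindex += 1
    return values, offset
```

This is why a method such as `Basic.Consume` with several boolean arguments (`no_local`, `no_ack`, `exclusive`, `nowait`) costs only a single octet on the wire.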